[comp.os.minix] Bloat costs

chip@tct.uucp (Chip Salzenberg) (05/30/90)

According to jca@pnet01.cts.com (John C. Archambeau):
>chip@tct.uucp (Chip Salzenberg) writes:
>>Competent C compilers can be written in small model.  I once worked on
>>a C compiler that ran on a PDP-11, which, as everyone knows, is limited
>>to 64K of data under most (all?) Unix implementations.
>
>Which brings forth the argument in favor of progress.  How many people
>actually use PDP-11's anymore?

PDP-11 usage statistics matter not at all.  The point is
that it can be done, but some people would have you think
that it can't be done, so they can escape the mental effort
required to do it.

The "What do you want to do, return to the dark ages?"
retort reminds me of a quote from Theodor Nelson, who in
turn was quoting a computer consultant of the 70s:

    "If it can't be done in COBOL,
     I just tell them it can't be done by computer.
     It saves everyone a lot of time."

Obviously this consultant was a troglodyte.  One would
hope that such attitudes are a thing of the past.

Substitute "four megabytes of RAM" for "COBOL", however,
and you get a depressingly accurate summary of the attitude
of the day.  Am I implying that 4M-or-die programmers
are troglodytes as well?  You bet your data space I am.
-- 
Chip Salzenberg at ComDev/TCT   <chip%tct@ateng.com>, <uunet!ateng!tct!chip>

jtc@van-bc.UUCP (J.T. Conklin) (05/31/90)

In article <2662D045.3F02@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
>Substitute "four megabytes of RAM" for "COBOL", however,
>and you get a depressingly accurate summary of the attitude
>of the day.  Am I implying that 4M-or-die programmers
>are troglodytes as well?  You bet your data space I am.

Although I agree with Chip in general, there are some cases where
using memory is better than scrimping on principle.

I'm sure that many faster algorithms had to be passed by because
of limited address space.  Some of the GNU equivalents of UNIX
programs are many times faster because of the faster, yet more
memory-intensive, algorithms.

I don't think I have to mention another optimization that ``wastes''
memory: large lookup tables.  It was quite common to have to
re-compute indexes on each iteration because there wasn't enough memory.
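
For instance, a minimal sketch of the tradeoff (invented for
illustration, not from any particular program): 256 bytes of table
replace a shift-and-mask loop over every byte.

    /* bits[b] = number of 1 bits in the byte b */
    static unsigned char bits[256];

    void init_bits(void)
    {
        int i;

        for (i = 1; i < 256; i++)
            bits[i] = (i & 1) + bits[i / 2];
    }

    /* one lookup instead of eight shifts per byte */
    int count_bits(unsigned char b)
    {
        return bits[b];
    }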

Another unrelated application is high-resolution image processing.  Is
processing a 16MB frame-buffer with kerjillions of processors doing
ray-tracing wasting memory?


On the other hand, there is something to be said about giving
beginning programmers 6 MHz Xenix/286 machines to work on.  I
think you'd be surprised at the small, fast, and portable code
that can come out of that environment.  I recommend it, as the
good habits that result will last for life.


To summarize, I have written programs that need 4M to run --- only
because it takes 4M to do the job.  Programs that require less, take
less. I do not consider myself a troglodyte.

	--jtc

-- 
J.T. Conklin	UniFax Communications Inc.
		...!{uunet,ubc-cs}!van-bc!jtc, jtc@wimsey.bc.ca

chip@tct.uucp (Chip Salzenberg) (06/01/90)

According to jtc@van-bc.UUCP (J.T. Conklin):
>I'm sure that many faster algorithms had to be passed by because
>of limited address space.  Some of the GNU equivalents of UNIX
>programs are many times faster because of the faster, yet more
>memory-intensive, algorithms.

However, as has been pointed out before, the memory isn't
free, paging takes time, swap space isn't free, etc.  At the
very least, where practical, programs with memory-eating
algorithms should include a more frugal algorithm as an
option.  IMHO, of course.
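
A sketch of what I mean (the names and the cutoff are invented for
illustration): size up the job first, then pick the algorithm.

    #include <sys/types.h>
    #include <sys/stat.h>

    #define BIG 262144L  /* arbitrary "too big to be greedy" cutoff */

    extern void process_in_core(char *path);  /* fast, memory-hungry */
    extern void process_by_line(char *path);  /* slower, frugal      */

    void process(char *path)
    {
        struct stat st;

        /* fall back to the frugal algorithm for big (or unstat-able) input */
        if (stat(path, &st) == 0 && st.st_size <= BIG)
            process_in_core(path);
        else
            process_by_line(path);
    }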

>Another unrelated application is high-resolution image processing.  Is
>processing a 16MB frame-buffer with kerjillions of processors doing
>ray-tracing wasting memory?

Well, there are exceptions to every rule.  :-)

>On the other hand, there is something to be said about giving
>beginning programmers 6 MHz Xenix/286 machines to work on.

Amen.
-- 
Chip, the new t.b answer man    <chip%tct@ateng.com>, <uunet!ateng!tct!chip>

wsd@cs.brown.edu (Wm. Scott `Spot' Draves) (06/02/90)

In article <266577FA.6D99@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
   According to jtc@van-bc.UUCP (J.T. Conklin):

   >On the other hand, there is something to be said about giving
   >beginning programmers 6 MHz Xenix/286 machines to work on.

   Amen.

If you are suggesting that novice programmers be given slow/obsolete
hardware so that they learn to write efficient code, I would disagree
with you strongly.

Efficiency is just one of many attributes that are generally
desirable in programs.  Learning to program on a machine that is
slower than the state of the art will artificially skew the importance
of eff. programming.

One of the wonderful things about 20Mip 32Mb workstations is that I
don't have to worry about eff. when writing most code.  I can
concentrate on other issues such as clarity of code, speed of
execution, speed of development, fancy features, ...

by "eff." i mean "frugal of code and data".

--

Scott Draves		Space... The Final Frontier
wsd@cs.brown.edu
uunet!brunix!wsd

wwm@pmsmam.uucp (Bill Meahan) (06/02/90)

In article <WSD.90Jun1130958@miles.cs.brown.edu> wsd@cs.brown.edu (Wm. Scott `Spot' Draves) writes:
>In article <266577FA.6D99@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
>   According to jtc@van-bc.UUCP (J.T. Conklin):
>
>  [stuff deleted]
>
>One of the wonderful things about 20Mip 32Mb workstations is that I
>don't have to worry about eff. when writing most code.  I can
>concentrate on other issues such as clarity of code, speed of
>execution, speed of development, fancy features, ...
>
>by "eff." i mean "frugal of code and data".
>

May I be among the first to say HORSEPUCKY!

There seems to be a mindset among many CS majors that
"memory is cheap and hardware is fast, so why worry about efficiency?"

This kind of thinking is the result of looking only at chip prices and
the latest hot-rod announcements.  In truth, only a SMALL subset of the
(potential) customers for any given piece of software are running the
'latest and greatest' with beaucoup RAM.  The rest of us are running on
whatever we've got now and often this is older equipment or 'bare-bones'
versions of the hotter stuff because that was all we could afford.

There is a simple financial reality that is often overlooked:

	1) Regardless of the **theoretical prices**, if I don't HAVE 'it'
	   I have to go buy it.
	2) The money I have to go buy 'it' with could also go towards
	   the purchase of other things.
	3) Therefore, I have to demonstrate (to myself, my spouse,
	   my manager, the bean-counters, etc) that buying 'it' has
	   sufficient return on investment to justify THAT purchase
	   instead of some other.
	4) It is very hard to justify continual upgrades of equipment
	   just to get the 'latest and greatest' features, unless these
	   features translate DIRECTLY into some real benefit.
	5) If the latest and greatest is not directly upwards compatible
	   with my current configuration, there is an ADDITIONAL hidden cost
	   associated with converting/replacing my current installed base
	   of software and hardware.
	6) Even 'cheap' upgrades get expensive if you have to buy a lot
	   of copies.  (This site has over 250 PC's; do you think the Controller
	   wants to spend $500 each to upgrade the memory just to get some
	   fancier display?)
	7) Customers DON'T CARE how clear/modular/elegant your code is
	   unless the clarity/elegance/whatever has some demonstrable
	   benefit to THEM!

Maybe all CS majors should be forced to take a few economics courses along
with the rest of their curriculum!

FAST, SMALL, CHEAP   <--- Pick any 2, you can't have all 3.
-- 
Bill Meahan  WA8TZG		uunet!mailrus!umich!pmsmam!wwm
I speak only for myself - even my daughter's cat won't let me speak for her!

mike@thor.acc.stolaf.edu (Mike Haertel) (06/02/90)

In article <266577FA.6D99@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
>According to jtc@van-bc.UUCP (J.T. Conklin):
>>On the other hand, there is something to be said about giving
>>beginning programmers 6 MHz Xenix/286 machines to work on.
>
>Amen.

Not a 286!  If you want to teach someone about memory constraints, give
them a PDP-11 running UNIX v7.  A much cleaner architecture.

The problem is, people all too often assume that their past experience
defines how things "should" be, and so when they in turn design things in
the future they apply their preconceptions.  We don't need any intellectual
descendants of the 286.
--
Mike Haertel <mike@acc.stolaf.edu>
``There's nothing remarkable about it.  All one has to do is hit the right
  keys at the right time and the instrument plays itself.'' -- J. S. Bach

V2057A%TEMPLEVM.BITNET@cornellc.cit.cornell.edu (Juan Jose Noyles) (06/02/90)

Wain, in your tome on this subject, you stated that

a = b = c = 1;   is less readable than  a=1;b=a;c=b;  or a=1;b=1;c=1;

then you give various reasons why this is so.

I don't know about the rest of you folks, but the first instance flows a lot
better to me than either of the other two.  I'm not interested in attacking
you or your beliefs, but I think you chose the wrong reasons to believe that
the others are better than the first.

Maybe it's just my naivete, but when I do write code (after definition & design
like all good programmers), I find it more satisfying to squeeze every drop of
performance I can out of the code.  If that means I work a little harder, so
what?  I like programming, because I get paid for thinking, and it's pretty
entertaining (well, maybe not as much as sex or Arsenio, but...) to 'get it
right' in every way possible.

I also think your reference to first-grade primers was a little warped, too.  I
don't write for first graders, and I wish everyone would get into the habit of
communicating with their peers on their level, instead of condescending to or
worshipping them.  It'd make programs a lot easier to read.

Often, this is called optimization.  Perhaps you have noticed that it is easier
to converse with someone when you use the common base of knowledge between you?
It's similar with programming.  At some point in your relationship with a
person you decide that you know enough about them to call them your friend.  So
it also is with programming.  In the process of becoming friends, there are
often instances where it's hard to express yourself.  As you become more
familiar with the language that your relationship understands, you learn to say
more with fewer words.  Your conversation is still basically intelligible to
the outside world, though.

Likewise with programming.  Since you don't worry while you're talking to your
friend about the portability of your conversation, why introduce 'needless' (I
don't know a better term) stricture at that time?  When it comes time to tell
someone else what you and your friend were talking about, the translation is so
trivial as to not be noticed.  This also holds for programming.  When you know
your friend's language well enough, you see that porting to another compiler
isn't such a big deal.

However, we all know that we don't like everyone we talk to, and discourse with
those people is nowhere nearly as pleasant as talking with friends.  That's
where we should 'program defensively'.

aglew@oberon.csg.uiuc.edu (Andy Glew) (06/09/90)

>With an orthogonal architecture and a good compiler, you can write
>maintainable programs in high-level languages and still produce
>products that run quickly on machines with a lot fewer than 20 MIPS.

With a good compiler you don't care how orthogonal the architecture is.
--
Andy Glew, aglew@uiuc.edu

bp@retiree.cis.ufl.edu (Brian Pane) (06/09/90)

In article <1990Jun1.200333.10672@pmsmam.uucp> wwm@pmsmam.UUCP (Bill Meahan) writes:
>In article <WSD.90Jun1130958@miles.cs.brown.edu> wsd@cs.brown.edu (Wm. Scott `Spot' Draves) writes:
>>In article <266577FA.6D99@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
>>   According to jtc@van-bc.UUCP (J.T. Conklin):
>>
>>  [stuff deleted]
>>
>>One of the wonderful things about 20Mip 32Mb workstations is that I
>>don't have to worry about eff. when writing most code.  I can
>>concentrate on other issues such as clarity of code, speed of
>>execution, speed of development, fancy features, ...
>>
>>by "eff." i mean "frugal of code and data".
>>
>
>May I be among the first to say HORSEPUCKY!
>
>There seems to be a mindset among many CS majors that
>"memory is cheap and hardware is fast, so why worry about efficiency?"
>
>This kind of thinking is the result of looking only at chip prices and
>the latest hot-rod announcements.  In truth, only a SMALL subset of the

If such a mindset exists, it is not because of the abundance of powerful
hardware.  It is because CS majors are taught to build robust, maintainable,
and therefore seemingly elegant programs rather than compact and clever
programs.  If we get used to writing ruthlessly brilliant programs,
we'll only add to the "software crisis" when we graduate.

I agree that efficiency is important, but it must be kept in
its proper perspective.  This group is devoted to the implementation
of a UNIX-like OS on an architecture that should have been allowed to
die ten years ago.  No matter how well you write your C code, an average
compiler will probably produce disgracefully slow executables.  There
is little to be gained by writing efficient C programs for inefficient
machines.  You *can* write fairly efficient code for the 8086--in
assembly language.  However, few people have that much time to waste.
While you're shouting about the expense of "improved" software and the
expense of the hardware on which such software must run, don't forget
about the cost of programmer time.

Finally, note that large and "inefficient" programs advance the state
of the art in software more often than small and clever programs.
Consider X Windows.  It is a huge system designed for flexibility
rather than efficiency, and it requires significant hardware power,
but it has revolutionized the way we use computers.

>Maybe all CS majors should be forced to take a few economics courses along
>with the rest of their curriculum!
>
Don't blame us for the economic problems of software development; blame the
EE's who design the hardware.  With an orthogonal architecture and a
good compiler, you can write maintainable programs in high-level languages
and still produce products that run quickly on machines with a lot fewer
than 20 MIPS.


>FAST, SMALL, CHEAP   <--- Pick any 2, you can't have all 3.
Not yet.  And not ever, if we all devote our efforts to
optimizing tiny programs for tiny machines.  20-MIPS
workstations will become affordable only when lots of software
is available for them.

>Bill Meahan  WA8TZG		uunet!mailrus!umich!pmsmam!wwm
>I speak only for myself - even my daughter's cat won't let me speak for her!

-Brian F. Pane
-------------------------------------------------------------------------
Brian Pane	University of Florida Department of Computer Science
bp@beach.cis.ufl.edu		Class of 1991

"If you can keep your expectations tiny,
 you'll get through life without being so whiny" - Matt Groening

#ifdef OFFENDED_ANYONE
#  include "disclaimer.h"
// Sorry to indulge in such 8086-bashing, folks, but I had a point to make.
#endif
-------------------------------------------------------------------------

peter@ficc.ferranti.com (Peter da Silva) (06/09/90)

In article <23473@uflorida.cis.ufl.EDU> bp@beach.cis.ufl.edu (Brian Pane) writes:
> If such a mindset exists, it is not because of the abundance of powerful
> hardware.  It is because CS majors are taught to build robust, maintainable,
> and therefore seemingly elegant programs rather than compact and clever
> programs.  If we get used to writing ruthlessly brilliant programs,
> we'll only add to the "software crisis" when we graduate.

Lots of nice buzzwords there, fella. Trouble is, it doesn't mean anything.
First of all, I haven't noticed that much, if any, difference in the quality
of net contributions from academia and industry. Quantity, yes... industry
can't afford the time to write the latest and greatest freeware. Second,
nobody's advocating gratuitous microefficiency here, just a consideration
of space-time tradeoffs in choosing algorithms. Like not loading a whole
file when you can get away with reading a line at a time. Or if you *do*,
check how much there is to read before you read it instead of just allocating
a big array and doubling in size when it fills up. Using a simplistic
algorithm makes as much sense as using bubble-sort on a megabyte array.
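
For instance, the check-first tactic is only a few lines (a sketch
with minimal error handling; fstat() does the checking):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    /* read a whole file into one buffer sized by fstat(),
       instead of doubling an array as it fills */
    char *slurp(FILE *fp)
    {
        struct stat st;
        char *buf;

        if (fstat(fileno(fp), &st) != 0)
            return NULL;
        if ((buf = malloc((size_t)st.st_size + 1)) == NULL)
            return NULL;
        if (fread(buf, 1, (size_t)st.st_size, fp) != (size_t)st.st_size) {
            free(buf);
            return NULL;
        }
        buf[st.st_size] = '\0';
        return buf;
    }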

> Finally, note that large and "inefficient" programs advance the state
> of the art in software more often than small and clever programs.
> Consider X Windows.

Yes, let's.

> It is a huge system designed for flexibility
> rather than efficiency, and it requires significant hardware power,
> but it has revolutionized the way we use computers.

Actually, it was the Xerox Star and the Apple Macintosh that did that.
Machines with a fraction of the resources of the typical X workstation.
-- 
`-_-' Peter da Silva. +1 713 274 5180.  <peter@ficc.ferranti.com>
 'U`  Have you hugged your wolf today?  <peter@sugar.hackercorp.com>
@FIN  Dirty words: Zhghnyyl erphefvir vayvar shapgvbaf.

icsu8053@ming.cs.montana.edu (Craig Pratt) (06/09/90)

In article <8M_3OF3@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>In article <23473@uflorida.cis.ufl.EDU> bp@beach.cis.ufl.edu (Brian Pane) writes:
>> If such a mindset exists, it is not because of the abundance of powerful
>> hardware.  It is because CS majors are taught to build robust, maintainable,
>> and therefore seemingly elegant programs rather than compact and clever
>> programs.  If we get used to writing ruthlessly brilliant programs,
>> we'll only add to the "software crisis" when we graduate.
> (some of Peter's stuff deleted)
>> Finally, note that large and "inefficient" programs advance the state
>> of the art in software more often than small and clever programs.
>> Consider X Windows.
>
>Yes, let's.
>
>> It is a huge system designed for flexibility
>> rather than efficiency, and it requires significant hardware power,
>> but it has revolutionized the way we use computers.
>
>Actually, it was the Xerox Star and the Apple Macintosh that did that.
>Machines with a fraction of the resources of the typical X workstation.

There are actually a few different revolutions going on and I don't think
the one sparked by Xerox/Apple is the most important.  I think the most
revolutionary idea was sparked by Unix.  It's a bit more philosophical
than technical.  As I understand it, the idea behind Multics and,
subsequently, Unix was to build an OS which does almost everything without
taking into consideration the performance or cost of its use.  Notice that
X Windows is similar in that respect:  it is huge, powerful and flexible.
X Windows takes no shortcuts to do its thing.  Another example would have
to be Ada.  Ada is almost certainly one of the most powerful languages
in existence.  But, like Unix and X Windows, it is not very fast or
efficient.

So, why do these packages exist?  Well, it's not difficult to see that
the speed and power of computer hardware increase at an amazing rate.
The people behind these packages were simply smart enough to realize this
and they wrote their requirements accordingly.  Sure, these packages
were slow when they were initially released, but by the time the
hardware caught up, these packages are/will be standards.  If you ask me,
standards are what is needed most in the computer industry.  Far too
often things work in the opposite direction.  Packages and OS's are written
to work great on the current hardware but are quickly surpassed by new
hardware and the software that runs on the new hardware.  It is very
difficult for standards to exist in this environment.

Minix seems to fit within the smart software category as well.  Sure, an
OS written in assembler would be faster, but if it were written in 
assembly, it would be many times more difficult to port and it would
exist on far fewer platforms.  Consequently, you would see far less
support and software written for Minix.  This, of course, would diminish 
its usefulness greatly.

I hope the kind of trend inspired by these packages continues.  It only
makes sense.  Sure, it's kind of fun to see how efficiently you can write a
certain loop or whatever, but if it breaks the rules, you're going to have a
LOT of fun when it comes time to port it.  Unfortunately, in the past,
education and industry have been in conflict on this philosophy.  X Windows
would not have been a real marketable product when it was released.  Now
that it's a standard and the machines are getting faster, companies are
realizing that there is a market.

The setting of standards is often a necessary but unprofitable step in
software evolution.  Hopefully, this will change as and if companies start
to consider the long term and not just the short-term profits.  I applaud
and admire the people behind the above mentioned products.  They are true
visionaries.

This topic is more related to software engineering and doesn't really
belong in the Minix group.  I guess this isn't the first time this has
happened, though. :^>

Craig

>-- 
>`-_-' Peter da Silva. +1 713 274 5180.  <peter@ficc.ferranti.com>
> 'U`  Have you hugged your wolf today?  <peter@sugar.hackercorp.com>
>@FIN  Dirty words: Zhghnyyl erphefvir vayvar shapgvbaf.

--
   / Craig Pratt                          / Craig.Pratt@msu3.oscs.montana.edu /
  / Montana State University, Bozeman MT / icsu8053@caesar.cs.montana.edu    /
 /~~~~~~ " My after-life is sooo boring!  If I have to sing koombia one ~~~~/
/_________more time... " - Heather #1, "Heathers" _________________________/          

V2057A%TEMPLEVM.BITNET@cornellc.cit.cornell.edu (Juan Jose Noyles) (06/10/90)

Brian, you sound like a freshman who just decided to stop cutting classes and
impress your friends with the new words you've learned.  There doesn't seem to
be any hint of experience in your tome.  Don't get me wrong, this isn't a
personal flame, because I used to talk like that too.

You said a lot of trash that amounted to 'when everyone has big, fast machines,
who cares that we waste a few cycles doing nothing?  My big, fast machine won't
notice, and neither will my users.'  Wrong, dude.  If you'd been in class and
concentrated on understanding why a fast search on a slow machine beats the
pants off of a slow search on a fast one, you'd also understand why users are
always complaining about stupid programmers and the schools that graduate them.

There's a difference between efficiency and effectiveness.  I hope you learn it
before you graduate.

bp@condo.cis.ufl.edu (Brian Pane) (06/10/90)

In article <21579@nigel.udel.EDU> V2057A%TEMPLEVM.BITNET@cornellc.cit.cornell.edu (Juan Jose Noyles) writes:
>Brian, you sound like a freshman who just decided to stop cutting classes and
>impress your friends with the new words you've learned.  There doesn't seem to
>be any hint of experience in your tome.  Don't get me wrong, this isn't a
>personal flame, because I used to talk like that too.
>
I didn't use the software engineering buzzwords to impress anyone.  In fact,
I generally consider "software engineering" an oxymoron.  I pointed out the
problems of maintainability, readability, portability, etc. because they
are real problems which actually exist outside the minds of CS students and
professors.  I used to think efficiency was the primary consideration in
programming.  I now know better.

>You said a lot of trash that amounted to 'when everyone has big, fast machines,
>who cares that we waste a few cycles doing nothing?  My big, fast machine won't
>notice, and neither will my users.'  Wrong, dude.  If you'd been in class and
>concentrated on understanding why a fast search on a slow machine beats the
>pants off of a slow search on a fast one, you'd also understand why users are
>always complaining about stupid programmers and the schools that graduate them.
>
I'm afraid you've completely misinterpreted my posting.  I did *not* say that
fast hardware justifies bad algorithms.  I said that we shouldn't have to
write bizarre code that exploits the peculiarities of a particular machine.
Spending an hour debugging a partitioning routine so that you can replace
an insertion sort with a quicksort is a productive activity.  The resulting
program will be more efficient than the previous version, regardless of the
underlying instruction set, data bus width, register limitations, etc.
(Yes, some "efficient" algorithms thrash badly with virtual memory
systems, but let's not venture down that path of discussion.)
Spending an hour rewriting your insertion sort in assembly language to
take advantage of the fact that only the CX register can be used in an
autopostdecrement mode is not a productive activity.
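
To be concrete: the quicksort change is purely algorithmic.  A minimal
sketch, with nothing machine-specific in it:

    /* portable quicksort -- the win over insertion sort holds on
       any instruction set, bus width, or register file */
    void quicksort(int a[], int lo, int hi)
    {
        int i, j, tmp, pivot;

        if (lo >= hi)
            return;
        pivot = a[hi];
        i = lo;
        for (j = lo; j < hi; j++)
            if (a[j] < pivot) {
                tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                i++;
            }
        tmp = a[i]; a[i] = a[hi]; a[hi] = tmp;
        quicksort(a, lo, i - 1);
        quicksort(a, i + 1, hi);
    }

Nothing in it cares which CPU it runs on.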
     One important difference between "big, fast" machines and "small,
slow" machines is the fact that the former provide a much better base
than the latter for the implementation of efficient algorithms.  I think
I can clarify my point (and avoid a lengthy stream of followups calling
me an ignorant undergrad) by comparing two machines.  The first is
a 68030-based workstation in the department's UNIX lab; I'm using it
right now.  The second is one of the 80535-based 8-bit single-board
computers with which I'm building a network at work.  I do most of my
programming for both machines in C, and--despite your poor opinion of
my intellectual ability--I write efficient code.  I don't allocate more
memory than I need, I try to use the algorithm with the best O(f(N))
for often-repeated routines, etc.
     However, the 68030 ends up executing much better code than the
80535, because our 80535 compiler must struggle to produce code for
the very limited instruction set and register set.  The 80535 was
designed to be programmed in assembly language, while the '030 provides
many instructions to support high-level language programming.  To get
my C programs to run efficiently on the 80535, I must introduce all
sorts of "optimizations" (char instead of int, avoiding arrays whenever
possible, lots of temporary register variables, static variables that
should logically be automatic, etc.) that have nothing to do with the
efficient functioning of my algorithms; there is a sketch of the kind
of thing I mean below.  The alternative is assembly language, which is
economically unfeasible.  On the '030, gcc produces fast enough code
that I have never had to bother studying the assembly language it
produces.  One of my co-workers is digitally storing and playing back
sound with the 80535; doing so requires a ridiculous amount of
bank-switching.  With a 32-bit address bus, the workstation doesn't
have to do bank-switching.
     Of course, many people who posted or e-mailed a followup to my
original article have emphasized the evils of large CODE size, but
nobody has yet acknowledged that emerging applications need lots of
DATA space and need to access it very quickly.  Of course, it is a bit
ridiculous to compare a 32-bit CPU with an 8-bit microcontroller, but
the comparison helps to emphasize the correlation between software
evolution and hardware power.  The benefit of powerful hardware is not
that it enables bad programmers to hide their stupidity, but rather
that it enables good programmers to develop new programs--and new
types of programs--that would have been impossible with fewer MIPS.
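
Here is the sketch promised above (illustrative only, not real
project code):

    /* portable version: what I would write for the '030 */
    void clear_buf(int *buf, int n)
    {
        int i;

        for (i = 0; i < n; i++)
            buf[i] = 0;
    }

    /* '535-flavored version: a char index and a fixed size let the
       compiler keep the counter in a single 8-bit register -- a
       contortion that has nothing to do with the algorithm */
    void clear_buf_535(unsigned char *buf)
    {
        unsigned char i;

        for (i = 0; i < 200; i++)
            buf[i] = 0;
    }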

>There's a difference between efficiency and effectiveness.  I hope you learn it
>before you graduate.
Efficiency:    You write compact code that doesn't excessively strain
	       computer resources or do anything incredibly stupid.
Effectiveness: You get the program done, thus avoiding unemployment.
	       The people you're working for sell lots of copies and
	       avoid bankruptcy.  The program is not as elegant as
	       it might be, but it doesn't kill anybody.  If you
	       get enough complaints, you fix the program.  If you
	       get enough compliments, you fix the program.

-------------------------------------------------------------------------
Brian Pane	University of Florida Department of Computer Science
bp@beach.cis.ufl.edu		Class of 1991

"If you can keep your expectations tiny,
 you'll get through life without being so whiny" - Matt Groening

#ifdef OFFENDED_ANYONE
#  include "disclaimer.h"
#endif
-------------------------------------------------------------------------

peter@ficc.ferranti.com (Peter da Silva) (06/10/90)

In article <2066@dali> icsu8053@ming.cs.montana.edu (Craig Pratt) writes:
> There are actually a few different revolutions going on and I don't think
> the one sparked by Xerox/Apple is the most important.  I think the most
> revolutionary idea was sparked by Unix.

Do you know what that idea is?

> It's a bit more philosophical
> than technical.  As I understand it, the idea behind Multics and, subse-
> quently, Unix was to build an OS which does almost everything without
> taking into consideration the performance or cost of its use.

I guess not. The idea behind UNIX was the software tools approach. Design
small tools that do one job well, and combine them using powerful but simple
techniques, primarily the pipeline, to build larger tools. UNIX was the direct
opposite of the kitchen-sink approach to O/S design: it only does the things
that are needed to support the software tools.

Setting windowing standards at this point in software development makes about
as much sense as settling on Watt steam engines... planetary gears and all...
to power industry.
-- 
`-_-' Peter da Silva. +1 713 274 5180.  <peter@ficc.ferranti.com>
 'U`  Have you hugged your wolf today?  <peter@sugar.hackercorp.com>
@FIN  Dirty words: Zhghnyyl erphefvir vayvar shapgvbaf.

peter@ficc.ferranti.com (Peter da Silva) (06/10/90)

In article <23495@uflorida.cis.ufl.EDU> bp@condo.cis.ufl.edu (Brian Pane) writes:
> I'm afraid you've completely misinterpreted my posting.  I did *not* say that
> fast hardware justifies bad algorithms.  I said that we shouldn't have to
> write bizarre code that exploits the peculiarities of a particular machine.

Nobody is saying you should. You're attacking a straw man, here. Just as you
are with your division of the world into 68030 class processors and
microcontrollers. The vast majority of computers out there are in the gap
between the two.

People aren't saying you shouldn't use time-efficient but space-wasteful
algorithms when time is critical and space isn't. The problem is that there
are systems out there, like X, that are fundamentally flawed. Putting the
code to handle expose events into an application program makes about as much
sense as putting erase and kill handling into "cat". This means that every
program on the system has its own slightly different version of what is
basically O/S code. Why? Because when X was first designed, they couldn't
afford to put much memory in the display servers. Talk about bizarre code
to deal with the peculiarities of a particular machine. And this particular
design decision... which might have made sense at one point in time... is now
being cast in concrete. Wonderful.
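
To be concrete, here is roughly the boilerplate every client carries
(a skeletal sketch; redraw() stands in for each application's own
repaint code):

    #include <X11/Xlib.h>

    extern void redraw(Display *dpy, Window win);  /* hypothetical */

    void event_loop(Display *dpy, Window win)
    {
        XEvent ev;

        XSelectInput(dpy, win, ExposureMask);
        for (;;) {
            XNextEvent(dpy, &ev);
            /* the server will not repaint a damaged window for us;
               every application re-implements this */
            if (ev.type == Expose && ev.xexpose.count == 0)
                redraw(dpy, win);
        }
    }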
-- 
`-_-' Peter da Silva. +1 713 274 5180.  <peter@ficc.ferranti.com>
 'U`  Have you hugged your wolf today?  <peter@sugar.hackercorp.com>
@FIN  Dirty words: Zhghnyyl erphefvir vayvar shapgvbaf.

kt4@prism.gatech.EDU (Ken Thompson) (06/11/90)

>In article <266577FA.6D99@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
>   According to jtc@van-bc.UUCP (J.T. Conklin):
>
>  [stuff deleted]
>
>One of the wonderful things about 20Mip 32Mb workstations is that I
>don't have to worry about eff. when writing most code.  I can
>concentrate on other issues such as clarity of code, speed of
>execution, speed of development, fancy features, ...
>>
>by "eff." i mean "frugal of code and data".
>

I strongly disagree that efficiency (including code/data size) can reasonably
be ignored.  No matter how quickly the power of machines grows, the things that
we want to do with them grow even faster.  I believe it is a grave mistake
not to be concerned with the efficiency of the algorithms used in programming.
IMHO, this attitude has led to a severe decline in the capability of software
vs. the hardware resources required to execute it.  Note I did not say
anything about the cost of these resources.  I find this depressing, to say
the least.

				Ken
 

-- 
Ken Thompson  GTRI, Ga. Tech, Atlanta Ga. 30332 Internet:!kt4@prism.gatech.edu
uucp:...!{allegra,amd,hplabs,ut-ngp}!gatech!prism!kt4
"Rowe's Rule: The odds are five to six that the light at the end of the
tunnel is the headlight of an oncoming train."       -- Paul Dickson

wwm@pmsmam.uucp (Bill Meahan) (06/12/90)

In article <10342@hydra.gatech.EDU> kt4@prism.gatech.EDU (Ken Thompson) writes:
>>In article <266577FA.6D99@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
>>   According to jtc@van-bc.UUCP (J.T. Conklin):
>>
>>  [stuff deleted]
>>
>>One of the wonderful things about 20Mip 32Mb workstations is that I
>>don't have to worry about eff. when writing most code.  I can
>>concentrate on other issues such as clarity of code, speed of
>>execution, speed of development, fancy features, ...
>>>
>>by "eff." i mean "frugal of code and data".
>>

So far, nobody has addressed my major point: "improved" software/hardware
MUST have an economically justifiable benefit TO THE USER in order for it
to be worth purchasing.  Certainly, nobody in their right mind would suggest
that programs ONLY be written for 256K PC/XT's or equivalent, but it does
mean that "features" and benefit to the software WRITER must be tempered
by the benefit to be gained by the end user.

For example, suppose your new package allows the processing of
some data 10 times as fast as whatever I'm using now.  And suppose that it
requires me to replace 10,000 PS2/50Z's in the company.  The question is,
what will processing that data 10 times faster allow me to do?  Can I get
rid of 90% of the PS2's ? How about the people whose desks they are on,
can I lay off 9000 clerks?  5000?  1000? 10? 1?  Can I cut my overtime
expenditures by enough $$ to offset the costs of buying the super software
AND workstations at least enough to get a 30% TARR (Time Adjusted Rate of
Return)?  If not, then no matter how wonderful the technical aspects of
your super software, I simply can't afford it, even if the company has
a few $billion sitting in the bank since I can get a BETTER financial
return with another investment.  So, YOU lose.

My point is this: customers do not have unlimited funds, so the technical
wonderfulness of software is not all that matters - it's got to give
enough real-world return to be worth buying!

-- 
Bill Meahan  WA8TZG		uunet!mailrus!umich!pmsmam!wwm
I speak only for myself - even my daughter's cat won't let me speak for her!

greg@sce.carleton.ca (Greg Franks) (06/13/90)

In article <23473@uflorida.cis.ufl.EDU> we find:
...
>>There seems to be a mindset among many CS majors that
>>"memory is cheap and hardware is fast, so why worry about efficiency?"
>>
>>This kind of thinking is the result of looking only at chip prices and
>>the latest hot-rod announcements.  In truth, only a SMALL subset of the
>
>If such a mindset exists, it is not because of the abundance of powerful
>hardware.  It is because CS majors are taught to build robust, maintainable,
>and therefore seemingly elegant programs rather than compact and clever
>programs.  If we get used to writing ruthlessly brilliant programs,
>we'll only add to the "software crisis" when we graduate.

David Parnas would beg to differ.  He is not certain which is worse,
an Engineer who has been writing Fortran for the last 20 years, or a
present day CS major.  The former do not know ``modern'' programming
practices, hence they produce goto-full programs that do one thing
rather well.  The latter produce ``elegant'' programs that not only do
what the customer wanted (maybe), but twenty billion other things as
well.  After all, does `ls' really need 18 different options?
Unfortunately, computer programming still seems to live in the CISC
era.

Prof. Parnas recently wrote an article in IEEE Computer on this very
subject.  I recommend reading it.

From:  "just call me Tex (as in massacre) - my productivity is
measured in negative lines"  :-) :-) :-)
-- 
Greg Franks, (613) 788-5726              |"The reason that God was able to
Systems Engineering, Carleton University,|create the world in seven days is
Ottawa, Ontario, Canada  K1S 5B6.        |that he didn't have to worry about
greg@sce.carleton.ca uunet!mitel!sce!greg|the installed base" -- Enzo Torresi

bpendlet@bambam.UUCP (Bob Pendleton) (06/13/90)

From article <2662D045.3F02@tct.uucp>, by chip@tct.uucp (Chip Salzenberg):

> Substitute "four megabytes of RAM" for "COBOL", however,
> and you get a depressingly accurate summary of the attitude
> of the day.  Am I implying that that 4M-or-die programmers
> are trogolodytes as well?  You bet your data space I am.
> -- 
> Chip Salzenberg at ComDev/TCT   <chip%tct@ateng.com>, <uunet!ateng!tct!chip>

A long time ago (about 10 years), at a company that has since changed
its name several times, I and 3 other damn good programmers spent a
year or so writing the runtime support libraries for a COBOL system
that generated code for an 8080 based "terminal" called the UTS400.
The compiler ran on a number of different machines and generated code
that ran on the '400. You linked the code with our runtime code and
you got an application you could down load to an eight inch floppy and
then boot on the '400. 

Our library did all the weird arithmetic and data formatting that
COBOL needs.  It also implemented a disk file system, host
communications, screen formatting, data entry validation,
multithreading (yes it was a multiuser system, up to 4 users if I
remember correctly), and segment swapping. It fit in 10K bytes. Normal
'400s had 24K, some had 32K. I know that at least one 20K-line COBOL
program ran on the machine all day, every day. 

Marketing decided we should also support indexed sequential files.
They "gave" us 1K to implement it. That is, the code for the indexed
sequential file system could not increase the size of the library by
more than 1K bytes.  We wrote the indexed sequential files module in
2K and rewrote the rest of the system to fit in 9K. 

So when people tell me they have done incredible things in tiny
memories on absurd machines, I believe them. I've even been known to
buy them a drink.

Yes, it can be done. But for most things it is an absurd waste of
time. I can write code 5 to 10 times faster when I DON'T have to
worry about every byte I spend than when I'm memory tight. And I can
write code that RUNS several times faster when I'm free with memory
than when I have to count every byte. 

Sometimes you must run a ton of program on a pound of computer. Many,
if not most, commercial programs in the MS-DOS world fall into that
realm. But, most programming done in the name of "memory efficiency"
is just wasted time. You have to sell a lot of copies to make back the
cost of all that code tightening. Not to mention what it does to the
cost of further development. 

			Bob P.

P.S.

I also learned an important lesson on the power of structured design
and prototyping from this project. But that's another story.

-- 
              Bob Pendleton, speaking only for myself.
UUCP Address:  decwrl!esunix!bpendlet or utah-cs!esunix!bpendlet

                      X: Tools, not rules.

oz@yunexus.UUCP (Ozan Yigit) (06/14/90)

In article <23473@uflorida.cis.ufl.EDU> bp@beach.cis.ufl.edu (Brian Pane) 
babbles:

>Finally, note that large and "inefficient" programs advance the state
>of the art in software more often than small and clever programs.

And, you are writing this on an operating system that advanced the "state
of the art" without apparently needing 1/50th of what you may have on
your desk as a computing resource. So ironic.

oz
-- 
First learn your horn and all the theory.	Internet: oz@nexus.yorku.ca
Next develop a style. Then forget all that 	uucp: utzoo/utai!yunexus!oz
and just play.		Charlie Parker [?]	York U. CCS: (416) 736 5257

hedrick@athos.rutgers.edu (Charles Hedrick) (06/16/90)

Indeed.  I ported Kermit to Minix.  It took me several days to do.
On other versions of Unix you do it by typing "make", and maybe
fixing a few system dependencies.  The time was spent removing help
facilities and shortening text strings to get it to fit.  This is
not the way I want to spend my time (aside from being irked that
Kermit's nice user interface is being butchered in the process).

peter@ficc.ferranti.com (Peter da Silva) (06/16/90)

In article <Jun.16.00.15.42.1990.13822@athos.rutgers.edu> hedrick@athos.rutgers.edu (Charles Hedrick) writes:
> Indeed.  I ported Kermit to Minix.  It took me several days to [get it
> to fit]

Indeed. Which kermit were you using? Ours runs fine in small model.

+ which kermit
/usr/bin/kermit
+ size /usr/bin/kermit 
62124 + 30776 + 8606 = 101506 = 0x18c82
+ file /usr/bin/kermit 
/usr/bin/kermit:	separate executable not stripped
+ dates /usr/bin/kermit
C-Kermit, 4C(057) 31 Jul 85
Unix tty I/O, 4C(037), 31 Jul 85
Unix file support, 4C(032) 25 Jul 85
C-Kermit functions, 4C(047) 31 Jul 85
Wart Version 1A(003) 27 May 85
C-Kermit Protocol Module 4C(029), 11 Jul 85
Unix cmd package V1A(021), 19 Jun 85
User Interface 4C(052), 2 Aug 85
Connect Command for Unix, V4C(014) 29 Jul 85
Dial Command, V2.0(008) 26 Jul 85
Script Command, V2.0(007) 5 Jul 85
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.
<peter@ficc.ferranti.com>