[comp.software-eng] Software Technology

chris@mimsy.UUCP (Chris Torek) (11/02/87)

A potpourri of responses:

In article <5084@utah-cs.UUCP> shebs%defun.uucp@utah-cs.UUCP (Stanley T.
Shebs) writes:
>>Which of the tasks to be solved should the software developer consider?
>>On one machine there may be hundreds or even tens of thousands of users.
>>Some of these users may have dozens of different types of problems.  

>Then the software developer has dozens of tens of thousands of tasks to
>solve!  ... really means that some compromises have to be made.  This
>situation is not unique to software.  For example, bridge designers don't
>usually get new and unique rivets for each bridge - instead, they have
>to order from a catalog.

If you have ever wondered why some military projects are so expensive, it
is because they like to get new and unique rivets, and so forth.
It is often necessary to do so in order to get the last few percent
of performance.

(I agree with most of the rest of his article; however:)

>>The VAX has twelve general-purpose registers available. ... I object
>>to the compiler, which does not need any registers for other purposes,
>>only giving me six.

>How do you know it "does not need any registers for other purposes"?
>Are you familiar with the compiler's code generators and optimizers?
>Writers of code generators/optimizers tend to be much more knowledgeable
>about machine characteristics than any user, if for no other reason
>than that they get the accurate hardware performance data that the
>customers never see...

Stan is right in general; in this case, however, the `>>' author
appears to be using the Portable C Compiler.  PCC is more than a
decade old, and it is not bad at what it was designed for, which
most certainly does not include optimisation.  Better compilers are
available: DEC sells its VMS C compiler for Ultrix; Tartan
Laboratories sells an optimising PCC; GNU CC optimises.  The last
is even free.
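
For the curious: on the VAX, r12 through r15 are the ap, fp, sp,
and pc, and PCC keeps r0 through r5 as scratch for expression
evaluation and return values, leaving just the six registers r6
through r11 for `register' variables.  A minimal sketch (the
function is my own invention, and the six-variable limit is from
memory of the 4BSD PCC):

	int
	sum6(v)
	register int *v;	/* 1st register variable */
	{
		register int a, b, c, d, e;	/* 2nd through 6th */
		register int f;	/* 7th: no register left, so PCC */
				/* quietly makes it an automatic */

		a = v[0]; b = v[1]; c = v[2];
		d = v[3]; e = v[4]; f = v[5];
		return (a + b + c + d + e + f);
	}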

In article <275@gethen.UUCP> farren@gethen.UUCP (Michael J. Farren) writes:
>... "Are all of the applications currently available and under
>development being written so that they exhibit the maximum efficiency
>possible, both in speed and in size?".  The answer, in many cases,
>is definitely "NO".

So?

In the same vein, in article <516@esl.UUCP> ssh@esl.UUCP (Sam) writes:
>I consider this socially irresponsible.  Rather than company XYZ
>paying the extra $YYY to properly structure the code, they rely on the
>ZZZ thousands of consumers to pay extra for disk storage, power, time,
>etc. to store, feed, and wait for this software.

On the other hand, rather than paying the extra $YYY to structure
the code properly (structuring code typically makes it slower, by
the way*) and thereby being late to market, company XYZ gets it out
in time, and, more importantly, the ZZZ thousands of customers
consider it worthwhile even though it uses extra disk storage,
power, time, etc.
-----
* This is likely to change with new breeds of optimising compilers,
  which optimise across function calls.  Software technology on the
  march! :-)
-----
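
To make that footnote concrete, here is a hedged sketch (the
functions are mine, invented for illustration).  Every trip through
the "properly structured" accessor below costs a calls/ret pair,
which on a VAX is not cheap; an optimiser that works across
function calls can expand get() in line and compile
sum_structured() into the same code as sum_flat().

	int table[100];

	int
	get(i)			/* the "structured" access path */
	int i;
	{
		return (table[i]);
	}

	int
	sum_structured()
	{
		register int i, s;

		s = 0;
		for (i = 0; i < 100; i++)
			s += get(i);	/* one calls/ret per element */
		return (s);
	}

	int
	sum_flat()
	{
		register int i, s;

		s = 0;
		for (i = 0; i < 100; i++)
			s += table[i];	/* what an inliner produces */
		return (s);
	}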

>Let's reserve the productivity / portability argument for those
>few-of-a-kind cases such as custom-designed software (e.g. military /
>government contracts), but let's not get carried away by excusing the
>laziness of the commercial software market.

Who cares how lazy the market is?  The idea behind competition is
that if someone *can* do better, someone *will*.  Perhaps those
who have faster, smaller software (that is nonetheless later to
market) have failed in the competition because that is not what
the customers really want.  Or perhaps not.  What does it matter?
Instead of railing against the inefficiency of some software, ask
*and wait* for something more efficient, or write it yourself.  If
everything out there is worse than it might be, improve upon it
and sell it (or give it away).

I think that everything *is* worse than it might be, but the cost
of improving it is more than people wish to spend.  That this is
not currently true of hardware is an interesting fact, but not a
reason to claim that software technology sucks.  It does make a
good point for comparison, however.  Why *are* we willing to spend
millions of dollars on hardware improvements, but not on software
improvements?  I suspect the answer is this:  an improvement in
hardware affects every bit of software that runs on that hardware.
Rewriting one program to make it more efficient affects only that
one program.  There is at least one place where this is not true,
namely inside compilers: improve the code a compiler generates, and
every program it compiles gets faster.  And surprise! people are
spending quite a bit on compiler development too.

(It all comes back to counting the beans.  Some beans are multipliers,
and fewer of those is more significant than fewer beans in some of
the addends.  In fact, it reminds me of tuning programs: of optimising
the 90% of the code that takes only 10% of the time versus the 10%
that takes 90% of the time.)
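
To put rough numbers on that (mine, not measurements; a one-off
Amdahl's-law sketch in C): doubling the speed of the 10% of the
code that takes 90% of the time nearly halves the total run time,
while doubling the speed of the other 90% of the code barely
dents it.

	#include <stdio.h>

	int
	main()
	{
		double hot = 0.90, cold = 0.10;	/* fractions of run time */

		/* halve the time of the hot 10% of the code */
		printf("hot code 2x faster:  total = %.2f\n",
		    hot / 2 + cold);	/* 0.55, about 1.8x overall */

		/* halve the time of the cold 90% of the code */
		printf("cold code 2x faster: total = %.2f\n",
		    hot + cold / 2);	/* 0.95, about 1.05x overall */
		return (0);
	}
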
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7690)
Domain:	chris@mimsy.umd.edu	Path:	uunet!mimsy!chris

dhesi@bsu-cs.UUCP (11/04/87)

In article <9193@mimsy.UUCP> chris@mimsy.UUCP (Chris Torek) writes:
     of optimising the 90% of the code that takes only 10% of the time
     versus the 10% that takes 90% of the time.

After one optimises the 10% of the code that takes 90% of the time, one
is left with 10% of the code that now only takes 10% of the time.

NOW the remaining problem is how to optimise the 100% of the code that
takes 100% of the time.
-- 
Rahul Dhesi         UUCP:  <backbones>!{iuvax,pur-ee,uunet}!bsu-cs!dhesi