[comp.unix.wizards] beware the hardware/software time warp

rcd@ico.isc.com (Dick Dunn) (04/10/91)

kemnitz@POSTGRES.BERKELEY.EDU (Greg Kemnitz) writes:
> X now has been elevated to the status of "Standard"...
> ...it seems that machines like the Sparc II and DEC 5K have finally gotten
> enough moxie to run X well and their prices will be in the "easily affordable"
> range in a couple years or less...

I think Greg's assessment is correct about where we stand on hardware
capability:  Realistically, we're just at the point where existing
"workstation" hardware can handle the resource demands of X; it's a stretch
to say that X was really "usable" at all until very recently.

[some complaints about effect of X on earlier machines]
> ...But it appears that it is easier to wait for fast
> machines rather than to design standard graphics protocols that aren't bloated,
> politically acceptable masses.  Also, it appears that the de facto trend in
> industry is to hope hardware improves fast enough to let poorly written
> software run well rather than writing software properly, and it is hard to
> argue that this strategy has been a complete failure,...

All well said, but there's still a trap.  Greg's premise is that in a
couple years, sufficient hardware to run X will be cheap...but that skews
the timing, because in a couple of years we won't be running today's soft-
ware.  Instead, we'll have what we've got today plus two years' accretion
of features/chrome/goo/crap...and it's likely that the cheap machines then
will still be not quite able to keep up!  If you're going to project two
years in the future on the hardware (price, capacity, performance), you've
got to project two years out on the software too...and that makes the
future look rather less rosy.
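
To put rough numbers on the shape of the argument (both growth rates below
are invented, purely for illustration):  say hardware gets 50% faster every
year while the software's appetite grows 40% a year.  A couple of lines of C
makes the point:

/* back-of-the-envelope only: both growth rates are made up */
#include <stdio.h>

int main(void)
{
    double hw = 1.0, sw = 1.0;
    int year;

    for (year = 1; year <= 2; year++) {
        hw *= 1.50;     /* hypothetical: hardware 50% faster per year */
        sw *= 1.40;     /* hypothetical: software demands +40% per year */
        printf("year %d: raw hardware %.2fx, perceived speedup %.2fx\n",
               year, hw, hw / sw);
    }
    return 0;
}

Two years of impressive raw gains (2.25x) buys the user barely 15% of
perceived speedup if the software keeps bloating at nearly the same rate --
and that's the optimistic case where the bloat stays below the hardware curve.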

I think this points up a trap:  Too many people are "designing for the
future" in counterproductive ways.  They're not planning ahead so much as
fudging ahead--counting on future hardware advances to save them from bad
implementation decisions and bloated code.  It leaves us with the Red
Queen's warning--roughly "you have to run as fast as you can just to stay
in one place; if you want to go anywhere you must run twice as fast as that."

Another way to look at the practice is "deficit spending"!  We keep
spending performance we don't have (yet).  The only reason we've gotten
along this far is that UNIX started us off "in the black."
-- 
Dick Dunn     rcd@ico.isc.com -or- ico!rcd       Boulder, CO   (303)449-2870
   ...Lately it occurs to me what a long, strange trip it's been.

kemnitz@POSTGRES.BERKELEY.EDU (Greg Kemnitz) (04/10/91)

In article <1991Apr9.200858.8347@ico.isc.com> rcd@ico.isc.com (Dick Dunn) writes:
>[deleted...]

>> Also, it appears that the de facto trend in
>> industry is to hope hardware improves fast enough to let poorly written
>> software run well rather than writing software properly, and it is hard to
>> argue that this strategy has been a complete failure,...
>
>All well said, but there's still a trap.  Greg's premise is that in a
>couple years, sufficient hardware to run X will be cheap...but that skews
>the timing, because in a couple of years we won't be running today's soft-
>ware.  Instead, we'll have what we've got today plus two years' accretion
>of features/chrome/goo/crap...and it's likely that the cheap machines then
>will still be not quite able to keep up!  If you're going to project two
>years in the future on the hardware (price, capacity, performance), you've
>got to project two years out on the software too...and that makes the
>future look rather less rosy.  

When I said that this strategy was "not a complete failure", I didn't mean
to imply that I endorsed it or used it (certainly not!  Postgres runs better
on a good ol' Sun 3/1XX now than it ever has in the past).  What I meant was
that those who pursued this "strategy" (not a true "strategy", really - more
a lack of one) have at this point been able to maintain and possibly enhance
their presence in the marketplace.

[Dick Dunn goes on to point out that this is a trap...]

It's no trap - it's capitalism at its finest!

After all, if I'm a hardware company, I don't want software to be very usable
on my low-end iron; like big cars, high-end hardware is where I make my money.
And I certainly don't want software vendors concentrating on writing code well
so that customers don't need the extra MIPS.  If I'm smart, I'll either write
the tools inefficiently so software vendors can't write good code, or I'll
write the code myself and dump lots of what amounts to no-ops in it.  Or else
how would I sell them a new computer every three years?  If software didn't
deteriorate in quality as hardware speed increased, the only time customers
would ever need a new computer would be when their actual *needs* changed!

Why, in my last OS upgrade, did all the binaries in my /usr partition double
in size??  Gee, they must want me to buy some new hardware, even though I
was getting along just fine with the old disk until the upgrade :-(  And the
binaries the compiler generates are about 20% bigger, too.  The thing runs
like a pig, and the only OS bug they seemed to fix was the one that let me get
around buying a user upgrade license :-|
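
For what it's worth, the measurement behind that complaint is easy enough to
reproduce - something like the little ftw() walker below, run against the
same directory before and after an upgrade, is all it takes (just a quick
sketch of the idea, assuming a POSIX-ish ftw(); compile with plain cc):

#define _XOPEN_SOURCE 500              /* for ftw() on stricter systems */
#include <ftw.h>
#include <stdio.h>
#include <sys/stat.h>

static unsigned long total = 0;

static int add_size(const char *path, const struct stat *sb, int type)
{
    (void)path;                        /* name not needed, just the size */
    if (type == FTW_F)                 /* count regular files only */
        total += (unsigned long)sb->st_size;
    return 0;                          /* keep walking */
}

int main(int argc, char **argv)
{
    const char *dir = (argc > 1) ? argv[1] : "/usr/bin";

    if (ftw(dir, add_size, 16) != 0) { /* walk the tree, <= 16 open fds */
        perror(dir);
        return 1;
    }
    printf("%s: %lu bytes of files\n", dir, total);
    return 0;
}

Run it on the old system and the new one, and the "upgrade" speaks for itself.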
 
Other factors that work in this direction:

1.  In order to get the software on the desktop, you've got to be able to
    sell it.  And as environments have gotten more graphical in general,
    your graphics have to "look better" than the other guy's.  Hoping for
    "educated consumers" to make the right decision and choose the
    best-written and most efficient software over the one with the slickest
    presentation and fanciest demo graphics just won't do; software companies
    that make this assumption fill the bankruptcy courts of America.
    Graphics, like sex, sells.  _You_ may not buy it, but how many people like
    _you_ are out there?  Enough to make much of a difference in the
    marketplace?  I didn't think so.

2.  Interfaces to window systems are complex enough (I know they don't have
    to be, having designed interfaces myself that were far simpler, but they
    *are*, for whatever reasons) that software making extensive use of them
    represents a rather large manpower investment (the minimal Xlib fragment
    below gives some feel for how much boilerplate even the simplest window
    takes).  It would be a difficult move for any manager to justify chucking
    man-years or man-decades worth of work so that code can be rewritten
    properly, even if those man-decades were badly invested in the first
    place.  This simple fact defeats many of the most carefully designed
    software engineering methodologies.
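
Just to illustrate the kind of overhead I mean (a from-memory sketch, not
code from any of the systems discussed above): here is roughly the least
Xlib it takes to put one empty window on the screen and wait for a keypress.
No toolkit, no text, no application logic yet (compile with cc foo.c -lX11):

#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    Display *dpy;
    Window win;
    XEvent ev;
    int screen;

    dpy = XOpenDisplay(NULL);          /* connect to the X server */
    if (dpy == NULL) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    screen = DefaultScreen(dpy);
    win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                              10, 10, 300, 200, 1,
                              BlackPixel(dpy, screen),
                              WhitePixel(dpy, screen));

    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);              /* make it visible */

    for (;;) {                         /* the obligatory event loop */
        XNextEvent(dpy, &ev);
        if (ev.type == KeyPress)       /* any keypress: quit */
            break;
    }

    XCloseDisplay(dpy);
    return 0;
}

Multiply that by fonts, colormaps, resources, toolkit layers, and window
manager conventions, and the man-decades pile up in a hurry.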

>[Dick Dunn mentions that writing code that presumes hardware speed increases
> is tantamount to deficit spending]

This is quite true.  Even though I may have seemed above to be accusing
hardware vendors of some evil plot, this deficit spending is simply what is
going on - and its effects on the marketplace are the same as if there were
such a plot!


-----------------------------------------------------------------------
Greg Kemnitz                  |      "I ran out of the room - I
Postgres Chief Programmer     |      didn't want to be killed by a pile
278 Cory Hall, UCB            |      of VMS manuals" :-)
(415) 642-7520                |
kemnitz@postgres.berkeley.edu |      --A friend at DEC Palo Alto in the Quake