[comp.misc] A tirade about inefficient software & systems

kraig@biostr.biostr.washington.edu (Kraig Eno) (10/25/90)

Excuse me, I'm fed up.  If you are one of the world's many purveyors of 
fat software systems, please take these comments to heart; otherwise, 
ignore my ranting.

I see in today's client/server supplement to Digital Review (a true 
cutting-edge rag if there ever was one) that DEC's VP of VAX/VMS Systems 
and Servers, William Demmer, says "The rate of technological change is 
accelerating beyond what we expect will be the world's ability to absorb 
it.  That puts the burden back on the suppliers to help figure out how to 
use that capability." Examine that second sentence.  That's right, the 
user is so overwhelmed with computing power that he can't think of a way 
to use it all, so they (the seller) have to think of NEW WAYS of using the 
hardware.  Why, I ask you?  Because the seller wants MONEY. And if the 
seller can't think of a legitimate use for the resources, they will merely 
bog their CPUs down doing useless things, because it sells the product.

This states briefly what is wrong with the entire software industry right 
now, and I am sick of it!  I used to think that, as a computer 
professional, I was supposed to build systems which did what the user 
wanted to do in the most efficient, direct, and complete manner possible.  
But NO!  Software companies continually produce bigger, slower packages, 
and the entire industry is locked into this struggle of software "taking 
advantage" of faster hardware, and hardware vendors trying to keep pace 
with resource-hogging software.  Since when are we trying to "figure out 
how to use" resources?  We have twice as many MIPS as we had previously.  
GREAT!  Let's make more complicated software that does the same thing 
we've always done, but slower!

It wouldn't be so bad if the new software gave us fundamentally different 
capabilities, but IMHO it rarely does.  Whether you agree with me on this 
or not I want to ask you to take a step back and answer a question.  Will 
someone please tell me why a Mac Plus isn't a screaming fast machine?  
6 MHz or so, SCSI bus, a megabyte of memory, 2 dizzyingly fast serial 
ports.  These are awesome resources for any of the following easy tasks:
    Entering text
    Printing a letter (or even a 100-page manuscript)
    Searching a properly-built disk database (never mind a RAM-based one)
    Drawing little lines and boxes on the screen
    Terminal emulation (be it serial I/O or over a network)
    C programming
    Doing arrays of calculations in a spreadsheet

The above list encompasses practically everything the typical user really
does with their personal computer, yet the thing seems slow as a dog EVEN
WHEN DOING THESE SAME TASKS.  Where have all the cycles gone, I wonder?
It's because the machine spends all its (quite abundant) CPU time doing 
things that don't really contribute to the final goal.  This is the same 
reason that a NeXT machine with a fast 68030 and gobs of memory is slow, 
slow, slow.  It wouldn't be so bad, except that I find no machines 
available with great hardware that are not crippled by system software.  
The software mafia is even trying to convince us that DOS machines should 
have windows.

Can someone convince me that all the world NEEDS windows, bitmapped 
graphics, image processing, DTP, virtual memory, and PostScript?  And for 
crying out loud, can someone tell me why every operating system upgrade in 
the history of computing is BOTH bigger AND slower?  I get so tired of it 
all.  Think of where macs would be if the system gobbled only 200K, 
MultiFinder was 50K, and a simple word processor would fit into, say, 
100K.  100K!  That's a ridiculously huge chunk of memory!  I normally 
don't have these fits; I behave like a sheep, and believe the system 
vendors when they say they need 2 megabytes simply to run the OS.  That's 
crazy!  Think about what 2 MB is, and how much DATA fits there. Is there 
really 2 MB of functionality in MS-Windows?  So you tell me that under CP/M, 
we had less to take care of, less to worry about.  EXACTLY.  We convince 
the world that they need bitmapped graphics for everything, 
what-they-see-is-what-they-get (even though it STILL never comes out the 
way it looks on screen, even on a NeXT under Display PostScript).  We 
convince them that showing the same characters through fancy windows 
somehow gives them a huge increase in capability. It RARELY DOES.

Think about your computer in the old input/output terms that you learned 
in college: "an algorithm takes some input, operates on it, and produces 
some output".  Think about your secretary's computer the same way.  Take 
a day's keystrokes and mouse clicks, collect them together, and think 
about whether it should take a 33MHz anything with 6 MB of RAM to 
transform them to what comes out the other end.

Don't get me wrong, I do NOT want to put us back in the stone ages of 
computers.  But I think that the same old operations we've been doing all 
along should be incredibly fast on any current computer, INCLUDING a 
"slow" 8 MHz XT.  I type these 1-page text-only letters on this Mac IIx in 
the latest version of Word, and send the output to a LaserWriter NT.  And 
I have to wait 30 seconds to get my page.  That's DUMB.  Sure, if I were 
downloading 8-bit grayscale bitmaps, I would expect to wait.  But 1000 
bytes of text?  Come on now, someone did their job badly.  There is NO 
REASON a system with sophisticated capabilities must be SLOW on the easy 
jobs.  My foremost assertion, the foundation of this whole long-winded 
article:  COMPLEX OPERATIONS MAY BE SLOW, but SIMPLE JOBS SHOULD BE FAST.  
And not just fast, but small as well.

And an important corollary to my assertion: PRACTICALLY ALL THE OPERATIONS 
WE TYPICALLY DO WITH OUR COMPUTERS ARE SIMPLE JOBS.  This includes the 
normal text editing, spreadsheet, database, communication sorts of tasks 
that people spend their hours doing all over the world.  The problem only 
comes when we see the current capability, and expand our expectations 
based on it.  People never used to want to make their business letters 
look like they were printed at a shop.  Why do they now?  Instead of 
selling them new fonts, we should sell them common sense.

What is the solution?  I put forward a few simple requests.

DON'T make every application do everything under the sun.  Do the 
essential operation, do it well, and make it efficient!

Look at the resources demanded by the bare application, then look at what 
it requires with all the bells and whistles added.  Then ask yourself if 
the gadgets are worth the mind-boggling amount of CPU time and RAM that 
are spent on them.  If you think you need them to sell your product, then 
spend less on marketing and don't be so greedy.

Ask yourself why program Z isn't blazingly fast on an XT, and be honest.  
Is it because the theoretical minimum operation count is simply too large, 
or did you do it the easiest way you could think of just to get it 
done--knowing that a ripping fast CPU would make the performance 
"acceptable"?

And finally, one for the non-programmer.  Quit bellyaching!  I know one 
company which insists on printing 1000 envelopes through a PostScript 
printer for mass mailings.  And they complain because it doesn't work 
well.  Of course it doesn't work well!  You are putting characters on 
envelopes.  You don't need PostScript, you don't need a Mac, and you don't 
need a network!  What you need is a stack of label stock, a data file, an 
XT, and a 20-line C program.  (I had to set up the merge procedure anyhow, 
so the C code wouldn't have cost them anything.)  Users who persist in 
requiring publication-quality output for their inter-office memos, who 
think they need an SQL database for their company address list, or rely on 
fancy windowing systems to save them having to think -- these people are 
only fueling the fire of wastewater-quality software engineering.
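
About that 20-line program: it's roughly the sketch below.  The record
format here is made up for illustration (one address per record, blank
lines between records, continuous label stock six lines tall), but the
scale is the point.

/* labels.c -- a sketch of the sort of 20-line program meant above: read a
 * data file of addresses, one per record, records separated by blank
 * lines, and write plain text sized for continuous label stock that is
 * LABEL_LINES text lines tall.
 *
 * Usage:  labels < addresses.dat > prn        (or lpt1, /dev/lp0, ...)
 */
#include <stdio.h>

#define LABEL_LINES 6            /* height of one label, in text lines */

int main(void)
{
    char line[256];
    int printed = 0;             /* lines already printed on this label */

    while (fgets(line, sizeof line, stdin) != NULL) {
        if (line[0] == '\n') {                  /* blank line ends a record */
            if (printed > 0) {
                while (printed < LABEL_LINES) { /* pad down to next label   */
                    putchar('\n');
                    printed++;
                }
                printed = 0;
            }
        } else if (printed < LABEL_LINES) {     /* ignore over-long records */
            fputs(line, stdout);
            printed++;
        }
    }
    while (printed > 0 && printed < LABEL_LINES) {  /* flush the last label */
        putchar('\n');
        printed++;
    }
    return 0;
}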

End of tirade.

Kraig Eno, kraig@biostr.washington.edu
"To Coin a Phrase, I'm Perplexed." -- James Gaskin



P.S., a few favorite examples from the real world.

NeXTStep, where windows suck CPU time displaying themselves while you move 
the frame around.  That's cool, sure, but what does it buy you?  It sure 
wastes CPU time.

The Mac, where copying a 100-byte file to a floppy takes 3 seconds.  Has 
disk access slowed since the olden days?  Or have the pretty pictures 
perhaps overshadowed the actual work being done?

Adobe Photoshop and SuperCard (2 that I know, there are lots more) which 
display these really neat-O bitmapped graphics when you enter or leave the 
program, but which have nothing at all to do with their function.  Sure, I have an 
extra 10K of disk.  No problem, I don't mind wasting it.

Wonderful programs like PageMaker 4.0 that are so monstrous that they 
won't fit on one (800K!) disk -- you have to run a special utility to 
concatenate the pieces and store the resultant mass of code on your hard 
disk.  A few of these and you don't have any room left on your 20MB disk 
for any data at all.  So?  Go buy more disk, it's cheap!

One positive example of a workable platform is SGI, which has graphics 
hardware that tromps all over the competition and a programming library 
built to push that hardware to its limits, efficiently.  Unfortunately, 
they've saddled this core with NeWS running over DPS (or is it the other 
way around?), so just logging in to my account to type in a text window 
takes over 25 seconds.  Do they think I'm going to do desktop publishing, 
or something?  If I were, I would save my money and buy a Mac.  I want the 
hardware, I don't want PostScript.  Not for my screen graphics, anyway.

sanjay@ccwf.cc.utexas.edu (Sanjay Keshava) (10/25/90)

In article <9886@milton.u.washington.edu> kraig@biostr.biostr.washington.edu (Kraig Eno) writes:
> Excuse me, I'm fed up.  If you are one of the world's many purveyors of 
> fat software systems, please take these comments to heart; otherwise, 
> ignore my ranting.
> 
> This states briefly what is wrong with the entire software industry right 
> now, and I am sick of it!  I used to think that, as a computer 
> professional, I was supposed to build systems which did what the user 
> wanted to do in the most efficient, direct, and complete manner possible.  
> But NO!  Software companies continually produce bigger, slower packages, 
> and the entire industry is locked into this struggle of software "taking 
> advantage" of faster hardware, and hardware vendors trying to keep pace 
> with resource-hogging software.  Since when are we trying to "figure out 
> how to use" resources?  We have twice as many MIPS as we had previously.  
> GREAT!  Let's make more complicated software that does the same thing 
> we've always done, but slower!

This is an interesting point.  But one needs to make a practical
distinction between computer science and business.  One of the main
thrusts in CS is to develop the best way of doing things, just like
FFTs replaced DFTs.  In a business environment, especially targeted
towards the layperson, a company's concern is to market a product of
"reasonable" quality at minimum cost.  "Reasonable" is the key word.
Bug-free would be ideal, but in many cases it is better to enter a
market first to gain popularity than to enter late.  One can always,
as is customary, go back and fix the bugs later.

A former manager steadfastly believed that most programs could double
their speed if they were re-written properly.  From my own experience
and seeing what's commercially available these days, I must agree with
the original poster and my former manager.

> It wouldn't be so bad if the new software gave us fundamentally different 
> capabilities, but IMHO it rarely does.  Whether you agree with me on this 

I think a recent Wall Street Journal issue had an article about the
software houses playing the upgrade game.  They initially sell a
minimally functional product for hundreds of dollars and subsequently
sell upgrades for about $100.  (I think MS-DOS Word was used as an
example.)  This ensures a predictable steady cash flow to perpetuate
the process.  Most users don't realize that some upgrades provide
little additional functionality and are usually bug-fixes to older
versions.  In my opinion, a user shouldn't be charged for an upgrade
that merely contains bug-fixes to his existing version.

> Can someone convince me that all the world NEEDS windows, bitmapped 
> graphics, image processing, DTP, virtual memory, and PostScript?  And for 

This is purely a marketing issue.  Non-hackers find windows and icons
much easier to use than a command prompt.  Few executives and/or
non-computer-techies have the time or inclination to learn/memorize
commands and all their available options and switches.  Remember the
Macintosh commercial comparing 1 user manual with several PC or DOS
manuals?  That was a powerful statement.  As for the other stuff, a
salesperson will sell you anything he/she thinks will make your life
easier.

> crying out loud, can someone tell me why every operating system upgrade in 
> the history of computing is BOTH bigger AND slower?  I get so tired of it 

It's called PROGRESS.   :-)

> we had less to take care of, less to worry about.  EXACTLY.  We convince 
> the world that they need bitmapped graphics for everything, 
> what-they-see-is-what-they-get (even though it STILL never comes out the 
> way it looks on screen, even on a NeXT under Display PostScript).  We 

I've noticed this too.  Even with WYSIWYG displays, I and many others
still waste much paper in printouts.  (disclaimer:  I like trees and
try to minimize my printouts.)  It seems like the information age has
not only enhanced information processing but also made it easier to
generate more paperwork.

> And an important corollary to my assertion: PRACTICALLY ALL THE OPERATIONS 
> WE TYPICALLY DO WITH OUR COMPUTERS ARE SIMPLE JOBS.  This includes the 
> normal text editing, spreadsheet, database, communication sorts of tasks 

The jobs may be simple, but the underlying implementation may not be.
Additionally, user interfaces consume HUGE amounts of code and cpu
cycles, especially graphics.

The user interface is a major selling point.  It attracts the
non-technical user by making the application easier to use.  Many
applications are not targeted to the individual consumer or hobbyist.
Big business with the big bucks is a more lucrative market.

> Kraig Eno, kraig@biostr.washington.edu

In general, I agree with your frustrations.  But from a
business/consumer point of view, writing the best program doesn't
always make sense because a user doesn't care how it's done as long as 
it suits his/her needs.  The additional time required to clean up
programs and make them elegant gives the competition an advantage.


                                           Sanjay Keshava
                                               ->|<-
                             Student in the UT Graduate School of Business

peter@ficc.ferranti.com (Peter da Silva) (10/25/90)

In article <9886@milton.u.washington.edu> kraig@biostr.biostr.washington.edu (Kraig Eno) writes:
> Will someone please tell me why a Mac Plus isn't a screaming fast machine?  

Because the system software was crippled by having to provide *some* sort
of performance on a 128K thin Mac. Design decisions that were appropriate
for that environment become stunningly poor when more memory becomes
available. The Amiga 1000, with half the RAM available, is quite a snappy
little machine. Why? Because the system is built around a fast multitasking
kernel: the whole system doesn't lock up when some software component is off
doing its thing, and context switches are fast enough (no MMU or FPU involved:
you just swap registers) that it spends most of its time doing useful work
instead of waiting on I/O or shuffling task contexts.

> can someone tell me why every operating system upgrade in 
> the history of computing is BOTH bigger AND slower?

Because they're selling the sizzle, and you want to eat steak. The guy who
designed the Mac went on to build the Canon Cat: a dedicated word-processor
that died in the market because it didn't have the sizzle.

> Think about what 2 MB is, and how much DATA fits there. Is there 
> really 2 MB of functionality in MS-Windows?

No. It's got considerably less capability than AmigaOS 1.0, which ran
in 256K ROM and 256K RAM.

> DON'T make every application do everything under the sun.  Do the 
> essential operation, do it well, and make it efficient!

Bravo!

> Look at the resources demanded by the bare application, then look at what 
> it requires with all the bells and whistles added.  Then ask yourself if 
> the gadgets are worth the mind-boggling amount of CPU time and RAM that 
> are spent on them.  If you think you need them to sell your product, then 
> spend less on marketing and don't be so greedy.

Unfortunately, it's the consumers' fault. Anyone who buys a Mac or PC on the
basis of Multifinder or Windows is just encouraging the behaviour you're
complaining about. People don't want to look at alternatives... they just
want to be told they're doing the right thing so they can cut a check and
get out of there. Why anyone would buy a Mac or a PC when there are machines
like the Amiga, the Acorn Archimedes, or even the Atari ST (for all it's a
bug-for-bug copy of DOS with GEM, it's a *cheap* copy) is beyond me.
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
peter@ferranti.com

seals@uncecs.edu (Larry W. Seals) (10/25/90)

It occurs to me that you have to look past the recent PC history to
what came before.  I've worked on mainframe and mini systems with 
bulky operating systems where the upgrades (some free, some expensive)
are nothing more than fixes of the SPARs that came before.  On the 
system I'm currently working with, we're on version 22.1.1.R3 of the 
Op/Sys, which we recently upgraded to from version 22.1.1.R1.  We skipped
R2 because it was so buggy (R3 fixed the regression errors from R2).
At least our upgrades were free.  Same thing with our word processing
software and our DBMS package.

I agree also that building software so fat and kludgy and hoping that
faster CPUs, disks and memory will make it efficient is plain 
stupid!  My previous boss was of that school.  Our company was writing
test administration software to be administered to physicians either
at home or at our site.  He insisted that it be written in COBOL (gag!)
and all development and testing took place on PS/2 Model 50s. When
the software was demonstrated, of course it ran like the wind. What we
couldn't make him understand is that not everyone has power like that
at their disposal. Two years later the project is still underway (I
left over a year ago because of disputes over just this issue - I 
advocated writing the software in C with some sort of optimization for
the lowest common denominator in terms of hardware) with probably
thousands of lines of COBOL code written. I can't wait to see how it
runs on a plain vanilla PC or XT (I know how it runs on a 286 PS/2 
Model 30... can you say DOG? [I knew you could ;-) ]).

What a waste!

**********************************************************************
    Wanted: low cost .sig on rental basis...
    Larry Seals @ Trailing Edge Software
    Our motto: Solving today's problems with yesterday's technology.
    Our credo: When it doesn't have to be the very best.

herrickd@iccgcc.decnet.ab.com (10/26/90)

In article <9886@milton.u.washington.edu>, kraig@biostr.biostr.washington.edu (Kraig Eno) writes:

[lots of good stuff about common sense in use of resources deleted]
> 
> And finally, one for the non-programmer.  Quit bellyaching!  I know one 
> company which insists on printing 1000 envelopes through a PostScript 
> printer for mass mailings.  And they complain because it doesn't work 
> well.  Of course it doesn't work well!  You are putting characters on 
> envelopes.  You don't need PostScript, you don't need a Mac, and you don't 
> need a network!  What you need is a stack of label stock, a data file, an 
> XT, and a 20-line C program.  (I had to set up the merge procedure anyhow, 
> so the C code wouldn't have cost them anything.)  Users who persist in 

"putting characters on envelopes"

Think about how you treat your mail when it comes in.  You stand over
a wastebasket and leaf through the pile.  Some stuff goes straight
into the wastebasket.  Some goes into a pile to be opened later
(never).  Some gets opened immediately.

Which of those three places does the mail with the pasted on labels
go?  How about the number ten envelopes that have your name printed
or typed or handwritten directly on the envelope?

All of the money and effort (including your consulting services) spent
on an envelope that the addressee throws directly into the wastebasket
is wasted.  If enough addressees do that with your client's mail, your
client goes out of business and you need to find a new client.

Now, what is appropriate?

If they can only afford one printer, PostScript is kind of useful.
$1500 for another printer to print envelopes might be a useful
investment, however.

A mailing of 1000 pieces is getting kind of marginal for doing
in house.  It might cost your client less to have the work done
by a lettershop.  Yellow pages under Advertising or Mail Order or
Direct Mail.  They need Direct Impression addressing.  They also
need to be able to trust the people in the lettershop.  That is accomplished
by asking a lot of questions, watching them do the work, and salting
the list.

dan herrick
dlh Performance Marketing
POBox 1419
Mentor, Ohio 44061
(216)974-9637

PS.  Thanks for your tirade.  I'm going to try to use it in some
discussions I am having.

dhesi%cirrusl@oliveb.ATC.olivetti.com (Rahul Dhesi) (10/27/90)

In <9886@milton.u.washington.edu> kraig@biostr.biostr.washington.edu
(Kraig Eno) writes:

     Will someone please tell me why a Mac Plus isn't a screaming fast
     machine?

I don't like this question.  If the Mac Plus were a sizzling fast 
machine, it wouldn't be a Mac Plus any more.  There are quite a few
other choices in the marketplace (the Amiga has been mentioned, and
then there is the 80386 family which sizzles at fairly good prices).

The real problem is that not all users buy what they want.
--
Rahul Dhesi <dhesi%cirrusl@oliveb.ATC.olivetti.com>
UUCP:  oliveb!cirrusl!dhesi
A pointer is not an address.  It is a way of finding an address. -- me

seanf@sco.COM (Sean Fagan) (10/29/90)

In article <=YN6UN5@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>In article <9886@milton.u.washington.edu> kraig@biostr.biostr.washington.edu (Kraig Eno) writes:
>> can someone tell me why every operating system upgrade in 
>> the history of computing is BOTH bigger AND slower?

Gee, that's strange.  SCO UNIX 3.2v2 is smaller and faster than 3.2.0 (less
buggy, as well, and with more features [some of them actually nice 8-)]).

>> DON'T make every application do everything under the sun.  Do the 
>> essential operation, do it well, and make it efficient!
>Bravo!

I agree.  And don't make your OS do everything, if you can make the
applications do it for you.  Just my opinion, though...

-- 
-----------------+
Sean Eric Fagan  | "*Never* knock on Death's door:  ring the bell and 
seanf@sco.COM    |   run away!  Death hates that!"
uunet!sco!seanf  |     -- Dr. Mike Stratford (Matt Frewer, "Doctor, Doctor")
(408) 458-1422   | Any opinions expressed are my own, not my employers'.

peter@ficc.ferranti.com (Peter da Silva) (10/29/90)

In article <8460@scolex.sco.COM> seanf (Sean Fagan) writes:
> >> DON'T make every application do everything under the sun.  Do the 
> >> essential operation, do it well, and make it efficient!
> >Bravo!

> I agree.  And don't make your OS do everything, if you can make the
> applications do it for you.  Just my opinion, though...

Finally, be careful *what* application you get to do a given task. For
example, window refreshing belongs in the display server task, not duplicated
in every application.

(yes, the inevitable X flame)
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
peter@ferranti.com

rcpieter@svin02.info.win.tue.nl (Tiggr) (10/30/90)

peter@ficc.ferranti.com (Peter da Silva) writes:

>Finally, be careful *what* application you get to do a given task. For
>example, window refreshing belongs in the display server task, not duplicated
>in every application.

Bad example.  You should have mentioned MS-DOS where every application
is very bad because the programmer spends ages on writing display and
printer drivers instead of writing a Good Program.

Window refreshing must be done by the task owning the window.  It must
get information on the region of the screen to be updated from the
window manager.  Having the window manager store bitmaps is stupid and
restrictive, and it consumes much too much memory.

Tiggr

stachour@sctc.com (Paul Stachour) (10/30/90)

>In article <8460@scolex.sco.COM> seanf (Sean Fagan) writes:
>> >> DON'T make every application do everything under the sun.  Do the 
>> >> essential operation, do it well, and make it efficient!
>> >Bravo!

>> I agree.  And don't make your OS do everything, if you can make the
>> applications do it for you.  Just my opinion, though...

Here I must totally disagree.  Putting things in applications that
belong in the OS is one of the worst forms of development known.
Every application developer puts them in.  And the cost is enormous.
And the speed is horrible, because no-one has enough time to check
performance and optimize correctly.  And the reliability is horrible
because it's not checked and reused.  And the reliability
is even worse because when the application fouls up, it crashes
the whole thing.

Done correctly, putting it in the OS (and paging it in when needed,
so that all applications don't pay for a feature they don't need)
is the performant, correct, consistent way.  And it's the cheapest.

I've spent so much time re-doing in applications software things 
that should have been done in the OS.  And when I got through,
it was of much lower quality than I was capable of.

But when people insist on using programs that are not really
operating systems and calling them OS's, then we all have problems.
...Paul
-- 
Paul Stachour         Secure Computing Technology Corp
stachour@sctc.com      1210 W. County Rd E, Suite 100           
		 	   Arden Hills, MN  55112
                             [1]-(612) 482-7467

peter@ficc.ferranti.com (Peter da Silva) (10/31/90)

In article <1534@svin02.info.win.tue.nl> rcpieter@svin02.info.win.tue.nl (Tiggr) writes:
> peter@ficc.ferranti.com (Peter da Silva) writes:
> >Finally, be careful *what* application you get to do a given task. For
> >example, window refreshing belongs in the display server task, not duplicated
> >in every application.

> Bad example.

Nope, a good example. An example everyone agreed with would be a bad
example.

> You should have mentioned MS-DOS where every application
> is very bad because the programmer spends ages on writing display and
> printer drivers instead of writing a Good Program.

I don't think there's anyone here who needs to be enlightened about
MS-DOS.

> Window refreshing must be done by the task owning the window.

Even if it's at the other end of a 2400 baud SLIP link from the display
server?

> Having the window manager store bitmaps is stupid,
> restricting and it consumes much too much memory.

It consumes just as much memory either way, and it requires the client
be able to respond in real-time... something that's just plain not
possible in UNIX. It makes as much sense to have "xman" handle update
events as it does to have "cat" do erase and kill processing.
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
peter@ferranti.com

cheeks@edsr.eds.com (Mark Costlow) (10/31/90)

I'm not going to reference anyone in particular, but I just have to put my
two cents in:

A couple times a week I come across some task, or am asked to do some task,
for which shell programming is a natural.  So, I whip out my favourite
shell (I won't tell you which one :-), and start glomming together various
instances of awk/sed/grep/du/df/wc/ps/etc, etc, etc. to get the job done.
Usually, the standard tools supplied by the OS will do what I need done,
but about once a month, one of the utilities will break (missing some vital
feature, has some stupid static table size limitation, or just plain dumps
core).  Typically what I do when this happens is prepend a "g" to the name
of the offending utility to use the GNU version of it, and almost without
fail, the gnu utility will do what I wanted to do, and FASTER too.  Not
just a little bit faster, but 2-10 TIMES faster.

So, more often than not, I reach for the GNU version of a utility if it
exists.  The only problem is that there's no GNU OS proper, so maintaining
the GNU stuff is almost like maintaining two versions of the OS
simultaneously (you know the story: The minute you throw away the system
awk, you stumble on a shell script from some random vendor that depends on
a bug in the system's awk :-).

So, there's my two cents.  I guess what I'd like to see is the OS vendors
incorporating more up-to-date algorithms in some of the old utilities
(egrep and awk seem to be the biggest offenders).  Or, a full-blown GNU OS
would be cool, but that's still quite a ways off.

Mark

PS:  I'm aware of PERL ... haven't had time to really get into it yet.
-- 
cheeks@edsr.eds.com    or     ...uunet!edsr!cheeks

jik@athena.mit.edu (Jonathan I. Kamens) (10/31/90)

  (Note the Followup-To.)

In article <P5S6=T1@xds13.ferranti.com>, peter@ficc.ferranti.com (Peter da Silva) writes:
|> Even if it's at the other end of a 2400 baud SLIP link from the display
|> server?

  If you're running over a SLIP line, then you put "*backingStore: true" and
"*saveUnder: true" in your .Xresources file, and good clients will
automatically request backing-store and save-under on all windows as a result
of this, and good X servers will grant them.

  Note that X was not designed so that the X server *can't* store state.  It
was designed so that the X server *might* be able to store state, and so that
clients can tell it whether or not this is an important requirement for them.

  As far as I know, all of the sample servers that come from MIT for X11R4 can
do backing-store and save-under, unless they're disabled explicitly when the X
server starts up.  Whether or not vendor X servers support backing-store and
save-under is the problem of the vendors.
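
For the Xlib-inclined, the hint itself is only a few lines.  This is a
sketch (compile with -lX11), and the server is still free to refuse both
hints:

/* backing.c -- requesting backing store and save-under from the client
 * side, in raw Xlib rather than resources.  The server may ignore both. */
#include <stdio.h>
#include <unistd.h>
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy;
    Screen *s;
    Window win;
    XSetWindowAttributes attrs;
    int scr;

    dpy = XOpenDisplay(NULL);
    if (dpy == NULL) {
        fprintf(stderr, "can't open display\n");
        return 1;
    }
    scr = DefaultScreen(dpy);
    s = ScreenOfDisplay(dpy, scr);

    /* What does this server claim to support? */
    printf("backing store: %d (0=NotUseful 1=WhenMapped 2=Always), save-unders: %s\n",
           DoesBackingStore(s), DoesSaveUnders(s) ? "yes" : "no");

    win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, 300, 200,
                              1, BlackPixel(dpy, scr), WhitePixel(dpy, scr));

    /* The hint: please remember my obscured contents so I don't have to
     * repaint them on every Expose event. */
    attrs.backing_store = WhenMapped;
    attrs.save_under = True;
    XChangeWindowAttributes(dpy, win, CWBackingStore | CWSaveUnder, &attrs);

    XMapWindow(dpy, win);
    XFlush(dpy);
    sleep(10);            /* long enough to drag another window across it */
    XCloseDisplay(dpy);
    return 0;
}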

|> It consumes just as much memory either way,

  Hogwash.  If the server is storing the contents of a window, it has to store
a pixmap of the entire window.  For a monochrome display, that's
1 bit x width x height; for a color display, it might be up to 32 bits x
width x height.

  On the other hand, very few X clients do refreshing by storing a complete
pixmap of the window.  Xterm, for example, only needs to store the characters
it needs to draw, i.e. 8 bits x width-in-characters x height-in-characters.  Most graphics
programs only store the attributes of the shapes that have been drawn, so that
they can be redrawn when refresh events come in.

  Unless an application stores a full pixmap, the memory required for an
application to store window state is far less than the memory required for the
server to store window state.
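
To put rough numbers on that (mine, for a hypothetical 80x24 xterm drawn
in a 9x15-pixel font; only the ratio matters):

/* Back-of-the-envelope cost of server-side backing store vs. the client
 * simply keeping the characters itself. */
#include <stdio.h>

int main(void)
{
    const long cols = 80, rows = 24;     /* window size in characters */
    const long cw = 9, ch = 15;          /* font cell size in pixels  */
    const long w = cols * cw, h = rows * ch;

    printf("server backing store, 1 bit deep : %ld bytes\n", w * h / 8);   /* 32400  */
    printf("server backing store, 8 bits deep: %ld bytes\n", w * h);       /* 259200 */
    printf("client keeping the characters    : %ld bytes\n", cols * rows); /* 1920   */
    return 0;
}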

|> and it requires the client
|> be able to respond in real-time... something that's just plain not
|> possible in UNIX.

  It requires the clients to be able to respond in a reasonable amount of
time.  Funny, I never noticed any problem with X just because the clients
can't "respond in real-time", and I work under it almost exclusively, without
backing-store and save-under on my windows.

|> It makes as much sense to have "xman" handle update
|> events as it does to have "cat" do erase and kill processing.

  Fiddlesticks.  It's more efficient memory-wise, and where other problems
make it less efficient (e.g. the 2400 baud SLIP you mentioned above), it's
possible to get the server to do it.  So you win either way.

-- 
Jonathan Kamens			              USnail:
MIT Project Athena				11 Ashford Terrace
jik@Athena.MIT.EDU				Allston, MA  02134
Office: 617-253-8085			      Home: 617-782-0710

peter@ficc.ferranti.com (Peter da Silva) (11/01/90)

In article <3341@tantalum.UUCP> cheeks@edsr.eds.com writes:
> So, there's my two cents.  I guess what I'd like to see is the OS vendors
> incorporating more up-to-date algorithms in some of the old utilities
> (egrep and awk seem to be the biggest offenders).

I'd be satisfied if one of them went through their source and whenever
they have !fprintf(stderr, "Can't open %s\n", file);! replaced it with
!perror(file)!. It'd be nicer if they'd apply some smarts to the problem
but even a brain-dead fix like this one would be wonderful!
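
In other words (a toy example, nobody's actual source):

/* errmsg.c -- the difference in miniature. */
#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(int argc, char **argv)
{
    const char *file = (argc > 1) ? argv[1] : "no-such-file";
    FILE *fp = fopen(file, "r");

    if (fp == NULL) {
        int saved = errno;       /* save it before making more stdio calls */

        /* What you usually get:  "Can't open no-such-file" -- but WHY not? */
        fprintf(stderr, "Can't open %s\n", file);

        /* The brain-dead fix:  "no-such-file: No such file or directory"   */
        errno = saved;
        perror(file);

        /* A little smarter: keep the verb *and* the reason.                */
        fprintf(stderr, "Can't open %s: %s\n", file, strerror(saved));
        return 1;
    }
    fclose(fp);
    return 0;
}
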
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
peter@ferranti.com

rcd@ico.isc.com (Dick Dunn) (11/01/90)

kraig@biostr.biostr.washington.edu (Kraig Eno) writes:
> Excuse me, I'm fed up.  If you are one of the world's many purveyors of 
> fat software systems, please take these comments to heart; otherwise, 
> ignore my ranting.

A sympathetic reply from the (not always fat) purveyor side:  First, why
would you have the other folks ignore your ranting?  You might note that,
although there's plenty of fat software being made, it's being sold.  It
does not languish on the shelves.  Address yourself not only to those who
make it, but to those who buy it as well, and who vote with their dollars
for the bloat.

> ...William Demmer, says "The rate of technological change is 
> accelerating beyond what we expect will be the world's ability to absorb 
> it.  That puts the burden back on the suppliers to help figure out how to 
> use that capability." Examine that second sentence.  That's right, the 
> user is so overwhelmed with computing power...[etc]

Also note the discrepancy between "absorb" in the first sentence and "use"
in the second.  It's as if there's some danger in having resources we're
not using!  (Do they get stale?:-)

> ...Why, I ask you?  Because the seller wants MONEY. And if the 
> seller can't think of a legitimate use for the resources, they will merely 
> bog their CPUs down doing useless things, because it sells the product.

Not even that complicated:  If the seller can't think of a use (legitimate
or otherwise) for the resources, there will be no reason for the new
product.  It's no different than the way a company puts the same old
laundry soap (cereal, whatever) in a wider, taller, shallower box and
shouts "New! Improved!"  It's the American way, son.

> ...Software companies continually produce bigger, slower packages, 
> and the entire industry is locked into this struggle of software "taking 
> advantage" of faster hardware, and hardware vendors trying to keep pace 
> with resource-hogging software...

I think it is wrong to say that the hardware vendors are trying to keep
pace.  They're way ahead of us.  I know of one recent egregious exception,
where a massive hardware improvement (in the "second try" at a product
line) has been nearly completely destroyed by software--but that was
stupidity above and beyond the call of duty on the part of the software
folks, and it's really not that common.  Mostly the sparkies give us
improvements faster than we can waste them.

No, instead what happens is that software folks, after some past decades
of being "against the wall" with demands increasing much faster than hard-
ware capability, have gone through a decade or so of hardware advancing
faster than we can figure out what to do with it.  Face it, we just do not
know how to manage software growing at memory-capacity rates (a factor of
four every few years) or CPU speeds (a factor of two per year)!  It's like
pushing hard against a door, when someone suddenly opens it and you fall
through...the result is not a graceful entry to the next room.

But, of course, this hasn't stopped people from trying to "absorb" all that
excess capacity.  So what happens to it?  It must go somewhere, right?

FEATURES!!!

Let's add features.  Let's goop up the system with stuff we've needed...and
then stuff we've wanted, and stuff that somebody once said he wanted, and
stuff that somebody might want, and even stuff that nobody wants but that
might sell a system.  Let's add features like there's no tomorrow. The
hardware will support it.  We've got the memory, CPU cycles, and disk.

Unfortunately, we don't have the software technology to manage the
complexity begotten of this feature-madness.  It's not even clear if we
should want to be able to manage the monsters we're creating, because
they're going to be incomprehensible no matter what, to *every*one--the
designer, the implementor, the maintainers (poor souls, and they're most of
the software world), and most importantly, the end user.  There's just Too
Much Crud.

But wait...you were railing against software bloat, and I say it's coming
from features...why do we keep adding features?  Why keep making our jobs
harder?

BECAUSE IT SELLS!!!

Yes, say what you will about the nasty software vendors, they're creating
what they can sell.  Start with a system X that's got n bullet-list
features in it.  Now create a system Y that's got n+1 features but is 10%
slower and larger, and create a system Z that's got n-1 features but is 10%
faster and smaller.  Take 'em to market; Y will out-sell Z by 5 to 1!  I
*hate* that (because I'd rather use--let alone maintain--system Z).  But I
can't ignore it.

Vendors make bigger, slower systems because that's what happens when you
keep adding features.  People buy features.  In the realm of software,
very few people buy performance.  (The only way to sell performance is to
find a huge performance win and sell it as a feature.:-)

> The software mafia is even trying to convince us that DOS machines should 
> have windows.

Come on, the DOS world has been adding mostly chrome for five years.  The
things that sell are flash--color, cute graphics, and such.  The window
game is just another big piece of chrome.

Or look at it this way:  The window systems are "packaging."  They are to
software systems what the bold logo and fluorescent colors are to a box of
laundry detergent.

> Can someone convince me that all the world NEEDS windows, bitmapped 
> graphics, image processing, DTP, virtual memory, and PostScript?...

Let's not throw out the baby with the bathwater.  Consider some of them:
  - bitmapped graphics?  A standard glass tty is using bitmaps of
    characters; all raster devices do that.  Just taking some bits out
    of RAM instead of a character-generator ROM is no big deal.  It's when
    you pile a megabyte of ill-conceived code on it that you've got problems.
  - virtual memory?  That may be the longest piece of rope ever given to
    software people.  So what if some of them tie nooses with it?  VM can
    make the job a lot easier for lots of programmers--we don't have to
    keep figuring out nits of memory handling.  Trouble starts when people
    treat VM as real, and use it as if there were always real memory under-
    neath.  VM isn't the devil; it's just subject to abuse on a galactic
    scale.  (I could digress here that part of the problem comes from the
    feature madness driving us to employ ever-more-mediocre programmers,
    such that the few folks who really *can* think get stuck doing main-
    tenance 'cause they're the only ones who can understand the mess.)
  - PostScript?  I can't think of anything that's ever come along to make
    the task of dealing with a printer simpler, more regular, more trans-
    parent.  I need only think of one interface.  I don't have to deal with all
    those @&^%!! escape sequences that I can't read.  I don't have to
    stumble through ninety-leven unnecessary limitations.  I don't have to
    worry about the printer's resolution, or which way it thinks is up, or
    how it counts.  I just say "put this stuff there" and it works.  For
    once, I can change printers without rewriting the back end of every
    printing program that does more than trivial text.  Sure, it's expen-
    sive...but at least this one buys us all something.
The ones I didn't mention, I agree you can toss.  For the ones I did
mention, the two-bit summary is that they're tools which happen to be
easily misused.

> ...can someone tell me why every operating system upgrade in 
> the history of computing is BOTH bigger AND slower?...

You can guess my answer:  more features.

> What is the solution?  I put forward a few simple requests.
> 
> DON'T make every application do everything under the sun.  Do the 
> essential operation, do it well, and make it efficient!

Doesn't sell.  Hey, I agree with you completely, but it g* d* absof*lutely
*doesn't* sell!  Look at UNIX as it was a decade ago.  UNIX was made up of
a small, elegant kernel, plus a large collection of small applications,
each of which did a few things very well.  You put together tools from
this neat toolkit, and they just worked.

Now look at UNIX today, and look at what people are putting on top of it.
There's lots of "stuff" to sort through, but when you get to the bottom of
it, what you find is that the big sellers are...applications that do every-
thing under the sun!  Why?  Can't get anyone to think how to use what's
there.  They want it all fitted out for them; if they want four, they're
not willing to put two and two together.  In the harshest way of stating
it, they're buying the software bloat so they don't have to think.

Over in comp.unix.sysv386, within a two-week period we had people post
programs to solve two separate problems, each of which could have been
solved with one- or two-line scripts.  And these folks are supposed to be
technoids!  It's certainly not a lack of industriousness, but there's sure
an overwhelming intellectual laziness.  ("think for 30 seconds" loses out
to "code/test for half an hour"???)

At the level of the most inept user, it's vastly worse.  He wants to be
guided through every possibility; he wants choices maximally constrained so
he can only choose what makes sense and even then doesn't have to choose
among too many things.  This is hard!  And it's hard to make it convenient!
It's one thing to build a toolbox full of the right selection of quality
tools; it's much harder to make tools which stand up and say things like
"perhaps I can help?  I'm a saw; I will cut things into pieces along
straight lines.  Perhaps what you need is one of these, except cut to a
different shape?"  So we end up adding goo.  Some of the goo is there to
make it look "user friendly."  Some of it is to try to organize information
on the screen so that someone who doesn't know what he's doing can act as
if he does.  A lot of goo is devoted to trying to present too much infor-
mation--all the help that shouldn't be necessary--in a tiny area of screen,
in a way that might be useful to someone who isn't paying attention.

> Look at the resources demanded by the bare application, then look at what 
> it requires with all the bells and whistles added.  Then ask yourself if 
> the gadgets are worth the mind-boggling amount of CPU time and RAM that 
> are spent on them.  If you think you need them to sell your product, then 
> spend less on marketing and don't be so greedy.

Sorry, wrong.  Leave out the bells and whistles, and you'll be an obit
entry in next month's business section.  And the fewer bells and whistles
you have, the *more* marketing you need!

My challenge to you, speaking as software-producer to software-user, is to
tell us how to teach everyone that feature-madness is doing us all in.
Figure out how to change things so that chrome-plated bloat doesn't sell
(or even so that a svelte system *will* sell).  Tell us how to sell some
nice, simple, organic software without MSG/BHA/BHT/artificial-color/
artificial-flavor/emulsifiers/thickeners/stabilizers, in a plain brown
wrapper.

> And finally, one for the non-programmer.  Quit bellyaching!  I know one 
> company which insists on printing 1000 envelopes through a PostScript 
> printer for mass mailings.  And they complain because it doesn't work 
> well.  Of course it doesn't work well!  You are putting characters on 
> envelopes.  You don't need PostScript, you don't need a Mac, and you don't 
> need a network!...

I saw someone counter this one with an argument about making nice-looking
envelopes.  There was a good point hiding underneath his retort, namely
that (alas) style wins over substance.  But still, even to make it look 
ultra-spiffy, what you do is get some transparent label stock, use the fancy 
envelopes, and write the little program or script to generate the labels.
You get about a 30-fold improvement in speed 'cause even a PostScript
printer is limited by engine speed for labels, and there's about 30 labels
on a page.  I guarantee it can chunk 'em out as fast as you can stick 'em
on envelopes...and the script to generate the PostScript will be about the 
size of the 20-line C program to spit out labels anyway.  I've got several of these 
little 20-30 line scripts to churn out mailing lists, custom fancy
letterhead envelopes, whatever.  PostScript ain't the culprit.  But if you
have it around for other reasons, you can use it right instead of being
ultra-dumb about it.
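
For concreteness, such a generator looks about like the sketch below.  The
geometry constants are a guess at the common 30-up sheets (1" x 2.625",
3 across, 10 down), and real code would also have to escape the characters
( ) \ inside the PostScript strings.

/* pslabels.c -- read addresses (one per record, blank lines between
 * records) and emit PostScript that lays them out 3 across and 10 down.
 * Usage:  pslabels < addresses.dat | lp
 */
#include <stdio.h>
#include <string.h>

#define COLS     3
#define ROWS     10
#define LABEL_W  189.0      /* 2.625 in, in points                */
#define LABEL_H   72.0      /* 1 in                               */
#define MARGIN_X  18.0      /* left edge of the first column      */
#define TOP_Y    756.0      /* top of the first row (0.5 in down) */
#define LEADING   12.0      /* baseline-to-baseline spacing       */

static int label = 0;       /* which label on the current page, 0..29 */

static void end_label(void)
{
    if (++label == COLS * ROWS) {      /* the page is full */
        puts("showpage");
        label = 0;
    }
}

int main(void)
{
    char line[256];
    int ln = 0;                        /* text line within this label */

    puts("%!PS-Adobe-2.0");
    while (fgets(line, sizeof line, stdin) != NULL) {
        line[strcspn(line, "\n")] = '\0';
        if (line[0] == '\0') {         /* blank line: finish the label */
            if (ln > 0) { end_label(); ln = 0; }
            continue;
        }
        if (ln == 0 && label == 0)     /* first text on a fresh page   */
            puts("/Courier findfont 10 scalefont setfont");
        printf("%.1f %.1f moveto (%s) show\n",
               MARGIN_X + (label % COLS) * LABEL_W + 9.0,
               TOP_Y - (label / COLS) * LABEL_H - 14.0 - ln * LEADING,
               line);
        ln++;
    }
    if (ln > 0) end_label();           /* record without a trailing blank line */
    if (label > 0) puts("showpage");   /* flush a partly filled sheet  */
    puts("%%EOF");
    return 0;
}
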
_ _ _ _ _

For my parting shot...I found some stuff recently that made me feel better,
in a bittersweet sort of way.  I bought copies of the "UNIX Research Sys-
tem - 10th Edition" books.  Those are the reference manuals for the system
they're using in the research group at Bell Labs.  One can find out that
the kernel on that OS has only grown a factor of two or so in the past
decade--not the factor of ten or more of other kernels.  The system is
still described by a couple of convenient-sized books, not a shelf'o'
dead-trees.  It retains the "toolkit" feeling.  Yet it's a modern system.

So what's the bitter half of that?  It's unlikely it will see the light of
day.  Why not?  It's a research tool for one group; OK, fine.  But it's
also unlikely that a system *like* it would ever become a product.  Why
not?  Because it doesn't have the...
  **************
<<**FEATURES!!**>>
  **************
-- 
Dick Dunn     rcd@ico.isc.com -or- ico!rcd       Boulder, CO   (303)449-2870
   ...but Meatball doesn't work that way!

al@escom.com (Al Donaldson) (11/02/90)

Dick,

Excellent description of the state of the market.

> Start with a system X that's got n bullet-list...
>
People make decisions based on bullet lists.  OK, you expect it when
Fred Flintstone goes to the store to buy a dessert topping (Hmmmm.
this one is a dessert topping AND a floor polish...) but you somehow
expect more when large corporations make decisions.  But they're
the worst.

> Vendors make bigger, slower systems because that's what happens when you
> keep adding features.  People buy features.  In the realm of software,
> very few people buy performance.
>
Bloat isn't just confined to software; our hardware cousins are just as bad.
I once worked on a secure LAN project where our goals were to keep it
simple and fast, in addition to secure.  But our hardware designers would
stay up at night trying to find interface chips that had extra FEATURES on
them so that they could add extra logic to the board to use those features
which would require an n+1 layer board instead of an n layer board.
Well, someone might need to use that feature sometime or other...

> Look at UNIX as it was a decade ago.  UNIX was made up of
> a small, elegant kernel, plus a large collection of small applications,
> each of which did a few things very well.  You put together tools from
> this neat toolkit, and they just worked.
>
For a more current example, look at MINIX for the PC (AT, etc..) family.
Andy Tanenbaum designed it to be small and simple, but there have been 
calls to rebuild it to handle large-model applications (larger than 64K+64K).
Why?  So people can run G-this and X-that, applications that never were
dreamed of for Version 7.

> They want it all fitted out for them; if they want four, they're
> not willing to put two and two together.  In the harshest way of stating
> it, they're buying the software bloat so they don't have to think instead.
>
I once worked on a network interface for an unnamed government organization
where the spec required an incoming packet to be processed and transmitted 
in 21 millisecs.  Hard requirement, they said.  Plus you had to have a 50% 
reserve capacity built in, so that meant doing the job in 14 msec.  Why?  
Well, that was the maximum clock rate on the UARTs (224Kbps), and their 
hardware didn't support queueing.  It never occurred to these people to break 
the line up into four 56Kbps lines, which would have been duck soup from 
a performance point of view.  So the government and all their hired guns 
(Mitre and Aerospace) and the contractor and subcontractors all spun their 
wheels for a year trying to figure out if a certain CPU could do the job 
(with a 50% safety margin!!) instead of just building the damn system and 
buying three extra cables if necessary.  So with all the studies and 
alternate proposals that were done, the actual cost for the contractor was 
something like $20 million instead of the bid price of $5 million.  And that 
didn't count all the green stamps that the government and its hired guns 
spent looking over our shoulder.  Well, not MY shoulder.  I finally had 
enough and left.  

Al

peter@ficc.ferranti.com (Peter da Silva) (11/02/90)

In article <1990Nov1.002513.8984@ico.isc.com> rcd@ico.isc.com (Dick Dunn) writes:
> Not even that complicated:  If the seller can't think of a use (legitimate
> or otherwise) for the resources, there will be no reason for the new
> product.  It's no different than the way a company puts the same old
> laundry soap (cereal, whatever) in a wider, taller, shallower box and
> shouts "New! Improved!"  It's the American way, son.

Yeh, but the soap inside still has the same user interface.

Here's another challenge: figure out how to make that damn wrapper into
a real wrapper, so it uses all those nice tools instead of replacing
the soap with an integrated-showerhead-soap-dispenser-system. It's not
that hard, really. AT&T did it on the 3b1 in 1983 or 1984, without all
the bells and whistles.

If you can write window applications using shell scripts, they will come.
Just look at how much stuff is being done with Hypercard. It's basically
the same thing, just with a crummy script language.
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
peter@ferranti.com

seanf@sco.COM (Sean Fagan) (11/02/90)

In article <1990Nov1.002513.8984@ico.isc.com> rcd@ico.isc.com (Dick Dunn) writes:
>Sorry, wrong.  Leave out the bells and whistles, and you'll be an obit
>entry in next month's business section.  And the fewer bells and whistles
>you have, the *more* marketing you need!

True.  And this *is* a problem!  If I can:  Xenix 2.3 (for the '386) was a
nice little OS.  The OS fit on 4 or 6 floppies, I believe (for SCO-familiar
people, the X and N disks).  It ran quite well in 2Mb of memory (for 2
people or so), and with 4, it was ecstatic.

At the time we released Xenix 2.3, UNIX 3.2 was also being worked on and
released by other people (the exact timeframe for everything is a bit hazy,
but I'm sure hundreds of people will jump on me if I'm wrong 8-)).  Because
3.2 had more *features*, people wanted it!  Even if they were only going to
do the same thing that they were currently doing on the Xenix 2.2 systems,
or if 2.3 fit all of their needs, they wanted *3.2*.  Features, mah boy!

Stock 3.2 (from *any* vendor) is larger and slower than 2.3, and has a lot 
less history than Xenix does (at least on the Intel architecture).  So why
go with it?

(You already know the answer 8-).)

People who pay attention know the beating SCO got for coming out so late
with our version of UNIX 3.2.  We'd spent some time before getting XENIX 2.3
ready to be shipped, and in a good and usable state.

End of anecdote... 8-;

>My challenge to you, speaking as software-producer to software-user, is to
>tell us how to teach everyone that feature-madness is doing us all in.

One of the reasons I like Mach, or at least its potential, is that the
"features" everyone wants can be added as user-level programs.  That is,
they can be swapped out, run only when needed, removed from the disk if you
don't want it, etc.  (Some of the features everyone wants are built in, such
as the VM, task switching, etc.  Others, such as unix compatibility [which
unix? you ask 8-)] can be made separate modules, as can device drivers.
Including networking.)

Keep it small and simple, and try to allow for generalities.  One of the
nicest things to come from AT&T in recent history was the STREAMS stuff.  If
a STREAMS module could be added without reconfiguring the kernel (and
rebooting), and could optionally run in user-mode, it would be *wonderful*.
(As it is, it's only nifty-keen 8-).)

How does this fit into the thread?  Well, like programming, if you keep your
programs and/or utilities small and simple, there is less chance of things
breaking (or, when they do, you will [hopefully] have a better chance of
understanding and fixing the code).  By splitting the various services into
different programs, and documenting the interface, someone can decide to
throw in new features of their own, or replace existing ones (as people do
already with shells [ksh vs. csh vs. sh vs. bash etc]).

As Dick said (or implied, whatever), we need people working on "fixing" the
current code, to make the product better.  We also need people working on
adding features, to actually be able to sell it to the public.  And the
latter wins, a lot of the time...

Anyway, just my opinions, even if I am awfully vocal about them 8-).

-- 
-----------------+
Sean Eric Fagan  | "Quoth the raven,"
seanf@sco.COM    | "Eat my shorts!"
uunet!sco!seanf  |     -- Lisa and Bart Simpson
(408) 458-1422   | Any opinions expressed are my own, not my employers'.

esink@turia.dit.upm.es (Eric Wayne Sink) (11/02/90)

>If the Mac Plus were a sizzling fast
>machine, it wouldn't be a Mac Plus any more.

>Rahul Dhesi <dhesi%cirrusl@oliveb.ATC.olivetti.com>

This is worth quoting somewhere :-) !  I'm considering adding it
to my .sig !

Maybe later...

Eric W. Sink			Residence:	C/Brasil,4 - 9B
Departamento de Telematica			28850 Torrejon de Ardoz
Universidad Politecnica de Madrid		(Madrid) SPAIN
esink@turia.dit.upm.es				{011 341} 677-4429

rcd@ico.isc.com (Dick Dunn) (11/03/90)

al@escom.com (Al Donaldson) writes in response to my response...:

> Bloat isn't just confined to software; our hardware cousins are just as bad.
[Al includes some hardware-bloat horror stories]

I agree that bloat isn't confined to software, but I really disagree that
the hardware folks are as bad.

In particular, the CPU-makers have figured out just how big the wins can be
by going for simplicity--RISC processors easily beat CISCs.  They are
faster, move to new technology quicker, and have fewer lingering nasty
bugs.

For some time, I've wondered what the software equivalent of RISC could
be.
-- 
Dick Dunn     rcd@ico.isc.com -or- ico!rcd       Boulder, CO   (303)449-2870
   ...but Meatball doesn't work that way!

seanf@sco.COM (Sean Fagan) (11/04/90)

In article <1990Nov2.203059.13930@ico.isc.com> rcd@ico.isc.com (Dick Dunn) writes:
>For some time, I've wondered what the software equivalent of RISC could
>be.

Ah ha!

This was discussed, albeit briefly, at the summer USENIX in Anaheim (Dick,
you were there, didn't you see it?).  Anyway, the talk was, "Why isn't
software getting faster as fast as hardware," or something like that, and the
point about RISC was brought up.

One of the points mentioned was that a way to slow down an OS is to have lots of
context switches.  I agree; Mach is certainly worse in that respect than,
say, SysV.  However, just like the CISC vs. RISC debate, there's something
else to note: Mach has *quicker* context switches than generic Unix (and
that's why a BSD program, given the same hardware, only different OS's, will
run faster under Mach than SunOS [study done at CMU]).

Of course, the CFOS% designers can make their context switch times as fast
as the RFOS% people can, but the RFOS people will just keep getting
faster....  (Now, I *know* I've heard that before... 8-) 8-))

------
% CFOS = "Complex Featured Operating System," RFOS = "Reduced Feature
Operating System"
------

-- 
-----------------+
Sean Eric Fagan  | "*Never* knock on Death's door:  ring the bell and 
seanf@sco.COM    |   run away!  Death hates that!"
uunet!sco!seanf  |     -- Dr. Mike Stratford (Matt Frewer, "Doctor, Doctor")
(408) 458-1422   | Any opinions expressed are my own, not my employers'.

Richard.Draves@CS.CMU.EDU (11/06/90)

> Excerpts from netnews.comp.misc: 4-Nov-90 Re: A tirade about ineffici..
> Sean Fagan@sco.COM (1461)

> One of the points mentioned was that a way to slow down an OS is to have
> lots of context switches.  I agree; Mach is certainly worse in that respect
> than, say, SysV.  However, just like the CISC vs. RISC debate, there's
> something else to note: Mach has *quicker* context switches than generic
> Unix (and that's why a BSD program, given the same hardware, only different
> OS's, will run faster under Mach than SunOS [study done at CMU]).


Why do you think Mach has more context-switches than SysV?  I think the
reason Mach generally does better than SunOS or Ultrix has very little
to do with context-switch performance.  In general, important operations
like fork/exec are faster, and the VM system does a better job of
caching data.
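
For what it's worth, fork/exec is just as easy to eyeball.  This is my own
back-of-the-envelope sketch, not the measurement referred to above;
/bin/true and the loop count are arbitrary stand-ins.

/* Crude fork/exec timer: repeatedly fork, exec /bin/true, and wait. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/wait.h>

#define N 100

int main(void)
{
    struct timeval start, end;
    int i;

    gettimeofday(&start, NULL);
    for (i = 0; i < N; i++) {
        pid_t pid = fork();

        if (pid == -1) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            execl("/bin/true", "true", (char *)NULL);
            _exit(127);                 /* exec failed */
        }
        wait(NULL);
    }
    gettimeofday(&end, NULL);
    printf("%.2f msec per fork/exec/wait\n",
           ((end.tv_sec - start.tv_sec) * 1e3 +
            (end.tv_usec - start.tv_usec) / 1e3) / N);
    return 0;
}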

Rich

anton@bkj386.uucp (Anton Aylward) (11/06/90)

In article <8539@scolex.sco.COM> seanf (Sean Fagan) writes:
>
>One of the reasons I like Mach, or at least its potential, is that the
>"features" everyone wants can be added as user-level programs.  That is,
>they can be swapped out, run only when needed, removed from the disk if you
>don't want it, etc.  (Some of the features everyone wants are built in, such
>as the VM, task switching, etc.  Others, such as unix compatibility [which
>unix? you ask 8-)] can be made separate modules, as can device drivers.
>Including networking.)

Huh? You can do this already with UNIX !!
Why SCO puts everything in the kernel I don't know.
Some things have to be there, yes, but....

	The auto-logout is done as a user-domain daemon on most systems,
	SCO seem to have it in the kernel.

	Networking is done in the user domain on ATT boxes and ATT derived 
	UNIX.  SCO went the berkeley way and put it in the kernel.

Now correct me if I'm wrong, but isn't it a whole lot harder to page 
the kernel than it is to page a user process ?
Doesn't this mean that the bloated kernel is probably resident ?
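
To show how little the auto-logout case actually needs from the kernel,
here is a rough user-level sketch.  The 30-minute limit and taking the tty
on the command line are just placeholders of mine; a real daemon would walk
utmp and signal the owning process group.

/* idlecheck: report a tty that has been idle too long.  stat(2) is all
 * the kernel support required -- st_atime advances when the user types. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <time.h>

#define IDLE_LIMIT (30 * 60)            /* 30 minutes, arbitrary */

int main(int argc, char *argv[])
{
    struct stat st;
    time_t now = time(NULL);

    if (argc != 2) {
        fprintf(stderr, "usage: idlecheck /dev/ttyXX\n");
        return 1;
    }
    if (stat(argv[1], &st) == -1) {
        perror(argv[1]);
        return 1;
    }
    if (now - st.st_atime > IDLE_LIMIT)
        printf("%s idle %ld seconds -- a daemon would log it out here\n",
               argv[1], (long)(now - st.st_atime));
    return 0;
}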

>Keep it small and simple, and try to allow for generalities.  One of the
>nicest things to come from AT&T in recent history was the STREAMS stuff.  If
>a STREAMS module could be added without reconfiguring the kernel (and
>rebooting), and could optionally run in user-mode, it would be *wonderful*.
>(As it is, it's only nifty-keen 8-).)

As I understand it, it can, as long as you have the "head" and the 
"tail" ends alredy in there.   I've used 3B machines with almost 
all the comms (network, tty etc) in user space.
I see no reason why the disk strategy and drivers as well as the buffers
couldn't be done this way.
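
For the record, pushing a module onto an existing stream already needs no
kernel reconfig and no reboot.  Something like this works on a System V box
with <stropts.h> (the module name here is only an illustration; any module
the system already knows about will do):

/* Push a STREAMS module onto an open stream at run time, then pop it. */
#include <stdio.h>
#include <fcntl.h>
#include <stropts.h>

int main(void)
{
    int fd = open("/dev/tty", O_RDWR);

    if (fd == -1) {
        perror("open");
        return 1;
    }
    if (ioctl(fd, I_PUSH, "ldterm") == -1) {    /* stack the module */
        perror("I_PUSH");
        return 1;
    }
    /* ... use the stream ... */
    ioctl(fd, I_POP, 0);                        /* and remove it again */
    return 0;
}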

>How does this fit into the thread?  Well, like programming, if you keep your
>programs and/or utilities small and simple, there is less chance of things
>breaking (or, when they do, you will [hopefully] have a better chance of
>understanding and fixing the code).  

Right.  Like "old" UNIX before G-XXX and X-XXXX.
Shell type things, like K&R & Pike espoused.
Before Berkeleyisms bloated "cat" by adding the "harmful" -v.

>By splitting the various services into
>different programs, and documenting the interface, someone can decide to
>throw in new features of their own, or replace existing ones (as people do
>already with shells [ksh vs. csh vs. sh vs. bash etc]).

Gee, sounds like Object something or other to me.
Maybe I've been OOP-ing along in shell for the last 12 years
without realising it?

/anton aylward		Analysis Synthesis Consulting Inc.
			anton@analsyn.uucp
			12 Years with UNIX

paul@frcs.UUCP (Paul Nash) (11/08/90)

seanf@sco.COM (Sean Fagan) writes:

> [ .. ]  If I can:  Xenix 2.3 (for the '386) was a
>nice little OS.  The OS fit on 4 or 6 floppies, I believe (for SCO-familiar
>people, the X and N disks).  It ran quite well in 2Mb of memory (for 2
>people or so), and with 4, it was ecstatic.

This is one of the areas where Minix is even nicer -- it can run
quite happily on a two-floppy machine, has transparent networking,
comes with full source and so on ...

There is (apparently) a '386 version, but the 8086 _and_ 80286
versions together cost US $150!

Viva Andy Tanenbaum!
-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 Paul Nash			 Flagship Wide Area Networks (Pty) Ltd
  paul@frcs.UUCP			  ...!ddsw1!proxima!frcs!paul

pcg@cs.aber.ac.uk (Piercarlo Grandi) (11/09/90)

On 29 Oct 90 23:27:33 GMT, stachour@sctc.com (Paul Stachour) said:

stachour> Putting things in applications that belong in the OS is one of
stachour> the worst forms of development known.  Every application
stachour> developer puts them in.  And the cost is enormous.  And the
stachour> speed is horrible, because no-one has enough time to check
stachour> performance and optimize correctly.  And the reliability is
stachour> horrible because it's not checked and reused.

Seemingly true, IMNHO.

stachour> Done correctly, putting it in the OS (and paging it in when
stachour> needed, so that all applications don't pay for a feature they
stachour> don't need) is the performant, correct, consistent way.  And
stachour> it's the cheapest.

Totally false, IMNHO.

Here we have another example of a good misunderstanding. OSes should not
be about features, but about multiplexing, as somebody says of MACH in
another article, lauding its ability to put features in user-level
programs -- an OS should be the universal glue, i.e. not a function, but
a functional.

Programs are really made of two types of stuff: functions and
functionals (features and feature combinators) in alternating layers.

UNIX got it right in perspective, but with the wrong flavour: it is
(was...) an IO multiplexor, which is a poor paradigm for universal glue.
Multics got it better: it was a library multiplexor, which is far more
flexible. PLAN 9 got it wrong again: it is a filesystem multiplexor,
which is a better uniform referent than files, but still unnatural.
System V is a streams/filesystem multiplexor, BSD is a virtual circuit
multiplexor, and IPC connections are a better choice. Amoeba, MACH,
etc... are nearly the right stuff: they are full capability
multiplexors.

Yet even the UNIX way, using pipes (virtual files) as glue, with all its
in-built limits, has demonstrated that this is the way to go.

OSes and shells are the functionals; libraries are the functions. An OS
should provide abstraction mechanisms, not collections of features. User
programs should be written as composition of function libraries, not
collections of features.
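
A bare-bones illustration of the distinction, in C rather than shell: the
OS contributes only the combinator (pipe/fork/exec), while "ls" and "wc"
are interchangeable functions.  This is just the shell's `ls | wc -l'
spelled out by hand.

/* The OS as glue: build the pipeline ls | wc -l explicitly. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];

    if (pipe(fd) == -1) {
        perror("pipe");
        return 1;
    }
    if (fork() == 0) {                  /* producer */
        dup2(fd[1], 1);                 /* stdout -> pipe */
        close(fd[0]); close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        _exit(127);
    }
    if (fork() == 0) {                  /* consumer */
        dup2(fd[0], 0);                 /* stdin <- pipe */
        close(fd[0]); close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        _exit(127);
    }
    close(fd[0]); close(fd[1]);
    while (wait(NULL) > 0)              /* reap both children */
        ;
    return 0;
}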

The alternative is not just between monolithic operating systems and
monolithic applications -- if it were, you would be right that monolithic
operating systems are better than monolithic applications, but the
problem isn't really shaped that way.

It is the lack of correct conceptual structure that causes bloat,
as much as the endless quest for features.
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

guy@auspex.auspex.com (Guy Harris) (11/14/90)

>	Networking is done in the user domain on ATT boxes and ATT derived 
>	UNIX.  SCO went the berkeley way and put it in the kernel.

As opposed to AT&T who, with all the TLI stuff in S5R3 and S5R4...

...went the Berkeley way and put it in the kernel.

I shall assume that by "networking" you mean "network protocols up to
and including the transport layer"; most protocols above that layer are,
in UNIX systems, done in user mode.

The generic statement "Networking is done in the user domain on ATT
boxes and ATT derived UNIX" is false.  There may be some implementations
of networking on some AT&T machines and some AT&T-derived flavors of
UNIX (although bear in mind that 4.[123]BSD is AT&T-derived UNIX...)
that do their networking in user mode, but not *all* of them do.

>I see no reason why the disk strategy and drivers as well as the buffers
>couldn't be done this way.

As long as you don't have to use the disk strategy and drivers to page
the disk strategy and drivers into memory....  (You might have the disk
driver for the disk containing other disk drivers wired down.)