[comp.unix.wizards] readline bashing

mjr@hussar.dco.dec.com (Marcus J. Ranum) (04/04/91)

In article <564@bria> uunet!bria!mike writes:

>Robert is quite right.  Quite often, the overhead incurred (memory usage,
>CPU hogging, and frustrated users) is not worth the minimal advantage of
>having 1001 inline editing capabilities, most of which go unused.

	This is something that's always amazed me - I suspect that if
formal studies were done, we'd find that only a small fraction of the
"nifty functionality" that gets added to applications is ever used. Does
anyone have pointers to any real research in this area? Has anyone done
any studies about, say, what amount of the average editor's command
set is used (10%? 15%?) or - the average window manager's? How much
useless code is actually out there - has anyone even tried to measure
it?

	I seem to recall reading someplace that the original UNIX
accounting utilities were also used as a tool to feed back what
commands were consuming how many resources, versus how much they
were being used, etc. Does anyone still do this? Does anyone *DARE*!?
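
	(For the morbidly curious: a crude version of this is easy to
throw together. The sketch below is untested Python, and assumes only
that your lastcomm(1) prints one record per line, command name first,
with the CPU time as "N.NN secs" somewhere on the line; it tallies how
often each command gets run against how much CPU it eats.)

    #!/usr/bin/env python
    # usage: lastcomm | python tally.py
    # Crude feedback from process accounting: which commands are run
    # often, and which ones hog the CPU?  (Record format varies
    # between systems; adjust the parse for your lastcomm.)
    import re
    import sys
    from collections import defaultdict

    runs = defaultdict(int)    # command -> number of invocations
    cpu = defaultdict(float)   # command -> total CPU seconds

    for line in sys.stdin:
        fields = line.split()
        m = re.search(r'([\d.]+) secs', line)
        if not fields or not m:
            continue           # not an accounting record; skip it
        runs[fields[0]] += 1
        cpu[fields[0]] += float(m.group(1))

    # Sort by CPU consumed so the rarely-run hogs stand out.
    for cmd in sorted(cpu, key=cpu.get, reverse=True):
        print("%-16s %6d runs %10.2f cpu-secs" % (cmd, runs[cmd], cpu[cmd]))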

	A good friend of mine has this theory that computers today
are really no more useful than the woefully "obsolete" ones we see
in the computer museum - by the time you factor in the amount of
sheer gunk they're wasting their time doing (painting nifty-keen 3-d
widgets, etc, etc, etc) and the sheer human cost of *understanding*
all that gunk, they are no faster, no more cost effective, and no
more capable at doing "real work" than they used to be. Of course,
that's an utterly insane argument, isn't it?

mjr.

--
the preceding was the personal opinion of the author only.

ed@mtxinu.COM (Ed Gould) (04/05/91)

>	A good friend of mine has this theory that computers today
>are really no more useful than the woefully "obsolete" ones we see
>in the computer museum - by the time you factor in the amount of
>sheer gunk they're wasting their time doing (painting nifty-keen 3-d
>widgets, etc, etc, etc) and the sheer human cost of *understanding*
>all that gunk, they are no faster, no more cost effective, and no
>more capable at doing "real work" than they used to be. Of course,
>that's an utterly insane argument, isn't it?

If you have all that gunk in there - particularly if you have it
embodied in one of the current implementations like X or DECwindows -
it's true.  Today's machines don't get any more real work done
than did yesterday's.  But they sure work harder at getting it
done.

There are, however, alternatives.  Some are available; others are
proprietary, but we would do well to learn from them.  The two I
think of in particular as I write this are MGR, written at Bellcore
and available, and the Plan 9 stuff at Bell Labs.  Both of those
window systems have more than adequate graphics capabilities, but
don't eat their server alive (except, of course, where the server
is the terminal and is designed to be consumed by the window system).

Some of us remember getting real work done on PDP-11s.  (Actually,
some of us got real work done even before there *was* a PDP-11.)
That's right, 64KB address spaces (128KB if you were lucky and had
an 11/45 or 11/70, and had a good balance between code and data),
about three-quarters of a MIPS at best, and we shared them with
others.  Yup - they were usually timesharing machines.  We got as
much done with an ASCII terminal connected to a well-configured
11/70 shared with 20 users as on a one-user workstation today.  We
worked a bit harder, not having multiple windows and all that, but
we got a lot done.  Of course, the kernel fit into one (well,
actually, two on the 11/70) of those 64KB address spaces back then,
too.

I know that there are a lot of things that legitimately need more
than 64KB (even the world's smallest fully-functional Unix kernel -
the one at Bell Labs research - is larger), but most of the things
that take more than that do so because they're bloated with excess
goo, badly coded, or - most likely - both.

-- 
Ed Gould			No longer formally affiliated with,
ed@mtxinu.COM			and certainly not speaking for, mt Xinu.

"I'll fight them as a woman, not a lady.  I'll fight them as an engineer."

bzs@world.std.com (Barry Shein) (04/06/91)

	"More and more pixels doing less and less work"

		-someone

-- 
        -Barry Shein

Software Tool & Die    | bzs@world.std.com          | uunet!world!bzs
Purveyors to the Trade | Voice: 617-739-0202        | Login: 617-739-WRLD

ken@dali.cc.gatech.edu (Ken Seefried iii) (04/06/91)

In article <BZS.91Apr5121703@world.std.com> bzs@world.std.com (Barry Shein) writes:
>	"More and more pixels doing less and less work"
>		-someone

someone == Rob Pike, if I'm not mistaken (and often am...;')

--
	 ken seefried iii	ken@dali.cc.gatech.edu

	"If 'ya can't be with the one you love, 
		   honey, love the one you're with..."

mjr@hussar.dco.dec.com (Marcus J. Ranum) (04/06/91)

ed@mtxinu.COM (Ed Gould) writes:

>I know that there are a lot of things that legitimately need more
>than 64KB (even the world's smallest fully-functional Unix kernel -
>the one at Bell Labs research - is larger), but most of the things
>that take more than that do so because they're bloated with excess
>goo, badly coded, or - most likely - both.

	One fellow sent me mail in response to my earlier posting that
was very thought-provoking. I can't recall exactly what his phrase was,
but the gist of it was:

We are running CISC software on our RISC machines.

	In fact, the analogy is quite interesting - I've never been a
"hardware guy", but didn't RISC computing evolve from studies showing,
in effect, that only a small core set of instructions was ever needed,
and that RISC architectures would be easier and cheaper to build, and
that they got the job done just as well?

	Does anyone have any pointers to papers on the original CISC vs.
RISC comparisons and theory that a hardware illiterate can understand?
I'm kind of intrigued by all this - I'm sure some interesting metrics
could be generated for software - what portions of a large windowed
application's user interface are actually used? How much of GNU Emacs
is actually used, etc, etc, etc.

	The computer-customer community seems addicted to "features"
(without reasoning why - those 3d borders *are* c00l) but there might
be an interesting niche for software that has the parts that are used
and omits the glop. Of course, it'd be a real problem getting a machine
with RCS (Reduced Complexity Software) to fairly benchmark against a
machine with CIS (Complexity Inflated Software).

mjr.
--
    The deadliest bullshit is odorless and transparent.
                   - William Gibson   1988

gwyn@smoke.brl.mil (Doug Gwyn) (04/07/91)

In article <1991Apr5.072447.4432@mtxinu.COM> ed@mtxinu.COM (Ed Gould) writes:
>We worked a bit harder, not having multiple windows and all that, but
>we got a lot done.

All that is quite true.  However, having gotten used to spiffy user
interfaces I'm no longer sure I could be productive if forced to
revert to old methods.  One's personal standards change based on
experience.

>... most of the things that take more than that do so because they're
>bloated with excess goo, badly coded, or - most likely - both.

Don't forget another possibility, which is lack of integration in their
design.  What makes systems like the original UNIX and Plan 9 so slick
is the care that is put into conceptual integration; systems that are
"designed" with much less care tend to end up supporting several distinct
features where one properly-designed facility would have sufficed.

mjr@hussar.dco.dec.com (Marcus J. Ranum) (04/08/91)

gwyn@smoke.brl.mil (Doug Gwyn) writes:
>
>[...] having gotten used to spiffy user
>interfaces I'm no longer sure I could be productive if forced to
>revert to old methods.

	The idea is not necessarily to be a counter-revolutionary and
just pitch all window systems (and GNU Emacs) but to study the
functionality of the window systems and eliminate redundancy and
little-used routines wherever possible. I agree 100% with Doug that part
of Plan 9's slickness is its careful attention to making things
conceptually consistent - a feature of the original UNIX. In fact, the conceptual
elegance of the original UNIX is about the only part that hasn't been
preserved in some grottied-up backwards compatibility hack - it's just
plain gone.

	Picking on windows systems is easy because they're, er, such
large targets. How many different ways can you do the same thing under
MOTIF or Open Look? All that configurability, keyboard-mapping, resizing,
etc. does not come for free. It comes with a cost to the user, too, since
you can easily waste hours frobbing Xdefaults files, startup scripts,
and (possibly) reading the fine manual.

	I agree with the fellow from the Postgres team that spiffy
user interfaces are a Good Thing, in that they make computers accessible
to less technical users - but the less technical users aren't going to
(initially) use all those wonderful, slow, buggy features that have been
laboriously added to their user interface. Apple used to do (do they
still?) some interesting research into which components of the window
system were and weren't used, though possibly they exerted that
control because of limitations in ROM space.

	Being productive with the new methods consists of, what, being
able to have more than one application running, being able to quickly
set your input focus, being able to cover and uncover applications if
there is a size problem, and being able to start and stop them. What
else? I agree, I like my workstation with all these fine windows - it's
like having 8 VT52s with only one keyboard. :) Having my window manager
duplicate functionality of my shell is absurd.

mjr.

ed@mtxinu.COM (Ed Gould) (04/09/91)

>However, having gotten used to spiffy user interfaces I'm no longer
>sure I could be productive if forced to revert to old methods.
>One's personal standards change based on experience.

Personal standards certainly do change.  I don't think I want to
go back to using a one-window tube (although, that's what I am
using just now) for real work, either.  I'd rather, though, have an
efficient and productive interface than one that eats more of the
machine than it really needs.

>>... most of the things that take more than that do so because they're
>>bloated with excess goo, badly coded, or - most likely - both.

>Don't forget another possibility, which is lack of integration in their
>design.

Agreed.  Good integration - especially on the conceptual level -
has many benefits.  Among others, it's the best way I know of to
avoid the excess goo.

-- 
Ed Gould			No longer formally affiliated with,
ed@mtxinu.COM			and certainly not speaking for, mt Xinu.

"I'll fight them as a woman, not a lady.  I'll fight them as an engineer."

andrew@alice.att.com (Andrew Hume) (04/10/91)

	Just to amplify a little on what Ed said: the original
Jerq terminal (Pike & Locanthi) offered a simple but complete windowing
system, although one sadly lacking in features by today's standards.
It ran on something like a 10-12MHz 68000. I am pleased to note that
with the advent of the 7?MIP Snake workstation, we can now buy
a workstation with a window system that is as responsive to the touch
of a mouse as the Jerq was. It may have taken 9 years to accomplish
this, but then, it also took a factor of 50 in CPU speed.

	andrew hume

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (04/14/91)

In article <1991Apr5.072447.4432@mtxinu.COM> ed@mtxinu.COM (Ed Gould) writes:
> I know that there are a lot of things that legitimately need more
> than 64KB (even the world's smallest fully-functional Unix kernel -
> the one at Bell Labs research - is larger), but most of the things
> that take more than that do so because they're bloated with excess
> goo, badly coded, or - most likely - both.

Heh. On a DECsystem here running Ultrix 4.1, accton fails with ``not
enough core.'' Not enough core? The machine has 32MB of memory, which is
nowhere near full, not to mention 64MB or so of virtual memory. What
do the geniuses at DEC tell us? Virtual memory should be at least 4x
real---that's right, 128MB for a computer that rarely uses more than
20MB---on any running machine. Brilliant.

[sigh] I always thought the Apple II's 16K expansion card was a huge
improvement: dozens of extra programs in memory, or even enough room
to run a (gasp) optimizing compiler. Oh, well.

---Dan

stripes@eng.umd.edu (Joshua Osborne) (04/17/91)

In article <1991Apr04.025733.18462@decuac.dec.com>, mjr@hussar.dco.dec.com (Marcus J. Ranum) writes:
> 	This is something that's always amazed me - I suspect that if
> formal studies were done, we'd find that only a small fraction of the
> "nifty functionality" that gets added to applications is ever used. Does
> anyone have pointers to any real research in this area? Has anyone done
> any studies about, say, what amount of the average editor's command
> set is used (10%? 15%?) or - the average window manager's? How much
> useless code is actually out there - has anyone even tried to measure
> it?

I tend to use about 60% of the commands available to me in vi (I don't use map
very often); however, this is 60% of what I have read about, not 60% of what is
in Ultrix's and SunOS's vi.  I use *all* the commands available to me in my
window manager: I configured out everything I don't use, and put in things I
do.  However, most of the code in the version of tvtwm I use is never executed
(I used a profiler); approximately 40% of the code is used (by me) on a mono
system, more on a color one.
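
(If you want a crude version of that measurement for your own binary, the
sketch below - untested Python, not the program I used, and with a made-up
script name and file arguments - compares the code symbols nm(1) reports
against a list of function names your profiler actually saw executed, one
per line.  Static functions, stripped symbols, and inlining all blur the
numbers, so treat the percentage as rough.)

    #!/usr/bin/env python
    # Rough "how much of this program ever runs?" figure.
    # usage: coverage.py nm-output profiled-names
    #   nm-output:      output of `nm binary'
    #   profiled-names: names of functions the profiler saw executed
    import sys

    if len(sys.argv) != 3:
        sys.exit("usage: coverage.py nm-output profiled-names")

    defined = set()
    for line in open(sys.argv[1]):
        fields = line.split()
        # nm prints "address type name"; type T/t marks code symbols
        if len(fields) == 3 and fields[1] in ("T", "t"):
            defined.add(fields[2])
    if not defined:
        sys.exit("no text symbols found - binary stripped?")

    used = set(line.strip() for line in open(sys.argv[2])) & defined
    print("%d of %d functions executed (%.0f%%)" %
          (len(used), len(defined), 100.0 * len(used) / len(defined)))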

> 	I seem to recall reading someplace that the original UNIX
> accounting utilities were also used as a tool to feed back what
> commands were consuming how many resources, versus how much they
> were being used, etc. Does anyone still do this? Does anyone *DARE*!?

For a class assignment I checked what commands different classes use, but I
didn't check how much CPU was used (my program calculated think times, and
could re-create a not-very-realistic command sequence based on Markov chains).

I found that for classes ranging from the freshman "This is Unix, this is Unix
Mail..." to grad classes doing numerical analysis, with a 4th-year "Advanced OS"
course sandwiched in between, the top 20 commands account for 80% to 95% of
all command invocations (at least over a 2-week period).
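
(The guts of that assignment are easy to reconstruct.  This sketch - untested
Python, not the original program, and minus the think-time bookkeeping - reads
a log of command names, one per line, reports the share of invocations covered
by the top 20 commands, and then walks first-order Markov transition counts to
re-create a short, not very realistic session.)

    #!/usr/bin/env python
    # Feed this a command log, one command name per line.
    import random
    import sys
    from collections import defaultdict

    cmds = [line.split()[0] for line in sys.stdin if line.split()]
    if not cmds:
        sys.exit("empty command log")

    # What share of all invocations do the 20 most popular cover?
    counts = defaultdict(int)
    for c in cmds:
        counts[c] += 1
    top20 = sorted(counts.values(), reverse=True)[:20]
    print("top 20 commands cover %.0f%% of %d invocations" %
          (100.0 * sum(top20) / len(cmds), len(cmds)))

    # First-order Markov chain: record what follows what, then walk
    # it to produce a (not very realistic) ten-command session.
    follows = defaultdict(list)
    for prev, nxt in zip(cmds, cmds[1:]):
        follows[prev].append(nxt)

    cur = random.choice(cmds)
    session = [cur]
    for _ in range(9):
        cur = random.choice(follows.get(cur) or cmds)
        session.append(cur)
    print("synthetic session:", " ".join(session))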
 
> 	A good friend of mine has this theory that computers today
> are really no more useful than the woefully "obsolete" ones we see
> in the computer museum - by the time you factor in the amount of
> sheer gunk they're wasting their time doing (painting nifty-keen 3-d
> widgets, etc, etc, etc) and the sheer human cost of *understanding*
> all that gunk, they are no faster, no more cost effective, and no
> more capable at doing "real work" than they used to be. Of course,
> that's an utterly insane argument, isn't it?

Well, I can tell you that I get a lot more done today on an X terminal
running off a Sun 4/60 (SS1) than I did a few years ago with an Atari ST,
and I got more done on that than on a C=64; I got less done on the 64 than
on an IBM 370, but more done on the ST than on the 370.  So, for me, I get
more done on a "modern" computer than on the old ones.  However, I don't
use fake 3-D; it doesn't work very well on a mono system (and I don't use
it on color ones either, though I do like color better - I can find my
mouse quicker on them).
-- 
           stripes@eng.umd.edu          "Security for Unix is like
      Josh_Osborne@Real_World,The          Multitasking for MS-DOS"
      "The dyslexic porgramer"                  - Kevin Lockwood
"CNN is the only nuclear capable news network..."
    - lbruck@eng.umd.edu (Lewis Bruck)

steve@endgame.gsfc.nasa.gov (Steve Rezsutek) (04/18/91)

In article <1991Apr17.153508.28645@eng.umd.edu> stripes@eng.umd.edu (Joshua Osborne) writes:

   > 	A good friend of mine has this theory that computers today
   > are really no more useful than the woefully "obsolete" ones we see
   > in the computer museum - by the time you factor in the amount of
   > sheer gunk they're wasting their time doing (painting nifty-keen 3-d
   > widgets, etc, etc, etc) and the sheer human cost of *understanding*
   > all that gunk, they are no faster, no more cost effective, and no
   > more capable at doing "real work" than they used to be. Of course,
   > that's an utterly insane argument, isn't it?

   Well, I can tell you that I get a lot more done today on an X terminal
   running off a Sun 4/60 (SS1) than I did a few years ago with an Atari
   ST, and I got more done on that than on a C=64; I got less done on the
   64 than on an IBM 370, but more done on the ST than on the 370.  So,
   for me, I get more done on a "modern" computer than on the old ones.
   However, I don't use fake 3-D; it doesn't work very well on a mono
   system (and I don't use it on color ones either, though I do like
   color better - I can find my mouse quicker on them).

This is perhaps a silly comparison, but it will [hopefully] illustrate my
point. Let's assume (dangerous, I know ;-) that, in comparing MS-DOS to
Unix, Unix fits the description of software that has all the bloated
``gunk'' while MS-DOS is the ``lean, mean computin' machine''. [I've heard
this opinion expressed by not just a few DOS die-hards.] On the *same*
hardware, I'd venture to guess that Unix will "eat up" maybe 15% of the
available computes, but I certainly get a *lot* more done using Unix than
I ever did/will with MS-DOS (unless getting frustrated and having to reboot
constitutes "getting things done").

Now, to carry this further: if I want to illustrate a paper I'm working
on, I would get a lot more done using X and something like Tgif than
hacking straight PostScript over a dialup. On the other hand, if I'm
reading news/mail, then I'll stick to emacs on a terminal (xterm or
otherwise). Mice et al. just don't seem to be as efficient as a good
[and perhaps a bit overweight ;-)] text editor when coping with textual
things like composing mail or writing code.

My point is that how effective something is at "getting things done" might
well change in relation to what one is trying to get done. ``When all you 
have is a hammer, everything starts to look like a nail.''

Just my 20 milli-dollars worth.

Steve

vandys@sequent.com (04/18/91)

steve@endgame.gsfc.nasa.gov (Steve Rezsutek) writes:

>... but I certainly get a *lot* more done using Unix than
>I ever did/will with MS-DOS (unless getting frustrated and having to reboot
>constitutes "getting things done").

	This cuts both ways.  I wrote a bulletin board to run on
UNIX for amateur packet radio, and took the system to a users' group
and demonstrated it.  An MS-DOS programmer came up and asked me ("and
be honest!") how many times I crashed the system during development.
My answer of "several core dumps, no crashes" left him staring at me
blankly.

	On the other hand, you don't realize how sickly adb, dbx, and
even gdb are until you've used magic CodeView on a 50 MHz i486.  That thing
is SLICK, and runs like greased lightning.  Reboots real fast, too :->.

					Just my opinions,
					Andy Valencia
					vandys@sequent.com