[comp.unix.wizards] Large programs

edw@ius1.cs.cmu.edu (Eddie Wyatt) (09/29/87)

  I was reading "The UNIX Time-Sharing System" by Dennis Ritchie and Ken
Thompson (1978) for a qual, and I came across something I found to be humorous
and pertinent to the discussion about large programs.

"In the absence of the ability to redirect output and input, a still
clumsier method would have been to require the 'ls' command to accept user
requests to paginate its output, to print in multi-column format, and
to arrange that its output be delivered off-line. Actually it would be
surprising, and in fact unwise for efficiency reasons, to expect
authors of commands such as 'ls' to provide such a wide variety of output
options."

   It seems very funny that they used 'ls' as an example, since that
command is now so burdened with options, the functionality of which
could be provided by piping the output of the command into other
UNIX utilities. It seems that someone lost sight of the original plan.
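The composition the paper had in mind still works today; a quick sketch (the `pr` options are my choice of illustration, not from the paper):

```shell
# multi-column listing without an ls option: let pr do the columns
# (-4 = four columns, -t = omit pr's page headers and trailers)
ls | pr -4 -t

# pagination without an ls option: pr's default output is paginated,
# with headers, ready to be sent off to a printer
ls | pr
```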

-- 

					Eddie Wyatt

e-mail: edw@ius1.cs.cmu.edu

roy@phri.UUCP (09/29/87)

In article <1046@ius1.cs.cmu.edu> edw@ius1.cs.cmu.edu (Eddie Wyatt) writes:
>    It seems very funny that they used 'ls' as an example, since that
> command is now so burdened with options, the functionality of which
> could be provided by piping the output of the command into other
> UNIX utilities. It seems that someone lost sight of the original plan.

	Once again, it seems that two comp.unix.wizards discussions have
converged to a common point.  In the one, we have people arguing about how
much extra baggage ls should have that could instead be done by piping through
a formatter, and on the other hand we have people arguing about RISC vs. CISC
and whether to make integer divide an instruction or a subroutine.

	It's really the same argument.  You start with a simple set of tool
modules which you can plug together in various ways to do whatever you
want.  Then, you watch people for a long time and try to spot patterns in
how they plug the modules together.  If you see that almost every
invocation of "ls" is piped into "pr -4" to get multicolumn output, you
start to think it might be worthwhile to just build it into ls and save a
fork/exec every time.  Same argument for hardware divide instructions.

	Of course, what I've just described is creeping featurism, the
philosophy-non-grata of today's RISC-oriented society.  CF hit hardware
design like a ton of bricks with things like the Vax and the 68020 and the
industry (over?) reacted to the plague with Clipper, MIPS, SPARC, etc.  Are
we to see the same reaction in Unix?  Is that what GNU and Mach are all
about?  Interesting to note that SUN, while going whole-hog on software
complexity (YP, suntools, etc) also has embraced RISC as a hardware design
paradigm.
-- 
Roy Smith, {allegra,cmcl2,philabs}!phri!roy
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016

gwyn@brl-smoke.UUCP (10/01/87)

In article <1046@ius1.cs.cmu.edu> edw@ius1.cs.cmu.edu (Eddie Wyatt) writes:
>... It seems that someone lost sight of the original plan.

Oh, you noticed that?

A partial remedy is to get as many people as possible to read
Kernighan & Plauger's "Software Tools" and Kernighan & Pike's
"The UNIX Programming Environment".  Software toolkits still
make good sense, but lots of people just aren't aware of the
ramifications.

Bentley's "Programming Pearls" also has good examples of
toolkit usage.

mc68020@gilsys.UUCP (Thomas J Keller) (10/03/87)

In article <1046@ius1.cs.cmu.edu>, edw@ius1.cs.cmu.edu (Eddie Wyatt) writes:
> 
> "In the absence of the ability to redirect output and input, 
>	[ stuff about why ls shouldn't have lots of options ]
> authors of commands such as 'ls' to provide such a wide variety of output
> options."
> 
>    It seems very funny that they used 'ls' as an example, since that
> command is now so burdened with options, the functionality of which
> could be provided by piping the output of the command into other
> UNIX utilities. It seems that someone lost sight of the original plan.

   Okay, now I am the first to admit that I am a relative neophyte to UNIX and
its philosophy, but it seems to me that a crucial point is being missed here.

   I read quite frequently about how programs should be kept small, simple,
single-purpose, and then tied together with pipes to perform more complex
tasks.  This is all well and good from one perspective.  But it seems to me
that it ignores a perspective which is highly important (not altogether
surprising, as UNIX has a well established tradition of ignoring this aspect
of computing), specifically, the user interface.

   1)  entering a command which uses three to seven different small programs,
all piped together, is a *PAIN* in the arse!  In many cases, a single command
is much more desirable, certainly less prone to errors, and always easier and
faster to use.

   2)  speaking of speed, we all seem to have forgotten that each one of those
lovely small programs in the chain has to be loaded from disk.  Clearly, the
overhead necessary to fork & spawn multiple processes, which in turn load
multiple program text into memory, is **MUCH** greater than spawning and 
loading a single program!  Waiting time is important too, you know?

   I use the power of I/O redirection in UNIX whenever it makes sense to
do so, and I find it extremely useful.  I would suggest, however, that
monomaniacal adherence to a so-called "UNIX Philosophy" which for the most
part blatantly ignores the needs and convenience of the USERS is an error.
Sure, it's FUN to be a wizard, to know how to invoke arcane sequences which
accomplish what are really fairly simple tasks, and to have unsophisticated
users in awe of your prowess.  Fun and very satisfying.  But not very
effective, and for my money, highly counter-productive.

   There is no reason that UNIX should remain a mysterious and arcane system
which typical users are fearful to approach, yet this is the case.  Continuing
promulgation of the "UNIX Philosophy", as it currently exists, can only ensure
that fewer people will learn and use UNIX.  It is time for us to get our egos
and our heads out of the clouds, and make UNIX a reasonable, effective
environment for everyone, not just the wizards.

   [stepping down off soapbox, donning asbestos suit (don't tell the EPA!)]


-- 
Tom Keller 
VOICE  : + 1 707 575 9493
UUCP   : {ihnp4,ames,sun,amdahl,lll-crg,pyramid}!ptsfa!gilsys!mc68020

guy%gorodish@Sun.COM (Guy Harris) (10/06/87)

>    1)  entering a command which uses three to seven different small programs,
> all piped together, is a *PAIN* in the arse!  In many cases, a single command
> is much more desirable, certainly less prone to errors, and always easier
> and faster to use.

Which means that any commonly-used such sequence should be wrapped up in e.g. a
shell script or an alias.  Unfortunately, many such commonly-used sequences
aren't so bundled, e.g. the "ls | <multi-column filter>" sequence so often
suggested as preferable to having "ls" do the job.  (I'm curious how
general-purpose such a multi-column filter would be if it were to give you
*all* the capabilities of the current multi-column "ls"; i.e., were something
such as "ls * | <multi_column_filter>" in a directory with multiple
subdirectories able to give a listing of the form

	directory1:
	file1.1		file3.1
	file2.1		file4.1

	directory2:
	file1.2		file3.2
	file2.2		file4.2

If the filter couldn't do that, I wouldn't find it acceptable.  If it could do
*more* than that, e.g. converting "ls /foo/*.c /bar/*.c | <multi-column
filter>" into

	foo:
	alpha.c		gamma.c
	beta.c

	bar:
	delta.c

I'd find it wonderful.)
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com

crowl@cs.rochester.edu (Lawrence Crowl) (10/06/87)

In article <1130@gilsys.UUCP> mc68020@gilsys.UUCP (Thomas J Keller) writes:
]   I read quite frequently about how programs should be kept small, simple,
]single-purpose, and then tied together with pipes to perform more complex
]tasks.  This is all well and good from one perspective.  But it seems to me
]that it ignores ... the user interface.

I think you should be careful to distinguish between ignoring the user
interface and choosing a user interface you feel is inappropriate.

]   1)  entering a command which uses three to seven different small programs,
]all piped together, is a *PAIN* in the arse!  In many cases, a single command
]is much more desirable, certainly less prone to errors, and always easier and
]faster to use.

This problem is easily solved with a shell script.  This gets you a single
command and the convenience of not having to place all the filters in the
program.
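A minimal sketch of that solution, written here as a shell function; the name `lm` and the pr options are my own illustration, not from the article:

```shell
# lm: hypothetical one-word command canning a common pipeline
# (the name and the pr flags are illustrative choices)
lm() { ls "$@" | pr -4 -t; }
```

Once defined (or dropped into a script on $PATH), `lm /tmp` reads like a single command while remaining a pipe underneath.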

]   2)  speaking of speed, we all seem to have forgotten that each one of those
]lovely small programs in the chain has to be loaded from disk.  Clearly, the
]overhead necessary to fork & spawn multiple processes, which in turn load
]multiple program text into memory, is **MUCH** greater than spawning and 
]loading a single program!  Waiting time is important too, you know?

You forgot an important speed difference.  In the pipe approach, each program
in the pipe does a lot of file I/O and string to data to string conversions.
A system which operates on the data values themselves without the intermediate
file representation can be much more efficient.

]   I would suggest, however, that monomaniacal adherence to a so-called
]"UNIX Philosophy" which for the most part blatantly ignores the needs and
]convenience of the USERS is an error.  Sure, it's FUN to be a wizard, and know
]how to invoke arcane sequences which accomplish what are really fairly simple
]tasks, and to have unsophisticated users in awe of your prowess.  Fun and very
]satisfying.  But not very effective, and for my money, highly
]counter-productive.  

But the intended users of Unix are (or were initially) wizards!  They were
assumed to be doing weird things with consistent need for rapid, "hack"
solutions that a more structured environment might inhibit.

]   There is no reason that UNIX should remain a mysterious and arcane system
]which typical users are fearful to approach, yet this is the case.  Continuing
]promulgation of the "UNIX Philosophy", as it currently exists, can only ensure
]that fewer people will learn and use UNIX.  It is time for us to get our egos
]and our heads out of the clouds, and make UNIX a reasonable, effective
]environment for everyone, not just the wizards.

If you want to change the basic design premise of the system, fine.  But don't
get mad because someone else wants to maintain the original design premise.  I
believe there is a good compromise out there, but it is not obvious.
-- 
  Lawrence Crowl		716-275-9499	University of Rochester
		      crowl@cs.rochester.edu	Computer Science Department
...!{allegra,decvax,rutgers}!rochester!crowl	Rochester, New York,  14627

hwe@beta.UUCP (Skip Egdorf) (10/06/87)

(comp.arch has been removed from Newsgroups:. This is no longer dealing
with computer architecture.)

In article <1130@gilsys.UUCP>, mc68020@gilsys.UUCP (Thomas J Keller) writes:
> In article <1046@ius1.cs.cmu.edu>, edw@ius1.cs.cmu.edu (Eddie Wyatt) writes:
> > 
> > "In the absence of the ability to redirect output and input, 
> >	[ stuff about why ls shouldn't have lots of options ]
> > authors of commands such as 'ls' to provide such a wide variety of output
> > options."
> > 
> >    It seems very funny that they used 'ls' as an example, since that
> > command is now so burdened with options, the functionality of which
> > could be provided by piping the output of the command into other
> > UNIX utilities. It seems that someone lost sight of the original plan.
> 
>    Okay, now I am the first to admit that I am a relative neophyte to UNIX and
> its philosophy, but it seems to me that a crucial point is being missed here.
> 
>    I read quite frequently about how programs should be kept small, simple,
> single-purpose, and then tied together with pipes to perform more complex
> tasks.  This is all well and good from one perspective.  But it seems to me
> that it ignores a perspective which is highly important (not altogether
> surprising, as UNIX has a well established tradition of ignoring this aspect
> of computing), specifically, the user interface.

This is the statement that is deserving of flame. I hope, however, that
the discussion below provides more light than heat.

> 				... I would suggest, however, that
> monomaniacal adherence to a so-called "UNIX Philosophy" which for the most
> part blatantly ignores the needs and convenience of the USERS is an error.
> ...

> -- 
> Tom Keller 

Having been around UNIX for a few years, I would like to point out
that this is a twisting of what I understand to be the Unix Philosophy.

The original idea of Unix was to write a simple program (e.g. a naked
ls command with no options) and then profile it to find out how it
was used. The classic example of this was the early examination of
the ed editor, which found that the single major use of the editor was to
search for text patterns, with no modification, via
  g/regular-expression/p
Since this usage was found to be common, it was packaged as a separate
command, very naturally named "grep".
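The packaging step is easiest to see with the two side by side; a sketch (the file name and contents are throwaways, and ed's -s flag is not universal, so that invocation is shown only as a comment):

```shell
# a throwaway file to search (name and contents are illustrative)
f=/tmp/grep-demo.txt
printf 'alpha\nmatch me\nomega\n' > "$f"

# the editor idiom the name came from, g/re/p, as its own command:
grep 'match' "$f"

# the equivalent inside ed would be (on systems whose ed takes -s):
#   printf 'g/match/p\nq\n' | ed -s "$f"
```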

Many of the command-line arguments (and commands themselves) came to
exist in this way. Most of the initial accounting on early Unix (V5, V6, ...)
was not for charging users, but to find out how the system was used so that
it could be improved. This was viewed as the only REAL way to provide
for the users' needs, since some wizard would have, at best, a poor
understanding of those needs.

The whole Unix Philosophy was driven by examination of and support for
"the needs and convenience of the USERS" (to quote Tom).

It is unfortunate that a gaggle of hackers far less capable than a
Ken Thompson or a Dennis Ritchie ignored this philosophy, and produced
a set of commands with "features" added because the programmer thought
they were "neat" rather than in response to users' needs, and with
usage that made the commands harder to build upon and combine.

The often-quoted lack of Unix "User Friendly"ness comes from ignoring
the Unix philosophy and building tools in the same old way as all the
OTHER systems do. The scene would be much more attractive had the Unix
philosophy been heeded.

Don't blame a lack of Unix philosophy, rather work for its return.

					Skip Egdorf
					hwe@lanl.gov

edw@ius1.cs.cmu.edu (Eddie Wyatt) (10/06/87)

>    I read quite frequently about how programs should be kept small, simple,
> single-purpose, and then tied together with pipes to perform more complex
> tasks.  This is all well and good from one perspective.  But it seems to me
> that it ignores a perspective which is highly important (not altogether
> surprising, as UNIX has a well established tradition of ignoring this aspect
> of computing), specifically, the user interface.
> 
>    1)  entering a command which uses three to seven different small programs,
> all piped together, is a *PAIN* in the arse!  In many cases, a single command
> is much more desirable, certainly less prone to errors, and always easier and
> faster to use.

   Is it??  Which options to "ls" sort by time last modified, by time
created, print in single columns, in multiple columns.....

   Having the interface to each command be so large makes it hard just to
remember what damn switches to set to get things done.  So in my opinion,
piping output around is no more complex than the "switch" approach.
That fact alone does not justify the modular approach over the monolithic
one, though.  You gain by using pipes in that

	1) Once you know how to perform some operation on some data
	   (like sorting the output of ls by file size) you can extend it
	   to any command (like sorting the output of df by size).

	2) From the implementation standpoint, modularity can reduce the
	   amount of duplicated effort. -- Does ls bother calling
	   sort for its sorting of output or did someone implement yet
	   another sort in the ls code??

	3) Uniformity is achieved.  Does the -v switch for ls do the same
	   thing for cat?? Probably not. (Though I have to admit some
	   attempt at uniformity in switches is made: -i for cp, rm, mv
	   does basically the same thing)
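Point (1) can be sketched concretely; the field numbers below are assumptions that vary from system to system:

```shell
# sort files by size: on many systems the size is field 5 of 'ls -l'
ls -l | sort -n -k5

# the same sort knowledge transferred to df (its size column also
# varies by system; field 2 is a guess for a typical df output)
df | sort -n -k2
```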


> 
>    2)  speaking of speed, we all seem to have forgotten that each one of those
> lovely small programs in the chain has to be loaded from disk.  Clearly, the
> overhead necessary to fork & spawn multiple processes, which in turn load
> multiple program text into memory, is **MUCH** greater than spawning and 
> loading a single program!  Waiting time is important too, you know?

   Admittedly, speed of execution is one of the prices you pay for taking
the modular approach, but things aren't all that bad.  Piped processes
get executed concurrently.  If you had a parallel processor, who knows,
maybe each program could be executed on a different processor.  The
pipes could provide a coarse-grain breakdown of the computing needed. 8-}

-- 

					Eddie Wyatt

e-mail: edw@ius1.cs.cmu.edu

gwyn@brl-smoke.ARPA (Doug Gwyn ) (10/06/87)

One of the things I'm surprised nobody has mentioned yet is that
programs that try to provide everything already nicely packaged for
the user generally do not provide any way to accomplish a new, hitherto
unenvisioned task for which no support was planned in advance.

To take a simple example, not long ago in net.puzzles there were some
questions such as:  What is the longest word containing letters in
alphabetical order?  It is most unlikely that ANY "word processing"
package design would have anticipated such questions; however, it was
quite easy to answer such questions by imaginative combinations of
standard UNIX tools.
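For the curious, one such imaginative combination; the word list here is inlined, since the usual dictionary path (/usr/share/dict/words) varies by system:

```shell
# keep the longest line whose letters are in non-decreasing order
printf 'billowy\nabbot\naegilops\nzebra\n' |
awk '{ ok = 1
       for (i = 2; i <= length($0); i++)
         if (substr($0, i, 1) < substr($0, i - 1, 1)) ok = 0
       if (ok && length($0) > max) { max = length($0); best = $0 } }
     END { print best }'
# prints: aegilops
```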

This isn't to say that fancy environment packages for naive users are
bad; however, there is a need to support the person who is exploring
new territory as well.  I don't see any way to keep that from
amounting to some form of high-level programmability, either.  That
kind of capability is what attracted many of us to UNIX in the first
place.  Rather than decrying it, it would be far better for those who
want another user interface to develop it AS AN ADDITION TO, not as a
replacement for, the traditional UNIX shell toolkit interface.

preece%mycroft@gswd-vms.Gould.COM (Scott E. Preece) (10/07/87)

  From: Eddie Wyatt <edw@ius1.cs.cmu.EDU>
> > 1) entering a command which uses three to seven different small
> > programs, all piped together, is a *PAIN* in the arse!  In many cases, a
> > single command is much more desirable, certainly less prone to errors,
> > and always easier and faster to use.
> 
>    Is it??  Which options to "ls" sort by time last modified, by time
> created, print in single columns, in multiple columns.....
----------
Well, actually, I don't have any trouble keeping those in my head,
BECAUSE I use them all the time.  I also alias some things I use
occasionally to a single command.  Aliases obviously vitiate this
argument in any case:  you can tie that complex pipe to a simple command
so you don't have to type so much.

> 	1) Once you know how to perform some operation on some data
> 	   (like sorting the output of ls by file size) you can extend it
> 	   to any command (like sorting the output of df by size).
----------
I often think the ls example is misleading.  There are some Unix tools
which work very well in a pipes-and-filters arrangement and many others
that don't.  Ls happens to work pretty well for the obvious cases and
less well for some others (sure, it's possible to get an ls sorted by access
date using pipes, but it's a real pain.  So, you end up tying it to a
shell script that cans the sort specification for you.  Now you have to
remember the name of the shell script.  So much for ease of use.)

> 	2) From the implementation standpoint, modularity can reduce the
> 	   amount of duplicated effort. -- Does ls bother calling
> 	   sort for its sorting of output or did someone implement yet
> 	   another sort in the ls code??
----------
There is no guarantee that the system sort is the right sort for any
particular sorting application.  It's very likely that the sorting
required by ls can be done faster and cheaper using an internal sort.
If it's done enough, that's a good trade off.

> 	3) Uniformity is achieved.  Does the -v switch for ls do the same
> 	   thing for cat?? Probably not. (Though I have to admit some
> 	   attempt at uniformity in switches is made: -i for cp, rm, mv
> 	   does basically the same thing)
----------
I don't see what this argument means.  Why does construction from pipes
say anything about uniformity of switches?

>    Admittedly, speed of execution is one of the prices you pay for
> taking the modular approach, but things aren't all that bad.  Piped
> processes get executed concurrently.  If you had a parallel processor,
> who knows, maybe each program could be executed on a different
> processor.  The pipes could provide a coarse-grain breakdown of the
> computing needed. 8-}
----------
And a fine-grained parallelism approach would get you the same gain from
the monolithic program, without the pain of having to exec several
different programs to do one (external) function.  Performance matters;
it makes blindingly obvious sense to work to improve performance of the
things that get done most.  I think you'll find that ls is one of the
two or three most used (by humans) commands on most Unix systems.  It
makes sense to make it do its most common functions efficiently.  It
also makes sense to make it compatible with other tools, so that more
obscure things can be done in combination.

The tool composition approach is wonderful and is obviously an important
thing for us to provide in our systems, but to jump from that to a
belief that no tools should support compound functions is just silly.

bzs@bu-cs.bu.EDU (Barry Shein) (10/07/87)

>   Admittedly, speed of execution is one of the prices you pay for taking
>the modular approach, but things aren't all that bad.  Piped processes
>get executed concurrently.  If you had a parallel processor, who knows,
>maybe each program could be executed on a different processor.  The
>pipes could provide a coarse-grain breakdown of the computing needed. 8-}

Maybe? Don't say maybe, definitely. On our Encores here at BU, all
those wonderful habits of piping things together, and of programs forking
off subprocesses and kindred shell scripts, do just that: each end of
the pipe will tend to be scheduled on its own CPU (the only limiting factor
being, of course, the number of available processors.)  The speed-ups can
be tremendous and totally transparent (so much for the "gee, where are we
gonna get software for parallel processors?" we heard a few years ago; the
users are doing true parallel processing here and don't even know it most
of the time.)

Meanwhile the big, hairy application programs will only use one CPU
unless they're re-worked in a major way. And I bet the folks who wrote
most of them said "we can't use pipes, this is a *real* program, we
need speed...", oh well, they lose. Now that Encore, Sequent and many
others are providing this sort of transparent and effective
parallelism (and who will follow?) how many of those packages will
have rendered themselves dinosaurs in record time? I hear even the new
IBM/PC line (PS/2) was designed to have multiple CPUs. Ho hum. Looks
like another software shakeout comin' down the pike.

	-Barry Shein, Boston University

stpeters@dawn.steinmetz (10/08/87)

In article <10908@beta.UUCP> hwe@beta.UUCP (Skip Egdorf) writes:
>The often quoted lack of Unix "User Friendly"ness comes from ignoring
>the Unix philosophy ...

Hardly.  It comes from having a basic user interface designed for
experts.  Our site has some 2000+ users, gurus to secretaries, maybe
85% of them non-programmers.  Discussions of the UNIX philosophy of 20
years ago are about as relevant as discussions of original intent in
constitutional-law arguments about women's rights.

A more reasonable philosophy would be that features that someone,
*anyone*, wants and that do no harm are ok.  'ls' is the perfect
example: sit a user down at a SYSV UNIX console, tell him (or her)
that 'ls' is the command to list a directory, watch him type it and see
the single-column list scroll off the screen, and you've lost one to
VMS.  He'll leave the room, shaking his head.  I've seen it happen.

When they invented UNIX, they didn't have screens.  When a directory
listing came out, it was on paper, and anyway it went by slowly enough
so you could memorize it.

(Why do these 'ls' discussions keep popping up year after year?  And
especially why this time when the subject is "large programs"?)
Dick St.Peters                        
GE Corporate R&D, Schenectady, NY
stpeters@ge-crd.arpa              
uunet!steinmetz!stpeters

gwyn@brl-smoke.ARPA (Doug Gwyn ) (10/08/87)

In article <30035@sun.uucp> guy%gorodish@Sun.COM (Guy Harris) writes:
>(I'm curious how
>general-purpose such a multi-column filter would be if it were to give you
>*all* the capabilities of the current multi-column "ls"; i.e., were something
>such as "ls * | <multi_column_filter>" in a directory with multiple
>subdirectories able to give a listing of the form
>	directory1:
>	file1.1		file3.1
>	file2.1		file4.1
>	directory2:
>	file1.2		file3.2
>	file2.2		file4.2

The standard DMD "mc" utility does just that, but I think it's a kludge
(not everyone agrees with me on this) because it "understands" the special
"ls" format (name1:\nentry1\n...entryN\nname2:\n...).  It might as well be
built into "ls", since almost no other utility produces this format.

The issue of "ls" columnation is not whether it is useful to have such an
option built into "ls"; given the special nature of multi-directory listing,
it obviously is.  The real issue is whether "ls" should decide on its own
whether or not to columnate, such as the Berkeley version does, or not do
so unless directed by a specific option, such as the System V version does.
From the toolkit viewpoint, I think it is obvious that the latter is correct
behavior, since no matter how much "smarts" is built into "ls", it cannot
possibly make the correct columnation decision under all circumstances.
(The Berkeley "ls" behavior is yet another example of assuming a too-
restrictive usage model, a point I keep harping on.  It works fine for
simple interactive usage, but gets out of control in a complex streams
environment where processes may be trying to get "ls" to behave sensibly
on what APPEARS to "ls" to be an "interactive device".)  By the way, I use
a non-columnating "ls", but usually I invoke it by the name "l" (which is
even LESS work than typing "ls"), where I've defined the shell function "l"
via my interactive-shell startup file:
	l(){ (set +u; exec ls -bCF $*); }
Thus I decide for myself whether I want columnation or not.

	"Symmetry -- it's the way things should be.."
		- Jane Siberry

hwe@beta.UUCP (Skip Egdorf) (10/10/87)

In article <7573@steinmetz.steinmetz.UUCP>, stpeters@dawn.steinmetz writes:
> In article <10908@beta.UUCP> hwe@beta.UUCP (Skip Egdorf) writes:
> >The often quoted lack of Unix "User Friendly"ness comes from ignoring
> >the Unix philosophy ...
> 
> Hardly.  It comes from having a basic user interface designed for
> experts.  Our site has some 2000+ users, gurus to secretaries, maybe
> 85% of them non-programmers.  Discussions of the UNIX philosophy of 20
> years ago is about as relevant as discussions of original intent in
> constitutional-law arguments about women's rights.
> ...
> Dick St.Peters                        
> GE Corporate R&D, Schenectady, NY
> stpeters@ge-crd.arpa              
> uunet!steinmetz!stpeters

Please re-read my posting. I claimed that the Unix philosophy was to
provide the (few) wizards with the profiling tools and the interface
building tools (e.g. yacc) to provide the user interface required by
the users at a given place and time. This was followed in early Unix. It
was only when the 'time' and 'place' changed, without a change in the
user interface to match the new time and place, that the image of user
hostility appeared.

The last time that this approach was used was in the mid-70s, and the
same tools that were so far ahead of anything else then (because they
were built with the best tools, and designed with user feedback from the
accounting and profiling) are the same tools we have now. I no longer work
on a Silent 700, and neither do most Unix users. The problem is that
the profiling and tool-building philosophy has gone away.

If those who had produced the current crop of tools (most of which are
still from version 6 roots) had followed the earlier tool building and
profiling approach, our interface today would be much better.

I believe that this discussion IS relevant to our work today. What is the
most used command on your system this week? What mode was it used in?
What were its command-line arguments, by frequency of usage? Does anyone
still have shell accounting??? When was the last time you built a simple
user-friendly interface language in yacc?

There is light on the horizon. Try X Windows on a VAXstation. Try playing
with windows on an Apollo. Play with NeWS or SunView on a Sun.
I would like to have recorded for history (so I can look back 10 years
from now at the successes and the failures) which new systems use user
profiling to help direct interface design.  Sun, Apollo, MIT, CMU ...
are you listening??

					Thanks for listening
					Skip Egdorf
					hwe@lanl.gov

bzs@bu-cs.bu.EDU (Barry Shein) (10/10/87)

From: stpeters@dawn.steinmetz
>A more reasonable philosophy would be that features that someone,
>*anyone*, wants and that do no harm are ok.  'ls' is the perfect
>example: sit a user down at a SYSV UNIX console, tell him (or her)
>that 'ls' is the command to list a directory, watch him type it and see
>the single-column list scroll off the screen, and you've lost one to
>VMS.  He'll leave the room, shaking his head.  I've seen it happen.

Although I agree with the sentiment, I find it amusing that my first
memories of VMS were that I typed DIR and it spun past my screen;
there was no piping, so no 'more' (I suppose in the last 10 years they've
hacked something in somewhere.)  I walked out of the room shaking my
head...

Anyhow, I do think a lot of these discussions devolve into fairly
subjective claims about what a user interface should look like and
what's good and bad about Unix. Few or no counterexamples are ever
offered. It always seems to be Unix against "the as yet undefined".

I do know that AT&T has had a fair amount of success in other
endeavors asking people to type in long strings of digits to contact
their friends and business associates, there's probably more to user
interfaces than meets the eye. I've found (informally) that a person's
ability to adapt to a device is highly correlated with their
motivation to make use of it. You can make an interface a lot more
friendly by giving the employee involved a significant raise in pay.

It would be nice if people would perhaps rise above this hot-rod
mentality and try to provide balanced examples and maybe even some
reputable research from human factors engineering or similar
fields; even just some metrics and a hypothesis statement would help a
lot (user-friendly?  to whom? Our administrators? scientists?
students? small warm-blooded animals of unspecified lineage?)

Otherwise it just sounds like so much boy's-night-out bar-babble.
My ls is bigger than your ls, indeed (oops, that's an old joke.)

	-Barry Shein, Boston University

mike@turing.unm.edu.unm.edu (Michael I. Bushnell) (10/12/87)

In article <6524@brl-smoke.ARPA> gwyn@brl.arpa (Doug Gwyn (VLD/VMB) <gwyn>) writes:
~To take a simple example, not long ago in net.puzzles there were some
~questions such as:  What is the longest word containing letters in
~alphabetical order?  It is most unlikely that ANY "word processing"
~package design would have anticipated such questions; however, it was
~quite easy to answer such questions by imaginative combinations of
~standard UNIX tools.

This principle is often forgotten.  There was also the following
puzzle:  Which prime comes first in an alphabetical list?

A fairly good answer is arrived at with the command

 primes 2 | head -1000000000 | number | sort | head
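
Neither primes(6) nor number(6) is universal, so here is the same
computation sketched in Python; the spelling routine below is a
stand-in for number, using one plausible hyphenation:

```python
# Sketch of: primes 2 | head -N | number | sort | head
# Spell each prime in English, sort alphabetically, take the winner.

def primes_below(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, p in enumerate(sieve) if p]

ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def spell(n):
    """English name of 0 <= n < 1000."""
    if n < 20:
        return ONES[n]
    if n < 100:
        word = TENS[n // 10]
        return word + ("-" + ONES[n % 10] if n % 10 else "")
    word = ONES[n // 100] + " hundred"
    return word + (" " + spell(n % 100) if n % 100 else "")

def first_prime_alphabetically(limit):
    return min(primes_below(limit), key=spell)

print(first_prime_alphabetically(1000))  # -> 881, "eight hundred eighty-one"
```

Below 1000 the winner is 881: every "eight hundred ..." name sorts
before "eighteen" and "eighty-...", and "eighty-one" beats the other
trailing words.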





					Michael I. Bushnell
					a/k/a Bach II
					mike@turing.unm.edu
					{ucbvax,gatech}!unmvax!turing!mike
---
I want to kill everyone here with a cute colorful Hydrogen Bomb!!
				-- Zippy the Pinhead

jpdres10@usl-pc.UUCP (Green Eric Lee) (10/15/87)

In message <6530@brl-smoke.ARPA>, gwyn@brl-smoke.ARPA (Doug Gwyn ) says:
>In article <30035@sun.uucp> guy%gorodish@Sun.COM (Guy Harris) writes:
>The issue of "ls" columnation is not whether it is useful to have such an
>option built into "ls"; given the special nature of multi-directory listing,
>it obviously is.  The real issue is whether "ls" should decide on its own
>whether or not to columnate, such as the Berkeley version does, or not do
>so unless directed by a specific option, such as the System V version does.

I use both BSD and Sys V. On both systems I have put this alias into my
.cshrc:

alias ls "ls -C"

so that columnization is always enabled. The problem I always have
with the columnization of "ls" is that sometimes BSD looks, sees it's
talking to a pipe, and says "gee, he must not want columns!". That's
very irritating when I happen to be piping it to my favorite pager
(e.g., for a very long directory listing).

Still, for NAIVE users, the usual default is probably the best. I have
not had any experiences where I got multi-column output when I wanted
single-column; BSD seems pretty smart about telling the difference
between a terminal device and other devices. Since those experiences
are apparently rare, invoking "ls" with the "-1" option when necessary
seems more appropriate than requiring all users to always type
"ls -C" for such a commonly used operation.
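
The BSD behavior being described boils down to an isatty() check on
standard output. A minimal sketch of that decision (the function name
is hypothetical, and it fills across rows for brevity, whereas real
ls -C fills down columns):

```python
import sys

def format_listing(names, to_tty, width=80):
    """Columnate like ls -C when writing to a terminal;
    emit one name per line (like ls -1) otherwise."""
    if not to_tty or not names:
        return "\n".join(names)
    colwidth = max(len(n) for n in names) + 2
    percol = max(1, width // colwidth)      # entries per row
    rows = []
    for i in range(0, len(names), percol):
        rows.append("".join(n.ljust(colwidth) for n in names[i:i + percol]))
    return "\n".join(r.rstrip() for r in rows)

# ls decides by asking whether stdout is a terminal:
print(format_listing(["bin", "etc", "usr", "tmp"], sys.stdout.isatty()))
```

Eric's complaint is exactly the `sys.stdout.isatty()` call here:
through a pipe it returns false, and the single-column branch wins.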

My "man 1 ls" produced (on BSD4.2):

     ls	[ -acdfgilqrstu1ACLFR ]	name ...

At first glance that looks a bit excessive, but the alternative, for
many of these options, would be a totally separate command (they
couldn't be done with pipes and filters, in other words). For example,
the -F option causes directories to be marked with a trailing "/" and
executables with a trailing "*", a feature so nice that I folded it
into my "ls" alias on BSD4.2.
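
For what it's worth, -F is one of the few options that could plausibly
be a filter, since a downstream program can stat() each name itself. A
hypothetical standalone marker, assuming names arrive one per line on
stdin relative to the current directory:

```python
import os
import stat
import sys

def classify(path):
    """Append the ls -F style marker: "/" for directories,
    "*" for executables, nothing otherwise."""
    try:
        st = os.stat(path)
    except OSError:
        return path                      # vanished or unreadable: pass through
    if stat.S_ISDIR(st.st_mode):
        return path + "/"
    if st.st_mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH):
        return path + "*"
    return path

if __name__ == "__main__":
    for line in sys.stdin:               # e.g.  ls -1 | markf
        print(classify(line.rstrip("\n")))
```

The catch, and presumably why -F lives inside ls, is that the filter
must stat() every file a second time.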

Considering that the binary for /bin/ls is 20K on a VAX 780 under
BSD4.3, I don't see any point in unfairly picking on ls for being a
pig... there are lots of other pigs truly worth picking on. As long as
you don't point to GNU Emacs (with which I'm both reading and posting
news nowadays :-).

--
Eric Green  elg@usl.CSNET       from BEYOND nowhere:
{ihnp4,cbosgd}!killer!elg,      P.O. Box 92191, Lafayette, LA 70509
{ut-sally,killer}!usl!elg     "there's someone in my head, but it's not me..."

howard@cpocd2.UUCP (Howard A. Landman) (10/17/87)

In article <2946@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
>In article <1130@gilsys.UUCP> mc68020@gilsys.UUCP (Thomas J Keller) writes:
>]   2)  speaking of speed, we all seem to have forgotten that each one of those
>]lovely small programs in the chain has to be loaded from disk.  Clearly, the
>]overhead necessary to fork & spawn multiple processes, which in turn load
>]multiple program text into memory, is **MUCH** greater than spawning and 
>]loading a single program!  Waiting time is important too, you know?
>
>You forgot an important speed difference.  In the pipe approach, each program
>in the pipe does a lot of file I/O and string to data to string conversions.

???  A pipe need not do any file I/O at all!  The data is buffered in memory.
One of the advantages of pipes is that they still work when your file system
is full, whereas writing intermediate files (the normal alternative under
many operating systems) won't.
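
The in-memory buffering is easy to observe from user space; a small
sketch using os.pipe(), which creates no file system entry at all:

```python
import os

# Create an anonymous pipe: two file descriptors, no directory entry.
read_fd, write_fd = os.pipe()

message = b"intermediate data, never named in the file system"
os.write(write_fd, message)       # lands in a kernel buffer
os.close(write_fd)                # EOF for the reader

received = os.read(read_fd, 4096)
os.close(read_fd)

print(received == message)  # -> True
```

(As Ron Natalie points out later in the thread, some older kernels did
back that buffer with an inode and disk blocks; the point stands that
no named intermediate file is needed.)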

Also, while the pipe transmits a byte-stream, conversions are not necessary.
Most of the existing UNIX utilities operate on text, but it is possible to
pass any datatype through a pipe as long as the receiving program is expecting
it.  Try using fwrite() instead of printf() sometime inside a filter program;
you'll be *amazed* at the performance difference!  The drawback is, this won't
work if the data crosses the boundary between systems with different byte or
halfword ordering conventions, whereas text will work just fine.  It's an
issue of portability versus speed.
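
The fwrite()/printf() point translates to any language: binary records
skip the number-to-text-to-number round trip but freeze the byte
order. A sketch of the two encodings, with Python's struct.pack
standing in for C's fwrite of a struct:

```python
import struct

values = [3, 14, 159, 2653]

# Text encoding: what printf("%d\n", x) and scanf would exchange.
# Portable, but every value is formatted and reparsed.
text = "".join("%d\n" % v for v in values).encode("ascii")
parsed_text = [int(line) for line in text.decode("ascii").split()]

# Binary encoding: what fwrite(&x, sizeof x, 1, f) and fread exchange.
# Fixed-size records, no parsing -- but tied to one byte order
# ("<i" = little-endian int here), so it breaks across machines with
# different conventions, exactly as noted above.
binary = b"".join(struct.pack("<i", v) for v in values)
parsed_binary = [struct.unpack_from("<i", binary, off)[0]
                 for off in range(0, len(binary), 4)]

assert parsed_text == parsed_binary == values
print(len(text), len(binary))  # binary records are fixed width
```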

>A system which operates on the data values themselves without the intermediate
>file representation can be much more efficient.

There is no "intermediate file representation", unless by "file" you mean
"byte stream".  I don't find it generally useful to confuse these terms.

-- 
	Howard A. Landman
	{oliveb,hplabs}!intelca!mipos3!cpocd2!howard	<- works
	howard%cpocd2%sc.intel.com@RELAY.CS.NET		<- recently flaky
	"Unpick a ninny - recall Mecham"

ron@topaz.rutgers.edu (Ron Natalie) (10/21/87)

Excuse me, but pipes do file I/O on some systems. Neglecting MINI-UNIX
(which doesn't have true pipes and implements them with real files),
real UNIX pipes used disk I/O. In non-BSD implementations, an inode
is allocated and disk blocks are allocated. Hopefully these stay in
the buffer cache rather than needing to be written to disk, but if
necessary they will get written out. Back in the days before FSCK,
it was usually necessary to clri some of the pipe turds left behind
by a crash (they have neither directory entries nor a link count).

System V (R2 and R3) on our 3B20 still does pipes this way.
Berkeley UNIX implements pipes as network sockets.  The data
is stored in mbufs; I suppose that as virtual memory these can get
paged out, incurring disk I/O as well.

-Ron