[comp.sys.amiga.tech] Pipe syntax... I think I'd better think it out again...

pete@violet.berkeley.edu (Pete Goodeve) (11/18/90)

Sheesh folks... here we've been pounding on this way long thread about pipe
syntax, separator characters, and such, and nobody -- NOBODY -- has pointed
out that we've all been haring off in exactly the wrong direction!

I got close to it earlier, when I noted that there "aren't many filters
in AmigaDOS", but I didn't follow the implication.  Heck, there're hardly
ANY!  How many of the standard command set will process Standard-Input to
Standard-Output?  Exactly.

Sure, we can write programs that behave as filters, and there are some unix
imports that do -- 'compress' filters like a champ, and I suppose grep does
too (haven't checked) -- but I wonder if that's the best way to do it
anyway.

I mentioned (in another article) a simple unix piped command that I'd used:

            ls -l | grep Nov

Well, OK -- how would you do the equivalent with standard AmigaDOS commands
(LIST and SEARCH), even given that you had a shell with pipe characters?

Using named pipes there SHOULD be no problem [pardon my ego -- I'll use
mine, but any pipe has the same result]:

            run list >IP:xx
            search IP:xx Nov

(Well, it looks good.  Unfortunately if you try it, it doesn't work!
'SEARCH' won't read anything but a file -- not a PIPE: not a CON:!
This is a bug, though -- nothing inherent as far as I can tell. [Use Mat
instead, folks (:-)) -- it's three times as fast as SEARCH anyway...]
To make the idiocy complete, 'SORT' has no qualms about pipes! Replacing
the second command above by:

            sort from IP:xx to *

works fine...)
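For readers more at home on unix, the same named-pipe idea can be sketched
in POSIX shell, with mkfifo standing in for the Amiga pipe device (the
file names and sample data here are just illustrative):

```shell
# Create a uniquely named pipe, write into it from one process, and
# read from it in another -- by plain filename, no '|' on the line.
pipe="/tmp/ip_demo.$$"          # $$ gives a reasonably unique name
mkfifo "$pipe"
printf 'Oct report\nNov letter\n' > "$pipe" &   # stands in for 'run list'
result=$(grep Nov "$pipe")                      # stands in for SEARCH
rm -f "$pipe"
echo "$result"
```

The producer blocks until the consumer opens the pipe for reading, so the
two processes rendezvous by name, just as with IP: above.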

Leaving aside such little frustrations, there's no way to use un-named
pipes here, unix fashion.  You'd have to redesign the syntax of all the
commands, and I guess it's a little late for that...

We don't have to give up on un-named pipes, though -- just pull our
thinking out of the unix rut.  The pipe/filter concept was a magnificent
stroke of genius when it was thought up, but that was 20 years ago now.
Let's see if we can do better.  While we're at it, I believe we can provide
for the more complex branching pipe-work that Kent Dolan and I were talking
about a few articles back.

Let's begin with an assumption or two.  The main one is that the command
lines as seen by piped programs will be exactly in the form they are at
present.  This means that an input (file) -- maybe an output too -- is
usually specified by an argument in the line, identified either by position
or keyword.  We CAN'T expect a program's source of data to be Standard
Input, although very probably Standard Output will get the results.  This
all means that the program has to open its own inputs (and maybe outputs)
-- it can't always get filehandles from the shell.

I think we have to abandon the popen/filehandle line of thought.
The simple key is to have the shell, PIPE command, or whatever, generate
suitable unique pipe names, and insert them as STANDARD FILENAME STRINGS
into the command line at user specified places.  What we need is a syntax
to specify the insertion points.  Unlike the unix situation, where direction
of dataflow is indicated by sequence on the line, our syntax will have to
say whether each pipe-argument is an input or output.

Some more assumptions:  Most commonly -- but not necessarily -- the source
end of a pipe will be Standard Output; the output end will be an argument
in a command line.  The usual situation will be the unix linear-filter one,
where each process has input from the previous one and outputs to the next,
but we also want to allow for cases where a program may be taking inputs
from more than one stream, and possibly have more than one output as well.
(We already encounter the latter -- have you ever tried to redirect a
program (like C++) that insists on sending to Standard Error?!)

Whatever syntax we choose should be compact and convenient for the
commonest cases (and of course should not get too gross for the other
possibilities...).  I can't claim to have come up with any sort of
satisfactory design at this stage, but I'll throw out some ideas for
thinking about.  For one thing, I'm inclining towards Peter da Silva's
PIPE command as the way to go, at least initially, because it nicely
avoids disastrous consequences from too-quickly-made changes in Shell
syntax: if a particular command breaks, just don't PIPE it!

If we go with the PIPE command, it follows that every piped command
except the last will end with a '+', so that all standard shells will
be happy with it.  I would guess that by default PIPE would assume that
Standard Output from an intermediate command line was to be piped;
if you wanted it to go elsewhere, you'd just redirect it as usual.
(Redirection to the console window would be ">*".)  For an "input
connector" from the previous command we should use a single character
(which BTW would HAVE to be surrounded by spaces -- it's an argument!).
One possibility is our old friend '|'.  As it can't occur alone in a
pattern, or as its first character, there'd be no confusion.  Another
choice might be '+', but that could be nasty for commands that did
arithmetic, or had '-/+' switches [which I kinda like (:-))].
Mmm, has anyone used '@' for anything yet?  (And why not allow the user
choice of the marker character, anyway, with a command line switch, or
maybe better an ENV: variable?)

Using my original example again [and assuming a working SEARCH!] the
suggested syntax would be:

            PIPE list +
            search | Nov

For source ends, and for multi-branch pipes, the syntax would have to
be extended beyond this.  I suggest that various codes beginning with '|'
(or whatever special marker was used) might work.  Thus an output argument
taking the place of Standard Output might be indicated '|+'.  Branches
might be given numeric suffixes: '|1', '|2', '|+1' etc...
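A rough POSIX-shell sketch of what such a PIPE command would do behind the
scenes: generate a unique pipe name and splice it, as an ordinary filename
string, into the consumer's argument list where the marker appears.  (I use
'@' as the marker and mkfifo as the pipe mechanism here purely for
illustration; mini_pipe and all its names are made up.)

```shell
# mini_pipe 'producer command' consumer args... : replaces a literal '@'
# argument with a freshly generated pipe name, runs the producer's
# Standard Output into it, then runs the consumer.
mini_pipe() {
    producer=$1; shift
    fifo="/tmp/mp.$$"
    mkfifo "$fifo"
    sh -c "$producer" > "$fifo" &      # producer's stdout feeds the pipe
    # rebuild the consumer's arguments, swapping '@' for the pipe name
    args=""
    for a in "$@"; do
        if [ "$a" = "@" ]; then a="$fifo"; fi
        args="$args $a"
    done
    $args                              # consumer sees a plain filename string
    status=$?
    rm -f "$fifo"
    return $status
}
out=$(mini_pipe "printf 'a Nov\nb Dec\n'" grep Nov @)
echo "$out"
```

A real PIPE command would also have to parse the '+' continuations and
handle the numbered branches, but the name-substitution core is this small.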

Just some meandering thoughts... (and I'm still thinking.)

                                        -- Pete --

peter@sugar.hackercorp.com (Peter da Silva) (11/18/90)

In article <1990Nov18.090654.24747@agate.berkeley.edu> pete@violet.berkeley.edu (Pete Goodeve) writes:
> We don't have to give up on un-named pipes, though -- just pull our
> thinking out of the unix rut.  The pipe/filter concept was a magnificent
> stroke of genius when it was thought up, but that was 20 years ago now.
> Let's see if we can do better.

About 6 years ago I got a bug up my ass about pipes being strictly linear.
I played with various "better" shells, and implemented a couple of syntaxes.
Working with a *text* interface I could never get anything better, though I
must admit ksh has made a bit of a breakthrough on the output side...

But...

The Mac had just come out and I was thinking about visual shells. What I
came up with was something like this:

Each program has an icon with a set of input and output ports:

	+--------+		+---------+
	|        |	        |         + O
      I +  grep  + O          I |   tee   |
	|        |		|         + O
	+--------+              +---------+


To build a program, you move the icons into place and connect them up with
lines. Files are represented by tanks:

	     +--------+	  +---------+
	     |        |   |         +---(workfile)
( ifile ) ---+  grep  +---+   tee   |
	     |        |	  |         +---...
	     +--------+   +---------+
-- 
Peter da Silva.   `-_-'
<peter@sugar.hackercorp.com>.

lrg7030@uxa.cso.uiuc.edu (Loren Rittle) (11/19/90)

In message <1990Nov18.090654.24747@agate.berkeley.edu> Pete wrote:

> I got close to it earlier, when I noted that there "aren't many filters
> in AmigaDOS", but I didn't follow the implication.  Heck, there're hardly
> ANY!  How many of the standard command set will process Standard-Input to
> Standard-Output?  Exactly.

While I do agree that many standard commands as shipped from Commodore
(under 1.3 and before) don't ``pipe'' well, all the GNU unix tools do.
All the so called buggy ARP commands that I use everyday with no
problem support reading from standard-in and writing to standard-out.
And every ARexx program I have written that can support piping, does.

> Sure, we can write programs that behave as filters, and there are some unix
> imports that do -- 'compress' filters like a champ, and I suppose grep does
> too (haven't checked) -- but I wonder if that's the best way to do it
> anyway.

> I mentioned (in another article) a simple unix piped command that I'd used:

>             ls -l | grep Nov

<talk about how one might do this under the AmigaOS with standard commands>
<with the conclusion being that unnamed pipes won't work ala unix style.>

I have used that exact command line under the WShell on the Amiga with
GNUgrep v1.5 and LS v4.1ljr.  While these tools were not shipped with
my machine from Commodore, I don't see it as a problem, as they are
as standard as the Commodore C: commands in my opinion.  Then again my
C: directory is now about 5MB in size, so what I consider standard is
a bit more than what some (perhaps most) others do.  If the standard tools
don't pipe they should be fixed, and may very well already be fixed in 2.0.
I just think it is important to note that un-named piping does work
under the AmigaOS if the programs are written to be used as filters.
And as I state above, many have been designed with filtering in mind.

What am I saying here, you might wonder?  I like piping to work the
way it currently works on my system.  This is not to say that you
should not continue work on the PIPE command -- it does sound neat --
but the statement that AmigaOS does not support unnamed pipes is false.
I think the PIPE command, with its ability to fork output and input
to multiple processes, would be quite useful, an extension to what I
currently have, but the new ability to add something like unnamed
piping to commands that don't support it sounds like a kludge.
By the way, many commands that won't handle piping won't work
any better under your new idea. The MORE command as distributed
by Commodore is a prime example:

MORE <PIPE:AA
and 
MORE PIPE:AA

using any pipe device you care to use causes the MORE command to
get quite upset, at least under the 1.3 version of MORE.
The command seems to have a problem with files that it can't seek
on. (PS Bill Hawes version of MORE has a similar problem when
working with pipes, at least the latest version I have does.)
LESS, another non-standard standard command, is the only pager
that I have found that supports piping the way it should.

There, I have said my piece: go for the PIPE command (like RUN) as
you were talking about, but this last change you talked about
sounds like it is of questionable value.
Loren J. Rittle

pete@violet.berkeley.edu (Pete Goodeve) (11/19/90)

In  <7072@sugar.hackercorp.com> (18 Nov),
Peter da Silva (peter@sugar.hackercorp.com) writes:
> About 6 years ago I got a bug up my ass about pipes being strictly linear.
> I played with various "better" shells, and implemented a couple of syntaxes.
> Working with a *text* interface I could never get anything better, though I
> must admit ksh has made a bit of a breakthrough on the output side...
>
> But...
>  [... goes on to outline a "visual shell" scheme...]

Yes -- I pretty much agree.  Prescribing multi-way links in ANY textual
language is bound to get pretty hairy.  I don't ever foresee people typing
command lines like that.  I've been trying it that way to test out my
manifold pipes, and -- unless I've planned it out on paper first -- I
ALWAYS end up chasing around to find which pipes I've left hanging THIS
time...!  Errors are always recoverable, but a naive user could get lost
awfully fast.  On the other hand, if you're willing to spend a little
effort debugging your scripts, I think it's very worthwhile to have such
a facility available.

A visual, two-dimensional, way of laying out your pipes would be ideal.
I've dreamed of such many times too.  Even then, though, I'm not sure
that I see Joe/Jolene user bothering with drawing a layout for a single
command sequence.  The end product would have to be some sort of executable
file -- a script again, I would guess -- that would be used repeatedly.

                                            -- Pete --

yarnall@opusc.csd.scarolina.edu (Ken Yarnall) (11/19/90)

In article <7072@sugar.hackercorp.com> peter@sugar.hackercorp.com (Peter da Silva) writes:
+
+The Mac had just come out and I was thinking about visual shells. What I
+came up with was something like this:
+
+Each program has an icon with a set of input and output ports:
+
+	+--------+		+---------+
+	|        |	        |         + O
+      I +  grep  + O          I |   tee   |
+	|        |		|         + O
+	+--------+              +---------+
+
+
+To build a program, you move the icons into place and connect them up with
+lines. Files are represented by tanks:
+
+	     +--------+	  +---------+
+	     |        |   |         +---(workfile)
+( ifile ) ---+  grep  +---+   tee   |
+	     |        |	  |         +---...
+	     +--------+   +---------+

There is a program called AVS (A Visualization System) for Ardent computers
that uses a scheme very similar to this.  There is a bank of icons for
various modules, either built in or user written, that have i/o ports.  You
drag them about and connect them up to build `filters' that process images
you are working on.  It is a truly impressive program.  

This is the nicest idea I've seen thus far for the piping problem.  I know I
said that I was tired of this thread and wasn't gonna post in it anymore, but
this is just too nice to pass up.  Time to think...

+Peter da Silva.   `-_-'

ken
-- 
     Ken Yarnall                 ///   yarnall@usceast.cs.scarolina.EDU
      Math Department, USC   \\\///   yarnall@ucseast.UUCP
       Columbia, S.C. 29208   \\\/   (803)777-5218
    `You'd better tie me up.' -- from the movie, "Tie Me Up, Tie Me Down"

mwm@raven.relay.pa.dec.com (Mike (My Watch Has Windows) Meyer) (11/20/90)

In article <1990Nov18.090654.24747@agate.berkeley.edu> pete@violet.berkeley.edu (Pete Goodeve) writes:
   Using named pipes there SHOULD be no problem [pardon my ego -- I'll use
   mine, but any pipe has the same result]:

	       run list >IP:xx
	       search IP:xx Nov

   (Well, it looks good.  Unfortunately if you try it, it doesn't work!
   'SEARCH' won't read anything but a file -- not a PIPE: not a CON:!

Hmm - I remember setting up complex pipes that way using Search. Then
again, I may have been using Ed Pucket's (I think that's it) PIPE:,
that lets you do a "dir" on them. It's been a while since I did that.

   I think we have to abandon the popen/filehandle line of thought.
   The simple key is to have the shell, PIPE command, or whatever, generate
   suitable unique pipe names, and insert them as STANDARD FILENAME STRINGS
   into the command line at user specified places.  What we need is a syntax
   to specify the insertion points.  Unlike the unix situation, where direction
   of dataflow is indicated by sequence on the line, our syntax will have to
   say whether each pipe-argument is an input or output.

Pete, Unix has had that for _years_. I installed it on the machine
you're running on now (though I can't guarantee that someone hasn't
uninstalled it since). Try running ksh, and the syntax:

	grep Nov (ls -l)

should do just what you want.
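(Editorially hedged aside: in ksh implementations with /dev/fd -- and in
bash, which borrowed the feature as '<(...)' process substitution -- the
same trick is available today.  A tiny sketch, run through bash explicitly
since plain sh lacks it; the sample data is made up:)

```shell
# <(cmd) expands to a /dev/fd/N filename whose contents are cmd's output,
# so grep receives it as an ordinary file argument -- no '|' needed.
count=$(bash -c 'grep -c Nov <(printf "Oct a\nNov b\nNov c\n")')
echo "$count"
```

The expansion is exactly the "insert a generated pipe name as a filename
string" idea Pete describes, done by the shell itself.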

I like that way of doing things. The command whose output is going to
appear in the file name substituted is in the place the name will
appear. That means you have to look fewer places to find the
information. When you get multiple input streams going (which I've
done), it makes it obvious what's going on where. It also deals
cleanly with embedding commands that embed commands.

I've been trying to get Bill Hawes to add this feature to WShell for a
couple of years now...

	<mike

--

bruce@zuhause.MN.ORG (Bruce Albrecht) (11/20/90)

>In article <1990Nov18.090654.24747@agate.berkeley.edu> pete@violet.berkeley.edu (Pete Goodeve) writes:
>I mentioned (in another article) a simple unix piped command that I'd used:
>
>            ls -l | grep Nov
>
>Well, OK -- how would you do the equivalent with standard AmigaDOS commands
>(LIST and SEARCH), even given that you had a shell with pipe characters?
>
>Using named pipes there SHOULD be no problem [pardon my ego -- I'll use
>mine, but any pipe has the same result]:
>
>            run list >IP:xx
>            search IP:xx Nov
>
>(Well, it looks good.  Unfortunately if you try it, it doesn't work!
>'SEARCH' won't read anything but a file -- not a PIPE: not a CON:!
>This is a bug, though -- nothing inherent as far as I can tell. [Use Mat
>instead, folks (:-)) -- it's three times as fast as SEARCH anyway...]

It works fine on AmigaDos 2.02 using the standard PIPE:.  Maybe we should
all take a deep breath, and chant "It's in there!"  Seriously, though, if
a Commodore standard AmigaDOS program doesn't support input from pipes,
send them a bug report.  By the way, someone else claimed in another posting
that More chokes on pipes, but it seems to work fine on the 2.0 version.


--


bruce@zuhause.mn.org	   

FelineGrace@cup.portal.com (Dana B Bourgeois) (11/23/90)

[ these comments pertain to the appended message from Pete G. ]

The 'visual shell' could be a way to automatically generate complex
shell scripts using the syntax that you work out.  Just like Power
Windows (tm) does for creating code that creates windows.

 
Dana Bourgeois @ Cup.Portal.Com

================ included post ================

In  <7072@sugar.hackercorp.com> (18 Nov),
Peter da Silva (peter@sugar.hackercorp.com) writes:
> About 6 years ago I got a bug up my ass about pipes being strictly linear.
> I played with various "better" shells, and implemented a couple of syntaxes.
> Working with a *text* interface I could never get anything better, though I
> must admit ksh has made a bit of a breakthrough on the output side...
>
> But...
>  [... goes on to outline a "visual shell" scheme...]

Yes -- I pretty much agree.  Prescribing multi-way links in ANY textual
language is bound to get pretty hairy.  I don't ever foresee people typing
command lines like that.  I've been trying it that way to test out my
manifold pipes, and -- unless I've planned it out on paper first -- I
ALWAYS end up chasing around to find which pipes I've left hanging THIS
time...!  Errors are always recoverable, but a naive user could get lost
awfully fast.  On the other hand, if you're willing to spend a little
effort debugging your scripts, I think it's very worthwhile to have such
a facility available.

A visual, two-dimensional, way of laying out your pipes would be ideal.
I've dreamed of such many times too.  Even then, though, I'm not sure
that I see Joe/Jolene user bothering with drawing a layout for a single
command sequence.  The end product would have to be some sort of executable
file -- a script again, I would guess -- that would be used repeatedly.

                                            -- Pete --

pete@violet.berkeley.edu (Pete Goodeve) (11/24/90)

In  <1990Nov19.031449.25071@ux1.cso.uiuc.edu> (19 Nov),
Loren  Rittle (lrg7030@uxa.cso.uiuc.edu) writes:
>
> While I do agree that many standard commands as shipped from Commodore
> (under 1.3 and before) don't ``pipe'' well, all the GNU unix tools do.
> All the so called buggy ARP commands that I use everyday with no
> problem support reading from standard-in and writing to standard-out.

Yes, agreed, but I think what I was trying to put across is that the
`filter' model isn't appropriate for a lot of operations you can otherwise
do with pipes.  For example the other day I wanted to check the changes
I'd made in a file I was editing before I saved it, so I simply dumped it
out to a pipe and ran `dif' on that and the original.  (Only one of the
inputs was a pipe, so in theory I suppose it could have been stdin, but
even unix `diff' doesn't handle that possibility!)

> [.....]
> By the way, many commands that won't handle piping won't work
> any better under your new idea. The MORE command as distributed
> by Commodore is a prime example:
>
> MORE <PIPE:AA
> and
> MORE PIPE:AA
>
> using any pipe device you care to use causes the MORE command to
> get quite upset, at least under the 1.3 version of MORE. [.....]
> LESS, another non-standard standard command, is the only pager
> that I have found that supports piping the way it should.

How very odd.  On my system, MORE works reasonably happily with a pipe
(and I know Carolyn intended it to...) -- except that it double-spaces
for some reason.  (It does that also if you give it input from a console
window.)  LESS (the version I have, anyhow), on the other hand, objects
that it "Can't accept input from a terminal".  (Whereas you CAN pipe to
less on unix -- it just won't backtrack past its current buffer.)

Could it be, I wonder, that your "Bug-Free" ARP system is not quite so..?
(:-)) (:-))  Actually, I've run into little snags of this kind every time
I've tried ARP -- or tried to use scripts that run fine on my system
on somebody else's with ARP installed -- so that is why it is NOT on mine...
[but that's another thread, isn't it.]

                                            -- Pete --

pete@violet.berkeley.edu (Pete Goodeve) (11/24/90)

In  <MWM.90Nov19143124@raven.relay.pa.dec.com> (19 Nov),
Mike Meyer (mwm@raven.relay.pa.dec.com) writes:
> In article <1990Nov18.090654.24747@agate.berkeley.edu> pete@violet.berkeley.edu (Pete Goodeve) writes:
|> [says....
>
|>              run list >IP:xx
|>              search IP:xx Nov
|>   ....doesn't work ]
>
> Hmm - I remember setting up complex pipes that way using Search. Then
> again, I may have been using Ed Pucket's (I think that's it) PIPE:,
> that lets you do a "dir" on them. It's been a while since I did that.

Yup.  Ed's pipes are fine for that, because they are the only ones that
are a true file-system, with Locks and all.  I like his scheme a lot,
and I'd still be using it, except for a couple of things: it's a little
bulky compared to others, and -- like the others -- it BUFFERS!

>    [.....]                               What we need is a syntax
>    to specify the insertion points.  Unlike the unix situation, where direction
>    of dataflow is indicated by sequence on the line, our syntax will have to
>    say whether each pipe-argument is an input or output.
>
> Pete, Unix has had that for _years_. I installed it on the machine
> you're running on now (though I can't guarantee that someone hasn't
> uninstalled it since). Try running ksh, and the syntax:
>
>       grep Nov (ls -l)
>
> should do just what you want.

Heh.  I guess it all depends what'cha mean by "unix" dunnit...  (:-))
It also seems to depend on what you mean by "ksh".  You're right --
the violet version of ksh has this feature (but boy was it hard to find
in the docs) but the one we have available for Suns doesn't, neither
does the one supplied as the standard shell for our new IBM 6000.
And Peter da Silva reported yet ANOTHER ksh (or did he mean sksh?)
syntax using '$' a few messages back.

> I like that way of doing things. The command whose output is going to
> appear in the file name substituted is in the place the name will
> appear. That means you have to look fewer places to find the
> information. When you get multiple input streams going (which I've
> done), it makes it obvious what's going on where. It also deals
> cleanly with embedding commands that embed commands.
>
Yes, it's quite neat, but it still has a strict hierarchical structure.
The sort of schemes we've been tossing around would allow connections
that would be impossible under the ksh method.  Take this hypothetical
[and doubtless dumb] requirement:

        A process P generates a stream of data that is cloned
        and fed in parallel to filter processes Q and R.

        A `diff' process takes the two outputs Q and R, and
        generates a differences stream.
                           ____
               ---- Q --->|    |
               |          |    |
           P-->|          |diff|---> result
               |          |    |
               ---- R --->|____|

This could be done with several of the PIPE command variants people have
suggested, but not with ksh embedding.  Q and R could be embedded as diff
args, but how would you get the output of P to them both?
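As a rough illustration, the diamond above can in fact be plumbed with
named pipes in POSIX shell: tee clones P's stream into two fifos, Q and R
read them by name, and diff rejoins the branches.  (All the names, and the
stand-in filters, are made up for this sketch.)

```shell
tmp=$(mktemp -d)
mkfifo "$tmp/to_q" "$tmp/to_r"
printf 'alpha\nbeta\n' | tee "$tmp/to_q" > "$tmp/to_r" &  # P, cloned two ways
tr 'a-z' 'A-Z' < "$tmp/to_q" > "$tmp/q.out" &             # filter Q
cat < "$tmp/to_r" > "$tmp/r.out" &                        # filter R (identity)
wait
# diff exits nonzero when the files differ, so don't let that abort us
diffout=$(diff "$tmp/q.out" "$tmp/r.out" || true)
rm -rf "$tmp"
echo "$diffout"
```

Which is exactly the point: the user has to name both ends of every branch,
so a PIPE command (or a visual shell) that generated the names would help.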

Of course the PIPE command would also raise the dreaded "loop" spectre,
but although loops could easily cause a lockup, programs CAN be written
to avoid this, so I don't think they should be prohibited -- just warned
against!
                                    -- Pete --

peter@sugar.hackercorp.com (Peter da Silva) (11/25/90)

In article <1990Nov24.073827.10945@agate.berkeley.edu> pete@violet.berkeley.edu (Pete Goodeve) writes:
> And Peter da Silva reported yet ANOTHER ksh (or did he mean sksh?)
> syntax using '$' a few messages back.

Well, it's a different meaning:

	grep Nov (ls -l)   means   ls -l | grep Nov
	grep Nov $(ls)     means   grep Nov `ls`    (or)    grep Nov *

The first syntax conflicts a little with AmigaOS, so I suggested keeping
the second and changing the first to @(...).

>                            ____
>                ---- Q --->|    |
>                |          |    |
>            P-->|          |diff|---> result
>                |          |    |
>                ---- R --->|____|

Visual shell, anyone?
-- 
Peter da Silva.   `-_-'
<peter@sugar.hackercorp.com>.

mwm@raven.relay.pa.dec.com (Mike (My Watch Has Windows) Meyer) (11/26/90)

In article <1990Nov24.073827.10945@agate.berkeley.edu> pete@violet.berkeley.edu (Pete Goodeve) writes:
   Heh.  I guess it all depends what'cha mean by "unix" dunnit...  (:-))
   It also seems to depend on what you mean by "ksh".  You're right --
   the violet version of ksh has this feature (but boy was it hard to find
   in the docs) but the one we have available for Suns doesn't, neither
   does the one supplied as the standard shell for our new IBM 6000.

Actually, the Unix and the ksh involved are all nearly the same.
There's a relatively standard extension providing a new pseudo-device
(/dev/fd/*). Ksh has, for quite a while, looked for that device and
turned on the '()' feature if it was there. That's really the only
difference between all those systems - whether or not the fd
pseudo-device is on them.

   And Peter da Silva reported yet ANOTHER ksh (or did he mean sksh?)
   syntax using '$' a few messages back.

Actually, Peter suggested the ksh features, using a form modified to
be acceptable in an AmigaDOS environment. I think his form is just
fine. $( ) replaces ` ` in the standard Unix shells (this is a
kshism), and @( ) replaces ( ) in ksh, as AmigaDOS already uses ()'s
for grouping in patterns. The ( ) quoting of embedded commands beats
using the three "standard" quotes because it handles embedded commands
without further work.
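The nesting advantage is easy to demonstrate: with backticks each extra
level needs another round of backslash-escaping, while $( ) composes
unchanged.  A tiny sketch (the echoed word is arbitrary):

```shell
nested=$(echo $(echo $(echo deep)))   # $( ) reads the same at every depth
legacy=`echo \`echo deep\``           # backticks need escaping one level down
echo "$nested $legacy"
```

Both produce the same word, but only one of them survives a third level of
embedding with your sanity intact.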

   Yes, it's quite neat, but it still has a strict hierarchical structure.
   The sort of schemes we've been tossing around would allow connections
   that would be impossible under the ksh method.  Take this hypothetical
   [and doubtless dumb] requirement:

	   A process P generates a stream of data that is cloned
	   and fed in parallel to filter processes Q and R.

	   A `diff' process takes the two outputs Q and R, and
	   generates a differences stream.
			      ____
		  ---- Q --->|    |
		  |          |    |
	      P-->|          |diff|---> result
		  |          |    |
		  ---- R --->|____|

   This could be done with several of the PIPE command variants people have
   suggested, but not with ksh embedding.  Q and R could be embedded as diff
   args, but how would you get the output of P to them both?

Actually, this isn't dumb. This is the kind of thing I wind up doing
all too often (except I tend to use uniq or comm instead of diff).
Typical is: for everyone in the password file in WSE, pull their mail
address from the aliases file, save that, and then verify we got
everybody. In the appropriate environment, this looks like:

grep WSE /etc/passwd | awk -F: ' { print $1 } ' | sort |
	???? |			# this needs to be the split
	fgrep -f /dev/stdin /usr/lib/aliases | sed '/ /d' |
	tee aliaslist | sed 's/.*://' | sort |
	uniq -u - (????)	# and this is where the split needs to rejoin

The result of this would be a list of everyone in the password file who
doesn't have a mail machine listed in the aliases file (given assumptions
I happen to know are true about our aliases file, of course), along
with the alias list in a file for later perusal.

The script I mentioned for AmigaDOS handled this just fine using
PIPE:. In reality, you don't care if a pipe buffers in this instance.
In fact, you probably prefer it - fewer context switches, because one
process keeps chewing stuff until it blocks, then the other starts in
on it.

So, do you have a syntax that correctly deals with this? As far as I'm
concerned, PIPE: does just fine. The user needs to specify when things
rejoin, and has to provide a name at both ends to do that. Intelligent
name choices should prevent clashes. Using PIPE: to indicate that
this is a pipe instead of a real file is acceptable.

BTW, the people at GIT did something along these lines for the
Software Tools on Primos nearly 20 years ago. All I've heard is
rumors, though.

	<mike

lrg7030@uxa.cso.uiuc.edu (Loren Rittle) (11/26/90)

Pete Goodeve writes on (Nov 24, 1990):


>In  <1990Nov19.031449.25071@ux1.cso.uiuc.edu> (19 Nov),
>Loren  Rittle (lrg7030@uxa.cso.uiuc.edu) writes:
>>
>> While I do agree that many standard commands as shipped from Commodore
>> (under 1.3 and before) don't ``pipe'' well, all the GNU unix tools do.
>> All the so called buggy ARP commands that I use everyday with no
>> problem support reading from standard-in and writing to standard-out.
>
>Yes, agreed, but I think what I was trying to put across is that the
>`filter' model isn't appropriate for a lot of operations you can otherwise
>do with pipes.  For example the other day I wanted to check the changes
>I'd made in a file I was editing before I saved it, so I simply dumped it
>out to a pipe and ran `dif' on that and the original.  (Only one of the
>inputs was a pipe, so in theory I suppose it could have been stdin, but
>even unix `diff' doesn't handle that possibility!)

OK, I see what point you are making now; I concur with you about
this.  I just want it to be clear that there is nothing wrong with
AmigaOS which inhibits the use of `filters' and `pipes' (unnamed,
unix style!).  I do agree that the named pipe mechanism is quite
powerful, but I want unnamed pipes also (I currently have them and
use them quite often on my Amiga).  No good reason why we can't have
both, right?  With the WShell, I get quite reliable unnamed pipes.
I, at least, would only want an extension to this; I would not be
willing to give up my easy-to-use unnamed pipes for what is turning
into a big mess (another's observation [yours perhaps] after looking
at the discussion, so take no offense, none meant)!  Let's have both.

>> [.....]
>> By the way, many commands that won't handle piping won't work
>> any better under your new idea. The MORE command as distributed
>> by Commodore is a prime example:
>>
>> MORE <PIPE:AA
>> and
>> MORE PIPE:AA
>>
>> using any pipe device you care to use causes the MORE command to
>> get quite upset, at least under the 1.3 version of MORE. [.....]
>> LESS, another non-standard standard command, is the only pager
>> that I have found that supports piping the way it should.
>
>How very odd.  On my system, MORE works reasonably happily with a pipe
>(and I know Carolyn intended it to...) -- except that it double-spaces
>for some reason.  (It does that also if you give it input from a console
>window.)  LESS (the version I have, anyhow), on the other hand, objects
>that it "Can't accept input from a terminal".  (Whereas you CAN pipe to
>less on unix -- it just won't backtrack past its current buffer.)

I guess the point I was trying to make is that if the standard
commands and commands written by others don't work as filters (as you say)
because they don't like reading from or writing to pipes, then
they will not work with the new ideas for multiple named pipes either.
I still think that this is a correct statement, but I would also
argue that most commands that can support piping (maybe not
Commodore's 1.x commands) do.  So the whole point is moot in my mind.
BTW, the double-space problem is the same one I encountered.  This is
not a feature!  It's a bug!  I was just about to type, ``Ahh, the
reason your version of LESS does not support piping is that you
are most likely using version 1.3 from later Fish disks,'' but
then I checked my latest version of Less, v2.0ljr, and guess what:
it does not work with the PIPE: device!  But it does work just
fine with WShell's PIP: device!  I would be willing to bet that
PIPE: files look like interactive streams, as this is the only
check LESS does on a file before allowing it to be viewed.
(BTW, this is the same check that is done in the UNIX version
of LESS!  Right or wrong, this is the restriction LESS puts on a file,
and it sounds OK to me.)  If this is the case, then it is the PIPE:
device that has a bug, not the LESS program.  I will look into
this problem and report back.
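If Loren's bet is right, the PIPE: device answers `yes' to an
interactive-stream query -- which on unix is the isatty() test.  A minimal
Bourne-shell sketch of that one check (the function name is mine, and
`[ -t 0 ]` stands in for what less does in C):

```shell
#!/bin/sh
# Mimic the only check LESS makes before viewing its input:
# if standard input looks like an interactive terminal, refuse it;
# a pipe or redirected file is fine.
check_input() {
    if [ -t 0 ]; then
        echo "refuse: input is a terminal"
    else
        echo "ok: input is a pipe or file"
    fi
}

# Fed from a pipe, the check should pass.
echo hello | check_input
```

A pipe device that reported itself as a terminal to this test would be
rejected by less even though it is a perfectly good stream -- which is
exactly the bug being hypothesized.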

>Could it be, I wonder, that your "Bug-Free" ARP system is not quite so..?
>(:-)) (:-))  Actually, I've run into little snags of this kind every time
>I've tried ARP -- or tried to use scripts that run fine on my system
>on somebody else's with ARP installed -- so that is why it is NOT on mine...
>[but that's another thread, isn't it.]
>
>                                            -- Pete --

[I assume that you are referring to the MORE problems...]
Humm, (:-) :-)) guess not as I have looked into the piping problems quite
extensively with and without ARP...

Loren J. Rittle
[Free plug: my version of LESS only opens a window if started from the
WorkBench; otherwise it uses the console window from which it was started.
It fully supports unnamed piping with the PIP: device under the
WShell.  (And as soon as I get a chance to look into the problems
involved, named piping from the PIPE: device too; I currently think
that the fix will involve a kludge, and I hate kludges...)]

ben@epmooch.UUCP (Rev. Ben A. Mesander) (11/27/90)

>In article <1990Nov24.073600.10802@agate.berkeley.edu> pete@violet.berkeley.edu (Pete Goodeve) writes:
>
>In  <1990Nov19.031449.25071@ux1.cso.uiuc.edu> (19 Nov),
>Loren  Rittle (lrg7030@uxa.cso.uiuc.edu) writes:
>>
>> [.....]
>> By the way, many commands that won't handle piping won't handle
>> any better under your new idea. The MORE command as distributed
>> by Commodore is a prime example:
>>
>> MORE <PIPE:AA
>> and
>> MORE PIPE:AA
>>
>> using any pipe device you care to use causes the MORE command to
>> get quite upset, at least under the 1.3 version of MORE. [.....]
>> LESS, another non-standard standard command is the only pager
>> that I have found that supports pipeing the way it should.
>
>How very odd.  On my system, MORE works reasonably happily with a pipe
>(and I know Carolyn intended it to...) -- except that it double-spaces
>for some reason.  (It does that also if you give it input from a console
>window.)  LESS (the version I have. anyhow), on the other hand, objects
>that it "Can't accept input from a terminal".  (Whereas you CAN pipe to
>less on unix -- it just won't backtrack past its current buffer.)

The version of less that I am using for the Amiga accepts input fine from
a pipe... just like the UNIX version.

>Could it be, I wonder, that your "Bug-Free" ARP system is not quite so..?
>(:-)) (:-))  Actually, I've run into little snags of this kind every time
>I've tried ARP -- or tried to use scripts that run fine on my system
>on somebody else's with ARP installed -- so that is why it is NOT on mine...
>[but that's another thread, isn't it.]

I don't think so. I use ARP commands exclusively, and the MORE command 
accepts input from a pipe just fine, albeit with the same weird 
double-spacing that you get without ARP. I don't think this is one of
the things that ARP has broken. Please, people, don't blame every
little problem you may be having on ARP - some of us use it rather
reliably. It's like the PC users who blame everything on viruses...

>                                            -- Pete --

--
| ben@epmooch.UUCP   (Ben Mesander)       | "Cash is more important than |
| ben%servalan.UUCP@uokmax.ecn.uoknor.edu |  your mother." - Al Shugart, |
| !chinet!uokmax!servalan!epmooch!ben     |  CEO, Seagate Technologies   |

pete@violet.berkeley.edu (Pete Goodeve) (11/28/90)

In  <1990Nov26.103402.2714@ux1.cso.uiuc.edu> (26 Nov),
Loren  Rittle (lrg7030@uxa.cso.uiuc.edu) writes:
> [responding to my comments about the usefulness of 'non-linear' pipes...]
> OK, I see what point you are making now, I concur with you about
> this.  I just want it to be clear that there is nothing wrong with
> AmigaOS which inhibits the use of `filters' and `pipes' (unnamed
> unix style!). I do agree that the named pipe mechanism is quite
> powerful, but I want unnamed pipes also (I currently have them and
> use them quite often on my Amiga).  [....]   Let's have both.

I think really there's not much distance between us.  I certainly wouldn't
want to destroy the facility for in-line pipes in commands, for those
who have it.  I don't see any use in throwing away a simple mechanism
that makes sense for a lot of purposes, just for the sake of a clumsier
one that is SOMETIMES useful!  I also don't see why we can't have both,
especially if the added facilities are through a PIPE command external
to any shell.

>
|> [...discussing MORE and LESS's failings with pipes.]
>
> I guess the point I tried to make is that if all the standard
> commands and commands written by others don't work as filter (as you say)
> because they don't like reading from or writing to pipes then
> they will not work with the new ideas for multiple named pipes.

Sorry -- I wasn't trying to argue against that.  It just seemed
that your experience with those programs was contrary to mine.

> [......]
> BTW, The double-space problem is the same one I encountered.  This is
> not a feature! It's a bug!

For sure!  I just thought you meant a total failure!

And, as to my further comments...
|> Could it be, I wonder, that your "Bug-Free" ARP system is not quite so..?
|> (:-)) (:-)) [....]
>
> Humm, (:-) :-)) guess not as I have looked into the piping problems quite
> extensively with and without ARP...
>

And in  <ben.3578@epmooch.UUCP> (27 Nov),
Rev. Ben A. Mesander (ben@epmooch.UUCP) writes:
>
> I don't think so. I use ARP commands exclusively, and the MORE command
> accepts input from a pipe just fine, albeit with the same weird
> double-spacing that you get without ARP. I don't think this is one of
> the things that ARP has broken. Please, people, don't blame every
> little problem you may be having on ARP - some of us use it rather
> reliably. It's like the PC users who blame everything on viruses...
>

Yes. I apologise for my somewhat hasty conclusions.

                                            -- Pete --

peter@sugar.hackercorp.com (Peter da Silva) (11/28/90)

In article <MWM.90Nov25180550@raven.relay.pa.dec.com> mwm@raven.relay.pa.dec.com (Mike (My Watch Has Windows) Meyer) writes:
+ PIPE=/tmp/pip$$; mknod $PIPE p
> grep WSE /etc/passwd | awk -F: ' { print $1 } ' | sort |
| 	tee $PIPE |	# this needs to be the split
> 	fgrep -f /dev/stdin /usr/lib/aliases | sed '/ /d' |
> 	tee aliaslist | sed 's/.*://' | sort |
| 	uniq -u - $PIPE	# and this is where the split needs to rejoin
+ rm $PIPE

I know, not as pretty... On the Amiga this could be handled quite easily
just by making PIPE=pipe:temp. But the rest of the command is more manageable
using the UNIX pipe syntax.
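The split-and-rejoin pattern in Mike's script can be shown more minimally.
This is a sketch, not his script: the filenames are illustrative, the data
is toy, and the FIFO reader runs in the background so that neither side
blocks forever at open() (the hazard with this pattern):

```shell
#!/bin/sh
# Split a pipeline with tee into a named pipe, then rejoin downstream.
PIPE=/tmp/split.$$
ORIG=/tmp/orig.$$
DOUBLED=/tmp/doubled.$$
mkfifo "$PIPE"

# Side branch, in the background: read the FIFO and double each number.
( awk '{ print $1 * 2 }' < "$PIPE" > "$DOUBLED" ) &

# Main branch: tee a copy of the stream into the FIFO, keep the original.
printf '1\n2\n3\n' | tee "$PIPE" > "$ORIG"
wait

# Rejoin the two branches.
joined=$(cat "$ORIG" "$DOUBLED")
rm -f "$PIPE" "$ORIG" "$DOUBLED"
echo "$joined"
```

The background subshell is what makes the FIFO safe here: each end of a
named pipe blocks on open() until the other end shows up, so a strictly
sequential script that writes the FIFO and only later reads it can deadlock.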
-- 
Peter da Silva.   `-_-'
<peter@sugar.hackercorp.com>.

mwm@raven.relay.pa.dec.com (Mike (My Watch Has Windows) Meyer) (11/29/90)

In article <7151@sugar.hackercorp.com> peter@sugar.hackercorp.com (Peter da Silva) writes:


   In article <MWM.90Nov25180550@raven.relay.pa.dec.com> mwm@raven.relay.pa.dec.com (Mike (My Watch Has Windows) Meyer) writes:
   + PIPE=/tmp/pip$$; mknod $PIPE p
   > grep WSE /etc/passwd | awk -F: ' { print $1 } ' | sort |
   | 	tee $PIPE |	# this needs to be the split
   > 	fgrep -f /dev/stdin /usr/lib/aliases | sed '/ /d' |
   > 	tee aliaslist | sed 's/.*://' | sort |
   | 	uniq -u - $PIPE	# and this is where the split needs to rejoin
   + rm $PIPE

   I know, not as pretty... On the Amiga this could be handled quite easily
   just by making PIPE=pipe:temp. But the rest of the command is more manageable
   using the UNIX pipe syntax.

Well, with pipe: it works just fine. But using a real file for $PIPE
means you buy back many of the problems pipes were invented to get
around.
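Mike's point is observable: with a FIFO the writer and reader run
concurrently and the data never needs to land on disk, whereas with a plain
temp file the writer must finish (and the file must be cleaned up) before
the reader can safely start.  A small sketch, assuming mkfifo and an
illustrative /tmp name:

```shell
#!/bin/sh
# Demonstrate concurrent writer/reader through a named pipe (FIFO).
PIPE=/tmp/fifo.$$
mkfifo "$PIPE"

# Writer, in the background: blocks at open() until a reader appears.
( printf 'one\ntwo\nthree\n' > "$PIPE" ) &

# Reader consumes the stream as it is produced -- no intermediate file.
result=$(sort "$PIPE")
wait
rm -f "$PIPE"
echo "$result"
```

Swap the FIFO for a regular file and you are back to sequencing the two
commands by hand and remembering to delete the temporary -- the very
bookkeeping pipes were invented to eliminate.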

	<mike
--

FelineGrace@cup.portal.com (Dana B Bourgeois) (11/30/90)

Pete G. says he uses ARP and has no problems.  I also use ARP without
problems.  Something I have seen in the posts of those who have problems
with ARP is that they use the ARP shell.  I don't; I use WShell.  I wonder
if other people who have no problems with ARP also do so without the ARP
shell.  Maybe that is where the problem is: not the library or the
commands, but the shell.

Pete?

Dana

mwm@raven.relay.pa.dec.com (Mike (My Watch Has Windows) Meyer) (12/01/90)

In article <36371@cup.portal.com> FelineGrace@cup.portal.com (Dana B Bourgeois) writes:
   Pete G. says he uses ARP and has no problems.  I also use ARP without
   problems.  Something I have seen in the posts of those who have problems
   with ARP is that they use the ARP shell.  I don't.  I use WShell.  I wonder
   if other people who also have no problems with ARP do without the ARP
   shell.  Maybe that is where the problem is.  Not the library or the
   commands, the shell.

   Pete?

No, I'm not Pete. But I had problems with ARP (pre 1.3) without the
ARP shell. The problems were supposedly fixed in 1.3, but by then I
had a hard disk. Wasn't worth installing.

	<mike

--

lphillips@lpami.wimsey.bc.ca (Larry Phillips) (12/01/90)

In <36371@cup.portal.com>, FelineGrace@cup.portal.com (Dana B Bourgeois) writes:
>Pete G. says he uses ARP and has no problems.  I also use ARP without
>problems.  Something I have seen in the posts of those who have problems
>with ARP is that they use the ARP shell.  I don't.  I use WShell.  I wonder
>if other people who also have no problems with ARP do without the ARP
>shell.  Maybe that is where the problem is.  Not the library or the
>commands, the shell.

I use WShell as well, and I have only had problems with a few commands.

-larry

--
The only things to survive a nuclear war will be cockroaches and IBM PCs.
+-----------------------------------------------------------------------+ 
|   //   Larry Phillips                                                 |
| \X/    lphillips@lpami.wimsey.bc.ca -or- uunet!van-bc!lpami!lphillips |
|        COMPUSERVE: 76703,4322  -or-  76703.4322@compuserve.com        |
+-----------------------------------------------------------------------+