[comp.unix.wizards] Detecting Pipe Using Bourne Shell

ifenn%ee.surrey.ac.uk@nss.cs.ucl.ac.uk (Ian Fenn) (04/06/89)

Hello all. I wonder if one of you can help me. I have a Bourne shell program 
which maintains a database of telephone numbers. If I enter the program with 
no arguments:

% phone

it displays a main menu which offers options to change entries,
delete entries, and so on. If I enter the program with arguments:

% phone Bloggs

then it searches the database of telephone numbers (using grep) for the 
arguments (in this example it would look for Bloggs). I test for arguments 
with:

if test $# -ne 0
then	.....search for arguments in database.....
	.....then exit
fi
....rest of program (i.e. Main Menu).

The only problem with this is that I cannot pipe the output from another 
program into it because it drops into the main menu and out again! This is 
probably due to the test for arguments. Can anyone therefore tell me how to 
detect a pipe using sh, so that the following will work? Or suggest another 
way round the problem?

% cat datafile | phone 

Thanks in advance.

--
Ian Fenn,                     +------------------------------------------------+
Computer System Technician,   |  Network Address : ifenn@ee.surrey.ac.uk       |
                              |  Telephone       : +44 483 571281  ext. 9104   |
Department of                 |  Direct line     : +44 483 509104              |
    Electrical Engineering,   |  Telex           : +44 859331                  |
University Of Surrey,         |  Fax             : +44 483 34139               |
Guildford,                    +------------------------------------------------+
Surrey.                       | "It is easier to change the specification to   |
GU2 5XH.                      |       fit the program than vice versa."        |
                              +------------------------------------------------+

kremer@cs.odu.edu (Lloyd Kremer) (04/07/89)

In article <18992@adm.BRL.MIL> ifenn%ee.surrey.ac.uk@nss.cs.ucl.ac.uk (Ian Fenn) writes:

>If I enter the program with 
>no arguments:
>it displays a main menu which offers options to change entries,
>delete entries, and so on. If I enter the program with arguments
>then it searches the database (using grep) for the arguments
>using:

>if test $# -ne 0
>then	.....search for arguments in database.....
>	.....then exit
>fi
>....rest of program (i.e. Main Menu).

>The only problem with this is that I cannot pipe the output from another 
>program into it because it drops into the main menu and out again!
>% cat datafile | phone 


You are trying to interpret "no arguments" as two entirely different directives:
a) present menu and exit
and
b) read stdin instead of argument list for search targets

You must devise a way of differentiating between these meanings.  Many UNIX(tm)
programs treat an argument of  -  as meaning 'read stdin instead of files'.

How about something like this:

	#!/bin/sh
	# phone lookup utility

	if [ $# = 0 ]
	then
		present menu
	elif [ "$1" = - ]
	then
		while read i
		do
			grep "$i" database
		done
	else
		for i in $*
		do
			grep "$i" database
		done
	fi
	exit 0

Or, to be a bit more elegant:

	#!/bin/sh
	# phone lookup utility

	if [ $# = 0 ]
	then
		present menu
	else
		if [ "$1" = - ]
		then
			set `while read i;do echo "$i";done`
		fi
		for i in $*
		do
			grep "$i" database
		done
	fi
	exit 0


Either of these should allow
	some_program | phone -
to work properly.
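
In the original poster's terms, that is simply

	cat datafile | phone -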

					Hope this helps,

					Lloyd Kremer
					{uunet,sun,...}!xanth!kremer

chris@mimsy.UUCP (Chris Torek) (04/08/89)

In article <8385@xanth.cs.odu.edu> kremer@cs.odu.edu (Lloyd Kremer) writes:
>You are trying to interpret "no arguments" as two entirely different
>directives:
>a) present menu and exit
>and
>b) read stdin instead of argument list for search targets
>
>You must devise a way of differentiating between these meanings.

Correct analysis, and a workable solution.  However:

>Many UNIX(tm) programs treat an argument of  -  as meaning 'read
>stdin instead of files'.

what is really needed here is /dev/stdin.  Alas, neither SysV (at least
SysV R 1, 2, 2.2, for 3B2, 3B5, Unix-PC, ... how many standards do we
have these days anyway? :-> ) nor BSD (at least I can count the BSD
revisions <-: ) comes with a /dev/stdin.

The trend (as shown by /dev/fd/*, /proc, network file systems, and
so forth) seems to be to put everything that is visible into the file
name space ... and I think it is the right trend.  (Too bad about
S3 shm and BSD sockets.)
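
If such an entry existed, anything that wants a file name could simply be
handed the pipe by name.  Purely as an illustration (assuming a /dev/stdin
or /dev/fd/0 entry, and an fgrep with the usual -f option), the whole
read-and-grep loop in the earlier scripts could collapse to

	fgrep -f /dev/stdin database

with no special-casing of "-" inside the script at all.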
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris@mimsy.umd.edu	Path:	uunet!mimsy!chris

leo@philmds.UUCP (Leo de Wit) (04/08/89)

In article <18992@adm.BRL.MIL> ifenn%ee.surrey.ac.uk@nss.cs.ucl.ac.uk (Ian Fenn) writes:
    []
|The only problem with this is that I cannot pipe the output from another
|program into it because it drops into the main menu and out again! This is
|probably due to the test for arguments. Can anyone therefore tell me how to
|detect a pipe using sh, so that the following will work? Or suggest another
|way round the problem?

if test -t 0
then
    # stdin from terminal
else
    # stdin not from terminal
fi

    Leo.

peter@ficc.uu.net (Peter da Silva) (04/08/89)

if [ -t 0 ]
then
	: input is a terminal, display the menu.
else
	: input is a file or pipe, sklorb up some names.
fi
-- 
Peter da Silva, Xenix Support, Ferranti International Controls Corporation.

Business: uunet.uu.net!ficc!peter, peter@ficc.uu.net, +1 713 274 5180.
Personal: ...!texbell!sugar!peter, peter@sugar.hackercorp.com.

nichols@cbnewsc.ATT.COM (robert.k.nichols) (04/09/89)

In article <999@philmds.UUCP> leo@philmds.UUCP (Leo de Wit) writes:
>
>if test -t 0
>then
>    # stdin from terminal
>else
>    # stdin not from terminal
>fi

Programs and procedures that use this means of distinguishing their
input source are one of my pet peeves.  Sometimes I want to interpose an
editing filter between my terminal and stdin of some program, giving me
the ability to repeat previous input lines (perhaps with modifications),
include the contents of an existing file in the input stream, etc.  Any
program that uses "test -t 0" (or an equivalent) will break in such an
environment.

Acceptable alternatives are:
	1.  Use an argument of "-" to mean "read stdin as though it were
	    a file."
	2.  Go ahead and use the "test -t 0" mechanism, but provide for a
	    "-i" flag to force interactive mode when something is hiding
	    the fact that stdin is really a terminal (as sketched below).
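
A sketch of alternative 2 (the "-i" flag name follows the suggestion above;
everything else is illustrative):

	# "-i" forces interactive mode even when an editing filter or other
	# program is interposed; otherwise fall back on the isatty test.
	if [ "$1" = -i ]
	then
		interactive=yes
		shift
	elif [ -t 0 ]
	then
		interactive=yes
	else
		interactive=no
	fi
	# ...then branch on $# and $interactive as before: search any
	# arguments, present the menu, or read patterns from stdin.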
-- 
.sig included at no extra charge.          |  Disclaimer: My mind is my own.
Cute quotes and batteries sold separately. |  Copyright 1989 Robert K. Nichols.
                                           |  For USENET use only.

ccdn@levels.sait.edu.au (DAVID NEWALL) (04/11/89)

In article <18992@adm.BRL.MIL>, ifenn%ee.surrey.ac.uk@nss.cs.ucl.ac.uk (Ian Fenn) writes:
> [ how can you tell if a program's input comes from a pipe? ]

I recommend testing to see if stdin is a terminal.  I believe the following
will do what you want:

        if [ -t 0 -a $# -eq 0 ]; then
                echo menu
                ...
        else
                ...
        fi

(I assume you only want a menu when the program is run from a terminal)
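
Filled in for the original phone script, it might look something like this
(a sketch only; it assumes the numbers live in a file called "database" and
that each line arriving on a pipe is a separate search pattern):

        if [ -t 0 -a $# -eq 0 ]; then
                echo menu               # interactive and no arguments
                ...
        elif [ $# -gt 0 ]; then
                for i in "$@"; do       # arguments are the search patterns
                        grep "$i" database
                done
        else
                while read i; do        # patterns arrive on stdin
                        grep "$i" database
                done
        fi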

David Newall                     Phone:  +61 8 343 3160
Unix Systems Programmer          Fax:    +61 8 349 6939
Academic Computing Service       E-mail: ccdn@levels.sait.oz.au
SA Institute of Technology       Post:   The Levels, South Australia, 5095

ske@pkmab.se (Kristoffer Eriksson) (04/14/89)

In article <457@cbnewsc.ATT.COM>, nichols@cbnewsc.ATT.COM (robert.k.nichols) writes:
> In article <999@philmds.UUCP> leo@philmds.UUCP (Leo de Wit) writes:
> Programs and procedures that use this means of distinguishing their
> input source are one of my pet peeves.  Sometimes I want to interpose an
> editing filter between my terminal and stdin of some program, giving me
> the ability to repeat previous input lines (perhaps with modifications),
> include the contents of an existing file in the input stream, etc.  Any
> program that uses "test -t 0" (or an equivalent) will break in such an
> environment.

I would like to see this problem with piping into (or out of) programs that
want to use ioctl calls on their input (or output) permanently cured some day.

I think the cure would be to make it possible for ioctls to be transferred
through the pipe (or actually more likely, through some parallel but unbuffered
mechanism), and read by the process at the other end of the pipe with a new
system call that could be named ioctlread(). That process could then emulate
the requested ioctl or pass it on to some other file or pipe. This would be
perfectly suited for editing filters, windowing systems, networked terminal
sessions, and more, without using ptys or System V stream modules.

Ptys are not equivalent to these extended pipes (let's call them "e-pipes").
With e-pipes, all ioctls can be passed through the filter process or processes
all the way to the actual devices they filter, thus not limiting them to
handling tty-style devices only, with only the ioctls implemented by the pty
driver (better generality). You could also have tty modes on remotely logged-in
terminals in a network propagate (with some suitable protocol) from the
remote host to the terminal's local host's tty driver (or editing filter),
to do all input processing near the terminal (increased efficiency), all
without any special kernel drivers.

To further increase the usefulness of e-pipes, eliminating the need for
filter programs to know anything about e-pipes, there could be a default
"by-pass" setup where ioctls coming in on stdin automatically were passed
on to stdout, and vice versa, until the process explicitly requests to
receive the ioctls itself. This would only involve checking and optionally
following a "by-pass pointer" in the kernel, when sending ioctls. Maybe
the shell that started the pipe could arrange the by-pass according to how
the pipe was specified, using a new bypass() or shunt() system call.

With this facility it would be possible to e.g. use good old tr in a
pipe between your terminal and any other program, like editors, mail
readers, pagers, anything, without trouble. Today you can't.

I don't know enough to compare e-pipes to stream modules, but from what I've
seen on the net, they seem to be designed mostly for use from within the
kernel, and complicated to set up for use with user processes. Maybe the
shell could be made to set up streams similar to e-pipes, but I think there
would be problems/limitations.

What do you say? I want pipes usable with modern, interactive, screen
oriented, modular programs, in addition to the usual old text processing.
-- 
Kristoffer Eriksson, Peridot Konsult AB, Hagagatan 6, S-703 40 Oerebro, Sweden
Phone: +46 19-13 03 60  !  e-mail: ske@pkmab.se
Fax:   +46 19-11 51 03  !  or ...!{uunet,mcvax}!sunic.sunet.se!kullmar!pkmab!ske

bernsten@phoenix.Princeton.EDU (Dan Bernstein) (04/15/89)

In article <910@pkmab.se> ske@pkmab.se (Kristoffer Eriksson) writes:
> I would like to see this problem with piping into (or out of) programs that
> want to use ioctl calls on their input (or output) permanently cured some day.

I've written a program ``pty'', at version 1.2 or so, that suffices for my
needs; I can do things like

  tr a b | pty /usr/ucb/vi | tee out

and get sensible results. pty runs under BSD and uses pseudo-terminals;
you can imitate script with ``pty $SHELL'', or use it far more flexibly.
I'll post it soon, when I'm happy with the option set.

> I think the cure would be to make it possible for ioctls to be transferred
> through the pipe (or actually more likely, through some parallel but unbuffered
> mechanism), and read by the process at the other end of the pipe with a new
> system call that could be named ioctlread(). That process could then emulate
> the requested ioctl or pass it on to some other file or pipe. This would be
> perfectly suited for editing filters, windowing systems, networked terminal
> sessions, and more, without using ptys or System V stream modules.

System V streams are more general than what you propose and can imitate
your e-pipes quite nicely. ptys are less general but satisfactorily
allow programs to use ioctl() as usual, even in the middle of a pipe.
If BSD goes past ptys, it should go all the way to streams.

> With this facility it would be possible to e.g. use good old tr in a
> pipe between your terminal and any other program, like editors, mail
> readers, pagers, anything, without trouble. Today you can't.

Oh, c'mon. I can do that without trouble, today and yesterday and many
days before that. And I believe you're overoptimistic in saying that
e-pipes would not need kernel drivers; they'd certainly need quite a
bunch of other extensions to the kernel as you describe them.

> What do you say? I want pipes usable with modern, interactive, screen
> oriented, modular programs, in addition to the usual old text processing.

I'll make sure to email you pty.c before I post it.

---Dan Bernstein, bernsten@phoenix.princeton.edu

mike@thor.acc.stolaf.edu (Mike Haertel) (04/15/89)

In article <910@pkmab.se> ske@pkmab.se (Kristoffer Eriksson) writes:
>I would like to see this problem with piping into (or out of) programs that
>want to use ioctl calls on their input (or output) permanently cured some day.
>
>I think the cure would be to make it possible for ioctls to be transferred
>through the pipe (or actually more likely, through some parallel but unbuffered
>mechanism), and read by the process at the other end of the pipe with a new
>system call that could be named ioctlread(). That process could then emulate
>the requested ioctl or pass it on to some other file or pipe. This would be
>perfectly suited for editing filters, windowing systems, networked terminal
>sessions, and more, without using ptys or System V stream modules.

This has already been done, in ninth edition streams.  A ninth edition pipe
is a bidirectional stream; a module can be pushed into it that will turn
ioctl requests (and other control messages) into formatted data blocks,
and vice versa.

>Ptys are not equivalent to these extended pipes (let's call them "e-pipes").
>With e-pipes, all ioctls can be passed through the filter process or processes
>all the way to the actual devices they filter, thus not limiting them to
>handling tty-style devices only, with only the ioctls implemented by the pty
>driver (better generality). You could also have tty modes on remotely logged-in
>terminals in a network propagate (with some suitable protocol) from the
>remote host to the terminal's local host's tty driver (or editing filter),
>to do all input processing near the terminal (increased efficiency), all
>without any special kernel drivers.

All of this is already done in ninth edition, with streams.  Let's hope that
Berkeley gets these ideas into 4.4BSD . . .

> [ . . . stuff deleted . . . ]
>
>I don't know enough to compare e-pipes to stream modules, but from what I've
>seen on the net, they seem to be designed mostly for use from within the
>kernel, and complicated to set up for use with user processes. Maybe the
>shell could be made to set up streams similar to e-pipes, but I think there
>would be problems/limitations.

Setting up a slave process with (v9) streams is no more difficult than
setting up a slave process with traditional Berkeley ptys.  There is
no reason that you couldn't invent a shell syntax to do this sort of
thing, except that it is too specialized and rarely used to deserve
special shell syntax.

	pipe(fds);
	ioctl(fds[0], PUSH, mesg_ld);	/* push the module that turns control
					   messages into ordinary data blocks */
	if (fork()) {
		/* parent: keep the near end of the pipe */
		close(fds[1]);
		do_fancy_processing_on(fds[0]);
	} else {
		/* child: make the far end its standard fds and exec */
		close(fds[0]);
		dup2(fds[1], 0);
		dup2(fds[1], 1);
		dup2(fds[1], 2);
		close(fds[1]);
		execl("something_to_be_run_with_controlled_standard_fds",
		      "hello, world", (char *)0);
		_exit(1);
	}

Disclaimer: I don't have a v9 system to try this on.  Unfortunately.
Real v9 people should feel free to rip me apart on details.

Oh well, maybe we can get this into GNU.
-- 
Mike Haertel <mike@stolaf.edu>
In Hell they run VMS.

allbery@ncoast.ORG (Brandon S. Allbery) (04/21/89)

As quoted from <910@pkmab.se> by ske@pkmab.se (Kristoffer Eriksson):
+---------------
| In article <457@cbnewsc.ATT.COM>, nichols@cbnewsc.ATT.COM (robert.k.nichols) writes:
| > In article <999@philmds.UUCP> leo@philmds.UUCP (Leo de Wit) writes:
| > Programs and procedures that use this means of distinguishing their
| > input source are one of my pet peeves.  Sometimes I want to interpose an
| 
| I would like to see this problem with piping into (or out of) programs that
| want to use ioctl calls on their input (or output) permanently cured some day.
| 
| I think the cure would be to make it possible for ioctls to be transferred
| through the pipe (or actually more likely, through some parallel but unbuffered
| mechanism), and read by the process at the other end of the pipe with a new
| system call that could be named ioctlread(). That process could then emulate
| the requested ioctl or pass it on to some other file or pipe. This would be
| perfectly suited for editing filters, windowing systems, networked terminal
| sessions, and more, without using ptys or System V stream modules.
+---------------

If AT&T hadn't introduced the I_STR silliness into Streams and instead had
elected to pass otherwise-unrecognized ioctls along the Stream, you could do
this with a Stream.  I expect that this will be in V.4 and *is* in SunOS4;
building a tty driver with I_STR isn't exactly the way to be compatible, so
I think it's safe to assume that Streams now passes random ioctls().  Given
that, a raw Stream should be able to send and receive ioctls with the
standard Streams message calls (I hope! -- I can imagine the Stream head
refusing to accept unknown IOCTL packets, but there should be a way to
request that they be passed through).

With the ability to pass ioctls through a Stream, Streams-based networks can
be designed to do local terminal handling as you suggested, and you can
write a version of script(1) which doesn't need to have an intelligent
pty-style device in the loop (I assume that causes some kernel overhead); it
can just dump the ioctls of the child to its stdin.  Seems more efficient to
me.

Of course, all this assumes that AT&T/Sun decides to do something rational.
The jury's still out on that one; I won't believe that they did it until I, or
someone I trust on the subject (say, Chris Torek or Doug Gwyn), actually
try it on a V.4 system.


++Brandon
-- 
Brandon S. Allbery, moderator of comp.sources.misc	     allbery@ncoast.org
uunet!hal.cwru.edu!ncoast!allbery		    ncoast!allbery@hal.cwru.edu
      Send comp.sources.misc submissions to comp-sources-misc@<backbone>
NCoast Public Access UN*X - (216) 781-6201, 300/1200/2400 baud, login: makeuser

gwyn@smoke.BRL.MIL (Doug Gwyn) (04/22/89)

In article <13589@ncoast.ORG> allbery@ncoast.UUCP (Brandon S. Allbery) writes:
>I think it's safe to assume that Streams now passes random ioctls().

The fundamental problem is that ioctls typically have associated data
structures, and if their format is unknown (as would be the case for
ioctls unknown to the local system), there is no way to ensure that
the data would not be mangled by the time it reached a remote system's
ioctl handlers.  The whole ioctl scheme needs rethinking for
heterogeneous networked environments.  I suspect SVR4 will use something
like XDR for passing ioctls over stream connections, but that doesn't
really solve the problem.  All the really good solutions I've been able
to think of are fundamentally incompatible with existing practice.