[net.unix] reading and writing to another process

chongo@nsc.UUCP (09/28/83)

how do i do the following:

   write a program foo which establishes both a read and a write pipe
   to the program bar.  bar is in binary form.  bar was written to
   read/write stdin and stdout.  (i.e., i can't modify bar to do the job)

and:

   same setup except that i want the program foo to read stderr of bar too.

chongo /\../\

moss%brl-vld@sri-unix.UUCP (09/29/83)

From:      Gary S. Moss (301)278-6647 <moss@brl-vld>

No problem.

1)  Set up two pipes using pipe(2), one for sending to "bar"
and one for receiving from "bar".

2)  Use fork(2) to fork a process.

3)  In the child, close stdin and stdout and use fcntl(2)'s F_DUPFD to
make the read end of the first pipe the new stdin and the write end of
the 2nd pipe the new stdout, close the original pipe descriptors, then
execl(2) "bar".

4)  In the parent, write on the 1st pipe's descriptor (directly with
write(2), or wrapped with fdopen(3S)), and read from the 2nd pipe the
same way.

Example:

#include <stdio.h>
#include <fcntl.h>
#include <errno.h>

typedef struct {
	int	rd;
	int	wr;
} Pipe;		/* matches pipe(2)'s layout: fd[0] = read end, fd[1] = write end */
Pipe	pipe1fd, pipe2fd;

main() {
	int	pid1, status;
	FILE	*wrPipeFp, *rdPipeFp;

	if( pipe( (int *)&pipe1fd ) == -1 ) {
		perror( "main()" );
		exit( errno );
	}
	if( pipe( (int *)&pipe2fd ) == -1 ) {
		perror( "main()" );
		exit( errno );
	}
	if( (pid1 = fork()) == -1 ) {
		perror( "main()" );
		exit( errno );
	} else	if( pid1 == 0 ) {
		/* C h i l d :  read from 1st pipe, write to 2nd.
		 */
		(void) close( 0 );
		(void) fcntl( pipe1fd.rd, F_DUPFD, 0 ); /* pipe1 read end -> stdin */
		(void) close( 1 );
		(void) fcntl( pipe2fd.wr, F_DUPFD, 1 ); /* pipe2 write end -> stdout */
		(void) close( pipe1fd.rd ); /* Close the original pipe desc. */
		(void) close( pipe1fd.wr );
		(void) close( pipe2fd.rd );
		(void) close( pipe2fd.wr );
		(void) execl( "/.../bar", "bar", (char *)0 );
		perror( "bar" );
		exit( errno );
	}
	/* P a r e n t :  write to 1st pipe, read from 2nd.
	 */
	(void) close( pipe1fd.rd );	/* Close the ends the parent won't */
	(void) close( pipe2fd.wr );	/* use, so each side can see EOF.  */
	wrPipeFp = fdopen( pipe1fd.wr, "w" );
	/* 
		.
		write to "bar".
		.
	 */
	(void) fclose( wrPipeFp );	/* "bar" now sees EOF on its stdin. */

	rdPipeFp = fdopen( pipe2fd.rd, "r" );
	/*
		.
		read from "bar".
		.
	*/
	(void) fclose( rdPipeFp );
	while( wait( &status ) != -1 ) /* Wait for all children.	*/
		;
	exit( 0 );
}

- Moss.

RICH.GVT%office-3@sri-unix.UUCP (09/29/83)

In general, you create the pipe, close stdin (or stdout, or stderr), then "dup" 
the appropriate input or output pipe fd which will cause it to reuse the std* 
number you just closed.  You do this for each file you want connected via a 
pipe, then execl the child process.  The parent and child processes can each 
close any unused pipe fd's.

I have an example of exactly this in a program on an off-net computer, and can 
send the actual code later if really needed, but I'd have to read it from one 
terminal and retype it in on another...

You can also specify redirection in the command line you hand to the
shell via execl, using the pipe fd number instead of going through the
close/dup stuff (>&4, for instance).  This lets you keep the parent's
std* connections intact.

Cheers,
Rich <Zellich@OFFICE-3>

dave@utcsrgv.UUCP (Dave Sherman) (10/03/83)

Since pipes are one of the real obscurities of UNIX, and this is
supposedly a group for novices, here's a posted answer to nsc!chongo:

	int pipeline[2];

	pipe(pipeline);	/* check it returns -1 to be safe */

	pid = fork();	/* again, check for -1 */

	if(pid == 0)	/* child */
	{
		close(0);
		dup(pipeline[0]);
		close(pipeline[0]);

		close(1);
		dup(pipeline[1]);
		close(pipeline[1]);

		execl(bar .......)
		/* exit with error message about execl failing */
	}

	/* parent */
	write(pipeline[1], ....) to write on the pipe
	read(pipeline[0], ....) to read from the pipe


Presto. Now the child will write to the pipe when writing to stdout,
and read from the pipe when reading stdin.

The dup works because it returns the lowest available file descriptor.
Since you just closed 0, dup() will duplicate the pipe descriptor onto 0,
so reads from descriptor 0 are now reads from the pipe. Obscure indeed.
It takes a little getting used to (so did printf when I first saw it).


Dave Sherman
-- 
 {cornell,decvax,ihnp4,linus,utzoo,uw-beaver}!utcsrgv!lsuc!dave

alanw@microsoft.UUCP (10/04/83)

Dave Sherman's solution to the problem of writing to and reading
from a process must be taken with a grain of salt.  Unless one
has some knowledge about the behavior of the program through which
the data is being filtered, it is very easy to block on either the
input or the output.  For example, if the filter is something like
sort, which reads all of its input before writing anything, the
program must write all its output to the pipe and close the write
descriptor before attempting to read the input.

If, on the other hand, the filter is a more ordinary one like grep,
it is very difficult to tell when to read from or write to the pipe.
If the total amount to be written is less than one pipe buffer full
(normally 4096 characters), it's okay to write the full output,
close the write descriptor and then read the filter's output.  In
other cases there is no way to tell if input is available from the
pipe or if there is space in the pipe for the output of the write.

			Alan Whitney
			Microsoft Corp.
			{decvax,uw-beaver,fluke}!microsoft!alanw

gwyn@brl-vld@sri-unix.UUCP (10/05/83)

From:      Doug Gwyn (VLD/VMB) <gwyn@brl-vld>

I don't believe your example is correct.  It seems to me you have the
child's stdin connected to his own stdout and have the parent reading
and writing on his own pipe.

gwyn@brl-vld@sri-unix.UUCP (10/06/83)

From:      Doug Gwyn (VLD/VMB) <gwyn@brl-vld>

In general this will be timing-dependent, as the parent process
and the child may get into a deadlock waiting for each other's
communications.  Various solutions work in different situations;
practically all of them involve two pipes and a communications
protocol to avoid deadlock.

moss@brl-vld@sri-unix.UUCP (10/06/83)

From:      Gary S. Moss (301)278-6647 <moss@brl-vld>

In practice, it is best to set up a batch operation requiring only
one pipe. The parent writes to a temporary file, then forks.
The child reads from that file rather than from a pipe
(just substitute the file descriptor for the pipe descriptor
in the fcntl(2) or dup(2) system calls). The parent, immediately
after forking, reads from the pipe.

This way, the child does not block during reading, so there
is no chance of a deadlock.  This does not allow a true
conversation to occur, because when the child hits an EOF
it will terminate; however, it is sufficient for most
applications.  The thing to watch out for is that a process
writing on a pipe will block if the buffer gets full and
nobody is reading from it.  Therefore, the parent should
finish writing to the child so that it can be reading
from the pipe before the child starts writing.

Setting up communications protocols will not work as a solution
to the original question because the child is blind to the whole thing.
- Moss.

dave%berkeley@utcsrgv.UUCP (10/07/83)

You are right, of course. My example should have been limited to
one pipe for reading or writing in one direction, and another pipe
to come back the other way.

Thanks for pointing it out. I work with single-direction pipes
all the time, and had forgotten about what you have to do for
pipes going both ways.

Dave Sherman

mark@cbosgd.UUCP (Mark Horton) (10/18/83)

It's worth pointing out that the pipeline method posted by Dave
Sherman, while the ONLY portable way to do it, only works if
the two processes in question have a common ancestor and knew
they wanted a pipeline before they separated into two processes.
So it works fine for the shell, or for applications that want to
start out as a single command and break into processes.  But it
does not work for arbitrary processes to talk to each other, as
with a user/server model.

There are many methods to implement arbitrary IPC.  None of them
will work in every version of UNIX.  In fact, very few of them
will work in more than one version of UNIX.  These include (but
are not limited to)
	Named pipes		System III and V
	Messages		System V
	Sockets			4.2BSD
	Semaphores		various, all incompatible.