djo7613@hardy.u.washington.edu (Dick O'Connor) (09/05/90)
One of our users phoned me with the question "How do I pipe a stream of
output into two separate processes at the same time?"  Well, I said tee,
not thinking, and he said, processes, not files.  So I went to the FAQ
list and My Favorite References, and I experimented with parentheses and
background processes, but I couldn't make it happen.

This sounds fairly elementary, and *this* Watson would appreciate
enlightenment from any of you Sherlocks out there.  Thanks!...

"Moby" Dick O'Connor                      djo7613@u.washington.edu
Washington Department of Fisheries        *I brake for salmonids*
les@chinet.chi.il.us (Leslie Mikesell) (09/06/90)
In article <7053@milton.u.washington.edu> djo7613@hardy.u.washington.edu (Dick O'Connor) writes:
>One of our users phoned me with the question "How do I pipe a stream of
>output into two separate processes at the same time?" Well, I said tee,
>not thinking, and he said, processes, not files.

Both tee and processes are perfectly happy with FIFO's if your system
has them:

  /etc/mknod fifo p
  process3 < fifo &
  process1 | tee fifo | process2
  rm fifo

process3 will block until something is written to the fifo, and will
receive EOF when the last data has been read and the fifo is no longer
open for writing by any process.

Les Mikesell
les@chinet.chi.il.us
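[Moderator's note: the recipe above can be fleshed out into a complete,
runnable sketch.  `mkfifo` is the portable spelling of `/etc/mknod fifo p`
on newer systems, and `wc -l` / `wc -c` stand in for the hypothetical
process3 / process2; substitute your real consumers.]

```shell
#!/bin/sh
# Fan one stream out to two consumers through a named pipe (FIFO).
# wc -l and wc -c are stand-ins for the real processes.
fifo=$(mktemp -u) && mkfifo "$fifo"

wc -l < "$fifo" > lines.out &       # "process3": background reader on the fifo

printf 'one\ntwo\nthree\n' |        # "process1": the producer
  tee "$fifo" |                     # copy the stream into the fifo...
  wc -c > chars.out                 # ...and into "process2" on stdout

wait                                # let the background reader drain the fifo
rm -f "$fifo"
cat lines.out chars.out             # line count (3) and byte count (14)
```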
jimr@hp-lsd.COS.HP.COM (Jim Rogers) (09/06/90)
My solution is not elegant, but it does work:

1) Create a fifo file (named pipe) using the mkfifo command, i.e.

     mkfifo foofifo

2) Start the process(es) which write to the named pipe first.  I found
   it easiest to run this part in the background, i.e.

     cat foo.input | tee foofifo | lp &

3) Start the processes which read from the named pipe in the foreground
   after the background process has been started, i.e.

     cat foofifo | more

This will allow the data to be simultaneously printed and displayed on
your screen.  I suspect there are more elegant approaches; I hope this
thread brings some of them out.

Jim Rogers
Hewlett Packard Company
brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (09/11/90)
In article <1990Sep06.150939.18741@chinet.chi.il.us> les@chinet.chi.il.us (Leslie Mikesell) writes:
> Both tee and processes are perfectly happy with FIFO's if your system
> has them.

Even if your system does have FIFOs, it's a bad idea to use tee to pipe
into two processes at once.  If one of the processes blocks, tee will
block after a pipeful.  Using multitee (under BSD) is much more
sensible.

---Dan
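[Moderator's note: Dan's objection is easy to see for yourself.  In the
sketch below, a consumer is deliberately slow to open the fifo, and tee
(and everything downstream of it) stalls for the duration, even though
the other consumer is ready the whole time.  Here the stall happens at
open time; with a running-but-slow reader it would happen once the pipe
fills, as Dan says.  `sleep` stands in for the slow consumer.]

```shell
#!/bin/sh
# If one of tee's outputs blocks, the whole fan-out blocks with it.
mkfifo slowfifo

# A consumer that doesn't even open the fifo for two seconds.
( sleep 2; cat slowfifo > /dev/null ) &

start=$(date +%s)
echo hello | tee slowfifo | cat > /dev/null   # stalls until the fifo gains a reader
end=$(date +%s)

wait
rm -f slowfifo
elapsed=$((end - start))
echo "elapsed: ${elapsed}s"                   # roughly 2, not 0
```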
les@chinet.chi.il.us (Leslie Mikesell) (09/11/90)
In article <17680:Sep1021:08:2890@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>Even if your system does have FIFOs, it's a bad idea to use tee to pipe
>into two processes at once.  If one of the processes blocks, tee will
>block after a pipeful.

The same thing would happen with a simple pipe to the slower process.
Why is this a problem?

>Using multitee (under BSD) is much more sensible.

Does multitee just use larger buffers than pipes, or is something else
involved?

Les Mikesell
les@chinet.chi.il.us
les@chinet.chi.il.us (Leslie Mikesell) (09/13/90)
In article <20005:Sep1114:10:2290@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>It's axiomatic that you lose turnaround time when your slowest process
>is forced to block.  (The weakest link...)  Say you have data flowing
>from A into B into C into F, and from A into D into E into F.  Say B
>and E are the bottlenecks.  Now if we use your tee-based solution for
>the A-B-D split, then D will block as soon as B does.  E won't get all
>its data until B is basically done.  So you've pretty much doubled your
>turnaround time.
>
>Now do you understand the problem?

I understand the situation, but I don't see why it should be a problem
unless the producer is interactive with the faster of the two consumers.
Otherwise you're not done till you're all done, and you might as well
pace the output to the slowest consumer.

>multitee notices when an output blocks and sends the data to the other
>outputs.  It can in theory be configured to buffer up to the memory
>size, but with any buffer it has a much lower chance than tee of being
>the bottleneck in a pipeline.

But pipes buffer all by themselves (up to PIPE_BUF, anyway), so the
processes don't have to sync down to the read/write level.  I don't see
the gain from any additional buffering except in the odd situation noted
above.  Most programs are going to use block-buffered stdio going into
the pipe in the first place, which is good for efficiency but not for
interactive use.

Les Mikesell
les@chinet.chi.il.us
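[Moderator's note: a latter-day aside on the buffering Les describes.
On a modern POSIX system the atomic-write limit can be queried with
getconf (which postdates this thread), and the kernel's pipe buffer
really does decouple writer from reader for small bursts, as the
second command below shows: the writer finishes before the sleeping
reader has consumed a single byte.]

```shell
#!/bin/sh
# PIPE_BUF is the largest write guaranteed to be atomic on a pipe;
# POSIX requires it to be at least 512 bytes (it is 4096 on Linux).
getconf PIPE_BUF /

# The kernel pipe buffer absorbs a sub-pipeful burst on its own:
# the writer exits before the reader wakes up and counts the bytes.
printf 'x%.0s' $(seq 1 4096) | { sleep 1; wc -c; }
```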