mitchell@cadovax.UUCP (Mitchell Lerner) (01/22/87)
Is the Loop-Around driver, as shown in chapter 12 of the System V Version 3
Streams Programmer's Guide, a way that I can do inter-process communication?
I'm not sure that you can actually open the driver from two separate processes
(or more?) and have the driver connect the two streams.  Can I use this type
of Streams driver as the basis for an inter-process communications facility?
-- 
Mitchell Lerner

# {ucbvax,ihnp4,decvax}!trwrb!cadovax!mitchell
# cadovax!mitchell@ucla-locus.arpa
geoff@desint.UUCP (Geoff Kuenning) (01/23/87)
In article <1341@cadovax.UUCP> mitchell@cadovax.UUCP (Mitchell Lerner) writes:

> Is the Loop-Around driver, as shown in chapter 12 of the System V Version 3
> Streams Programmer's Guide, a way that I can do inter-process communication?
> I'm not sure that you can actually open the driver from two separate
> processes (or more?) and have the driver connect the two streams.  Can I
> use this type of Streams driver as the basis for an inter-process
> communications facility?

Yes, except that you are better off using the "stream pipe" driver, "sp.c",
that comes with V.3.  Sp.c works by cross-linking stream structures, so it
is more efficient.

The sp driver performs the cross-link when an M_PROTO (I think that's the
right one) control message is sent.  The control message contains a pointer
to the other stream which is to be cross-linked; this pointer is generated
using the I_PASSFP (pass file pointer) ioctl.  (The details are undocumented;
what you need to know is that the message contains the file pointer at
offset zero and nothing else.)

The tricky part is that I_PASSFP needs to be sent by someone who has an fd
for the relevant stream.  To make things clearer, here is the single-process
case (the funny ioctl syntax is because the info actually goes through a
structure, and I don't remember the details):

    fd 4 (e.g.)              fd 5 (e.g.)
        |                        |
        |                        V
        |             ioctl (5, I_PASSFP (..., 4))
        |                        |
        V                        V
    Stream A                 Stream B

When the PASSFP of fd 4 is sent on fd 5, the cross-link happens and
everything is hunky-dory.  This is very easy to arrange in a single process,
and you then have a two-way stream pipe that can be used much like a regular
pipe, notably by forking children.  Careful attention to open modes can even
make it into a one-way pipe.
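[Since the exact I_PASSFP calling sequence isn't given above, here is a
single-process sketch using I_FDINSERT, the documented streamio(7) ioctl for
building exactly this kind of pointer-bearing control message.  The /dev/spx
clone node name is an assumption, as is the premise that the sp driver
accepts the cross-link request built this way; treat it as a sketch, not
gospel.]

    #include <fcntl.h>
    #include <stropts.h>

    /*
     * Sketch: open two clone ends of the sp driver and cross-link them.
     * I_FDINSERT (streamio(7)) makes the kernel build the control message
     * itself, so the user process never touches a kernel address.
     */
    int
    make_stream_pipe(fd)
    int fd[2];
    {
        struct strfdinsert ins;
        char ctl[sizeof (char *)];

        if ((fd[0] = open("/dev/spx", O_RDWR)) < 0)
            return (-1);
        if ((fd[1] = open("/dev/spx", O_RDWR)) < 0)
            return (-1);

        /* Control part: the kernel inserts a pointer to fd[1]'s
         * driver read queue at offset 0, per the description above. */
        ins.ctlbuf.maxlen = ins.ctlbuf.len = sizeof (char *);
        ins.ctlbuf.buf = ctl;
        ins.databuf.len = -1;          /* no data part */
        ins.databuf.maxlen = 0;
        ins.databuf.buf = (char *) 0;
        ins.fildes = fd[1];
        ins.offset = 0;
        ins.flags = 0;

        /* Sending this down fd[0] is what triggers the cross-link. */
        return (ioctl(fd[0], I_FDINSERT, &ins));
    }

[If this works as described, a write on fd[0] should be readable on fd[1]
and vice versa, much like pipe(), and the pair survives fork() for
parent/child use.]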
The no-common-ancestor case is a bit trickier.  What saves you is that the
I_PASSFP ioctl can be used to pass a file pointer in a data message, which
can be read by another process.  Here's how I did it for LX-Windows, for
Locus Computing Corp (a client-side sketch appears at the end of this
article):

Server:

    (1)  Open a pair of well-known sp minor numbers (I use the highest two
         in the sp minor list, e.g., 30 and 31 for 32 sp's) and cross-link
         them.

    (2)  Enter the main "listening" loop.  One of the well-known devices is
         held open, but otherwise ignored.  The other is the "connection
         request" device; the server listens on this stream for new clients.
         We will assume that 31 is the device the server listens on.
         (Actually, I give them mnemonic names in /dev.)

Client:

    (3)  Open the well-known "request" device (minor 30).  Also open an
         unused sp minor number, creating it with the clone driver.  Say we
         get minor 0.

    (4)  Send a single byte on the "request" device, minor 30.  The contents
         do not matter.

Server:

    (5)  When a request byte arrives on minor 31, open another unused sp
         minor number with the clone driver.  Call this minor 1, and suppose
         it gets fd 9.

    (6)  Using an I_PASSFP ioctl, send the address of minor 1 (fd 9)'s data
         structures back out on minor 31 as a *data* message.  Note that, in
         the case of simultaneous requests by multiple clients, we are not
         sure which client this will go to.  However, it turns out that this
         doesn't hurt us.

Client:

    (7)  Read the PASSFP message from minor 30.  Since it is a data message,
         you will receive the value of a kernel pointer to a stream
         structure.

    (8)  Using the pointer from step 7, create an M_PROTO message and send
         it on minor 0, which we opened in step 3.  This message is *not*
         sent with I_PASSFP; instead it is sent as an ordinary M_PROTO
         control message.  The sp driver will see this message and
         cross-link minor 0 to minor 1 (because minor 1 was the pointer
         passed in step 6).

    (9)  Close the well-known request device (minor 30).

The ugly thing about this is that steps 7 and 8 actually take a kernel
pointer and pass it back to the kernel via the client process.  The REALLY
UGLY thing about this is that the sp driver does not do any checking for the
validity of the pointer; it grabs it from the message and promptly
dereferences it.  This is obviously a security hole that you will want to
fix if you plan to ship sp.c.

One last note: the stream buffer allocation scheme is simply stupid in
combination with the high/low water mechanism.  I won't go into details,
but if you send a lot of small packets that clog up, you will exhaust the
stream buffer pool long before you hit the high-water mark.  In our case,
the X clients used up the pool with line-drawing requests, and the server
then couldn't get a buffer to post a mouse-click event that was necessary
to terminate the line-drawing requests!
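[For concreteness, here is how steps (3) through (9) might look from the
client side, as a sketch only.  The device names are invented; the wire
format -- a bare kernel pointer arriving in a data message, echoed back at
offset zero of an ordinary M_PROTO control message via putmsg() -- is
exactly as described above, security hole and all.]

    #include <fcntl.h>
    #include <stropts.h>

    int
    sp_connect()
    {
        int reqfd, fd;
        char *sptr;                /* kernel pointer; opaque to us */
        struct strbuf ctl;
        char byte = 0;

        /* Step 3: the well-known request device and our own minor.
         * Both names are assumptions for this sketch. */
        if ((reqfd = open("/dev/sp30", O_RDWR)) < 0)
            return (-1);
        if ((fd = open("/dev/spx", O_RDWR)) < 0)
            return (-1);

        write(reqfd, &byte, 1);    /* step 4: contents don't matter */

        /* Step 7: the server's PASSFP arrives as plain data. */
        if (read(reqfd, (char *) &sptr, sizeof sptr) != sizeof sptr)
            return (-1);

        /* Step 8: echo the pointer back at offset zero of an ordinary
         * control message (putmsg with flags 0 generates M_PROTO). */
        ctl.len = sizeof sptr;
        ctl.buf = (char *) &sptr;
        if (putmsg(fd, &ctl, (struct strbuf *) 0, 0) < 0)
            return (-1);

        close(reqfd);              /* step 9 */
        return (fd);               /* now cross-linked to the server */
    }

-- 
	Geoff Kuenning
	{hplabs,ihnp4}!trwrb!desint!geoff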
markh@ico.UUCP (Mark Hamilton) (01/23/87)
In article <1341@cadovax.UUCP>, mitchell@cadovax.UUCP (Mitchell Lerner) writes:

> Is the Loop-Around driver, as shown in chapter 12 of the System V Version 3
> Streams Programmer's Guide, a way that I can do inter-process communication?
> I'm not sure that you can actually open the driver from two separate
> processes (or more?) and have the driver connect the two streams.  Can I
> use this type of Streams driver as the basis for an inter-process
> communications facility?

Yes, you can open this device from two separate processes and connect them.
Each process should perform a clone open (assuring a unique minor number),
and then one of the processes should do a "LOOP_SET" ioctl (a sketch follows
below).  (Actually, both could do the ioctl if they are prepared to ignore
the EBUSY error.)  You cannot connect more than two streams this way, as
each extra connect returns EBUSY; however, it should be fairly easy to add
that ability if you know what the semantics should be.

As an aside, you may want to consider the stream-pipe device, which I believe
is part of the standard V.3 distribution (/dev/spx).  The operation of the
device simulates a pipe between two processes, but you have the added
advantage of being able to push other modules onto the "pipe".
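[A hedged sketch of the clone-open-plus-LOOP_SET sequence.  Since LOOP_SET is
a driver-private command, it must travel inside an I_STR ioctl; the /dev/loop
node name, the payload (the target stream's minor number), and the use of
fstat() to learn the minor a clone open assigned are all assumptions, not
quotes from the guide.]

    #include <fcntl.h>
    #include <stropts.h>
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <sys/sysmacros.h>
    /* LOOP_SET itself comes from the loop driver's private header. */

    int
    loop_open(mymin)               /* clone open; report our minor */
    int *mymin;
    {
        struct stat st;
        int fd;

        if ((fd = open("/dev/loop", O_RDWR)) < 0)
            return (-1);
        fstat(fd, &st);            /* assumed to show the assigned minor */
        *mymin = minor(st.st_rdev);
        return (fd);
    }

    int
    loop_connect(fd, othermin)     /* ask the driver to link us to othermin */
    int fd, othermin;
    {
        struct strioctl str;

        str.ic_cmd = LOOP_SET;     /* driver-private command */
        str.ic_timout = 0;         /* default ioctl timeout */
        str.ic_len = sizeof othermin;
        str.ic_dp = (char *) &othermin;

        /* Only one side needs to do this; the other may see EBUSY. */
        return (ioctl(fd, I_STR, &str));
    }

[Each process calls loop_open(); they exchange minor numbers out of band,
and then one of them calls loop_connect().]

-- 
Mark Hamilton
InterActive Systems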
rki@apollo.UUCP (01/26/87)
In article <1341@cadovax.UUCP> mitchell@cadovax.UUCP (Mitchell Lerner) writes:

> Is the Loop-Around driver, as shown in chapter 12 of the System V Version 3
> Streams Programmer's Guide, a way that I can do inter-process communication?
> I'm not sure that you can actually open the driver from two separate
> processes (or more?) and have the driver connect the two streams.  Can I
> use this type of Streams driver as the basis for an inter-process
> communications facility?

Two separate processes can open the same minor device on such a loopback
driver and in effect get a uni-directional communication path between them.
All processes that concurrently open a given (major, minor) pair share the
same stream.  Frankly, named pipes are probably a better choice for this
type of IPC.

You can easily write a driver that provides a bi-directional communication
path by pairing minor device numbers, so that each even-numbered minor
number N loops back to minor number N+1.  This requires only a very simple
handshake on opening and closing to make it work.  (This is similar to the
scheme Dennis Ritchie used to create pipes using streams in Version 8.)

If you want to get slightly more sophisticated, you can write the driver to
allow any two minor devices to be cross-connected, using the I_FDINSERT
ioctl (a shameless hack, but we had to do something) to inform one minor
device of the identity of the other.  You could then do IPC by having one
process open two separate minor devices, connect them together via the
ioctl, and then create a name for one of the minor devices that a second
process can open.  (This is essentially the way the Stream Pipe (SP) driver
used by the RFS name server works; a sketch follows at the end of this
article.)

There are still other variations that can be done; for instance, you could
implement a raw tty interface and then use the line discipline modules to
create pty's.  If you are into connectionless IPC, you could allow each
minor device user to register a service-id, and define a message format that
includes a (destination-id, sender-id) pair that permits the driver to route
messages to the appropriate destination queue (watch out for flow control
problems in this case).  Or if you want to make local IPC look identical to
network IPC, you might write a driver that implements the transport provider
interface.

Of course, you can always use the standard System V IPC stuff, which has the
advantage of already being there and, um, working.  The best choice of what
to use really depends on the nature of the application.
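[A sketch of that cross-connect-and-name recipe, leaning on the
make_stream_pipe() sketch earlier in the thread.  The use of fstat() to
recover the clone-assigned device number for mknod() is an assumption made
for illustration.]

    #include <fcntl.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    extern int make_stream_pipe();  /* I_FDINSERT sketch, earlier in thread */

    /*
     * Sketch: cross-connect two clone minors, then give one of them a
     * filesystem name that a second, unrelated process can open.
     */
    int
    make_named_pipe(name, myfd)
    char *name;
    int *myfd;
    {
        int fd[2];
        struct stat st;

        if (make_stream_pipe(fd) < 0)
            return (-1);

        /* Recover the (major, minor) of the far end and name it. */
        if (fstat(fd[1], &st) < 0)
            return (-1);
        if (mknod(name, S_IFCHR | 0666, st.st_rdev) < 0)
            return (-1);

        *myfd = fd[0];
        return (0);
    }

Bob Israel
Apollo Computer		apollo!rki

Disclaimer: the above statement does not necessarily reflect the opinions,
beliefs, or market strategies of my employer, or of any past employer who
might consider the subject matter to be of proprietary concern.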
rki@apollo.UUCP (01/26/87)
In article <287@desint.UUCP> geoff@desint.UUCP (Geoff Kuenning) writes:

> The sp driver performs the cross-link when an M_PROTO (I think that's the
> right one) control message is sent.  The control message contains a
> pointer to the other stream which is to be cross-linked; this pointer is
> generated using the I_PASSFP (pass file pointer) ioctl.  (The details are
> undocumented; what you need to know is that the message contains the file
> pointer at offset zero and nothing else.)
>
> [A discussion of Geoff's use of the sp driver]

I don't know why I don't let this pass like all of the other strange
articles about STREAMS that have appeared in the last year, but here goes.
Geoff has certainly been very clever in figuring out how the SP driver
works, but his method of using it is rather baroque.  The intended method of
use is (see the sketch after these steps):

(1)  A server process (e.g., the RFS name server) opens any pair of minor
     devices via the clone interface and cross-connects them via the
     I_FDINSERT ioctl, issued on one minor device with the file descriptor
     of the other, using a NULL data part and a control part just big enough
     to hold a pointer, with an offset of 0.  This in effect causes the
     creation of an M_PROTO message containing the address of the read
     queue (I think; I forget which one) of the driver on the target stream,
     which is sent down the control stream.  Hence, the user process at no
     time needs to be in possession of a kernel address.

(2)  mknod() is used to create a name for the client end of the stream pipe.

(3)  When a client process wants to obtain a private stream pipe to the
     server, it first does step (1) itself to obtain a private stream pipe.

(4)  It then opens the client end of the server pipe, and uses the I_PASSFD
     ioctl to pass a reference to one end of the private pipe to the server
     process.

(5)  The server process then forks; the parent goes back to listening on the
     server pipe and the child has a private conversation with the client on
     the client pipe.

This was used very successfully with the RFS name server.
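[A hedged sketch of that structure.  Step (1) is the make_stream_pipe()
sketch from earlier in the thread; Bob writes I_PASSFD for the
descriptor-passing step, and this sketch assumes it corresponds to the
stock streamio(7) pair I_SENDFD/I_RECVFD.  The pipe name and the serve()
routine are invented for illustration.]

    #include <fcntl.h>
    #include <stropts.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    extern int make_stream_pipe();  /* I_FDINSERT sketch, earlier in thread */

    server()                        /* steps (1), (2), and (5) */
    {
        int fd[2];
        struct stat st;
        struct strrecvfd recv;

        make_stream_pipe(fd);                           /* step 1 */
        fstat(fd[1], &st);                              /* step 2: name    */
        mknod("/dev/spserver", S_IFCHR | 0666, st.st_rdev); /* client end  */

        for (;;) {
            /* Block until a client passes us one end of its private
             * pipe (the client's step 4). */
            if (ioctl(fd[0], I_RECVFD, &recv) < 0)
                continue;
            if (fork() == 0) {
                serve(recv.fd);     /* step 5: child's private conversation */
                exit(0);
            }
            close(recv.fd);         /* parent goes back to listening */
        }
    }

    client(fd)                      /* steps (3) and (4) */
    int fd[2];
    {
        int sfd;

        make_stream_pipe(fd);                       /* step 3 */
        sfd = open("/dev/spserver", O_RDWR);        /* step 4 */
        ioctl(sfd, I_SENDFD, fd[1]);                /* pass our far end */
        close(sfd);
        /* ... converse with the server's child on fd[0] ... */
    }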
> One last note: the stream buffer allocation scheme is simply stupid in
> combination with the high/low water mechanism.  I won't go into details,
> but if you send a lot of small packets that clog up, you will exhaust the
> stream buffer pool long before you hit the high-water mark.  In our case,
> the X clients used up the pool with line-drawing requests, and the server
> then couldn't get a buffer to post a mouse-click event that was necessary
> to terminate the line-drawing requests!

I'll admit that I was not very satisfied with the flow control weightings;
once we had decided that the small-message problem was going to cause
real-life difficulties, political problems prevented us from fixing them.
One easy way to alleviate the small-message flow control problem is to
change the flow control weighting tables so that no block receives a weight
of less than 128 (bytes).  I found this quite satisfactory on my own system,
where network tty traffic was chewing up all the small message blocks.  The
PC7300 solution was to do flow control based on the number of messages
rather than the size of the messages; this was done by weighting ALL blocks
as 1.  Unfortunately, these are not tunable parameters, so you would have to
recompile the system :=(ugh) to fix it.  [If you are really desperate, you
can probably have your driver write new weighting values into the array upon
initialization, but you didn't hear this from me.]

Bob Israel
apollo!rki

Disclaimer: The above ramblings in no way represent the opinions,
intentions, or legal negotiating positions of my employer, or of any past
employer that may have a proprietary interest in the subject matter.