AWalker@RED.RUTGERS.EDU (*Hobbit*) (09/21/87)
You don't need any more than the normal two privs to get things talking via a mailbox -- the problem is cluing the other process in as to which mailbox is yours. This information [like a logical name mbx$pid_000007a6] could be kept in:

  a> a world-readable, owner-writable logical name table, or
  b> a file, or
  c> something else I haven't thought of at the moment.

Mailboxes would probably be quite a bit faster than files, since you'd avoid the open/close overhead, and the mailbox device would vanish with the process, signaling to those sending to it that something was wrong.

_H*
-------
jeh@crash.CTS.COM (Jamie Hanrahan) (09/21/87)
A previously-posted summary of the standard interprocess communication techniques (can't seem to find the article, or I'd run this as a followup thereto) was quite good. I can only add two things.

First, global sections alone are not enough -- you also need an interprocess synchronization technique to control access to the section. This can be common event flags (limited to procs within a UIC group), hiber/wake, or the lock manager. Which is most suitable depends on the function you want.

Second, references to "shared memory" only apply to multiple 780s with MA780 multiport memory. These are 780s running individual copies of VMS, each with its own private memory, but with a shared memory area for fast sharing of data between systems. Global sections, common event flag clusters, and mailboxes can all be created in the shared memory. I mention this because many people think that "shared memory" is synonymous with "global section" -- it isn't. 780s with MA780s are rare beasts these days and I wouldn't spend a minute writing code to accommodate them.

BUT, all is not lost. If you can ask that the users of the program have NETMBX, you can do generalized (multi-UIC-group) interprocess comm. with no other privs, by using DECnet task-to-task communication. This will naturally be a bit slow (I've measured it at 10 msec per $QIO call on a 1-MIP VAX; this is for reader and writer processes on the same node -- naturally it gets worse when there's a real internode link involved), but it will work, and it's also very general -- i.e., processes running on remote nodes use exactly the same code to talk to the "master" node as those running on the "master".

The best way to use DECnet this way is by running a "server" process that keeps track of a database that's private to itself. All other processes connect to the server and send it messages to request info from, or to write into, the database.
This neatly sidesteps all synchronization issues, since the requests to the server process will be single-threaded. Some cooperation from the system manager on the node that will run the server is required, but the server process (and the account under which it runs) needs only normal privs.

If you can get PRMMBX and SYSNAM privs, you can speed things up on the node that runs the server by letting local processes talk to the server through mailboxes. Only the server need have these privs -- once it starts up, it creates the mailbox from which it will read commands; other procs on the same node create a temporary mailbox for reading responses from the server, and send the temp mailbox's physical device name (or something that can be mapped thereto) in all requests to the server. ("Here's a request; send the reply *here*.")

Note to system analysts: this is a good model for any application that needs a common database accessed by multiple "clients". The clients need know nothing about how the database is set up; they only need to know where and how to send messages. Transaction logging is simple -- just copy all the incoming request messages to a file. (Then if the database is munged, just restore it from the most recent backup and play all the request messages that came in since that backup into the server's mailbox again...) You can change the database implementation without changing the clients, too. Students of VAX/VMS will recognize this model in the job controller, among other places.

Good luck!

--- Jamie Hanrahan
Simpact Associates, San Diego, CA
pnet01!jeh@crash.CTS.COM or jeh@pnet01.CTS.COM
...sdcsvax!crash!pnet01!jeh
BEB@UNO.BITNET (09/23/87)
<Mailers are mudders. This line is mudder fodder, brudder.>

>From: Jamie Hanrahan <UCSDHUB!JACK!MAN!CRASH!JEH@SDCSVAX.UCSD.EDU>
>
>A previously-posted summary of the standard interproc. comm. techniques
>(can't seem to find the article, or I'd run this as a followup thereto)
>was quite good. I can only add two things: First, global sections
>alone are not enough -- you also need an interproc. synchronization
>technique to control access to the section. This can be common event
>flags (limited to procs within a UIC group), hiber/wake, or the lock
>manager. Which is more suitable depends on the function you want.

Or use the BBSSI instruction to implement mutual exclusion routines. Example:

        .Entry  MUTEXON,^M<>       ; loop until MUTEX_FLAG clears, then set it
1$:     BBSSI   #0,@MUTEX_FLAG,1$  ; MUTEX_FLAG is in the global section
        MOVB    #1,@4(AP)          ; we have control
        RET

        .Entry  MUTEXOFF,^M<>      ; give up control of global section
        CLRB    @MUTEX_FLAG        ; clear flag in global section space
        CLRB    @4(AP)             ; return "no control" status
        RET

Or, for Macro haters, the LIB$BBSSI RTL routine gives HLL code access to the BBSSI instruction.

Of course you immediately notice this turkey is a polling loop, gobbling CPU time like there is no DD-MMM-YYYY + 1. Well, for a (user-interactive) application that can hibernate and wake up, say, ten or more times a second, it works very well, and isn't too expensive, depending on how much you have to do every time it wakes up, how much contention there is for the mutex, etc. The big win here is that it is really simple to set up: just create your global section and bracket any code that munges on it with the MUTEXON/MUTEXOFF calls.

Bruce

<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>
<> Handle: Bruce Bettis       <> USnail: University of New Orleans  <>
<>                            <>         Computer Research Center   <>
<> BITnet: <BEB@UNO.BITNET>   <>         New Orleans, La. 70148     <>
<>                            <>                                    <>
<> Voices: (504) 286-7067     <> (Assume appropriate disclaimer)    <>
<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>