[comp.unix.questions] interprocess communication

rich@eddie.MIT.EDU (Richard Caloggero) (01/06/88)

     I am curious how people feel about various IPC techniques.  I work
mostly on Apollo workstations, but have recently been asked to play
"god" on an Alliant (it runs Concentrix, which is basically a native
BSD4.X kernel). On the Apollo, under their Aegis operating system,
there is support for 3 basic models of interprocess communication
available to a user program.
	1). The shared memory facility allows two processes to
	communicate through a common chunk of memory, and support is
provided for mutual-exclusion locking allowing one to implement a
semaphore abstraction.  This method does not work across a network,
only within a single node. It is thus useless for implementing truly
parallel systems on an Apollo ring, but it seems to me that it is the
fastest -- it requires the least amount of system overhead.

	2). Aegis provides a socket abstraction, similar in spirit (I
	think -- never played with it) to the standard Unix socket
abstraction. Again, this does not work between nodes, although I can
see no reason why this restriction exists (can anyone shed some light
on this point?).

	3). The third model is a server-client model. The actual
	communication takes place through a *mailbox* file, located in
a particular place. This abstraction works across nodes.  It assumes
the existence of a filesystem which allows transparent internode file
access (such as the Aegis file system).  It seems to me to be the simplest
to use, and since it works across the net, it allows the implementation of
truly parallel applications. It also seems to be the slowest, requiring
the most system overhead of the three methods I've
mentioned.

     Can anyone out there comment on this stuff?  What I'd like to know
(aside from the basic "you're totally confused about xxx, it really is
like this..."), is why Unix supports strictly socket-oriented IPC, and
why the aforementioned restrictions exist.  It seems to me that even
memory could be shared among nodes -- the network paging software does
something like that, doesn't it?  Any comments, flames, and "you're totally
confused" responses are quite welcome.  Please, if you intend to post
to one of the indicated newsgroups, send a copy to me directly, for I
don't normally read all of these!  Thanx a lot (in advance).


-- 
						-- Rich (rich@eddie.mit.edu).
	The circle is open, but unbroken.
	Merry meet, merry part,
	and merry meet again.

george@hyper.lap.upenn.edu (George Zipperlen) (01/07/88)

I was going to respond by e-mail, so as not to expose my ignorance to the
whole net, but decided that others might be interested in my ramblings,
since a previous news message inquired about heterogeneous networks.

A few words about our configuration: 
we currently have our Apollo ring running Aegis and Domain/IX (bsd4.2 TCP) 
connected to a local ethernet.  Also on the local net we have a Gould 
powernode (UTX - bsd4.2 with some sys5 features); and a Xerox dandelion 
(XDE, ViewPoint).  The local net is connected to the Penn-Net, which links 
us to the Arpa Internet (shin bone connected to the thigh bone (-:).  
We've got the Gould talking tcp, and we're currently trying to talk to 
the dandelion. (Credit here should mostly go to my co-worker, Adam Feigin).
Another possibility is to get XNS running on the Gould.
We also have PC/ATs (UGH) connected by DPCI over serial lines, and 
Macintoshes running MacApollo.

Disclaimer: I am mainly familiar with mailboxes and sockets, and am 
theorizing about shared memory.

In article <7808@eddie.MIT.EDU> you write:
> 	1). The shared memory facility allows two processes to
> 	communicate through a common chunk of memory, and support is
> provided for mutual-exclusion locking allowing one to implement a
> semaphore abstraction.  This method does not work across a network,
> only within a single node. It is thus useless for implementing truly
> parallel systems on an Apollo ring, but it seems to me that it is the
> fastest -- it requires the least amount of system overhead.

    I see several problems with shared memory in a network.
Which node's memory is being mapped?  If more than one, how do you 
synchronize all changes?  I think that what you would need for this
is remote procedure call.  My limited knowledge of Apollo's NCS 
(Network Computing System) is insufficient to answer these questions.

> 	2). Aegis provides a socket abstraction, similar in spirit (I
> 	think -- never played with it) to the standard Unix socket
> abstraction. Again, this does not work between nodes, although I can
> see no reason why this restriction exists (can anyone shed some light
> on this point?).

    To use sockets between nodes (or over the ethernet to other systems)
you need the TCP/IP server. This in fact works very well. For example: 
telnet, ftp, X-windows... (This one I know fairly well (:-) )

> 	3). The third model is a server-client model. The actual
> 	communication takes place through a *mailbox* file, located in
> a particular place. This abstraction works across nodes.  It assumes
> the existence of a filesystem which allows transparent internode file
> access (such as the Aegis file system).  It seems to me to be the simplest
> to use, and since it works across the net, it allows the implementation of
> truly parallel applications. It also seems to be the slowest, requiring
> the most system overhead of the three methods I've
> mentioned.

I agree completely. The only addendum I would make is that mailboxes
only work within the AEGIS domain, not in a heterogeneous network.
Back to theorizing mode: I think you could write code for extended
mailboxes using the open systems toolkit. This is something we may
need to do to communicate with XNS systems - possibly easier than
trying to get a TCP server going on the other end.

> 						-- Rich (rich@eddie.mit.edu).
> 	The circle is open, but unbroken.
> 	Merry meet, merry part,
> 	and merry meet again.

--------------------------------------------------------------------------------
George Zipperlen                    george@apollo.lap.upenn.edu
Language Analysis Project           (215)-898-1954
University of Pennsylvania          Generic Disclaimer
Philadelphia, Pa. 19103             Cute saying
--------------------------------------------------------------------------------

mishkin@apollo.uucp (Nathaniel Mishkin) (01/07/88)

In article <2958@super.upenn.edu> george@apollo.lap.upenn.edu (George Zipperlen) writes:
>In article <7808@eddie.MIT.EDU> you write:
>> 	1). The shared memory facility allows two processes to
>> 	communicate through a common chunk of memory, and support is
>> provided for mutual-exclusion locking allowing one to implement a
>> semaphore abstraction.  This method does not work across a network,
>> only within a single node.
>I see several problems with shared memory in a network.
>Which node's memory is being mapped?  If more than one, how do you 
>synchronize all changes?  I think that what you would need for this
>is remote procedure call.  My limited knowledge of Apollo's NCS 
>(Network Computing System) is insufficient to answer these questions.

Shared (virtual) memory-based IPC is, in general, the fastest way to
go.  However, it depends on all the processes that are communicating
via IPC sharing a common physical memory.  The sharing is implemented
via the memory management hardware -- virtual pages in multiple processes
are mapped to the same physical memory pages.  (I.e., we don't pretend
to *try* to hack the synchronization problem.)  Thus, VM-based IPC can
be used only when all the communicating processes are running out of
the same physical memory.

>> 	2). Aegis provides a socket abstraction, similar in spirit (I
>> 	think -- never played with it) to the standard Unix socket
>> abstraction. Again, this does not work between nodes, although I can
>> see no reason why this restriction exists (can anyone shed some light
>> on this point?).
>
>    To use sockets between nodes (or over the ethernet to other systems)
>you need the TCP/IP server. This in fact works very well. For example: 
>telnet, ftp, X-windows... (This one I know fairly well (:-) )

In fact, the socket abstraction can in principle be used over a variety
of "protocol families" (e.g. IP, XNS).  Currently, the only protocol
family Apollo supports is IP.  However, the good news is that on Apollos,
anyone can add support for a new protocol family withOUT having to rebuild
the kernel.  (Well, you need one more include file that I'd happily donate
if someone really wanted to try to do this.)  This is done using Extensible
Streams (Open Systems Toolkit).  All you do is write a type manager that
supports the socket "trait" and then you can write programs that use
the regular BSD "socket", "send", "recv", etc. calls to talk over your
new kind of socket.

>> 	3). The third model is a server-client model. The actual
>> 	communication takes place through a *mailbox* file, located in
>> a particular place.
>
>I agree completely. The only addendum I would make is that mailboxes
>only work within the AEGIS domain, not in a heterogeneous network.

Exactly.  This is one of the reasons we did NCS.  NCS is a remote procedure
call facility that is heterogeneous, not only in terms of what systems
(Unix, VMS, MS/DOS) it runs on, but also in terms of what protocol families
it runs over.

In addition to its being a good IPC mechanism because it supports
heterogeneity, NCS is good because RPC is often the most natural way
to do IPC.  Many IPC applications (especially those that are more oriented
to passing "control" and "query" information rather than doing bulk data
transfer) implemented using non-RPC techniques essentially end up doing
ad hoc RPC -- define a record, fill in values, send record over IPC
channel, wait for and receive reply with result record, extract values.
With NCS/RPC, you just define an interface with a set of procedures whose
parameters are the values you want to pass between processes.  To
communicate, you simply call the procedure defined in the interface.
(A special compiler turns the interface you wrote into a set of "stub"
procedures that do all the dirty work.)
-- 
                    -- Nat Mishkin
                       Apollo Computer Inc.
                       Chelmsford, MA
                       {decvax,mit-eddie,umix}!apollo!mishkin

guy@gorodish.Sun.COM (Guy Harris) (01/08/88)

> Exactly.  This is one of the reasons we did NCS.  NCS is a remote procedure
> call facility that is heterogeneous, not only in terms of what systems
> (Unix, VMS, MS/DOS) it runs on, but also in terms of what protocol families
> it runs over.

The Sun RPC facility is also heterogeneous; it has an object-oriented interface
to the transport.  The only instances officially offered in the current version
are for TCP and UDP, although a demo version that runs on top of the ISO network
layer exists and has been demonstrated.
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com

mehdi@venus.SanDiego.NCR.COM (Mehdi Bonyadi) (03/15/90)

Hi everybody,
   I wrote a program that deals with the connectivity information of a logic
design.  I took the connectivity information from the output of a commercial
CAD tool set.  What I want to do is to see if it is possible to "kind of"
integrate the schematic capture part of the CAD tool into my program.  I do
not have the sources for the CAD system, since it is a commercial tool.  My
program can find some characteristics of the logic design.  For some of its
functions it needs some input from the user, i.e., the name of a part or the
name of a signal.  Currently, the user must type these names in, but what I
am thinking of doing is to monitor the schematic capture process from the
outside, i.e., from my program, and read the input of the user and the
response of the schematic capture program.  This way the user can just use
the mouse, pick a signal on the schematic, and ask the schematic capture
program for the information on that signal; the response would be a few
lines of text giving the info about the signal.  This information goes to a
tty subwindow of the schematic capture frame, and I want to read this text.

I was told that I can look at /dev/kmem and monitor the clist of that
tty window and go from there.  I was wondering about some of the complications
I would be getting myself into if I went down this path.  For one, not
everybody has read permission on /dev/kmem.  Also, how do I find the clist for
this tty subwindow?  And am I violating copyright if I look at /dev/kmem
and monitor the clist?

By the way, if anyone has done such a thing before, I would appreciate it if
I could take a look at the program.

I am open to any other suggestions that might be applicable to this problem.

---------------------------------------------------------------------------


   Mehdi Bonyadi,  NCR Corporation, E & M San Diego  - Mail Stop 4424
   16550 West Bernardo Drive
   San Diego, CA 92127
   (619) 485-2233 mehdi@venus.SanDiego.NCR.COM