[comp.sys.amiga.programmer] Inter process communication

rg20+@andrew.cmu.edu (Rick Francis Golembiewski) (03/07/91)

I'm working on an application that I'd like to implement with
separate tasks handling different sections (i.e. one task monitors user
IO, one task handles setting program parameters, one task does display).

What I want is an efficient way to have these tasks talk to each
other.  I could use files, but I'd like to have a function like
waitformessage() so that I don't have a busy-wait loop, and I'd like
to have two-way communication.

I know that AmigaDOS has some message system, however I don't have
sufficient documentation to implement it (and I'd like to have fairly
straightforward calls; I don't want to mess around with lots of
structures). I seem to remember some kind of library for inter-process
communication being released; does anyone know what this library was
called?

I'd appreciate any suggestions anyone may have.

//     Rick Golembiewski  rg20+@andrew.cmu.edu  \\
\\       #include stddisclaimer.h               //
 \\  "I never respected a man who could spell" //
  \\               -M. Twain                  //

ken@cbmvax.commodore.com (Ken Farinsky - CATS) (03/07/91)

In article <gbpGPo600Vp3I4v4g4@andrew.cmu.edu> rg20+@andrew.cmu.edu (Rick Francis Golembiewski) writes:
>
>What I want is an efficient way to have these tasks talk to each
>other.  I could use files, but I'd like to have a function like
>waitformessage() so that I don't have a busy-wait loop, and I'd like
>to have two-way communication.
>
>I know that AmigaDOS has some message system, however I don't have
>sufficient documentation to implement it (and I'd like to have fairly
>straightforward calls; I don't want to mess around with lots of
>structures). I seem to remember some kind of library for inter-process
>communication being released; does anyone know what this library was
>called?

Purchase the "Amiga ROM Kernel Manual: Libraries and Devices" and read the
chapter called "Exec: Messages and Ports".  I recommend that you read
as much as possible, as the Amiga is a very complex programming environment.
It is a good idea to have the "Amiga ROM Kernel Manual: Includes and
Autodocs" as well (for reference).

Available at a bookstore near you (you can order them if not in stock):

	Amiga ROM Kernel Manual: Libraries and Devices
		Addison Wesley
		ISBN 0-201-18187-8
	Amiga ROM Kernel Manual: Includes and Autodocs
		Addison Wesley
		ISBN 0-201-18177-0
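
For a taste of what that chapter covers, the basic pattern looks roughly
like this (a minimal sketch only: the port name and payload field are
invented, most error handling is omitted, and the server does one
exchange for brevity; CreatePort()/DeletePort() come from amiga.lib):

#include <exec/types.h>
#include <exec/ports.h>
#include <proto/exec.h>
#include <clib/alib_protos.h>     /* CreatePort()/DeletePort() */

struct MyMsg {
    struct Message mm_Msg;        /* standard Exec header comes first */
    LONG           mm_Value;      /* application-defined payload      */
};

void server(void)
{
    struct MsgPort *port = CreatePort("rick.port", 0);  /* public */
    struct MyMsg   *m;

    if (port) {
        WaitPort(port);                       /* no busy wait: sleeps */
        while ((m = (struct MyMsg *)GetMsg(port)) != NULL)
            ReplyMsg(&m->mm_Msg);             /* two-way: send it back */
        DeletePort(port);                     /* one exchange only     */
    }
}

void client(void)
{
    struct MsgPort *replyport = CreatePort(NULL, 0);    /* private */
    struct MsgPort *svrport;
    struct MyMsg    mm;

    if (replyport) {
        mm.mm_Msg.mn_Node.ln_Type = NT_MESSAGE;
        mm.mm_Msg.mn_Length       = sizeof(mm);
        mm.mm_Msg.mn_ReplyPort    = replyport;
        mm.mm_Value               = 42;

        Forbid();                             /* FindPort needs this */
        svrport = FindPort("rick.port");
        if (svrport) PutMsg(svrport, &mm.mm_Msg);
        Permit();

        if (svrport) {
            WaitPort(replyport);              /* sleep until replied */
            GetMsg(replyport);
        }
        DeletePort(replyport);
    }
}
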
--
Ken Farinsky - CATS - (215) 431-9421 - Commodore Business Machines
uucp: ken@cbmvax.commodore.com   or  ...{uunet,rutgers}!cbmvax!ken
bix:  kfarinsky

poe@daimi.aau.dk (Peter Ørbæk) (03/08/91)

rg20+@andrew.cmu.edu (Rick Francis Golembiewski) writes:

.....

>I know that AmigaDOS has some message system, however I don't have
>sufficient documentation to implement it (and I'd like to have fairly
>straightforward calls; I don't want to mess around with lots of
>structures). I seem to remember some kind of library for inter-process
>communication being released; does anyone know what this library was
>called?

It might be my odin.library that you have seen. It was posted on
comp.sources.amiga some time ago.
It should be ftp'able from abcdf20.larc.nasa.gov and wuarchive.

Good luck...

   - Peter. (poe@daimi.aau.dk)

--
**************************************************************
* "Who other than IBM would want to put a mainframe on       *
*  everybody's desk."                                        *
**************************************************************

navas@cory.Berkeley.EDU (David C. Navas) (03/11/91)

In article <> jnmoyne@lbl.gov (Jean-Noel MOYNE) writes:
>
>       There is a good internal message system in the OS of the Amiga (in 
>the exec.lib to be precise). Message passing using exec works very well, 
>and is quite straightforward to understand and implement (see the RKM for 
>help).

Except for one rather hairy problem -- disappearing server ports...

>       But there is another solution: Pete Goodeve has written a package 
>called IPC for "Inter Process Communication" (how did you guess ? (-:). 
>
>PS: Pete's EMail is goodeve@violet.berkeley.edu

Does that work as well?  I've always mailed pete@violet.berkeley.edu, and
even his finger gives his login name as "pete", not "goodeve".

PPIPC (Pete & Peter's IPC) is very good for server-client modelled
communications, can get hairy for one-to-many broadcasts, and still doesn't
solve the disappearing ReplyMsg port, but I use it *always* instead of the
regular calls, because of the added protection it provides.
At least one other person on the net has used it in his scientific
application, but I'll let him discuss what he thinks about it...


David Navas                                   navas@cory.berkeley.edu
"Saddam was a man who trusted only himself.  Seems like he trusted one man
 too many..." [Also try c186br@holden, c260-ay@ara and c184-ap@torus]

kent@swrinde.nde.swri.edu (Kent D. Polk) (03/12/91)

In article <11854@pasteur.Berkeley.EDU> navas@cory.Berkeley.EDU writes:
>In article <> jnmoyne@lbl.gov (Jean-Noel MOYNE) writes:
>
>>       But there is another solution: Pete Goodeve has written a package 
>>called IPC for "Inter Process Communication" (how did you guess ? (-:). 
>>
>
>PPIPC (Pete & Peter's IPC) is very good for server-client modelled
>communications, can get hairy for one-to-many broadcasts, and still doesn't
>solve the disappearing ReplyMsg port, but I use it *always* instead of the
>regular calls, because of the added protection it provides.
>At least one other person on the net has used it in his scientific
>application, but I'll let him discuss what he thinks about it...

...Um...
...Me?..

At the risk of taxing the patience of those who keep seeing me talk
about this, please forgive.

I use PPIPC in server/client and peer/peer relationships in my data
acq. system. I mainly use it to get around the "disappearing ReplyMsg
port" problem in conjunction with a semaphore-controlled message-port
list of peer/peer tasks. I haven't noticed a problem with this using
PPIPC under 2.0 in the client/server mode, as my mechanisms pretty much
prevent this from happening (I think).

However, it can happen during the peer/peer modes, and so I use the
semaphore list & PPIPC facilities to determine what the replyport is.
The semaphore list usually gives me an available replyport since the
semaphore is obtained while a peer task is shutting down. (I control
startup/shutdown of tasks to explicitly control this).  In the case
where I have lost the peer msgport, I then again consult the semaphore
list to determine the next peer to send the message to. This mechanism
allows my data acq. system to handle inserting/deleting filter tasks
while messages/data are running through the system.

As you can see, I don't have as much of a problem since I attempt to
control the situations which would lead to the problems you guys are
discussing. I also use very few of the fancy PPIPC capabilities right
now, but am working on it. :^)

Kent Polk: Southwest Research Institute (512) 522-2882
Internet : kent@swrinde.nde.swri.edu
UUCP     : $ {cs.utexas.edu, gatech!petro, sun!texsun}!swrinde!kent

navas@cory.Berkeley.EDU (David C. Navas) (03/15/91)

In article <1808@swrinde.nde.swri.edu> kent@swrinde.nde.swri.edu (Kent D. Polk) writes:
>In article <11854@pasteur.Berkeley.EDU> navas@cory.Berkeley.EDU writes:
>>and still doesn't
>>solve the disappearing ReplyMsg port, but I use it *always* instead of the
>>regular calls, because of the added protection it provides.
>>At least one other person on the net has used it in his scientific
>>application, but I'll let him discuss what he thinks about it...
>
>...Um...
>...Me?..

Why yes, how did you guess :) :)...

>At the risk of taxing the patience of those who keep seeing me talk
>about this, please forgive.

Nah, this is csa.p where such things live to be talked about :)

>I use PPIPC in server/client and peer/peer relationships in my data
>acq. system. I mainly use it to get around the "disappearing ReplyMsg
>port" problem in conjunction ...

Hmm, I think either we have a misunderstanding, a mistyping, or I have missed
a rather important part of PPIPC!!

It seems from your later statements that you are using PPIPC to avoid the
"disappearing port" problem, not the "disappearing ReplyPort" problem.

In particular, I think you have the following (please forgive me if I get the
	PPIPC calls named incorrectly, it's been too long...):


	ServerList --> port1 --> port2 --> port3 --> port4 --->||

   Each port has been properly UsePort()'ed, and some time later:

	Task1:  ServeIPCPort(port1);
	Task2:  ServeIPCPort(port2);
	Task3:  ServeIPCPort(port3);
	Task4:  ServeIPCPort(port4);

   Now when you go to send a message you do a PutIPCMessage(port1, msg);
if it succeeds, you await a reply, and if not you try port2.  Once you
receive the reply, you move on to port2.  This process continues.

At any time thereafter, task4 can ShutIPCPort(port4), and port4 will no
longer accept *new* messages, although it will still be around....
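
In sketch form, the walk might look like this (a guess at the shape
only -- the call names are as used in this thread and may not match the
real ppipc.library; NPORTS and await_reply() are invented):

struct IPCPort;                 /* opaque for this sketch */
struct IPCMessage;

extern int  PutIPCMessage(struct IPCPort *, struct IPCMessage *);
extern void await_reply(struct IPCMessage *);   /* hypothetical */

#define NPORTS 4

void walk_servers(struct IPCPort *ports[], struct IPCMessage *msg)
{
    int i;

    for (i = 0; i < NPORTS; i++) {
        if (PutIPCMessage(ports[i], msg))   /* accepted: server alive */
            await_reply(msg);               /* get it back, move on   */
        /* a failed put means the port was shut -- just try the next */
    }
}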

The ReplyPort problem I have is this:

	Let's say that when you send a message off to port1, you *don't*
	await your response, and send off to port2, port3 and port4.  Now
	you shut yourself down.

	What do you do with those messages when the ReplyMsg (really,
	the PutIPCMsg(msg->ipc_Msg.mn_ReplyPort, msg) ) fails?  Specifically,
	when TaskX tries to reply, the message *bounces* off the shut port.
	Do you delete the message?  What if the message has pointers in it --
	do you de-allocate those regions too?  How?


If you have developed a task-independent solution, both Pete and I would
be rather interested...  (of course, that's short of counting the number of
messages actually sent out and awaiting their reply, which under certain
circumstances might be quite difficult...)

>Kent Polk: Southwest Research Institute (512) 522-2882
>Internet : kent@swrinde.nde.swri.edu
>UUCP     : $ {cs.utexas.edu, gatech!petro, sun!texsun}!swrinde!kent

Thanks Kent!


David Navas                                   navas@cory.berkeley.edu
"Oh, that's an Apple???  I though they just shot themselves in the head..."
[Also try c186br@holden, c260-ay@ara and c184-ap@torus]

kent@swrinde.nde.swri.edu (Kent D. Polk) (03/15/91)

In article <12004@pasteur.Berkeley.EDU> navas@cory.Berkeley.EDU writes:
>
>In particular, I think you have the following (please forgive me if I get the
>	PPIPC calls named incorrectly, it's been too long...):
>
>
>	ServerList --> port1 --> port2 --> port3 --> port4 --->||
>
>   Now when you go to send a message you do a PutIPCMessage(port1, msg);
>if it succeeds, you await a reply, and if not you try port2.  Once you
>receive the reply, you move on to port2.  This process continues.

Actually I make use of a special message id which by design does not
expect a reply. All others do. The non-replying message is passed to
the successor message port (succ_port) as in your example below.

>The ReplyPort problem I have is this:
>
>	Let's say that when you send a message off to port1, you *don't*
>	await your response, and send off to port2, port3 and port4.  Now
>	you shut yourself down.
>
>	What do you do with those messages when the ReplyMsg (really,
>	the PutIPCMsg(msg->ipc_Msg.mn_ReplyPort, msg) ) fails?  Specifically,
>	when TaskX tries to reply, the message *bounces* off the shut port.
>	Do you delete the message?  What if the message has pointers in it --
>	do you de-allocate those regions too?  How?
>

I provide a solution to the problem by limiting the possibilities.
Unfortunately, limiting the possibilities also limits functionality,
but... I'm working on it. I handle the problem like this:

  Manager (Task launcher/manager and semaphore messageport list setter-upper:^)

   > Sampler --> port1 --> port2 --> port3 --> port4 --->|
   |                                                     |
   ------------------------------------------------------ 

In this mechanism, the sampler allocates a 'stack' of "BELT" messages,
and attaches a data event buffer ptr to each as they get sent out.
These messages have a "BELT" message id. When the receiver gets this
message, and recognizes it to be a "BELT" id, it nulls the ReplyPort
ptr, and calls relay_msg():

/******************************/
/* Relay a message to the next port in the semaphore-protected list.
 * TIMES, my_node (our node in the list, with its succ_port field),
 * sema and mputs() are defined elsewhere in the program.
 */
int relay_msg(msg)
struct IPCMessage *msg;
{
   int i;
   int didit = FALSE;

   if (my_node) {
      for (i = 0; i < TIMES && !didit; i++) {
         ObtainSemaphore(sema);
         if (PutIPCMsg(my_node->succ_port, msg)) didit = TRUE;
         ReleaseSemaphore(sema);
         if (!didit) Delay(10);			/* "Drive friendly" */
      }
      if (!didit) mputs("Error sending message");
   }
   return (didit);
}

Note that the PutIPCMsg() has never failed the first time (except when I forced
it for testing purposes), even in the midst of message ports being inserted and
deleted, thanks to the semaphore-controlled list.

Now, my solution to the problem is that the Sampler (which allocates
the BELT message stack) doesn't shut down until ALL BELT messages have
been returned. Note that this is possible since the sampler is the
originator of all BELT messages. BTW, the messages are allowed to come
in in any order as the 'stack' is an array of message pointers. When a
new message comes in, its address simply gets assigned to the array
index I am pushing back on the stack. In actuality, I don't believe
they have ever gotten out of order (except when I was developing this
stuff & lost a few messages here and there).
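
In outline, that 'stack' might look something like this (NBELT and the
function names are invented for illustration):

#include <exec/types.h>                   /* NULL */

#define NBELT 16                          /* hypothetical depth */

struct IPCMessage;                        /* real one is in ppipc headers */

struct IPCMessage *belt[NBELT];           /* filled at allocation time */
int belt_avail = NBELT;                   /* how many are 'home'       */

struct IPCMessage *belt_pop()             /* grab one to send out */
{
   return (belt_avail ? belt[--belt_avail] : NULL);
}

int belt_push(msg)                        /* one came back -- any order */
struct IPCMessage *msg;
{
   belt[belt_avail++] = msg;              /* slot order, not send order */
   return (belt_avail == NBELT);          /* TRUE: safe to shut down    */
}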

Actually there is one more message that is passed this way - an "INIT"
message, which tells all filters to reconfigure for another sampler.

Back to ReplyPort being nulled: I read in the RKMs that if the
ReplyPort is null, no message is sent. I use this to provide a simple
mechanism without exceptions in my message handler stuff -> Normal
messages get replied to, BELT and INIT messages get relayed.
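
As a sketch, that dispatch rule comes out to something like this
(MSG_BELT, MSG_INIT and the 'id' argument are hypothetical names;
ipc_Msg.mn_ReplyPort is as used earlier in this thread):

#include <exec/types.h>
#include <exec/ports.h>

#define MSG_BELT 0x42454C54L   /* 'BELT' -- made-up id values */
#define MSG_INIT 0x494E4954L   /* 'INIT' */

struct IPCMessage {            /* stand-in: the real layout is in the
                                * ppipc headers */
   struct Message ipc_Msg;     /* only the field this sketch touches */
};

extern int relay_msg();        /* from the listing above */
extern int PutIPCMsg();        /* ppipc.library call     */

void handle_msg(msg, id)
struct IPCMessage *msg;
ULONG id;
{
   if (id == MSG_BELT || id == MSG_INIT) {
      msg->ipc_Msg.mn_ReplyPort = NULL;   /* nulled: never replied */
      relay_msg(msg);                     /* pass along the chain  */
   } else {
      PutIPCMsg(msg->ipc_Msg.mn_ReplyPort, msg);   /* normal reply */
   }
}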

Now, back to your question:

>If you have developed a task-independent solution both Pete and myself would
>be rather interested...  (of course, that's short of counting the number of
>messages actually sent out and awaiting their reply, which under certain
>circumstances might be quite difficult...)

:^)

Now, can I join in on the discussion? I have some ideas... some things I
want to do - like expanding (rewriting) to provide for several new projects.
Specifically in this order:

- Getting rid of some of the limitations (Making better use of PPIPC capabilities)
- Providing a graphic environment for this stuff,
- Writing a graphic "logic simulator"
- Expanding & combining the logic simulator with the data-acq. stuff to
provide a complete data-acq. hardware simulator.

Ambitious, aren't I? (Wish I could do this for a living)

I was talking with Pete about this, but he abandoned me so he could go
play with his Amiga. Imagine that! (Pete; a sense of humor now...)

----------------------------------------------------------------------
This is while I try to work on that gpib.device. BTW, can anyone point
me to source or at least a complete outline for writing a multi-unit
device - as in something like scsi.device? I have started on the
design, but would like a better idea of how Amiga devices like this
should be set up - i.e. I'm starting from a Unix background and trying
to convert my ideas from there to the Amiga way of doing this stuff.
There are a number of ways of handling device commands as well as the
controller (local) commands themselves. Help would be very much
appreciated.

Thanks,
Kent Polk: Southwest Research Institute (512) 522-2882
Internet : kent@swrinde.nde.swri.edu
UUCP     : $ {cs.utexas.edu, gatech!petro, sun!texsun}!swrinde!kent

jdickson@jato.jpl.nasa.gov (Jeff Dickson) (03/16/91)

In article <12004@pasteur.Berkeley.EDU> navas@cory.Berkeley.EDU writes:
>In article <1808@swrinde.nde.swri.edu> kent@swrinde.nde.swri.edu (Kent D. Polk) writes:
>>In article <11854@pasteur.Berkeley.EDU> navas@cory.Berkeley.EDU writes:
>
>At any time thereafter, task4 can ShutIPCPort(port4), and port4 will no
>longer accept *new* messages, although it will still be around....

	Does that mean the message port never gets deleted?
>
>The ReplyPort problem I have is this:
>
>	Let's say that when you send a message off to port1, you *don't*
>	await your response, and send off to port2, port3 and port4.  Now
>	you shut yourself down.
>
>	What do you do with those messages when the ReplyMsg (really,
>	the PutIPCMsg(msg->ipc_Msg.mn_ReplyPort, msg) ) fails?  Specifically,
>	when TaskX tries to reply, the message *bounces* off the shut port.
>	Do you delete the message?  What if the message has pointers in it --
>	do you de-allocate those regions too?  How?
>
>
>Thanks Kent!
>David Navas                                   navas@cory.berkeley.edu

	This is my two cents on how to implement an IPC system that is not
plagued by the problems I believe exist in the IPC discussed above.

	1. Have a master IPC task that hands out message ports on request.
	   The call from the client will really send a request message to
	   the master. The message will include the client's task id and
	   an allocated signal number. The master will construct the message
	   port, but it will live in the context of the client, because the
	   signal number belongs to the client and the constructed message
	   port SigTask field will be initialized to that of the client.
	   (See the code sketch after this list.)

	2. >Let's say that when you send a message off to port1, you *don't*
	   >await your response, and send off to port2, port3 and port4.  Now
	   >you shut yourself down.

		Messages that have arrived at a particular message port are
	   queued in a list that is only accessible off it. The memory
	   consumed is not dependent on the underlying task being present
	   or not. You couldn't simply disappear. You can't do that with
	   allocated memory, open files, etc. You would have to at least
	   relinquish the message port. The master would set SigTask to
	   itself and the signal to some preallocated "catch all" signal.
	   That way the master could do cleanup before deleting it. Also,
	   if the message port were to stick around, this way you could
	   arrange for it to no longer accept messages. Really the task
	   that the message port belonged to could not be sent a message,
	   but the master could be. It wouldn't reply, but it would delete
	   them. The above could be considered synonymous with
	   ShutIPCPort(port4).

	3. If the relinquished message port contains pointers to other
	   objects, then either the client has to deal with them before
	   dying, or the master has to know them.
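
	A sketch of point 1 might look like this (all names are invented
	for illustration; the MsgPort fields are standard Exec):

#include <exec/types.h>
#include <exec/ports.h>
#include <exec/memory.h>
#include <proto/exec.h>
#include <clib/alib_protos.h>      /* NewList() */

struct PortRequest {               /* sent to the master's own port */
    struct Message  pr_Msg;
    struct Task    *pr_Client;     /* client's task id (FindTask(NULL)) */
    UBYTE           pr_SigBit;     /* signal number the client allocated */
    struct MsgPort *pr_Port;       /* master fills this in */
};

/* Master's side: build a port that signals the *client*, not itself. */
struct MsgPort *make_client_port(struct PortRequest *req)
{
    struct MsgPort *port = (struct MsgPort *)
        AllocMem(sizeof(struct MsgPort), MEMF_PUBLIC | MEMF_CLEAR);

    if (port) {
        port->mp_Node.ln_Type = NT_MSGPORT;
        port->mp_Flags        = PA_SIGNAL;
        port->mp_SigBit       = req->pr_SigBit;  /* client's signal  */
        port->mp_SigTask      = req->pr_Client;  /* wakes the client */
        NewList(&port->mp_MsgList);
    }
    return port;
}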

	This is merely speculation on how I would design an IPC system. I
	am at my real job and don't have my manuals at hand.

						Jeff

		P.S. Apologies if my way is illegal, but I don't see how
	it could be.

pete@violet.berkeley.edu (Pete Goodeve) (03/17/91)

I guess I should join in on this thread -- a bit late I'm afraid.
[every time I get back from Christmas in England, it takes me months to
get back up to date.  One of the areas I've studiously been avoiding is
c.s.a.p...]  Thanks to Dave Navas for pointing it out to me a couple
of days ago.


In  <gbpGPo600Vp3I4v4g4@andrew.cmu.edu> (6 Mar),
Rick Francis Golembiewski (rg20+@andrew.cmu.edu) started it with:
>
>
> I'm working on an application that I'd like to implement with
> separate tasks handling different sections (i.e. one task monitors user
> IO, one task handles setting program parameters, one task does display).
>
> [........] I seem to remember some kind of library for inter-process
> communication being released; does anyone know what this library was
> called?
>

As other people have suggested already, you're probably thinking of
ppipc.library, available on Fish #290 (around June '89).  This is still
the current version -- it is stable enough not to have needed any updates.
NOT because I've lost interest!  It is still very much alive, and is an
integral part of my own environment, especially the "IP:" pipe device
(Fish #374).

Dave Navas may not have made it completely clear, but he is the author of
JazzBench, of which ppIPC is also a component. [NOT the creative part!! (:-))]
Kent Polk has been using it in a suite of Scientific Data Acquisition modules
that are very impressive.

To reiterate briefly, the idea of ppIPC is to use Exec messages -- which
are fast and flexible -- but give them a framework that makes it easier for
independently written program modules to communicate.  First, it provides
a "safety net" in the form of the IPCPort, which is protected against the
disappearance of the task that is receiving its messages (a hazard with
standard Exec ports).  Second, its IPCMessage has a standard, but
non-restrictive, structure that is "self-identifying" as to the function
of the message -- similar in spirit to IFF (but a lot simpler!); a receiver
can tell immediately from the header ID, and the various item IDs within
the message, what the Client requires.  (Or alternatively it can decide it
hasn't the foggiest idea what the Client requires, and reject the message..)
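
Purely to illustrate the flavor of a "self-identifying" message -- this
is NOT the actual ppIPC layout (see the Fish #290 docs for that), and
every name here is made up:

#include <exec/types.h>
#include <exec/ports.h>

struct FakeItem {                /* one self-identifying item        */
    ULONG fi_Id;                 /* says what this item means        */
    APTR  fi_Data;               /* the data, or a pointer to it     */
    ULONG fi_Size;               /* size, plus ownership flags       */
};

struct FakeIPCMessage {
    struct Message fm_Msg;       /* ordinary Exec message underneath */
    ULONG          fm_Id;        /* header ID: overall function      */
    ULONG          fm_NumItems;  /* how many items follow            */
    struct FakeItem fm_Items[1]; /* the items themselves             */
};

/* A receiver checks fm_Id first; if it hasn't the foggiest idea what
 * the Client requires, it rejects the whole message unexamined. */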



=======================================


In  <10708@dog.ee.lbl.gov> (7 Mar),
Jean-Noel MOYNE (jnmoyne@lbl.gov) writes:
>
>        There is a good internal message system in the OS of the Amiga
> [....]
>
>        But there is another solution: Pete Goodeve has written a package
> called IPC for "Inter Process Communication" (how did you guess ? (-:).
Thanks for the plug, Jean-Noel! (:-))
>
>        Anyway, even if you want to go fast and use IPC for your project,
> it is still a very good idea to buy the RKM, and to read about exec's
> message passing, since it's used everywhere on the Amiga (Yes, you can
> actually say that AmigaOS is sort of "Object Oriented OS", no kidding).

Agreed -- on all points.

> PS: Pete's EMail is goodeve@violet.berkeley.edu

Oops!  Ektchewally, I'm on FIRST name terms with our computer...! (:-))
-- it's:  pete@violet.berkeley.edu


=======================================


In  <1991Mar8.112635.1393@daimi.aau.dk> (8 Mar),
Peter Ørbæk (poe@daimi.aau.dk) writes:
> It might be my odin.library that you have seen. It was posted on
> comp.sources.amiga some time ago.
What IS this!?  Is EVERYBODY who works on IPC named Peter?? (:-))
(I'm afraid I missed your system though -- have to see if I can find it...)

=======================================

In  <11854@pasteur.Berkeley.EDU> (10 Mar),
David C. Navas (navas@cory.Berkeley.EDU) writes:
> PPIPC (Pete & Peter's IPC) is very good for server-client modelled
> communications, can get hairy for one-to-many broadcasts, and still doesn't
> solve the disappearing ReplyMsg port, but I use it *always* instead of the
> regular calls, because of the added protection it provides.

Valid points.  One facility I've wanted for a long time is a "broadcast"
mechanism -- pre-dating ppIPC by quite a ways. [I remember doing some heavy
thinking on it while driving up to Expo 86...]  A partial solution is
the "mani" facility (only available from abcdf20.larc.nasa.gov so far,
I'm afraid) that clones a wide range of IPC messages to multiple
destinations. I would like, though, a more general "broker" that accepts
notifications of interest in "topics" from other processes, and handles the
distribution of relevant messages between them.  Some day.

The ReplyPort question has led to quite a bit of correspondence between
Kent and Dave which I hope you read, because I'm not going to reproduce it
here [I tried, but the summaries kept ending up longer than the
originals... (:-))]  I hope my comments have some relevance.
...........

I sort of get the feeling that *I'm* maybe missing something, but I've
never worried all that much about Replies [...as many of my correspondents
have noticed...(:-))].  There seem to me to be about three ways you could
handle the problem.

The most straightforward -- and the one I have usually adopted -- is that
if you send a message WITH a ReplyPort address included, you WILL wait
around until that message gets back; the ReplyPort is therefore just an
Exec Port, not an IPCPort, and the original sender remains responsible for
the message and its contents (unless transfer of some of the contents has
been requested by flags in the message).  This DOES require keeping a count
of outstanding messages, and assumes that the Server is not going to go
belly up before it replies, but so far I haven't had much problem with
that.
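
In code, that bookkeeping amounts to little more than this (a minimal
sketch; 'outstanding' is a count you bump on every successful send):

#include <exec/types.h>
#include <exec/ports.h>
#include <proto/exec.h>

void drain_replies(struct MsgPort *replyport, int outstanding)
{
    while (outstanding > 0) {        /* don't exit with msgs in flight */
        WaitPort(replyport);         /* sleep until something returns  */
        while (GetMsg(replyport) != NULL)
            outstanding--;
    }
}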

A second tack is NOT to expect a reply.  The protocol has no problem with
this -- though it may make some people uncomfortable...!  In this case,
though, the receiver of the message has to dispose of it when done, which
probably (not necessarily) means that the message contents should obey
all the rules for pointers and sizes and so on; if that's so, ANY receiver
could dispose of the message and its data blocks without trouble.  All data
would have to be transferred to the receiver -- no pointers to strings in
the Client code!

The third idea is derived from what I understand Kent has done -- don't
specify a ReplyPort directly, but keep passing the message on via
PutIPCMessage()s until it gets back to its original sender. (In the two
process case, this devolves to what I think Dave was talking about: using
PutIPCMessage to return the message directly to its sender.  Note that it
isn't part of the basic ppIPC spec to expect that the ReplyPort field of a
message is an IPCPort; it does no harm if it is, but it would be a real
hazard the other way round!)  It's critical here, of course, that all the
cooperating programs "know what they're doing", because it's not part of
the ppIPC specification, but, given the protective walls of specific port
names combined with specific message IDs, this shouldn't be too hard. The
tricky part is for a process to know where to pass the message; Kent
handles this with his `Manager-launcher...', and -- from his comments -- it
wasn't that easy to do, but it seems to work.

If I understand Dave aright, his problem is with deciding what to do, in
that sort of situation, when you try to pass on (or back) a message and it
hits a "shut" IPCPort.  You then have to dismantle and dispose of it
yourself.  As I said, though, you should only get involved in such a
situation if you're using "well formed" IPCMessages, with every data block
they reference having the "transfer" flag set, so that the data becomes the
responsibility of whoever owns the message at that moment. With that true,
you don't ever have to worry that you might bomb things by deleting
something you shouldn't.

Or have I indeed missed something...?

=======================================


>
> Now, can I join in on the discussion? I have some ideas...
> [....]
>
> I was talking with Pete about this, but he abandoned me so he could go
> play with his Amiga. Imagine that! (Pete; a sense of humor now...)
>
No, No... NO!!  Something MUCH more important -- my new Yamaha keyboard!
Of course when I get a MIDI box too, and CAN hook it up to my Amiga....

                                        -- Pete --

pete@violet.berkeley.edu (Pete Goodeve) (03/17/91)

In  <1991Mar15.185824.8200@jato.jpl.nasa.gov> (15 Mar),
Jeff Dickson (jdickson@jato.jpl.nasa.gov) writes:
>In article <11854@pasteur.Berkeley.EDU> navas@cory.Berkeley.EDU writes:
|>
|> At any time thereafter, task4 can ShutIPCPort(port4), and port4 will no
|> longer accept *new* messages, although it will still be around....
>
>       Does that mean the message port never gets deleted?
>
The thing about ppIPC is that a Port doesn't "belong" to either the Server
or Clients using it.  It is managed by the library, and stays around just
as long as there are any references to it.  "Shutting" a port is a
convenience that lets the server handle any outstanding messages while
blocking any more from arriving, before finally dropping its own reference
to the port.

>       This is my two cents on how to implement an IPC system that is not
> plagued by the problems I believe exist in the IPC discussed above.

Nahh! ...NO problems.... (:-))

Seriously, though, I think ppIPC is covering exactly the points you are
worried about.  Go check out the docs on Fish #290.

>
>       1. Have a master IPC task that hands out message ports on request.
>          The call from the client will really send a request message to
>          the master. [....]
As I say, the ppipc.library does precisely this (although it
is a resident library rather than a separate task).  Because it is a single
"common channel" it can manage the assignment (and deallocation) of ports
safely and easily.

>
>       2. [...] Messages that have arrived at a particular message port are
>          queued in a list that is only accessible off it. The memory
>          consumed is not dependent on the underlying task being present
>          or not. You couldn't simply disappear. You can't do that with
>          allocated memory, open files, etc. You would have to at least
>          relinquish the message port. The master would set SigTask to
>          itself and the signal to some preallocated "catch all" signal.
>          That way the master could do cleanup before deleting it. Also,
>          if the message port were to stick around, this way you could
>          arrange for it to no longer accept messages. Really the task
>          that the message port belonged to could not be sent a message,
>          but the master could be. It wouldn't reply, but it would delete
>          them. The above could be considered synonymous with
>          ShutIPCPort(port4).
Again, this is the way ppipc works. Except that there is really no need to
have a "surrogate" process to handle messages for a departed task.  If the
task no longer exists, the message is simply blocked (PutIPCMessage fails)
and the sender disposes of it again.  There is, though, provision in the
system for a "Broker" that handles loading of the processes that service
particular ports, and -- if there is some reason for it -- this is perfectly
capable of assuming responsibility for orphan messages.
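
In other words (a sketch only; dispose_msg() is a hypothetical cleanup
routine, and the call name is as used in this thread):

struct IPCPort;
struct IPCMessage;
extern int  PutIPCMessage(struct IPCPort *, struct IPCMessage *);
extern void dispose_msg(struct IPCMessage *);    /* hypothetical */

void send_or_reclaim(struct IPCPort *port, struct IPCMessage *msg)
{
    if (!PutIPCMessage(port, msg))
        dispose_msg(msg);    /* bounced: msg (and data) is still ours */
}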

>
>       3. If the relinquished message port contains pointers to other
>          objects, then either the client has to deal with them before
>          dying, or the master has to know them.
...Taken care of by the definition of the IPCMessage format.  Flags indicate
for example whether the data remains the property of the sender (and will
not be dumped until the message is replied) or is to be transferred to
the server.

>
>       This is merely speculation on how I would design an IPC system. I
>       am at my real job and don't have my manuals at hand.

Well, they MUST be good ideas... they're awfully close to mine (and others)!
(:-)) (:-))
                                        -- Pete --