[comp.sys.amiga.tech] IPC - IPCMessage and Networks

shf@well.UUCP (Stuart H. Ferguson) (05/04/88)

In looking over Pete's IPC.h and the Pete/Peter IPCMessage design, I
have found what may be problems with the basic approach.  Pete
replied quite lucidly to my questioning the use of an array of pointers
instead of something more general such as a linked list, and I think I
can justify that decision for myself.  The objection I have now is
somewhat more serious, however. 

Pete says:

> ... One of the design goals
> (!) of the IPCMessage was that an "ignorant" server could for instance pass
> most items down a network, and would know which ones it couldn't, without
> needing any idea of the contents.  If a list node can contain "anything" we
> come bang up against my main objection to AREXX -- that its pointers can
> also contain "anything", so every process has to know "everything".  What's
> a poor dumb server to do?

Pete's "for instance" example of passing messages down a network is
really the *only* purpose I can find for defining a standard such as
this.  I can't see any other reason why a program would examine the
contents of a message unless it was the server for the message in which
case it would know what to expect a priori. 

So first of all, are there any other uses of "dumb" servers besides
network front-ends? 

I think the IPCMessage format is going to run into some problems dealing
with networks.  Not the least of these problems is how does a network
server attach itself transparently into the IPC system?  Will client
programs have to talk explicitly to a network server, or will they just
attach to a server and not care if there's a network involved?  If this
is to be handled transparently using the functions specified in IPC.h,
then networking would have to be *built-in* to these functions.  How
will network servers be supported? 


Another problem is much more subtle.  It seems to me that the only way
to test if a given client-server protocol will work over a network is to
actually test it using a specific network server.  The reason is that
many of the flag bits specified by the Pete/Peter standard have little
meaning for clients and servers, but have _crucial_ meaning for a
network server.  As a result, programs which work fine running on a single
Amiga could break trying to run on networked Amigas.  Worse still,
programs that work while running one network server could break when
running with a different server. 

By way of example, consider a client (C) and a server (S) using the
proposed IPC mechanism.  The relationship between the two consists of
"C" sending some data to "S" in a message and "S" modifying it and
replying.  The data gets communicated using one IPCItem with a pointer
to the data block which is not included in the message itself.  Now
let's say that "S" forgets to set the IPC_MODIFIED flag even though the
data for that IPCItem does get modified.  This mistake could be deadly
to networks in a subtle and nasty way. 
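To make the scenario concrete, here's a rough sketch in C; the structure
layout and the flag's bit position are only my guesses at the proposed
format, not the actual IPC.h definitions:

    /* Guessed-at layout -- the real IPC.h may differ. */
    #define IPC_MODIFIED  (1L << 0)      /* hypothetical bit position */

    struct IPCItem {
        unsigned long ii_Id;       /* what kind of item this is     */
        unsigned long ii_Flags;    /* IPC_MODIFIED and friends      */
        unsigned long ii_Size;     /* size of the data block        */
        void         *ii_Ptr;      /* data NOT carried in the msg   */
    };

    /* What "S" ought to do after changing the block in place: */
    void s_modify(struct IPCItem *item)
    {
        unsigned char *data = (unsigned char *)item->ii_Ptr;
        data[0] ^= 0xFF;                   /* some in-place change  */
        item->ii_Flags |= IPC_MODIFIED;    /* the step "S" forgets  */
    }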

The communication will work fine on a single Amiga since neither "C" nor
"S" really care about the state of the IPC_MODIFIED bit in the returned
message.  "S" justs modifies the data in place in memory and when "C"
gets the message back it has control of the modified data.  The
communication could break down when using a network server as
intermediary, however.  The network front-end will transmit the message
and the data over the network for the "S" process running on the remote
system.  The remote "S" will modify the data but will again FORGET to
set the IPC_MODIFIED bit for that IPCItem.  Now, the network server,
crafty and efficient as it is, _does_ examine the IPC_MODIFIED bit and
only sends back the data blocks which have been modified, so the
modifications that "S" made do *not* get transmitted back over the
network.  As a result, when the message comes back to "C", the IPCItem
still points to unmodified data and "C" has failed to get what it
requested from "S" at all. 

But it's worse than this.  What if the programmer who wrote "C" and "S"
DID test them out over a network, but the network he used was dumber
than the network described above?  This "dumb" network server might
always transmit all the data in the message without regard for the
IPC_MODIFIED bit.  In this case, the server would transmit the data, the
remote "S" program would modify it and the network server would transmit
the modified data back again so that "C" sees the modified data.  The
programmer says, "Great!  It works over a network -- call the
publisher!"  The user with the "smart" network server on the other hand
says, "(sarcastically) Great!!  So it works over a network, huh?  Call
the publisher ..." 

The general problem is that the IPCMessage is a "soft" standard.  It's
just a set of rules and nothing breaks if programmers violate the rules,
except down the line when someone expected the rules to be followed. 
It's rather like expansion memory in the old days of Amiga programming
-- the rule was to put all images into chip ram, but since everything
was chip ram a lot of programmers didn't follow the rules.  God knows
what will break if/when we get an MMU on the Amiga.  Shared data is
_supposed_ to be in public memory, but since everything is public, how
do you test it?

The specific example I gave could be tested against by having a
NETWORK_TEST mode for message ports so that the message passing
functions would simulate a network and catch all the unexpected nasties. 
Seems like a lot of overhead to have in the functions themselves, 
however.
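One possible shape for such a test mode, again assuming the guessed-at
IPCItem layout, with a direct call standing in for the real message
passing (error checking omitted):

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical NETWORK_TEST wrapper: hand the server private copies
     * of the data blocks, then copy back only the ones it marked
     * IPC_MODIFIED.  A forgotten bit now fails on a single Amiga too.  */
    void test_deliver(struct IPCItem *items, int n,
                      void (*server)(struct IPCItem *, int))
    {
        int i;
        void **orig = malloc(n * sizeof(void *));
        for (i = 0; i < n; i++) {
            orig[i] = items[i].ii_Ptr;
            items[i].ii_Ptr = malloc(items[i].ii_Size);
            memcpy(items[i].ii_Ptr, orig[i], items[i].ii_Size);
        }
        server(items, n);      /* stand-in for PutMsg/WaitPort/ReplyMsg */
        for (i = 0; i < n; i++) {
            if (items[i].ii_Flags & IPC_MODIFIED)
                memcpy(orig[i], items[i].ii_Ptr, items[i].ii_Size);
            free(items[i].ii_Ptr);
            items[i].ii_Ptr = orig[i];
        }
        free(orig);
    }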
-- 
		Stuart Ferguson		(shf@well.UUCP)
		Action by HAVOC		(shf@Solar.Stanford.EDU)

peter@sugar.UUCP (Peter da Silva) (05/05/88)

In article <5872@well.UUCP>, shf@well.UUCP (Stuart H. Ferguson) writes:
> Pete's "for instance" example of passing messages down a network is
> really the *only* purpose I can find for defining a standard such as
> this.  I can't see any other reason why a program would examine the
> contents of a message unless it was the server for the message in which
> case it would know what to expect a priori. 

That's like saying "why define a standard for command line arguments?".
Because not all programs will have full capability. That's the big deal
about IFF formats, for example. A program can take what it wants and can
use from the file (or in this case the message) and send the rest back
unopened... (you owe nothing, and get to keep the genuine imitation diamond
ring :->).

> So first of all, are there any other uses of "dumb" servers besides
> network front-ends? 

What do you mean by a "dumb" server? A program with less than full
capacity? Sure... how about a stripped down version to run in machines
with only half a Meg (god, how blasé you get about RAM). How about
a version you hacked up to try stuff out that only handles one message?
Maybe it turns out only a couple of programs use some capability so
you can generally get away with not providing it... and those programs
will still work, or at least not crash.

> I think the IPCMessage format is going to run into some problems dealing
> with networks.  Not the least of these problems is how does a network
> server attach itself transparently into the IPC system?

The discussion really hasn't addressed how you initiate a conversation. Just
how you talk once you've been introduced. I think the next topic should be
just this... rather than more wrangling over message formats.

I think the object-oriented approach presented recently has quite a bit of
merit.

Have you any ideas?

> Another problem is much more subtle.  It seems to me that the only way
> to test if a given client-server protocol will work over a network is to
> actually test it using a specific network server.

You can simulate the network by writing a gateway that filters out the
stuff that's meaningless over a network. File locks, rastports, message
ports... stuff like that; a sketch of such a filter follows. How would you
address this problem?
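By way of illustration, the gateway's filter could be as simple as this
(the item IDs are invented for the example):

    /* Hypothetical gateway filter: strip items whose meaning can't
     * survive a network hop.  All three ID values are made up.      */
    #define IPC_ID_FILELOCK  1L    /* BPTR locks are machine-local      */
    #define IPC_ID_RASTPORT  2L    /* pointers into local graphics data */
    #define IPC_ID_MSGPORT   3L    /* Exec ports don't cross machines   */

    int item_is_network_safe(unsigned long id)
    {
        switch (id) {
        case IPC_ID_FILELOCK:
        case IPC_ID_RASTPORT:
        case IPC_ID_MSGPORT:
            return 0;
        default:
            return 1;
        }
    }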

> By way of example, consider a client (C) and a server (S) using the
> proposed IPC mechanism.  The relationship between the two consists of
> "C" sending some data to "S" in a message and "S" modifying it and
> replying.  The data gets communicated using one IPCItem with a pointer
> to the data block which is not included in the message itself.  Now
> let's say that "S" forgets to set the IPC_MODIFIED flag even though the
> data for that IPCItem does get modified.  This mistake could be deadly
> to networks in a subtle and nasty way. 

This mistake would be deadly anyway, because most programs will ignore
stuff that hasn't been marked as modified. This will break badly-behaved
servers quickly, since they'll look like they're not doing anything.

If you really want to be sure, add the capability to copy messages and forward
replies to the network simulator.

> The communication will work fine on a single Amiga since neither "C" nor
> "S" really care about the state of the IPC_MODIFIED bit in the returned
> message.  "S" justs modifies the data in place in memory and when "C"
> gets the message back it has control of the modified data.

But how does it know which data's modified if it doesn't look at the bits?

> The general problem is that the IPCMessage is a "soft" standard.  It's
> just a set of rules and nothing breaks if programmers violate the rules,
> except down the line when someone expected the rules to be followed. 

What other sort of standard would you define? Let's hear your suggestions.
-- 
-- Peter da Silva      `-_-'      ...!hoptoad!academ!uhnix1!sugar!peter
-- "Have you hugged your U wolf today?" ...!bellcore!tness1!sugar!peter
-- Disclaimer: These aren't mere opinions, these are *values*.

shf@well.UUCP (Stuart H. Ferguson) (05/07/88)

+--- Peter da Silva 
| In article <5872@well.UUCP>, shf@well.UUCP (Stuart H. Ferguson) writes:
| > Pete's "for instance" example of passing messages down a network is
| > really the *only* purpose I can find for defining a standard such as
| > this.  

| That's like saying "why define a standard for command line arguments?".
| Because not all programs will have full capability. 
	...
| ... how about a stripped down version to run in machines
| with only half a Meg (god, how blasé you get about RAM). How about

The point about command line arguments makes sense.  I can see how you
might want to define a complex protocol for something and then implement
only the most useful part of it.  The IPCMessage makes that kind of
thing relatively easy.  My (small) objection to it is that, the way the
format is defined, a large part of the work rests on the
shoulders of the _client_, which seems wrong.  All the server has to do
is ignore the parts of the message it doesn't understand.  The client on
the other hand has to examine the replied message carefully to see what
the server ignored and what it understood.

| ... That's the big deal
| about IFF formats, for example. A program can take what it wants and can
| use from the file (or in this case the message) and send the rest back

I hope you don't want your message format hated as much as the IFF
standard. ;-)  In fact, IFF files turn out to be a bad analogy for these
very same reasons -- the work gets done by the wrong participant.  It's
MUCH easier to write IFF files than to read them, the reverse of the way
you would want it. 

| > ... how does a network
| > server attach itself transparently into the IPC system?
| The discussion really hasn't addressed how you initiate a conversation. Just
| how you talk once you've been introduced. I think the next topic should be
| just this... rather than more wrangling over message formats.

Ok.

| I think the object-oriented approach presented recently has quite a bit of
| merit.
| Have you any ideas?

Since the object-oriented approach IS my idea, I certainly do have
ideas. ;-)  The difference between my o-o approach and the Exec "named 
message ports" approach is analogous to the difference between the 
Yellow Pages and the White Pages.  Named message ports provide a
one-to-one mapping between names and ports, where the names are just
something that the programmer made up for their program.  This is like
the White Pages in that you have to know the specific name of the server
you want to use.  The object-oriented approach uses a many-to-one
mapping between services and ports, where services are an abstract
description of an operation that a server will perform.  This is like
the Yellow Pages in that you use the service you want performed to look
up the message port. 
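In code terms the difference looks roughly like this -- FindPort() is
real Exec, but FindServerFor() is pure invention on my part, and so is
the port name:

    #include <exec/types.h>
    #include <exec/ports.h>
    #include <proto/exec.h>

    /* White Pages: Exec's named-port lookup; you must already know the
     * exact name the server's author picked.  (Real code would bracket
     * the lookup and use of the port with Forbid()/Permit().)          */
    struct MsgPort *white_pages(void)
    {
        return FindPort((STRPTR)"SuperPaint-rendezvous");  /* name made up */
    }

    /* Yellow Pages: look the port up by the service you want done.
     * FindServerFor() does not exist anywhere -- it's the idea.        */
    extern struct MsgPort *FindServerFor(char *service, char *objclass);

    struct MsgPort *yellow_pages(void)
    {
        return FindServerFor("render", "bitmap");
    }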

I can think of a couple of ways for a network server to wedge itself
transparently into such a system right off the top of my head. 

The o-o approach needs more than what the IPCMessage provides, however, 
since it needs a standard for data exchange as well as messages.  The 
data exchange standard is the "objects" in the scheme.

[my example of a possible network problem and most of Peter's reply
deleted for brevity.] 

| But how does it know which data's modified if it doesn't look at the bits?

In my example, it knows by virtue of knowing in advance what the server
is supposed to do.  If the command returned success, the client doesn't
bother to check bits because it knows what the server did.  On a
non-networked Amiga, this assumption works even though it is a "wrong"
assumption.  It only fails using certain network servers. 
Hypothetically. 

| > The general problem is that the IPCMessage is a "soft" standard.  It's
| > just a set of rules and nothing breaks if programmers violate the rules,
| > except down the line when someone expected the rules to be followed. 
| 
| What other sort of standard would you define? Let's hear your suggestions.
| -- Peter da Silva      `-_-'      ...!hoptoad!academ!uhnix1!sugar!peter

What I call a "hard" standard is one whose design is such that mistakes
are less likely.  Pipes, for example, are much more smoothly integrated
into a networking situation, since the metaphor of a byte pipe is a
closer match to what networks do than passing pointers.  The filesystem
can be patched at the DOS level to work on a network, so programs that
communicate using the filespace will naturally work over a network with
no (or very little) hassle.  The network and the functionality of data
transfer are transparent to each other (unless you're being really
crufty) so the programmer doesn't have to worry about the two domains
interacting in unexpected ways.

I don't want to do my interprocess communication by pipe, however.  A
set of library routines for creating and dealing with messages would go
a long way to helping matters.  When programmers can't figure out what
bits to set on their own, they can just call the supported library
routine which is guaranteed to do the right thing.  Also, care needs to
be taken in designing the flag bits and the library routines to make
things easy for the client programmer.  At least that's how I would do
it. 
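For example, one possible shape for such a routine -- both the routine
and its helper are invented here, not anything from the proposed IPC.h:

    /* Hypothetical convenience routine: the library, not each
     * programmer, owns the rules about which bits to set.       */
    extern struct IPCItem *NextFreeItem(struct IPCMessage *msg); /* made up */

    struct IPCItem *AddIPCItem(struct IPCMessage *msg, unsigned long id,
                               void *data, unsigned long size,
                               unsigned long flags)
    {
        struct IPCItem *item = NextFreeItem(msg);
        if (item) {
            item->ii_Id    = id;
            item->ii_Size  = size;
            item->ii_Ptr   = data;
            item->ii_Flags = flags;
        }
        return item;
    }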

Overall, I like the IPCMessage format.  Even though I find there are
things I can't do with it, it does have a simplicity that I find
appealing.  I'm trying to create a hybrid standard using the best parts 
of both the IPCMessage format and the object-oriented design I have been 
working on.  I'll shout if I come up with anything brilliant.
-- 
		Stuart Ferguson		(shf@well.UUCP)
		Action by HAVOC		(shf@Solar.Stanford.EDU)

jesup@pawl23.pawl.rpi.edu (Randell E. Jesup) (05/08/88)

In article <5896@well.UUCP> shf@well.UUCP (Stuart H. Ferguson) writes:
 >Named message ports provide a
 >one-to-one mapping between names and ports, where the names are just
 >something that the programmer made up for their program.  This is like
 >the White Pages in that you have to know the specific name of the server
 >you want to use.  The object-oriented approach uses a many-to-one
 >mapping between services and ports, where services are an abstract
 >description of an operation that a server will perform.  This is like
 >the Yellow Pages in that you use the service you want performed to look
 >up the message port. 

	Sounds like a job for SUPER-MAPPER!  Seriously, the name mapper I'm
working on (mainly for expansion serial ports) would be perfect for the
Yellow Pages-style lookup, since I generalized it for use with software as
well as hardware.

     //	Randell Jesup			      Lunge Software Development
    //	Dedicated Amiga Programmer            13 Frear Ave, Troy, NY 12180
 \\//	beowulf!lunge!jesup@steinmetz.UUCP    (518) 272-2942
  \/    (uunet!steinmetz!beowulf!lunge!jesup) BIX: rjesup
(-: The Few, The Proud, The Architects of the RPM40 40MIPS CMOS Micro :-)

peter@sugar.UUCP (Peter da Silva) (05/09/88)

In article <5896@well.UUCP>, shf@well.UUCP (Stuart H. Ferguson) writes:
> All the server has to do
> is ignore the parts of the message it doesn't understand.  The client on
> the other hand has to examine the replied message carefully to see what
> the server ignored and what it understood.

I don't see a way around this. Have you an alternative?

> I hope you don't want your message format hated as much as the IFF
> standard. ;-)

I *like* the IFF standard. I don't know why people are turned off by it.
The only thing that's obscure about IFF is the recursive stuff, which is a
pain, but very few programs use it.

> In fact, IFF files turn out to be a bad analogy for these
> very same reasons -- the work gets done by the wrong participant.  It's
> MUCH easier to write IFF files than to read them, the reverse of the way
> you would want it. 

This is a general problem with communication between programs written by
different people. You generally have a standard that's too simple to be
useful or one that's too complex to use. IFF is a good compromise.

> The o-o approach needs more than what the IPCMessage provides, however, 
> since it needs a standard for data exchange as well as messages.  The 
| data exchange standard is the "objects" in the scheme.

From my reading I got the impression that servers were the objects, and the
o-o stuff was mainly intended to get the right guys talking... with the
added advantage that you could do a "sendsuper" to your parent for messages
you didn't grok.

> Overall, I like the IPCMessage format.  Even though I find there are
> things I can't do with it, it does have a simplicity that I find
> appealing.  I'm trying to create a hybrid standard using the best parts 
> of both the IPCMessage format and the object-oriented design I have been 
> working on.  I'll shout if I come up with anything brilliant.

Please do. I'll give your stuff another read.
-- 
-- Peter da Silva      `-_-'      ...!hoptoad!academ!uhnix1!sugar!peter
-- "Have you hugged your U wolf today?" ...!bellcore!tness1!sugar!peter
-- Disclaimer: These aren't mere opinions, these are *values*.

shf@well.UUCP (Stuart H. Ferguson) (05/11/88)

+--- Peter da Silva writes:
| In article <5896@well.UUCP>, shf@well.UUCP (Stuart H. Ferguson) writes:
| > All the server has to do
| > is ignore the parts of the message it doesn't understand.  The client on
| > the other hand has to examine the replied message carefully to see what
| > the server ignored and what it understood.
| I don't see a way around this. Have you an alternative?

Yes, but you may not like it.  I would require that the server reply
with an error if it failed to understand ANY part of the message.  
Clients would then be guaranteed to get what they asked for or nothing
at all.  The message format could provide status flags for optional
arguments, but the client would have control over which arguments were
optional, not the server, so the client only has to check the
"understood" bit for items that it marked as "optional."  This puts the
work on the server side which is where I would want it. 

I could imagine a case where a client requests that a bitmap be modified
in a certain way, and the server only understands half the arguments and
does irreparable damage to the bitmap.  The idea above would prevent
this kind of misunderstanding. 
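A sketch of that rule, with made-up IPC_OPTIONAL and IPC_UNDERSTOOD bits
and item_is_known() standing in for however the server recognizes item
IDs (the IPCItem layout is the guessed-at one from earlier in the thread):

    #define IPC_OPTIONAL    (1L << 1)   /* set by the client  */
    #define IPC_UNDERSTOOD  (1L << 2)   /* set by the server  */

    extern int item_is_known(unsigned long id);     /* stand-in */

    /* Server side: refuse the whole request rather than half-do it. */
    int server_accepts(struct IPCItem *items, int n)
    {
        int i;
        for (i = 0; i < n; i++) {
            if (item_is_known(items[i].ii_Id))
                items[i].ii_Flags |= IPC_UNDERSTOOD;
            else if (!(items[i].ii_Flags & IPC_OPTIONAL))
                return 0;    /* reply with an error; touch nothing */
        }
        return 1;            /* safe to act on the message         */
    }

The client then only checks the IPC_UNDERSTOOD bit on items it marked
IPC_OPTIONAL; everything else is either done or refused outright.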

| > I hope you don't want your message format hated as much as the IFF
| > standard. ;-)
| I *like* the IFF standard. I don't know why people are turned off by it.
| The only thing that's obscure about IFF is the recursive stuff, which is a pain,

Despite my comment, I actually do like the IFF standard, and I agree
that it's a reasonable compromise given the problems with devising any
standard.  It's just that I've met a lot of people who curse the IFF
format and the code EA provided for it.  I conclude that it's not the 
format that people hate, but rather the fact that there are no good,
generally available tools for reading and writing it.  An IFF.library
would do a great deal towards improving the IFF format's P.R. problem. 

| > The o-o approach needs more than what the IPCMessage provides, however, 
| > since it needs a standard for data exchange as well as messages.  The 
| > data exchange standard is the "objects" in the scheme.
| From my reading I got the impression that servers were the objects, and the
| o-o stuff was mainly intended to get the right guys talking... with the
| added advantage that you could do a "sendsuper" to your parent for messages
| you didn't grok.

No, the situation you describe where the server itself is the object is
a degenerate case which can be used to get the more familiar "program
controls program" communication metaphore using the object-oriented
approach.  In general, objects are data structures describing the target
for the command in a message.  For example, if someone defined a
bitmap-class object, then this class of object could be operated on by
many different servers.  This results in a greater unification and
uniformity of tools (theoretically). 

In the degenerate case, the server declares itself to operate on a
"dummy" class of object which just allows clients to find its message
port(s).  The class hierarchy would also be implemented by the
IPC.library, at least the way I imagine it, so it would be transparent
to both clients and servers. 
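Purely as a guess at the shape of things, an "object" might be no more
than a descriptor like this (the class codes are invented):

    #define CLASS_BITMAP  1L     /* invented class codes */
    #define CLASS_DUMMY   0L

    struct IPCObject {
        unsigned long  io_Class;   /* e.g. CLASS_BITMAP                */
        unsigned long  io_Size;    /* size of the instance data        */
        void          *io_Data;    /* the bitmap, its palette, etc.    */
    };

    /* Degenerate case: a server registers a dummy class whose only
     * job is to let clients find its message port.                 */
    struct IPCObject just_find_me = { CLASS_DUMMY, 0, (void *)0 };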

| -- Peter da Silva      `-_-'      ...!hoptoad!academ!uhnix1!sugar!peter
-- 
		Stuart Ferguson		(shf@well.UUCP)
		Action by HAVOC		(shf@Solar.Stanford.EDU)

peter@sugar.UUCP (Peter da Silva) (05/11/88)

In article ... jesup@pawl23.pawl.rpi.edu (Randell E. Jesup) writes:
> 	Sounds like a job for SUPER-MAPPER!  Seriously, the name mapper I'm
> working on (mainly for expansion serial ports) would be perfect for the
> Yellow Pages style lookup, since I generalized for use with software, as
> well as hardware.

Could you perhaps let us in on SUPER-MAPPER, and maybe indicate how it's
an improvement over existing mechanisms (setup files, environment variables,
etcetera...)?
-- 
-- Peter da Silva      `-_-'      ...!hoptoad!academ!uhnix1!sugar!peter
-- "Have you hugged your U wolf today?" ...!bellcore!tness1!sugar!peter
-- Disclaimer: These aren't mere opinions, these are *values*.