craig@unicus.UUCP (Craig D. Hubley) (03/13/88)
Well, I wrote this early on in the debate and hung onto it to stew over it - I guess the time has come.

Stuart is proposing an object-oriented model for the IPC, that is, `Please [method] my [object]' rather than `[program] controls [program]'. I think this is the right way to go, since it is no more difficult, lends itself to reliable control structures, and is a general case of the more specific master-slave paradigm.

I've saved all the comments on this and was going to quote heavily from them and describe how this idea can satisfy everyone's concerns, but this message would just be too damn long. Suffice it to say that I think this structure addresses everything yet discussed in design terms. I also don't think it opens up any more implementation concerns (is a port still live, etc.) than the others.

I'm going to post this in three parts. The third one is far from complete, so it will wait, and I want net input on it. The first two are not complete either, and certainly have many bugs, but they are fairly explicit at least.

This is Part (A) - environment: what the user knows about, what the programmer knows about, and what the IPC system itself knows about. I'm calling that system `Ways and Means', since I know damn well that that won't become a de facto standard name. Those of you who hate conceptual discussions should skip this one.

Part (B) is about the structure that implements (A). Despite the glorious all-encompassing views in (A), this is a very simple system. It boils down to:

    One process running all the time, at a fairly high priority.

    A list of methods available to programs that need them.

    A list of ports, one that implements each method.

    A list of `registered users' of each port, *absolutely* including
    those who have requested the port that implements a method, rather
    than just requesting that the method be performed. It must be
    assumed that these users are using it directly.
    A permanent, editable list of where methods that are currently
    unavailable (not in memory) might be found. (A text file.)

    A preference flag somewhere decides whether the user is to be
    consulted as a last resort, or whether the request fails.

The standards it imposes are message formats to do the following (M.C means Method on Class, in my notation; IFF is a bunch of data, in an encapsulated IFF format or something):

For service users:

    tell `Ways and Means' that a capability M.C is in memory
    tell `Ways and Means' that, for this session, it can find M.C in
        location X (ie, df0:util/view.ANIM) - this handily overrides
        capabilities that already exist
    ask `Ways and Means' if a capability M.C is loaded
    ask `Ways and Means' for a capability M.C to be loaded
    ask `Ways and Means' for M.C to be performed on data IFF
    ask `Ways and Means' for a port that performs M.C
    tell `Ways and Means' that (that program) no longer needs M.C
    tell `Ways and Means' that the program (supplier or user) is
        disappearing
    tell `Ways and Means' that M.C will be needed soon - this lets it
        optimize or prepare for requests, or let programs bottom out
        gracefully
    etc...

For service suppliers (from W&M to the program):

    tell the program to perform M.C on data IFF
    tell the program that no one is using its services; it may unload
        itself, or whatever

For service users:

    verification messages for each of the above

Each of the library routines can encapsulate the request/verification. Any class exists because a program claims it exists. Any method exists because a program claims it exists. Note that most programs will both supply and use services.

`Ways and Means' keeps records of form:

    Class   Method   Port   References

That is: the Class, the Method it operates on, the Port performing it, and References (given out or in use) to that port. If references are 0 for all services provided by a program, it will be informed of this condition.
    User   Port   (one record per reference)

The user program that received the port's name (we must assume it is using it directly). This record is deleted when the user program dies or indicates it no longer needs that service, and the reference count is decremented.

    Method   [default1]   [default2]   [default3]

This allows user programs to ask for methods to be performed with incomplete information. For example, IFFX.include might have a default of `fromfile', so that an IFFX can always be included from a file. While running a hypertext system, default information for editors and viewers would include positional information.

I've probably forgotten some, but this is an outline, after all :-)

Part (C) talks about the code and exact standards and documentation required, as well as addressing the possible implementation problems.

The structures

Back to part (A): here's my object-oriented model. It borrows heavily from Pete's message broker, but I have generalized it somewhat:

The User Model
--------------

The Amiga environment as it stands is preserved. Extensions to it consist of placing a conceptual envelope around each data structure to turn it into an object:

    A Class
    A set of methods performable on this class

Note that, for data specific to an application, there is always at least one implicit method available to every instance of that class: start up the application with me (the instance) in it, ready to be used/run/etc. Note also that the user can often ignore what program provides the method.

A Capabilities Browser or extended Workbench can be constructed to treat the Amiga environment as a set of Classes with Instances and Capabilities, but this is not necessary, and in fact would require a fair bit of work to `gloss over' the existing tools. Though it is a reasonable project, and would prove the Amiga's flexibility beyond a shadow of a doubt.

The Programmer's Model
----------------------

The Amiga environment as it exists is preserved.
The programmer need not be concerned about these new facilities. They can be ignored, and a program can run from start to finish without worrying about concerns other than those implicit in multitasking. For those who use the capabilities, however, here are the characteristics of the model:

Any method available in the system need not be duplicated. Access to those methods is achieved through `Ways and Means', *but*: when ongoing access to a facility is required, such as data streams or continuous updating of the state of the file system, direct access (the port) can be requested. This is as reliable as any other means of program-to-program communication.

Programs whose facilities can be used are required to inform `Ways and Means' when they are available and unavailable. `Ways and Means' should have a way, given some overhead, of starting them up when their capabilities are required. That is, `Ways and Means' has a permanent record of what capabilities are known to exist where. This is a standard text file that can be edited, etc., and is only a guideline. `Ways and Means' is guaranteed only to try to load the capabilities it is told are standard. It fails gracefully, and usually will ask the user to load that facility or to insert a disk known to have it. It can use paths, etc. Programmers who `must' have a capability available can load it or supply it themselves, or use a `MustHave' message that tells `Ways and Means' to dig it up and verify it.

A library that hides the request and verification process is provided. It also hides other niceties, such as having an include file declare the encapsulating structures. Nobody need actually send a message unless they want to, explicitly. Function calls can use methods on objects.

Programmers need not know IFF structure unless they are constructing IFF files. Routines are provided to construct an IFF byte sequence from the fields in the encapsulated structure.
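As an illustration only, the function-call encapsulation described above might reduce to something like the following C sketch. WM_Demand, WMStatus, and wm_transport are all invented names; a real version would build a request message, PutMsg() it to the W&M port, and WaitPort() for the verification.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical result of a request, carried back by the verification
 * message. */
enum WMStatus { WM_OK, WM_UNAVAILABLE, WM_FAILED };

/* Stand-in for the real message transport (PutMsg/WaitPort on the W&M
 * port).  Here it simply pretends that only ANIM.preview is a
 * currently loaded capability. */
static enum WMStatus wm_transport(const char *method, const char *object)
{
    (void)object;   /* unused in this sketch */
    return strcmp(method, "ANIM.preview") == 0 ? WM_OK : WM_UNAVAILABLE;
}

/* The call a user program would see: `please [method] my [object]'.
 * Building the request, sending it, and waiting for the verification
 * are all hidden inside. */
enum WMStatus WM_Demand(const char *method, const char *object)
{
    if (method == NULL || object == NULL)
        return WM_FAILED;
    return wm_transport(method, object);
}
```

The point of the sketch: the using program never touches a message structure; it just calls WM_Demand and inspects the status.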
A programmer can always refer to `that thing that I just made the user create with DPaint', without looking at it, to request that further operations be performed.

Some functions are very simple and unambiguous, and can be performed in the library itself (checksum this structure, *or* this bitmap with that bitmap, etc.). This is typically the case for methods on objects that can be performed by simple combinations of routines in the ROMs, or standard libraries. No need to add task overhead to these.

Programs supplying methods can include a standard library that will turn the request messages into simple function calls, from the supplier's point of view, or they can watch their ports for the messages themselves. If extensive port usage is going on anyway, the latter will usually be the case.

Summary
-------

Programs can both use and supply methods. If they are users, they need only request the services, usually through a standard library. If they are suppliers, they need only respond to the messages - in the simplest case by putting a function pointer to an internal method into a struct at runtime. This is simple enough to be done in a library.

They must also inform W&M that their services are available, and when they are not available (usually shortly before terminating). If they are prevented from carrying out a service, or are unduly delayed, they should send a status report to W&M. W&M can use this to decide how to respond to further requests for that capability. Generally, it doesn't call the same method before its last call is finished, because the perfect overlap of capabilities required can only mean thrashing.

Again, library facilities exist to `do a loop until it's done or in error' or `try it and give me the results', and the message structures are known for direct monitoring. The using program can also request the port itself and monitor it directly.

Specific Concerns
-----------------

I see the role of ARexx in all this as that of a normal supplier/user.
ARexx is an intelligent user of these services, a script language capable of automating most of what you would want to do with interacting programs. It should be considered both a user and a supplier of methods, but putting an ARexx program in the bottleneck as the W&M itself is probably not practical. For one thing, there would be no general list of *all* of the capabilities available to the user or a using program. At least, this would not be guaranteed, and might lead to several ARexx scripts requiring the same resources attempting to load them several times, or similar problems. No doubt a good W&M could be written in ARexx, at least a good prototype, but ARexx itself is not one.

Efficiency is an important concern. I have tried to structure things so that the users, suppliers, and W&M are all apprised of events as soon as possible. Users that require direct access to a method can request its port and are thereafter assumed to be using it directly. Library routines support this. Users that require streams can `negotiate' that with the supplier. I haven't thought this one out deeply yet, but it seems to me that continuous streams of data sent between programs that `know about' each other are extensions of the `do this to that' philosophy.

I humbly request feedback, as this is built on all of your ideas plus my own. Though it's hard to say in advance what can of worms is being opened, I think that the information-processing load on the W&M is as minimal as possible while still maintaining generality. Furthermore, if hooks are provided to let programs optimize their use of each other and themselves, then that will make life very, very easy for everyone. If this is totally transparent, we will have a truly object-oriented environment. Then we can rewrite Workbench! :-)

Part (B), with all the errors in my outline (that *you* will tell me about), is on its way. Part (C) sits on many of the ideas already proposed.
I count on their authors to point out where this model jibes/doesn't jibe with their proposed structures.

Some examples of the system in use:

When new utilities enter the system, they tell the W&M `hi, I'm VideoMagnaSculpt4D and I can preview an ANIM, edit an ANIM, use any of (list of IFF types), etc.' Meanwhile, you hook up your DialABrew to the serial port and it tells W&M: `hi, I'm DialABrew and I can brew some JAVA'. The W&M stores them by class, that is, it now knows it can:

    ANIM preview
    ANIM edit
    IFF1 use
    IFF2 use
    JAVA brew

Now some application signals that it has an IFF2 done - shall we say Deluxe Paint has just finished a HAM picture of the Mandrill - and notifies the W&M. Now it can:

    include IFF2 [from file]
    include IFF2 Mandrill

Note that it can always do the former.

Back in Video...4D, being an original sort, I decide to map the Mandrill onto a three-dimensional object. To get the Mandrill I go to a standard `include' menu which consults the W&M. I get the list above. I select the Mandrill and position it. This is under the W&M's control. As I click the Mandrill into position, Video...4D gets a message to include the Mandrill at that spot:

    include IFF2 Mandrill (location)

It was ready to accept that message because I selected include from the menu, and it knows to wait for the W&M to tell it what to include. No selection facility is needed in the application - that's handled in common. Just the ability to understand the message.

When I'm done wrapping it around the block or whatever, I tell the W&M, with another operation, to take this new picture and make it available. W&M now can:

    include IFF2 [from file]
    include IFF2 Mandrill
    include IFF2 WarpedMandrill

Even if Mandrill has been stored to save space, its name should stay until the end of the session.

Meanwhile a nifty ANIM arrives via the news, and I have my uupc rigged to send such things to the W&M to get themselves ready to be reviewed, so I can decide if I want them immediately.
W&M is sent its filename and a request for it to be uncompressed:

    untar FILE MickeyMouse

which W&M itself knows how to do, calling the utility. (The broker should be responsible for all data-form conversions.) It tells uupc that the job is done. uupc then asks for a preview:

    preview ANIM MickeyMouse

Preview requests are things only the user can grant, at least on my machine, so the W&M's icon changes to a bursting W&M or something, and I can go look at it directly if I like, and activate the preview. When I do, all the other operations available show up, and I decide that while looking at Mickey, I want some coffee. So I select `brew JAVA'. Since I didn't give it an instance of JAVA, it makes me find a JAVA file that tells it how to brew my favourite coffee. Then I select `preview ANIM MickeyMouse'. Since this is unambiguous, it looks for a previewer, finds Video...4D and sends it off.

Matt:

> -Macro Expansion capability (which is a superset of what ARexx
>  would give us) e.g. program X gives the driver a symbolic macro
>  which the driver executes, possibly causing remote-control
>  commands be sent back to program X and/or to other applications.

ARexx itself could implement such capabilities once it knew our protocol. However, each message/request, since they would be encapsulated in structures anyway before being sent, could be sent as a list of such requests instead.

> -Fully reentrant and stackable remote commands, with infinite
>  loop detection, etc... without taking a huge amount of
>  overhead.

Tall order. Reentrant is OK, *if* the supplier gives W&M the address of the function. This is ambiguous in my model; I was counting on it responding to messages, which is an extra level of checking. Infinite loop detection, well, that would have to be a W&M `watcher', with the capability of asking the user if he wanted to stop something, or turning it off after a time declared in the `known capabilities' file.

> e.g.
> program X sends a Macro to the driver, which sends resolved
> commands back to program X which just happen to be macros in
> program X which is expanded again and sent as a Macro to the
> driver, which sends resolved commands to program's X and Y,
> program X executing the command, etc....

If you actually pass pointers to functions around, then this is possible. I support doing that for this very reason. The program simply informs W&M that it is a user of method X.foo and a supplier of method X.foo. If you can rig it up so that, after the initial function call, this costs very little overhead, or none, that would be fantastic. Now that I think of it, it *can* be done...

> Also allowing program X to handle such things synchronously or
> asynchronously (not wait for macro completion before continuing,
> but not getting confused either).

That would be two different ways to call it through the library. W&M could also be queried directly about status, or the port itself could be asked for and used.

> -Error recovery... so things don't freeze up.

W&M should watchdog this, watching time spent and comparing it against what it `should' take. If you wanted to expand the model, W&M could know of several programs capable of supplying the same method, such as `edit this ANIM', in a descending order of preference. I already outlined recovery methods for known capabilities in unknown places.

> -Exit recovery. Allow program X to exit at any time without
>  screwing up any macro's in progress (have them gracefully fall
>  to their deaths). i.e. calling SarcClose() clears anything
>  pending.

Perhaps the best way to do this is to have the supplier program's `I'm going off the air' message wait for anything pending to clear before returning. I guess that's the same as your SarcClose().

> -Standard command format. Ability to specify logical or physical
>  streams (i.e. supply your own functions or use DOS functions,
>  etc...).

Streams I haven't thought broadly about yet.
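To make the function-pointer idea above concrete, here is a minimal C sketch of a supplier handing W&M a pointer to an internal method, which W&M (or the library) can then invoke as a plain call, with no message overhead after registration. Every name here (WMMethod, wm_register, wm_call) is invented for illustration.

```c
#include <assert.h>
#include <string.h>

#define WM_MAXMETHODS 16

/* A registered capability: method name M.C plus the supplier's
 * internal function that performs it. */
struct WMMethod {
    const char *name;                  /* e.g. "X.foo"          */
    int (*perform)(const char *data);  /* pointer to the method */
};

static struct WMMethod wm_table[WM_MAXMETHODS];
static int wm_count;

/* Supplier side: `I supply X.foo, and here is how to call it.' */
int wm_register(const char *name, int (*fn)(const char *))
{
    if (wm_count >= WM_MAXMETHODS)
        return -1;
    wm_table[wm_count].name = name;
    wm_table[wm_count].perform = fn;
    wm_count++;
    return 0;
}

/* User side: after the initial lookup, this is just a function call. */
int wm_call(const char *name, const char *data)
{
    for (int i = 0; i < wm_count; i++)
        if (strcmp(wm_table[i].name, name) == 0)
            return wm_table[i].perform(data);
    return -1;   /* no program has claimed this method */
}

/* Example method a supplier might register. */
static int foo_calls;
static int do_foo(const char *data) { (void)data; foo_calls++; return 42; }
```

A program that is both a user and a supplier of X.foo simply registers do_foo and then calls it like anything else, which is exactly the recursive macro case quoted above.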
Commands are basically divided into *availability* commands, which request or change the availability of methods; *demands*, which require that methods be performed on objects; and *inquiries*, which ask about the status of demands.

> -Low level IFF decoder for arbitrary IFF streams. Since IFF is
>  structured, the decoder would not have to know specific formats,

This would be W&M providing a class XXXX, where XXXX is the IFF's name, with methods as named by the suppliers. This would be put into the standard IFF structure with a pointer to the `funny stuff', as always.

> -Ditto Low level IFF encoder for arbitrary IFF chunks.. Structures
>  the chunks for you.

Part of the same concept. I assume we have to translate chunks to structs anyway to manipulate them in memory. After all, who wants to write code that counts bytes?

> -Run time Library to handle interaction.

Yes, yes, yes. On both ends. In my opinion, requiring programmers to use messages, when this is still a novel idea to many people, may make the whole thing flop. Function-call encapsulation of requests, verifications, etc., is an absolute necessity. It can even be provided for the suppliers, if people aren't afraid of pointers to functions.

Pete:

>The reward is an integrated environment without equal. An interprocess
>communication standard is not just another linear development, but has very
>non-linear effects, leveraging each component of the system against the
>others. Multi-tasking multiplies the power of individual tools together.
>An interprocess communication standard promises to multiply them again.

Yes, but only if it is easier to use an already-existing method than to write a new one. IPCs are nothing new. Ones that everyone understands are not.

> 1. What is the communication model to be? Clients & servers?
>    Asyncronous free-for-all? Other ideas?

A free-for-all in which clients can find servers. True free-for-all arrangements are possible, but normally can be set up directly amongst cooperating programs.
It gets scary if they aren't designed to cooperate. Don't forget that priorities are available as a tool to regulate the amount of CPU time that gets eaten, and by whom.

> 2. What are some important message classes to define as standard? What
>    message does everyone have to understand?

All IFF classes should be encapsulated, with a generic structure. The standard libraries understand all *necessary* messages, maybe not all *convenient* ones. It really depends on time and space for implementors.

> 3. How do the public named message ports figure into all of this,
>    especially question number 1?

Public named message ports should remain as is. The library could simplify calls to them, making them appear as if you were calling W&M. Where more sophisticated capabilities are required, W&M can simply use methods that use the public named message ports and were polite enough to register themselves as suppliers. :-)

> 4. Can devices be folded in as a special case, or should they just be
>    regarded as their own standard?

As above. They can be accessed directly or via the library, as if W&M were handling them.

Ron:

>My feeling has always been that the Amiga and multitasking should obsolete
>the "integrated" approach. Much better to have small modular programs that

Yes, but not possible until *one function call* can substitute for writing your own editor, etc. I truly believe this. My experience with Interlisp has convinced me that it is only at this level that *tool use* is painless. Ron also wanted reliable ports and a sort of a `port pipe'. I hope that this model satisfies him.

Stuart:

>So, briefly this time, I propose the following for standard object and
>message structures:
>
>    struct Object {
>        long obj_class;
>        long obj_size;
>        char obj_data [ /* value of obj_size */ ];
>    }

You need a structure for Class as well. Object should be more general: it should cover everything from datafile names to structs in memory. This is necessary in order to hide `where it came from'.
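One hypothetical way to generalize Object along those lines is to tag each reference with where its data currently lives. The tag, field names, and struct below are all my own invention, not part of Stuart's proposal:

```c
#include <assert.h>

/* Where an object's data currently lives. */
enum ObjWhere { OBJ_MEMORY, OBJ_FILE, OBJ_PENDING };

/* A generalized object reference: class and size as before, plus a tag
 * saying whether the data is in memory, in a named file like
 * "df0:pics/Mandrill", or not made yet (`that thing the user is about
 * to create'). */
struct ObjectRef {
    long          obj_class;
    long          obj_size;
    enum ObjWhere obj_where;
    const char   *obj_name;   /* file name, or a session name */
};

/* Can this reference be used right now, or must it be loaded first? */
int obj_needs_load(const struct ObjectRef *o)
{
    return o->obj_where != OBJ_MEMORY;
}
```

With a tag like this, the library can hide `where it came from': a user program holds the same ObjectRef whether the bytes are resident or still out on a disk.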
To be used, an object must be loaded into memory, but useful references can be made to objects that are elsewhere (`that thing the user just made...', please put it on a disk for me).

>    struct IPCMessage {
>        struct Message ipc_mess;
>        struct Object *ipc_obj;
>        long ipc_code;
>        char ipc_data [ /**/ ];
>    }
>
>The size of the message, and therefore ipc_data, is given by a field in
>ipc_mess. Ipc_obj would point to the object that the message is
>"directed at," and ipc_code would be a four-character code for the
>operation requested by the message, such as SHOW or EDIT. In both

This is nice because the using program doesn't have to know what class it's dealing with (one of the big pluses of O-O). However, messages sent to objects (via the W&M) with particular operations in mind will almost always have to provide additional information, if certain context information like screen location, etc., is required. The W&M is useful because it can hold default values for the values not known. That is, as method users our calls can have *optional parameters*.

>For the original concept that sparked this discussion -- the tools-based
>hypercard system -- object-oriented programming is *IDEAL*. A CARD
>could be a class of object and each link could be specified as a message
>to another object. One link could tell an ANIM or SMUS object to PLAY
>itself and another could just tell another CARD object to display
>itself. The possible objects would be unlimited and user-extensible,
>and the individual processes to do each thing could be small since they
>can ride on the power of the other parts of the system.

Yes, but there are of course problems. For one thing, in a hypertext environment it matters a great deal *where* viewers and editors are placed, and this location information is not part of an IFF text :-). The problem above has to be solved first, and fairly thoroughly.
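The optional-parameter idea can be sketched as a small default-filling step inside W&M. WMDefaults, wm_fill, and the three-slot layout are illustrative only, echoing the `Method [default1] [default2] [default3]' record in the outline:

```c
#include <assert.h>
#include <string.h>

#define WM_SLOTS 3

/* Hypothetical per-method default record, one per method. */
struct WMDefaults {
    const char *method;          /* e.g. "IFFX.include"      */
    const char *slot[WM_SLOTS];  /* e.g. { "fromfile", ... } */
};

/* Fill any argument the caller left NULL from the method's defaults.
 * Returns the number of slots that ended up holding a value. */
int wm_fill(const struct WMDefaults *d, const char *args[WM_SLOTS])
{
    int filled = 0;
    for (int i = 0; i < WM_SLOTS; i++) {
        if (args[i] == NULL)
            args[i] = d->slot[i];
        if (args[i] != NULL)
            filled++;
    }
    return filled;
}
```

So a bare `include IFFX' becomes `include IFFX fromfile', while a caller who supplies `Mandrill' overrides the default, which is exactly the optional-parameter behaviour described above.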
Thankfully, if you have a W&M that is responsible for defaults, you can give it a default like `when you have to place a hunk of text, call CardPlace()'. This allows context to greatly influence the non-essential aspects of method performance. But we have to make a list of these parameters beforehand. Of course we can always provide new, improved methods (with different names, of course). To make it a truly expandable environment, though, it must be backwards-compatible *and* able to incorporate ideas not thought of at inception. No trivial task, even if this *is* just a glorified port-pipe controller.

This is too long. Part (B) coming up when I have time and feedback. *Please* give me feedback.

Craig Hubley, Unicus Corporation, Toronto, Ont.
craig@Unicus.COM                                (Internet)
{uunet!mnetor, utzoo!utcsri}!unicus!craig       (dumb uucp)
mnetor!unicus!craig@uunet.uu.net                (dumb arpa)