"PSI%SNMSN1::PSI%DSAVX2::WILLY@slb-test.CSNET" (03/15/87)
Forwarded for: Hever Bitteur (BITTEUR%M_SFMVX1@SLB-DOLL.CSNET)

I would be very interested to hear from anyone who has already worked on this problem, especially if neat solutions taking advantage of the high-level concept of the Ada rendezvous have already been found.

The application deals with various tasks. A task uses some data as input and produces some data as output, which can in turn be used partially or totally by another task, and so on. A feature common to all these pieces of data is that they behave like flows of data: typically, we can have sampled raw data which come from sensors and are passed through filters to give engineering data, which in turn can be used to compute derived variables, and so on.

Of course, transmission of data can be achieved with direct rendezvous between a producer and a consumer task. The problem is that we rapidly get into a mess of interconnected tasks, where any modification is painful and where it is difficult to add a new task, since we have to connect the new task to every relevant producer task and every relevant consumer task. A further problem is that this design does not ensure that all the data used by a task at a given time is consistent: there may be some dephasing between one variable and another.

A better approach seems to be to organize the task activities around one or several "Stream Managers", whose function is to store written data, to forward it to the tasks interested in it, and to handle data consistency in a centralized manner. The design then looks like a data bus into which tasks (writers and/or readers) are plugged. Any new task just has to know which bus(es) it has to be plugged into.

I have heard of implementations of this concept in "classical" real time, using queues, events and shared memory. Since this is to be integrated in a multi-tasking Ada application, I am looking for a pure Ada solution. Of course I would like to organize such a (generic) stream manager as a pure server, which could be used efficiently by other Ada tasks. Thank you for any help.
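[A minimal sketch of what such a generic stream manager might look like in Ada. The package name Stream_Manager, the Write and Read entries, and the single-reader FIFO semantics are illustrative assumptions rather than an existing design; a real "data bus" would also need per-reader bookkeeping so that each item can be forwarded to every interested task.]

   generic
      type Item is private;
      Size : in Positive := 16;
   package Stream_Manager is
      task Manager is
         entry Write (X : in  Item);
         entry Read  (X : out Item);
      end Manager;
   end Stream_Manager;

   package body Stream_Manager is
      task body Manager is
         Buffer     : array (1 .. Size) of Item;  -- circular buffer
         Head, Tail : Positive := 1;
         Count      : Natural  := 0;
      begin
         loop
            select
               when Count < Size =>
                  accept Write (X : in Item) do
                     Buffer (Tail) := X;
                  end Write;
                  Tail  := Tail mod Size + 1;
                  Count := Count + 1;
            or
               when Count > 0 =>
                  accept Read (X : out Item) do
                     X := Buffer (Head);
                  end Read;
                  Head  := Head mod Size + 1;
                  Count := Count - 1;
            or
               terminate;
            end select;
         end loop;
      end Manager;
   end Stream_Manager;

[The guarded accept alternatives make the manager a pure server: writers block only when the buffer is full, readers only when it is empty, and the terminate alternative lets the task disappear with its master.]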
ian@loral.UUCP (Ian Kaplan) (03/18/87)
It was stated in the original article that a "pure Ada" solution was sought. I am not sure exactly what this means. Presumably if it is coded in Ada it is "pure", but I think that the author meant more than this.

At the heart of the Ada process model is Hoare's Communicating Sequential Processes (CSP) scheme. Pure process-model algorithms are not very good at handling stream applications. Dataflow, however, is very good at handling streams. The Lucid programming language is designed for stream processing; see "Lucid" by Ashcroft and Wadge, Academic Press. For a brief discussion of the dataflow and process models, see "Programming the Loral LDF 100 Dataflow Machine", by Ian Kaplan, which should appear in SIGPLAN Notices around May.

"Processes" in dataflow communicate asynchronously. This can be simulated in a process-based language by placing a mailbox between the sender and the receiver: the rendezvous are with the mailbox (a small Ada sketch of this appears after this message). The mailbox not only allows asynchronous operation (e.g., pipelining) but also functionally decouples the two communicating processes.

Dataflow is not without its problems, however. The mailbox must provide buffering. If the data producer is faster than the data consumer, the mailbox buffers will fill up; when this happens the producer must suspend. This is what happens with UNIX pipes. For an excellent discussion of UNIX pipes and their implementation, see Chapter 2 of "Operating Systems: Design and Implementation" by Andrew S. Tanenbaum, Prentice-Hall.

                      Until March 27, 1987
                      Ian Kaplan
                      Loral Dataflow Group
                      Loral Instrumentation
                      (619) 560-5888 x4812
        USENET: {ucbvax,decvax,ihnp4}!sdcsvax!loral!ian
        ARPA:   sdcc6!loral!ian@UCSD
        USPS:   8401 Aero Dr. San Diego, CA 92123
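[A minimal sketch of the mailbox pattern described above, reusing the hypothetical Stream_Manager generic shown after the first message. The task names Producer and Consumer, the Integer payload, and the buffer size of 8 are illustrative assumptions. Each task rendezvous only with the mailbox; the producer suspends only when the buffer is full, giving the pipe-like behaviour described above.]

   with Text_IO;
   with Stream_Manager;
   procedure Pipeline_Demo is

      -- Mailbox of integers between the two tasks.
      package Raw_Samples is new Stream_Manager (Item => Integer, Size => 8);

      task Producer;
      task Consumer;

      task body Producer is
      begin
         for I in 1 .. 100 loop
            -- Rendezvous with the mailbox only; blocks only when the
            -- buffer is full, so the producer is otherwise decoupled
            -- from the consumer's pace.
            Raw_Samples.Manager.Write (I);
         end loop;
      end Producer;

      task body Consumer is
         X : Integer;
      begin
         for I in 1 .. 100 loop
            Raw_Samples.Manager.Read (X);
            Text_IO.Put_Line (Integer'Image (X));
         end loop;
      end Consumer;

   begin
      null;   -- the main program simply awaits termination of its tasks
   end Pipeline_Demo;

[Neither task names the other, so a new reader or writer can be "plugged into" the same mailbox without touching the existing tasks.]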