hm@uva.UUCP (HansM) (03/29/89)
We are trying to understand Ada tasking and there are two things we fail to
understand:

1. When an exception is raised and not handled in a task body, the task
   is terminated and the exception is not further propagated, without
   notice (11.4.1.8). Why is this?
   Is there a way to invoke the kind of traceback that occurs when an
   exception is propagated out of the main program?

2. When a task has completed its execution, termination is delayed until all
   dependent tasks have terminated (9.4.6). As a result, our program
   fills up all memory with completed tasks unable to terminate. Why is
   this? Can something be done about it (without altering task dependency)?

We have the impression that Ada was designed to deal with a small
number of large tasks, whereas we are trying to create a large number
of small tasks. Is this true? Does it matter?

Can anybody enlighten us?

AdvTHANKSance

Hans Mulder          Sjouke Mauw
hm@uva.uucp          sjouke@uva.uucp
mcvax!uva!hm         mcvax!uva!sjouke
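For reference, a minimal Ada sketch of the behaviour described in question 1
(all names are invented): the unhandled exception quietly completes the task,
and the main program gets no traceback at all.

    with Text_IO;
    procedure Silent_Death is

       task Doomed;

       task body Doomed is
       begin
          raise Program_Error;   -- not handled: Doomed completes quietly and
                                 -- the exception is not propagated further
       end Doomed;

    begin
       delay 1.0;                -- give Doomed time to fail
       Text_IO.Put_Line ("Main program never hears about it.");
    end Silent_Death;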
peirce@claris.com (Michael Peirce) (03/30/89)
>2. When a task has completed its execution, termination is delayed until all
>   dependent tasks have terminated (9.4.6). As a result, our program
>   fills up all memory with completed tasks unable to terminate. Why is
>   this? Can something be done about it (without altering task dependency)?
>
>We have the impression that Ada was designed to deal with a small
>number of large tasks, whereas we are trying to create a large number
>of small tasks. Is this true? Does it matter?

The way we dealt with this problem was to reuse our tasks.  We had a
situation where we wanted to dispatch a handler task for each incoming
request from the network.  In VAX Ada, the tasks weren't removed from
memory until after the program exited the scope of the task declaration.
This meant that in our outermost scope, a terminated task's memory was
never reclaimed.

To solve this problem, we set up a task manager that kept a list of idle
tasks; whenever a task was requested, it would reuse one of these tasks
before creating a new one.  Of course, each task ended up having some
extra code at the beginning to handle initialization and some at the end
to handle returning itself to the free list, but this overhead was
minimal.

With this type of scheme in place we were able to run our system for days
or weeks at a time, using "new" tasks for each message but never running
into memory usage problems.

-- michael
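A rough Ada sketch of the reuse idea described above; the names (Handler,
Request, Reuse_Demo) and the pool size are invented, and the manager's
free-list bookkeeping is only hinted at in comments.  Each worker loops
around an accept instead of terminating, with room for the per-request
initialization at the top of the pass and the return-to-free-list step at
the bottom.

    with Text_IO;
    procedure Reuse_Demo is

       type Request is range 0 .. 1_000;     -- stand-in for a network message

       task type Handler is
          entry Start (R : in Request);      -- a manager would hand out work here
       end Handler;

       Pool : array (1 .. 4) of Handler;     -- small fixed pool, reused forever

       task body Handler is
          Current : Request;
       begin
          loop
             select
                accept Start (R : in Request) do
                   Current := R;             -- copy the job, release the caller
                end Start;
             or
                terminate;                   -- lets the pool wind down with its master
             end select;
             -- "extra code at the beginning": per-request initialization goes here
             Text_IO.Put_Line ("handling request" & Request'Image (Current));
             -- "extra code at the end": return this task to the manager's
             -- free list (the manager task itself is not shown)
          end loop;
       end Handler;

    begin
       for I in Pool'Range loop
          Pool (I).Start (Request (I));      -- hand one job to each idle worker
       end loop;
    end Reuse_Demo;

The terminate alternative is what lets the pool wind down along with its
master, which is exactly the dependency rule (9.4.6) the original posters
ran into.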
pcg@aber-cs.UUCP (Piercarlo Grandi) (03/30/89)
In article <9274@claris.com> peirce@claris.com (Michael Peirce) writes:

> >2. When a task has completed its execution, termination is delayed until all
> >   dependent tasks have terminated (9.4.6). As a result, our program
> >   fills up all memory with completed tasks unable to terminate. Why is
> >   this? Can something be done about it (without altering task dependency)?
> >
> >We have the impression that Ada was designed to deal with a small
> >number of large tasks, whereas we are trying to create a large number
> >of small tasks. Is this true? Does it matter?
>
> The way we dealt with this problem was to reuse our tasks.  We had a
> situation where we wanted to dispatch a handler task for each incoming
> request from the network.  In VAX Ada, the tasks weren't removed from
> memory until after the program exited the scope of the task declaration.

This is all nice and true, but of course hardly satisfactory.  It
essentially defeats the idea of using dynamically created tasks; it is a
style of programming akin to that used in Concurrent Euclid or other
languages in which only a static number of tasks can be configured.  The
scheme, though, is efficient and not too difficult to implement, and
slightly more flexible.

There is another problem with Ada tasking, and it is well known to those
who know OS/MVS and IMS. When an Ada task takes a page fault, the entire
address space is suspended waiting for resolution of the page fault;
another Ada task is not redispatched, even if one could be, because on
virtually all the Ada implementations I know of (notably the VMS one) the
OS does not know about Ada tasks at all. In other words, Ada tasking is not
very good on virtual memory systems if one wants to keep track of multiple
external events. The classic example is having a terminal monitor, with
each terminal served by its own task.

There are only two solutions: one, fairly horrible, is to have the OS
deliver a signal to the in-address-space scheduler on a page fault; the
second, and proper, one is to have threads in the OS and associate Ada
tasks with the threads. Unfortunately the second one can be fairly
expensive; many systems have high-overhead threads (e.g. OS/MVS).
--
Piercarlo "Peter" Grandi            | ARPA: pcg%cs.aber.ac.uk@nss.cs.ucl.ac.uk
Dept of CS, UCW Aberystwyth         | UUCP: ...!mcvax!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK  | INET: pcg@cs.aber.ac.uk
stt@ada-uts (03/31/89)
1. The concept of "trace-back" is a compiler/run-time-system feature.  It
is certainly possible to convince a compiler vendor to create some sort of
debugging output on this kind of task termination.  Alternatively, and more
portably, you can put an "exception when others =>" handler in all task
bodies to report the unexpected demise of a task appropriately.

2. It is certainly true that "masters" must wait for their dependent tasks
to terminate.  If you have a large number of completed tasks all with the
same master, this suggests that perhaps your tasks should be structured so
that they are reusable.  That is, enclose the main operation of a task in a
large loop headed by an entry call (to some kind of job manager,
presumably) which receives the next job to do.  This saves repeatedly
creating and terminating tasks, both of which are frequently slow
operations.  (The job manager may indicate that the next job for the task
is to terminate itself, if it decides that there are more server tasks than
needed.)

It is possible to have the creator of the task *not* be the master by using
access types.  For tasks within objects designated by access types, the
master is the block/unit enclosing the declaration of the access type.  In
most cases, unchecked deallocation of such objects-containing-tasks will
reclaim the storage associated with the task as soon as the task completes.

S. Tucker Taft
Intermetrics, Inc.
733 Concord Avenue
Cambridge, MA 02138
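A small Ada sketch of the first and third suggestions above, with invented
names: a "when others" handler that reports a task's demise, and a task
allocated through an access type so that its storage can be handed back
with unchecked deallocation once it has completed.  How promptly the
storage is actually reclaimed is implementation-dependent, and the busy
wait on 'Terminated is only for illustration.

    with Text_IO;
    with Unchecked_Deallocation;
    procedure Demise_Demo is

       task type Server is
          entry Go;
       end Server;

       -- The master of a Server allocated with "new" is the unit that
       -- declares this access type, not the block evaluating the allocator.
       type Server_Ref is access Server;
       S : Server_Ref;

       procedure Free is new Unchecked_Deallocation (Server, Server_Ref);

       task body Server is
       begin
          accept Go;
          raise Constraint_Error;            -- stand-in for an unexpected failure
       exception
          when others =>
             -- without this handler the task would die silently (11.4.1.8)
             Text_IO.Put_Line ("Server terminated by unhandled exception");
       end Server;

    begin
       S := new Server;                      -- the creator need not be the master
       S.Go;
       while not S.all'Terminated loop       -- crude wait for completion
          delay 0.1;
       end loop;
       Free (S);                             -- give the task's storage back
    end Demise_Demo;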
simpson@minotaur.uucp (Scott Simpson) (04/01/89)
In article <674@uva.UUCP> hm@uva.UUCP (Hans Mulder & Sjouke Mauw) writes:
>We are trying to understand Ada tasking and there are two things we fail to
>understand:
>
>1. When an exception is raised and not handled in a task body, the task
>   is terminated and the exception is not further propagated, without
>   notice (11.4.1.8). Why is this?
>   Is there a way to invoke the kind of traceback that occurs when an
>   exception is propagated out of the main program?

From the Rationale (Section 14.4, page 325): "Note that if the exception
were propagated to the parent task, it would mean that the child tasks
could interfere asynchronously with their parent, and it would also mean
that these interferences could occur simultaneously, with disastrous
results."

I think asynchronous is the key word.

        Scott Simpson
        TRW Space and Defense Sector
        oberon!trwarcadia!simpson        (UUCP)
        trwarcadia!simpson@oberon.usc.edu        (Internet)
callen@inmet (04/03/89)
>There is another problem with Ada tasking, and it is well known to those
>who know OS/MVS and IMS. When an Ada task takes a page fault, the entire
>address space is suspended waiting for resolution of the page fault;
>another Ada task is not redispatched, even if one could be, because on
>virtually all the Ada implementations I know of (notably the VMS one) the
>OS does not know about Ada tasks at all. In other words, Ada tasking is not
>very good on virtual memory systems if one wants to keep track of multiple
>external events. The classic example is having a terminal monitor, with
>each terminal served by its own task.

This behavior is dependent upon the Ada runtime system implementation.

MVS supports its own flavor of tasking, in which several tasks (threads of
control) run in the same address space.  On a machine with more than one
physical processor (which is very common these days), several tasks in the
same address space can run simultaneously on different processors.  If one
of the tasks incurs a page fault, the other tasks do NOT wait.

So what you want to look for is an implementation that allows you to map
Ada tasks to "true" MVS tasks.  There are at least two.

--
Jerry Callen
Intermetrics, Inc.
733 Concord Ave.
Cambridge, MA 02138

callen@inmet.inmet.com
...!uunet!inmet!callen
pcg@aber-cs.UUCP (Piercarlo Grandi) (04/11/89)
In article <124000035@inmet> callen@inmet writes:

> >There is another problem with Ada tasking, and it is well known to those
> >who know OS/MVS and IMS. When an Ada task takes a page fault, the entire
> >address space is suspended waiting for resolution of the page fault;
>
> This behavior is dependent upon the Ada runtime system implementation.
> MVS supports its own flavor of tasking, in which several tasks (threads
> of control) run in the same address space.  On a machine with more than
> one physical processor (which is very common these days), several tasks
> in the same address space can run simultaneously on different processors.
> If one of the tasks incurs a page fault, the other tasks do NOT wait.

Unfortunately I do not have a multiprocessor MVS system :->.  However, on
this subject, I cited OS/MVS and IMS precisely because the problem has been
solved within them; MVS has one of the few multithreading facilities (if
that is the right word :->) around, and, among others, PL/1 uses it, etc.

> So what you want to look for is an implementation that allows you to map
> Ada tasks to "true" MVS tasks.  There are at least two.

Let me add, though, that if I were selling an Ada compiler for MVS, I would
not boast that it does have real multithreading because it maps Ada tasks
onto MVS tasks; I would keep it a closely guarded secret.

Why?  The answer is easy: it is fairly obvious that Ada tasking was designed
to support a very fine grain of tasking, such as associating tasks with a
buffer pool, etc.  Too bad that MVS tasks have truly stupendous overheads.
IBM are the first to admit this; in an old (early seventies) issue of their
Systems Journal (devoted to explaining the new VS2, as it was then called)
they discuss how it was decided that the MVS kernel internally would not use
MVS tasks, but rather lightweight tasks, precisely because TCBs are too
expensive to use in a multithreaded program like MVS itself.  Also, there
must be some good reason why the major IBM databases, communication
subsystems, and transaction processing systems don't use TCBs...

Too bad that those two vendors that you cite as allowing you to map Ada
tasks to "true" MVS tasks did not take the trouble of duplicating the IMS or
CICS internal schedulers, which do get page fault signals from the MVS
kernel.  By the way, there is a facility to get page fault signals also in
VM, precisely because it is used to run multithreaded operating systems,
and these, when run under it, do use them.

On the other hand I must admit that using MVS tasks for Ada tasks, while
being quite inappropriate to the Ada style of tasking, does have the
advantage of being able to run them on multiple processors, which a simple
page-fault-handling in-address-space scheduler cannot do.  Unless, of
course, the in-address-space scheduler runs as multiple MVS tasks (it, not
the Ada tasks it manages).  I don't really remember well, but some recent
version of IMS may do that.

Summing up, I thoroughly agree with other posters that Ada really requires
a lightweight thread implementation, and most current operating systems do
not qualify, either because they do not have threads or because their
threads are not lightweight.  And, let me add, wasn't Ada supposed to run on
embedded systems where all tasks are lightweight, and there is no notion of
address spaces, not to speak of paging? :-] :-]
--
Piercarlo "Peter" Grandi            | ARPA: pcg%cs.aber.ac.uk@nss.cs.ucl.ac.uk
Dept of CS, UCW Aberystwyth         | UUCP: ...!mcvax!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK  | INET: pcg@cs.aber.ac.uk
stachour@umn-cs.CS.UMN.EDU (Paul Stachour) (04/13/89)
In article <796@aber-cs.UUCP> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
> ....
>There is another problem with Ada tasking, and it is well known to those

*** Stop.  Let's get the problem in the right place.  It is NOT a
*** problem with Ada, but with the OS that is running Ada.

>who know OS/MVS and IMS. When an Ada task takes a page fault, the entire
>address space is suspended waiting for resolution of the page fault;
>another Ada task is not redispatched, even if one could be, because on
>virtually all the Ada implementations I know of (notably the VMS one) the
>OS does not know about Ada tasks at all.
> ....
>There are only two solutions: one, fairly horrible, is to have the OS
>deliver a signal to the in-address-space scheduler on a page fault; the
>second, and proper, one is to have threads in the OS and associate Ada
>tasks with the threads. Unfortunately the second one can be fairly
>expensive; many systems have high-overhead threads (e.g. OS/MVS).

*** Stop.  There is at least a third solution.  In ancient
*** multi-processor batch systems that had time-sharing grafted on,
*** such as Honeywell's GCOS3, the time-sharing monitor handled its
*** own sub-threads: deciding when things were dispatchable and when
*** they were not, when to swap a subtask, and so on.
*** And it let the OS do the dispatching.
*** Substitute /time-sharing-monitor/Ada RSL/ and it's not too different.

*** In the past, we had a multi-processing CPU that could do the
*** equivalent of dispatching two "ready" Ada tasks within a single OS
*** process.  Today, you tell me that OS/MVS, VAX/VMS, and such ilk
*** cannot even dispatch one such ready task.  ...Sigh  ...Ugh
*** Isn't "progress" wonderful.
callen@inmet (04/15/89)
>/* Written 9:32 am Apr 11, 1989 by pcg@aber-cs.UUCP in inmet:comp.lang.ada */
>/* ---------- "Re: Two questions" ---------- */
>In article <124000035@inmet> callen@inmet writes:
>
> > >There is another problem with Ada tasking, and it is well known to those
> > >who know OS/MVS and IMS. When an Ada task takes a page fault, the entire
> > >address space is suspended waiting for resolution of the page fault;
> >
> > On a machine with more than one physical processor (which is very common
> > these days), several tasks in the same address space can run
> > simultaneously on different processors.  If one of the tasks incurs a
> > page fault, the other tasks do NOT wait.
>
>Unfortunately I do not have a multiprocessor MVS system :->.

It doesn't matter; regardless of how many processors there are in the
system, MVS will allow other tasks to run while one is blocked for a page
fault.

> <Comments about "true" MVS tasking being too expensive for Ada
>  tasks, and that IMS and CICS don't use tasking for that reason>
>
>Too bad that those two vendors that you cite as allowing you to map Ada
>tasks to "true" MVS tasks did not take the trouble of duplicating the IMS
>or CICS internal schedulers, which do get page fault signals from the MVS
>kernel.

IMS and CICS do NOT get page fault signals from MVS.  They DO use their own
internal schedulers, but of very different flavors.

IMS uses multiple address spaces to achieve concurrency (and an address
space is a much "heavier" entity than a task within an address space); the
idea is that the terminals and databases are owned by a "control region,"
and the code for each transaction runs in a "message processing region."
The control region does use "true" MVS tasks within itself to achieve
concurrency.

CICS, on the other hand, uses a single address space AND a single task, and
then does its own scheduling in that single task.  THIS IS A BAD PERFORMANCE
BOTTLENECK!  The reason is precisely the one you described: a page fault
stops the entire region.  In recent years CICS has acquired a facility
similar to the IMS message processing region, called MRO (Multiple Region
Option), to help with this (and other) problems, and has also begun to use
multitasking (for DB2 and VSAM file access).

In order to use the "page fault" option of the ESPIE macro (the macro that
sets up "trap" handlers) you must be "authorized."  This means, in MVS,
being allowed to enter supervisor state.  Should every Ada program any user
writes be allowed to enter supervisor state?

>On the other hand I must admit that using MVS tasks for Ada tasks, while
>being quite inappropriate to the Ada style of tasking, does have the
>advantage of being able to run them on multiple processors, which a simple
>page-fault-handling in-address-space scheduler cannot do.

Precisely.  Since multiprocessors are rapidly becoming the norm for IBM 370
architecture machines, it is foolish not to exploit them.  I think that MVS
tasks are QUITE appropriate for Ada tasking, if used judiciously.

>Summing up, I thoroughly agree with other posters that Ada really requires
>a lightweight thread implementation, and most current operating systems do
>not qualify, either because they do not have threads or because their
>threads are not lightweight.

Right, but if "midweight" threads are what you've got, you use them. :-)

>And, let me add, wasn't Ada supposed to run on embedded systems where all
>tasks are lightweight, and there is no notion of address spaces, not to
>speak of paging? :-] :-]

Yeah, but the customer wants MVS, and the customer is always right. :-)

>Piercarlo "Peter" Grandi            | ARPA: pcg%cs.aber.ac.uk@nss.cs.ucl.ac.uk
>Dept of CS, UCW Aberystwyth         | UUCP: ...!mcvax!ukc!aber-cs!pcg
>Penglais, Aberystwyth SY23 3BZ, UK  | INET: pcg@cs.aber.ac.uk

--
Jerry Callen
Intermetrics, Inc.
...!uunet!inmet!callen
callen@inmet.inmet.com