[comp.arch] Cooperative versus Competitive Multitasking

fouts@orville.nas.nasa.gov (Marty Fouts) (12/23/87)

In article <2989@cbmvax.UUCP> daveh@cbmvax.UUCP (Dave Haynie) writes:
>. . .  In the proposed scenario, the OS will
>break up the tasks as evenly as possible among the existing CPUs in the
>system.  This distribution is completely transparent to the programmer; it
>doesn't look any different than if it were running on a single machine, other
>than the large increase in execution speed you may get.
>

In dealing with multitasking, I find it helpful to think of
cooperative and competitive multitasking.  Competitive multitasking is
plain old-fashioned multiprocessing: many processes competing with
each other for CPU cycles.  Cooperative multitasking describes those
situations in which some of the processes are sharing some resource;
usually data.  Competitive multitasking is transparent to the
user/programmer, and multiple processors can be taken advantage of
with no additional coding.  Cooperative multitasking is a different
problem. 

In Mach (and other similar Unix derivatives) there are three ways in
which this kind of resource sharing occurs.  The first is through an IPC
mechanism.  Unix has always had this kind of multitasking in the form
of pipelines.  Pipelines are usually producer-consumer synchronized
based on the data in the pipe and are normally well behaved on a
multiprocessor.  They also usually have the disadvantage of limited
operation overlap, since processes farther down the pipe sometimes have
to wait until the original process is done computing and sends its one
result out before they get any data to compute on.  The same kind of
resource sharing occurs when sockets are used to implement
client/server applications.

The second kind of data sharing is through shared memory.  This is
available in System V and Mach.  Here there is no "transparent"
mechanism, and all of the usual trappings of monitors or critical
regions are required to guarantee synchronization.  This mechanism
allows full granularity at the cost of high complexity.

The third kind of data sharing, the use of an entirely shared address
space, is actually a side effect of an attempt to do the second kind
without the overhead of context switches on a uniprocessor.  The whole
idea of tasks and a task scheduler wasn't originally to provide any
kind of unique multitasking capability that shared memory didn't
already provide, but rather to do it with less overhead.  It is ironic
that this approach is now being used as the multiprocessor solution,
when it introduces unnecessary complexity into an application (for
instance, task-level scheduling is usually nonpreemptive, so each task
is responsible for blocking when it is idle) at a time when multiple
processors are supposedly eliminating the need to reduce the number of
context switches.

Mach multithreading is actually an inappropriate solution for the
problem at hand.  vfork, shared memory and semaphores provide an
adequate solution to the problem.

Marty