[comp.edu] Teaching Concurrency

wtwolfe@hubcap.clemson.edu (Bill Wolfe) (01/07/90)

   Some excerpts from an interesting article on the teaching of
   concurrency which can be found on pages 40-41 of ACM SIGAda
   Ada Letters, Volume VII, #5, Sep/Oct 1987 ("Experiences Teaching 
   Concurrency In Ada", Ronald J. Leach):

      The students were each assigned a project for which they 
      were required to write two Ada programs which involved at
      least two tasks.  In general, the tasks involved some simple
      idea that the students were very familiar with, at least in
      the case of sequential programs.  Thus, the difficulty was
      in understanding the concurrency and not in the computation
      performed by the individual task.  As an example, one student
      was asked to write a program with two tasks -- sort an array
      of integers and then search for a key using a binary search.
      The student was allowed to use any sorting algorithm.  Thus
      the student did not have difficulty implementing the individual
      algorithms for the tasks.  The troublesome part was the
      implementation of the synchronization or communication of the tasks.
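
   A minimal Ada sketch of that two-task structure (the names are
   illustrative, not Leach's): the single rendezvous, where the sorter
   hands the sorted array to the searcher, is the only synchronization
   point, and everything on either side of it is ordinary sequential
   code.

      with Text_IO;
      procedure Sort_And_Search is
         type Int_Array is array (1 .. 8) of Integer;
         Data : constant Int_Array := (7, 2, 9, 4, 1, 8, 3, 6);
         Key  : constant Integer := 4;

         task Sorter is
            entry Result (A : out Int_Array);   -- the rendezvous
         end Sorter;

         task body Sorter is
            Local : Int_Array := Data;
            Tmp   : Integer;
         begin
            for I in Local'First .. Local'Last - 1 loop  -- bubble sort
               for J in Local'First .. Local'Last - 1 loop
                  if Local (J) > Local (J + 1) then
                     Tmp := Local (J);
                     Local (J) := Local (J + 1);
                     Local (J + 1) := Tmp;
                  end if;
               end loop;
            end loop;
            accept Result (A : out Int_Array) do  -- hand over the
               A := Local;                        -- sorted array
            end Result;
         end Sorter;

         task Searcher;

         task body Searcher is
            A           : Int_Array;
            Lo, Hi, Mid : Integer;
            Found       : Boolean := False;
         begin
            Sorter.Result (A);     -- blocks until sorting is done
            Lo := A'First;
            Hi := A'Last;
            while Lo <= Hi and not Found loop     -- binary search
               Mid := (Lo + Hi) / 2;
               if A (Mid) = Key then
                  Found := True;
               elsif A (Mid) < Key then
                  Lo := Mid + 1;
               else
                  Hi := Mid - 1;
               end if;
            end loop;
            if Found then
               Text_IO.Put_Line ("found at" & Integer'Image (Mid));
            else
               Text_IO.Put_Line ("not found");
            end if;
         end Searcher;
      begin
         null;   -- the main program simply awaits both tasks
      end Sort_And_Search;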

      [...] Students were required to run their programs four times with
      the same input.  Most of the errors in the programs due to subtle
      assumptions about tasking made by the programmer became apparent
      after four runs.  Students' observations on this point were
      interesting.  In spite of several lectures on timing and
      synchronization of tasks, lengthy discussions on the nature of
      an Ada "logical processor", and numerous examples, students did
      not believe that programs could give different results or bomb
      when given the same input.  The sudden shock when THEIR program
      showed this behavior put the point across better than any lecture
      could.  Many of the students indicated that they had seen this kind
      of error at some time during program development.  Two of the students
      were so shocked by the different behavior of the sample runs that they
      turned in their projects with signatures of witnesses that their
      programs ran successfully at least once.  [...]
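
   The nondeterminism is easy to demonstrate.  The following minimal
   sketch (mine, not Leach's) may interleave its output differently
   from run to run; the unsynchronized use of Text_IO by both tasks is
   exactly the kind of subtle assumption the article describes.

      with Text_IO;
      procedure Races is
         task A;
         task B;

         task body A is
         begin
            for I in 1 .. 20 loop
               Text_IO.Put ("a");   -- no synchronization with B
            end loop;
         end A;

         task body B is
         begin
            for I in 1 .. 20 loop
               Text_IO.Put ("b");   -- no synchronization with A
            end loop;
         end B;
      begin
         null;   -- the a's and b's interleave at the scheduler's whim
      end Races;

   Four runs of this program may well produce four different output
   orderings, which is precisely the lesson of the four-run rule.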
 
      ------ Other examples given of concurrent programming exercises ------

      1. Write a program which uses two tasks to solve quadratic equations
         using the quadratic formula.  Each task must perform at least 3
         arithmetic operations.

      2. Write a program to read an integer N and to have two tasks.  The
         tasks are to compute some simple function f(N) and to find all
         primes less than f(N).

      3. Write a program to simulate the donning of socks and shoes.  
         Putting on socks and putting on shoes are to be separate tasks.

      4. Write a program to read an array A of integers and to have two 
         tasks.  The tasks are to sort the array A in increasing order,
         passing this sorted array to B, and to sort the array B in 
         decreasing order.
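
   Exercise 3 above is small enough to sketch in full.  In this
   minimal version (names mine), the shoe task must rendezvous with
   the sock task once per foot, since a shoe can only go on over a
   sock:

      with Text_IO;
      procedure Dressing is
         task Socks is
            entry Sock_Is_On;   -- accepted once per foot
         end Socks;

         task Shoes;

         task body Socks is
         begin
            for Foot in 1 .. 2 loop
               Text_IO.Put_Line ("sock on foot" & Integer'Image (Foot));
               accept Sock_Is_On;   -- release Shoes for this foot
            end loop;
         end Socks;

         task body Shoes is
         begin
            for Foot in 1 .. 2 loop
               Socks.Sock_Is_On;    -- wait: no shoe on a bare foot
               Text_IO.Put_Line ("shoe on foot" & Integer'Image (Foot));
            end loop;
         end Shoes;
      begin
         null;
      end Dressing;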

mbenveni@irisa.irisa.fr (Marc Benveniste,lsp) (01/10/90)

From article <7588@hubcap.clemson.edu>, by wtwolfe@hubcap.clemson.edu (Bill Wolfe):
>       The students were each assigned a project for which they 
>       were required to write two Ada programs which involved at
>       least two tasks.  In general, the tasks involved some simple
>       idea that the students were very familiar with, at least in
>       the case of sequential programs.  Thus, the difficulty was
>       in understanding the concurrency and not in the computation
>       performed by the individual task. [...]
>       [...] Students were required to run their programs four times with
>       the same input. [...]  The sudden shock when THEIR program
>       showed this behavior put the point across better than any lecture
>       could. 

 In the same spirit, I developed a toy concurrent language adopting
the robot metaphor R. Pattis introduced with "Karel the Robot".
Students program simple tasks and assign them to various independent
robots that share a so-called world and communicate by asynchronous
message passing. Dynamic creation of robots is possible through
lexical nesting and recursive task activations. The main advantage,
besides the ones already stated in the previous posting, is DIRECT
OBSERVABILITY of running processes. The main disadvantages are an
anthropomorphic view of processes and an interleaved interpretation
of parallelism.
I wrote a system called LPC (the Spanish acronym for Concurrent
Programming Laboratory) that integrates a compiler for the language,
a world editor and an interactive interpreter. The interpreter allows
the student (or the teacher) to run programs in stepping mode and to
select a different speed for each robot (any running robot may be
stopped). This interactive feature enables the operator to force
certain scenarios to arise. In addition, a formal semantics of LPC's
toy language was written at the design stage of its development; a
proof system could be derived without too much effort. I think LPC
can be very useful for teaching concurrency, particularly in an
introductory course.

References:

Benveniste, M., "LPC: A Concurrent Programming Laboratory",
                in proc. of STACS'88, R. Cori and M. Wirsing Eds.,
                February'88, LNCS #294, pp. 391-392.
                
Benveniste, M., "LPC: un Paradigma de la Programación Concurrente",
                M.Sc. Thesis, IIMAS-UNAM, Technical report #526,
                Orange series, August'88. Mexico D.F., Mexico.
                
Pattis, R.,     "Karel the Robot: a Gentle Introduction to the
                Art of Programming",  John Wiley & Sons Inc.,
                New York, 1981.


Marc Benveniste                      mbenveni@irisa.irisa.fr
IRISA
Campus de Beaulieu
35042 Rennes
FRANCE

marks@agcsun.UUCP (Mark Shepherd) (01/11/90)

In article <7588@hubcap.clemson.edu>, wtwolfe@hubcap.clemson.edu (Bill Wolfe) 
posts some excerpts from an article on the teaching of concurrency
in ACM SIGAda Ada Letters. The article discusses programming assignments
where concurrent tasks are used for things like sorting numbers, solving 
equations, finding primes, etc.

Although these programming assignments evidently had some valuable lessons
on multi-tasking, I feel that they may also inadvertently teach a
less desirable lesson: that it is OK to use concurrent tasks for things 
that could be done much more simply with subroutines. 

In real life (whether industrial, business, scientific research, or whatever),
multi-tasking applications carry substantial penalties
in complexity and resource utilization, and should only be used when
appropriate. (Of course, NOT using tasking when it should be used has  
equally dire consequences). 

I have seen large (and expensive!) systems crippled by inappropriate use
of tasking, and I hope that the computer science graduates of the future
will understand not only HOW to use multi-tasking, but WHEN.

Mark Shepherd
agcsun!marks@boulder.colorado.edu

pattis@cs.washington.edu (Richard Pattis) (01/11/90)

> ..in ACM SIGAda Ada Letters. The article discusses programming assignments
> where concurrent tasks are used for things like sorting numbers, solving 
> equations, finding primes, etc.
> 
> Although these programming assignments evidently had some valuable lessons
> on multi-tasking, I feel that they may also inadvertently teach a
> less desirable lesson: that it is OK to use concurrent tasks for things 
> that could be done much more simply with subroutines. 
> ...
> I have seen large (and expensive!) systems crippled by inappropriate use
> of tasking, and I hope that the computer science graduates of the future
> will understand not only HOW to use multi-tasking, but WHEN.
> 
> Mark Shepherd
> agcsun!marks@boulder.colorado.edu


OK, so this raises the question: what is the "smallest" assignment that can
use concurrency fruitfully? I would like to teach a bit about tasking in
one of my classes, but I don't want students to get "wrong" ideas from the
example I use.  Anyone out there have such an assignment? Is there some prime
example out there of a good use of multi-tasking that is amenable to
classroom instruction?

Rich Pattis

firth@sei.cmu.edu (Robert Firth) (01/11/90)

In article <10330@june.cs.washington.edu> pattis@cs.washington.edu (Richard Pattis) writes:

>OK, so this raises the question: what is the "smallest" assignment that can
>use concurrency fruitfully? I would like to teach a bit about tasking in
>one of my classes, but I don't want students to get "wrong" ideas from the
>example I use.  Anyone out there have such an assignment? Is there some prime
>example out there of a good use of multi-tasking that is amenable to
>classroom instruction?

In the hope that it helps, I'll describe briefly the exercises I used
to teach concurrency as part of a real-time systems design course.
These are the one-page programs; the next step up was device drivers,
a simple message switch, &c.  A warning up front: the tasking features
of Ada are inadequate to solve some of these exercises.  (An Ada
approximation of exercise (1) appears after the list.)

(1) delay.  the program contains one task, which simply prints
    "I'm still here" every five seconds.  explains: task structure,
    task creation and termination, delay.

(2) interleaf.  the program consists of six tasks, which do this

	print "." every second
	print "5" every five seconds
	print "6" every six seconds
	print "7" every five seconds
	print "8" every six seconds
	print a newline every 30 seconds

    explains: several tasks; reentrant use of task code (by middle 4),
    interleaving of task output; control of interleaving by priority.

(3) veryslowsort.  the program reads a list of integers and creates
    one task per integer, whose priority is given by the integer and
    whose sole action is to print the integer.  the integers are
    therefore output in task priority order, ie they are sorted.

    illustrates: dynamic task creation and priority assignment, use
    of task parameters.

(4) chain gang.  the program creates a reasonably large number of
    identical tasks (eg 100), each with a parameter that identifies
    itself.  Each task waits for a message from (task-1), prints
    "This is (task) and I have the message", and passes the message
    to (task+1).  Except that the last task sends the message to
    a master task.  The master creates the tasks, sends a message to
    the first, and waits for it to go all around the chain.

    illustrates: array of tasks, message passing, task parameters
    and dynamic task names, task blocking on message read.

(5) sampler.  one task computes the value of pi by an excruciatingly
    slowly converging series; the other task samples the computed
    value every couple of seconds or so. illustrates: simple shared
    variable, preemption of low-priority task.

(6) quantum mouse.  The program reads in a small maze, with an entry
    point and an exit point.  it creates a mouse (a task) at the
    entry point.  the task traverses the maze, creating a copy of
    itself at every branch point.  the traversal lays a trail through
    the maze, consisting of the letters ABCDE... in order, ie letter
    'J' is placed in a cell first reached by a mouse (any mouse) 10
    steps from the entry.  A mouse that can move only to a marked cell
    dies.  When the last mouse dies, the maze is printed out; every
    cell is marked with its minimum distance from the entry.

    illustrates: shared data, dynamic task replication with state
    inheritance, wait for termination of task set.

(Incidentally, the tasking features used in these exercises were
 contained in a set of subroutines described in a six-page handout and
 implemented on a bare machine in less than 200 lines of code.)
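
Here is the promised Ada approximation of exercise (1).  It is only
an approximation, since the course used its own small tasking library
rather than Ada tasking, and the loop is bounded so that the program
terminates:

    with Text_IO;
    procedure Still_Here is
       task Ticker;                 -- task creation

       task body Ticker is
       begin
          for I in 1 .. 5 loop
             Text_IO.Put_Line ("I'm still here");
             delay 5.0;             -- the delay statement
          end loop;
       end Ticker;                  -- task termination
    begin
       null;   -- the main program waits for Ticker to finish
    end Still_Here;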

dalamb@qucis.queensu.CA (David Lamb) (01/11/90)

In article <602@agcsun.UUCP> marks@agcsun.UUCP (Mark Shepherd) writes:
>In real life (whether industrial, business, scientific research, or whatever),
>multi-tasking applications carry substantial penalties
>in complexity and resource utilization, and should only be used when
>appropriate. (Of course, NOT using tasking when it should be used has
>equally dire consequences).
>
This may be true now, but will it continue to be true?  Twenty years ago
people told us to avoid subroutine calls because of the unacceptable overhead;
today there are lots of production compilers that do better than most
programmers can (with inline expansion, special calling sequences, and so
on).

In the long term we're better off teaching the right abstractions, then
pushing our translators to generate efficient code.  In the short term
maybe some people need to hand-translate to efficient non-tasking code,
just as they once needed to hand-translate code with lots of subroutine
calls into code that avoided it.  As with the subroutine issue, I think
that even today the fraction of tasking code that needs such translation
is smaller than many people think.

David Alex Lamb			ARPA Internet:	David.Lamb@cs.cmu.edu
Department of Computing				dalamb@qucis.queensu.ca
    and Information Science	uucp:   	...!utzoo!utcsri!qucis!dalamb
Queen's University		phone:		(613) 545-6067
Kingston, Ontario, Canada K7L 3N6	

meissner@osf.org (Michael Meissner) (01/12/90)

In article <10330@june.cs.washington.edu> pattis@cs.washington.edu (Richard Pattis) writes:

| OK, so this raises the question: what is the "smallest" assignment that can
| use concurrency fruitfully? I would like to teach a bit about tasking in
| one of my classes, but I don't want students to get "wrong" ideas from the
| example I use.  Anyone out there have such an assignment? Is there some prime
| example out there of a good use of multi-tasking that is amenable to
| classroom instruction?

When I was at Data General and we wanted to convey how to use the
operating system's task facility (RDOS, AOS, AOS/VS, but not Unix), we
would use a cut-down version of the tape archiver as an example.  In
the simplest case, where you want to copy a file while overlapping the
input and output portions, you would have a producer task and a
consumer task, with n buffers to use between the two, and then go on
to the various strategies of inter-task communication and buffer
management.
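
A minimal Ada sketch of that simplest case, with the n buffers managed
by a third, buffer task (the names and the buffer count are
illustrative; real code would read and write file blocks where the
comments indicate):

    with Text_IO;
    procedure Copy_Overlapped is
       Size : constant := 4;        -- the "n buffers"

       task Buffer is
          entry Put (X : in Integer);
          entry Get (X : out Integer);
       end Buffer;

       task body Buffer is
          Pool    : array (1 .. Size) of Integer;
          Count   : Natural  := 0;
          In_Ptr  : Positive := 1;
          Out_Ptr : Positive := 1;
       begin
          loop
             select
                when Count < Size =>       -- room left: accept input
                   accept Put (X : in Integer) do
                      Pool (In_Ptr) := X;
                   end Put;
                   In_Ptr := In_Ptr mod Size + 1;
                   Count  := Count + 1;
             or
                when Count > 0 =>          -- data ready: allow output
                   accept Get (X : out Integer) do
                      X := Pool (Out_Ptr);
                   end Get;
                   Out_Ptr := Out_Ptr mod Size + 1;
                   Count   := Count - 1;
             or
                terminate;                 -- both sides are done
             end select;
          end loop;
       end Buffer;

       task Producer;   -- stands in for the input (reader) task
       task body Producer is
       begin
          for Block in 1 .. 10 loop
             Buffer.Put (Block);           -- real code: read a block
          end loop;
       end Producer;

       task Consumer;   -- stands in for the output (writer) task
       task body Consumer is
          Block : Integer;
       begin
          for I in 1 .. 10 loop
             Buffer.Get (Block);           -- real code: write the block
             Text_IO.Put_Line ("copied block" & Integer'Image (Block));
          end loop;
       end Consumer;
    begin
       null;
    end Copy_Overlapped;

The guards make the buffer refuse input when full and refuse output
when empty; the terminate alternative lets Buffer exit quietly once
Producer and Consumer have finished.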

The full-blown tape archiver was a little bit more complicated (it had
tasks which did the equivalent of opendir, readdir, and closedir, and
fstat on the files, tasks which read in the files, and an output task
which ordered things appropriately and wrote to the tape).  It took all
of the parallelism to make most tapes stream when doing a
through-the-filesystem backup.

Another fun multitasking example was to use two tasks to write an
arcade-style game with terminal graphics (one task for terminal input,
and the other task for output).

Finally, there is the example of a multi-tasked server, or transaction
processing system, but that is perhaps too complicated for a classroom
setting.

The problem with real-world examples like the above is that if your
operating system does not have real tasks (threads, etc.) and merely
simulates them, you don't get any benefit from tasking, and it is
better to write the stuff single-threaded.

IMHO, the normal task examples like slowsort, or what have you, are
not so useful in real-world programming, and may in fact encourage
bad habits.

--
Michael Meissner	email: meissner@osf.org		phone: 617-621-8861
Open Software Foundation, 11 Cambridge Center, Cambridge, MA

Catproof is an oxymoron, Childproof is nearly so

sccowan@watmsg.waterloo.edu (S. Crispin Cowan) (01/12/90)

In article <602@agcsun.UUCP> marks@agcsun.UUCP (Mark Shepherd) writes:
>I have seen large (and expensive!) systems crippled by inappropriate use
>of tasking, and I hope the the computer science graduates of the future
>will understand not only HOW to use multi-tasking, but WHEN.

When to use tasking critically depends on the cost of the tasking
operations.  Under UNIX, a context switch costs something on the order
of 1 or 2 ms, and forking somewhat more.  Under VMS, task creation is
much more expensive, but task switching is much cheaper, so a larger
number of tasks in a more static process structure is more
appropriate.

In the OS/Architecture project that I work on (Sylvan), light-weight
tasking is the objective, so that tasks CAN be used in place of
subroutines (at least big ones that have potential for concurrency).
Sending a message to a server and getting a response back has a cost
overhead of 100 to 200 microseconds.

The bottom line is that any unit of abstraction (subroutines, tasks,
objects, etc.) should be used at a level of granularity/modularity
that is appropriate, depending on the cost of the abstraction in both
time and space.  Subroutines to add two integers are not appropriate;
similarly, UNIX tasks performing equally small operations are not
appropriate.  Of course, if performance is not an issue, then the
appropriate level of modularity is that which makes the program the
easiest to read/write/maintain.

>Mark Shepherd
>agcsun!marks@boulder.colorado.edu

----------------------------------------------------------------------
(S.) Crispin Cowan, CS grad student, University of Waterloo
Office:		DC3548	x3934		Home phone: 570-2517
Post Awful:	60 Overlea Drive, Kitchener, N2M 1T1
UUCP:		watmath!watmsg!sccowan
Domain:		sccowan@watmsg.waterloo.edu

"The most important question when any new computer architecture is
introduced is `So what?'"
	-someone on comp.arch
	(if it was you, let me know & I'll credit it)

griest@ajpo.sei.cmu.edu (Tom Griest) (01/12/90)

Regarding the postings on teaching Ada tasking:

The important thing to consider is that tasks should be used
for operations which are really likely to be physically concurrent.
(Or in some cases to serialize concurrent access to devices.)
For example, embedded applications frequently have interactions
with real-world events which can happen concurrently.  An
automotive processor which must monitor the oxygen content of fuel
combustion while also monitoring velocity is a case where two
activities occur concurrently; this fits more naturally into a tasking
model than into a subprogram that polls to achieve the same effect.

Another case where tasking can be used is to achieve higher performance
by allowing additional processors to work on a single program's problem.
This is only meaningful when you have a system which has more than one
CPU and software that supports the distribution of the load.  In these
cases, the Ada task provides a convenient method for designers to 
specify where it is "reasonable" to divide the work load.  The "reasonable"
decision must be made based on the overhead associated with communication
and scheduling among the cooperating processors.  This is similar to
any project where decisions must be made about how to divide up tasks
among workers.  LabTek Corp. uses this approach to divide up the work
of rocket
trajectory calculations.  Since the calculations are independent between
different rockets, we can divide up the work to get the job done sooner.
Our technique involves using a dynamic array of tasks which is dimensioned
during elaboration based on information provided by an underlying
distributed runtime.  (The system has some fault tolerance and can 
reconfigure itself based on how many processors are available.)

The interesting part is being able to reduce the overhead when only a
few (or one) processor is available.  Obviously, having a separate task
per rocket would consume considerable overhead in a case where there
were only one available processor and 20 rockets.  Each "guidance"
task therefore has the ability to accept a work list which is
dynamically constrained, so only one rendezvous is needed if only one
processor is available.  When additional processors are available, the
work list is
effectively "sliced" up to divide the work among additional "guidance"
tasks.  The tricky part is keeping track of persistent information
and maintaining a consistent work list when multiple CPUs are involved.
What I mean by this is that as rockets are launched and destroyed, the
"guidance" tasks must get the same rockets each time.  This is because
there is a substantial amount of "track history" related to each rocket
used in the calculations to implement the trajectory planner.  Since
the application is distributed across a network, transferring the data
would be additional overhead which would probably be intolerable.
We implemented a rather simple allocation routine to make sure that
once a rocket was assigned to a processor, it remained there.

Unless you want to teach your students to be rocket scientists, this
is probably overkill for a teaching application (although "Missile Command"
used to be a popular video game).  What I recommend is building a
simple game using a graphics display and a mouse (or joystick).  One
task moves a symbol on the screen while another accepts data from the
mouse and plots the cursor on the screen.  A third task might determine
if the symbol is "hit" by the cursor and does something in response to
this event.  You will probably need a fourth task to ensure mutual
exclusion on the graphics interface.
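
One way to get that mutual exclusion is a small guard task that admits
one caller at a time.  A minimal sketch, with trivial stand-ins for
the game tasks (all names illustrative):

    with Text_IO;
    procedure Game_Skeleton is
       task Screen_Guard is
          entry Acquire;
          entry Release;
       end Screen_Guard;

       task body Screen_Guard is
       begin
          loop
             select
                accept Acquire;   -- admit one caller;
                accept Release;   -- no other gets in until it releases
             or
                terminate;
             end select;
          end loop;
       end Screen_Guard;

       task type Mover;                    -- stands in for the symbol,
       Movers : array (1 .. 2) of Mover;   -- cursor, and hit-test tasks

       task body Mover is
       begin
          for Step in 1 .. 3 loop
             Screen_Guard.Acquire;
             Text_IO.Put_Line ("drawing");   -- real code: graphics calls
             Screen_Guard.Release;
             delay 0.1;
          end loop;
       end Mover;
    begin
       null;
    end Game_Skeleton;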

One problem you might have is getting a low-cost Ada system that is
capable of true tasking (interrupts, etc.) and suitable for this
application.
Typical DOS-based compilers have problems with concurrency because of
the lack of reentrancy in the BIOS.

If you're interested and need help with the graphics/mouse interface
send me a message.

    Tom Griest
    LabTek Corporation

manis@cs.ubc.ca (Vincent Manis) (01/12/90)

In article <602@agcsun.UUCP> marks@agcsun.UUCP (Mark Shepherd) writes:
>Although these programming assignments evidently had some valuable lessons
>on multi-tasking, I feel that they may also inadvertently teach a
>less desirable lesson: that it is OK to use concurrent tasks for things 
>that could be done much more simply with subroutines. 
>
>In real life (whether industrial, business, scientific research, or whatever),
>multi-tasking applications carry substantial penalties
>in complexity and resource utilization, and should only be used when
>appropriate. (Of course, NOT using tasking when it should be used has  
>equally dire consequences). 
Ah, but this assumes an environment in which parallel execution is
indeed less `efficient' than serial execution. That is indeed true if
one is operating on a single CPU in which parallelism is simulated, but
what about the use of highly parallel architectures or even networks? 

It seems clear that Mark had an underlying processor model in mind when
he wrote his article. Unfortunately, that processor model is becoming
ever less common, what with the advent of distributed network
architectures such as NCS and ONC, as well as systems such as the
Connection Machine. 

In any case, what `efficiency' metric is one using? I have seen
commercial programs in which parallelism is ever so cleverly avoided, by
use of asynchronous system traps and the like, coupled with a knowledge
of how long an operation is expected to take. David Parnas even talks of
a real-time control system (I think it was one of the Navy plane systems
he consulted on, but I'm not sure) in which code for various tasks was
interleaved at assembly time, with the programmer expected to know when
to switch instruction streams. Surely readability, maintainability,
reliability etc., are at least as important as raw speed. (Note that I
am *not* claiming that parallelism always enhances readability or
reliability; just sometimes.)

All of this leads to two observations: first, that rather than teaching
students to do particular things `because it's efficient', we ought to
teach them to look at the underlying processor architecture, as well as
the other criteria which are important in the project, before selecting
particular design techniques. Second, don't expect `efficiency' to stay
the same for any length of time; my favourite quotation from IBM's
original PL/I manual (ca. 1966) is `Do not use procedures; they are
expensive.' 

Disclaimer: I have not read the article Bill Wolfe referred to, and
therefore can't comment on the appropriateness of the original examples.
Whenever I see the word `Ada', my eyes glaze over, and I turn the page. 


--
\    Vincent Manis <manis@cs.ubc.ca>      "There is no law that vulgarity and
 \   Department of Computer Science      literary excellence cannot coexist."
 /\  University of British Columbia                        -- A. Trevor Hodge
/  \ Vancouver, BC, Canada V6T 1W5 (604) 228-2394

manis@cs.ubc.ca (Vincent Manis) (01/12/90)

In article <5598@bd.sei.cmu.edu> firth@sei.cmu.edu (Robert Firth) writes:
>(3) veryslowsort.  the program reads a list of integers and creates
>    one task per integer, whose priority is given by the integer and
>    whose sole action is to print the integer.  the integers are
>    therefore output in task priority order, ie they are sorted.
Aaargh! I feel like I'm flogging a dead horse, posting two articles in
succession on the subject of underlying processor models. 

There is absolutely no reason why this should work. Not all schedulers
guarantee strict ordering of execution on a priority basis. Process
aging is one of many factors which could produce incorrect output.
Similarly, the scheduler might ignore priorities (the multi-task kernel
I wrote, for example!). 

However, veryslowsort is an excellent example of how *not* to use
parallelism. 

Apology dept: My last article contained a slighting remark about Ada. I
didn't notice that comp.lang.ada was included in the Newsgroups: line.
Mindful of Usenet protocol, I apologise for posting this to
comp.lang.ada. 
--
\    Vincent Manis <manis@cs.ubc.ca>      "There is no law that vulgarity and
 \   Department of Computer Science      literary excellence cannot coexist."
 /\  University of British Columbia                        -- A. Trevor Hodge
/  \ Vancouver, BC, Canada V6T 1W5 (604) 228-2394

bls@cs.purdue.EDU (Brian L. Stuart) (01/12/90)

In article <10330@june.cs.washington.edu> pattis@cs.washington.edu (Richard Pattis) writes:
>
>OK, so this raises the question: what is the "smallest" assignment that can
>use concurrency fruitfully? I would like to teach a bit about tasking in
>one of my classes, but I don't want students to get "wrong" ideas from the
>example I use.  Anyone out there have such an assignment? Is there some prime
>example out there of a good use of multi-tasking that is amenable to
>classroom instruction?
>
>Rich Pattis

Here at Purdue, in CS503 (Operating Systems), we have often given the
dining philosophers problem as a first assignment to teach concurrency.
Here, each philosopher is implemented as a separate task (well, a process
actually; we are using XINU).  It's a nice little problem that lets
the students get a feel for working with XINU before diving into its
guts.
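
XINU being C-based, the assignment itself is not in Ada, but for
consistency with the rest of this thread here is a minimal Ada sketch
of the problem, one task per fork and per philosopher (names
illustrative):

    with Text_IO;
    procedure Dining is
       N : constant := 5;

       task type Fork is
          entry Pick_Up;
          entry Put_Down;
       end Fork;

       task body Fork is
       begin
          loop
             select
                accept Pick_Up;
                accept Put_Down;   -- held until the owner puts it down
             or
                terminate;
             end select;
          end loop;
       end Fork;

       Forks : array (0 .. N - 1) of Fork;

       task type Philosopher is
          entry Start (Id : in Natural);
       end Philosopher;

       task body Philosopher is
          Me : Natural;
       begin
          accept Start (Id : in Natural) do
             Me := Id;
          end Start;
          for Meal in 1 .. 3 loop
             -- the last philosopher picks up in the opposite order,
             -- breaking the circular wait that causes deadlock
             if Me = N - 1 then
                Forks ((Me + 1) mod N).Pick_Up;
                Forks (Me).Pick_Up;
             else
                Forks (Me).Pick_Up;
                Forks ((Me + 1) mod N).Pick_Up;
             end if;
             Text_IO.Put_Line ("philosopher" & Natural'Image (Me)
                               & " eats");
             Forks (Me).Put_Down;
             Forks ((Me + 1) mod N).Put_Down;
          end loop;
       end Philosopher;

       Phils : array (0 .. N - 1) of Philosopher;
    begin
       for I in Phils'Range loop
          Phils (I).Start (I);
       end loop;
    end Dining;

With all five philosophers grabbing the same-side fork first, the
program can deadlock; reversing the order for one of them breaks the
cycle, which is usually the first thing the assignment teaches.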

Brian L. Stuart
Department of Computer Science
Purdue University

steiger@cs.nps.navy.mil (Robert Steigerwald x2468) (01/12/90)

>
->OK, so this raises the question: what is the "smallest" assignment that can
->use concurrency fruitfully? I would like to teach a bit about tasking in
->one of my classes, but I don't want students to get "wrong" ideas from the
->example I use.  Anyone out there have such an assignment? Is there some prime
->example out there of a good use of multi-tasking that is amenable to
->classroom instruction?
->
->Rich Pattis

A good example of a small multi-tasking problem is the producer-consumer
problem with an intermediate buffer.  It is covered well in the book by
Gehani entitled "Ada: An Advanced Introduction", 1984, Prentice-Hall,
pages 158-161.

Bob Steigerwald

mph@oberon.inmos.co.uk (Mike Harrison) (01/12/90)

In article <33100@watmath.waterloo.edu> sccowan@watmsg.waterloo.edu (S. Crispin Cowan) writes:
>When to use tasking critically depends on the cost of the tasking
>operations.


Not necessarily so!

There are different possible goals in writing a program; run-time efficiency
is only one, and another may be 'write-time' efficiency.

Ada tasking is (in addition to permitting an implementation to allow the
use of multi-processor systems) an abstraction mechanism for the convenient
expression of concurrency. If a user can express his needs more easily and
effectively using tasking constructs, he may not be (seriously) concerned
about the running speed of the program.

For example, in a previous job, I had to write a simulation of the behaviour
of a multi-processor system (in executing Ada tasking, as it happens), using
1, 2, 3 or 4 CPUs.
It was much easier to represent the processors as separate tasks and to use
rendezvous to simulate interactions than to write a single serial program with
explicit handling of the (simulated) parallelism.

However, I didn't care if the program took twice as long to run.


Mike.


Michael P. Harrison - Software Group - Inmos Ltd. UK.
-----------------------------------------------------------
UK : mph@inmos.co.uk             with STANDARD_DISCLAIMERS;
US : mph@inmos.com               use  STANDARD_DISCLAIMERS;

peter@ficc.uu.net (Peter da Silva) (01/13/90)

I believe that the following two problems are inappropriate, because
they are subject to race conditions related to task startup time,
quantum sizes, and so on:

> (3) veryslowsort.
> (6) quantum mouse.
-- 
 _--_|\  Peter da Silva. +1 713 274 5180. <peter@ficc.uu.net>.
/      \
\_.--._/ Xenix Support -- it's not just a job, it's an adventure!
      v  "Have you hugged your wolf today?" `-_-'

new@udel.edu (Darren New) (01/13/90)

In article <=K01QVDxds13@ficc.uu.net> peter@ficc.uu.net (Peter da Silva) writes:
>I believe that the following two problems are inappropriate, because
>they are subject to race conditions related to task startup time,
>quantum sizes, and so on:

On the other hand, maybe getting around these sorts of problems is a good
way of learning what these sorts of problems is. are. whatever.

Anyway, these problems may not be as appropriate as they are 
educational. -- Darren

marks@agcsun.UUCP (Mark Shepherd) (01/13/90)

In article <499@qusundl.queensu.CA>, dalamb@qucis.queensu.CA (David Lamb) writes:
> In article <602@agcsun.UUCP> marks@agcsun.UUCP (Mark Shepherd) writes:
> >In real life (whether industrial, business, scientific research, or whatever),
> >multi-tasking applications carry substantial penalties
> >in complexity and resource utilization, and should only be used when
> >appropriate. (Of course, NOT using tasking when it should be used has
> >equally dire consequences).
> >
> This may be true now, but will it continue to be true?  

Yes. Like any other tool or technique, it will *always* be true that 
tasking should only be used when appropriate. Here is my view of some of
the circumstances under which this is the case:

- when different functions are unrelated
- when different functions have dissimilar lifetimes or priorities
- when different functions may or must run on different processors
- when an agent is needed to serialize or manage access to a 
  resource or device
- when errors must be isolated to a single sub-component of a system
- when a simpler or more effective design results (this of course is the
  acid test of appropriateness - all the "whens" are secondary)

The use of a separate task to, for example, add two numbers together
is usually inappropriate because it adds unnecessary complexity.
Whether or not it causes a performance problem depends on what the
performance requirements are.

In the large and expensive system I alluded to in my previous posting, the
performance problems arose not because tasking was used, but because
it was *inappropriately* used. 

My view of performance or "efficiency" is that these are not absolute 
terms, but must always be viewed relative to the user's requirements. 
A program that produces results when the user needs them has good 
performance, even if it is "inefficient" (e.g. it uses PL/I subroutines :-).
A program that will run in finite time only on a Connection Machine has 
good performance only if I happen to own such a machine.

Mark S

pgl@cup.portal.com (Peter G Ludemann) (01/13/90)

In my experience, most "multi-tasking" turns out to be nothing
but coroutining with message-passing for communication.

On an AS/400 (for example), a task switch is only slightly more expensive
than a subroutine call (basically the cost of saving the 16 registers
plus a return address), so the additional clarity of writing with
multi-tasking is well worth it.

Furthermore, in some dialects of Prolog (e.g., NU-Prolog), a
coroutining style can be used to vastly INCREASE the efficiency
of programs (the programs are written in a multi-tasking style but
are actually run as coroutines).

- peter ludemann  	pgl@cup.portal.com
	--- my opinions are my own responsibility, of course ---

macrakis@marat.osf.fr (Stavros Macrakis) (01/15/90)

In article <650@ajpo.sei.cmu.edu>, griest@ajpo.sei.cmu.edu (Tom Griest) writes:
> The important thing to consider is that tasks should be used
> for operations which are really likely to be physically concurrent.

I strongly disagree.  Tasking is a structuring mechanism as much as it
is a concurrency mechanism.  There are applications of tasks where
concurrency is a happy accident.  The standard name for this is "coroutines".

Applications for coroutines include simulation and data structure
transformation (cf. in particular the Jackson method).  Modules in producer-
consumer relations (cf. Unix pipes) are a good candidate -- of course
in this case there is often the possibility of concurrency.

For many years, assembly language programmers were the only ones to
benefit from coroutine structures because no mainstream programming
languages supported them.  Ada's support for coroutines is an important
step forward.
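
As a minimal sketch of the coroutine style in Ada (names illustrative):
a generator task that yields one value per rendezvous.  The caller and
the task proceed in lock-step, so any true concurrency really is a
happy accident.

    with Text_IO;
    procedure Generator_Demo is
       task Squares is
          entry Next (X : out Integer);   -- one value per rendezvous
       end Squares;

       task body Squares is
       begin
          for I in 1 .. 10 loop
             accept Next (X : out Integer) do
                X := I * I;               -- "yield" the next value
             end Next;
          end loop;
       end Squares;

       V : Integer;
    begin
       for I in 1 .. 10 loop
          Squares.Next (V);               -- resume the generator
          Text_IO.Put_Line (Integer'Image (V));
       end loop;
    end Generator_Demo;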

Of course, in order to benefit fully from this structuring mechanism, the
implementation should be efficient.  As far as I know, there are no Ada
compilers today where coroutine call is comparable to subroutine call in
its cost (as it can be in principle).

	-s

firth@sei.cmu.edu (Robert Firth) (01/15/90)

In article <=K01QVDxds13@ficc.uu.net> peter@ficc.uu.net (Peter da Silva) writes:
>I believe that the following two problems are inappropriate, because
>they are subject to race conditions related to task startup time,
>quantum sizes, and so on:
>
>> (3) veryslowsort.
>> (6) quantum mouse.

Thanks, Peter; my note omitted a couple of key points.  Please
let me clarify:

(a) the exercises were intended to teach concurrency on ONE
    processor, as preparation for building a small local
    operating system.  (multiprocessor concurrency, and
    concurrency models transparent to the number of processors,
    were covered elsewhere)

(b) the underlying model had, and relied upon, a scheduler
    whose operation was defined precisely.  the examples
    in part illustrated this.  however, the students were
    emphatically taught that, in real life, one does NOT
    rely on priority to effect ordering of task execution.

For example, the commentary explaining 'veryslowsort' pointed
out that

(1) the process reading input and creating tasks had the highest
    priority, so all the input would be read before any of the
    printing tasks ran

(2) the scheduler used strict priorities, so indeed the printing
    tasks would be run in the right order to print a sorted list

I didn't think it necessary to add that this was a pretty silly
way to sort numbers - the name alone seemed hint enough.  The
fact that the program in execution exhibits no concurrency at all
is one the students were able to deduce for themselves.

peter@ficc.uu.net (Peter da Silva) (01/16/90)

> For many years, assembly language programmers were the only ones to
> benefit from coroutine structures because no mainstream programming
> languages supported them.  Ada's support for coroutines is an important
> step forward.

How about Modula's support for coroutines? I think it has precedence.
-- 
 _--_|\  Peter da Silva. +1 713 274 5180. <peter@ficc.uu.net>.
/      \
\_.--._/ Xenix Support -- it's not just a job, it's an adventure!
      v  "Have you hugged your wolf today?" `-_-'

kurtl@fai.UUCP (Kurt Luoto) (01/17/90)

In article <=K01QVDxds13@ficc.uu.net> peter@ficc.uu.net (Peter da Silva) writes:
>I believe that the following two problems are inappropriate, because
>they are subject to race conditions related to task startup time,
>quantum sizes, and so on:
>
>> (3) veryslowsort.
>> (6) quantum mouse.
>-- 
> _--_|\  Peter da Silva. +1 713 274 5180. <peter@ficc.uu.net>.

Problems that involve startup ordering and race conditions
make excellent material for learning about task synchronization
and coordination issues.  All the more reason to include them.

-- 

----------------
Kurt W. Luoto     kurtl@fai.com   or   ...!sun!fai!kurtl