[comp.realtime] Technical Details

johnb@srchtec.UUCP (John Baldwin) (08/29/90)

Not to start flames, but I must comment that it is about time we had some
*real* real-time postings....


In article <12582@netcom.UUCP> mcmahan@netcom.UUCP (Dave Mc Mahan) writes:
>
> In a previous article, gaulandm@tekigm2.MEN.TEK.COM writes:
>>..[excerpted]...  How do *you* measure the time it takes your system
>> to respond to an interrupt or input, or execute a subroutine?
>
> ...[followed by a generous, detailed, and useful response]...


Perhaps this is a bit off the normal track, but have any of us considered
how many postings are of the "what RT/OS do *you* recommend?" variety?
My comment is not meant to demean this class of postings, as they obviously
fill a real need: airing the user's perspective of several real-time "strata"
instead of just the usual marketing hyperbole.  My concern is simply that
there seems to be very little conversation concerning anything else.

Forgive my ignorance, but is it simply that most readers and/or posters are
very well-experienced in real-time, and so well-versed that design or
implementation issues are considered passé?  After reading comp.realtime
for a little more than two months, I simply cannot tell.

I, for one, am relatively new to real-time, yet have been tasked with helping
to develop an RT plan for the project I'm working on.
[Uh Oh.  NOW you've said it, John.  You've admitted you're a novice.
 Might as well forget being taken seriously.   :-) ]

What formal methods have you (personally!) tried in the past?  Which worked
and which didn't?  It would be especially interesting (and useful) to know
what things worked well on paper, but produced mediocre results, or worse.

Most of the reading I've been doing has been concerning the "Rate Monotonic
Theory."  A problem I've run up against is that I cannot easily find any
strongly opposed alternative methods.  I'm sure there must be at least one.

Does anyone know of a good paper presenting the "con" side of the RM theory,
or could you post a short (!) statement of the current position of
various methods?

-- 
John T. Baldwin                      |  johnb%srchtec.uucp@mathcs.emory.edu
Search Technology, Inc.              | 
                                     | "... I had an infinite loop,
My opinions; not my employers'.      |  but it was only for a little while..."

walden@dip.eecs.umich.edu (Eugene Marvin Walden) (08/29/90)

In article <182@srchtec.UUCP> johnb@srchtec.UUCP (John Baldwin) writes:
>Not to start flames, but I must comment that it is about time we had some
>*real* real-time postings....

   Hmmmm......

   Well, I am not sure if this is a "real" real-time post or not, but I'll try
to answer your questions.

>Most of the reading I've been doing has been concerning the "Rate Monotonic
>Theory."  A problem I've run up against is that I cannot easily find any
>strongly opposed alternative methods.  I'm sure there must be at least one.

   No, there really aren't. If an algorithm is proven to be optimal, as the RM
scheduling algorithm has been, then there is no "better" algorithm. In the case
of scheduling periodic tasks with priorities that remain constant through their
lifetime ("static" scheduling) on a single processor, the RM algorithm is
optimal. Period. There are several problems with implementation, however, as I
mention below.
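   For concreteness, the schedulability test that goes with the RM result is
the Liu-Layland utilization bound: n periodic tasks are guaranteed schedulable
under rate-monotonic priorities whenever U = sum(Ci/Ti) <= n*(2^(1/n) - 1).
A quick sketch in C (the task set is invented for illustration):

  #include <stdio.h>
  #include <math.h>

  /* Liu-Layland bound: a sufficient (not necessary) condition for n
   * independent periodic tasks, execution time c[i] and period t[i],
   * to be schedulable under rate-monotonic priority assignment.
   */
  int rm_schedulable(int n, const double c[], const double t[])
  {
      double u = 0.0;
      int i;
      for (i = 0; i < n; i++)
          u += c[i] / t[i];                  /* per-task utilization */
      return u <= n * (pow(2.0, 1.0 / n) - 1.0);
  }

  int main(void)
  {
      /* invented task set: (execution time, period) in msec */
      double c[] = { 20.0, 40.0, 100.0 };
      double t[] = { 100.0, 150.0, 350.0 };
      printf("RM-schedulable: %s\n", rm_schedulable(3, c, t) ? "yes" : "no");
      return 0;
  }

As n grows, the bound falls toward ln 2 (about 69%); task sets above the bound
may still be schedulable, since the test is sufficient but not necessary.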
   In the case of scheduling periodic tasks on a single processor where the
priorities of the tasks can change ("dynamic" scheduling), the deadline-driven
algorithm is optimal. 
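   The deadline-driven rule itself is easy to state: at every scheduling point,
run the ready task whose absolute deadline is nearest. A sketch of just the
selection step (the task structure is invented; a real kernel keeps much more
state):

  /* Deadline-driven (earliest-deadline-first) selection: priorities
   * are recomputed at each scheduling point rather than fixed at
   * design time.
   */
  struct edf_task {
      long deadline;     /* absolute deadline, in ticks */
      int  ready;        /* nonzero if runnable */
  };

  struct edf_task *edf_pick(struct edf_task tasks[], int n)
  {
      struct edf_task *best = 0;
      int i;
      for (i = 0; i < n; i++)
          if (tasks[i].ready && (!best || tasks[i].deadline < best->deadline))
              best = &tasks[i];
      return best;       /* 0 if nothing is ready */
  }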
   Your search for an alternative method to an algorithm brings up a point. If
an algorithm is optimal, then there is no need to go any further. To say that
another algorithm is better than an optimal algorithm is like saying that Mary
is more pregnant than Sue. If an algorithm has not been proved optimal, then 
the question is still open. The hottest open topic in real-time scheduling
research is the problem of scheduling real-time tasks on a multiprocessor
system. This problem has been proven to be NP-complete (unsolvable) in the
general case. This means that no "optimal" algorithm will be possible with a
reasonable amount of computing time.

>
>Does anyone know of a good paper presenting the "con" side of the RM theory,
>or could you post a short (!) statement of the current position of
>various methods?

     The following is an excerpt from a paper entitled,
"Solutions for some Practical Problems in Prioritized Preemptive Scheduling,"
by Sha, Lehoczky and Rajkumar. It appears in the 1986 IEEE Real-Time Systems
Symposium. Here is the abstract:

   "Most existing real-time control systems use ad hoc static priority
scheduling methods. This is in spite of the fact that the rate monotonic
scheduling algorithm was proved to be the optimal static priority scheduling
algorithm for periodic tasks over a decade ago (1973). This lack of use is, in
part, because a direct application of this algorithm leads to a number of
practical problems which have not been fully addressed in the literature. In
this paper, we give a comprehensive treatment of a number of practical problems
associated with the use of the rate-monotonic algorithm. These include the
scheduling of messages over a bus with insufficient priority levels but with
multiple buffers. We also review the methods to handle aperiodic tasks and
present a new approach to stabilize the rate monotonic algorithm in the
presence of transient processor overloads. Finally, the problem of integrated
processor and data I/O scheduling is addressed. New results clearly establish
the importance of using an integrated approach to schedule both the processor
and data I/O activities."

To summarize the problems with the RM scheduling algorithm:

1. If task execution times are stochastic, and they often are, *ensuring* that
   the processor never becomes overloaded requires the use of absolute
   worst-case execution times for your tasks. This can lead to very poor CPU
   utilization.
2. The rate-monotonic and deadline-driven algorithms operate optimally in good
   conditions, but behave unpredictably in overload conditions. This is ironic
   because it is usually when a system is in overload that scheduling
   correctness becomes most critical.
3. Because most I/O devices like serial interfaces, DMA, etc., use simple FIFO
   type scheduling schemes, and because some tasks must wait for other tasks,
   it is quite likely that the optimal priorities assigned by your scheduling
   algorithm will conflict with the FIFO scheme used for I/O. To really have
   an optimal scheduler, you need to have optimal I/O scheduling as well.
4. Neither the rate-monotonic algorithm nor the deadline-driven algorithm is
   optimal in the multiprocessor case.

There are probably other cons, but, for the sake of brevity, I will stop here.
If you got my post on real-time references, you might find some other papers
that refer to the RM scheduler and the DD scheduler.  Good luck!


   - Eugene Walden (walden@dip.eecs.umich.edu)

vestal@SRC.Honeywell.COM (Steve Vestal) (08/30/90)

In addition to the rate monotonic and earliest deadline algorithms, there are
also cyclic scheduling and least laxity scheduling.

Least laxity scheduling, like earliest deadline scheduling, is a dynamic
priority algorithm that is feasible up to 100% utilization for arbitrary task
sets (which is not true for rate monotonic scheduling).  I believe it requires
that task execution time be declared to the scheduler in advance, but may
exhibit more desirable behavior in some overload situations than earliest
deadline scheduling.  Is anyone familiar enough with this algorithm to
describe its properties in detail?
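My rough understanding of the selection rule, sketched below (the structure
fields are invented for illustration): laxity is time to deadline minus
remaining execution time, which is why execution times must be declared.

  /* Least-laxity-first selection: run the ready task with the least
   * slack, where laxity = (deadline - now) - remaining execution time.
   */
  struct llf_task {
      long deadline;      /* absolute deadline, in ticks */
      long remaining;     /* declared execution time still owed, in ticks */
      int  ready;         /* nonzero if runnable */
  };

  struct llf_task *llf_pick(struct llf_task tasks[], int n, long now)
  {
      struct llf_task *best = 0;
      long best_laxity = 0, laxity;
      int i;
      for (i = 0; i < n; i++) {
          if (!tasks[i].ready)
              continue;
          laxity = (tasks[i].deadline - now) - tasks[i].remaining;
          if (!best || laxity < best_laxity) {
              best = &tasks[i];
              best_laxity = laxity;
          }
      }
      return best;
  }

One known wrinkle: when two tasks have nearly equal laxity, a naive LLF
scheduler can thrash between them, so practical versions damp the switching.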

Cyclic scheduling is more or less restricted to sets of tasks whose periods or
frequencies are multiples of each other.  A cyclic schedule consists of some
number of frames or cycles that are usually built by hand, where the cycles
are dispatched in round-robin fashion at the same frequency F as the highest
frequency task.  Each frame invokes the highest frequency task; a task whose
frequency is F/N is invoked every Nth cycle.  The general problem of producing
an optimal cyclic schedule is NP-complete (bin-packing can be reduced to this
problem).
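Concretely, the dispatcher can be as small as a frame counter and a case
statement (the task names and four-frame layout below are invented):

  /* Cyclic executive sketch: the timer interrupt fires at the highest
   * task frequency F.  task_a runs every frame (F), task_b every 2nd
   * frame (F/2), task_c every 4th frame (F/4).  The frame table is
   * built by hand, as cyclic schedules usually are.
   */
  #define FRAMES 4

  void task_a(void) {}     /* frequency F; invented task bodies */
  void task_b(void) {}     /* frequency F/2 */
  void task_c(void) {}     /* frequency F/4 */

  void timer_interrupt_handler(void)
  {
      static int frame = 0;

      task_a();                      /* highest-frequency task: every frame */
      switch (frame) {
      case 0: task_b(); break;
      case 1: task_c(); break;       /* lower rates spread across frames */
      case 2: task_b(); break;
      case 3: break;                 /* slack frame */
      }
      frame = (frame + 1) % FRAMES;
  }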

My own opinion (which with $1 will get you a cup of coffee at some airports)
is that none of these are superior in all ways in all applications.  Here's
what I would look for:

Cyclic.  If processor utilization is low, the task set is small, and the ratio
         of the highest to lowest frequencies is reasonable (number of
         distinct cycles is reasonable) this has some things going for it.
         First, it has a simple implementation (a case statement in a handler
         for a periodic interrupt will do).  Inter-task communication is via
         parameters or global shared variables.  Skew, jitter and transport
         lag can usually be more easily controlled than with preemptive
         disciplines.  This is also extremely attractive for high-reliability
         systems (e.g. fly-by-wire).  The strictly deterministic nature of the
         schedule makes some people more comfortable, and there's almost no
         runtime scheduling code to certify.

Rate Monotonic: Although feasibility up to 100% utilization is not guaranteed
         for all possible task sets, it is for harmonic task sets.  Also, it's
         been shown that the expected breakdown utilization for a task set
         with a random mixture of frequencies is reasonably high (88% sticks
         in my mind).  For processors with prioritized interrupts and multiple
         timers, hardware priority can be used to get a simple high-speed
         implementation.  For harmonic task sets, a self-interrupting
         handler can be used for a high-speed implementation.  Well-suited for
         languages that support static priority assignment (e.g. Ada).
         Solutions exist for supporting inter-task communication, aperiodic or
         event-driven tasks, modeling runtime overheads, graceful
         degradation during overload, and fault-tolerance for overrun/timing
         bugs.  My personal favorite if modularized, maintainable software is
         desired.

Dynamic Priority Disciplines: Don't know much about these, other than that
         they are feasible up to 100% utilization for arbitrary task sets.  I
         believe they may also have some advantages for providing graceful
         degradation during overload conditions.

Steve Vestal
Mail: Honeywell S&RC MN65-2100, 3660 Technology Drive, Minneapolis MN 55418 
Phone: (612) 782-7049                    Internet: vestal@src.honeywell.com

karsh@trifolium.esd.sgi.com (Bruce Karsh) (08/30/90)

In article <1990Aug29.152604.28573@zip.eecs.umich.edu> walden@dip.eecs.umich.edu (Eugene Marvin Walden) writes:

>   If an algorithm is proven to be optimal, as the RM
>scheduling algorithm has been, then there is no "better" algorithm. In the case
>of scheduling periodic tasks with priorities that remain constant through their
>lifetime ("static" scheduling) on a single processor, the RM algorithm is
>optimal. Period. There are several problems with implementation, however, as I
>mention below.
>   In the case of scheduling periodic tasks on a single processor where the
>priorities of the tasks can change ("dynamic" scheduling), the deadline-driven
>algorithm is optimal. 
>   Your search for an alternative method to an algorithm brings up a point. If
>an algorithm is optimal, then there is no need to go any further.

This is an oversimplification.  Rate Monotonic Scheduling is optimal with
respect to certain criteria: e.g., that processes' priorities are fixed,
that processes are periodic, that lower-priority processes may be preempted,
that preemption does not cost anything, and that the figure of merit is the
CPU utilization.

If you choose other criteria, like least number of preemptions for a fixed
task-set, then you'll perhaps come up with slightly different algorithms.

> To say that
>another algorithm is better than an optimal algorithm is like saying that Mary
>is more pregnant than Sue.

Even so, a rate-monotonic schedule is not a unique optimal schedule for
a set of tasks.  Before you decide that there is "no need to go further",
it may be a good idea to look at other optimal scheduling rules.

The non-fixed priority schedulers are pretty neat things and they come in
a variety of optimal forms.

			Bruce Karsh
			karsh@sgi.com

walden@dip.eecs.umich.edu (Eugene Marvin Walden) (08/31/90)

In article <68100@sgi.sgi.com> karsh@trifolium.sgi.com (Bruce Karsh) writes:
>This is an oversimplification.  Rate Monotonic Scheduling is optimal with
>respect to certain criteria: e.g., that processes' priorities are fixed,
>that processes are periodic, that lower-priority processes may be preempted,
>that preemption does not cost anything, and that the figure of merit is the
>CPU utilization.
>
>If you choose other criteria, like least number of preemptions for a fixed
>task-set, then you'll perhaps come up with slightly different algorithms.

   Well, I guess I should have been more clear. What I meant was that, for the
problem stated in Liu-Layland [73] where the rate-monotonic and deadline-driven
schedulers are proven to be the optimal static and dynamic schedulers,
respectively, they are optimal. Thus, if you make the same assumptions as they
did, like static priorities, independent tasks, etc., then you need not look
any further for a "better" algorithm.
   You also bring up a good point about how the performance of a scheduler is
measured. In a paper entitled "Local Non-Preemptive Scheduling Policies for
Hard Real-Time Distributed Systems," by Woodside and Craig, in the 1987 IEEE
Real-Time Systems Symposium, they compare the following policies:

   1. FCFS
   2. SJF
   3. LLF (least laxity first)
   4. EDD (deadline-driven)
   5. GM (Stankovic)
   6. MM (Moore)

   The performance criteria they used were CPU utilization, ratio of rejected
tasks, ratio of rejected tasks with positive laxity, and the remaining laxity 
of rejected tasks with positive laxity. No one algorithm is "best" for all 
performance criteria, so your point is well-taken.



   - Eugene Walden (walden@dip.eecs.umich.edu)

rick@hp-lsd.COS.HP.COM (Rick Nygaard) (08/31/90)

> For example, define event A and event B.  Then, set the analyzer to trigger
> whenever B occurs > 200 usecs after A.

The HP 64620 State Analyzer, when configured with the 20 channel range/overview
board, supports this measurement, and other more general "range time trigger"
measurements.  This analyzer is an optional part of the 64000 emulation system.

Unfortunately, the measurement setup is somewhat obscure.**  The following is
from memory.  The manual (or softkeys) should be consulted for the precise
syntax:

  show trace_specification                         # bring up the trace spec
  overview on time                                 # turn on overview machine
  window one enable address A disable address B    # define interval
  overview enable window one                       # apply interval to overview
  overview event 1 range 200 usec or_more          # define time range
  trigger on overview event 1                      # trigger when in range
  execute                                          # start measurement
  show trace_list                                  # bring up the results

Trigger will occur on the bus cycle containing address B when the time
interval from A to B is 200 usec or longer.  Five time ranges, plus an
"everything else" bucket may be defined.  Thus, triggering in the "notches"
between a set of acceptable time ranges is also possible.

Rick Nygaard
HP, Logic Systems Division


** Yes, I know this is an understatement.

ken@minster.york.ac.uk (09/07/90)

In article <182@srchtec.UUCP> johnb@srchtec.UUCP (John Baldwin) writes:
>Not to start flames, but I must comment that it is about time we had some
>*real* real-time postings....

Quite agree. Get sick of all these "Our Real-Time Unix runs NFS" postings.

In article <1990Aug29.152604.28573@zip.eecs.umich.edu> walden@dip.eecs.umich.edu (Eugene Marvin Walden) writes:
>   No, there really aren't. If an algorithm is proven to be optimal, as the RM
>scheduling algorithm has been, then there is no "better" algorithm. In the case
>of scheduling periodic tasks with priorities that remain constant through their
>lifetime ("static" scheduling) on a single processor, the RM algorithm is
>optimal. Period.

It is important to state what "optimal" means. Optimal here means "all
deadlines are met".

>[...] The hottest open topic in real-time scheduling
>research is the problem of scheduling real-time tasks on a multiprocessor
>system. This problem has been proven to be NP-complete (unsolvable) in the
>general case. This means that no "optimal" algorithm will be possible with a
>reasonable amount of computing time.

"Optimal" here means something else, like "best load balance", or "lowest
network bandwidth", in addition to "all deadlines are met".

NP-complete does _not_ mean unsolvable! There are techniques for finding
optimal or very nearly optimal solutions to these problems. I have a program
which takes a communicating set of tasks and allocates them to a number of
processors, guaranteeing that deadlines are met. It produces results that
are very good. I can't verify that they are optimal, except by enumerating
all solutions. With 9 tasks and 5 processors it takes 450 seconds to check
them all (my program got the optimal one straight off). With the big problem
I have (38 tasks, 4 task replicas, 8 processors) it would take 10^28
years to check!  The program takes a bit of time to run (40 seconds), so
you can't use it on-line; hence tasks have static properties (fixed
period, fixed CPU time, fixed memory). It will handle aperiodics via
deferrable servers. I've got a paper on this, but I'm looking for a
journal without a backlog of 10^28 years...
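
For the curious, the exhaustive check amounts to the following (a sketch of
the enumeration only, not my allocation program: the rate-monotonic
utilization bound stands in for the real schedulability test, and
communication, replicas and memory are all ignored):

  #include <math.h>

  /* Try every assignment of n tasks to m processors (m^n cases, which
   * is why exhaustive checking blows up) and report whether some
   * assignment leaves every processor schedulable.  c[i] and t[i] are
   * each task's execution time and period.
   */
  static int processor_ok(const double c[], const double t[], int n,
                          const int assign[], int proc)
  {
      double u = 0.0;
      int i, k = 0;
      for (i = 0; i < n; i++)
          if (assign[i] == proc) {
              u += c[i] / t[i];
              k++;
          }
      /* Liu-Layland bound as a stand-in schedulability test */
      return k == 0 || u <= k * (pow(2.0, 1.0 / k) - 1.0);
  }

  int feasible(const double c[], const double t[], int n, int m,
               int assign[], int task)
  {
      int p;
      if (task == n) {                    /* complete assignment: test it */
          for (p = 0; p < m; p++)
              if (!processor_ok(c, t, n, assign, p))
                  return 0;
          return 1;
      }
      for (p = 0; p < m; p++) {           /* try each processor in turn */
          assign[task] = p;
          if (feasible(c, t, n, m, assign, task + 1))
              return 1;
      }
      return 0;
  }

Pruning partial assignments that are already unschedulable cuts the tree a
great deal, but the worst case stays exponential.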


--
Ken Tindell             UUCP:     ..!mcsun!ukc!minster!ken
Computer Science Dept.  Internet: ken%minster.york.ac.uk@nsfnet-relay.ac.uk
York University,        Tel.:     +44-904-433244
YO1 5DD
UK