[comp.realtime] Software primitives for real-time programming languages

rhwang@cs.utexas.edu (Rwo-Hsi Wang) (09/20/90)

Sorry if the following questions have been asked before.  Could someone
tell me:

1. What kind of software primitives should a language provide in order
   to be called a (hard) real-time programming language?
2. What are the commonly used real-time programming languages?

Thanks for your answers.

Rwo-Hsi Wang
rhwang@cs.utexas.edu

alex@vmars.tuwien.ac.at (Alexander Vrchoticky) (09/20/90)

rhwang@cs.utexas.edu (Rwo-Hsi Wang) writes:

>Sorry if the following questions have been asked before.  Could someone
>tell me:

>1. What kind of software primitives should a language provide in order
>   to be called a (hard) real-time programming language?

For soft real-time systems (i.e. systems where it is not mandatory to 
ascertain before run-time that the deadlines will indeed be met) there 
are a number of primitives that can conveniently be used to express 
real-time constraints, synchronization, priorities and so on. 
For those there exists a very comprehensive survey called the 
Ada Programming Language Reference Manual :-)

For hard real-time systems one could argue that the 
question should be put the other way round:

`What primitives should be specifically excluded from a language so that
 it may be called a hard real-time programming language'

To stimulate discussions I suggest that the 
following should *not* be used for hard real-time systems: 

	o  Explicit synchronization statements (semaphores, rendezvous).
	o  `delay' statements.
	o  Explicit scheduling statements (of the `at 16:30 start foo' type).
	o  Unbounded loops and recursions.
	o  Unstructured control-flow statements (goto).
	o  Dynamic memory allocation.
	o  Dynamic task creation and termination.
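
To make the flavor of the `no unbounded loops' restriction concrete, here is a
minimal sketch (Python used purely as notation; the names and the particular
bound are invented for illustration):

```python
# Sketch of the "no unbounded loops" rule: every loop carries a static
# iteration bound fixed at design time, so a worst-case execution time
# can be read off the program text.

MAX_PROBES = 64  # static bound, chosen at design time from the table size

def bounded_lookup(table, key):
    """Linear search with a statically bounded loop.

    The loop body runs at most MAX_PROBES times, so the WCET is
    MAX_PROBES * (cost of one probe), regardless of the input.
    """
    for i in range(min(len(table), MAX_PROBES)):
        if table[i] == key:
            return i
    return -1  # bound exhausted: an analyzable outcome, not an overrun
```

The point is not the search itself but that the bound is visible to a static
analyzer, where an open-ended `while` would not be.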


Before anyone sets out to commit a ritualistic murder let me state
that I do know of some techniques that can make *some* of the above work
in the hard real-time environment. The question whether this is a good
idea is an entirely different matter.
I strongly believe that for hard real-time systems an integrated
approach encompassing:

	1.  Architecture
	2.  Operating system
	3.  Scheduling strategy 
	4.  Programming language 

is to be preferred over augmenting a general-purpose language with a few 
constructs to express time and synchronization and letting a run-time system 
go through immense pains trying to support it.

I'd like to encourage discussions on this topic and hope that
we can get something more controversial and stimulating going on 
than discussions of the `anyone have a bar-compiler for the foo-board?' type.

--
Alexander Vrchoticky  Technical University Vienna, Dept. for Real-Time Systems
Voice:  +43/222/58801-8168   Fax: +43/222/569149
e-mail: alex@vmars.tuwien.ac.at or vmars!alex@relay.eu.net  (Don't use 'r'!)

rhwang@cs.utexas.edu (Rwo-Hsi Wang) (09/21/90)

In article <1844@tuvie> alex@vmars.tuwien.ac.at (Alexander Vrchoticky) writes:
>To stimulate discussions I suggest that the 
>following should *not* be used for hard real-time systems: 
>
>	o  Explicit synchronization statements (semaphores, rendezvous).
>	o  `delay' statements.
>	o  Explicit scheduling statements (of the `at 16:30 start foo' type).
>	o  Unbounded loops and recursions.
>	o  Unstructured control-flow statements (goto).
>	o  Dynamic memory allocation.
>	o  Dynamic task creation and termination.

One quick way to satisfy some of the above requirements is to remove the
related constructs, e.g., semaphores, rendezvous, delay, goto, from an
existing language.  However, the question then becomes whether the resulting
language is functionally sufficient for hard real-time applications.  If it is
not, then what primitives should be provided for rectification?

Besides functional sufficiency, convenience/expressiveness may be another
point of consideration.  For example, what will be the impact of a
language without explicit synchronization statements and delay statements
on distributed computing applications?

>I strongly believe that for hard real-time systems an integrated
>approach encompassing:
>
>	1.  Architecture
>	2.  Operating system
>	3.  Scheduling strategy 
>	4.  Programming language 

While it will be interesting to see discussions on all of the above topics,
to increase the chance of achieving more constructive conclusions, I'd like
to suggest that we first focus on the programming language issue.

Rwo-Hsi Wang
rhwang@cs.utexas.edu

brendan@batserver.cs.uq.oz.au (Brendan Mahony) (09/21/90)

alex@vmars.tuwien.ac.at (Alexander Vrchoticky) writes:



>To stimulate discussions I suggest that the 
>following should *not* be used for hard real-time systems: 

>	o  `delay' statements.
>	o  Explicit scheduling statements (of the `at 16:30 start foo' type).

I'd certainly like to know why these should not be allowed. By delay do
you mean explicitly relinquishing control of the processor? I can see
problems with scheduling statements if care is not taken to ensure that
there are no conflicts over the processor resource. Please tell us more
about the problems.


>I strongly believe that for hard real-time systems an integrated
>approach encompassing:

>	1.  Architecture
>	2.  Operating system
>	3.  Scheduling strategy 
>	4.  Programming language 

       -1.  Process model
	0.  Specification language

I really don't see how you can get a good formal description of 1-4
without doing some work on these first.
--
Brendan Mahony                   | brendan@batserver.cs.uq.oz       
Department of Computer Science   | heretic: someone who disagrees with you
University of Queensland         | about something neither of you knows
Australia                        | anything about.

alex@vmars.tuwien.ac.at (Alexander Vrchoticky) (09/21/90)

brendan@batserver.cs.uq.oz.au (Brendan Mahony) writes:

[asks why I think that explicit scheduling statements should not be 
 used in hard real-time systems]

>I'd certainly like to know why these should not be allowed. By delay do
>you mean explicitly relinquishing control of the processor? I can see
>problems with scheduling statements if care is not taken to ensure that
>there are no conflicts over the processor resource. Please tell us more
>about the problems.

For the system's scheduler the delay statement is really 
only a special case of the `at xx:xx do yyyy' statement, 
as both require scheduling a specified
task at a specified point in the future.

The first question about delay-statements: 
How much extra delay is acceptable? If I execute
the statement `delay 10', is it permissible to continue after 11 seconds ?
If not, is it permissible to preempt another task to allow the task
in question to continue execution `exactly' after ten seconds (exactly 
meaning at the first scheduling decision after ten seconds)?
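
The question can be illustrated with a small model (the tick-based scheduler
is an assumption of mine, not from the post): if scheduling decisions happen
only at multiples of a fixed tick, a task executing `delay 10' resumes at the
first decision point at or after the requested time, never exactly on it.

```python
import math

def resume_time(now, delay, tick):
    """Earliest actual resumption of a task executing `delay <delay>`,
    assuming a scheduler that makes decisions only at multiples of `tick`.
    The extra delay beyond the request is up to one full tick."""
    requested = now + delay
    return math.ceil(requested / tick) * tick
```

With a tick of 4 units, `delay 10' issued at time 0 resumes at 12, i.e.
with two units of unavoidable extra delay.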

Whatever your decision is, the next question immediately pops up:
Can you analytically guarantee that the required
timing behavior is indeed met under all foreseeable circumstances?
Remember that the programming language rests on a run-time system that 
has to schedule tasks according to some strategy (be it rate monotonic,
least laxity, static scheduling, ...). This scheduler has to guarantee
the timely execution of *all* tasks, not just a single one. 
The localized specification of required timing behavior via explicit
scheduling statements in single tasks makes this much more difficult,
and sub-optimal solutions are the best one can hope for.
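
As one concrete example of such a global guarantee (my addition; the post
only names the strategies): the classic Liu & Layland test for rate-monotonic
scheduling checks the utilization of *all* periodic tasks at once.

```python
def rm_schedulable(tasks):
    """Sufficient (not necessary) rate-monotonic schedulability test.

    tasks: list of (wcet, period) pairs for periodic tasks whose
    deadlines equal their periods.  Passes iff total utilization is
    within the Liu & Layland bound n * (2**(1/n) - 1).
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)
```

Note that no single task can decide this locally: adding one task, or one
explicit scheduling statement, changes the answer for every other task.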

Al Mok argues in [Mok84] that decisions with global impact on the system 
(relinquishing the CPU, initiating actions at certain points in time) 
should be done with a global view of the system, 
including its scheduling strategy, not in the restricted context of a task.
[This is a recommendation :-) ]

----
References:

@inproceedings{Mok:PRTS84,
author = {A. K. Mok},
title = {The Design of Real-time Programming Systems Based on Process Models},
booktitle = {Proc. 1984 Real-Time Systems Symposium},
month = {Dec.},
year = {1984},
pages = {5-17}
}

--
Alexander Vrchoticky  Technical University Vienna, Dept. for Real-Time Systems
Voice:  +43/222/58801-8168   Fax: +43/222/569149
e-mail: alex@vmars.tuwien.ac.at or vmars!alex@relay.eu.net  (Don't use 'r'!)

rajuk@umree.isc.umr.edu (Raju Khubchandani) (09/21/90)

In article <12712@cs.utexas.edu> rhwang@cs.utexas.edu (Rwo-Hsi Wang) writes:
>>I strongly believe that for hard real-time systems an integrated
>>approach encompassing:
>>	1.  Architecture
>>	2.  Operating system
>>	3.  Scheduling strategy 
>>	4.  Programming language 
>While it will be interesting to see discussions on all of the above topics,
>[...] I'd like
>to suggest that we first focus on the programming language issue.

I have read more than I have hands-on experience with programming languages,
but would anybody disagree that the fastest way to execute is to use the
assembly language of a given processor?  In other words, for a given processor
and clock/bus speed the fastest way to execute is to do assembly-level
programming.  I know this is a lot of programming effort compared to using
a high-level language, but the objective is to extract maximum juice from your
hardware.

Raju

alex@vmars.tuwien.ac.at (Alexander Vrchoticky) (09/21/90)

rhwang@cs.utexas.edu (Rwo-Hsi Wang) writes:

>
> [my original comments deleted]
>
>
>One quick way to satisfy some of the above requirements is to remove the
>related constructs, e.g., semaphores, rendezvous, delay, goto, from an
>existing language.  However, whether the resulted language is functionally
>sufficient for hard real-time applications becomes the problem.  If it is
>not, then what primitives should be provided for rectification?

Good suggestion and good question :-) !

What is needed in a hard real-time language ?

	o Means to communicate with the environment.
	o Means to communicate with other tasks, possibly in a distributed
	  computing system.
	o Access to a notion of `time'.

On the other hand, a construct can only be permissible if its execution time
is bounded.  For communication, message passing seems a reasonable option,
because the execution time of a message send or message receive does not
depend on the state of other tasks in the system.
There also is the possibility of associating
synchronization constructs (if one insists on having them) with timeouts.
However, this raises two problems:

	o  A problem of the kind `this task misses its deadline in 10%
	  of the cases' is transformed into one of the kind
	  `this task always meets its deadline, but does not compute 
	  a useful result in 10% of the cases'.

	o  If fault-tolerance dictates redundant execution of tasks,
	  one execution may complete `just in time' while another
	  overruns and the exception handler is called.  This may
	  present problems for the decider.

Therefore, unless one has a way to show that a set of tasks using explicit
synchronization does meet its deadlines (see [Kli86], [Sto87] and [Lei86]),
this only shifts the problem but does not solve it.

Note that the second problem must be carefully considered whenever one
bases *any* decision on the progress of time.
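
A sketch of the timeout construct under discussion (Python's standard queue
module stands in here for a message port; the names are mine):

```python
import queue

def receive_with_timeout(port, timeout_s):
    """Time-bounded message receive: always completes within roughly
    timeout_s, but may yield None.  The deadline miss has been traded
    for a missing result -- the problem is shifted, not solved."""
    try:
        return port.get(timeout=timeout_s)
    except queue.Empty:
        return None
```

The caller's execution time is now bounded, but every caller must carry
explicit handling for the `no useful result' case.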

>Besides functional sufficiency, convenience/expressiveness may be another
>point of consideration.  For example, what will be the impact of a
>language without explicit synchronization statements and delay statements
>to distributed computing applications?

It depends on the architecture.  In our experience the impact is this:
more easily understandable, more reliable real-time programs.  But then
the architecture of our system (MARS) strongly supports communication
via messages and implicit synchronization via static scheduling.
For an introduction to MARS see [Kop89].


Best Regards,

Alexander Vrchoticky

--- 
References:

@article{Kligerman:TOSE86,
    author =    {E. Kligerman and A. Stoyenko},
    year =      {1986},
    journal =   TOSE,
    month =     {Sep.},
    number =    {9},
    pages =     {941-949},
    title =     {Real-Time Euclid: A Language for Reliable Real-Time Systems},
    volume =    {SE-12}
}

@book{Stoyenko:Diss,
        author =        {A. Stoyenko},
        title =         {A Real-Time Language With A Schedulability Analyzer},
        address =       {Computer Systems Research Institute, 
			University of Toronto},
        publisher =     {Dissertation},
        month =         {Dec.},
        year =          {1987}
}

@article{Leinbaugh:TOSE86,
    author =        {D. W. Leinbaugh and M.-R. Yamini},
    year =          {1986},
    journal =       TOSE,
    month =         {Dec.},
    number =        {12},
    pages =         {},
    volume =        {SE-12},
    title =         {Guaranteed Response Times in a Distributed
                    Hard-Real-Time Environment}
}

@article{Kopetz:MICRO89,
        author =        {H. Kopetz and A. Damm and Ch. Koza and M. Mulazzani
                         and W. Schwabl and Ch. Senft and R. Zainlinger},
        title =         {Distributed Fault-Tolerant Real-Time Systems:
			    The {MARS} {A}pproach},
        journal =       {IEEE Micro},
        volume =        {9},
        number =        {1},
        year =          {1989},
        month =         {Feb.},
        pages =         {25-40}
}
--
Alexander Vrchoticky  Technical University Vienna, Dept. for Real-Time Systems
Voice:  +43/222/58801-8168   Fax: +43/222/569149
e-mail: alex@vmars.tuwien.ac.at or vmars!alex@relay.eu.net  (Don't use 'r'!)

frazier@oahu.cs.ucla.edu (Greg Frazier) (09/22/90)

In article <1424@umriscc.isc.umr.edu> rajuk@umree.isc.umr.edu (Raju Khubchandani) writes:
>level language of any processor? In other words, what I am saying is that for a
>given processor and clock/bus speed the fastest way to execute is to do assembly
>level programming. I know this is a lot of programming effort compared to using
>a high level language, but the objective is to extract maximum juice from your
>hardware.

No, this is not the goal.  The goal is to obtain reliable juice
from your hardware, and assembly code is about the least reliable
way to program a computer that I know of.
--
"They thought to use and shame me but I win out by nature, because a true
freak cannot be made.  A true freak must be born."  K. Dunn, _Geek_Love_

Greg Frazier	frazier@CS.UCLA.EDU	!{ucbvax,rutgers}!ucla-cs!frazier

alex@vmars.tuwien.ac.at (Alexander Vrchoticky) (09/24/90)

rajuk@umree.isc.umr.edu (Raju Khubchandani) writes:


>I have read more than I have hands-on experience with programming languages,
>but would anybody disagree that the fastest way to execute is 
>using the assembly level language of any processor? 

How could anyone ? :-)

>level programming. I know this is a lot of programming effort compared to using
>a high level language, but the objective is to extract maximum juice from your
>hardware.

No.
The objective is to produce software that is reliable, maintainable,
readable, and meets its timing constraints.  Note that
`extracting maximum juice' is not a timing constraint. 

>Raju

-alex

--
Alexander Vrchoticky  Technical University Vienna, Dept. for Real-Time Systems
Voice:  +43/222/58801-8168   Fax: +43/222/569149
e-mail: alex@vmars.tuwien.ac.at or vmars!alex@relay.eu.net  (Don't use 'r'!)

johnb@srchtec.UUCP (John T. Baldwin) (09/24/90)

In article <1844@tuvie> alex@vmars.tuwien.ac.at (Alexander Vrchoticky) writes:
av>
av> To stimulate discussions I suggest that the 
av> following should *not* be used for hard real-time systems: 
av>
av>	o  Explicit scheduling statements (of the `at 16:30 start foo' type).

Does this also preclude the following:

   1. Event "reenter-retro-burn" tied to task "calc-execute-reenter".
   2. Hard-schedule: reenter-retro-burn at 16:30:00.5 GMT.

If so, why?


av>	o  Unbounded loops and recursions.

While I can certainly fathom the reason this would be desirable, how can
it be (practically) enforced so as not to grossly interfere with the
primary goal: getting the system to run in the first place?

av>	o  Dynamic task creation and termination.

Once again, having this constraint would surely make it easier to
assert that a given real-time system meets its timing constraints.
Yet doesn't this (in general) automatically preclude the application of
real-time techniques in the AI domain?  This is critically important to me,
personally and professionally, since I am involved with the real-time
aspects of an AI system.

To broaden the horizon of this particular discussion thread: stochastic
behavior of some tasks in the system can really be a pain, and cause
all sorts of heartache (and heartburn! :-)) for the real-time practitioner,
but it is also a fact of life.  Some algorithms simply don't have a *known*
alternative having deterministic execution time.  We *sure* don't want to
pad the nondeterministic ones out to their worst case!  :) :) :)


av>I'd like to encourage discussions on this topic and hope that
av>we can get something more controversial and stimulating going on 
                                               ^^^^^^^^^^^
av>than discussions of the `anyone have a bar-compiler for the foo-board?' type.

Absolutely!
(cf. my earlier plea, and BTW, thanks to everyone for a great list of
     pointers to good r-t reading material!)

av>Alexander Vrchoticky  Technical University Vienna, Dept for Real-Time Systems
av>Voice:  +43/222/58801-8168   Fax: +43/222/569149
av>e-mail: alex@vmars.tuwien.ac.at or vmars!alex@relay.eu.net  (Don't use 'r'!)


-- 
John T. Baldwin                      |  johnb%srchtec.uucp@mathcs.emory.edu
Search Technology, Inc.              | 
                                     | "... I had an infinite loop,
My opinions; not my employers'.      |  but it was only for a little while..."

johnb@srchtec.UUCP (John T. Baldwin) (09/24/90)

In article <39180@shemp.CS.UCLA.EDU> frazier@oahu.cs.ucla.edu 
  (Greg Frazier) writes:

>In article <1424@umriscc.isc.umr.edu> rajuk@umree.isc.umr.edu 
> (Raju Khubchandani) writes:

>>I know this is a lot of programming effort compared to using
>>a high level language, but the objective is to extract maximum
>>juice from your hardware.

>
>No, this is not the goal.  The goal is to obtain reliable juice
                                                  ^^^^^^^^
>from your hardware, and assembly code is about the least reliable
>way to program a computer that I know of.

True.  The primary concern of real-time is to ensure that tasks reliably
*meet their time constraints*.  In the vernacular, we want "on time, at
the right time,"  not "as fast as we can get it."

I would argue that other forms of reliability are properly in the domain
of so-called "software engineering."  Thus I agree with the statement that
assembly coding is typically less "reliable" than other forms of coding;
yet I disagree that it is inappropriate for RT.  I would rather not
resort to assembler, but given certain task/hardware pairs, you only have
a certain time rate of execution ("performance"), and it may be that the
only way to meet the RT constraints of your given task on the specified
hardware is to use assembler.   :-{

Some of us might argue that the above is called "misspecification."  :-)


-- 
John T. Baldwin                      |  johnb%srchtec.uucp@mathcs.emory.edu
Search Technology, Inc.              | 
                                     | "... I had an infinite loop,
My opinions; not my employers'.      |  but it was only for a little while..."

alex@vmars.tuwien.ac.at (Alexander Vrchoticky) (09/25/90)

johnb@srchtec.UUCP (John T. Baldwin) writes:

>Does this also preclude the following:

>   1. Event "reenter-retro-burn" tied to task "calc-execute-reenter".
>   2. Hard-schedule: reenter-retro-burn at 16:30:00.5 GMT.

>If so, why?

It precludes everything for which you cannot give a-priori guarantees
that the system will meet its timing constraints under all foreseeable
circumstances, and no more.  This is what hard real-time systems are for.
If you can give response time bounds for *all* tasks in the system
even when using such constructs then great. But I think it will be difficult.
As the state of the art advances more and more general constructs may
become permissible.
The point is not to disallow useful constructs but to discourage the use
of unnecessarily complex (and hence hard-to-verify) ones.


>av>	o  Unbounded loops and recursions.
>While I can certainly fathom the reason this would be desirable, how can
>it be (practically) enforced so as not to grossly interfere with the
>primary goal: getting the system to run in the first place?

Hmmm ... but the system does not only have to run in the first place, 
but also in the second, the third and in all places after that :-) 
Seriously: calculating execution time bounds for Turing machines is
pretty difficult (mail me if you have a solution :-).
If you want to do a-priori calculation of bounds for the execution time
you *have* to place some restrictions on the programs, whether enforced
by the programming language or by some coding standard.
The alternative (timing verification by testing) is more expensive
and less reliable, albeit much more often used.
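
A toy sketch of why the restrictions buy computability (my own illustration,
with invented node shapes): over a language with only primitive statements,
sequences, and statically bounded loops, the worst-case bound is a trivial
recursion over the program tree; the moment an unbounded construct appears,
no such function can be written.

```python
def wcet(node):
    """Worst-case execution time of a program in a restricted language.

    Node shapes (invented for this sketch):
      ("prim", cost)            -- primitive statement with known cost
      ("seq", [children])       -- sequential composition
      ("loop", max_iter, body)  -- loop with a static iteration bound
    """
    kind = node[0]
    if kind == "prim":
        return node[1]
    if kind == "seq":
        return sum(wcet(child) for child in node[1])
    if kind == "loop":
        return node[1] * wcet(node[2])
    raise ValueError("construct not analyzable: excluded from the language")
```

For example, a primitive of cost 2 followed by a 3-iteration loop over a
primitive of cost 4 has a bound of 2 + 3*4 = 14 time units.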

>To broaden the horizon of this particular discussion thread: stochastic
>behavior of some tasks in the system can really be a pain, and cause
>all sorts of heartache (and heartburn! :-)) for the real-time practitioner,
>but it is also a fact of life.  Some algorithms simply don't have a *known*
>alternative having deterministic execution time.  We *sure* don't want to
>pad out the nondeterministic ones out to their worst case!  :) :) :)

But then you have to live with the fact that your system will fail from time
to time. I am sure that there are lots of applications where you can live with
this, but there *are* applications where you don't want to ....

`Hey, ground control, the A* algorithm in the active control system 
 for the control surfaces is just digging through the 537th level 
 of the search tree. What are we ... (silence)'

There is a tradeoff between the power and the predictability of algorithms.
For hard real-time systems, where a timing failure can lead to a catastrophe
and endanger human lives, I will always argue for the deterministic,
predictable approaches.
After all, the heartburn of the programmer is less of a problem than the
burns and deaths of people in an airplane.

>John T. Baldwin                      |  johnb%srchtec.uucp@mathcs.emory.edu
>Search Technology, Inc.              | 
>                                     | "... I had an infinite loop,
>My opinions; not my employers'.      |  but it was only for a little while..."
					
				 	 So *this* is the algorithm
					 you talk about ? ;-)

--
Alexander Vrchoticky  Technical University Vienna, Dept. for Real-Time Systems
Voice:  +43/222/58801-8168   Fax: +43/222/569149
e-mail: alex@vmars.tuwien.ac.at or vmars!alex@relay.eu.net  (Don't use 'r'!)

frazier@oahu.cs.ucla.edu (Greg Frazier) (09/25/90)

In article <1853@tuvie> alex@vmars.tuwien.ac.at (Alexander Vrchoticky) writes:
+johnb@srchtec.UUCP (John T. Baldwin) writes:
+>To broaden the horizon of this particular discussion thread: stochastic
+>behavior of some tasks in the system can really be a pain, and cause
+>all sorts of heartache (and heartburn! :-)) for the real-time practitioner,
+>but it is also a fact of life.  Some algorithms simply don't have a *known*
+>alternative having deterministic execution time.  We *sure* don't want to
+>pad out the nondeterministic ones out to their worst case!  :) :) :)
+
+But then you have to live with the fact that your system will fail from time
+to time. I am sure that there are lots of applications where you can live with
+this, but there *are* applications where you don't want to ....
+
+`Hey, ground control, the A* algorithm in the active control system 
+ for the control surfaces is just digging through the 537th level 
+ of the search tree. What are we ... (silence)'
+
+There is a tradeoff between the power and the predictability of algorithms.
+For hard real-time systems, where a timing failure can lead to a catastrophe
+and endanger human lives I will always argue towards the deterministic
+predictable approaches.

Always arguing for deterministic approaches is, I think, very
limiting, and not necessary.  No system is 100% reliable.
When one is designing the flight control for an airplane, there
is no point in making the computer 100% reliable when the airplane
itself is less so.  For example, if an airplane "fails" once every
10^5 hours, then there is really no reason to install a computer
that will fail less often than once every 10^7 hours.  Furthermore,
allowing yourself to use stochastic real-time lets you utilize
significantly more complex hardware and algorithms, which in turn
lets you do more with your system.  Mind you, the stochastic
approach is less than simple, and I am still searching for literature
on the subject.
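
The arithmetic behind the argument, spelled out (independent failures in
series assumed; that simplification is mine, the rates are from the post):

```python
airframe_rate = 1e-5   # failures per hour, figure from the post
computer_rate = 1e-7   # a computer two orders of magnitude better

# For independent components in series, failure rates roughly add, so
# the better-than-necessary computer raises the system rate by only 1%.
system_rate = airframe_rate + computer_rate
relative_contribution = computer_rate / airframe_rate  # 0.01
```

Making the computer still more reliable buys essentially nothing at the
system level.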
>After all the heartburn of the programmer is less of a problem than the 
>burns and deaths of people in an airplane.
>
>>John T. Baldwin                      |  johnb%srchtec.uucp@mathcs.emory.edu
>>Search Technology, Inc.              | 
>>                                     | "... I had an infinite loop,
>>My opinions; not my employers'.      |  but it was only for a little while..."
>					
>				 	 So *this* is the algorithm
>					 you talk about ? ;-)
>
>--
>Alexander Vrchoticky  Technical University Vienna, Dept. for Real-Time Systems
>Voice:  +43/222/58801-8168   Fax: +43/222/569149
>e-mail: alex@vmars.tuwien.ac.at or vmars!alex@relay.eu.net  (Don't use 'r'!)


--
"They thought to use and shame me but I win out by nature, because a true
freak cannot be made.  A true freak must be born."  K. Dunn, _Geek_Love_

Greg Frazier	frazier@CS.UCLA.EDU	!{ucbvax,rutgers}!ucla-cs!frazier

David.Maynard@CS.CMU.EDU (09/25/90)

> It precludes everything for which you can not give a-priori guarantees 
> that the system will meet its timing constraints under all foreseeable 
> circumstances and no more. This is what hard real-time systems are for.

If this is supposed to be a discussion of software primitives for
real-time programming in general, then I see no reason to limit the
discussion to primitives that only support so-called "hard real-time
systems."  There is a HUGE body of real-time applications with (at least
some) stringent timeliness requirements that do not fit in the mold of
"hard deadlines that can never be missed."  In the past, designers have
had to force-fit these applications into rigid systems that divide the
universe into hard deadline tasks and background jobs.  If we are really
interested in improving the state-of-the-art for real time systems, then
why don't we discuss programming primitives that will help application
designers describe their REAL requirements.  Then maybe we can
concentrate OS and scheduling research efforts in areas that will
actually be of some benefit.

> >To broaden the horizon of this particular discussion thread: stochastic
> >behavior of some tasks ....
> But then you have to live with the fact that your system will fail from
> time
> to time. ...

No, you have to live with the fact that the world isn't deterministic --
and respond accordingly.  As systems become larger and more complex it
is becoming increasingly difficult to build working systems that assume
everything is perfectly (at least in the worst case) predictable.

A real solution to the real-time systems problem will embrace
nondeterminism as a fact of life and will provide mechanisms for dealing
with it.  If an application has some vital low-level control loops that
really require a certain level of determinism to ensure their stability,
then the system will need to provide that level of service in spite of
the fact that it is operating in an unpredictable world.  The other
components of the application, however, should be able to respond to
unexpected events or failures in a more dynamic fashion.

A major problem with much of the current real-time research is that
people spend inordinate amounts of time doing the academic equivalent of
saying, "I have a hammer.  I'm going to make everything look like a
nail."  Just because we happen to have a solution that works on a
limited class of problems doesn't mean that we should try to make other
problems look "close enough" so that we can apply the solution we
already know.  We should be spending our time trying to develop
solutions that address a larger class of problems from the outset.

Alpha is an operating system (and the center of an ongoing research and
development effort) that addresses many of these issues.  Alpha is
designed to support distributed real-time systems that may have a
variety of stringent timeliness requirements.  The application designer
specifies these time constraints to the operating system using
"time-value functions."  Time-value functions express the value of
completing an activity as a function of time.  (Hard deadlines are
described  as a simple special case where the time-value function is a
step-function.)  As part of our research, we have developed (and
continue to develop) a class of resource management algorithms that
utilize time-value functions in the decision making process.  
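
A sketch of the time-value idea as I read it (the function shapes and names
here are my illustration, not Alpha's actual API):

```python
def hard_deadline_tvf(deadline, value):
    """Hard deadline as the special case: a step time-value function."""
    return lambda t: value if t <= deadline else 0.0

def soft_deadline_tvf(deadline, value, decay_per_unit):
    """A softer requirement: completing late still has some value,
    decaying linearly after the deadline."""
    return lambda t: (value if t <= deadline
                      else max(0.0, value - decay_per_unit * (t - deadline)))

def best_completion(tvf, candidate_times):
    """Scheduler's view: choose the candidate completion time that
    yields the highest value under the activity's time-value function."""
    return max(candidate_times, key=tvf)
```

The scheduler can then maximize accrued value across activities instead of
merely partitioning the world into hard-deadline tasks and background jobs.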

I welcome a discussion of real-time specification methods (such as
time-value functions) that allow you to express the real timeliness
requirements of applications.  There is definitely a need for more work
in this area.  However, I'm not interested in a discussion that assumes
everything is a nail and tries to decide how long the nail is.

-David Maynard
 ---
 David P. Maynard (dpm@cs.cmu.edu)
 Alpha OS Research Group
 Carnegie Mellon University & Concurrent Computer Corp.
 ---
 All opinions expressed are mine.  I haven't asked CMU or
 Concurrent what they think.
 ---

brendan@batserver.cs.uq.oz.au (Brendan Mahony) (09/26/90)

johnb@srchtec.UUCP (John T. Baldwin) writes:

>I agree with the statement that
>assembly coding is typically less "reliable" than other forms of coding;
>yet I disagree that it is inappropriate for RT.

	It may be a little hard to meet a timing requirement if you are
	completely malfunctioning. Real-time reliability is a proper
	extension of software engineering reliability, not an orthogonal
	issue.

>I would rather not
>resort to assembler, but given certain task/hardware pairs, you only have
>a certain time rate of execution ("performance"), and it may be that the
>only way to meet the RT constraints of your given task on the specified
>hardware is to use assembler.   :-{

	Probably the compiler writer knows more about extracting "juice"
	out of the processor than most programmers, but I will ignore
	that fact. Programming in assembly is no theoretical impediment
	to "reliable" programming, but it does mean one or
	two more refinement steps, and a much larger source code to
	reason about. Hence I would expect it to be much more expensive,
	with fairly little return in performance over a good compiler.
	You pays ya money ...

	Ian Hayes has just walked in and suggested that a language in which
	you could make timing demands on the compiled code would be
	useful. For example
		a := exp | /\ t < 4 microsecs;
	Inability to comply would be flagged as a "syntax" error, along
	the lines of type mismatches etc.

	Does anyone know of any proposals along these lines?
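
One way to read the proposal (entirely my sketch; the instruction costs and
helper names are invented): the compiler sums worst-case costs for the code
generated from the statement and rejects it, like a type error, if the
annotated bound cannot be met.

```python
# Assumed worst-case instruction costs in microseconds (invented figures).
COST_US = {"load": 0.5, "mul": 1.0, "add": 0.5, "store": 0.5}

def check_timed_statement(instructions, bound_us):
    """Static check for `a := exp | /\\ t < bound microsecs`:
    accept only if the summed worst-case instruction cost stays
    strictly under the annotated bound."""
    worst_case = sum(COST_US[op] for op in instructions)
    return worst_case < bound_us
```

A failing check would surface at compile time, before the system ever runs,
which is exactly the appeal of the suggestion.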

--
Brendan Mahony                   | brendan@batserver.cs.uq.oz       
Department of Computer Science   | heretic: someone who disagrees with you
University of Queensland         | about something neither of you knows
Australia                        | anything about.

johnb@srchtec.UUCP (John Baldwin) (09/27/90)

In article <1853@tuvie> alex@vmars.tuwien.ac.at (Alexander Vrchoticky) writes:
>
>It [the suggested limits on RT primitives -jtb] precludes everything
> for which you can not give a-priori guarantees 
>that the system will meet its timing constraints under all foreseeable 
>circumstances and no more. This is what hard real-time systems are for.

When you refer to the "system" meeting its timing constraints, are you
talking about the system as a whole, or a particular task (subroutine, or
what-have-you) completing execution in <= 'n' units time?

>If you can give response time bounds for *all* tasks in the system
>even when using such constructs then great. But I think it will be difficult.
...
>The point is not to disallow useful constructs but to discourage the use
>of unnecessarily complex (and hence hard-to-verify) ones.

Okay, we agree on this completely.

>If you want to do a-priori calculation of bounds for the execution time
>you *have* to place some restrictions on the programs, whether enforced 
>by the programming language or by some coding standard. 
>The alternative (timing verification by testing) is more expensive 
>and less reliable, albeit much more often used.
>

Indeed: it is this "doing RT by tweaking" approach which I think we
all want to avoid.  I'm not espousing a view that we place no restrictions
on the programs;  in fact, I'm not particularly arguing a point of view
at all... I'm trying to use this discussion to maximum effect in educating
myself about real-time.  (Just wanted this to be clear since I'm liable
to put foot in mouth any day now :-)).

>
>But then you have to live with the fact that your system will fail from time
>to time. I am sure that there are lots of applications where you can live with
>this, but there *are* applications where you don't want to ....
>

I thought that one of the "charter" items of hard real-time was, in fact,
to be able to ensure that *when* your system begins to fail, it does so
incrementally, in a rational and predictable way.

>   `Hey, ground control, the A* algorithm in the active control system 
>    for the control surfaces is just digging through the 537th level 
>    of the search tree. What are we ... (silence)'
>
>There is a tradeoff between the power and the predictability of algorithms.
>For hard real-time systems, where a timing failure can lead to a catastrophe
>and endanger human lives I will always argue towards the deterministic
>predictable approaches.

As I have stated before, the project I'm involved in *involves* aviation.
It also involves AI, which (inherently) means that the algorithms used
will tend towards more power and less predictability.  To get rid of
these concerns is to get rid of the project.  Instead of getting rid
of it, I want to get it on a sound RT footing.

BTW, the system doesn't directly drive the control surfaces.   :-}
(I thought you'd like to be able to sleep at night.)



>>                                     | "... I had an infinite loop,
>>My opinions; not my employers'.      |  but it was only for a little while..."
>					
>				 	 So *this* is the algorithm
>					 you talk about ? ;-)

Actually, it refers to something a co-worker said when several of us were
tired and shooting the breeze.  It suddenly occurred to me that if one
took the phrase "infinite loop" super-literally, one arrived at a very
off-kilter view of the world:

   "Well, boss, we need another machine because Larry had another
    infinite loop."

   "Another useless machine?  That's the third he's wasted in five months!"

   "Yeah.  But they make great doorstops."



-- 
John T. Baldwin                      |  johnb%srchtec.uucp@mathcs.emory.edu
Search Technology, Inc.              |  "Pereant qui ante nos nostra dixerunt!"
                                     |
Opinion (open 'nyun [noun]): knowledge without the hindrance of silly facts.

alex@vmars.tuwien.ac.at (Alexander Vrchoticky) (09/27/90)

johnb@srchtec.UUCP (John Baldwin) writes:

>When you refer to the "system" meeting its timing constraints, are you
>talking about the system as a whole, or a particular task (subroutine, or
>what-have-you) completing execution in <= 'n' units time?

I'm talking about the set of all tasks that have hard deadlines specified. 


>I thought that one of the "charter" items of hard real-time was, in fact,
>to be able to ensure that *when* your system begins to fail, it does so
>in a rational, and predictable way, incrementally.

Hmmm ... `graceful degradation' is a matter that is not necessarily
directly related to the criticality of the system. There are hard
real-time application where there is an `emergency exit' into a safe
system state.
Consider a railway safety system. If the system is failing (for whatever
reason) the reasonable course of action is to switch all the signals to 
`red' and that's it. In this particular case this is to be 
preferred over `graceful degradation' where you might miss deadlines 
and have trains crash.

Of course this is not an option in aviation: `hold it right there ... :-)' 
The point is that there are systems in which missed deadlines are permissible
and systems in which they are not; and there are systems which have a safe
exit and systems which don't. These two concepts are largely orthogonal
and should be treated as such.

>As I have stated before, the project I'm involved in *involves* aviation.
>It also involves AI, which (inherently) means that the algorithms used
>will tend towards more power and less predictability.  To get rid of
>these concerns is to get rid of the project.  Instead of getting rid
>of it, I want to get it on a sound RT footing.

Best of luck. I really don't know much about AI systems, but I would 
be interested in your approaches to the power/predictability tradeoff. 

>BTW, the system doesn't directly drive the control surfaces.   :-}
>(I thought you'd like to be able to sleep at night.)

I sleep perfectly at night ... maybe not so in airplanes :-)

--
Alexander Vrchoticky  Technical University Vienna, Dept. for Real-Time Systems
Voice:  +43/222/58801-8168   Fax: +43/222/569149
e-mail: alex@vmars.tuwien.ac.at or vmars!alex@relay.eu.net  (Don't use 'r'!)

alex@vmars.tuwien.ac.at (Alexander Vrchoticky) (09/27/90)

David.Maynard@CS.CMU.EDU writes:

>Alpha is an operating system (and the center of an ongoing research and
>development effort) that addresses many of these issues.  Alpha is
>designed to support distributed real-time systems that may have a
>variety of stringent timeliness requirements.  The application designer
>specifies these time constraints to the operating system using
>"time-value functions."  Time-value functions express the value of
>completing an activity as a function of time.  (Hard deadlines are
>described  as a simple special case where the time-value function is a
>step-function.)  As part of our research, we have developed (and
>continue to develop) a class of resource management algorithms that
>utilize time-value functions in the decision making process.  

This is a great idea for requirements description. But how are the requirements
then transformed into a working system? 
Basically you have a set of time-value functions. I assume that
the execution of the system should maximize the `overall value',
however it may be defined. What is the next step in the design process? 
Are the value functions used for run-time scheduling? 
How do the scheduling algorithms work? 
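For concreteness, the quoted time-value idea might be modeled like this
(a sketch in C; the particular deadline, values, and decay rate are
invented, not taken from Alpha):

```c
/* A time-value function gives the value of completing an activity as a
   function of its completion time.  A hard deadline is the quoted
   step-function special case; softer constraints decay gradually. */

typedef double (*TimeValueFn)(double completion_time);

/* Hard deadline at t = 10: full value before it, none after (a step). */
double hard_deadline_value(double t) {
    return t <= 10.0 ? 100.0 : 0.0;
}

/* Softer constraint: full value up to t = 5, then linear decay to zero. */
double soft_value(double t) {
    if (t <= 5.0)
        return 100.0;
    double v = 100.0 - (t - 5.0) * 10.0;
    return v > 0.0 ? v : 0.0;
}
```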

>I welcome a discussion of real-time specification methods (such as
>time-value functions) that allow you to express the real timeliness
>requirements of applications.  There is definitely a need for more work
>in this area.  However, I'm not interested in a discussion that assumes
>everything is a nail and tries to decide how long the nail is.

Not everything is a nail. Some things are. Those things are called `nails'.

--
Alexander Vrchoticky  Technical University Vienna, Dept. for Real-Time Systems
Voice:  +43/222/58801-8168   Fax: +43/222/569149
e-mail: alex@vmars.tuwien.ac.at or vmars!alex@relay.eu.net  (Don't use 'r'!)

David.Maynard@CS.CMU.EDU (09/28/90)

> This is a great idea for requirements description. But how are the
> requirements then transformed into a working system? 
> Basically you have a set of time-value functions. I assume that
> the execution of the system should maximize the `overall value',

Exactly.  The basic goal of the algorithms is to maximize the value to
the application.  When sufficient resources are available, all of the
time constraints should be met.  When overload situations occur, threads
that will yield the least value to the application (either because they
are likely to miss their time constraint or because they just aren't
very valuable) should be shed so that other threads can satisfy their
time constraints (and contribute value).

> Are the value functions used for run-time scheduling? 
> How do the scheduling algorithms work? 

We are currently developing the third generation of scheduling
algorithms that use time-value functions (at run-time).  The first
generation assumed a uniprocessor with independent threads (but used a
consistent framework which was adapted to work in a distributed
environment).  The second generation of algorithms accounts for
dependencies among tasks.  The third generation (the topic of my thesis
research) attempts to deal more intelligently with multiple processor
environments.

It is difficult to give a concise (and accurate) summary of how the
algorithms work.  In non-overload situations, the different versions are
"roughly" equivalent to deadline schedulers in that you can prove
similar things about their optimality.  During overloads, they use a
notion of "value density" to determine what threads to schedule.  It is
definitely a hard scheduling problem, but we have been pleased with our
results to date.
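Under the simplest assumptions (uniprocessor, independent threads), the
"value density" notion described above might look like this; the struct
layout is invented, not Alpha's actual interface:

```c
/* During an overload, run the thread with the highest value per unit of
   remaining computation, shedding the rest.  This is only the core
   selection rule, not a full scheduler. */
#include <stddef.h>

typedef struct {
    const char *name;
    double value;       /* value contributed if it completes in time   */
    double remaining;   /* remaining computation, in some time unit    */
} Thread;

/* Index of the thread with the highest value density. */
size_t pick_by_value_density(const Thread *ts, size_t n) {
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (ts[i].value / ts[i].remaining >
            ts[best].value / ts[best].remaining)
            best = i;
    return best;
}
```

A thread worth 50 units with 2 units of work left (density 25) beats one
worth 100 with 10 left (density 10), which matches the intuition of
shedding work that yields the least value per cycle spent.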

> Not everything is a nail. Some things are. Those things are called
> `nails'.

I would really like to see a set of programming primitives that deal
consistently with hard deadline tasks and with other types of time
constraints.  Ideally I would envision a method that would work equally
well on a low-level control system that was best-designed using
hard-deadline periodics and on a higher-level system that was
predominately aperiodic and stochastic (or on a system that handled both
types of activities).

Maybe an appropriate way to get started with this (as opposed to
discussing what we are going to discuss) is for people to post some
relevant references.  I promise to try coming up with a list of
available Alpha references (although it will likely take a few days). 
I've seen other relevant work in some of the RTSS proceedings.  It would
also be good if we could get some feedback from people in industry that
have developed in-house techniques.

-David
 ---
 David P. Maynard (dpm@cs.cmu.edu)
 Alpha OS Research Group
 Carnegie Mellon University & Concurrent Computer Corp.
 ---
 Any opinions expressed are mine.  I haven't asked CMU or
 Concurrent what they think.
 ---

johnb@srchtec.UUCP (John Baldwin) (09/28/90)

In article <1889@tuvie> alex@vmars.tuwien.ac.at (Alexander Vrchoticky) writes:
>
>Hmmm ... `graceful degradation' is a matter that is not necessarily
>directly related to the criticality of the system.

Agreed!

>The point is that there are systems in which missed deadlines are permissible
>and systems in which they are not; and there are systems which have a safe
>exit and systems which don't. These two concepts are largely orthogonal
>and should be treated as such.

Actually, I was thinking of a third possibility.  BTW, your examples
are *very* good at helping to bring all this to light...

The possibility I was thinking of is the case where *some* of your
task set have a safe exit state, and/or some of them can safely miss
their deadlines under certain circumstances.  It is nice to be able
to plan beforehand and say: under the following conditions, we know
that tasks A, B, and R will be preempted indefinitely, while tasks
X and Y are still guaranteed to make their deadlines, which they MUST
in every circumstance.


-- 
John T. Baldwin                     | "Pereant qui ante nos nostra dixerunt!"
Search Technology, Inc.             | (A plague on those who said our good
johnb%srchtec.uucp@mathcs.emory.edu |  things before we did!)

alex@vmars.tuwien.ac.at (Alexander Vrchoticky) (09/28/90)

johnb@srchtec.UUCP (John Baldwin) writes:

>The possibility I was thinking of is the case where *some* of your
>task set have a safe exit state, and/or some of them can safely miss
>their deadlines under certain circumstances.  It is nice to be able
>to plan beforehand and say: under the following conditions, we know
>that tasks A, B, and R will be preempted indefinitely, while tasks
>X and Y are still guaranteed to make their deadlines, which they MUST
>in every circumstance.

One way to do this is by employing a very static approach where the possible
scenarios are pre-planned and where we can guarantee beforehand
that the system will work under the anticipated conditions. 
This is the approach I'd take for such systems.
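Such a static approach might amount to nothing more than a fixed table:
for each pre-planned operating mode, which tasks run.  The task names
follow the example above (X and Y guaranteed in every mode; A, B, R shed
when degraded); the table layout itself is invented:

```c
/* Static mode table: verified off-line, consulted (not computed) at
   run-time, so there is no on-line scheduling problem to solve. */
#include <string.h>

enum Mode { NORMAL = 0, DEGRADED = 1 };

typedef struct {
    const char *task;
    int runs_in[2];    /* indexed by enum Mode */
} ModeEntry;

const ModeEntry schedule_table[] = {
    /* task  NORMAL DEGRADED */
    { "X", { 1,     1 } },   /* hard deadline: guaranteed in every mode */
    { "Y", { 1,     1 } },
    { "A", { 1,     0 } },   /* preempted indefinitely when degraded    */
    { "B", { 1,     0 } },
    { "R", { 1,     0 } },
};

/* 1 if the named task is scheduled in the given mode, else 0. */
int task_enabled(enum Mode m, const char *task) {
    size_t n = sizeof schedule_table / sizeof schedule_table[0];
    for (size_t i = 0; i < n; i++)
        if (strcmp(schedule_table[i].task, task) == 0)
            return schedule_table[i].runs_in[m];
    return 0;
}
```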

I can imagine that value functions could be used for this purpose as well.
However the description by David Maynard (dpm@cs.cmu.edu) suggests 
that the on-line scheduling problem (maximizing the overall value) 
is similar to the bin-packing problem and 
therefore NP-hard (correct me if I'm mistaken).  
Considering that it has to be executed in parallel with the `real' work
I guess that sub-optimal schedules are the best one can hope for. 

So in order to be able to employ those in safety-critical systems
one would have to have a-priori quantitative measures for the reliability of
such systems ... i.e. statements like `under the stated circumstances the
probability that this task misses its deadline is less than 1e-6'.
Can such statements be analytically derived from the description of 
the task system, processor allocation, and value functions? 
If so, how? 

It is obvious that when the requirements are not exactly known 
one simply cannot build a system that is guaranteed to meet all deadlines;
and I agree that research into how to build systems that still provide 
reasonable service under unpredicted circumstances is important. 
But I believe that the techniques necessary for such `best-effort' systems 
are not necessarily the same as those for hard real-time systems.

Of course I go along with David Maynard in hoping for a unified
theory and method for building both types of systems. 
But this is going to be a very difficult goal, to say the least.

-alex

--
Alexander Vrchoticky  Technical University Vienna, Dept. for Real-Time Systems
Voice:  +43/222/58801-8168   Fax: +43/222/569149
e-mail: alex@vmars.tuwien.ac.at or vmars!alex@relay.eu.net  (Don't use 'r'!)

kim@kim.uunet.uu.net (Kim Letkeman) (09/28/90)

In article <1424@umriscc.isc.umr.edu> rajuk@umree.isc.umr.edu (Raju Khubchandani) writes:
|  I have read more than I have hands-on experience with programming languages,
|  but would anybody disagree that the fastest way to execute is using the assembly
|  level language of any processor? In other words, what I am saying is that for a
|  given processor and clock/bus speed the fastest way to execute is to do assembly
|  level programming. I know this is a lot of programming effort compared to using
|  a high level language, but the objective is to extract maximum juice from your
|  hardware.

Read "Software Tools" by Kernighan & Plauger. Their book makes very
clear what experience teaches us all. Specific languages have nowhere
near the impact that algorithms have on the overall performance of a
program or system.

Since assembly language tends to obscure the algorithm (with as much
as ten lines of code for every one in other languages), this would
seem to imply that the programmer in assembly runs the risk of using
an inefficient algorithm.

Anyway .... with horsepower so cheap these days, and manpower so
expensive, high level languages pay better in the long run.

Kim
--
Kim Letkeman    mitel!spock!kim@uunet.uu.net

David.Maynard@CS.CMU.EDU (09/29/90)

> >to plan beforehand and say: under the following conditions, we know
> >that tasks A, B, and R will be preempted indefinitely, while tasks
> >X and Y are still guaranteed to make their deadlines, which they MUST
> >in every circumstance.

> I can imagine that value functions could be used for this purpose as
> well.

To date, our research has focused on dynamic systems that are
characteristic of supervisory real-time control.  We feel that these
types of guarantees are almost always inappropriate for that class of
system.  Some types of guarantees can simplify failure recovery.  The
main problem is whether you can REALLY make the guarantees. 
Statically-designed systems that have been well-planned and that operate
in a sufficiently controlled environment are good examples of when the
approach can work well.   However, as applications become more complex
and demanding it is becoming increasingly difficult to build these types
of systems.  In more dynamic systems the "guarantees" that are made
often don't really buy you anything (and may be counterproductive).

> So in order to be able to employ those in safety-critical systems
> one would have to have a-priori quantitative measures for the
> reliability of
> such systems .. i.e. statements like `under the stated circumstances the
> probability that this task misses its deadline is less than 1e-6'....

This would be one approach.  We haven't (yet) considered it.

> But I believe that the techniques necessary for such `best-effort'
> systems 
> are not necessarily the same as those for hard real-time systems.

I've only recently started thinking about how we could address low-end
control systems in the same environment as the higher-level functions. 
That is one reason I'm particularly interested in learning what
specification primitives people think are appropriate.  I don't
(necessarily) expect the same techniques will work for both domains.  I
do want the approaches to be compatible.

> Of course I go along with David Maynard in hoping for a unified
> theory and method for building both types of systems. 
> But this is going to be a very difficult goal, to say the least.

I agree.

 ---
 David P. Maynard (dpm@cs.cmu.edu)
 Alpha OS Research Group
 Carnegie Mellon University & Concurrent Computer Corp.
 ---
 Any opinions expressed are mine.  I haven't asked CMU
 or Concurrent what they think.
 ---

mcmahan@netcom.UUCP (Dave Mc Mahan) (09/29/90)

 In a previous article, brendan@batserver.cs.uq.oz.au writes:
>johnb@srchtec.UUCP (John T. Baldwin) writes:
>>I would rather not
>>resort to assembler, but given certain task/hardware pairs, you only have
>>a certain time rate of execution ("performance"), and it may be that the
>>only way to meet the RT constraints of your given task on the specified
>>hardware is to use assembler.   :-{
>
>	Probably the compiler writer knows more about extracting "juice"
>	out of the processor than most programmers, but I will ignore
>	that fact.

I have found that the guy may know more about getting maximum juice, but it
doesn't always appear in the produced code.  The compiler I recently purchased
was touted as being faster than all the competition and better than I could
code myself in assembly language.  Yet, I find that it does really dumb things
when doing very simple operations.  It is highly biased toward using registers
(a selling point) but will use them even when a straight coding of :
   move MemA,MemB

would be much faster.  In general, I have found that compilers are very good
at doing some things, but seem to be brain dead at doing other (seemingly)
simple things.  The overall benefits from using one far outweigh the downside,
however.


>	Ian Hayes has just walked in and suggested that a language in which
>	you could make timing demands on the compiled code would be
>	useful. For example
>		a := exp | /\ t < 4 microsecs;
>	Inability to comply would be flagged as a "syntax" error, along
>	the lines of type mismatches etc.

I like this idea, but I'm not sure how easy it could be to implement.  How can
the compiler know about caches, memory wait states, CPU clock cycles, etc?
It would be an excellent way to produce verifiable code, but it would have
to be able to determine such analysis from data that is not available at
compilation time.  Would it be better to create a tool that provides
a formula for the execution time of a routine?  This might appear as:

foobar.c:  a*(input1_count)^2 - b*(input2_count)^3 + c

where a, b, and c are defined in CPU cycles and input1_count is the maximum
size of input1 or the biggest count of elements in input1, or whatever.
This would still provide only a guess in a CPU that includes cache, pipelines,
etc.
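What the proposed tool might emit for foobar.c could be as simple as a
closed-form function.  The shape follows the posted formula
a*(input1_count)^2 - b*(input2_count)^3 + c; the coefficient values are
invented stand-ins for whatever the analyzer would measure:

```c
/* Worst-case cycle count for foobar.c, parameterized by the maximum
   input sizes.  Coefficients a, b, c are in CPU cycles (assumed values,
   for illustration only); on a machine with caches and pipelines this
   remains an estimate, as noted above. */
long foobar_worst_cycles(long input1_count, long input2_count) {
    const long a = 7, b = 2, c = 150;
    return a * input1_count * input1_count
         - b * input2_count * input2_count * input2_count
         + c;
}
```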


>Brendan Mahony                   | brendan@batserver.cs.uq.oz       

  -dave

mcculley@alien.enet.dec.com (10/04/90)

In article <1851@tuvie>, alex@vmars.tuwien.ac.at (Alexander Vrchoticky) writes...
>rajuk@umree.isc.umr.edu (Raju Khubchandani) writes:
>>I have read more than I have hands-on experience with programming languages,
>>but would anybody disagree that the fastest way to execute is 
>>using the assembly level language of any processor? 
> 
>How could anyone ? :-)

It's certainly the fastest way to execute instructions; the problem statement
is insufficiently defined to determine whether or not it's the fastest way to
execute code.

>>level programming. I know this is a lot of programming effort compared to using
>>a high level language, but the objective is to extract maximum juice from your
>>hardware.
> 
>No.
>The objective is to produce software that is reliable, maintainable,
>readable and meeting its timing constraints. Note that
>`extracting maximum juice' is not a timing constraint. 

No.  The objective is to meet requirements.  Requirements define the priority
assigned to all those other objectives.

If you have hardware that (for whatever reason) can't be changed and it's
running out of grunt, you may need to extract maximum juice at the expense of
maintainability in order to meet the timing constraints.  That's all part of
the real world requirements definition (something all too often missing in the
academic abstractions studied as science).

As far as the base question goes, you just can't say.  Some other replies to
this thread stated some assumptions to the effect that the compiler writer
probably knew as much or more about getting maximum juice out of the
environment, but that's just as unfounded as the opposite assumption.  Truth
is, a good compiler will outperform a bad compiler, and a good assembly
programmer will outperform a bad assembly programmer.  Just how any given
compiler and BAL programmer stack up vis a vis one another depends on their
relative merits.  In theory, the best a good compiler should be able to do is
to match a good assembly programmer (after all, if he's good enough he'll just
write the same code as the compiler outputs, right?).  In the real world, I
doubt any compilers do that well, and I doubt many programmers do either.

I work in assembler all the time, in an environment I've used for
over ten years now.  I'll match my skills against any compiler around, in that
sandbox, for the job I do.  I won't argue the issue for any other conditions
though.

My own belief is that assembler and compiled languages are different tools;
as with all tools, each is better suited to some jobs than to others.

In other words, don't think nails and screws are meant to do the same job.

(HINT: if you think nails and screws are completely interchangeable, consider
tension loading.)

Bruce McCulley		; heck, I don't know the mail path, look at the header
Central Engineering
Digital Equipment Corp.
Nashua, NH

david@marvin.jpl.oz (David Magnay) (10/05/90)

The Japanese TRON project has a spec called I-TRON, which is for a (rather
basic) real time operating system. Rather than standardise the primitives,
they are using a higher order of abstraction.

At the end of the day, the goals need clarification before setting off to
solve the problem. Will a set of standardised primitives help anyone? If the
goal is to maximise portability, there are several levels of abstraction that
can be ( and have been) chosen. OS level ( a la UNIX/posix,..), the
library call level ( a la C language), the BIOS level (MS-DOS) or the hardware
level (PC's). I suppose other levels could be postulated. Also to be
considered is who will administer the standards, and what are their
motivations in all of this. The cost of supporting a standard, even a PD one,
will affect people's attitude toward it, and the level of support you get.

Perhaps all we need are "generic interfaces", with the implementation details
up to each vendor.

The goals you set out with will drive the choice. Let's clarify these a little
further.

Remember, most projects go awry in the first 5 minutes. (It just takes a
little longer to work out.) Let's spend 10 minutes setting the direction.

..........................................................
David Magnay				mail	david@marvin.jpl.oz.au
Boral Elevators				was: Johns Perry Lifts
45 Wangara Road, Cheltenham 3192	phone	(03) 584-3311
Victoria, Australia			O/seas	+61 3 584 3311

mcculley@alien.enet.dec.com (10/06/90)

In article <743@marvin.jpl.oz>, david@marvin.jpl.oz (David Magnay) writes...
> 
>Perhaps all we need are "generic interfaces", with the implementation details
>up to each vendor.
> 

Isn't that a description of POSIX?

>Remember, most projects go awry in the first 5 minutes. ( It just takes a
>little longer to work out) Lets spend 10 minutes setting the direction.

I'll bet the 5-minute rule is still obeyed! :-)

- Bruce McCulley
  Digital Equipment Corp.
  Corporate Software Engineering