[comp.lang.ada] The great ada debate

beser@tron.UUCP (Eric Beser) (02/09/90)

The following is a reprint of an article in the Westinghouse Aerospace
Software Engineering Newsletter.  The  presenters  in  this debate are all
from the Westinghouse Electronic Systems Group, and all have significant Ada
experience. Comments and replies to the net are welcome.
                             The Great Debate

                                by Bob Pula

    "The Great Debate" can be a  dangerous way to advertise an exchange of
views which hasn't happened yet.   But  perhaps it shows what stalwarts we
have  in  the  Aerospace  Software  Engineering  Department.   Unflinching
courage.  Chance takers.
    The debate thus  advertised  was  held  on  Friday,  November 10, 1989
before  a  standing-room-only-overflow-into-the-hallway  audience  in  and
around the MS 432  Conference  Room.    The  "motion before the house" (as
Oxford   Union   fan    Eli    Solomon    chose    to    call   it)   was,
"The Ada  Tasking  Model  is  Adequate  for  Real-Time  Avionics Systems."
Speaking "for the proposition" were,  in  order of speaking, Art Moorshead
and Doug Ferguson; "for the opposition," Jeff Gustin and Eric Beser.
    Eli Solomon gave this introduction:  "Good  afternoon.  My name is Eli
Solomon.  On behalf of the Ada  Working Group, I would like to welcome you
and thank you for attending this debate.  We hope to provide you with some
insight into the issues associated with  the Ada Tasking Model and further
hope that you will  actively  participate  in the discussion following the
debate.  Today, I will be your chairperson in conducting the debate."
    The speakers alternated (pro, op,  pro,  op), taking five minutes each
to stake out their positions.   After  this,  each speaker had a minute to
summarize his position and make closing  comments.  This was followed by a
vote (results below)  and general discussion, questions and answers, etc.
    The first speaker was Art Moorshead.  This is what he said:
    "On the  kinds  of  real-time  systems  we  build,  we  must deal with
asynchronous events in the outside world and perform concurrent processing
on multiple active threads of  control  to handle these events and process
the various streams of input data.
    "We will argue that Ada tasking  is adequate, and even beneficial, for
programming the kind of  systems  we  build  around  here,  and that it is
possible to write an Operational Flight Program (OFP) entirely in portable
Ada, using the Ada Tasking Model  defined in the LRM, without resorting to
a customized run-time system, such as VRTX.
    "Lest we bite off  more  than  we  can  chew,  we  concede this is not
possible on  1750A's  (even  VHSIC  [Very  High  Speed Integrated Circuit]
1750A's).  We are not  saying  that  Ada  tasking  is perfect and needs no
changes.  There are known  problems  with Ada tasking that everyone agrees
need to be fixed in Ada  9X  [an  Ada revision due sometime in the 1990s],
the need for a DELAY_UNTIL statement, for example.
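
    [Ed. note: the DELAY_UNTIL problem can be seen in the following minimal
sketch of a periodic task written in today's Ada; the 25 millisecond period
and the names are illustrative, not taken from the debate.]

      with CALENDAR;
      procedure Periodic_Demo is
         use CALENDAR;
         Period : constant DURATION := 0.025;   -- hypothetical 25 ms frame
         task Periodic;
         task body Periodic is
            Next : TIME := CLOCK + Period;
         begin
            loop                     -- runs forever, as an OFP frame loop would
               null;                 -- frame work would go here
               delay Next - CLOCK;   -- delay is only a lower bound: if the
                                     -- task is pre-empted between evaluating
                                     -- Next - CLOCK and suspending, it wakes
                                     -- late and the error accumulates frame
                                     -- by frame; a DELAY_UNTIL Next statement
                                     -- would not drift
               Next := Next + Period;
            end loop;
         end Periodic;
      begin
         null;
      end Periodic_Demo;
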
    "We are also  not  defending  Ada  tasking  as  being  able to support
distributed systems in the sense of  one Ada program being spread out onto
multiple processors.  There are some distributed processing issues, such as
relocating tasks if a processor fails, which are simply not addressed in the
Ada Tasking Model.
    "We are only defending uniprocessor scheduling here.
    "We all know how immature Ada compilers are today.  I believe that the
embedded market for Ada is  big  enough and the optimization technology is
available, so the compilers will  mature.    I  don't believe there is any
inherent barrier to adequate Ada compilers.
    "Real-Time scheduling kernels must address the following needs:
      ~  Scheduling multiple tasks
      ~  Handling interrupts
      ~  Communication between tasks
      ~  Mutual exclusion for access to
         shared resources, and
      ~  Synchronization between tasks
    "Starting with Dijkstra's  1968  paper  on  semaphores, there has been
lots of effort trying to find  good primitives, models and abstractions to
support these concerns.  Semaphores are frequently used for protection and
signaling, but are hard to write correctly, to prove correct, and to
maintain.  For
example:
      ~  you can jump around the P and
         leave unprotected data
      ~  you can jump around the V and
         leave the data locked
      ~  and you can't wait for multiple
         semaphores or one of several
         semaphores to become free.
    "The Ada rendezvous concept combines synchronization and communication
in one model.  It offers  asymmetric  communication in that only one party
knows the name of the other.    This  offers the ability to create utility
servers that can be called by anybody.  It offers priorities, timeouts on
rendezvous, polling, and mapping  interrupts  to  task entries.  In short,
Ada tasking pretty much covers the waterfront in offering the capabilities
needed to write real-time embedded systems.
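
    [Ed. note: a minimal, illustrative sketch of the rendezvous features just
listed -- an asymmetric utility server, a timed entry call, and a polling
(conditional) entry call; the names and the 10 ms timeout are made up.]

      procedure Rendezvous_Demo is

         task Logger is                      -- utility server: callers name
            entry Put (C : in CHARACTER);    -- it; it never names its callers
         end Logger;

         task body Logger is
         begin
            loop
               select
                  accept Put (C : in CHARACTER) do
                     null;                   -- write C to the device here
                  end Put;
               or
                  terminate;                 -- quit quietly when callers end
               end select;
            end loop;
         end Logger;

      begin
         select                              -- timed entry call: rendezvous
            Logger.Put ('A');                -- or give up after 10 ms
         or
            delay 0.010;
         end select;

         select                              -- conditional entry call: poll,
            Logger.Put ('B');                -- never wait
         else
            null;
         end select;
      end Rendezvous_Demo;
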
    "Because the Ada Tasking Model has high-level abstractions, it offers
higher programmer productivity and  program reliability.  Embedded systems
are intrinsically complex, and get  non-linearly more complex as they grow
in size.  We know we  are  giving  up  efficiency by using very high level
language constructs, but  the  complexity  of  the next generation systems
will simply be too great to manage  all  the details by hand.  Giving up a
little efficiency to gain  enough  improvement  in reliability to make the
system work is a worthwhile tradeoff.  Anyway, our goal is not to make the
most efficient system possible, but merely to  make it work -- it just has
to be fast enough to meet the system spec.
    "Another advantage of  Ada  tasking  is  that  it  will  run on a bare
machine. Embedded systems developers don't  need to write their own custom
run-time system executives for each new hardware architecture they want to
port to.
    "In summary, Ada tasks are:
      ~  portable -- more reusable
      ~  easier to understand -- you
         have less maintenance effort
      ~  and they're easier to code --
         you have higher programmer
         productivity
    "In last year's TRI-Ada Programming  Contest, I had the opportunity to
design and write an embedded system  entirely  in  portable Ada.  I had to
deal with interrupts, and periodic tasks, and terminal drivers.  It took a
little while to re-educate my  own thinking from traditional approaches to
using Ada tasking, but, in  retrospect,  Ada  tasking offers a lot.  There
can be a very direct mapping from data-flow diagrams to program structure.
As far as productivity is  concerned,  there  is  no way I could have ever
written a program of comparable  complexity  and functionality in the same
time period using any other language.
    "One of the biggest criticisms  of  Ada  tasking today is the priority
inversion hard deadline scheduling problem.   I attended John Goodenough's
and Lui Sha's tutorial  at  TRI-Ada  and  Lui  Sha  spoke.  He raised very
significant issues that I hope Ada 9X addresses.  However, our position in
this debate is that Ada tasking  as  currently defined is adequate for the
kinds of systems we  build  here,  and  I  don't see priority inversion in
actual practice to be such  an  insurmountable  barrier as to preclude the
use of Ada tasking.  In fact,  in Lui Sha's paper, he concludes by saying,
`Although the treatment of priorities by the current Ada Tasking Model can
and should be improved,  it  seems  that  the scheduling algorithms can be
used today within the  existing  Ada  rules  if  an appropriate coding and
design approach is  taken,  and  if  schedulers  are  written to take full
advantage of certain coding  styles  and  the  existing flexibility in the
scheduling rules.'"
    Next, Jeff Gustin presented the first  installment of the case for the
opposition.
    "Good afternoon.  I'm going to be arguing the con side of tasking; Ada
tasking, to be specific.  I thought I'd tell you [that] one of my earliest
assignments  here  at   Westinghouse   was  developing  the  multi-tasking
operating system  for  the  array  processor.    I  actually developed the
system,  which  was  called  "APEX,"  and  I  enjoyed  it.    It  was very
challenging and rewarding because that system  is now employed on a number
of systems, including the F-16, the B-1B, the CFF, and others.  The whole
concept is being updated now for the VHSIC Array Processor System and it's
also being re-coded in Ada.
    "Now, this is not Ada tasking, this is a package, a scheduling package
that you can call the "create  task"  and  "schedule task" and get them to
rendezvous with each other, to give  you a certain amount of communication
between them.  I've  always  been  very  enthusiastic about tasking, and I
agree that's the way to go in  the future for executives.  The question is
whether or not Ada fits into real-time systems to provide us with the
tasking that we want.
    " When we recognized the need to supply tasking within its language, I
was very enthusiastic about that and  I  wanted to find out more about it,
so when  I  started  working  on  CFF,  which  is  an  Ada  program, and a
production program -- this isn't a  paper  program  -- we tried to use Ada
tasking at the very beginning.   Now, they took a conservative approach at
the beginning.  They  were  just  going  to  use Ada tasking constructs to
install interrupt handlers.  There's a special way that you can define or
assign an interrupt task entry to an interrupt, and that task essentially
becomes an interrupt handler.
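
    [Ed. note: the construct being described is an address clause on a task
entry; a minimal sketch follows, with a made-up interrupt address, and it
assumes an implementation where the address clause accepts an integer
literal.]

      task Channel_Handler is
         entry Data_Ready;
         for Data_Ready use at 16#0040#;   -- binds the entry to the interrupt
                                           -- at this (hypothetical) address
      end Channel_Handler;

      task body Channel_Handler is
      begin
         loop
            accept Data_Ready do
               null;                        -- service the interrupt here
            end Data_Ready;
         end loop;
      end Channel_Handler;
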
    "Then we saw how  many  interrupts  were  going  to  come in.  Being a
little wary of the compiler  we  were  using,  I  wanted to do some timing
estimates; how long it took  for  the  Ada run-time system to handle that.
It turned out that on the 1750A processor, it took a hundred sixteen (116)
microseconds and, as you can see [pointing to viewgraph], there were about
twenty to thirty interrupts per frame.   Overhead was about ten to fifteen
per cent just for interrupt  task  entries,  which  was too high.  We were
already pushing the limits with the  required  40 per cent reserve, and so
forth.  So Ada was too slow.
    "Now, at first I figured it was just a bad compiler implementation, so
I went through the task entry code to see what was going on.  Then I found
out that, yes indeed, it had to  check  for delays and it had to deal with
FIFO [first in, first out] queues  for  task entries and it was very, very
complex.  When I looked all around at Ada and saw all the things that
were required when you had selective waits and multiple entries in a wait,
and so forth, I understood why all  the  logic  was in there.  I could not
figure out for myself how that could be made any more efficient.
    "We abandoned that approach.  We took  a  second stab at it.  We still
tried to preserve some tasking.    What  we  did here was a foreground and
background task.  We had some  assembly level interrupt handlers.  Most of
the interrupt handling that had to go on was very simple, so we just wrote
some assembly procedures to do that.   We wanted the assembly to trap most
of the interrupts and for the  few  interrupts that really had to kick the
foreground into going, we were  going  to  make  calls to the Ada run-time
system to make that happen.    Then  we  would  only deal with two context
switches per frame, and even  at  a  hundred microseconds that wouldn't be
too much overhead; we could handle that.
    "The lesson we learned from that  was  that it's not nice to fool with
Ada run-time systems.  You've either  got  to  go  all the way with an Ada
run-time system and  the  Ada  Tasking  Model,  or  you've  got to go with
something else.
    "So as a result, we still reject it.  We abandoned that approach, too.
    "It seemed to us that  the  answer  was  pretty clear; that you have a
tradeoff.   You  either  go  with  Ada  tasking  ...  or  you  add another
processor.  The point  of  this  is  that  inefficiencies in your software
translate into system cost dollars.  Recurring cost dollars in a system --
that's one thing project managers do  not like to hear on production jobs.
You can say you're going to  take another month to develop something; they
don't want to hear it.
    "So, for now, our current  approach  is  that  tasking is great if you
have simple  primitives.    While  the  [theory  around  us]  with the Ada
rendezvous and the Ada  Tasking  Model  is  that  the rendezvous is a very
complex  mechanism  that  is  inherently,  inefficient,  and  if  you want
something simpler like a semaphore,  you  have  to add more tasks and more
rendezvous,  which  is  counter   to   the  typical  software  programming
standpoint of providing simple primitives from which more complex ones can
be built.
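
    [Ed. note: the extra-task objection refers to the standard idiom for
getting a binary semaphore out of Ada tasking; the sketch below is
illustrative only.]

      task Semaphore is
         entry Seize;            -- "P" ...
         entry Release;          -- ... and "V": each one costs a full
      end Semaphore;             -- rendezvous with this extra task

      task body Semaphore is
      begin
         loop
            accept Seize;
            accept Release;
         end loop;
      end Semaphore;
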
    "My solution is to  stick  to  a  simple tasking package callable from
Ada.  It's  small,  it's  efficient,  and  it  can  be tailored to special
requirements, which we've had to do  with  APEX  in the past.  I feel that
Ada tasking can't be tailored to meet special scheduling requirements
such as Art suggested.  I guess that's what Eric's going to address."

    Eli Solomon:  "I  now  call  upon  Mr.  Doug  Ferguson  to  second the
proposition."

    Thus spake Doug Ferguson:

    "I'm glad that I'm following Mr.  Gustin.   I'd like to talk about how
Ada tasking is efficient enough on embedded computers to meet the
requirements.  Let's talk about what is a requirement for performance and
what are the  potential  architectures  we're  dealing  with in developing
software in the future for  real-time  embedded  computer systems.  How do
you measure the  performance  of  Ada  tasking,  and what alternatives are
available within the Ada language for doing scheduling?
    "So what's a `requirement'?  Well, you  never get a customer to say he
has to be able to do  an  Ada  task rendezvous in `um' microseconds.  It's
always some implicit or derived requirement.  Often, this problem with Ada
tasking has been exemplified by  writing  programs  that have too many Ada
tasks on VAX computers, 1 MIPS  [Million Instructions Per Second], 2 MIPS,
3 MIPS, slow computers, such as  a  VAX, or slow 1750A computers, or 68020
or 030 computers.  That's ancient  technology.   If you put too many tasks
on those processors, then you're not going to have sufficient compute time
to do your job.
    "The hardest real-time requirement I've  seen, the most explicit real-
time requirement I've seen, was  from  Triple  AM  [AAAM].  It said, `Do a
context  switch  which  includes  receiving  an  interrupt,  going  to the
interrupt handler and calling  the  first  executable statement within the
application program to handle the new data within 50 microseconds.'  O.K.?
That was a requirement laid down by General Dynamics.
    "Well,  what  are  the  estimated  performances  of  some  of  the new
architectures?  You look at  our  old architectures: the Fairchild 9450 is
about 1 MIPS; we used to build  1750A's  that are in the 600 KIP [Thousand
Instructions  Per  Second]  range;  the   newer  VHSIC  [Very  High  Speed
Integrated Circuit] 1750A's are 2 MIPS; the  PACE chip being used in a lot
of programs is right around 2 MIPS, maybe a little bit less; 68020 about 2
MIPS; 68030, depending upon what kind of cache, what kind of memory static
or dynamic, you have maybe 3 or 4 MIPS.  But the processors we're dealing
with now, the JIAWG-compliant 32-bit architectures to be used for FSD, go a
step beyond.
    "We're looking at the MIPS architecture that you see here [viewgraph],
the R-4000 second generation MIPS processor available in 1991 -- the first
generation's  available  in  April   of   next  year  (1991's  the  second
generation) -- we're looking at  [deleted:a large number, but proprietary]
MIPS of processing power on chip.  O.K.?
    "We're looking at the INTEL P-12, currently the 80960 processor used on
one of the ATF programs.  It's  about  6 MIPS.  They're claiming a 50 MIPS
performance at 28 megahertz, which would be the military temperature range
chip set.
    "How do you measure this performance?    Well, you use one of the PIWG
tests.  Jon Squire, who  used  to  be  the  chair of the PIWG (Performance
Issues Working  Group)  Committee,  has  created  a  number  of tests that
measure [these] things.  This is  one test [viewgraph], the T-000001 Test.
It measures the time for a single Ada task rendezvous -- there is the task
spec and the task body, plus control code to set up the start of the timing
within the timing group.  This is a two-way rendezvous.  It actually
rendezvouses with and returns from Task T1's Entry E.  O.K., so it's a
two-way task
rendezvous.  It's a good measure,  and  it's  the closest thing we have in
the way of a bench-mark to  compare  against the 50 microseconds time from
the time that you receive an interrupt to the time you handle it.
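
    [Ed. note: the sketch below is not the PIWG source, only an illustration
of the kind of measurement described; the real T-000001 test also subtracts
the overhead of its control loop, which this crude version does not.]

      with CALENDAR, TEXT_IO;
      procedure Rendezvous_Timing is
         use CALENDAR;
         Iterations  : constant := 10_000;
         Start, Stop : TIME;
         Micros      : INTEGER;
         task T1 is
            entry E1;
         end T1;
         task body T1 is
         begin
            loop
               select
                  accept E1;        -- empty rendezvous: call in, return out
               or
                  terminate;
               end select;
            end loop;
         end T1;
      begin
         Start := CLOCK;
         for I in 1 .. Iterations loop
            T1.E1;                  -- one two-way rendezvous per iteration
         end loop;
         Stop := CLOCK;
         Micros := INTEGER (FLOAT (Stop - Start) * 1.0E6 / FLOAT (Iterations));
         TEXT_IO.PUT_LINE ("approx." & INTEGER'IMAGE (Micros)
                           & " microseconds per rendezvous");
      end Rendezvous_Timing;
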
    "What's the performance of  Ada  tasking on the current architectures?
We're looking at the Fairchild  chip  taking  961 microseconds -- almost a
millisecond.  Totally inefficient.  We're looking at going to later
generations of the Fairchild with the InterACT G-code compiler improving
it, halving the time.  And the latest instance, under beta test, of the XD
Ada compiler -- XD, the new Digital Equipment Corporation compiler for the
1750As -- does the same thing on the PACE chip in 76 microseconds.  That
gets the 1750A down considerably, but still inadequate to hit a 50
microsecond requirement.
    "On the 68020  we've  progressed  from  269  microseconds  down to the
latest  XD  Ada  test  run  by  Bob  Wittman,  117  microseconds.    Still
inadequate.
    "You can look  at  compilers  that  optimize  particular  things.  The
Verdix takes only 27 microseconds for a passive Ada task, but on the
R-2000 MIPS it is 76 microseconds.
    "Now, if you look  at  the  estimates of future generation processors,
we're  looking  at  the  R-4000  Second  Generation  processor  taking  13
microseconds for  Test  T1.    That's  a  round-trip  rendezvous.   If you
allocated 1 per cent of  your  processing  time for Ada tasking, you could
still do 7690 Ada task rendezvous per  second with only 1 per cent of your
CPU time allocated for the overhead of rendezvous.
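
    [Ed. note: as a check on the arithmetic, 7690 rendezvous per second at 13
microseconds each come to roughly 100,000 microseconds, about 10 per cent of
each second; a 1 per cent budget at 13 microseconds each would allow roughly
769 rendezvous per second.]
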
    "If you look at the claims  by  JIAWG Common Avionics Processor 32 Bit
Working Group  I  attended,  where  we  selected  the  MIPS  and  the P-12
architectures for all future  avionics  systems,  then  the R-4000 will be
down to 3.3 and the P-12 down  to 12 microsecond context switch times.  In
either case, once you go to one of the JIAWG FSD 32-bit architectures, the
days of not having enough CPU  time  for using Ada tasking on your machine
are over."

    Eli Solomon: "Thank you,  Mr.  Ferguson.    We  now call upon Mr. Eric
Beser to second the opposition."

    Eric Beser:  "I'm batting clean up.
    [Laughter]
    "What we are  going  to  talk  about  today  is tasking appropriate to
today's applications.  My distinguished colleagues have pointed out what's
happening in the future  with  the  newer  processors and what will happen
with the Ada 9X.
    "Real-time systems frequently  contain  jobs  with  hard deadlines for
their execution.  Failure to meet a deadline is tantamount to reducing the
job's execution and perhaps jeopardizing  the  system's mission.  What Ada
allows us to do is  to  assign  priorities  to tasks and the Ada scheduler
that we use  must  be  a  pre-emptive  scheduler.    It must give absolute
deference to higher priority  tasks  over  lower  priority tasks.  But the
problem is that the language  itself  leaves unspecified the mechanism for
doing that.
    "If a task priority  is  static,  and priorities generally are because
they're defined  as  integer  values,  synchronized  communication between
tasks is achieved through a rendezvous.  But what can occur is that a low
priority task which is in rendezvous will hold up a higher priority task,
because the current rules of the language state that the rendezvous must be
completed; therefore, a low priority task can effectively invert its
priority.  This is what Lui Sha has noted in his priority inversion
problem, where low priority jobs will delay a higher priority task.
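
    [Ed. note: an illustrative Ada sketch of the inversion being described;
the names and priority values are made up, and the priority range itself is
implementation defined.]

      procedure Inversion_Demo is

         task Server is              -- no pragma PRIORITY of its own
            entry Request;
         end Server;

         task High_Rate is
            pragma PRIORITY (10);    -- hard-deadline work
         end High_Rate;

         task Medium is
            pragma PRIORITY (5);     -- unrelated compute-bound work
         end Medium;

         task Low is
            pragma PRIORITY (1);
         end Low;

         task body Server is
         begin
            loop
               select
                  accept Request do
                     null;           -- while this rendezvous runs on Low's
                                     -- behalf, Medium can pre-empt it, so
                                     -- High_Rate, queued on Request, waits
                                     -- behind both: priority inversion
                  end Request;
               or
                  terminate;
               end select;
            end loop;
         end Server;

         task body High_Rate is
         begin
            Server.Request;
         end High_Rate;

         task body Medium is
         begin
            null;                    -- long computation here
         end Medium;

         task body Low is
         begin
            Server.Request;
         end Low;

      begin
         null;
      end Inversion_Demo;
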
    "The Ada scheduling rules may  be consistent with heuristics that have
been studied over a period of time; nevertheless, my premise is that Ada
scheduling rules are inappropriate for real-time systems in the following
ways.
    "First, the algorithms  for  efficient  real-time scheduling cannot be
implemented with  language-specified  constraints  on  the definitions for
task priority and task scheduling.
    "Secondly, the FIFO queue, which  is  the way that Ada task implements
its entry call,  and  the  arbitrary  selection  discipline  which is used
during inter-task communications are  inappropriate algorithms for control
of real-time jobs.
    "As the  work-arounds  my  distinguished  colleague  pointed  out have
shown, they will cause the actual  scheduling algorithm to be used ad hoc.
[Such procedures] are frowned upon in production systems.
    "The rate monotonic algorithm was  defined from the standard monotonic
algorithm as a  fixed  priority  hard  deadline  scheduling algorithm.  It
handles  both  stochastic  execution  times  and  transient  overloads and
achieves both a high average  case  utilization,  over  88 per cent, and a
high worst-case utilization,  which  is  70  per  cent.   However, it does
require that certain high priority  tasks  execute within a certain amount
of time and that  other  lower  priority  jobs  must  be given the time to
execute within the time  frame.    It's  possible  to  work around the Ada
definitions but, again, those work-arounds are ad hoc; they're not defined
within the language.    What  has  to  happen  is  that the implementation
definition of task priority needs to be relaxed.  The work-around for that
is to assign every task the same priority.
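
    [Ed. note: the worst-case figure quoted is the Liu and Layland bound for
rate monotonic scheduling of n independent periodic tasks,

      U(n) = n * (2**(1/n) - 1),  e.g. U(1) = 1.00, U(2) = 0.83, and
      U(n) approaches ln 2 = 0.69 as n grows;

the 88 per cent figure is the average-case breakdown utilization reported by
Lehoczky, Sha and Ding.]
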
    "The scheduler of  real-time  application-dependent information, which
can be supplied  either  through  implementation-specific pragmas or other
mechanisms which are distinct  from  Ada but, nevertheless, generate input
to the Ada translator.  It was recommended that this practice not be used,
but to change the entry  queue  from being FIFO-ordered to being priority-
ordered.
    "Real-time scheduling requires a coordinated scheduling policy.
Scheduling should be manifest, not buried deep in the details of the
application.  The scheduler must be able to handle a wide variety of jobs,
both periodic and aperiodic.  And the scheduler must be pre-emptive.
    "Now, what  happens  is  what  this  chart  [viewgraph]  is showing, a
comparison  of  scheduling  algorithms.     This  is  the  rate  monotonic
algorithm, which is the most effective  algorithm that is used in embedded
systems; however, it is  inappropriate,  given the Ada tasking constructs,
to implement this algorithm.  We must therefore rely on FIFO-ordered
queues and arbitrary selection.  What we're showing here is that the
probability of missed deadlines increases dramatically with the type of
algorithm that we can use."

    Eli Solomon: "Thank you.

    "We now have a minute each  for  speakers,  in the same order in which
they presented their cases, for closing comments.  Mr. Moorshead?"

    Art Moorshead:  "The  theory  of  rate  monotonic scheduling, which my
distinguished colleague has just  described, offers an analytical approach
to computing the minimum processor loading  to meet hard deadlines.  It is
desirable, but it is not essential.  In the past, on systems built around
here without Ada tasking, we never used rate monotonic scheduling as a
requirement.
    "The proposition that we  are  arguing  today  is that Ada tasking, as
currently defined, is adequate  for  building  systems.  We're not arguing
that it's optimal.  I, too, would  like  to see this type of scheduling in
Ada 9X; however, I'm arguing that we can build avionics systems using the
Ada tasking that we have today.  It's adequate to build real-time avionic
systems.
    "As far as ad hoc  schedules  [go],  all  of the systems that we build
require engineering tuning to make them  work.   Ada won't take that away.
That's part of our job."

    Jeff Gustin: "Well, I guess my  main point from the beginning I'd like
to reiterate  is  that  the  Ada  rendezvous  is  a  very  complex tasking
mechanism from which simpler [mechanisms], such as signals and semaphores,
can only be built by introducing extra tasks, and I think that's crazy.
    "The second point I'd like to bring up, though we're considering a new
AP [array processor] test approach, we  have  a problem in the new version
of the AP, where the old approach of stopping the AP to go and look at the
memory doesn't work anymore.  We have to  keep the AP running.  We have to
do what's equivalent  to  a  Control  C  on  a  VAX,  return to some lower
operating system,  from  which  the  operating  system  can  down-load new
programs and have them execute and have those programs return data values.
As far as I can tell, the  Ada Tasking Model will not support that without
special primitives, such as `suspend task,' `delete task,' `commit task,'
and `execute task,' which Ada tasking does not supply.  So there's
something else to consider,  sort  of  a  special application that I don't
feel Ada tasking will help with.  Thank you."

    Doug Ferguson: "First, I'd like to  reiterate  that the goal is not to
make the most efficient system possible,  but merely to make a system that
meets the system spec.  I pointed out how you can use Ada tasking and that
it can meet time lines.  Two  available features came from a working group
that Eric Beser belonged to  for  a  number of years, called "Ada Run-Time
Environment Working Group."  They  created  a document called A Catalog of
Interface Features and Options.  Features from that document are available
from a number of Ada compiler companies, on the 1750A, on the MIPS, the
80960 and the P-12 processor.  They came up with well over eighteen
primitives, like Jeff was talking about, like `suspend' and `resume.'
That's what the
Distributed Operating System we  developed  at Westinghouse currently uses
to  do   hard,   real-time,   event-driven,   priority-based,  pre-emptive
scheduling -- using Ada.    Not  going  outside  the Ada language [but] by
calling `suspend' and `resume' in the run-time system.  They're two of the
CIFO [Catalog of Interface Features and Options] features.  For doing hard
real-time scheduling, you call `schedule,' one of the CIFO features.  It's
not part of the Ada Tasking  Model,  it's  a  result of the work that Eric
Beser did in the Ada  Run-Time  Environment  Working Group, coming up with
the suspend and resume now being  implemented  by a number of Ada compiler
companies.  Part of the Ada language."

    Eric Beser: "Again,  my  point  that  I'd  like  to make is that these
solutions are outside of the  language's  semantics.  The issue that we're
debating today is whether or not Ada  tasking as it stands today is useful
within the run-time system.
    "What Jeff has pointed  out  is  that  they  need the ability to shift
modes.  The only way to pre-empt a task within the Ada language definition
is by using the `abort' statement.  The abort statement is
inappropriate because  of  the  length  of  time  it  takes  to  do a task
abortion, and the fact that you don't want to terminate the task, you want
merely to suspend the task.  What the vendors have done is to work around
the language's problems rather than within the language, and to give
solutions that are outside of the language definition.
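
    [Ed. note: a minimal sketch of the contrast being drawn; the task and the
timings are made up, and the suspend/resume operations named in the comments
are vendor extensions, not standard Ada.]

      procedure Mode_Change_Demo is
         task type Filter_Task;
         task body Filter_Task is
         begin
            loop
               delay 0.1;          -- stand-in for background filtering work
            end loop;
         end Filter_Task;
         Filter : Filter_Task;
      begin
         delay 1.0;                -- run the old mode for a while
         abort Filter;             -- the only pre-emption standard Ada gives
                                   -- you: the task is terminated, not paused,
                                   -- and the abort itself can be slow
         -- what a mode change really wants is something like
         --    Suspend (Filter);  ...  Resume (Filter);
         -- which the language does not define; CIFO-style run-time calls
         -- supply it outside the language definition
      end Mode_Change_Demo;
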
    "It's true that the Catalog of Interface Features and Options provides
a plethora of functions and routines that give you a window into the
run-time; but it does so to provide ad hoc solutions to immediate problems.
The way that we're going to  solve  the  problem  in a permanent way is to
ensure that the language is changed to meet the needs of embedded systems.
As it stands today, the language definitions do not meet the needs.  Thank
you."

    Eli Solomon: "O.K.   At  this  stage  you've  heard the cases for both
sides.  Both sides have brought up some very interesting points.  I'd like
to solicit a vote from the floor.   All those in favor of the proposition,
please indicate that by raising your hands. ... 18 for, 7 against.  Motion
carried."

    Eli Solomon then  took  care  of  some  business  relating  to the Ada
Working Group and Doug Ferguson  called for increased participation in it.
"There's a lot of work in Ada  being  done  in ESG," he said.  "People are
fighting issues; problem is  that  the  compiler  has  some bugs.  The Ada
Working Group is supposed to  support  the  projects in the development of
software; take information  gained  from  one  project  and  share it with
another project.  I don't  think  the  Ada  Working Group is being used as
much as it could be."
    A word to the wise: it's  good  for  your  career  to be a member of a
working group.

    The formal part of the debate was followed by questions from the floor
and discussion among the presenters.

    Jon Squire:  "Mr. Beser."

    Eric Beser:  "Uh oh."  [Laughter]

    J.S.:  "If  you  were   at  Nelson  Weiderman's  presentation  (Nelson
Weiderman is the one who has implemented the rate monotonic algorithm) you
would
have found that he  was  able  to  implement  that  in pure Ada, not using
anything [else], run it on  about six different compilers and consistently
get about 95 per cent utilization  before  missing deadlines.  Why did you
say that the rate monotonic could not be implemented in conventional Ada?"

    E.B. "Because of the fact that you cannot use conventional Ada to pre-
empt tasks that are being run in order to get tasks of lower priority time
to execute.  The way that Weiderman did it was to maintain the same
priority throughout all of the tasks, executing tasks at a consistent
priority throughout the system.  In reality, the rate monotonic scheduling
algorithm uses tasks of differing  priorities.   The problem with the rate
monotonic algorithm and Ada is the fact that you have to pre-empt the task
or hold the task in place in  order  to give a task of lower priority time
to run.  The mechanisms for  doing  that do not exist within the semantics
of the language.  The problem with the rate monotonic algorithm in Ada is
the FIFO-ordered entry queue.  What is needed to support a rate monotonic
scheduling algorithm is a priority-ordered entry queue."

    Jeff Gustin: "I have a question for you.   Do you know of any plans in
the future to add to the language things that specify the stack size for a
task?  That's a problem we're  running  into.    The AP on the CFF program
might have six tasks, and trying to  implement that in Ada tasking -- that
would grow to twelve tasks to  handle all the semaphore simulation and all
that.  Certainly, if all a  task  is  doing is simulating a semaphore, you
probably don't need more than a few  dozen  words on the stack, and if you
can do a context switch ..."

    [Doug Ferguson here makes Gustin an interrupt handler]:

    D.F: "Fixed storage size doesn't do that for you? -- for a task?"

    J.G.: "What's that?"

    D.F.: "Fixed storage size.   Storage  size  is the attribute of an Ada
task which tells how much memory's allocated to that task ..."

    Art Moorshead: "Attribute of task type."

    D.F.: "Right.  See, you  declare  task  type, if you already know that
task type ... before that object exports ..."

    J.G.: "So that's supposed to be supported ..." [crossed voices]

    Voice: "... the language supports it."

    Eli Solomon: "I have a question.  Doug brought out in his presentation
the issue of  the  Catalog  of  Interface  Features  and Options (CIFO), I
believe.  I'd like to ask  Doug,  and  Eric next, whether you believe that
that is going to be totally adequate  to do all the things in Ada tasking,
and the other things, of course, that the Ada Tasking Model itself doesn't
cater to.  Doug?"

    D.F.: "Yes.  There're eighteen  different  features in this Catalog of
Interface  Features  and  Options,  and,  having  developed  a Distributed
Operating System  in  Ada  on  the  VAMP  program  and  LHX  programs, and
delivered it to  Wright-Patterson  Air  Force  Base  and NASA and Stanford
Research Institute, we basically did  all the dispatching in assembly code
for the first go 'round.   Then  we  re-wrote  that  thing in Ada.  It was
ugly.  Then, once the  CIFO  features became available, using the InterACT
Ada Compiler, we threw away about  a  fourth  of the operating system.  We
just called "suspend"  and  "resume."    O.K.?    I  only  used two of the

eighteen features, but the ability  to  suspend  a  task and resume a task
gave me complete control over task scheduling on the processor."

    Eric Beser:  "I'm  in  agreement  with  what  you're  saying.   What's
necessary is to be able to pause  execution.  The problem is it becomes an
argument against the abort statement in  a task, and that's something that
has been argued about for  years  --  since  there's been a language.  The
CIFO not only gives you the ability  to  suspend and resume a task, but it
also gives you the ability to vary the priority of the task so that you
can do a priority inheritance scheme to counteract the problems of priority
inversion.  In fact, what Lui Sha at the SEI has shown is that priority
inversion can be counteracted by temporarily assigning a low priority task
a higher priority in order to pre-empt the task that is currently executing
in the task scheduler.  What
the interface features and options were designed to do was supplement the
run time and give the user the ability to have an application interface
into the run time without having to modify it -- without having to modify
the run time."

    Art Moorshead: "One of the claims for Ada tasking that I intentionally
sidestepped was being able to write  one  Ada program and put the tasks on
different processors.  Eric, does the CIFO have the kind of support that's
needed to really support that?"

    Eric Beser: "No. The distributed Ada    is what you're saying.  We did
this early on over in the West Building with the AMSP processor , where we
took the Telesoft compiler and we interlaid a distributed operating system
within the Telesoft compiler.  We  found  that there were no real features
that CIFO gave us, not only  to propagate tasks across the processors, but
for propagating sections back from those  tasks and to support the concept
of local memory versus global memory.
    "There's another area.  One of the problems that we had in distributing
Ada across multi-processors was the fact  that we were using shared global
memory.  Each access to the  global  memory would cause six wait states to
be put into the processor.  What  we were having to do was to artificially
assign our local stacks  so  that  local  variables,  for example, a "for"
loop, would execute off  a  local  stack  and  not  cause execution in the
global memory, which would ultimately slow down the processor.  The reason
why the processor would have wait states  was to deal with the arbitration
between multiple processors  and  the  shared  memory.    That was a major
weakness in the CIFO that's being addressed at the ARTEWG now, not only
distributed tasks, but the whole realm of distribution."

    Art Moorshead:  "There's  a  lot  of  work  and  research  going on on
distribution.  Ada tasking as it stands now just doesn't support it."

    Doug Ferguson: "It's nor pure  research.    On the Navy program that's
taking place  in  Sykesville  right  now,  there's  a  lot  of  work where
Honeywell has paid something like  ten  million  bucks to a company called
VERDIX to develop distributed Ada.  They demonstrated it to the Navy about
three months ago  running  on  a  thing  called  the  `hypercube.'  So the
ability to have Ada tasks distributed across processors is now becoming a
reality."

    Voice: "Was that tightly coupled or loosely coupled?"

    D.F.: "That was tightly coupled."

    Voice: "As far as I understand,  you're  limited to how many CPU's you
can put on a tightly coupled system because ..."

    D.F.: "No.  It's going to work by demonstration time next year, that's
about June of next year, with multiple hypercubes loosely coupled."

    Voice: "Yes, but you're still limited  to the number of CPU's, because
the variable dimensions. The  next  stage  that  I see coming ... [static]
you're going to see a booster  coupling  ...  [static].  Have you seen any
work done around here in that?"

    Eli Solomon: "Jon?"

    Jon Squire: "Yes.   The  Argonne  National  Labs has a hypercube, they
have connection machines, they  have  an  Alliant,  and they have Sequent.
And I know the  Sequent,  the  VERDIX  compiler  also  for the Sequent, is
running tasks for  the  distributed  processors.    The  Alliant now has a
compiler which I think might also be ..."

    Doug Ferguson: "That's tightly coupled -- the Alliant."

    J.S.: " ... and that's tightly  coupled.   So there are at least three
machines I know.  The  root  compiler  for  all  of them happens to be the
VERDIX Front End.  But there are  three  compilers that I know of that are
running tightly coupled distributed tasking right now."

    Eric Beser: "One of the problems  with distributed systems is that the
Honeywell approach was to instrument the Ada code, and they had to add
certain pragmas into the code."

    Doug Ferguson: "They used to do that.  Now it's a separate file.  You
get the APPL [Ada Program Partition Language], separate from the code."

    E.B.: "The approach we took over in the West Building was to make that
transparent to the user and to  set  the  default  Ada task to be a global
task that could propagate across the  processors.  And we let the run-time
system do the load balancing  so  that one task would consistently execute
within the time frame that we  needed.    We were able to demonstrate that
with a tracking algorithm  that  was  redundant  ...  [static] ... for the
multiple processors."

    Jeff Gustin: "One of the points of the rendezvous is, as Art described
it, to combine the  synchronization  and  the  communication all under one
thing so that you didn't  have  the  problem  with the semaphores, and all
that.  Does the CIFO give you any ability to get around -- to get yourself
in trouble again?"

    D.F.: "I think so.  You can get yourself into trouble with the suspend
and resume, I can tell you that!"

    J.G.: "So, basically, it just  gives  you a way out of the Ada Tasking
Model to do the kinds  of  things  we've  always  done with tasking in the
past, which people are going to use."

    Art Moorshead: "People are using the  Ada Tasking Model, too, but it's
not foolproof."

    J.G.: "Sure, since you can  simulate  a  semaphore with an Ada Tasking
Model anyway, you can go back and do the ... " [inaudible].

    E.B.: "Jeff, did you say that's [a] model run-time interface?"

    Doug Ferguson: "No.  Eli would know more about that than anybody else,
having just written the CARTS [Common Ada Run-Time System] proposal."

    Eli Solomon: "The question was  asked  here relating to the Model Run-
Time  Interface  for  Ada,  otherwise  referred  to  as  MRTSI [pronounced
"mertsy"].  We recently,  in  fact,  a  couple  of  weeks ago, submitted a
proposal to Wright-Patterson that proposed  to  take the MRTSI document --
MRTSI, incidentally, came out of a task force that was organized under
ARTEWG, which itself is a group within SIGAda.  The MRTSI document itself
(we have copies here at Westinghouse) is not in its final form by any
means, but it proposes to reduce the problems of porting Ada code from one
compiler
and/or host combination to another.    The  idea  being that if you have a
common set of  interfaces  that  you  know  you  can  use,  you access the
semantics and the  interface  itself  through  that  mechanism so that you
won't have to worry about, `Oh, if I  do it this way for this compiler, do
I have to change my Ada code for this other compiler?'
    "What we did  in  the  CARTS  proposal  is  to  attack  the problem of
combining all of MRTSI, hopefully, with  a  great deal of CIFO and come up
with a set of interfaces that will  allow the user and the Ada language to
do the kinds of things we need for embedded avionics systems."

    J.G.: "When can we expect to  see  some of these features in validated
compilers?  I was using  the  InterACT  compiler.   I don't know if that's
just InterACT ..."

    Several voices together here,  from  which cacophony emerges the voice
of Eli Solomon: " ... suspend and resume?"

    J.G.: "J.G.: "That's for multiple programs, it seems."

    D.F.: "The processor?"

    J.G.: "It's not on a task-by-task basis."

    D.F.: "It is."

    J.G.: "It's on a program-by-program basis."

    D.F.: "Ada InterACT treats -- you can have multiple programs, multiple
tasks -- treats them all the same.  We combine multiple programs."


    J.G.: "I like to suspend or resume.   I couldn't use it for what I was
doing.  Maybe it was just my application.   Anyway, as far as I could see,
that's all it offered."

    Voice: "How would you do an operating system in Ada?"

    J.G.: "I'd code up my own suspend in assembly language."

    D.F.: "If you use the InterACT System over to the DOS, it works fine."

    E.S.: "In fact, we have a  32-bit, which probably doesn't apply to you
just now, but  for  32-bit  work  with  MIPS  R3000  -- the GISA [Guidance
Instruction Set Architecture] processor,  really.    That has been defined
contractually as a  requirement,  that  they  have  to support suspend and
resume, start and cancel, task and program."

    There followed a discussion of the  debate  format.  The value of more
back and forth discussion among the presenters was urged.
    "Software First" was suggested as a topic for a future debate.
    Consensus was that it had been, indeed, a "great debate."
    So the advance notice worked out after all.


Thanks to the presenters and  (especially)  Eli Solomon for helping in the
editing of  this  transcript.    Eli  informs  us  that  he  will make the
transcript available to the three  subcontractors on the CARTS program for
discussion of the issues the debate  raises.                           Ed.