[comp.sys.amiga] AmigaDOS scheduling

jms@doctor.Tymnet.COM (Joe Smith) (05/19/89)

In article <761@boing.UUCP> dale@boing.UUCP (Dale Luck) writes:
>A very early version of exec had a Yield() call. This would give up
>the cpu to whatever task was next in the Ready Q. I used it for cpu
>bound demos that I did not want to be cpu bound if there was other
>things going on.
>  [stuff about adjusting the size of the timeslice]

The current priority scheme means that a ray-tracing program at priority -6
will not get any runtime if there is another compute-bound program running at
priority -5 or higher.  I would like to see a scheduler that would allow me to
declare that this compute-bound program is to get 1/3 of the background cycles
and that compute-bound program is to get 2/3 of the background cycles.

After listening to Bernardo Huberman at Xerox PARC, I found a way to apply
some of his ideas about "Computational Ecologies" to the scheduler.  The main
idea is that each background task gets to say "I want N units of CPU usage
per unit time".  If there are only 2 programs, the first wanting 2 units and
the second wanting 1 unit, then the first program will be given 66% of the
CPU and the second will get 33%.  If a third program joins the background run
queue also asking for 1 unit, then the first program gets 50%, and the other
two get 25%.  Likewise, if one program wants 1 unit and a second wants 9 units,
then the former will get only 10% of the available CPU cycles.
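
A minimal sketch of that allotment rule, in C (the numbers mirror the
three-program example above; everything else is illustrative):

        #include <stdio.h>

        int main(void)
        {
            int want[] = { 2, 1, 1 };   /* units requested per task     */
            int n = 3, sum = 0, i;

            for (i = 0; i < n; i++)     /* total units requested        */
                sum += want[i];
            for (i = 0; i < n; i++)     /* each task's proportional cut */
                printf("task %d gets %d%% of idle CPU\n",
                       i, 100 * want[i] / sum);
            return 0;
        }

With requests of 2, 1, and 1 units this prints 50%, 25%, and 25%, matching
the figures above.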

My vote is to keep positive priorities the way they are (absolute priority
levels), but to change the definition of priorities less than zero to
include sharing the CPU's idle time.
-- 
Joe Smith (408)922-6220 | SMTP: JMS@F74.TYMNET.COM or jms@tymix.tymnet.com
McDonnell Douglas FSCO  | UUCP: ...!{ames,pyramid}!oliveb!tymix!tardis!jms
PO Box 49019, MS-D21    | PDP-10 support: My car's license plate is "POPJ P,"
San Jose, CA 95161-9019 | narrator.device: "I didn't say that, my Amiga did!"

gilham@polya.Stanford.EDU (Fred Gilham) (05/19/89)

In article <216@doctor.Tymnet.COM> jms@doctor.Tymnet.COM (Joe Smith) writes:
>
>The current priority scheme means that a ray-tracing program at priority -6
>will not get any runtime if there is another compute-bound program running at
>priority -5 or higher.  I would like to see a scheduler that would allow me to
>declare that this compute-bound program is to get 1/3 of the background cycles
>and that compute-bound program is to get 2/3 of the background cycles.
>

There are all kinds of schedulers out there.  I happen to like the one
where the scheduler can migrate tasks between priority levels, so that
if one's a cpu hog it gets migrated back, but if it doesn't get to run
for a while because there's another cpu hog around, it gets migrated
up.  This migration occurs within limits.  You can start a program at
a given level and only let it migrate so many levels back (so that
your interactive programs never migrate back to "background" levels).
Of course, a program never migrates any farther forward than its base
priority.  I think this has the advantage of adapting to changing
conditions, besides which it is pretty simple to implement.
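
A rough sketch of that migration rule in C (the struct and its limits are
my invention for illustration; SetTaskPri() is the real exec call):

        #include <exec/types.h>
        #include <exec/tasks.h>
        #include <proto/exec.h>

        struct Managed {
            struct Task *task;
            BYTE base;     /* starting priority; never exceeded */
            BYTE floor;    /* lowest level migration may reach  */
            BYTE cur;      /* current (migrated) priority       */
        };

        /* Call once per quantum for each managed task. */
        void migrate(struct Managed *m, BOOL used_whole_slice)
        {
            if (used_whole_slice && m->cur > m->floor)
                m->cur--;              /* demote a cpu hog       */
            else if (!used_whole_slice && m->cur < m->base)
                m->cur++;              /* promote a starved task */
            SetTaskPri(m->task, m->cur);
        }
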
-Fred Gilham

rap@rap.ardent.com (Rob Peck) (05/20/89)

In article <216@doctor.Tymnet.COM> jms@doctor.Tymnet.COM (Joe Smith) writes:
>
>After listening to Bernardo Huberman at Xerox PARC, I found a way to include
>some of his ideas about "Computational Ecologies" to the scheduler.  The main
>idea is that each background task gets to say "I want N units of CPU usage
>per unit time".  If there are only 2 programs, the first wanting 2 units and
....
....
I think this kinda thing could be accomplished by running a single
high-priority task-rescheduler that deliberately sleeps, letting everyone
else run.  Let's say the program is called "LAUNCH" and it automatically
detaches from the CLI that starts it.

	RUN LAUNCH

in the startup sequence to get it going, then later, when starting new
programs that are to run in gimme-at-least-some-timeslices mode:

	LAUNCH 30 PROGRAM Parameters Parameters Parameters

where the first parameter to LAUNCH specifies the desired percentage of time
slices that the launcher should try to assign.  If multiple programs were
to be launched this way, the program would apportion the time slices
in proportion to the percentages specified.  Each time it woke up, it
would lower the
priority of whichever of the launched tasks was currently running and
raise the priority of the next one in line.  ONLY programs started by
LAUNCH would be affected, because they were all started deliberately
as background tasks, and the priority level would, I suppose, be
raised from minus-something to zero for their time allotment.
Word processing programs, for example, would hardly notice background
tasks running alongside, because they spend such a high percentage of their
time waiting for keystrokes and thereby leave time slices available for
ray-tracers and the like.
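
A sketch of what LAUNCH's inner loop might look like in C (the Slot
bookkeeping is invented for illustration; SetTaskPri() and Delay() are
the real exec and dos calls):

        #include <exec/types.h>
        #include <exec/tasks.h>
        #include <proto/exec.h>
        #include <proto/dos.h>

        struct Slot {
            struct Task *task;    /* a task started by LAUNCH     */
            LONG         share;   /* requested percentage, 0..100 */
            LONG         credit;  /* accumulated, unspent share   */
        };

        void launch_loop(struct Slot *slot, int n)
        {
            int i, best;

            for (;;) {
                Delay(5);                    /* sleep 1/10 second     */
                for (i = 0; i < n; i++)      /* accrue each share     */
                    slot[i].credit += slot[i].share;
                best = 0;                    /* most credit runs next */
                for (i = 1; i < n; i++)
                    if (slot[i].credit > slot[best].credit)
                        best = i;
                slot[best].credit -= 100;    /* spend this slice      */
                for (i = 0; i < n; i++)      /* favourite to 0, the   */
                    SetTaskPri(slot[i].task, /* rest parked below it  */
                               (i == best) ? 0 : -5);
            }
        }

Over time each launched task is "the favourite" in proportion to its
requested percentage, which is all the apportioning above requires.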

(If LAUNCH is already running, the new incarnation just sends a message
to the first-run version with the new program name, slice priority and
parameters, then dies.)

And to remove LAUNCH from memory, perhaps:

	LAUNCH 0

Just a thought.

Rob Peck

dbk@teroach.UUCP (Dave Kinzer) (05/20/89)

In article <216@doctor.Tymnet.COM> jms@doctor.Tymnet.COM (Joe Smith) writes:
[with deletions]
>
>The current priority scheme means that a ray-tracing program at priority -6
>will not get any runtime if there is another compute-bound program running at
>priority -5 or higher.  I would like to see a scheduler that would allow me to
>declare that this compute-bound program is to get 1/3 of the background cycles
>and that compute-bound program is to get 2/3 of the background cycles.

   As an example of why this is not as good an idea as it sounds, consider
the following "compute bound" programs.  Program 'A' will require 6 minutes
of CPU time.  Program 'B' will require 4 minutes.  If we launch these with
each 50 percent of the CPU power, Program 'B' will finish in 8 minutes, and
program 'A' will finish in 10 (last 2 minutes at 100% cpu).  For a worst
case, we can pick 60% cpu for 'A' and 40% for 'B' causing both to take the
full 10 minutes.  If we allow only one prioritized task to run, the first
gets done at full CPU power (finishing fast), and the second gets done
no later than it would using shared CPU.  For our example, we launch 'A'
with an Amiga priority of 1, and 'B' with a priority of 2.  'B' gets
done in 4 minutes (4 minutes faster), and 'A' finishes in 10 minutes (the
same as before.)  The method of time-sharing you propose can only delay
the finish of a program.
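
The arithmetic is easy to check; a throwaway C program using the numbers
above:

        #include <stdio.h>

        int main(void)
        {
            /* A needs 6 minutes of CPU, B needs 4 minutes. */

            /* 50/50: B is done at 4/0.5 = 8 min; A then has
             * 6 - 0.5*8 = 2 min left and runs alone until 10. */
            printf("50/50 split:     A=%g  B=%g\n",
                   8 + (6 - 0.5 * 8), 4 / 0.5);

            /* 60/40 (worst case): both grind on to the 10 minute mark. */
            printf("60/40 split:     A=%g  B=%g\n", 6 / 0.6, 4 / 0.4);

            /* Strict priority, B first: B=4, then A alone: 4+6=10. */
            printf("strict priority: A=%g  B=%g\n", 4.0 + 6.0, 4.0);
            return 0;
        }

In every case 'A' finishes at the 10 minute mark; only 'B' ever gains, and
only under strict priority.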

   Many managers do not understand this principle either.

   If, however, you want two programs to output data to their user windows
at a proportional rate, that is another story entirely.
  
|    // GOATS - Gladly Offering All Their Support  Dave Kinzer (602)897-3085|
|   //  >> In Hell you need 4Mb to Multitask!  <<  uunet!mcdphx!teroach!dbk |
| \X/   #define policy_maker(name) (name->salary > 3 * dave.salary)         |

chk@dciem.dciem.dnd.ca (C. Harald Koch) (05/24/89)

In article <6560@ardent.UUCP> rap@rap.ardent.com (Rob Peck) writes:
[ description of a process that changes priorities of other processes in
  the system deleted. ]

The Amiga Exec calls itself through its own jump table in a lot of places.
In particular, it indirects via -$24(ExecBase) whenever an interrupt occurs,
to check if task rescheduling is necessary.

It seems to me that it would be possible (and relatively easy!) to replace
this jump vector (and maybe a couple of other entry points that massage the
task queues) to install any arbitrary scheduling algorithm.
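
Presumably the patch itself would go through SetFunction().  A sketch (the
replacement routine is left out, and since -$24 is a private entry its
register conventions would have to be worked out from a disassembly, so
treat this as illustrative only):

        #include <exec/types.h>
        #include <exec/execbase.h>
        #include <proto/exec.h>

        extern struct ExecBase *SysBase; /* set up by the startup code    */

        extern void MyReschedule(void);  /* must preserve registers and   */
                                         /* fall through to OldReschedule */
        APTR OldReschedule;

        void install(void)
        {
            Forbid();                    /* no task switch mid-patch */
            OldReschedule = SetFunction((struct Library *)SysBase,
                                        -0x24, (APTR)MyReschedule);
            Permit();
        }
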

The only problem I can see with this is that there are tasks out there that
assume the current scheduling algorithm. Some systems that I have come
across assume that a really high priority task will get all the CPU whenever
it wants it. This task then avoids locking certain structures because it
knows that nobody else has access while it runs. Of course, this is a bad
practice, and is rare, so it shouldn't be a major problem.

-- 
C. Harald Koch		NTT Systems, Inc., Toronto, Ontario
chk@gpu.utcs.utoronto.ca, chk@dretor.dciem.dnd.ca, chk@chkent.UUCP