flaps@utcsri.UUCP (07/31/87)
How does a task tell the exec that it wants to give up the rest of its
current cpu-time quantum?  I am currently using Delay(1L), which will
probably work fine but is exceedingly ugly.

In case you want to know, this is why I need to force a context switch.
I'm writing a program where several tasks are started.  Through use of
priorities, all are started at once.  One of these functions sets a
global variable that others need.  If through bad luck another function
starts running first and tries to use this global variable before it's
set, it has to allow the first function to set it before proceeding.
Since this is the second line in the code of the first function, all it
has to do is give up the rest of its cpu quantum this time around,
which will give the other task a chance to run before this one runs any
more, thus setting the variable.
--
 //     Alan J Rosenthal
\\ //   flaps@csri.toronto.edu, {seismo!utai or utzoo}!utcsri!flaps,
 \//    flaps@toronto on csnet, flaps at utorgpu on bitnet.
"To be whole is to be part; true voyage is return."
farren@hoptoad.uucp (Mike Farren) (08/03/87)
In article <5175@utcsri.UUCP> flaps@utcsri.UUCP (Alan J Rosenthal) writes:
>   I'm writing a program where several tasks are started.
>Through use of priorities, all are started at once.  One of these
>functions sets a global variable that others need.  If through bad luck
>another function starts running first and tries to use this global
>variable before it's set, it has to allow the first function to set it
>before proceeding.

Seems to me that relying on the scheduling algorithm in EXEC to
synchronize your tasks is a dangerous and losing proposition.  Why not
have the main task Wait() until the first sub-task has set its global
before launching the rest of the sub-tasks?  Or, if it's truly a
global, have the main task set it before launching the sub-tasks?

I don't think that there's any guarantee that tasks will be run in the
order launched, although that's the intuitive way of thinking about it.
Unfortunately, intuition isn't always correct.
--
----------------
Mike Farren      "... if the church put in half the time on covetousness
hoptoad!farren    that it does on lust, this would be a better world ..."
                  Garrison Keillor, "Lake Wobegon Days"
mwm@eris.BERKELEY.EDU (Mike (My watch has windows) Meyer) (08/03/87)
In article <5175@utcsri.UUCP> flaps@utcsri.UUCP (Alan J Rosenthal) writes:
<In case you want to know, this is why I need to force a context
<switch. I'm writing a program where several tasks are started.
<Through use of priorities, all are started at once. One of these
<functions sets a global variable that others need. If through bad luck
<another function starts running first and tries to use this global
<variable before it's set, it has to allow the first function to set it
<before proceeding. Since this is the second line in the code of the
<first function, all it has to do is give up the rest of its cpu quantum
<this time around which will give the other task a chance to run before
<this one runs any more, thus setting the variable.
Like Mike Farren, my first reaction was "this isn't right."  My second
reaction was "Why isn't he using semaphores?"  Checking the RKM shows
why: there isn't a good semaphore system in the Exec.  All you get
is ports.  At least that's better than what one did on v[67] Unix....
So could whoever at CBM is keeping track of things needed for 1.3 add
"real semaphores" to the list? Maybe as "semaphore.library"?
The best solution is probably to set the global variable in the
startup function, as suggested by Mike. From the sounds of things,
this variable will not be changed after it's set, so that should
work fine (otherwise, how does a function know that it's "not set"
yet?). If it does change, how do you handle exclusion during updates?
Will that method work for the startup period?
<mike
--
ICUROK2C, ICUROK2. Mike Meyer
ICUROK2C, ICWR2. mwm@berkeley.edu
URAQT, I WANT U2. ucbvax!mwm
OO2EZ, I WANT U2. mwm@ucbjade.BITNET
ewhac@well.UUCP (Leo 'Bols Ewhac' Schwab) (08/03/87)
[ Invest in OCP.  You have 20 seconds to comply. ]

In article <2609@hoptoad.uucp> farren@hoptoad.UUCP (Mike Farren) writes:
>In article <5175@utcsri.UUCP> flaps@utcsri.UUCP (Alan J Rosenthal) writes:
>>   I'm writing a program where several tasks are started.
>>Through use of priorities, all are started at once.  One of these
>>functions sets a global variable that others need.  If through bad luck
>>another function starts running first and tries to use this global
>>variable before it's set, it has to allow the first function to set it
>>before proceeding.
>
>Seems to me that relying on the scheduling algorithm in EXEC to synchronize
>your tasks is a dangerous and losing proposition.  Why not have the main
>task Wait() until the first sub-task has set its global before launching
>the rest of the sub-tasks?  [ ... ]

	Sounds to me like a classic application for semaphores, which the
Amiga has.  No, I don't know how they work exactly.  Would anyone at CATS
care to clarify their use?

_-_-_-_-_-_-_-_-_-_ Old signature used as 'inews' filler. _-_-_-_-_-_-_-_-_-_
Leo L. Schwab -- The Guy in The Cape      ihnp4!ptsfa!well!ewhac
("AE-wack")                       ..or..  well ---\
Recumbent Bikes:                          dual ----> !unicom!ewhac
The _O_n_l_y Way To Fly!                  hplabs -/
"Work FOR?  I don't work FOR anybody!  I'm just having fun."
cmcmanis@pepper.UUCP (08/03/87)
In article <5175@utcsri.UUCP> flaps@utcsri.UUCP (Alan J Rosenthal) writes:
.>
.>How does a task tell the exec that it wants to give up the rest of its
.>current cpu-time quantum? I am currently using Delay(1L), which will
.>probably work fine but is exceedingly ugly.
.>
.>In case you want to know, this is why I need to force a context
.>switch. I'm writing a program where several tasks are started.
.>Through use of priorities, all are started at once. One of these
.>functions sets a global variable that others need.
One way to do this would be to have the slave tasks all wait on one of
the 'break' signals (^C ^D ^E ^F).  When the main task has set the global
it could signal all of the other tasks that the global was ready to be
used. A more robust way would be to have the task that sets the global
start the other tasks after it has set it. Then no sleeping would be required.
A third way is to use a named message port to pass around the data.
--Chuck McManis
uucp: {anywhere}!sun!cmcmanis BIX: cmcmanis ARPAnet: cmcmanis@sun.com
These opinions are my own and no one else's, but you knew that, didn't you.
flaps@utcsri.UUCP (08/04/87)
In a recent article, I, flaps@utcsri.UUCP (Alan J Rosenthal), write:
>How does a task tell the exec that it wants to give up the rest of its
>current cpu-time quantum?
>
>... I'm writing a program where several tasks are started.
>Through use of priorities, all are started at once.  One of these
>functions sets a global variable that others need.  If through bad luck
>another function starts running first and tries to use this global
>variable before it's set, it has to allow the first function to set it...

Many people have explained alternate ways of expressing the algorithm
listed above.  But this is not the whole story, and none of these solve
my problem.

We are writing library routines.  The programmer using our routines can
call a routine to create a task.  They can start our special task to
handle some things, and their own task to use these facilities.  We
want our special task to be programmed just like a user task.  This
wouldn't be necessary; we could special-case it in the create-a-task
function, but we SHOULDN'T HAVE TO!!

Most multitasking systems provide a way to force a context switch.  It
is extremely easy to implement.  I'm sure that there are kludgey ways
to do it on the Amiga, but I would like to do it nicely.
--
 //     Alan J Rosenthal
\\ //   flaps@csri.toronto.edu, {seismo!utai or utzoo}!utcsri!flaps,
 \//    flaps@toronto on csnet, flaps at utorgpu on bitnet.
"To be whole is to be part; true voyage is return."
tenney@well.UUCP (Glenn S. Tenney) (08/05/87)
Remembering wayyyy back (V21, 23 or 24 days, it all gets muddled) there
WAS an exec call to force a context switch.  It was dropped 'cause
"they" felt there was no need for it.  I pointed out that if I was
running my own multitasker (there are good reasons to do this) that I'd
really like to just say "offer the cpu", but no.....
--
Glenn Tenney
UUCP: {hplabs,glacier,lll-crg,ihnp4!ptsfa}!well!tenney
ARPA: well!tenney@LLL-CRG.ARPA        Delphi and MCI Mail: TENNEY

As Alphonso Bodoya would say... (tnx boulton)
Disclaimers?  DISCLAIMERS!?  I don' gotta show you no stinking DISCLAIMERS!
bryce@COGSCI.BERKELEY.EDU (Bryce Nesbitt) (08/07/87)
In article <3677@well.UUCP> tenney@well.UUCP (Glenn S. Tenney) writes:
>Remembering wayyyy back (V21, 23 or 24 days, it all gets muddled)
>there WAS an exec call to force a context switch.  It was dropped
>'cause "they" felt there was no need for it.  I pointed out that
>if I was running my own multitasker (there are good reasons to do
>this) that I'd really like to just say "offer the cpu", but no.....

One place that I wanted to use this call was right before a Forbid().
If I Forbid() at the start of my quantum, rather than at any old place,
I am less likely to exceed my time, and I minimize the effect on the
system.  For this (and other) reasons I'd sure like to see that call
come back.

I had another situation where there was a known dead time during some
data acquisition from the parallel port.  The reader task *had* to
Disable() to meet timing requirements, but at certain points it could
guarantee X ms of free processor time.  It ended up not returning the
time to the system because there was no easy guarantee that some other
(lower priority) task might not Disable() and screw things up.  [It
would have had to SetFunction() Disable(); messy!  SetFunction()'ing
Forbid() would be worthless since that is often implemented as a macro.]

|\ /|  .
{o O}  .    bryce@cogsci.berkeley.EDU -or- ucbvax!cogsci!bryce
( " )       Ack!  (NAK, EOT, SOH)
  U    "Success leads to stagnation; stagnation leads to failure."
brianr@tekig4.TEK.COM (Brian Rhodefer) (08/08/87)
May I interject some questions into the "context switch" discussion?
The Amiga's multitasking has me mystified.  I am using it on blind
faith and a prayer, with next to NO idea of what it is really doing,
and would greatly appreciate some advice.

1) If Task A, running at priority P, sends a Signal to Task B, which is
   sleeping at priority P+1, what happens?  Does "A" go to sleep
   immediately, or only after its current timeslice is exhausted?  Does
   the same go for posting a message to one of B's ports?

2) Where do interrupts fit in the scheme of things?  I presume that
   they are able to interrupt even the highest priority tasks; is this
   true?

(further naive questions deleted...)

It occurs to me that a little concerted fooling around with some test
programs would be helpful.  If I find anything interesting or
counterintuitive, I'll post.  Meanwhile, would either of the following
two methods result in forcing a context switch?

1) If "Signal"ing does indeed force a re-evaluation of the task to run,
   why not spawn a simpleminded little task, at an extremely high
   priority, such as 125 or so, which endlessly (except for some
   devilishly clever termination mechanism) "Wait"s for a signal?  To
   force a switch, just send the dimwit task the signal it's waiting
   for.  You get suspended, it gets awakened, and it immediately puts
   itself back to sleep.  If I read Amiga Writ aright, a fresh
   evaluation of all suspended tasks, including the one which sent the
   signal, should ensue.

2) Does the "Change Task Priority" (or whatever it's called) routine
   have immediate effect?  (i.e., if Task A promotes sleeping task B's
   priority above that of A, does A get chloroformed?)  If so, setting
   one's own task's priority to the same value it already has might
   force the kind of context switch that's sought, without the extra
   clutter of a slave task.

Foolishly hoping to understand my Amiga BEFORE its EPROMs evaporate,

Brian Rhodefer
dillon@CORY.BERKELEY.EDU (Matt Dillon) (08/23/87)
    1 master program which must set up some stuff before slave
      programs can work
    N slave programs

Simple:  All Slave programs do a Wait(somesignal) before doing anything
else.  The master program sets up the variables, then simply signals
all the Slave programs.  Poof.  And, there is no reliance on the
scheduling algorithm.

					-Matt