dan@maccs.McMaster.CA (Dan Trottier) (01/17/89)
The subject pretty well says it all! A user asked me how much of a time quantum he gets by default when he logs into the system. In trying to explain how the CPU shares its resources I discovered that I really didn't understand it myself. If anyone can give me a short explanation, or perhaps point me at some useful documentation (I RTFMed but couldn't find anything appropriate), I would appreciate it.
-- 
Dan Trottier                       dan@maccs.McMaster.CA
Dept of Computer Science           ...!uunet!utai!utgpu!maccs!dan
McMaster University                (416) 525-9140 x3444
chris@mimsy.UUCP (Chris Torek) (01/18/89)
In article <1798@maccs.McMaster.CA> dan@maccs.McMaster.CA (Dan Trottier) writes:
>A user asked me how much of a time quantum he gets by default when
>he logs into the system. In trying to explain how the cpu shares its
>resources I discovered that I really didn't understand it myself.

4.3BSD uses a `round robin' priority-based scheduler with 32 priority queues. A process's priority is based on a complex and unreasonable function of its nice value, whether it is in a system call, how much CPU time it has had recently, and the load average.

Ten times each second, the `roundrobin' function in /sys/sys/kern_sched.c arranges for the current process, whatever it may be, to be descheduled and the next eligible process (possibly the one just descheduled) to be scheduled. Thus, if by `quantum' you mean the typical scheduler quantum, the value is essentially random, depending on when the process was scheduled with respect to the constant 100 ms ticks. In the absence of clock-related voluntary context switching, the unfairness should average out to the point where it can be ignored, but that absence is not guaranteed.
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain: chris@mimsy.umd.edu    Path: uunet!mimsy!chris
jfh@rpp386.Dallas.TX.US (John F. Haugh II) (01/18/89)
In article <15504@mimsy.UUCP> chris@mimsy.UUCP (Chris Torek) writes:
>Thus, if by `quantum' you mean the typical scheduler quantum, the value
>is essentially random, depending on when the process was scheduled with
>respect to the constant 100 ms ticks.

In the UNIX idiom the answer would be 100 ms. Every 100 ms you get a chance to lose the CPU, but not more often [barring a process being awakened at a higher priority]. Older kernels used a 1-second quantum; I suspect this is what prompted the question.

>In the absence of clock-related
>voluntary context switching, the unfairness should average out to the
>point where it can be ignored, but that absence is not guaranteed.

All scheduling is unfair. Doesn't the joke go, "If this is timesharing, I want my share now"?
-- 
John F. Haugh II                         +-Quote of the Week:-------------------
VoiceNet: (214) 250-3311   Data: -6272   | "UNIX doesn't have bugs,
InterNet: jfh@rpp386.Dallas.TX.US        |  UNIX is a bug."
UucpNet : <backbone>!killer!rpp386!jfh   +--------------------------------------
mckee@hpfcdc.HP.COM (Bret McKee) (01/19/89)
The BSD scheduler is a non-trivial animal. The following description is actually for 4.2, but I believe it is little changed in 4.3.

Each process is assigned a priority between 0 and 127; in the BSD scheme, lower numbers mean higher priority. The scheduler maintains 32 queues, numbered 0 to 31. (There are 32 queues instead of 128 for reasons of efficiency.) Each process is kept on queue[priority/4]. At any instant, the process with the highest priority (lowest number) is the running process (for these purposes, all processes on the same queue are considered to have the same priority).

These queues are divided into two distinct groups: processes on queues 0-11 are said to be executing at kernel priority, while those on queues 12-31 are executing at user priority. The major differences between the two are that kernel-priority processes are never preempted and kernel priority does not degrade. User priorities, on the other hand, do degrade based upon system load and process CPU usage. The priority of a process at user priority is computed as follows:

	priority = P_USER + cpu_used/CPU_DIVISOR + nice_value*NICE_MULTIPLIER

where:
	P_USER is the constant 50
	CPU_DIVISOR is the constant 4
	NICE_MULTIPLIER is the constant 2

The P_USER term is added to prevent a process from "aging" its way into kernel priority. Every 10 ms the CPU component of the priority of a process is incremented, and whenever it is divisible by CPU_DIVISOR the process is moved to a lower-priority queue. Once per second the CPU component is recomputed as:

	(new)cpu_usage = cpu_usage*load_decay_factor + nice_value

where:
	load_decay_factor = load_factor/(load_factor+1)
	load_factor = DECAY_SCALE*load_average
	DECAY_SCALE is a constant whose value is 2
	load_average is the average number of runnable processes over the
	last 60 seconds

This allows a process to be "forgiven" for time it has used in the past.
As the system becomes more loaded, the load_decay_factor approaches 1, making the usage decay more slowly.

Kernel priority is assigned when a process must sleep waiting for some event. The idea is that since this process is going to be waiting for a while, it should be restarted quickly when the wait is over; this allows I/O-bound processes to make better progress. When the process is awakened, its priority is raised until it finishes the system call it had made, at which time its priority is recomputed. While sleeping, the cpu_usage term is aged as:

	(new)cpu_usage = cpu_usage*load_decay_factor^(seconds_slept-1)

Once every 100 ms the scheduler recomputes the priorities of all processes, to allow multiple processes in the bottom queue to compete for time. The scheduler can also be invoked when an interrupt has been serviced.

So, the answer to "what is the quantum" is approximately CPU_DIVISOR*10 ms for a system with "a normal load". What a long-winded answer to a simple question. Sigh.
---
Bret McKee
Hewlett Packard                   HP-UX Kernel Group
Phone: (303) 229-6116             email: mckee%hpfcde@hplabs.hp.com