[comp.os.minix] Comments on nice

kevin@nuchat.sccsi.com (Kevin Brown) (04/04/91)

Having recently obtained Kai-Uwe Bloem's 'nice()' patches, I was most
interested in trying them out.  So I installed them into my kernel (saving
my previous source, of course!), and gave them a shot.

Now, I should probably describe the system I have, because it does have an
effect on what I can do.  I'm running a 25 MHz 386 with 8 meg of RAM.
Because of this, I have expanded the process table to 64 entries (I have
also expanded the global file descriptor table, but that's not relevant to
the discussion here, whereas the process table expansion is).

In addition to the above, Bruce Evans was kind enough to supply me with a
small patch to boost the multitasking performance of my system.  The patch
turns out to be very simple.  Instead of queueing new processes (which
includes processes which have just been awakened after waiting for something,
e.g. I/O) at the end of the queue, Bruce queues them at the beginning.  In
this way, they get immediate attention.

So I installed the nice() patches, eager to see if I could get even more
performance and flexibility out of my system than I already had.  Most of
it went well, except for what Peter MacDonald already mentioned (thanks,
Peter!  You saved me a lot of time).  I compiled the system and did as the
docs suggested: created a short program ("hog") that eats CPU.  I then
forked off 30 of them, and ran kermit (connected to another system) on
a separate console (I'm running Gordon Irlam's virtual consoles.  Don't
go home without them :-).

The system was S...L...O...W....

Changing the priority equation did help some, but the system was still on
its knees with 30 CPU-bound processes running.

In contrast, the system with Bruce's patch is FAST.  You get INSTANT response
to I/O (well, with a short delay, perhaps, but still) and to commands.

I think part of it has to do with the fact that even with nice() installed,
the system still round-robins processes, regardless of where they came
from.  There doesn't seem to be any real distinction between I/O-bound
processes and CPU-bound processes.  Oh, there is some, but the maximum 
resolution available for that purpose is (MILLISEC * HZ) / 1000, where
MILLISEC is the amount of time (in milliseconds, naturally) between calls
to the scheduler.  I have mine set to 25 milliseconds for faster response,
but have played around with it some while testing the nice() patches.  I
got no significant performance variation.

It was a different matter entirely when the 'hog' processes were 'nice'd.
THEN the response was as I think it should be.  Unfortunately, it's a pain
to have to 'nice' everything (or to, er, 'unnice' the things you want to have
fast response), so I happen to prefer Bruce's patch.

Just for grins, I decided to see what the response of the system would be
without either Bruce's patch or the nice() patches installed.

With the standard scheduling and queueing algorithm in place, the system
in effect comes to a complete standstill with 30 'hog's going!  So while
the nice() patches may be slower than Bruce's solution, they're STILL better
than vintage Minix (at least with the nice() patches installed, you can kill 
the processes after you've started them, even if you do have to wait a bit.  
This is effectively impossible with vintage Minix). 

So for those who are interested in sheer speed, use Bruce's patch (supplied
below) instead of nice().  You will be AMAZED.

For those of you who want more flexibility, nice() is probably for you, as
long as you don't plan on running lots of CPU-intensive tasks simultaneously.

Seeing how important it is to queue jobs quickly, and to queue incoming
jobs first, I plan on implementing my own version of the nice() scheduling,
probably with multiple queues (each one representing a priority) and
priority heightening for incoming jobs.  We'll see how it goes...

But for sheer simplicity, elegance, and performance, I don't think you can
beat Bruce's patch...:-)

Here it is (I believe it's relative to vintage 1.5.10, but I'm not absolutely
sure):

*** /usr/1.5/kernel/proc.c	Mon Feb 12 03:24:19 1990
--- proc.c	Mon Aug  6 00:43:43 1990
***************
*** 360,361 ****
--- 408,410 ----
  #endif
+ #if 0
    if (rdy_head[USER_Q] != NIL_PROC)
***************
*** 366,367 ****
--- 415,422 ----
    rp->p_nextready = NIL_PROC;
+ #else  
+   /* Better to add users to the *beginning* of the queue. */
+   rp->p_nextready = rdy_head[USER_Q];
+   if (rdy_head[USER_Q] == NIL_PROC) rdy_tail[USER_Q] = rp;
+   rdy_head[USER_Q] = rp;
+ #endif  
  }
---




Enjoy!


--
Kevin Brown						Disclaimer: huh?
kevin@nuchat.sccsi.com					csci31f7@cl.uh.edu 

Minix -- the Unix[tm] of the 90's.  System V -- the Multics of the 90's.  :-)