[comp.unix.questions] Context Switch time in UNIX

patiath@umn-cs.cs.umn.edu (Pradip Patiath) (03/22/91)

We would like to know the time a context switch in UNIX takes.
Is there any way to measure this? Is it documented somewhere, 
say for SunOS 4.0.3 on a Sparcstation 1+? How does this figure
vary as a function of # of processes? 

Also, any tips on how to take fine resolution performance
measurements within UNIX would be a great help to me.

Any input from informed netters would be extremely valuable.
Please email me at:
	patiath@cs.umn.edu

Thanks in advance.
-Pradip
-- 
=====================================================================

Pradip Patiath
Sensor & System Development Center                 patiath@cs.umn.edu

torek@elf.ee.lbl.gov (Chris Torek) (03/22/91)

In article <1991Mar21.202637.29340@cs.umn.edu> patiath@umn-cs.cs.umn.edu
(Pradip Patiath) writes:
>We would like to know the time a context switch in UNIX takes.

You will have to define it before you can measure it:

>Is there any way to measure this? Is it documented somewhere, 
>say for SunOS 4.0.3 on a Sparcstation 1+? How does this figure
>vary as a function of # of processes? 

In particular, on SparcStations (and other machines with the Sun MMU)
the word `context' has several meanings, and kernel process scheduling
timings depend on whether an MMU context already exists, among other
things.

Once you sit down and decide what it is you want to measure, the best
way to do it is to use external hardware to monitor some sort of
signals---preferably ones that do not require inserting extra code into
the paths you want timed, although that calls for much fancier external
timing devices.  Otherwise the timing code you insert winds up
changing the time taken.  You must also watch out for cache effects:
simply moving one instruction can change the time taken to run that
instruction, because it moves into or out of the cache, or shares a
cache line with some other important item, or something.

If you want good results, you are pretty much stuck with doing
everything yourself.  If you just want `user level approximations', the
gettimeofday() system call is designed to return values accurate to the
nearest microsecond.  It even comes fairly close to doing this on
the SparcStations, which have microsecond counter/timer chips on board.
-- 
In-Real-Life: Chris Torek, Lawrence Berkeley Lab CSE/EE (+1 415 486 5427)
Berkeley, CA		Domain:	torek@ee.lbl.gov