[net.unix-wizards] Why is the "real time" so much greater than the "cpu time"

govern@houxf.UUCP (07/14/83)

On my 4.1BSD system, the output of "time" for a long, CPU-intensive
program typically shows a "real time" about 20 times as high as the
CPU time, and the "user" time is often close to the real time.

1) Is this normal?  The ratio seems awfully high.  How can I tell
whether the system is running OK or whether it needs tuning?

2) Exactly what times go into the CPU and user times?
	- When a process has 20% of the CPU, is the "CPU time"
		value scaled to reflect this?
	- Where does IO time fit in?
	- What about paging time? (The application was a very large
		simulation, so it had to page a lot.)

			Thanks;
			Bill Stewart
				houxf!govern
				ucbvax!ihnp4!houxf!govern
				     allegra!houxf!govern

drockwel@bbn-vax@sri-unix.UUCP (07/16/83)

From:  Dennis Rockwell <drockwel@bbn-vax>

Your question indicates a basic misapprehension:  both the "user time"
and the "sys time" given by the time command are CPU times.  The user
time is that time spent in your program, and the sys time is the time
the operating system spent supporting your program.  Thus, if your
"user time" is close to the "real time", then your system is pretty
well tuned, at least for CPU-bound processes.
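
A program can also read those same two figures for itself with the
times(2) system call.  The fragment below is only a rough sketch: it
assumes the usual struct tms interface and a 60 Hz clock (the HZ
value here is a guess), and the exact headers and tick rate may
differ from one system to another.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/times.h>

    #define HZ 60   /* assumed tick rate; often 60/sec on VAX systems */

    int main()
    {
        struct tms t;
        volatile long x = 0;
        long i;

        for (i = 0; i < 500000; i++)    /* burn some user-mode CPU */
            x += i;

        times(&t);      /* fill in accumulated user and sys ticks */
        printf("user: %ld ticks (%.2f sec)\n",
               (long) t.tms_utime, (double) t.tms_utime / HZ);
        printf("sys:  %ld ticks (%.2f sec)\n",
               (long) t.tms_stime, (double) t.tms_stime / HZ);
        return 0;
    }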

Of course, the figures above are subject to quantization error; to be
more accurate, the CPU times are accumulated from the clock ticks that
occur while your program has control of the CPU.  CPU time spent doing
IO on your behalf gets added in (as sys time); also, if the clock ticks
while the system is servicing a device interrupt, that tick gets
charged to you, even if the device that interrupted has nothing to do
with you.
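
To make the tick-charging concrete, here is a much-simplified sketch
of what the clock interrupt handler's accounting amounts to.  It is
illustrative only; the names (cur_proc, utime_ticks, stime_ticks,
was_in_user_mode) are made up and this is not the actual 4.1BSD code.

    /*
     * Simplified sketch of per-tick CPU accounting.
     * All names are invented for illustration.
     */
    struct proc {
        long utime_ticks;       /* ticks counted as "user" time */
        long stime_ticks;       /* ticks counted as "sys" time  */
    };

    struct proc *cur_proc;      /* whoever held the CPU when the tick came */

    /* Called once per clock interrupt, e.g. 60 times a second. */
    void clock_tick(int was_in_user_mode)
    {
        if (was_in_user_mode)
            cur_proc->utime_ticks++;    /* charged as user time */
        else
            cur_proc->stime_ticks++;    /* kernel work, including an
                                           unrelated device interrupt,
                                           is charged to the current
                                           process as sys time */
    }

Because each whole tick goes to whoever happens to be current, a
process that is unlucky about when interrupts arrive can be charged
for work done on someone else's behalf, which is the quantization
error mentioned above.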