A105@UWOCC1.BITNET (Brent Sterner) (09/19/86)
Help again. It has been brought to my attention here that a user running a fixed job can be charged varying amounts. Our charging algorithm is simply rate/hour times CPU used. This means that a user performing the same task repeatedly with identical data is being charged for differing amounts of CPU resource. For our site, this is not good.

My impression is that some (perhaps all?) system overhead is being tallied against user jobs. Is this so? Is there any way to stop it? For our site, reproducibility (i.e., fairness) is important. One factor that specifically seems to be at fault is paging: as available memory saturates and paging increases, individual job costs rise. There may be other components affecting us as well.

If there is no way to remove system overhead from our cost calculations, has anyone investigated ways of estimating it (based on the job's paging and/or other factors)? If so, the estimated overhead could be applied to the cost calculations to reduce its impact.

I'm open to any and all suggestions. Please reply directly to me, and if interest is sufficient I'll summarize to the net. Thanks in advance.

Brent Sterner
Computing & Communications Services
Natural Sciences Building
The University of Western Ontario
London, Ontario, Canada  N6A 5B7
Telephone (519)661-2151 x6036
Network <A105@UWOCC1.BITNET>
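The charging scheme Brent describes, together with the overhead correction he asks about, might be sketched like this. This is a hypothetical Python illustration, not any site's actual accounting code; the hourly rate and the overhead-per-page-fault coefficient are invented numbers that a site would have to fit from its own accounting data.

```python
# Hypothetical sketch of rate/hour * CPU charging, with an optional
# correction that estimates system overhead from the job's page-fault
# count. All constants here are illustrative assumptions.

RATE_PER_HOUR = 120.00        # dollars per CPU-hour (invented figure)
OVERHEAD_PER_FAULT = 0.0005   # assumed CPU-seconds of system overhead per page fault

def raw_cost(cpu_seconds):
    """Simple charge: rate/hour times CPU used, as described in the post."""
    return RATE_PER_HOUR * cpu_seconds / 3600.0

def adjusted_cost(cpu_seconds, page_faults):
    """Subtract an estimate of system overhead before charging,
    so that heavy paging on a loaded machine inflates the bill less."""
    est_overhead = OVERHEAD_PER_FAULT * page_faults
    billable = max(cpu_seconds - est_overhead, 0.0)
    return RATE_PER_HOUR * billable / 3600.0

# One CPU-hour with no correction vs. the same hour with 2000 faults deducted:
print(raw_cost(3600.0))             # 120.0
print(adjusted_cost(3600.0, 2000))  # slightly less: one second of overhead removed
```

The coefficient would have to be calibrated empirically, e.g. by regressing charged CPU against page-fault counts for repeated runs of a fixed benchmark job.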
garry@TCGOULD.TN.CORNELL.EDU (Garry Wiegand) (09/22/86)
In a recent article A105@UWOCC1.BITNET (Brent Sterner) wrote:
>   Help again. It has been brought to my attention here that a user
>running a fixed job can be charged varying amounts...

Hear ye, hear ye, all those who want to charge for CPU time or make accurate timing measurements: the "CPU time" figure is *not accurate*. If there is more than one user on the system, it is also *not reproducible*.

My suspicion is that CPU only gets charged at "quantum end". If a task is no longer running when that clock tick happens (for example, it runs for a little while, then stops for a page fault or I/O completion just before the tick), it doesn't get charged; whichever task IS running at that instant does get charged. So: VMS CPU timing is not measuring - it's sampling!

For charging, we just live with it. For timing, what I do is:

1) make sure everyone else is off the system;
2) log in on another terminal, set priority down, and start a little "pure-compute" program (empty DO-loops are satisfactory);
3) run the program I want to time; and
4) ask the "pure-compute" program how many machine instructions it managed to get while 3) was in progress.

Context-switching between the foreground and the background may be soaking up a little machine time, but this is the most accurate method I've found. Or at least reproducible.

PS - don't trust low-priority batch jobs not to scramble your figures either - the page-faulter doesn't seem to care what the nominal job priority is!
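Garry's quantum-end suspicion can be illustrated with a small simulation (hypothetical Python, not VMS code): a task that always leaves the CPU just before the clock tick is never charged, while whichever task happens to hold the CPU at tick time absorbs every quantum. The quantum length and schedule below are invented for the illustration.

```python
# Simulating sampled CPU accounting: charge a full quantum to whoever
# is on the CPU at each clock tick, and compare with true usage.

QUANTUM = 10  # milliseconds per tick (illustrative)

def charge_by_sampling(schedule, ticks):
    """schedule(t) -> name of the task on the CPU at millisecond t.
    Charges one whole quantum to whoever is running at each tick."""
    charged = {}
    for i in range(1, ticks + 1):
        who = schedule(i * QUANTUM)
        charged[who] = charged.get(who, 0) + QUANTUM
    return charged

def true_usage(schedule, ticks):
    """Actual milliseconds each task spent on the CPU."""
    used = {}
    for t in range(1, ticks * QUANTUM + 1):
        who = schedule(t)
        used[who] = used.get(who, 0) + 1
    return used

# Task A computes for 8 ms of every quantum, then stalls for 2 ms on a
# page fault -- so it is never on the CPU when a tick lands, and task B
# absorbs every sample.
def schedule(t):
    return "A" if (t % QUANTUM) in range(1, 9) else "B"

print(true_usage(schedule, 10))        # {'A': 80, 'B': 20}
print(charge_by_sampling(schedule, 10))  # {'B': 100} -- A rides for free
```

This is the pathological extreme, but it shows why the sampled figure drifts with system load: as paging shifts *when* tasks stop relative to the tick, the same job's charged CPU changes from run to run.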