jdc@rama.UUCP (James D. Cronin) (07/27/88)
Do any of you netlanders know what the paging performance is for
Vaxen (or any other machine, for that matter)?  For example, how
long does it take to service a "soft" page fault (i.e. the desired
page is in the free/modified page list) vs. a "hard" page fault
(i.e. going out to disk)?

Thanks in advance,
Jim Cronin
--
James D. Cronin    UUCP: {...}!rochester!tropix!rama!jdc
Scientific Calculations/Harris
pardo@june.cs.washington.edu (David Keppel) (07/30/88)
>[ Faulting performance on a VAX ]
You can probably figure it out yourself by setting your csh "time"
variable with:
set time=(5 "%Uu %Ss %E %P (%Xt+%Dds+%Kavg+%Mmax)k %Ii+%Oo (%Fmaj+%Rmin)pf %Wswaps")
The "maj" figure is "major", or i/o-requiring faults, while "min" is
"minor", or recoverable faults. The "k" figures are wrong in various
combinations on various machines (bugs in undocumented features) so
don't use them. "u"=user time, "s"=system time, %E is for elapsed
time, and %P is percent of cpu (computed as [s+u]/elapsed).
Page fault times are idle/faults, and idle time is elapsed-(s+u).
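A worked example, with made-up numbers: suppose a run reports
2.0u 3.0s 0:50 10% with (900maj+400min)pf.  Then

    idle = elapsed - (s+u) = 50 - (2.0+3.0) = 45 seconds

and if the process spent essentially all of that idle time waiting on
the 900 major faults, each hard fault cost roughly 45/900 = 50 ms.
Treat that as an upper bound, since idle time also includes waiting
for the CPU and for any other i/o.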
Now all you need to do is find a machine where you can allocate more
virtual memory than physical memory (not always easy) and write some
programs that have various kinds of faulting behavior. One way to do
this is to allocate a huge array (larger than physical memory) and
step through it in page-size increments. You can compare performance
when the pages are only read, when they are written, etc.
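Here is a minimal sketch of such a program in C; the array size, the
page size, and the command-line convention are all assumptions you
would adjust for your machine (a VAX hardware page is 512 bytes, and
4.3BSD's getpagesize() will tell you the software page size):

    /* Touch a large array in page-size steps and let the shell's
     * "time" report the fault counts.  Sizes are assumptions. */
    #include <stdio.h>
    #include <stdlib.h>

    #define ARRAY_BYTES (16L * 1024 * 1024)  /* pick > physical memory */

    int main(int argc, char **argv)
    {
        long pagesize = 4096;        /* or getpagesize() where available */
        char *a = malloc(ARRAY_BYTES);
        long i, sum = 0;
        int write_pass = (argc > 1); /* any argument selects the write test */

        if (a == NULL) {
            perror("malloc");
            return 1;
        }
        for (i = 0; i < ARRAY_BYTES; i += pagesize) {
            if (write_pass)
                a[i] = 1;            /* dirty the page: it must be written back */
            else
                sum += a[i];         /* read-only touch */
        }
        printf("touched %ld pages, sum=%ld\n", ARRAY_BYTES / pagesize, sum);
        return 0;
    }

Compile it and run it under the csh "time" shown above, once
read-only and once with an argument to select the write pass:

    % cc -o faulter faulter.c
    % time faulter
    % time faulter write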
As the array size drops below the physical memory limit, the hard
fault rate should drop dramatically.  Soft faults are harder to
generate in isolation, so you may need to do some algebra to figure
out soft rates once you know the hard rates.
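To make "do some algebra" concrete, a sketch under the assumption
that idle time is dominated by fault service: let Fh and Fs be the
major and minor fault counts "time" reports, and th and ts the
unknown per-fault service times.  Two runs with different array sizes
(and hence different fault mixes) give two equations in two unknowns:

    idle1 = Fh1*th + Fs1*ts
    idle2 = Fh2*th + Fs2*ts

Solve the pair for th and ts; or, if th is already pinned down by a
hard-fault-dominated run, ts = (idle - Fh*th) / Fs.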
Your tests should probably be done on a lightly-loaded system;
otherwise your process will be ready to go after a page fault but
will not run because the CPU is busy.
;-D on ( followup reply e-mail, avoid posts! ) Pardo
shepperd@dms.UUCP (Dave Shepperd) (08/02/88)
From article <8103@rama.UUCP>, by jdc@rama.UUCP (James D. Cronin):
> Do any of you netlanders know what the paging performance is for
> Vaxen (or any other machine, for that matter)?  For example, how
> long does it take to service a "soft" page fault (i.e. the desired
> page is in the free/modified page list) vs. a "hard" page fault
> (i.e. going out to disk)?

Measured soft fault times (microseconds):

    Vax 11/730    1700
    Vax 11/750     800
    Vax 11/780     500
    uVax II        500
    Vax 3xxx       180

All times are reasonably accurate.  The one I remember for sure is
the 11/780 at 498us.  The 730 and 750 were measured with VMS 3.x.
The 780 was measured under VMS 3.x and VMS 4.x (no difference).  The
uVax II and Vax 3xxx were measured under VMS 4.7.  Hard fault times
are a function of your paging disk's specs coupled with what other
I/O demands your system is placing on it.
carl@CITHEX.CALTECH.EDU (Carl J Lydick) (08/03/88)
> > Do any of you netlanders know what the paging performance is for
> > Vaxen (or any other machine, for that matter)?  For example, how
> > long does it take to service a "soft" page fault (i.e. the desired
> > page is in the free/modified page list) vs. a "hard" page fault
> > (i.e. going out to disk)?
>
> You can probably figure it out yourself by setting your csh "time"
> variable with:
>
> set time=(5 "%Uu %Ss %E %P (%Xt+%Dds+%Kavg+%Mmax)k %Ii+%Oo (%Fmaj+%Rmin)pf %Wswaps")
>
> The "maj" figure is "major", or i/o-requiring faults, while "min" is
> "minor", or recoverable faults. The "k" figures are wrong in various
> combinations on various machines (bugs in undocumented features) so
> don't use them. "u"=user time, "s"=system time, %E is for elapsed
> time, and %P is percent of cpu (computed as [s+u]/elapsed).
>
> Page fault times are idle/faults, and idle time is elapsed-(s+u).

That's all well and good if he's interested in the timing on a
machine running UNIX.  Since the way page faulting is done varies
from system to system, his original question was not, I'll admit,
well-posed.  To give you some idea of the rough degree of difference
we're talking about here, consider that a soft fault requires
modifying some pointers (I think that's all it takes, and yes, I know
putting it that way makes it sound simpler than it is); a hard page
fault, on the other hand, requires actual disk I/O.  That means not
only that hard page faults take much longer than soft ones, but also
that the time to process a hard fault can depend heavily on what else
the paging disk is being used for.

> Now all you need to do is find a machine where you can allocate more
> virtual memory than physical memory (not always easy) and write some
> programs that have various kinds of faulting behavior.

If he's trying something like this on a VMS machine, it's easy to
find a machine where you can allocate more virtual than physical
memory, at least if you have a cooperative system manager.  There's a
special SYSGEN parameter designed specifically to lobotomize your
VAX:

    PHYSICALPAGES (Special Parameter)
        Maximum number of physical pages to be used - permits
        testing of smaller memory configurations without actually
        removing memory boards.

Actually, that documentation is somewhat out of date.  The most
common use for this parameter these days is to reserve part of the
memory on a VAXstation for the graphics hardware.  If you make the
mistake of setting PHYSICALPAGES to the amount of memory you actually
have, the graphics hardware and the operating system both end up
trying to use the same chunk of memory, with no mechanism to
communicate who's using what.  Last time I did that, the top half of
the VAXstation console screen became a mirror image of the bottom
half, just before the system crashed.

> One way to do this is to allocate a huge array (larger than
> physical memory) and step through it in page-size increments.  You
> can compare performance when the pages are only read, when they are
> written, etc.  As you get below the physical memory limit the hard
> fault rate should drop dramatically.  Soft faults are harder to
> generate so you may need to do some algebra to figure out soft
> rates once you know the hard rates.

Again, if he's got a cooperative system manager, he can intentionally
detune the VAX to force page faulting.
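For the record, a sketch of the SYSGEN incantation, from memory of
VMS 4.x (the page count here is made up; pick one smaller than your
real memory, remembering that VAX pages are 512 bytes):

    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> USE CURRENT              ! start from the current parameters
    SYSGEN> SET PHYSICALPAGES 8192   ! pretend we have only 4 Mb
    SYSGEN> WRITE CURRENT            ! takes effect at the next reboot
    SYSGEN> EXIT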
If he sets PFRATL unreasonably high, sets the working set
unreasonably low, allows voluntary decrementing of the working set,
and mucks around with a number of other parameters, he can have the
system doing practically nothing BUT soft page faults (a sketch of
that incantation follows at the end of this message).

> Your tests should probably be done on a lightly-loaded system;
> otherwise your process will be ready to go after a page fault but
> will not run because the CPU is busy.

Depending on what he wants the information for, running his tests on
a lightly loaded system may be as useless as doing the same with a
standard "benchmark" package.  After you're through, you'll know how
your machine behaves when it's in a state you're seldom going to see,
but you still won't know how it performs under load.
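For the morbidly curious, a sketch of the parameter-mucking described
above, again from memory of VMS 4.x SYSGEN; the values are
deliberately unreasonable, which is the point:

    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> USE ACTIVE
    SYSGEN> SET PFRATL 120    ! nonzero enables voluntary decrementing
    SYSGEN> SET WSDEC 100     ! and this makes the decrements aggressive
    SYSGEN> WRITE ACTIVE      ! dynamic parameters: no reboot needed
    SYSGEN> EXIT

Couple that with stingy per-process working-set quotas and the system
will spend its days shuffling pages between working sets and the free
list.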
MCGUIRE@GRIN1.BITNET ("The Sysco Kid ", McGuire,Ed) (08/04/88)
James, I can't give you exact figures comparing soft and hard faults.
Perhaps this will be sufficient.  Soft faults occur at memory speeds.
Hard faults occur at disk speeds.  With high speed processors such as
the 6000 and 8000 series, the time for a hard fault is an eon
compared to the time for a soft fault.  Always tune for fewer hard
faults.

Ed