[net.unix-wizards] Overwriting paging files

chris@umcp-cs.UUCP (Chris Torek) (11/02/86)

In article <8545@sun.uucp> guy@sun.uucp (Guy Harris) writes:
>In a system using NFS ... if the process using that file tries to
>fetch a page from a file that has been modified since the process
>in question first attached to it, it gets zapped by a SIGKILL (a
>message is printed on the user's terminal, if there's a terminal
>associated with this process).
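[The behaviour Guy describes amounts to a staleness check at page-fault time. As a rough user-space simulation only (the real check lives in the kernel's NFS client; `PagedFile` and its use of the file's modification time stand in for the client's attribute check and are invented for illustration):

```python
import os

class PagedFile:
    """Simulates a process attached to a remote executable: it
    records the file's modification time at attach, and every page
    fetch re-validates against the server's current copy."""

    def __init__(self, path):
        self.path = path
        self.attach_mtime = os.stat(path).st_mtime_ns  # snapshot at attach

    def fetch_page(self, offset, size=4096):
        # Re-check the server's copy on each fault.
        if os.stat(self.path).st_mtime_ns != self.attach_mtime:
            # The kernel described above would send SIGKILL here.
            raise ProcessLookupError("text file changed: would SIGKILL")
        with open(self.path, "rb") as f:
            f.seek(offset)
            return f.read(size)
```

Any modification to the file between attach and fault makes the next fetch fatal, no matter how long the process has been running. --mod.]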

Not very nice.  It would be better if the pages were brought over
and stored locally until the process is done with them.  This could
be done as a `background task', to keep it from affecting performance
much.
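[Chris's alternative might look like this in outline; again a simulation with invented names (`LocallyCachedFile`), not a proposed kernel implementation. The client copies the whole file to local storage as a background task at attach time, and later page fetches read the private snapshot, so a server-side change can no longer kill the process:

```python
import shutil
import tempfile
import threading

class LocallyCachedFile:
    """Copies the remote file to local storage in the background at
    attach time; page fetches wait for the copy and then read the
    local snapshot, immune to later server-side changes."""

    def __init__(self, path):
        tmp = tempfile.NamedTemporaryFile(delete=False)
        tmp.close()
        self.local = tmp.name
        self._copied = threading.Event()
        threading.Thread(target=self._copy_in, args=(path,),
                         daemon=True).start()

    def _copy_in(self, path):
        shutil.copyfile(path, self.local)   # the `background task'
        self._copied.set()

    def fetch_page(self, offset, size=4096):
        self._copied.wait()  # faults during the copy still race
        with open(self.local, "rb") as f:
            f.seek(offset)
            return f.read(size)
```

Note the `wait()`: pages faulted while the copy is still in flight are the remaining window of vulnerability. --mod.]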
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7690)
UUCP:	seismo!umcp-cs!chris
CSNet:	chris@umcp-cs		ARPA:	chris@mimsy.umd.edu

guy@sun.uucp (Guy Harris) (11/03/86)

> >In a system using NFS ... if the process using that file tries to
> >fetch a page from a file that has been modified since the process
> >in question first attached to it, it gets zapped by a SIGKILL (a
> >message is printed on the user's terminal, if there's a terminal
> >associated with this process).
> 
> Not very nice.  It would be better if the pages were brought over
> and stored locally until the process is done with them.  This could
> be done as a `background task', to keep it from affecting performance
> much.

When would the pages be brought over?  When the program was first executed?
This may narrow the window of vulnerability, but it wouldn't close it
entirely.  I also wouldn't go so far as to say doing it as a background task
wouldn't affect performance much without seeing some hard data - if the
program is very big, you will be tying up your network and your server's
disk fetching a bunch of pages which, presumably, you'll not be using.

When the file is written?  No can do.  The file may be written by another
machine, so you have no way of knowing it's being written.
-- 
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com (or guy@sun.arpa)

chris@umcp-cs.UUCP (Chris Torek) (11/04/86)

In article <4135@umcp-cs.UUCP> I wrote:
>>... It would be better if the pages were brought over and stored
>>locally until the process is done with them.  This could be done
>>as a `background task', to keep it from affecting performance much.

In article <8830@sun.uucp> guy@sun.uucp (Guy Harris) writes:
>When would the pages be brought over?  When the program was first executed?

Yes.

>This may narrow the window of vulnerability, but it wouldn't close it
>entirely.

True---but note that most of the statistics gathered by 4BSD process
migration studies show that programs are grouped into `short lived'
and `long lived', and that there are very few `medium life' programs.
If the window is trimmed down to (say) five seconds, that would
help programs like editors considerably.  (Imagine having your
editor die after working on a file for several hours!  Fortunately,
most editors can recover from crashes, including being killed by
the kernel because the server's copy of the editor changed.)

>I also wouldn't go so far as to say doing it as a background task
>wouldn't affect performance much, without seeing some hard data ....

A point.  But on the Sun-9, this will all be done by the network
processor and the disk-I/O-processor over private memory channels
on your ten terabit LAN anyway, right :-)?

>When the file is written?

No.  Aside from the server's having to track all client paging
behaviour, by then it is probably too late anyway.  Delaying the
write while clients squirreled away their own copies might work,
but might be slow; writing a new copy would cost disk space and
kernel complexity.
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7690)
UUCP:	seismo!umcp-cs!chris
CSNet:	chris@umcp-cs		ARPA:	chris@mimsy.umd.edu