[comp.unix.internals] close

ske@pkmab.se (Kristoffer Eriksson) (10/23/90)

In article <35111@cup.portal.com> ts@cup.portal.com (Tim W Smith) writes:
>Furthermore, when the close() fails, you now have a program that knows
>that some amount of previously written data is not valid. ...
>Or does this mean that a program should keep a copy
>in memory of all data that is hard to reproduce until it closes the file?

Yes, I think it should. Provided that you really think it can do anything
to recover from the problem when it is detected, of course. There's no
point in saving the data if it won't be possible to recover from the trouble
and write it out later. There is also no point in saving it if you can
simply rerun the program with the original input data, which, in many
cases, you still have around. (I certainly would not throw away my input
data before the output data was safely stored away, and often not even then.
You never know what can happen.) And in the case of an editor, for example,
there is no problem in saving the data for later retry, since you have to
store the edited text in memory anyway.

You don't have to store your data for longer than until the next fsync()
you do (if you do any; if your data is very sensitive, you may have good
reason to do some). And if your system happens to have the option of
setting a synchronous write mode on your Very Important File, then you
don't have to save anything at all.

>In summary, this behaviour of a file system is not acceptable.

It apparently was deemed acceptable for Unix. And I think it is quite hard
to make a completely failure-free file system, especially if you want
performance too.
-- 
Kristoffer Eriksson, Peridot Konsult AB, Hagagatan 6, S-703 40 Oerebro, Sweden
Phone: +46 19-13 03 60  !  e-mail: ske@pkmab.se
Fax:   +46 19-11 51 03  !  or ...!{uunet,mcsun}!sunic.sunet.se!kullmar!pkmab!ske

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (10/26/90)

In article <4335@pkmab.se> ske@pkmab.se (Kristoffer Eriksson) writes:
> You don't have to store your data for longer than until the next fsync()
> you do

Are you saying that we have to invoke the overhead of fsync() to solve a
problem not related to disk and CPU failures? That for a relatively
simple synchronization problem we have to send mounds of junk over the
network, when otherwise it might never need to traverse the network at
all? That even in a guaranteed failure-free system where the CPU and
disks never crash, we would have to use fsync()? Do you really believe
that EDQUOT should be made as disastrous as EIO? That's what you imply.

---Dan