C70:info-cpm (07/07/82)
From JWP@Mit-Ai Tue Jul 6 20:36:22 1982
JWP@MIT-AI 07/04/82 05:10:20 Re: Deblocking
To: INFO-CPM at MIT-AI
Up until recently, I had taken the deblocking algorithm for
granted while doing some extensive (commercial) work with the
BIOS/monitor for my SuperBrain. Without much thought, and with
only a few patches, I just inserted the DR-supplied code.
With all this talk of problems with the code lately, I
decided that I should sit down and really look at what it does.
I was especially interested because of a problem sometimes
encountered while doing a wildcard file transfer in PIP where it
would lose its place in the directory and end up skipping half
the files. Checking the BDOS for a patch whose absence might
cause such a condition, I found that the patch had already been
applied, and so my elusive gremlin was still at large (and
remains so today).
I finally was able to comprehend how the blocking/deblocking
algorithm worked and found nothing out of order except that in
the 2.0 release of the algorithm, DR neglected to clear UNACNT in
READ.
One thing that gets me is that while the read algorithms are
good and serve their purpose well, the write logic seems a
little lacking. Would it be proper to write only when necessary
(i.e., the way reads are done)? Of course, the
directory write can be used to force a write. This approach
seems reasonable enough. The only time data would be lost is
when either a cold boot is performed or the system goes down
before the current buffer is written. When you think about it,
entire extents are usually lost when this happens anyway - the
unwritten buffer wouldn't be missed unless you know how to use DU
to rebuild the extent. Files would be closed with the forced
(directory) write function (1).
Jonathan Platt@MIT-MC
C70:info-cpm (07/25/82)
From JWP@Mit-Mc Sat Jul 24 17:22:45 1982
A while ago I tried to send a message to INFO-CPM
but was having some problems getting it out. What follows
is that message. My apologies to those who may be getting this
for the second time.
================
Up until recently, I had taken the deblocking algorithm for
granted while doing some extensive (commercial) work with the
BIOS/monitor for my SuperBrain. Without much thought, and with
only a few patches, I just inserted the DR-supplied code.
With all this talk of problems with the code lately, I
decided that I should sit down and really look at what it does.
I was especially interested because of a problem sometimes
encountered while doing a wildcard file transfer in PIP where it
would lose its place in the directory and end up skipping half
the files. Checking the BDOS for a patch whose absence might
cause such a condition, I found that the patch had already been
applied, and so my elusive gremlin was still at large (and
remains so today).
I finally was able to comprehend how the blocking/deblocking
algorithm worked and found nothing out of order except that in
the 2.0 release of the algorithm, DR neglected to clear UNACNT in
READ.
One thing that gets me is that while the read algorithms are
good and serve their purpose well, the write logic seems a
little lacking. Would it be proper to write only when necessary
(i.e., the way reads are done)? Of course, the
directory write can be used to force a write. This approach
seems reasonable enough. The only time data would be lost is
when either a cold boot is performed or the system goes down
before the current buffer is written. When you think about it,
entire extents are usually lost when this happens anyway - the
unwritten buffer wouldn't be missed unless you know how to use DU
to rebuild the extent. Files would be closed with the forced
(directory) write function (1).
Jonathan Platt@MIT-MC
================
Since then, I have found the problem with PIP. It was hardware.
Any thoughts on the write-only-when-necessary philosophy?