[comp.lang.pascal] Reset

NORM%IONAACAD.BITNET@cunyvm.cuny.edu ( Norman Walsh) (06/11/90)

>  Unless I am badly mistaken (and I have several pieces of running code to
>  persuade me I'm right), your second parameter here should be the number of
>  bytes per block you want read.  Looking immediately below, it looks as if
>  it should be 10240.  My usual blocks are only 512, so I don't know how a
>  single BLOCKREAD/WRITE of this size is going to behave, but will agree at
>  least with what you describe here.

The second parameter to Reset() or ReWrite() sets the file's record size,
which is the unit in which BlockRead/BlockWrite count.  The actual number of
bytes transferred is controlled by the BlockRead/BlockWrite call itself: its
third parameter is the number of records to read or write.  So, given
Reset(f,1), the call BlockRead(f,buffer,10240,Actual) reads 10240 bytes in a
single (or nearly single) operation.
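For concreteness, a minimal Turbo Pascal sketch of that arrangement (the
file name DATA.BIN is just a placeholder):

```pascal
var
  f      : file;                       { untyped file }
  buffer : array[1..10240] of Byte;
  Actual : Word;                       { records actually transferred }
begin
  Assign(f, 'DATA.BIN');
  Reset(f, 1);                         { record size = 1 byte }
  BlockRead(f, buffer, SizeOf(buffer), Actual);
  { Actual now holds the number of bytes read, which may be
    less than 10240 near the end of the file }
  Close(f);
end.
```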

>  How did file I/O perform when BLOCKREAD/WRITE were only handling a single
>  byte per block?  Which is what I'm sure has been happening from these RESET
>  and REWRITE calls.

The best results are obtained by doing blockreads of about 4k or 16k.  Note:
this can be achieved *either* by doing a reset(f,4096) and a blockread of
one block or by doing a reset(f,1) and a blockread of 4096 blocks.  The
performance degradation from reading 4095 or 4097 bytes instead of 4096 is
quite remarkable...
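Side by side, the two equivalent arrangements look like this (again a
hypothetical file name; with the 4096-byte record size, the file's length
should be a multiple of 4096 or the final BlockRead will hit a partial
record):

```pascal
var
  f      : file;
  Buf    : array[1..4096] of Byte;
  Actual : Word;
begin
  Assign(f, 'DATA.BIN');            { hypothetical file name }

  { Arrangement 1: 4096-byte records, one record per BlockRead }
  Reset(f, 4096);
  BlockRead(f, Buf, 1, Actual);     { Actual counts 4096-byte records }
  Close(f);

  { Arrangement 2: 1-byte records, 4096 of them per BlockRead }
  Reset(f, 1);
  BlockRead(f, Buf, 4096, Actual);  { Actual counts bytes }
  Close(f);
end.
```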

>  By the way, remember that your BLOCKWRITE's are going to be writing chunks
>  this size, even when there's much less than that occupying them.  So if you
>  read a file that goes, say, 10 bytes into its final bufferload, its chunks
>  are going to be written with a full 10240-byte final block to contain those
>  10 bytes.

This is the main reason that I always do a Reset(f,1) and then choose the
amount of data read in the call to BlockRead...or the amount written
in BlockWrite.  Note: with a blocksize of 512 bytes (Reset(f,512)) it is
impossible to read the last 10 bytes of a file that is 1034 bytes long
without getting an IO error of some kind...
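A file-copy loop along those lines, using the Reset(f,1) pattern (the file
names are placeholders):

```pascal
var
  Src, Dst : file;
  Buf      : array[1..10240] of Byte;
  NumRead, NumWritten : Word;
begin
  Assign(Src, 'SOURCE.DAT');  Reset(Src, 1);     { 1-byte records }
  Assign(Dst, 'DEST.DAT');    Rewrite(Dst, 1);
  repeat
    BlockRead(Src, Buf, SizeOf(Buf), NumRead);
    BlockWrite(Dst, Buf, NumRead, NumWritten);
  until (NumRead = 0) or (NumWritten <> NumRead);
  Close(Src);
  Close(Dst);
end.
```

The final partial chunk (e.g. the last 10 bytes of a 1034-byte file) is read
and written exactly, with no padding and no IO error.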

                                                        ndw

milne@ics.uci.edu (Alastair Milne) (06/14/90)

In <23598@adm.BRL.MIL> NORM%IONAACAD.BITNET@cunyvm.cuny.edu ( Norman Walsh) writes:

>>  How did file I/O perform when BLOCKREAD/WRITE were only handling a single
>>  byte per block?  Which is what I'm sure has been happening from these RESET
>>  and REWRITE calls.

>The best results are obtained by doing blockreads of about 4k or 16k.  Note:
>this can be optained *either* by doing a reset(f,4096) and a blockread of
>one block or by doing a reset(f,1) and a blockread of 4096 blocks.

    That's right.  Or any other arrangement of BlockSize*BlockCount that
    yields 4096.

> The
>performance degradation from reading 4095 or 4097 bytes instead of 4096 is
>quite remarkable...

    Interesting.  I hadn't explored it, as the block sizes I am using are
    inherited from the p-System, where they are the filing system's normal
    block size.

    But as I think about it, I can see where you may have answered something
    that surprised me quite a while ago, namely that Block I/O on the
    p-System is much faster than Block I/O on DOS.  I doubt whether the
    p-System's being interpreted has much to do with it, as I think the bulk
    of Block I/O's code is at the native-code level; but it may be that the
    512-byte size is as badly suited to file I/O in DOS as it is well suited
    to the p-System.  Perhaps a better-adjusted block size would bring DOS'
    file performance back up to that of the p-System (yes, that is the
    direction I mean).

>Note: with a blocksize of 512 bytes (Reset(f,512)) it is
>impossible to read the last 10 bytes of a file that is 1034 bytes long
>without getting an IO error of some kind...

   I don't think there's a problem with it.  The remaining bytes of the block
   are just unused, unless this has changed since last I tried it.
   Filled with nulls or uninitialised, I don't remember which.


   Alastair