[net.micro.cpm] possible problems with large numbers of open files simultaneously?

"Robert L. Krawitz" <ZZZ.RLK%MIT-OZ@MIT-MC.ARPA> (01/14/85)

Does CP/M do strange things when many files are open and being written
at once?  I have a program that does this (six files are being written
at once, and are therefore open), and a variety of strange behaviours
occur; for example, the write sequential call seems to return errors
(a non-zero value in the A register) before the disk is actually full.

Related question:  the documentation for this op (whatever the number
is) says that a non-zero value is returned in the A register for a
nonsuccessful write due to a full disk.  Can this happen for reasons
other than a full disk, for example some flavor of write error?

When deleting a file, what if anything besides the file name (the
first 12 bytes, giving the drive, name, and extension) should be
initialized, and to what value?  Which should not be?

When opening a file for reading, same questions.  Does not closing a
file after reading it, either partially or totally, cause any
problems?

What is the best procedure for temporarily closing a file so it can be
read from disk in a different FCB, and then reopening it later for
writing, at the spot where I left off when closing it?  That is, I flush the
memory buffers for the six files I mentioned above, close the files,
and use a different FCB for reading them.  When I read it, I open the
file, but never close it.  To reopen the file, I save the number of
the last record, open the proper extent of the file, and restore the
last record number (base+32).

To initialize an FCB for creating a file or deleting it, I set the
following to zero: bytes 12 through 15, and byte 32 (offset from the
base of the FCB).  Is this the right thing to do?  Should I do this
much?

What seems to happen is that when the disk is full, my program deletes
all the files.  It should only delete the oldest generation (that's
implicit in the file name), and it only calls the routine to do so when
the value returned from the write is non-zero.

If anyone can help me with this I would be extremely grateful.

Robert Krawitz

andrew@orca.UUCP (Andrew Klossner) (01/18/85)

All of this applies to CP/M-80 version 2.2.

Key to understanding disk file I/O in CP/M is the fact that all of the
information about an "open file" is contained in the FCB, which your
program allocates and controls.  To open a file, CP/M just fills in
your FCB with the directory information about the base segment, then
promptly forgets about the file.  When you read, it uses the FCB to
determine which record to get, calls the BIOS to read it, then updates
the FCB to point to the next record.  When you cross to a new segment,
CP/M goes to the directory and fills in the FCB with the new segment's
information.
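
For reference, here is the CP/M 2.2 FCB layout sketched as a C
structure.  The offsets are the standard documented ones; the field
names are my own.

    /* CP/M 2.2 file control block -- 36 bytes */
    struct fcb {
        char drive;       /*  0: 0 = default drive, 1 = A:, 2 = B:, ... */
        char name[8];     /*  1: file name, upper case, blank padded */
        char type[3];     /*  9: file type, upper case, blank padded */
        char extent;      /* 12: EX, current extent ("segment") number */
        char s1, s2;      /* 13: reserved for BDOS use */
        char reccnt;      /* 15: RC, records used in this extent */
        char alloc[16];   /* 16: allocation map for this extent */
        char currec;      /* 32: CR, next sequential record */
        char ran[3];      /* 33: R0-R2, random record number */
    };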

On output, the directory is not updated until you do a CLOSE or WRITE
to a different segment.  That's why, if you CREATE a file, WRITE many
records to it, then kill your program, you often discover that you have
a zero length file; the WRITEs happened but the directory was never
updated to record them.  On the next warm boot, all the records that
were written are reclaimed as free space.
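
As a sketch of what that implies, assuming a bdos() helper in the style
of the BDS C library (function number in the first argument, FCB or
buffer address in the second, A register returned); the F_xxx names are
my own defines:

    #define F_CLOSE  16          /* close file */
    #define F_DELETE 19          /* delete file */
    #define F_WRITE  21          /* write sequential */
    #define F_MAKE   22          /* make (create) file */
    #define F_DMAOFF 26          /* set DMA (record buffer) address */

    extern int bdos();           /* returns the A register */

    main()
    {
        static char fcb[36];     /* statics start out zeroed */
        static char rec[128];    /* one CP/M record */

        /* ... fill in fcb[0..11] with drive, name, and type ... */
        bdos(F_DELETE, fcb);              /* clobber any old copy */
        if (bdos(F_MAKE, fcb) == 255)     /* 255: no directory space */
            return;
        /* ... put 128 bytes of data in rec ... */
        bdos(F_DMAOFF, rec);              /* point BDOS at our buffer */
        if (bdos(F_WRITE, fcb) != 0)      /* non-zero: write failed */
            return;
        bdos(F_CLOSE, fcb);      /* this is what updates the directory */
    }

Kill the program before that final close and you get the zero-length
file described above.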

	"Does CP/M do strange things when many files are open and being
	written at once?  I have a program that does this (six files
	are being written at once, and are therefore open), and a
	variety of strange behaviours occur, such as the disk write
	sequential call seems to return errors (non-zero value in the A
	register) before the disk is actually full."

I regularly open dozens of files for input and output, with no trouble.
Since CP/M doesn't record knowledge of open files, there's no problem
with any internal tables overflowing (there aren't any).  Perhaps the
fact that the directory updates are deferred is fooling you into
thinking that the disk isn't full when actually all the free records
have been used up.

	"Related question:  the documentation for this op (whatever the
	number is) says that a non-zero value is returned in the A
	register for a nonsuccessful write due to a full disk.  Can
	this happen for other reasons than a full disk?  Examples would
	be some flavor of write error, etc."

A write will also fail if it's attempting to create a new segment and
the directory is full.

	"When deleting a file, what if anything besides the file name
	(the first 12 bytes, giving the drive, name, and extension)
	should be initialized, and to what value?  Which should not
	be?"

Of the first 16 bytes, set the last four to zero.  Actually I always go
belts-and-suspenders and zero all the rest of the FCB, but it shouldn't
be necessary.
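
A sketch of that initialization, with the FCB treated as a plain
36-byte array (clearfcb is my own helper name):

    /* Prepare an FCB for OPEN, MAKE, or DELETE.  The caller fills in
       the drive, name, and type (bytes 0-11).  The minimum is to zero
       bytes 12-15 (and byte 32 before an open or make); clearing
       everything through byte 35 is the belt-and-suspenders version. */
    clearfcb(fcb)
    char *fcb;
    {
        int i;

        for (i = 12; i < 36; i++)
            fcb[i] = 0;
    }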

	"When opening a file for reading, same questions.  Does not
	closing a file after reading it, either partially or totally,
	cause any problems?"

Just the opposite.  You should take pains NOT to do a CLOSE of a file
that was used only for reading.  This is because all a CLOSE does is
copy the FCB back out to the directory.  If you haven't modified the
file, this is an unnecessary disk access, and will prevent your program
from running when the file or the disk is read-only.

	"What is the best procedure for temporarily closing a file so
	it can be read from disk in a different FCB, and then reopening
	it later for writing, at the spot I left off when closing it?
	I. e. I flush the memory buffers for the six files I mentioned
	above, close the files, and use a different FCB for reading
	them.  When I read it, I open the file, but never close it.  To
	reopen the file, I save the number of the last record, open the
	proper extent of the file, and restore the last record number
	(base+32)."

If you're absolutely sure that you're not going to write (or otherwise
modify) the file while it's temporarily closed, it suffices to do a
CLOSE and keep the FCB, then resume WRITING with the FCB later.  This
is because CLOSE doesn't cause the file to no longer be OPEN in the
usual sense; all CLOSE really does is update the directory.  In fact,
if you have a transaction processing program which adds records to an
open file, it should CLOSE the file whenever it knows that it will be
idle for a while (waiting for another line of terminal input), to make
sure that the entire file will be there if the system crashes or
someone removes the floppy.
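
A sketch of that checkpoint idea, using the same hypothetical bdos()
helper as above; the point is just that the same FCB keeps working
after the close:

    extern int bdos();           /* returns the A register */

    /* Commit what has been written so far without giving up the FCB.
       CLOSE only rewrites the directory entry; the FCB still describes
       the file, so later sequential writes continue where they left off. */
    checkpoint(fcb)
    char *fcb;
    {
        return bdos(16, fcb);    /* BDOS 16 = close; 255 means it failed */
    }

    /* ... later, with no intervening OPEN:
           bdos(26, rec);        -- set the DMA address
           bdos(21, fcb);        -- write sequential, as before        */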

	"To initialize an FCB for creating a file or deleting it, I set
	the following to zero: bytes 12 through 15, and byte 32 (offset
	from the base of the FCB).  Is this the right thing to do?
	Should I do this much?"

This should be enough.  But it can't hurt to zero the whole thing, just
in case.  I admit to what may be superstition here, but I keep finding
that the "undefined" or "reserved for future use" bits in the FCB turn
out to be used.

  -- Andrew Klossner   (decvax!tektronix!orca!andrew)       [UUCP]
                       (orca!andrew.tektronix@csnet-relay)  [ARPA]

oacb2@ut-ngp.UUCP (oacb2) (01/20/85)

> Just the opposite.  You should take pains NOT to do a CLOSE of a file
> that was used only for reading.  This is because all a CLOSE does is
> copy the FCB back out to the directory.  If you haven't modified the
> file, this is an unnecessary disk access, and will prevent your program
> from running when the file or the disk is read-only.

The BDOS (CP/M 2.2 and, I assume, CP/M Plus) is smart enough not to rewrite
the directory entry if the FCB hasn't been changed.  Not closing input files
is just asking
for trouble if you ever upgrade to a multiuser or multiprocessor system.

> If you're absolutely sure that you're not going to write (or otherwise
> modify) the file while it's temporarily closed, it suffices to do a
> CLOSE and keep the FCB, then resume WRITING with the FCB later.  This
> is because CLOSE doesn't cause the file to no longer be OPEN in the
> usual sense; all CLOSE really does is update the directory.  In fact,
> if you have a transaction processing program which adds records to an
> open file, it should CLOSE the file whenever it knows that it will be
> idle for awhile (waiting for another line of terminal input), to make
> sure that the entire file will be there if the system crashes or
> someone removes the floppy.

Again, this may cause trouble if you upgrade to a multiuser or multiprocessor
system.

I strongly recommend that all files be closed after processing and that
I/O never be done to a "closed" FCB.  Closing an input file causes negligible
overhead.  Opening a closed file does require some overhead, but I think it's
worth it.
-- 

	Mike Rubenstein, OACB, UT Medical Branch, Galveston TX 77550