VORBRUGG@DBNPIB5.BITNET (02/09/88)
To correct some of Paul's misconceptions, here are a few remarks on volume sets:

1. Space allocation is no longer done by searching for free clusters in the bitmap. That is only done when the extent cache is initially filled (or has to be replenished). Usually, an allocation is satisfied from the cache, which is just a list of (start-lbn, length) records of free clusters. This cache is shared by all processes of one VAXcluster member, and only a local lock conversion is required to access it.

2. Of course, this locks only allocation operations (file creation, extension and deletion), not a simple file open. In any case, the time to access the extent cache is so short that it shouldn't be a problem.

3. I really don't see what the problem with contiguous-best-try files is supposed to be. Generally, this is a good idea for data files (together with a generous extent size).

4. If you don't use the features and tools offered to tune your system (using FDL to tune ISAM files, for example), it's your own fault. Up to now, the computer user is still expected to have a higher IQ than the machine (s)he is using.

5. As to the use of free space in a volume set, I'm quite certain that the file system will create a file on the disk with the largest free space (in absolute blocks) left. It will then extend the file on that disk unless it is forced (by the user, or by lack of space) to use another disk. (As this necessarily forces an extension header, the decision of which disk to use may also be re-evaluated when an extension header becomes necessary because of fragmentation.) This generally leads to a quite well-balanced volume set. The exceptions are, of course, files that consume a considerable fraction of the total disk blocks; they create an imbalance that takes some time to fill in.

6. The general answer to backing up large volume sets is twofold. First: "You can't have your cake and eat it, too." That is to say, you just have to pay a price for the nice features of volume sets.
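The mechanisms in points 1 and 5 can be sketched roughly like this - a toy model in Python, not Files-11 source; all class and method names here are my own invention. The cache holds (start_lbn, length) records of free clusters and is refilled from the storage bitmap only when it runs dry, and a new file is created on the member disk with the most free blocks:

```python
# Toy model of an extent cache and largest-free-disk selection.
# Illustrative only - not how the VMS file system is actually coded.

class Disk:
    def __init__(self, name, bitmap):
        self.name = name
        self.bitmap = bitmap          # one bool per cluster; True = free
        self.cache = []               # list of (start_lbn, length) free extents

    def free_blocks(self):
        # Free clusters still in the bitmap plus those held by the cache.
        return sum(self.bitmap) + sum(length for _, length in self.cache)

    def _replenish(self):
        """Scan the bitmap for runs of free clusters and move them into the cache."""
        run_start = None
        for lbn, free in enumerate(self.bitmap):
            if free and run_start is None:
                run_start = lbn
            elif not free and run_start is not None:
                self.cache.append((run_start, lbn - run_start))
                run_start = None
        if run_start is not None:
            self.cache.append((run_start, len(self.bitmap) - run_start))
        # Clusters now owned by the cache are marked allocated in the bitmap.
        for start, length in self.cache:
            for lbn in range(start, start + length):
                self.bitmap[lbn] = False

    def allocate(self, count):
        """Allocate `count` contiguous clusters from the cache (point 1)."""
        if not self.cache:
            self._replenish()       # the only time the bitmap is searched
        for i, (start, length) in enumerate(self.cache):
            if length >= count:
                if length == count:
                    del self.cache[i]
                else:
                    self.cache[i] = (start + count, length - count)
                return (start, count)
        raise OSError(f"{self.name}: no contiguous extent of {count} clusters")

class VolumeSet:
    def __init__(self, disks):
        self.disks = disks

    def create_file(self, count):
        """Create a file on the member with the most free blocks (point 5)."""
        disk = max(self.disks, key=Disk.free_blocks)
        return disk.name, disk.allocate(count)
```

In this model, a big file allocated early simply lands on the emptiest disk and stays there until it runs out of room - which is exactly the imbalance point 5 describes for files that consume a large fraction of the set.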
That a volume set physically consists of several disks, which can fail and be repaired individually, is totally irrelevant in this context. (Disks also have a number of surfaces, and a crash of one head doesn't necessarily destroy the others. Nonetheless, nobody expects to be able to read the data stored on them - you just change the HDA and fetch your backup tapes...) Second: If you know or suspect that a disk in a volume set will fail, you can do a single-disk physical backup of it. This lets you repair a single disk with minimum overhead. Of course, it doesn't have the nice side effect of compressing all the files.

Maybe Frank Nagy could tell us how Fermilab handles their user volume set consisting of 16 RA81s. I even suspect they had a crash just a few weeks back...

Jan Vorbrueggen
-------