[comp.os.vms] Further Thoughts On Volume Sets... They Have Considerable Drawbacks.

CLAYTON@xrt.upenn.EDU ("Clayton, Paul D.") (01/22/88)

A recent message from John on the use of volume sets leaves me concerned that
the flip side of the coin with volume sets is not even hinted at. The following
lines are what concern me the most.
  
|  Subj:	Disk volume sets are wondeful.
|
|  I'd recommend volume sets to anybody with enough disks to implement them. It
|   greatly reduces the number of device,etc names users have to remember and 
|   you gradually expand existing logical devices simply by adding another 
|   volume to the set.

While it is easy to 'grow' more disk space that can be accessed through a single,
consistent logical or device name, the problems that I have with volume sets
are the following.
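For reference, the 'gradual expansion' John describes is done with MOUNT/BIND.
It goes roughly like this (device names and labels here are invented, and I am
writing this from memory, so check the MOUNT documentation before trying it):

```
$ ! Create a two-member set; the first volume becomes the root volume
$ MOUNT/SYSTEM/BIND=USERFILES DUA1:,DUA2: USER01,USER02 DISK$USERFILES
$ ! Later, add a third member to the already-mounted set
$ MOUNT/SYSTEM/BIND=USERFILES DUA3: USER03
```

Users keep referencing DISK$USERFILES: throughout, which is the convenience
John is selling. The rest of this message is about what that convenience costs.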

1. When using volume sets, unless you have a 'trial and error programmer or 
hacker' in your shop, you will in most cases NOT use the ability of RMS to 
spread the various areas of an ISAM file across different spindles and locate 
the files on cylinder boundaries. Most people let RMS/FDL use the defaults, 
and you end up with a single file that starts at some random location and 
grows as needed, based on usage. It will also probably have the 'Contiguous 
Best Try' or 'CBT' bit set in the file header, which means that the ACP for 
the disk is now going to LOCK considerably MORE users out of the volume when 
it goes looking for the next spot to grow into. The CBT bit tells the ACP to 
go through the bit map and look for an available area that is the same size 
as the extend size. If it can not find the right size, it then starts 
splintering the requested size into more than one group of smaller sizes in 
order to satisfy the request. This continues and will 'spill' onto other 
volumes in the set. This is called disk thrashing in my book. It also 
consumes a considerable amount of system overhead and LOCKS the disk against 
any 'concurrent' file open, close, extend and delete operations while this 
goes on. Not a nice condition if you have many spindles in the set and lots 
of files with many users.
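To make the point concrete, here is roughly what taking control away from the
defaults looks like in an FDL file. The attribute names and values below are a
sketch from memory, so verify them with EDIT/FDL before using any of this:

```
FILE
        ORGANIZATION            indexed

AREA 0
        ALLOCATION              50000   ! preallocate the whole area up front
        BEST_TRY_CONTIGUOUS     no      ! do NOT set the CBT bit
        EXTENSION               5000    ! large, fixed extend size
```

A big up-front ALLOCATION plus a generous EXTENSION means the ACP goes hunting
through the bit map far less often, which is the whole problem described above.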

2. The space that is available in a multi-spindle set is not used in a manner
that spreads the usage out over several disks. It is a case of fill the first 
volume up then move to the second and so on. This results in a LARGER impact
due to the problems listed in #1. The exception to this is for users that know
how to use the FDL statements to place portions of their file on various 
disks in the set. 
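For completeness, the FDL placement I refer to looks something like the
following. The VOLUME numbers are relative volume numbers within the set;
again this is from memory and the sizes are invented for illustration:

```
AREA 0
        ALLOCATION              20000
        VOLUME                  1       ! data area on relative volume 1

AREA 1
        ALLOCATION              5000
        VOLUME                  2       ! index area on relative volume 2
```

This is the only way I know of to get a volume set to actually spread one
file's I/O load across spindles, and almost nobody does it.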

3. When you have a multi-spindle set, there is a larger tendency to NOT refresh
the set to make the files contiguous and put the disks in better shape. This
causes #1 to come into play even more.
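By 'refresh' I mean the usual image save and restore, which for a set means
doing ALL the members in one operation. Something like the following, with
invented device names; check the BACKUP manual for the exact qualifiers:

```
$ ! Save an image of the entire set to tape, then restore it
$ ! to compact the files back into contiguous space
$ BACKUP/IMAGE DUA1:,DUA2:,DUA3: MTA0:USERSET.BCK/SAVE_SET
$ BACKUP/IMAGE MTA0:USERSET.BCK/SAVE_SET DUA1:,DUA2:,DUA3:
```

With three or more spindles tied together, that window of downtime gets long
enough that people simply stop doing it.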

4. When backing up a set, there are two ways to do it. You can specify the 
logical name as the input to the operation and thus get all the files as though
they were on a single volume. The second method is to specify a specific volume
in the set to be copied in the operation. The problem with both methods is that 
should a single volume fail, you have to rebuild the ENTIRE set and not just the 
one volume. With the file headers linked across volumes and what not, that is 
the ONLY recourse.
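In DCL terms the two methods look roughly like this. The names are invented
and I am quoting qualifiers from memory, so verify them before use:

```
$ ! Method 1 - file-by-file through the set's logical name
$ BACKUP DISK$USERSET:[000000...]*.*;* MTA0:USERSET.BCK/SAVE_SET
$ ! Method 2 - one specific member volume only
$ BACKUP/PHYSICAL DUA2: MTA0:USER02.BCK/SAVE_SET
```

Either way, restoring just USER02.BCK onto a replacement spindle does not buy
you anything, because the headers on the surviving members still point at
extents that no longer exist.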

In light of all the above, I do not recommend sets unless that is the ONLY 
method available to store disk files due to size constraints. I have files on
one of my systems that are 1.2 to 1.5 MILLION blocks in size, and they are 
completely contained on ONE SI93C. If you are going to have large files, look
into the larger disks currently available. The time saved from less thrashing 
through the bitmap/MFD/disk heads is considerable when you go with separate 
volumes with many users or large files.

The philosophic details of how to make a set of separate disks 'easy' for 
users to access is a separate issue, and one that could easily take more room 
than this did. If that issue is raised I shall offer further ideas and 
comments as deemed appropriate to the subject.

I hope this offers further depth to the subject of volume sets and the 
implications of using them. :-)

pdc

Paul D. Clayton - Manager Of Systems
TSO Financial - Horsham, Pa. USA
Address - CLAYTON%XRT@CIS.UPENN.EDU