[comp.arch] disk allocation strategy

det@hawkmoon.MN.ORG (Derek E. Terveer) (05/24/90)

In article <1990May17.220613.21280@unx.sas.com> kent@mozart.unx.sas.com (Paul Kent) writes:
> Where do you put the user directories?
> 
>   distributed over the 20-odd departmental file servers
>   on each individual's workstation (along with OS and swap space)
> 
> I would appreciate any notes from people with setups like this,
> on the pros and cons of centralised user directories, and the
> management of all the NFS cross mounting.

One potential problem that I see is how to back up distributed users'
directories.  If they are centralized, they can be more easily backed up.

derek
-- 
Derek Terveer		det@hawkmoon.MN.ORG

david@indetech.com (David Kuder) (05/29/90)

In article <1990May24.071122.7009@hawkmoon.MN.ORG> det@hawkmoon.MN.ORG (Derek E. Terveer) writes:
>One potential problem that I see is how to back up distributed users'
>directories.  If they are centralized, they can be more easily backed up.

One strategy that has been re-invented in several different forms is
to copy the data on local disks onto the central servers.  The schemes
range from doing network file system mounts to doing backups on the
individual machines and shipping the tape images over the net to the
fileserver.  Check out the proceedings of the Large Installation System
Administration (LISA) conferences held by Usenix over the last several
years for details.
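
To give the flavor of it (the hostnames and devices below are invented,
so substitute your own), the two ends of that range look roughly like:

# one form: the server NFS-mounts the client's disk and archives it to tape
mount client:/home /mnt
tar cf /dev/rmt0 /mnt

# another form: run dump on the client and ship the image over the net
rsh client dump 0f - /dev/sd0g | dd of=/dev/rmt0 bs=20b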
-- 
David A. Kuder              Looking for enough time to get past patchlevel 1
415 438-2003  david@indetech.com  {uunet,sun,sharkey,pacbell}!indetech!david

dan@maccs.dcss.mcmaster.ca (Dan Trottier) (05/30/90)

In article <1990May28.232017.9279@indetech.com> david@indetech.com (David Kuder) writes:
>In article <1990May24.071122.7009@hawkmoon.MN.ORG> det@hawkmoon.MN.ORG (Derek E. Terveer) writes:
>>One potential problem that I see is how to back up distributed users'
>>directories.  If they are centralized, they can be more easily backed up.
>
>One strategy that has been re-invented in several different forms is
>to copy the data on local disks onto the central servers.  The schemes
>range from doing network file system mounts to doing backups on the
>individual machines and shipping the tape images over the net to the
>fileserver.  Check out the proceedings of the Large Installation System
>Administration (LISA) conferences held by Usenix over the last several
>years for details.

We back up client machines from the fileservers via the rdump command.
The difficult part was deciding how to coordinate the dumps.  The
solution was to run everything from one of the servers, using the
following csh script to dump the clients (the setup at the top is
shown here with placeholder values; adjust them for your site):

# Setup -- the values here are placeholders
set hosts = ( clienta clientb )   # client hosts to dump
set fs = ( "/ /usr" "/ /home" )   # fs[x] lists the filesystems of hosts[x]
set rdump = "/etc/rdump 9uf"      # dump command; the level is just an example
set dhost = server                # host with the tape drive
set dev = /dev/rmt0               # tape device on $dhost
set dumplog = /usr/adm/dumplog    # log file prefix
set ext = `date +%m%d`            # log file extension (month and day)
set i = 1

# For each host in hosts[x] 
while ( $i <= $#hosts ) 
  # and its corresponding filesystems listed in fs[x]
  foreach j ( $fs[$i] )
    # start up rdump on each system to do the actual dump
    rsh $hosts[$i] ${rdump} ${dhost}\:${dev} $j >>& ${dumplog}.${ext}
  end
  # move on to the next host
  @ i++
end

This is run in the middle of the night and dumps incremental backups to
an Exabyte drive.  The dump level can be controlled to either maximize
file recovery or minimize tape usage.
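
For instance, the level could be keyed to the day of the week (a
sketch, assuming a date(1) that understands %w):

# level 0 (full) dump on Sunday, level 1-6 incrementals the other days
set level = `date +%w`               # day of week, 0 = Sunday
set rdump = "/etc/rdump ${level}uf"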

-- 
Dan Trottier                                       dan@maccs.dcss.McMaster.CA
Dept of Computer Science                       ...!uunet!utai!utgpu!maccs!dan
McMaster University                                      (416) 525-9140 x3444