[comp.protocols.tcp-ip.domains] Pros and cons of secondary name servers off site

wales@CS.UCLA.EDU (Rich Wales) (03/07/91)

What is the current policy/philosophy on having a copy of a domain's
data stored on at least one "off-site" server (e.g., a secondary name
server for UCLA data that is located somewhere other than UCLA)?

When the domain system first came out, it was either a requirement or a
strong recommendation that this should always be done.  However, when I
looked through RFC 1123 (host software requirements) the other day, I
was unable to find any comments on the issue of off-site name service.

When I suggested to a colleague around here that we ought to arrange
for an off-site secondary server for our domain info, he countered that
doing this would be pointless.  He reasoned that, if his department's
machines were all down, no one would be able to send him mail (or TELNET
or FTP to him) anyway -- and anyone trying to send mail to his
department would get temporary "name server failure" errors and try again
later.  So (he reasoned), why bother with off-site backup name service?

My gut reaction to the above was that it's probably better to have the
name server info available anyway, even if the hosts in question aren't
reachable.  But I can't think of any convincing reason why this =must=
be so.

Is the idea of off-site backup name service just one of those concepts
that seemed to make sense in the original design of the system, but
which turned out to be impractical and has since been quietly dropped?

Comments?  If possible, I'd prefer an explanation of =why= off-site
backup name service is (or is not) a good idea, rather than simple
comments of the form "thou shalt do it because such-and-so RFC demands it".

Rich Wales <wales@CS.UCLA.EDU> // UCLA Computer Science Department
3531 Boelter Hall // Los Angeles, CA 90024-1596 // +1 (213) 825-5683

thorinn@RIMFAXE.DIKU.DK (Lars Henrik Mathiesen) (03/07/91)

   From: Rich Wales <wales@CS.UCLA.EDU>

   When I suggested to a colleague around here that we ought to arrange
   for an off-site secondary server for our domain info, he countered that
   doing this would be pointless.  He reasoned that, if his department's
   machines were all down, no one would be able to send him mail (or TELNET
   or FTP to him) anyway -- and anyone trying to send mail to his
   department would get temporary "name server failure" errors and try again
   later.  So (he reasoned), why bother with off-site backup name service?

1) If you have a backup MX record pointing off-site, it's obviously a
good idea for it to be visible when you're down.

2) Some mailers will only send mail via IP if they can see an A (or
MX) record.  If you're down and no record is visible, the mail may be
routed via UUCP instead (and be bounced).
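
As a concrete sketch of both points, a zone might carry an off-site
secondary NS and a backup MX like this (all names here are made up for
illustration):

        dept.example.edu.   IN  NS   ns.dept.example.edu.         ; on-site primary
        dept.example.edu.   IN  NS   ns.offsite.example.edu.      ; off-site secondary
        dept.example.edu.   IN  MX   10 mail.dept.example.edu.    ; preferred mail host
        dept.example.edu.   IN  MX   20 relay.offsite.example.edu. ; backup MX

If the off-site secondary is still answering while the department is
down, remote mailers can find the backup MX and relay or queue the
mail instead of failing the lookup outright.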

--
Lars Mathiesen, DIKU, U of Copenhagen, Denmark      [uunet!]mcsun!diku!thorinn
Institute of Datalogy -- we're scientists, not engineers.      thorinn@diku.dk

braden@ISI.EDU (03/08/91)

	From namedroppers-RELAY@NIC.DDN.MIL Wed Mar  6 23:15:27 1991
	Date: Wed, 6 Mar 1991 12:55:19 -0800 (PST)
	From: wales@CS.UCLA.EDU
	To: namedroppers@nic.ddn.mil
	Subject: Pros and cons of secondary name servers off site

	What is the current policy/philosophy on having a copy of a domain's
	data stored on at least one "off-site" server (e.g., a secondary name
	server for UCLA data that is located somewhere other than UCLA)?

	When the domain system first came out, it was either a requirement or a
	strong recommendation that this should always be done.  However, when I
	looked through RFC 1123 (host software requirements) the other day, I
	was unable to find any comments on the issue of off-site name service.


Rich,

  The host requirements documents RFC-1122 and RFC-1123 were carefully
  limited to defining the requirements for host software, not to
  specifying how that software is to be USED.  Thus, they generally do
  not give rules for the operation or configuration of hosts, and
  therefore have nothing to say about the way DNS components are
  configured.  I would look to RFC-1032 and RFC-1033 for guidance on
  your question.
  Regards,
  
     Bob Braden
     

asp@uunet.UU.NET (Andrew Partan) (03/08/91)

In article <910306.205519z.05578.wales@valeria.cs.ucla.edu>, wales@CS.UCLA.EDU (Rich Wales) writes:
> ... why bother with off-site backup name service?

If you do not have a reachable name server, then when some host tries
to get information on your domain, it will get back something like
'host unknown' or 'no such host' or 'no information'.

On the other hand, if just your net is unreachable, then the host will get
back the correct info, and when it tries to use it, it will get 'host
unreachable' or 'net unreachable' -- errors that are a lot more tractable
to deal with than the absence of any information.
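
The difference can be sketched in a few lines of (modern) Python; the
port, the helper name, and the returned strings here are illustrative,
not anything a real mailer emits:

```python
import socket

def classify_failure(hostname, port=25, timeout=5):
    """Distinguish a failed lookup from a failed connection."""
    try:
        addr = socket.gethostbyname(hostname)
    except socket.gaierror:
        # No name data available at all: mailers see 'host unknown'
        # and may bounce the message permanently.
        return "host unknown (permanent: mail may bounce)"
    try:
        conn = socket.create_connection((addr, port), timeout=timeout)
        conn.close()
        return "reachable"
    except OSError:
        # Name data exists but the host or net is down: a temporary
        # error, so the mail is queued and retried later.
        return "host unreachable (temporary: mail is queued and retried)"
```

A lookup that fails outright is exactly the case an off-site secondary
avoids; a connection that fails is the benign, retryable case.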

	--asp@uunet.uu.net (Andrew Partan)

asp@UUNET.UU.NET (Andrew Partan) (03/12/91)

[This message is not really about secondary name servers off site, but
rather about the values for the SOA timers suggested in RFC 1033.]

> From: braden@ISI.EDU
> Subject: Re:  Pros and cons of secondary name servers off site
>
>  I would look to RFC-1032 and RFC-1033 for guidance on your question.  

RFC 1033 suggests using the following values in the SOA record:

         @   IN   SOA   SRI-NIC.ARPA.   HOSTMASTER.SRI-NIC.ARPA. (
                           45         ;serial
                           3600       ;refresh
                           600        ;retry
                           3600000    ;expire
                           86400 )    ;minimum

I think that the refresh & retry times are way too short in today's
Internet.

I have been suggesting to anyone who asks me to use at least 1 day for
the refresh time (and preferably more) and at least 1 hour for the
retry time.  We use 5 days & 1 hour here.

I also think that the expire time of ~40 days is rather long - I have
been suggesting 20 days.
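
Plugging those suggested values into the RFC 1033 example record (same
owner name and serial; only the timers change) gives:

         @   IN   SOA   SRI-NIC.ARPA.   HOSTMASTER.SRI-NIC.ARPA. (
                           45         ;serial
                           432000     ;refresh (5 days)
                           3600       ;retry (1 hour)
                           1728000    ;expire (20 days)
                           86400 )    ;minimum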


Has anyone been looking at operational issues for DNS (such as SOA
times)?  Is there any work going on about updating this RFC or the BOG
(Bind Operator's Guide)?

	--asp@uunet.uu.net (Andrew Partan)