[comp.dcom.sys.cisco] fast IGRP

jyy@merit.edu (Jessica Yu) (07/06/90)

To have IGRP updates every 15 sec?  That is twice as frequent as RIP
updates.  What percentage of the total traffic on your network is IGRP
traffic?  Is your network very unstable?  If not, what is the reason
to do so?

                           Thanks
                           Jessica
 

hedrick@athos.rutgers.edu (Chuck Hedrick) (07/06/90)

On our central gateways, it looks like IGRP is 2 to 10% of the total.
That doesn't particularly worry me, since we are far from the capacity
of both the cisco CPU's and the networks involved.  We have had
problems with lines that go up and down on a fairly regular basis.
And now and then when we're doing testing, we'll do something to a
gateway that will cause it to lose all its routes and have to start
building the table over again.  My goal was to have connections not
pause more than about 30 sec. in such cases.  This is obviously an
arbitrary number.  Pick your own target.  What is realistic depends
upon your technology.  Since our network is based almost entirely on
T1 and Ethernet, I thought we could get away with running IGRP that
fast.

At the moment we use slow lines only as stubs.  On those connections,
we set the interface passive (i.e. disable sending IGRP) on the
central router, and set a default route on the remote router.  The
remote router still sends out updates for the nets it controls, but of
course that is a rather small update message, since it just mentions a
couple of subnets.
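A minimal sketch of that stub setup, in the style of the timers example
below, might look like the following.  The AS number, interface name,
and addresses here are all hypothetical placeholders; substitute your
own:

  central router (stub line on Serial0):
    router igrp 10
     passive-interface Serial0

  remote router (default pointing back at the central router):
    ip route 0.0.0.0 0.0.0.0 10.1.1.1
    router igrp 10
     network 10.0.0.0

With passive-interface set, the central router still listens for the
remote router's small update but sends nothing down the slow line.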

It is also possible to run IGRP at different speeds in different parts
of the network.  The routers at the interface between the fast and
slow part must be set so they use the fast update time and the slow
timeouts.  E.g. something like

  slow part: timers basic 60 180 0 240
  fast part: timers basic 15 45 0 60
  interface: timers basic 15 180 0 240
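In an actual configuration those lines sit under the router igrp
stanza.  Assuming a hypothetical AS number of 10, a router in the fast
part and a router at the fast/slow boundary would carry something like:

  fast-part router:
    router igrp 10
     timers basic 15 45 0 60

  boundary router:
    router igrp 10
     timers basic 15 180 0 240

The boundary router sends updates at the fast rate but keeps the slow
part's invalid and flush timers, so it doesn't time out routes that the
slow part only advertises once a minute.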

We've used this sort of setup in the past when only part of our
network was running code that knew about the timers command.

Obviously the sort of approach I use requires more care in setting
things up, both in looking at bandwidth and CPU implications, and in
monitoring to make sure that IGRP doesn't take over the network.  But
I believe the results are worth it.

By the way, I have one other suggestion for configuring IGRP: Don't
pass metric information that you don't actually intend to use for
making decisions.  In order to get rid of loops, IGRP removes routes
when the metric increases.  The exact criterion depends upon which
release you are running, but all releases do it in some way.  Suppose
you pass all the NSFnet routes into your internal network in such a
way that the IGRP metric reflects the actual metric being used in
NSFnet.  Whenever this metric increases, there will be an interruption
in traffic for that net, because IGRP will remove the route.  If you
have holddowns, the interruption can be quite significant.  You're
better off defining a default route to your external gateway(s), or
using a constant metric rather than actually passing the NSFnet
metric.
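The constant-metric alternative can be sketched roughly as follows,
assuming your release supports the redistribute and default-metric
commands, that the external routes arrive via EGP, and a hypothetical
AS number; the metric values (bandwidth, delay, reliability, load, MTU)
are placeholders, not a recommendation:

  router igrp 10
   redistribute egp
   default-metric 10000 100 255 1 1500

Since every redistributed route then enters IGRP with the same fixed
metric, fluctuations in the external metric never look like an increase
to IGRP, so it has no reason to remove the route and trigger holddown.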