[comp.protocols.time.ntp] precision

billd@fps.com (Bill Davidson) (06/07/91)

I am a little curious about the precision parameter in the ntp.conf
file.  It says in the manual that it should probably not be changed
unless there is a good reason to do so.  What constitutes a good
reason?

Some of the machines we have (including the ones we build but also some
others) synchronize their system clocks to a hardware counter at each
gettimeofday() call and at each tick rather than just blindly adding
"tick" microseconds every tick.  As a result, microsecond precision is
possible.  Admittedly, the only machine I checked had its clock
timer crystal spec'd at 0.0025% (or 25PPM).  Also there's the overhead
of setting the system clock and returning from the system call so I'm
not sure it's right to say it's accurate to 1 microsecond.  Maybe it
is.
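
The idea, roughly (a sketch in C only; read_hw_counter() and COUNTER_HZ
are made-up names for whatever the hardware actually provides):

    /* Interpolate the clock from a free-running hardware counter at
     * gettimeofday() time, instead of just adding "tick" every tick.
     */
    #include <sys/time.h>

    extern unsigned long read_hw_counter();  /* hypothetical counter read */
    #define COUNTER_HZ 1000000                /* its assumed frequency     */

    struct timeval last_tick_time;   /* set by the clock interrupt      */
    unsigned long  last_tick_count;  /* counter value at that interrupt */

    void
    interpolated_gettimeofday(tv)
            struct timeval *tv;
    {
            unsigned long elapsed = read_hw_counter() - last_tick_count;

            *tv = last_tick_time;
            /* convert counter ticks since the last interrupt to usec */
            tv->tv_usec += (long)((double)elapsed * 1000000.0 / COUNTER_HZ);
            while (tv->tv_usec >= 1000000) {
                    tv->tv_usec -= 1000000;
                    tv->tv_sec++;
            }
    }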

Does it make any difference anyway when the machines which have the
higher precision aren't even at the lowest stratum level that I run?
I would think that it might not make any difference unless every clock
from a stratum 1 on up in the path had the high precision.

--Bill Davidson

Craig_Everhart@TRANSARC.COM (06/07/91)

I believe that the ``precision'' concept is used only to create an
envelope around a given clock reading.  On a 50Hz clock, say, a given
microsecond reading can be good only to the nearest(?) 1/50th of a second.
On a real microsecond clock, it is possible to be precise to the
microsecond.  This is a useful concept when you're reading a remote
clock multiple times, and comparing it against your own clock; you use
the precision of the remote clock and that of your own clock to help you
interpret the differences.  For example, if your clock and some other
clock are both precise only to 50Hz, any reading of either of them could
be off by 20 milliseconds.  If you read your clock and the other clock
at the same time, up to 40 milliseconds of their difference could be
attributable to their imprecision.
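
In other words (a trivial sketch, not anything taken from the NTP code
itself):

    /* Worst-case uncertainty when comparing readings of two clocks,
     * each of which only advances once per tick.
     */
    double
    comparison_error(local_hz, remote_hz)
            double local_hz, remote_hz;
    {
            return 1.0 / local_hz + 1.0 / remote_hz;    /* seconds */
    }

    /* comparison_error(50.0, 50.0) == 0.040, i.e. the 40 milliseconds
     * mentioned above. */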

Precision is different from accuracy.  Venturing definitions here,
accuracy represents how close a value is to truth; precision is instead
like the number of significant figures in a value.  A temperature value
of 98.95923331 (deg. F.) for my body temperature is very precise, but
it's probably accurate only to half a degree.

There was a program floating around that made a good guess as to the
precision of your local clock by making calls to gettimeofday() and
looking at the smallest difference in microsecond values that it got. 
There was a bit of trickiness at high precision, since lots of
gettimeofday() routines remember the last result returned, and make sure
that each successive call gets a unique value--generally by adding 1 to
the microsecond counter.  Of course, even if one program on a system
calls gettimeofday() in a tight loop, there are other callers of the
same kernel routine, so the values you get back could increment by 2 or
3 or 5 or more even if there's no real extra precision there.  The
program interpreted successive microsecond values of
    n, n, n, n, n+20000, n+20000,
	-> 50Hz precision
    n, n, n, n, n+16666, n+16666,
	-> 60Hz precision
    n, n, n, n, n+1000, n+1000,
	-> 1000Hz precision
as well as the trickier
    n, n+1, n+2, n+3, n+1000, n+1001,
	-> 1000Hz precision
    n, n+1, n+4, n+5, n+500, n+502,
	-> 2000Hz precision
and it's just plain tricky to calculate:
    n, n+1, n+3, n+4, n+7, n+8, n+14, n+15, ...
	-> very precise clock or very busy gettimeofday() routine???
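
The core of such a probe might look like this (a reconstruction of the
idea, not the original program; on systems that bump the microseconds
by 1 to keep results unique, the raw minimum comes out as 1 and the
stream has to be interpreted as described above):

    #include <stdio.h>
    #include <sys/time.h>

    int
    main()
    {
            struct timeval prev, cur;
            long diff, min_diff = 1000000L;
            int i;

            gettimeofday(&prev, (struct timezone *)0);
            for (i = 0; i < 100000; i++) {
                    gettimeofday(&cur, (struct timezone *)0);
                    diff = (cur.tv_sec - prev.tv_sec) * 1000000L
                         + (cur.tv_usec - prev.tv_usec);
                    if (diff > 0 && diff < min_diff)
                            min_diff = diff;
                    prev = cur;
            }
            printf("smallest step: %ld usec (about %ld Hz)\n",
                   min_diff, 1000000L / min_diff);
            return 0;
    }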

Given all this, you can probably set your precision value based on the
smallest possible meaningful difference between gettimeofday() calls. 
If you really have a microsecond crystal behind your gettimeofday()
call, it sounds like you could reasonably claim that your precision is
-20 (1000000Hz ticker).
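
NTP expresses precision as a signed power of two, in seconds, so the
conversion from tick rate to precision value is roughly this (the
function name is just for illustration):

    #include <math.h>

    int
    precision_from_hz(hz)
            double hz;
    {
            return (int)floor(log(1.0 / hz) / log(2.0));  /* log2 of the tick */
    }

    /* precision_from_hz(100.0)     == -7    (10 millisecond tick)
     * precision_from_hz(1000000.0) == -20   (1 microsecond tick)   */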

		Craig