[fa.tcp-ip] Time servers

tcp-ip@ucbvax.ARPA (06/25/85)

From: ulysses!smb@BERKELEY (Steven Bellovin)

I'm implementing some network-based time stuff, and I find I need more
precision (in two senses of the word) than RFC868 provides.

First, what do folks think of allowing an (optional) second "word", giving
the time in microseconds.  (Yes, that's what Berkeley UNIX gives; no, that's
not why I'm using those units.)  As long as clients check the received
length on a message, the current behavior would still work.
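Such a length-checked client could be sketched as follows in Python (the function and its names are illustrative only, not part of RFC 868 or the proposal):

```python
import struct

# Hypothetical parser for an RFC 868 reply extended with an optional
# second 32-bit word of microseconds.  A client that checks the received
# length keeps working against servers that send only the standard
# 4-byte reply.
def parse_time_reply(data):
    """Return (seconds, microseconds) from a time-server reply.

    seconds      -- 32-bit count since 1 Jan 1900 UT (RFC 868)
    microseconds -- 0 if the server sent only the standard 4 bytes
    """
    if len(data) >= 8:
        secs, usecs = struct.unpack("!II", data[:8])
    elif len(data) >= 4:
        (secs,) = struct.unpack("!I", data[:4])
        usecs = 0
    else:
        raise ValueError("short time-server reply")
    return secs, usecs
```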

Second, given the current standard, how should a system with a more precise
idea of the time round its response?  Truncate?  Round?  The current RFC
is silent.
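The two candidate behaviors amount to something like this (a sketch; the helper names are mine, assuming a server that holds seconds plus microseconds internally and must emit whole seconds):

```python
def truncate(secs, usecs):
    """Drop the fractional second entirely."""
    return secs

def round_nearest(secs, usecs):
    """Round to the nearest whole second (half a second rounds up)."""
    return secs + (1 if usecs >= 500000 else 0)
```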


		--Steve Bellovin
		AT&T Bell Laboratories
		ulysses!smb@berkeley.arpa
		smb.ulysses.btl@csnet-relay

tcp-ip@ucbvax.ARPA (06/25/85)

From: MILLS@USC-ISID.ARPA

In response to the message sent  Mon, 24 Jun 85 22:26:44 edt from ulysses!smb@Berkeley 

Steve,

I think you might be heading down the wrong road. We had quite a long discussion
on these points some years back when the present protocols were being designed.

1. TCP-derived timestamps can never achieve precisions much better than a
   few seconds, due to dispersions in transmission and service times on
   typical hosts. I try to discourage anyone from using TCP for this in the
   first place.

2. UDP-derived timestamps can be expected to achieve precisions on the order
   of a second on most hosts and operating systems, although the TOPS-20 is
   not one of them. The problem is queueing delays at several points in the
   service process - time-slicing, interprocess message passing, paging and
   the like. We decided 32 bits of precision was justified for UDP.

3. ICMP-derived timestamps are the best we can do. In most systems the
   IP/ICMP layer is as close to the hardware driver as we can get, so the
   protocol delays at higher levels can be avoided. The residual errors are
   due to frame encapsulation, possible link-level retransmissions and so
   forth. We decided 32 bits of milliseconds was the most appropriate unit.
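For reference, the ICMP (RFC 792) timestamp field is a 32-bit count of milliseconds since midnight UT. A sketch of producing that value from a Unix-style clock (the function name is mine):

```python
# ICMP Timestamp messages (RFC 792) carry a 32-bit count of milliseconds
# since midnight UT.  Unix time counts seconds from 1 Jan 1970 UT, so the
# position within the current UT day is just the remainder mod one day.
SECS_PER_DAY = 86400

def icmp_timestamp_ms(unix_time):
    """Milliseconds since midnight UT for a given Unix time (float seconds)."""
    since_midnight = unix_time % SECS_PER_DAY
    return int(since_midnight * 1000)
```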

4. For anything more precise than milliseconds, you need to be very careful
   about your technique. Absolute timetelling to this precision requires
   carefully calibrated radio clocks or atomic standards. Relative delays
   between mutually synchronized clocks are easier, but precisions better than
   a millisecond require carefully controlled link delays and constant-drift
   intrinsic oscillators. This is what the fuzzballs strive to do. They have to
   work so hard at it that the intrinsic drift of the ovenless crystal
   oscillators can be measured individually via the network.

5. The usefulness of any timestamp is relevant only to the extent the
   application program can operate with it. It doesn't make sense to deliver
   a super-accurate timestamp to a user program trying to control a real-time
   process when its control mechanism has inherent random delays on the order
   of disk-seek latencies. This comment does not apply if you are measuring
   differences in timestamps (delays), of course.

6. With respect to Unix timestamps. Our Sun workstation has a hard time
   maintaining clockwatch to within several seconds (sic), much less to within
   an order of milliseconds. It is not clear whether this is due to oscillator
   instability or simply sloppy implementation. The apparent time drifts wildly
   relative to our rather precise network clock as measured by our local-net
   clock-synchronization algorithms.

7. In my experience the most pressing need for additional protocol development
   is a mechanism to determine the order of precision of a delivered timestamp.
   For instance, during the recent episodes when we told our clockwatching
   friends gross lies in timestamps due to local power disruptions, we should
   have been able to indicate relative faith, perhaps as a field in the header
   (assuming ICMP for the record). We should also be able to convey whether the stamp
   was derived from primary, secondary or other standards and whether it
   was determined by a third party.

The bottom line is to suggest that you use ICMP as the primary source of precise
milliseconds and resolve high-order ambiguity with UDP and/or TCP only as
necessary.
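That bottom line could be sketched like so (hypothetical helper; assumes the two readings fall within the same UT day, and the names and inputs are illustrative rather than from any protocol spec):

```python
# Coarse seconds from an RFC 868 (UDP) reply, refined with the fractional
# second from an ICMP milliseconds-since-midnight-UT timestamp.  The UDP
# value resolves the high-order ambiguity (which day it is); the ICMP
# value supplies the precise position within that day.
SECS_PER_DAY = 86400

def combine(rfc868_secs, icmp_ms_since_midnight):
    """RFC 868 seconds (since 1900 UT) refined by ICMP ms-since-midnight."""
    day_start = rfc868_secs - rfc868_secs % SECS_PER_DAY  # UT day boundary
    return day_start + icmp_ms_since_midnight / 1000.0
```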

Dave
-------

tcp-ip@ucbvax.ARPA (06/25/85)

From: POSTEL@USC-ISIF.ARPA


Steve:

I agree with Dave Mills: the RFC-868 Time Protocol is not expected
to have accuracy better than one second.  I'd be very surprised if
it was ever that good.  To find out more about the procedures Dave
uses, see RFCs 891 and 778.

--jon.
-------

WUTS@USC-ECLC.ARPA (Maurice J. Wuts) (10/03/85)

I know there are Unix time pollers / servers out there.  Is there anyone
with a Tops20 version?  
				Maurice Wuts
-------