[comp.protocols.time.ntp] DTS vs. NTP

paul@kuhub.cc.ukans.edu (02/26/91)

Greetings, fellow oscillators! :-)

DECnet Phase V includes something called DTS (Digital Time Service).
Does anyone know if DTS and NTP are indeed one and the same? Digital's
seminar about planning for DECnet Phase V provided no information.

Any information would be appreciated! Thank you.

marcus@ksr.com (Mark Anthony Roman) (02/26/91)

DTS and NTP are indeed NOT one and the same.  While both claim to do time
synchronization, DTS was designed with very different goals and
assumptions, especially concerning accuracy, availability of high-stratum
clocks, breadth of the served networks, and response to net pathologies.

DTS was recently selected by the Open Software Foundation for their
Distributed Computing offering.


			mark roman


------------------------------------------------------------------------
mark anthony roman				systems programmer
marcus@ksr.com					Kendall Square Research
uunet!ksr.com!marcus				+1.617.895.9480

Mills@udel.edu (02/26/91)

Paul,

DTS and NTP are not the same protocol and, indeed, have somewhat different
requirements drivers. Having said that, be advised that good ideas were
pirated from DTS, added to other good stuff, and made part of NTP
Version 3. Additional discussion and comparison between NTP and DTS can
be found in the file pub/ntp/dts.txt on louie.udel.edu. Since this amounts
to a verbatim transcript of several informal messages exchanged among
friends, I ask you to use it only for personal information and not to quote,
cite or redistribute it.

Dave

Mills@udel.edu (02/26/91)

Mark,

Can you tell me whether the DTS model blessed by OSF had the local-
clock machinery changed to include frequency correction (type-II 
phase-lock loop)? This was one of the open issues remaining from
our last round of discussions.

Dave

gary@proa.sv.dg.com (Gary Bridgewater) (02/26/91)

In article <28775.27c916d0@kuhub.cc.ukans.edu> paul@kuhub.cc.ukans.edu writes:
>Does anyone know if DTS and NTP are indeed one and the same? Digital's
>seminar about planning for DECnet Phase V provided no information.

If not, wouldn't

	DEC knows how to have a Good Time

be a natural for a tradeshow button?
-- 
Gary Bridgewater, Data General Corporation, Sunnyvale California
gary@sv.dg.com or {amdahl,aeras,amdcad}!dgcad!gary
C++ - it's the right thing to do.

marcus@ksr.com (Mark Anthony Roman) (02/26/91)

	From: Haavard Eidnes <uunet!idt.unit.no!he>
[. . .]
	In-Reply-To: Your message of Mon, 25 Feb 91 17:15:38 EST 
	Message-Id: <CMM.0.88.667557342.he@garm.idt.unit.no>
	Status: R

	But do any of you know enough about DTS to say if (and how) a DTS subnet
	could use NTP as its external clock source? Sort of "the revenge of NTP"...

While I'm not sure about the DTS implementation from DEC, the OSF
offering will most likely provide a gateway between the two, i.e.  a
listener to NTP which will convert its numbers into a confidence
interval.  There may be some issue with redistribution, as DTS uses an
ever-increasing confidence interval as the measure of accuracy and the
prevention of loops, while NTP uses a fixed stratum approach to
preventing loops in the synchronization mesh.
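
For the curious, the arithmetic involved is small.  A toy sketch in C of
the kind of conversion such a gateway might do follows; the structure and
function names are invented for this illustration (they come from no DEC,
OSF, or NTP source), and the distance formula is just the NTPv3-style
bound of half the root delay plus the root dispersion.

/*
 * Illustrative sketch only: how a hypothetical NTP-to-DTS gateway might
 * turn an NTP reading into a DTS-style confidence interval.  All names
 * here are made up for this example.
 */
#include <stdio.h>

struct ntp_reading {
    double offset;          /* estimated clock offset, seconds          */
    double root_delay;      /* total round-trip delay to the root, sec  */
    double root_dispersion; /* accumulated dispersion to the root, sec  */
};

struct dts_interval {
    double earliest;        /* lower bound on the true offset, sec      */
    double latest;          /* upper bound on the true offset, sec      */
};

/* NTPv3-style synchronization distance: half the root delay plus the
 * root dispersion bounds the offset error. */
static double sync_distance(const struct ntp_reading *r)
{
    return r->root_delay / 2.0 + r->root_dispersion;
}

/* Map the NTP reading to an interval that, under NTP's own correctness
 * assumptions, contains the true offset. */
static struct dts_interval to_dts_interval(const struct ntp_reading *r)
{
    double lambda = sync_distance(r);
    struct dts_interval iv = { r->offset - lambda, r->offset + lambda };
    return iv;
}

int main(void)
{
    struct ntp_reading r = { 0.0032, 0.0410, 0.0065 };
    struct dts_interval iv = to_dts_interval(&r);
    printf("offset in [%+.4f, %+.4f] seconds\n", iv.earliest, iv.latest);
    return 0;
}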


		mark

wunder@HPSDEL.SDE.HP.COM (Walter Underwood) (02/27/91)

The OSF DCE Rationale talks about services inside a cell and between
cells.  A cell is a group of 2-1000 well-connected hosts, basically
all the "local" machines.  Services inside a cell are optmised for
performance, availability, and management.  Services between cells are
optimised for interoperablity and scalability (10,000 cells), and tend
to be International Standard kind of things.

A good example is name service: DECdns inside a cell, X.500 between
cells.  DECdns adds replicated servers, caching, and some other stuff
that X.500 doesn't do.

For time service, the "between cells" part of the DCE is left blank.
I don't really think that OSF expects DECdts to be the answer for
synchronising 10,000 cells.  NTP could be the answer.

wunder

rayaprol@SNOOK.ECS.UMASS.EDU (Venu S Rayaprolu) (02/27/91)

There are programs called Time Provider Interfaces to enable DTS to
use NTP, ACTS, TRACONEX, etc. as its external clock sources.

--
Venu S Rayaprolu

rees@pisa.citi.umich.edu (Jim Rees) (02/27/91)

In article <28775.27c916d0@kuhub.cc.ukans.edu>, paul@kuhub.cc.ukans.edu writes:

  DECnet Phase V includes something called DTS (Digital Time Service).
  Does anyone know if DTS and NTP are indeed one and the same? Digital's
  seminar about planning for DECnet Phase V provided no information.

They are not the same.  Most everyone I know uses ntp, but OSF has chosen
dts for DCE (enough TLAs?).

Dave Mills (of fuzzball fame) wrote a paper comparing these two:  Draft
document distributed to the NTP engineering group on 12 February 1990:  "A
Comparison of the Network Time Protocol and Digital Time Service."  This
triggered some discussion on the net.  Joe Comuzzi of DEC wrote a rebuttal.
The whole exchange makes good reading and is more than you ever wanted to
know about time keeping.

The ntp timestamp resolves to 200 picoseconds (6 cm at the speed of light!)
but will roll over in the year 2036.  As Joe says,

    The DTS time is a signed 64 bits of 100 nanoseconds since Oct 15, 1582.
    It will not run out until after the year 30,000 AD. Unlike NTP which
    will run out in 2036. I, for one, intend to still be alive in 2036!
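
The numbers fall straight out of the formats: NTP is a 32-bit seconds
field (counted from 1 January 1900) plus a 32-bit fraction, while DTS is
a signed 64-bit count of 100-nanosecond ticks from 15 October 1582.  A
back-of-the-envelope check in C (purely illustrative, not taken from
either implementation; it also refines the round 200-picosecond figure
a little):

/* Arithmetic behind the figures quoted above; illustration only. */
#include <stdio.h>

int main(void)
{
    /* NTP: 32-bit seconds since 1 Jan 1900 plus a 32-bit fraction. */
    double ntp_lsb_sec  = 1.0 / 4294967296.0;                 /* 2^-32 s */
    double ntp_span_yrs = 4294967296.0 / (365.25 * 86400.0);

    printf("NTP LSB  : %.0f picoseconds\n", ntp_lsb_sec * 1e12); /* ~233 */
    printf("NTP span : %.1f years from 1900 (rolls over in 2036)\n",
           ntp_span_yrs);                                        /* ~136 */

    /* DTS: signed 64-bit count of 100-ns units since 15 Oct 1582. */
    double dts_span_yrs = 9223372036854775807.0 * 100e-9
                          / (365.25 * 86400.0);
    printf("DTS span : about %.0f years from 1582\n", dts_span_yrs);
                                                             /* ~29,000 */
    return 0;
}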

Mills@udel.edu (02/27/91)

Mark,

Weenie quibble: the NTP stratum number (really, the hop count to the root
(primary server)) is variable, depending on the subnet configuration
and the wisps and clanks of the Bellman-Ford minimum spanning tree
algorithm. In my discussions with the DECfolk I came to appreciate
the usefulness of the correctness interval so laboriously calculated
in DTS and in fact stole the idea and incorporated it in NTP. It
was my intent that the NTP correctness interval, actually called the
synchronization distance in NTP version 3, provably contain the DEC
correctness interval. I say contain, rather than coincide, because
the vanilla DEC selection algorithm (based on Marzullo's dissertation)
usually results in rather poor accuracy compared with the older
(prior-NTP) algorithms, so I modified the algorithm slightly to
retain the previous accuracy while preserving the correctness arguments.
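
The idea behind Marzullo-style selection is simply to find the
sub-interval covered by the largest number of source intervals.  A toy
sketch in C follows; it is a bare-bones illustration with made-up sample
intervals, not the DEC or NTP selection code, and it omits the midpoint
and falseticker handling that the real algorithms worry about.

/* Toy Marzullo-style intersection: find the sub-interval contained in
 * the largest number of source intervals.  Illustration only. */
#include <stdio.h>
#include <stdlib.h>

struct edge { double x; int type; };  /* +1 = interval start, -1 = end */

static int cmp(const void *a, const void *b)
{
    const struct edge *ea = a, *eb = b;
    if (ea->x < eb->x) return -1;
    if (ea->x > eb->x) return  1;
    return eb->type - ea->type;       /* starts sort before ends        */
}

int main(void)
{
    double lo[] = { -0.010, -0.002,  0.001 };   /* sample lower bounds  */
    double hi[] = {  0.004,  0.006,  0.012 };   /* sample upper bounds  */
    int n = 3, i, count = 0, best = 0;
    double best_lo = 0.0, best_hi = 0.0;
    struct edge e[6];

    for (i = 0; i < n; i++) {
        e[2*i]   = (struct edge){ lo[i], +1 };
        e[2*i+1] = (struct edge){ hi[i], -1 };
    }
    qsort(e, 2*n, sizeof e[0], cmp);

    for (i = 0; i < 2*n; i++) {
        count += e[i].type;
        if (count > best) {           /* deepest overlap begins here;   */
            best = count;             /* its matching end edge follows, */
            best_lo = e[i].x;         /* so e[i+1] is always in range   */
            best_hi = e[i+1].x;
        }
    }
    printf("%d of %d sources agree on [%+.3f, %+.3f]\n",
           best, n, best_lo, best_hi);
    return 0;
}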

All this to say that (a) an NTP version 3 implementation can (and should)
provide the correctness interval to a client, which could of course
be a DTS time provider interface, and (b) an NTP daemon could in principle
interwork with a DTS daemon, as long as the DEC correctness interval were
mapped to the NTP synchronization distance. I expressed some concern to
the DECfolk on this last issue, since a DTS/NTP interworking subnet
of any real size, like what we now see in the Internet, might become unstable
under the right circumstances. You will note that the current maze is getting
really large and, for all those rambunctious oscillators out there, has
not displayed any instabilities I have seen. You guys may beat me up about
my beloved phase-lock loops and bicker about their parameters, but the
darn things really do work as designed.

Dave

Mills@udel.edu (02/27/91)

Jim,

Well, I suppose I should reveal why I chose the NTP format. The obvious
inference in DTS is that time began with the Gregorian Calendar, which
first ticked on 15 October 1582. The Pope was a Catholic, of course,
and had little experience with time zones, leap seconds and the like,
and besides he was more concerned with Easter falling in the right
festival epochs. Moslems, Jews and Hindus probably have mixed feelings
on this particular choice, so I felt it not quite a terrific idea.

I chose 1 January 1900 simply because Jon Postel had preempted that
design choice in UDP/TIME and I wanted simple conversion. I could have
chosen the primal tick of the Julian Era, 4713 BC, but that's too many
bits and few cosmologies twitch in periods that long. If I had it to
do again, I would choose 1 January 1972 when UTC itself was born. How
many leap seconds since 1582 anyway?

As to the choice of format (yawn), most time buzzards agree that 64
bits is about right, but to some epochs are more important than picoseconds
and to others the other way around. I was very concerned that high-speed
nets that may come along be served, as well as time-transfer applications
in fields other than computer networking. Not the least
consideration was that the time format be easily cleaved into useful
chunks without requiring multiply/divide (read that process controllers
and PCs). The natural cleavage seemed to be seconds and fractions, which
leads to an LSB of an incredible 232 nanoseconds. According to journal
articles and sworn statements from my geodetic friends, this is about
the precision necessary to map the world, measure continental drift and
confirm global warming.
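
In code the cleavage is nothing more than a shift and a mask, with no
multiply or divide anywhere.  A small illustration in C (the timestamp
value is just an example, not from any implementation):

/* Splitting a 64-bit seconds.fraction timestamp with shifts and masks
 * only.  Illustration only. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 1 January 2000 00:00:00 = 3,155,673,600 seconds since 1 January
     * 1900 in the high word, plus half a second in the low word. */
    uint64_t ts = ((uint64_t)3155673600UL << 32) | 0x80000000UL;

    uint32_t seconds  = (uint32_t)(ts >> 32);          /* whole seconds */
    uint32_t fraction = (uint32_t)(ts & 0xffffffffUL); /* 2^-32 s units */

    printf("seconds since 1900: %lu\n", (unsigned long)seconds);
    printf("fraction: 0x%08lx (= %.3f s)\n",
           (unsigned long)fraction, fraction / 4294967296.0);
    return 0;
}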

The question of epoch, so dear to many hearts, may be a red herring. There
are lots of other protocols, like my investment accounts, spreadsheets
and even other time protocols that have various degrees of Rolloveritis.
I even got a call from a NY Times reporter who wanted me to comment on
a report he heard that on 31 December 1999 the Social Security computers
would all stop and all retirees would starve. Now, it's highly unlikely that
many applications will find it necessary to determine precision intervals
spanning many years to a precision of 232 nanoseconds; but, if any do turn
up, they will need to account for leap seconds, which requires some kind
of institutional memory. If you have that, it's a simple matter to add
a leap-136-year bit to NTP and we all get well. 

My answer to all this is the Maya Long Count calendar dating. The Maya
were the best timekeepers in the world and they kept the calendar precise
for over a millennium. See the recent article in Scientific American.

Dave

rbthomas@frogpond.rutgers.edu (Rick Thomas) (02/28/91)

Dave,

>           The natural cleavage seemed to be seconds and fractions, which
> leads to an LSB of an incredible 232 nanoseconds.
	 ...
>                                               Now, it's highly unlikely that
> many applications will find it necessary to determine precision intervals
> spanning many years to a precision of 232 nanoseconds;

You mean pico-seconds, right?

Rick

Mills@udel.edu (03/03/91)

Rick,

Well, cleave my tongue. Picoseconds, not even nanoseconds be our game.
Thanks for the sanity check.

Dave