[sco.opendesktop] SCO Unix, ALR FlexCACHE losing time

vlr@litwin.com (Vic Rice) (01/02/91)

I have been running nbs_time from cron to keep my system's clock up to
date. Below is a piece of my clock log file:

12/27/90-06:00:56  Clock is 118 seconds slow.
12/27/90-06:02:54  New time set.

12/28/90-06:00:53  Clock is 63 seconds slow.
12/28/90-06:01:56  New time set.

12/29/90-06:00:53  Clock is 63 seconds slow.
12/29/90-06:01:56  New time set.

12/31/90-06:00:53  Clock is 104 seconds slow.
12/31/90-06:02:37  New time set.

01/01/91-06:00:53  Clock is 63 seconds slow.
01/01/91-06:01:56  New time set.

My system:    SCO ODT 1.0.0y (System V R3.2.1)
              ALR FlexCache 33/386
	      
What gives ??? Why am I losing over a second a day ??
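For what it's worth, the average drift rate can be computed from a log like the
one above (a rough sketch in Python; the timestamp format is taken from the
sample, and I'm assuming the clock is reset to true time after each check, so
only the "slow" entries after the first matter):

```python
from datetime import datetime

# (check timestamp, seconds slow) pairs taken from the log above
log = [
    ("12/27/90-06:00:56", 118),
    ("12/28/90-06:00:53", 63),
    ("12/29/90-06:00:53", 63),
    ("12/31/90-06:00:53", 104),
    ("01/01/91-06:00:53", 63),
]

def drift_per_day(log):
    """Average seconds lost per elapsed day between corrections,
    assuming the clock is reset to true time after each check."""
    fmt = "%m/%d/%y-%H:%M:%S"
    times = [datetime.strptime(t, fmt) for t, _ in log]
    # The first entry's drift accumulated before the log starts, so skip it.
    total_lost = sum(s for _, s in log[1:])
    elapsed_days = (times[-1] - times[0]).total_seconds() / 86400.0
    return total_lost / elapsed_days

print("%.1f seconds lost per day" % drift_per_day(log))
```

Over these five days that works out to roughly a minute lost per day, not a
second.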
-- 
Dr. Victor L. Rice
Litwin Process Automation

gsm@gsm001.uucp (Geoffrey S. Mendelson) (01/03/91)

in Message-ID: <1991Jan01.162400.6155@litwin.com>
Dr. Victor L. Rice writes:
>	      
>What gives ??? Why am I losing over a second a day ??
>-- 

This is a very common bug in IBM PC design.  It comes from
"features" of the IBM PC that have been carried over from the original.

Time loss may be caused by three different things, but I think yours
is the second or third:

1:  The battery backed-up clock loses (or gains) time.
    This is caused by the clock chip being "off".  The problem may be fixed
    by your motherboard vendor (in this case ALR), or it may not.  They
    probably do not warranty clock accuracy.

    This problem usually shows up on systems that are powered off most of the
    time.

    If it is really off, such as hours a day, or dead, replace the battery.

    A friend of mine had an early Tandy 3000 (Mitsubishi motherboard) that
    lost 11 seconds a day.  Tandy fixed it by replacing the motherboard.  It
    then lost 12 seconds a day.  On the third try they said it was within specs.

    I know of no published specs for clock accuracy. 

2:  When UNIX (and MS-DOS) boot, they read the battery backed-up clock.
    From then on they update the time on each "clock tick" interrupt.
    Most device drivers, especially disk drivers, turn off interrupts while
    they are running.

    The clock will be "off" by 1/60 of a second until the interrupt gets
    processed.  Since disk drivers don't want to be interrupted, they turn
    off interrupts during transfers.  If for some reason they are busy for
    more than 1/60 of a second, you lose any clock ticks after the first.

    If you have a tape or SCSI driver that is hit very hard you may see this.
    A serial card driver may also block interrupts, but not likely for that
    long a time.
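To put numbers on that mechanism: at 60 ticks per second, each missed tick
costs 1/60 of a second, so a fixed daily loss translates directly into a count
of missed ticks (back-of-the-envelope only; the 60 Hz tick rate is the one
assumed above):

```python
HZ = 60                          # assumed clock-tick rate, as above

def missed_ticks_for(loss_seconds):
    """Number of clock ticks that must be missed to lose this many seconds."""
    return loss_seconds * HZ

# Losing 63 seconds a day (as in the log that started this thread)
# would mean missing this many ticks:
per_day = missed_ticks_for(63)       # 63 * 60 = 3780 missed ticks/day
per_second = per_day / 86400.0       # about 0.044 missed ticks every second

print(per_day, round(per_second, 3))
```

That is a lot of lost interrupts, which is why a steady loss of a minute a day
points more at cause 1 or 3 than at cause 2.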
      
3:  Of course, the clock tick generator on your motherboard may be off;
    this is usually a crystal used for vertical sync.  In fact, it's supposed
    to be off.  This is because NTSC video actually refreshes at 59.94 Hz,
    not sixty hertz.  That number is a sub-multiple of the color carrier
    frequency, which is 3.579545 MHz.

Also a note on accuracy:
 
    1%  would be 864 seconds a day or 14 minutes 24 seconds
   .1%  would be 86  seconds a day or  1 minute  26 seconds
   .01% would be 8.6 seconds a day
   1 second a day is 1/86400, or almost 1 part in one hundred thousand.

How many scientific instruments can boast that accuracy?
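For the record, the arithmetic behind that table (a quick Python sketch of the
percentage-to-seconds conversion, nothing more):

```python
SECONDS_PER_DAY = 24 * 60 * 60        # 86400

def drift_seconds(accuracy_percent):
    """Seconds per day a clock may drift at a given percentage accuracy."""
    return SECONDS_PER_DAY * accuracy_percent / 100.0

for pct in (1.0, 0.1, 0.01):
    print("%5.2f%% -> %6.1f seconds/day" % (pct, drift_seconds(pct)))

# One second a day expressed as a fraction:
print("1 s/day = 1/%d, about %.1f parts per million"
      % (SECONDS_PER_DAY, 1e6 / SECONDS_PER_DAY))
```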
     
-- 
Geoffrey S. Mendelson
(215) 242-8712
uunet!gsm001!gsm

lws@comm.wang.com (Lyle Seaman) (01/04/91)

gsm@gsm001.uucp (Geoffrey S. Mendelson) writes:

>This is a very common bug in IBM PC design.  It comes from
>"features" of the IBM PC that have been carried over from the original.

>2:  When UNIX (and MS-DOS) boot, they read the battery backed-up clock.
>    From then on they update the time on each "clock tick" interrupt.
>    Most device drivers, especially disk drivers, turn off interrupts while
>    they are running.

A lot of Unices check the battery-backed clock periodically.  This
should bound the drift caused by disabled interrupts, so that the
long-term clock drift is solely attributable to the battery-backed clock.

>Also a note on accuracy:
> 
>    1%  would be 864 seconds a day or 14 minutes 24 seconds
>   .1%  would be 86  seconds a day or  1 minute  26 seconds
>   .01% would be 8.6 seconds a day
>   1 second a day is 1/86400 or 1 in almost 1 part in one hundred thousand.

>How many scientific instruments can boast that accuracy?

Well, my free Disneyworld watch, for one.
My free Grimace watch from McDonald's, for another.

-- 
Lyle                  Wang           lws@capybara.comm.wang.com
508 967 2322     Lowell, MA, USA     Source code: the _ultimate_ documentation.

tim@delluk.uucp (Tim Wright) (01/04/91)

In <1991Jan2.221527.15181@gsm001.uucp> gsm@gsm001.uucp (Geoffrey S. Mendelson) writes:


>in Message-ID: <1991Jan01.162400.6155@litwin.com>
>Dr. Victor L. Rice writes:
>>	      
>>What gives ??? Why am I losing over a second a day ??
>>-- 
...
>2:  When UNIX (and MS-DOS) boot, they read the battery backed-up clock.
>    From then on they update the time on each "clock tick" interrupt.
>    Most device drivers, especially disk drivers, turn off interrupts while
>    they are running.

>    The clock will be "off" by 1/60 of a second until the interrupt gets
>    processed.  Since disk drivers don't want to be interrupted, they turn
>    off interrupts during transfers.  If for some reason they are busy for
>    more than 1/60 of a second, you lose any clock ticks after the first.

>    If you have a tape or SCSI driver that is hit very hard you may see this.
>    A serial card driver may also block interrupts, but not likely for that
>    long a time.
>      
Is this really the case? How often do the above drivers do an splhi(), which
would lock out the clock?  I can't imagine it happens that much. The AT&T 3B15
had a bug where the disk buffer cache code locked out the clock, and it really
knocked the time out of whack when busy. From memory, v7-style block drivers
lock the buffer cache with spl6, which doesn't affect the clock. I suppose
a driver doing programmed I/O might want to lock out all interrupts, but I
can't see that squirting 512 bytes in a loop takes that long on a modern
system!  Anybody care to comment or correct me?
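A rough sanity check on the programmed-I/O case (the 2-microsecond per-word
cost is a guess on my part, not a measured figure; the point is only the order
of magnitude against a 1/60 s tick):

```python
TICK = 1.0 / 60                  # seconds between clock interrupts at 60 Hz

def pio_transfer_time(nbytes, usec_per_word, word_bytes=2):
    """Time to squirt nbytes through a 16-bit I/O port in a tight loop,
    at an assumed cost of usec_per_word microseconds per word moved."""
    return (nbytes / word_bytes) * usec_per_word * 1e-6

# A 512-byte sector at an assumed 2 microseconds per 16-bit word:
t = pio_transfer_time(512, 2.0)
print("transfer %.6f s vs tick %.6f s" % (t, TICK))
print("fraction of a tick: %.3f" % (t / TICK))
```

Even with a generous per-word cost, one sector is a few percent of a tick, so
a single transfer should never span a clock interrupt.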
>Also a note on accuracy:
> 
>    1%  would be 864 seconds a day or 14 minutes 24 seconds
>   .1%  would be 86  seconds a day or  1 minute  26 seconds
>   .01% would be 8.6 seconds a day
>   1 second a day is 1/86400 or 1 in almost 1 part in one hundred thousand.

>How many scientific instruments can boast that accuracy?
Well, my Casio wristwatch (quite cheap) is GUARANTEED accurate to 15 seconds a
month, or about 1 part in 172,800. I fail to see why the RTC-chip manufacturers
can't make one at least as good (except for pricing considerations :-)

Tim
--
Tim Wright, Dell Computer Corp. (UK) | Email address
Bracknell, Berkshire, RG12 1RW       | Domain: tim@dell.co.uk
Tel: +44-344-860456                  | Uucp: ...!ukc!delluk!tim
"What's the problem? You've got an IQ of six thousand, haven't you?"

gsm@gsm001.uucp (Geoffrey S. Mendelson) (01/05/91)

lws@comm.wang.com (Lyle Seaman) 
Replied to my note of:
>
>>Also a note on accuracy:
>> 
>>    1%  would be 864 seconds a day or 14 minutes 24 seconds
>>   .1%  would be 86  seconds a day or  1 minute  26 seconds
>>   .01% would be 8.6 seconds a day
>>   1 second a day is 1/86400 or 1 in almost 1 part in one hundred thousand.

>>How many scientific instruments can boast that accuracy?

With:


>Well, my free Disneyworld watch, for one.
>My free Grimace watch from McDonald's, for another.

Funny, my "Swiss Army Watch" does not. If you LOOK AT THE SPECS, most digital
watches do not claim to be accurate to one second a day.  Most of them are,
however.  The manufacturers don't check them for accuracy (the same goes for
clock crystals).

The father-in-law of a friend of mine was presented with a gift in the early
1980s. It was a "cheap" (under $10) LED watch that was accurate to less
than a second a month.  It was no different from the others made in
the same batch, except that someone bothered to test it and found out it
was that accurate. Obviously some others in the same batch were accurate
to one second a month, some probably were accurate to one second a year,
and some were (in)accurate to a minute a day or more.

The manufacturers only guarantee accuracy within a range. The accuracy of a
particular piece within that range is somewhat random, although you can
compute the probability of its being within a specific value.
-- 
Geoffrey S. Mendelson
(215) 242-8712
uunet!gsm001!gsm