[comp.mail.uucp] UUCP status files and weird dates - revisited.

jessea@dynasys.UUCP (Jesse W. Asher) (11/22/90)

In article <803@sci34hub.UUCP>, gary@sci34hub.sci.com (Gary Heston) wrote the following:
>In article <736@dynasys.UUCP> jessea@dynasys.UUCP (Jesse W. Asher) writes:
>=I was looking in the status files for UUCP connections and I was trying
>=to figure out what date was used to calculate the below number:
>
>=0 0 658135267 0 SUCCESSFUL rutgers
>     ^^^^^^^^^
>
>=I assume that this is the date of the last connection.  If this assumption
>=is incorrect, what is this number?  If it is correct, I have a few more
>=questions.  Why is such an old date used?  Why not just use the beginning
>
>The date is the internal format, which is defined as the number of seconds
>since January 1, 1970. (Reportedly day 0, year 0, of the age of Unix.)
>
>All Unix dates are maintained in this format, for some reason; in things
>like ls you see a converted date.

I guess I didn't make myself very clear when I posted the original message.
I knew the date was in the early seventies (couldn't remember exactly when
though).  The main thrust of my posting was

WHY is this date used?  WHY can't the beginning of the year be used instead?
It seems like a waste of resources to compute a date from the number of
seconds given for a twenty year old date - especially if so many programs
use this format.  The only thing I can think of that would make things 
difficult might be leap years.  No one answered why this is used and 
why something better couldn't be used.  That's what I really wanted to know.

Thanks to all those that did reply, though.  


-- 
      Jesse W. Asher                             Phone: (901)382-1609 
               6196-1 Macon Rd., Suite 200, Memphis, TN 38134
                UUCP: {fedeva,chromc,rutgers}!dynasys!jessea
 -> An atheist is a man with no invisible means of support.

mjr@hussar.dco.dec.com (Marcus J. Ranum) (11/22/90)

In article <754@dynasys.UUCP> jessea@dynasys.UUCP () writes:

>WHY is this date used?  WHY can't the beginning of the year be used instead?

	What about stuff that's older than Jan 1, 1990?  I have some
stuff from back then... Should we start dating it in negative time?
And, once we've laboriously converted every single database, file, and
whatnot that we have that contains a date, are we supposed to do it
all over again Jan 1, 1991?

	Timestamps are critical to a lot of real-world applications -
it's one thing to ignore time if you're running a floppy-based PC, but
it's another if you have a few gig of data that your business relies
on. The UNIX solution to handling time is brilliant, elegant, and
you'll come to realize it the first time [pun intended] you have to
write something that needs to safe-store things based on time on a
machine that handles it, say, as an ASCII string that includes a
daylight savings code.
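
	To make that concrete, here's a minimal sketch (the status-file
value is the one quoted earlier in this thread; everything else is made
up) of why an integer timestamp is so pleasant to work with: ordering
and arithmetic are plain integer operations, with no parsing and no
timezone guesswork.

#include <sys/types.h>
#include <time.h>
#include <stdio.h>

main()
{
    time_t a = 658135267;           /* e.g. a value out of a status file */
    time_t b = time((time_t *)0);   /* now */

    if (b > a)
        printf("newer by %ld seconds\n", (long)(b - a));
    else
        printf("older by %ld seconds\n", (long)(a - b));
    return 0;
}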

mjr.
-- 
"When choosing between two evils, give preference to the council of your
tummy over that of your testes. The history of mankind is full of disasters
that could have been averted by a good meal, followed by a nap on the couch."
		-Me, as explained to me by my wife's cat Strummer.

shawn@marilyn.UUCP (Shawn P. Stanley) (11/23/90)

In article <754@dynasys.UUCP> jessea@dynasys.UUCP () writes:
>WHY is this date used?  WHY can't the beginning of the year be used instead?
>It seems like a waste of resources to compute a date from the number of
>seconds given for a twenty year old date - especially if so many programs
>use this format.

That date format is not only used in status files; it's also used for
directory entries and file transfers.  Because of this, only one set of
functions is really necessary for converting the date/time to a
displayable format.

Since this common date/time format is so widely used, it's easy to
recognize as such.  And having a common format is helpful when transporting
data across dissimilar machines.

If one wished to "optimize" the date/time calculation for outputting
status dates in a displayable format, one could simply subtract a constant
from the value, thus reducing the number to something that would take less
time to calculate, theoretically.  However, you should note that to store
even a single year's worth of seconds still requires something larger than
a short int.  The time saved in division/modulo operations would be
practically unnoticeable, given that the same number of bits would be
used in the operations.

The only real waste in resources is the extra one or two characters saved
for the ASCII representation of the number.
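
A rough illustration of that point (the first value is the status-file
number quoted at the top of the thread; the second constant is roughly
the count for Jan 1, 1990 and is only there for the sake of the example):
whether the stored count runs from 1970 or from last New Year's Day,
both values are too big for a 16-bit int, so the conversion ends up
doing the same long divisions either way.

#include <stdio.h>

main()
{
    long full = 658135267L;                 /* seconds since Jan 1, 1970 */
    long rel  = 658135267L - 631152000L;    /* roughly seconds since Jan 1, 1990 */

    /* identical work in both cases: one division, one modulus per field */
    printf("full: %ld days, %ld seconds left over\n", full / 86400L, full % 86400L);
    printf("rel:  %ld days, %ld seconds left over\n", rel / 86400L, rel % 86400L);
    return 0;
}
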
--
Shawn P. Stanley         shawn@marilyn.marilyn.mn.org
bungia!marilyn!shawn     {rosevax,crash}!orbit!marilyn!shawn

wnp@iiasa.ac.at (wolf paul) (11/24/90)

In article <754@dynasys.UUCP> jessea@dynasys.UUCP () writes:
>In article <803@sci34hub.UUCP>, gary@sci34hub.sci.com (Gary Heston) wrote the following:
>>The date is the internal format, which is defined as the number of seconds
>>since January 1, 1970. (Reportedly day 0, year 0, of the age of Unix.)
>
>WHY is this date used?  WHY can't the beginning of the year be used instead?
>It seems like a waste of resources to compute a date from the number of
>seconds given for a twenty year old date - especially if so many programs

Well, how would you do this? If you stored dates as computed from the
beginning of the year (I presume you mean the current year), then you
need to also store the current year somewhere. Just as easy, if not
easier, to use a known reference date. 
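
To put it in code: a year-relative stamp (the structure below is purely
hypothetical) carries two pieces of state where a time_t carries one,
and every comparison has to look at both of them.

/* hypothetical year-relative stamp: two fields where time_t needs one */
struct yr_stamp {
    int  year;              /* has to be stored and kept current */
    long secs_into_year;    /* 0 .. roughly 31.6 million */
};

/* every comparison has to look at both fields ... */
int yr_earlier(a, b)
struct yr_stamp *a, *b;
{
    if (a->year != b->year)
        return a->year < b->year;
    return a->secs_into_year < b->secs_into_year;
}
/* ... where two time_t values compare with a single "<" */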

What I do not understand, however, is why a number of MS-DOS compilers
instead used 01/01/80 as the "epoch" for their UNIX-like ctime
functions.
--
W.N.Paul, Int. Institute f. Applied Systems Analysis, A-2361 Laxenburg--Austria
PHONE: +43-2236-71521-465            INTERNET: wnp%iiasa@relay.eu.net
FAX:   +43-2236-71313                UUCP:     uunet!iiasa!wnp
HOME:  +43-2236-618514               BITNET:   tuvie!iiasa!wnp@awiuni01.BITNET

jessea@dynasys.UUCP (Jesse W. Asher) (11/27/90)

In article <1990Nov22.024607.7474@decuac.dec.com>, mjr@hussar.dco.dec.com (Marcus J. Ranum) wrote the following:
>In article <754@dynasys.UUCP> jessea@dynasys.UUCP () writes:
>
>>WHY is this date used?  WHY can't the beginning of the year be used instead?
>
>	What about stuff that's older than Jan 1, 1990? I have some
>stuff from back then... Should we start dating it in negative time?
>And, once we've laboriously converted every single database, file, and
>whatnot that we have that contains a date, are we supposed to do it
>all over again Jan 1, 1991?

How does the system know how many seconds there are from 1970?  You tell it.
If it can calculate the number of seconds from Jan 1, 1970 it will know
whether it is 1990 or 1991.  As a matter of fact, I mentioned before that
the only problem I saw was that of leap years - but the current implementation
takes this into account so it shouldn't be a problem.  If some arbitrary point
in time can be used for all this, some other point in time can be chosen as
well.  As for dates needed before the current year, something else will have
to be done.  I have no idea what.  I know it sounds bizarre, but using a
negative number only for those programs that need it may not be such a bad
idea.  The only problem is that you might cut your range of possible dates
in half (what is the number for the date stored as, anyway?).  It just
doesn't seem very elegant to me to calculate from a twenty year
old date every time you need a date.  I really know nothing about this -
I'm just trying to find out and so far I still haven't had my question
answered.  In less than five years we will be calculating a 25 year old 
date and the total number will be approximately 817314005 (an approximation
of Nov. 26, 1995).  How much in total resources - both individual systems 
and all unix systems put together - would be saved if we only had to 
calculate a date using a maximum of approximately 31536000 seconds 
instead of that astronomical sum above?  Would it really make that much 
of a difference?

-- 
      Jesse W. Asher                             Phone: (901)382-1609 
               6196-1 Macon Rd., Suite 200, Memphis, TN 38134
                UUCP: {fedeva,chromc,rutgers}!dynasys!jessea

herrickd@iccgcc.decnet.ab.com (11/27/90)

In article <967@iiasa.UUCP>, wnp@iiasa.ac.at (wolf paul) writes:
> In article <754@dynasys.UUCP> jessea@dynasys.UUCP () writes:
>>In article <803@sci34hub.UUCP>, gary@sci34hub.sci.com (Gary Heston) wrote the following:
>>>The date is the internal format, which is defined as the number of seconds
>>>since January 1, 1970. (Reportedly day 0, year 0, of the age of Unix.)
>>
>>WHY is this date used?  WHY can't the beginning of the year be used instead?
>>It seems like a waste of resources to compute a date from the number of
>>seconds given for a twenty year old date - especially if so many programs
> 
> Well, how would you do this? If you stored dates as computed from the
> beginning of the year (I presume you mean the current year), then you
> need to also store the current year somewhere. Just as easy, if not
> easier, to use a known reference date. 
> 
> What I do not understand, however, is why a number of MS-DOS compilers
> instead used 01/01/80 as the "epoch" for their UNIX-like ctime
> functions.

Same reason.  The world began on 01/01/80.  If you take your copy of
MSDOS to a machine that does not have a clock/calendar chip to initialize
the MSDOS time of day clock and boot it, you will find that the date
is 1 January 1980.

dan herrick
herrickd@astro.pc.ab.com

tneff@bfmny0.BFM.COM (Tom Neff) (11/27/90)

This might be a good time to repost my 'ago' program for Sys V.
I made a new SHAR just for you.

#! /bin/sh
# This is a shell archive, meaning:
# 1. Remove everything above the #! /bin/sh line.
# 2. Save the resulting text in a file.
# 3. Execute the file with /bin/sh (not csh) to create:
#	README
#	ago.1
#	ago.c
# This archive created: Mon Nov 26 19:16:19 1990
export PATH; PATH=/bin:/usr/bin:$PATH
if test -f 'README'
then
	echo shar: "will not over-write existing file 'README'"
else
sed 's/^X//' << \SHAR_EOF > 'README'
X
XAGO is a System V command to display formatted date/time strings for
Xtimes in the past or future.  I needed to do this and couldn't find
Xanything out there, so voila!
X
XVersion 1.2 adds the -T switch to let you translate an absolute numeric
XUNIX time value into a formatted date string.
X
XYou will need getopt(3C) to parse the command line.  If this isn't
Xin your standard C library, get one of the PD versions and link it in.
X
X
X
X@(#) README	$Revision: 1.2 $		$Date: 90/11/26 19:05:05 $
SHAR_EOF
fi
if test -f 'ago.1'
then
	echo shar: "will not over-write existing file 'ago.1'"
else
sed 's/^X//' << \SHAR_EOF > 'ago.1'
X'\"	@(#) $Id: ago.1,v 1.2 90/11/26 19:15:20 tneff Exp $
X.TH AGO 1
X.SH NAME
X\fBago\fR \- display date for times in the past or future
X.SH USAGE
X.B ago 
X.B [-s secs] 
X.B [-m mins] 
X.B [-h hrs] 
X.B [-d days] 
X.B [-w wks] 
X.B [-T timeval]
X.B [+fmt]
X.SH DESCRIPTION
XThe
X.I ago
Xcommand is an extension to date(1) that displays a formatted date/time
Xstring for a moment in the past or future, specified in seconds, minutes,
Xhours, days and/or weeks relative to now.
X.I ago
Xaccepts the same format string as date(1).
X.sp
XWith no parameters specified,
Xor with only
X.I +fmt
Xspecified,
X.I ago
Xbehaves exactly like date(1).
XIf any of
X.I \-s \-m \-h \-d \-w
Xare specified,
X.I ago
X.B subtracts
Xthe argument values from the current time before display.
X(Specifying negative arguments will
X.B add
Xthe values to the current time, i.e., point into the future.)
X.sp
XIf the
X.I -T
Xoption is given, the
X.B timeval
Xoption value will be used as an absolute time argument
Xinstead of adding or subtracting from the current time.
XThis is useful for decoding binary timestamps found in
Xcertain databases.
X.sp
XFor further details on the format string, see date(1).
X.SH EXAMPLES
X.sp
X$ ago 
X.br
XFri Jan 12 17:22:05 EST 1990
X.sp
X$ ago -d20 -h3
X.br
XSat Dec 23 14:23:00 EST 1989
X.sp
X$ ago -w-3
X.br
XFri Feb  2 17:23:41 EST 1990
X.sp
X$ ago -T 651891946
X.br
XTue Aug 28 21:05:46 EDT 1990
X.sp
X$ ago -d-1 +'Tomorrow the time zone will be "%Z"'
X.br
XTomorrow the time zone will be "EST"
X.SH SEE ALSO
X.BR date(1), cftime(3C).
SHAR_EOF
fi
if test -f 'ago.c'
then
	echo shar: "will not over-write existing file 'ago.c'"
else
sed 's/^X//' << \SHAR_EOF > 'ago.c'
X/*
X * ago - display date for times in the past or future
X *
X * usage:
X *	ago [-s secs] [-m mins] [-h hrs] [-d days] [-w wks] [+fmt]
X *
X * where:
X *	-s secs		Specifies how far into the past to look.
X *	-m mins		A negative argument looks into the future.
X *	-h hrs
X *	-d days
X *	-w wks
X *	-T timeval	An absolute timeval replaces 'now' as a base.
X *
X *	+fmt		is a cftime(3C) format string with a leading '+'
X *			for compatibility with date(1).  If omitted, the
X *			date(1) default of "%a %b %e %T %Z %Y" is used.
X *
X * NOTE:
X *	Uses getopt(3C) to parse the command line.
X *
X * $Log:	ago.c,v $
X * Revision 1.2  90/09/18  15:45:07  tneff
X * add -T timeval switch -- specifies an absolute base instead of 'now'
X * 
X * Revision 1.1  90/01/12  20:20:13  tneff
X * Initial revision
X * 
X */
X
X#ident "@(#) $Id: ago.c,v 1.2 90/09/18 15:45:07 tneff Exp $"
X
X#include <sys/types.h>
X#include <time.h>
X#include <stdio.h>
X
X/*  The command mainline. */
X
Xmain(argc, argv)
Xint argc;
Xchar **argv;
X{
X	int c;
X	int errflg = 0;
X	extern char *optarg;
X	extern int optind;
X
X	time_t when = 0;
X	char *fmt;
X
X	char buf[1024];
X
X	int secs = 0, mins = 0, hrs = 0, days = 0, wks = 0;
X
X	/*  Collect option switches  */
X
X	while ((c = getopt(argc, argv, "s:m:h:d:w:T:?")) != -1)
X		switch (c)
X		{
X		case 's':
X			if ((secs = atol(optarg)) == 0)
X				errflg++;
X			break;
X		case 'm':
X			if ((mins = atol(optarg)) == 0)
X				errflg++;
X			break;
X		case 'h':
X			if ((hrs = atol(optarg)) == 0)
X				errflg++;
X			break;
X		case 'd':
X			if ((days = atol(optarg)) == 0)
X				errflg++;
X			break;
X		case 'w':
X			if ((wks = atol(optarg)) == 0)
X				errflg++;
X			break;
X		case 'T':
X			if ((when = atol(optarg)) == 0)
X				errflg++;
X			break;
X
X		default:
X			errflg++;
X		}
X
X	/*  Validate args and print usage message if bad  */
X
X	switch(argc-optind)
X	{
X	case 0:
X		fmt = "%a %b %e %T %Z %Y";
X		break;
X	case 1:
X		if (argv[optind][0] == '+')
X			fmt = argv[optind]+1;
X		else
X			errflg++;
X		break;
X	default:
X		errflg++;
X	}
X
X	if (errflg)
X	{
X		fprintf(stderr, "usage: %s [-s secs] [-m mins] [-h hrs] [-d days] [-w wks] [+fmt]\n", argv[0]);
X		exit(1);
X	}
X
X	/*  Start with 'now' or our preset value  */
X
X	if (when == 0)
X		when = time(NULL);
X
X	/*  Adjust  */
X
X	when -= secs*1;
X	when -= mins*60;
X	when -= hrs*60*60;
X	when -= days*60*60*24;
X	when -= wks*60*60*24*7;
X
X	/*  Format and output  */
X
X	cftime(buf, fmt, &when);
X	printf("%s\n",buf);
X}
SHAR_EOF
fi
exit 0
#	End of shell archive

mjr@hussar.dco.dec.com (Marcus J. Ranum) (11/27/90)

In article <756@dynasys.UUCP> jessea@dynasys.UUCP () writes:

>[...]  I really know nothing about this -

	obviously.

>I'm just trying to find out and so far I still haven't had my question
>answered.  In less than five years we will be calculating a 25 year old 
>date and the total number will be approximately 817314005 (an approximation
>of Nov. 26, 1995).  How much in total resources - both individual systems 
>and all unix systems put together - would be saved if we only had to 
>calculate a date using a maximum of approximately 31536000 seconds 
>instead of that astronomical sum above?  Would it really make that much 
>of a difference?

	very, very, very little difference. dates are stored as an
(time_t, really) integer, and conversions are performed mathematically
on that value - typically a modulus operation to generate an index into
a table of other values such as days of the week. since the value's an
integer, you don't even pay the cost of messing with floating point.
it's convenient to save the integer value in data files in its
integer form - saves space, and it means that if I give you a tape
archive of files, both of our machines will be able to interpret
that date normalized to a known time and a known timezone, rather
than having to perform lots of wasted conversion.
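
	as a concrete sketch of the kind of arithmetic involved (this is
an illustration, not the actual library code): January 1, 1970 fell on
a Thursday, so the day of the week falls out of one division and one
modulus on the raw value.

#include <sys/types.h>
#include <time.h>
#include <stdio.h>

static char *day[] = { "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat" };

main()
{
    time_t t = time((time_t *)0);
    long days = t / 86400L;             /* whole days since the epoch, UTC */

    /* day 0 (Jan 1, 1970) was a Thursday: index 4 in the table */
    printf("today (UTC) is %s\n", day[(days + 4) % 7]);
    return 0;
}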

	your suggestion (as I tried to point out before) would cost
*FAR* more in terms of resources to run around re-dating data in files
than to keep calculating from a 25 year old date. the computer doesn't
care that the date is 25 years old, I don't either, and since most
people use reasonably standardized date conversion libraries, they
don't have to either. what's your problem, then? just want some
change for the sake of a little change?

mjr.
-- 
	Good software will grow smaller and faster as time goes by and
the code is improved and features that proved to be less useful are
weeded out.	[from the programming notebooks of a heretic, 1990]

olson@bootsie.UUCP (Eric Olson) (11/28/90)

In article <1990Nov27.151512.24352@decuac.dec.com> mjr@hussar.dco.dec.com (Marcus J. Ranum) writes:
>In article <756@dynasys.UUCP> jessea@dynasys.UUCP () writes:
>>I'm just trying to find out and so far I still haven't had my question
>>answered.  In less than five years we will be calculating a 25 year old 
>>date and the total number will be approximately 817314005 (an approximation
>>of Nov. 26, 1995).  [...]
>
>	very, very, very little difference. dates are stored as an
>(time_t, really) integer, and conversions are performed mathematically [...]

The _real_ problem with storing dates as "number of seconds since Jan 1, 1970"
is that sometime in the year 2106, all the dates are gonna wrap around to
1970 again!

Of course, by then, we _might_ have given up on unix :-) :-) :-) :-) .

-Eric

P.S. Macintosh computers store the date as the number of seconds since 1/1/1904,
which means they will wrap around sometime in the year 2040!!!
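
A quick back-of-the-envelope check (this assumes the counter is an
unsigned 32-bit quantity; a signed 32-bit time_t runs out much sooner,
around 2038):

#include <stdio.h>

main()
{
    /* 2^32 seconds expressed in years */
    double years = 4294967296.0 / (365.25 * 24.0 * 60.0 * 60.0);

    printf("an unsigned 32-bit counter spans about %.0f years\n", years);
    printf("1970 base wraps around %d\n", 1970 + (int)years);  /* about 2106 */
    printf("1904 base wraps around %d\n", 1904 + (int)years);  /* about 2040 */
    return 0;
}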

-- 
Eric K. Olson, Editor, Prepare()       NOTE: olson@bootsie.uucp will not work!
Lexington Software Design              Internet: olson@endor.harvard.edu
72A Lowell St., Lexington, MA 02173    Usenet:   harvard!endor!olson
(617) 863-9624                         Bitnet:   OLSON@HARVARD

shawn@marilyn.UUCP (Shawn P. Stanley) (11/28/90)

In article <756@dynasys.UUCP> jessea@dynasys.UUCP () writes:
>How much in total resources - both individual systems and all unix systems
>put together - would be saved if we only had to calculate a date using
>a maximum of approximately 31536000 seconds instead of that astronomical
>sum above?  Would it really make that much of a difference?

I don't believe so, and for this reason: The math necessary to convert a
date/time from a number like that is not any more complicated for one
year or twenty years.  Since the number for one year is big enough to be
stored in a "long", as is the value for twenty years, there is no savings
in data type.  The number of bits to shift for doing a multiply or divide
is the same.

I have other reasons for liking the current system.  Since everyone agrees
to it across the board (Unix to Unix), there are no date/time conflicts
when communicating with other systems or using backup disks, etc.  You
don't have to worry about a backup being on "1988 time", or a system you're
talking to around the first of the year still being on the previous year's
time.

One thing you have to consider is that calculation is only necessary when
converting the number to display format.  Don't picture your system as
crunching away endlessly, calculating the date/time over and over again
using time-consuming math.  That's not what's happening.
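
A small sketch of that last point (ordinary library calls, with the
status-file value from earlier in the thread as the sample input): the
stored number just sits there, and nothing is converted until the
moment you actually want to read it.

#include <sys/types.h>
#include <time.h>
#include <stdio.h>

main()
{
    time_t stamp = 658135267;   /* stored and shipped around as-is */

    /* conversion happens only at display time */
    printf("raw:  %ld\n", (long)stamp);
    printf("text: %s", ctime(&stamp));  /* ctime() supplies its own newline */
    return 0;
}
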
--
Shawn P. Stanley         shawn@marilyn.marilyn.mn.org
bungia!marilyn!shawn     {rosevax,crash}!orbit!marilyn!shawn

Makey@Snoopy.Logicon.COM (Jeff Makey) (11/29/90)

In article <37@bootsie.UUCP> olson@endor.harvard.edu (Eric Olson) writes:
>The _real_ problem with storing dates as "number of seconds since Jan 1, 1970"
>is that sometime in the year 2106, all the dates are gonna wrap around to
>1970 again!

One of the few things that VAX/VMS does better than any other
operating system I am aware of is its range of dates.  Zero is
17 November 1858, it won't overflow for more than 10,000 years, and
the precision is 100 nanoseconds (better than current hardware can
provide).  Unfortunately, DEC more than compensated for this by
omitting any concept of time zones or daylight savings time.
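
(A rough check on those figures, assuming the usual description of the
VMS format as a 64-bit count of 100-nanosecond ticks from the
17 November 1858 base:)

#include <stdio.h>

main()
{
    /* 2^63 ticks of 100 nanoseconds each, expressed in years */
    double secs  = 9223372036854775808.0 * 1.0e-7;
    double years = secs / (365.25 * 24.0 * 60.0 * 60.0);

    printf("about %.0f years of range\n", years);   /* on the order of 29,000 */
    return 0;
}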

                           :: Jeff Makey

Department of Tautological Pleonasms and Superfluous Redundancies Department
    Disclaimer: All opinions are strictly those of the author.
    Domain: Makey@Logicon.COM    UUCP: {ucsd,nosc}!snoopy!Makey

guy@auspex.auspex.com (Guy Harris) (12/03/90)

>This might be a good time to repost my 'ago' program for Sys V.

Or, more correctly, for any system that has:

1) "getopt()" (although, as the README notes, you can pick this up for
   those systems that don't have it)

and

2) "cftime()".

More systems than System V have "getopt()", and not all System V systems
have "cftime()", so it's not really a program for "System V", it's a
program for System V Release 3.1 and later.

Adding an option to use "strftime()" or "cftime()" would make it a
program for more systems; "strftime()" is in the ANSI C standard,
"cftime()" isn't, so many systems have only "strftime()" (e.g., SunOS
4.1), and I expect future systems to be more likely to have "strftime()"
than "cftime()" (although some may have both).
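
For what it's worth, the cftime() call at the end of ago.c translates
into strftime() roughly as follows (a sketch only; show() is just a
hypothetical wrapper, and the two routines' format strings mostly
overlap but aren't guaranteed to be identical):

#include <sys/types.h>
#include <time.h>
#include <stdio.h>

/* format a time_t the way ago.c does, but via ANSI strftime() */
void show(when, fmt)
time_t when;
char *fmt;
{
    char buf[1024];

    strftime(buf, sizeof(buf), fmt, localtime(&when));
    printf("%s\n", buf);
}

main()
{
    /* ago's date(1)-style default; %e and %T may not be in every strftime */
    show(time((time_t *)0), "%a %b %e %T %Z %Y");
    return 0;
}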

tneff@bfmny0.BFM.COM (Tom Neff) (12/03/90)

In article <4633@auspex.auspex.com> guy@auspex.auspex.com (Guy Harris) writes:
>More systems than System V have "getopt()", and not all System V systems
>have "cftime()", so it's not really a program for "System V", it's a
>program for System V Release 3.1 and later.

I welcome all patches.  The doc file should go into more detail on the
limitations, but I never found the time.  I just needed 'ago' and wrote
it.  If it helps, use it.