bolles@reed.UUCP (Spencer Bolles) (01/19/85)
I have a friend who raised an interesting question that I immediately
tried to prove wrong.  He is a programmer and has this notion that when we
reach the year 2000, computers will not accept the new date.  Will the
computers assume that it is 1900, or will it even cause a problem?  I
violently opposed this because it seemed so meaningless.  Computers have
only come into existence during this century; has software, specifically
accounting software, been prepared for this turnover?  If this really
comes to pass and my friend is correct, what will happen?  Is it anything
to be concerned about?  I haven't given it much thought, but this
programmer has.  I thought he was joking, but he has even lost sleep over
this.  When I say 'friend,' I'm NOT referring to myself, if it seemed
that way.

	"I've never really written anything like that before"

					Spencer L. Bolles
jrb@wdl1.UUCP (01/21/85)
Referring to an article on Julian dates by Gordon King in Dr. Dobb's
Journal #80 (June 1983), pages 66-70: most computer systems use some form
of modified Julian date internally, because it is compact to store and
simple arithmetic can be done on it.  A Julian date algorithm for a 16-bit
computer is valid for ~179.4 years (65,536 days).  This is used as an
offset from a base year, usually 1900, so such an algorithm would stop
working in 2079.  The base year has to be chosen fairly carefully because
of leap years.

John R Blaker
UUCP:	...!fortune!wdl1!jrb
ARPA:	jrb@FORD-WDL1
and	blaker@FORD-WDL2
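[A quick sketch of the arithmetic behind those figures; the 365.2425-day
mean year and the base year 1900 are illustrative assumptions, not taken
from the article itself:]

    #include <stdio.h>

    /* A 16-bit day counter offset from a base date can name 65,536
       distinct days before wrapping around. */
    int main(void)
    {
        double years = 65536.0 / 365.2425;
        printf("span: %.1f years\n", years);                  /* ~179.4 */
        printf("base 1900 fails in %d\n", 1900 + (int)years); /* 2079   */
        return 0;
    }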
gary@arizona.UUCP (Gary Marc Levin) (01/21/85)
> I have a friend who raised an interesting question that I immediately
> tried to prove wrong.  He is a programmer and has this notion that when
> we reach the year 2000, computers will not accept the new date.  Will
> the computers assume that it is 1900, or will it even cause a problem?
> ...
> Spencer L. Bolles

The problem won't be the computers, but the software.  Some software is
bound to be wrong, considering only the last two digits of the year.

Actually, the year 2000 will probably make some faulty software work
correctly for 100 years longer than it should.  2000 is the second-level
exception to the leap year rule.  Leap years are those years divisible by
4, EXCEPT those divisible by 100, EXCEPT those divisible by 400.  Programs
that assume that all multiples of 4 are leap years are wrong, but the
problem won't come up until 2100.
--
Gary Levin / Dept of CS / U of AZ / Tucson, AZ 85721 / (602) 621-4231
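[Gary's rule reads directly as C -- a minimal sketch, with year being the
full four-digit year:]

    /* Gregorian leap-year rule: divisible by 4, except centuries,
       except centuries divisible by 400. */
    int is_leap(int year)
    {
        return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    }

So is_leap(1900) == 0, is_leap(2000) == 1, and is_leap(2100) == 0; a
program using the bare year % 4 test agrees with this one from 1901
through 2099, which is Gary's point.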
bukys@rochester.UUCP (Liudvikas Bukys) (01/22/85)
Spencer L. Bolles:
> "... He is a programmer and has this notion that when we reach the
> year 2000, computers will not accept the new date.  Will the computers
> assume that it is 1900, or will it even cause a problem? ..."

Hey!  No big deal!  So what if every piece of code that prints dates with
ctime(3) starts believing every year in the 21st century is Year 2, thanks
to a little parenthesization error?

	cp[2] = '0' + t->tm_year >= 200;

Or, as Joe Bob would say, "It could happen here."

P.S.  I will leave unnamed the particular Unix version I pulled this
source line from.  I don't know which of the popular factions introduced
it first or fixed it first.  I don't want to know, and please don't
tell me.
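[The sting is operator precedence: + binds tighter than >=, so the line
compares ('0' + t->tm_year) against 200 and stores the 0-or-1 result as
the character.  A sketch of the difference, with standalone variables for
illustration rather than the original source:]

    #include <stdio.h>

    int main(void)
    {
        int tm_year = 101;                    /* 2001, counted from 1900 */
        char buggy = '0' + tm_year >= 200;    /* ('0'+tm_year) >= 200: 0 or 1 */
        char fixed = '0' + (tm_year >= 200);  /* the digit '0' or '1' */
        printf("buggy: %d  fixed: '%c'\n", buggy, fixed);
        return 0;
    }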
dgary@ecsvax.UUCP (D Gary Grady) (01/22/85)
<>
> The problem won't be the computers, but the software.  Some software is
> bound to be wrong, considering only the last two digits of the year.

And thereby hangs a tale:  In 1978, when I was working in banking, I ran
across a curious date storage format.  It seems that transaction dates
were coded with the last digit of the year in one nibble, the month in
hex in the next, and the day (in packed decimal) in the next two.

I asked one of the more senior systems analysts about this and she
informed me that when the record was originally designed, only the month
and day (in packed decimal) had been included.  This caused sorting
problems on statements printed in January, because checks written in
December of the previous year would sort after checks written in January
of the current one.  So the format had been modified to the one I just
described.

"Good grief!" said I.  "What happens in January of 1980?"  She turned
pale and admitted she had considered that before but managed to put it
out of her mind.  "So why not go ahead and fix it now?" I asked.  She
pointed out that fixing it would require expanding the demand deposit
master record format, a mammoth undertaking.  About a billion COBOL
programs would have to be recompiled.  At this shop we were still on
cards, and a rush compile took about a week.  "You want to do that?" she
inquired.  This time I turned pale.

We considered our options, knowing that one or the other of us would be
called upon to fix the problem.  And you know what we did?  First, I
modified the daily demand deposit program with code that checked the
date, and about mid-1979 it started printing warnings on the console of
what would happen come new year.  Then the systems analyst and I got new
jobs.  This is known as stepwise interactive development.
--
D Gary Grady
Duke U Comp Center, Durham, NC  27706
(919) 684-3695
USENET:  {seismo,decvax,ihnp4,akgua,etc.}!mcnc!ecsvax!dgary
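[For anyone who has never met such a record, a sketch of the format as
described; the nibble order within the word is my assumption:]

    /* One 16-bit word: last digit of the year in the top nibble,
       month (1-12) as a hex nibble, day as two packed-decimal nibbles. */
    unsigned pack_date(int year, int month, int day)
    {
        return ((unsigned)(year % 10) << 12) | ((unsigned)month << 8)
             | ((unsigned)(day / 10) << 4)   |  (unsigned)(day % 10);
    }

pack_date(1979, 12, 31) gives 0x9C31 while pack_date(1980, 1, 2) gives
0x0102, so December 1979 sorts after January 1980: the original January
statement problem, back again the moment the year digit wraps to zero.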
ndiamond@watdaisy.UUCP (Norman Diamond) (01/23/85)
> > I have a friend who raised an interesting question that I immediately
> > tried to prove wrong.  He is a programmer and has this notion that
> > when we reach the year 2000, computers will not accept the new date.
> > Will the computers assume that it is 1900, or will it even cause a
> > problem?
> > ...
> > Spencer L. Bolles
>
> The problem won't be the computers, but the software.  Some software is
> bound to be wrong, considering only the last two digits of the year.
> ...
> but the problem won't come up until 2100.
> ...
> Gary Levin / Dept of CS / U of AZ / Tucson, AZ 85721 / (602) 621-4231

Leap years are not the only problem, and some software already is wrong.
There was some 105-year-old lady who hadn't registered for school, and
the truant officers came after her.  I think this happened in the U.S.
midwest, around 8 years ago.
--
Norman Diamond
UUCP:  {decvax|utzoo|ihnp4|allegra|clyde}!watmath!watdaisy!ndiamond
CSNET: ndiamond%watdaisy@waterloo.csnet
ARPA:  ndiamond%watdaisy%waterloo.csnet@csnet-relay.arpa
"Opinions are those of the keyboard, and do not reflect on me or higher-ups."
gadfly@ihu1m.UUCP (Gadfly) (01/23/85)
--
>> I have a friend who raised an interesting question that I
>> immediately tried to prove wrong.  He is a programmer and has this
>> notion that when we reach the year 2000, computers will not accept
>> the new date.  Will the computers assume that it is 1900, or will
>> it even cause a problem?...
>> Spencer L. Bolles

Your friend is probably alluding to the leap-century correction in the
Gregorian Calendar.  Most date programs do not make any subtler
corrections than leap-year (and some don't even do that).  There is no
Feb 29 in a century year unless that year is divisible by 400.  Thus,
1900 was not a leap year (look it up), but 2000 will be.  So, all
un-leap-century-corrected programs will be safe until 2100, and most
folks will slide blissfully into the next millennium, never even stopping
to think about their calendar's fine tuning.
--
  ***  ***	JE MAINTIENDRAI
 ***** *****
****** ******	22 Jan 85 [3 Pluviose An CXCIII]
ken perlow	***** *****	(312)979-7188
 ** ** ** **	..ihnp4!iwsl8!ken
  ***  ***
darryl@ISM780.UUCP (01/23/85)
> He is a programmer and has this notion that when we
> reach the year 2000, computers will not accept the new date.

This brings to mind the famous PDP-8 date problem.  Under OS-8, the year
was encoded as 3 bits (!!), which promptly ran out in '75 or '76, at
which point the powers that were managed to scrape up another bit.
Anybody out there still running OS-8?  What did you do when the year
turned to sh*t again?  (Buy an 8080-based machine for improved
performance and memory capability?)

--Darryl Richman, INTERACTIVE Systems Inc.
...!cca!ima!ism780!darryl
The views expressed above are my opinions only.
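[The failure mode of a tiny year field is easy to replay -- a sketch only;
the base year here follows the OS/8 description later in this thread, and
the exact year it first overflowed is disputed between postings:]

    /* A 3-bit year field holds offsets 0..7 from the base year,
       then silently wraps. */
    int stored_year(int year, int base)
    {
        return base + ((year - base) & 7);
    }

    /* stored_year(1977, 1970) == 1977, but
       stored_year(1978, 1970) == 1970 -- back to square one. */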
scw@cepu.UUCP (01/23/85)
In article <820@reed.UUCP> bolles@reed.UUCP (Spencer Bolles) writes:
> I have a friend who raised an interesting question that I immediately
>tried to prove wrong.  He is a programmer and has this notion that when we
>reach the year 2000, computers will not accept the new date.  Will the
>computers assume that it is 1900, [...] he has even lost sleep over this.
>When I say 'friend,' I'm NOT referring to myself, if it seemed that way.

Well, it depends on several things: (1) the 'base' date, (2) how many
bits are used to encode the offset, and (3) the resolution used.

For example, OS/8 (an operating system for the PDP-8 and -12) used 3 bits
for the year and a base date of Jan 1, 1970.  On Jan 1, 1978 it broke.
Unix (v7, anyway) uses 32 bits to record the time in seconds since
0000Z01JAN70 (midnight GMT, Jan 01, 1970); this will break sometime in
2038 (Jan 19, about 3 AM GMT).  Other operating systems use different
epochs and different resolutions and will break at different times.
--
Stephen C. Woods (VA Wadsworth Med Ctr./UCLA Dept. of Neurology)
uucp:	{ {ihnp4, uiucdcs}!bradley, hao, trwrb}!cepu!scw
ARPA:	cepu!scw@ucla-cs
location: N 34 3' 9.1" W 118 27' 4.3"
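[Those three parameters -- epoch, field width, resolution -- determine the
break date mechanically.  A sketch of the arithmetic for the v7 case,
assuming a signed 32-bit seconds counter:]

    #include <stdio.h>

    int main(void)
    {
        double max_secs = 2147483647.0;   /* 2^31 - 1, signed 32 bits */
        double years = max_secs / (365.2425 * 86400.0);
        printf("span: %.1f years\n", years);                 /* ~68.1 */
        printf("epoch 1970 -> breaks in %d\n", 1970 + (int)years);
        return 0;
    }

(The exact moment is 03:14:08 GMT on January 19, 2038, when the signed
counter overflows.)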
mike@enmasse.UUCP (Mike Schloss) (01/24/85)
> I have a friend who raised an interesting question that I immediately
> tried to prove wrong.  He is a programmer and has this notion that when
> we reach the year 2000, computers will not accept the new date.  Will
> the computers assume that it is 1900, or will it even cause a problem?
> I violently opposed this because it seemed so meaningless.  Computers
> have only come into existence during this century; has software,
> specifically accounting software, been prepared for this turnover?  If
> this really comes to pass and my friend is correct, what will happen?
> Is it anything to be concerned about?  I haven't given it much thought,
> but this programmer has.  I thought he was joking, but he has even lost
> sleep over this.  When I say 'friend,' I'm NOT referring to myself, if
> it seemed that way.

I have heard the same rumor from some reliable sources.  When I was
working summers for Prudential a while back, I was told the story about
this, and the people were serious.  One guy, a serious system programmer,
not a hack, told me he was setting his retirement date according to the
date this problem will manifest itself.  The story goes as follows:

In IBM's OS/VS1, OS/VS2, and MVS, all files have a time stamp associated
with them, usually the creation date.  If upon creation the file is
deemed to be temporary, then the time stamp becomes the expiration date
and defaults to sometime in the future.  The difference between a
creation date and an expiration date is that the expiration date has the
high-order bit set.  [See the problem coming?]  The problem is that
sometime in 2000 (I don't think it's midnight Jan 1) the most significant
bit in the timestamp will change, and the system will then think that all
files on all disk drives are temporary and should have been deleted a
long time ago.  Net result ... all files get deleted.
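[I can't vouch for the OS internals here, but the hazard as told is easy
to model.  A toy version -- emphatically not IBM's actual record layout:
one 16-bit field whose top bit means "this is an expiration date":]

    #define EXPIRES 0x8000u

    /* Creation stamps store the packed date as-is; expiration stamps
       set the flag bit on top of it. */
    unsigned expiration_stamp(unsigned packed_date) { return packed_date | EXPIRES; }
    int looks_temporary(unsigned stamp)             { return (stamp & EXPIRES) != 0; }

    /* The moment ordinary packed dates grow past 0x7FFF, plain creation
       stamps become indistinguishable from expiration stamps: the flag
       and the data collide, and every file "looks temporary". */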
das@ucla-cs.UUCP (01/24/85)
From what I've read, many programs broke at the start of 1970 because
they stored the year as a single digit; fewer, but still a good number,
broke in 1980.

I think the real trouble will come on January 3, 2000, not January 1,
since the 3rd is the first business day.  I think the problems will come
in subtle ways -- most companies will catch the obvious implications of a
two-digit year cycling around, but buried away in some obscure code...

-- David Smallberg, das@ucla-cs.ARPA, {ihnp4,ucbvax}!ucla-cs!das
kendall@talcott.UUCP (Sam Kendall) (01/24/85)
> ... [T]his notion that when we
> reach the year 2000, computers will not accept the new date.

Yeah, this thought occurred to me when I took COBOL years ago and found
that data was encoded in decimal, and years often encoded in 2 digits.

I don't know about the IBM OS creation date/temporary file problem, but
other than that, the COBOL two-decimal-digit-year problem is the major
one.  This is a pretty common thing to do in COBOL programs; COBOL is the
most-used computer language (I think, and in any case it certainly is in
the business/bureaucratic world); and there are plenty of programs that
have been running for years, and for which the sources have been lost.
I am posting this because I think a lot of people have never seen a COBOL
program, and so don't realize why the year 2000 will be trouble.

I think, though, that IBM will get moving on this problem around the year
1995, if only so that the society on which they depend for profits will
continue to exist.

Sam Kendall	{allegra,ihnp4,ima,amd}!wjh12!kendall
Delft Consulting Corp.	decvax!genrad!wjh12!kendall
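[For those who haven't seen it, the two-digit trap needs only one
comparison to show.  A sketch in C; real COBOL stores the year as zoned
or packed decimal, but the ordering logic fails the same way:]

    /* Two-digit years: "00" (2000) compares below "99" (1999). */
    int later_than(int yy_a, int yy_b)
    {
        return yy_a > yy_b;     /* wrong across the 1999/2000 boundary */
    }

    /* later_than(0, 99) == 0, though 2000 follows 1999, and
       yy_now - yy_then yields -99 instead of +1 for the elapsed years. */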
stern@bnl.UUCP (Eric Stern) (01/24/85)
> I have a friend who raised an interesting question that I immediately
> tried to prove wrong.  He is a programmer and has this notion that when
> we reach the year 2000, computers will not accept the new date.  Will
> the computers assume that it is 1900, or will it even cause a problem?
> ...
>
> "I've never really written anything like that before"
>
> Spencer L. Bolles

I used to work for a company that packed dates into 16-bit words in such
a way that, this being the last part of the century, all dates were
negative numbers.  However, certain files could contain either of two
types of records, the distinguishing characteristic being that one type
of record contained a date at a particular offset.  Of course, the check
for this kind of record was whether the number at that offset was
negative or not, so when the century rolls over this test would fail.

I pointed this feature out to several people, who rightly were not
concerned, as by the time this became a problem, their software would
have migrated to a different system and would probably be largely
rewritten.

However, I have heard that CDC operating systems had a problem at a
certain date in the past, where the computer would refuse to boot up when
this date was reached.  Calls came in to CDC from all over the world as
midnight advanced westward.

Eric G. Stern
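[A sketch of the trap Eric describes; the exact layout is my guess, but
any packing that leaves the sign bit set for 20th-century dates behaves
this way:]

    /* Guessed packing: (year - 2000)*512 + month*32 + day in a 16-bit
       word (valid for years 1936..2063).  Every 20th-century date comes
       out negative, so the sign bit doubles as a record-type tag. */
    short pack_date2(int year, int month, int day)
    {
        return (short)((year - 2000) * 512 + month * 32 + day);
    }

    int record_has_date(short word)
    {
        return word < 0;    /* true for 19xx; fails from 2000-01-01 on */
    }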
rhesmith@wlcrjs.UUCP (Richard H. E. Smith II) (01/24/85)
In article <6876@watdaisy.UUCP> ndiamond@watdaisy.UUCP (Norman Diamond) writes:
>>> I have a friend who raised an interesting question that I immediately
>>> tried to prove wrong.  He is a programmer and has this notion that
>>> when we reach the year 2000, computers will not accept the new date.
>>> Will the computers assume that it is 1900, or will it even cause a
>>> problem?
>> The problem won't be the computers, but the software.  Some software
>> is bound to be wrong, considering only the last two digits of the
>> year.  ... but the problem won't come up until 2100.
>Leap years are not the only problem, and some software already is wrong.
>There was some 105-year-old lady who hadn't registered for school, and
>the truant officers came after her....  -- Norman Diamond

Some software blows up on dates at other times.  I'm aware of some old
DEC software (don't worry... you're NOT using it... it's single user!)
that keeps the date year as a 5-bit offset from 1972.  Let's see...
1972+31=2003, so it blows up in 2004.  Probably, tho, the display-a-year
routine isn't written to handle anything beyond 31-Dec-99, since no one
expects that RT11 (oops, now I said it) will still be used then.  I hope.
--
----------
Dick Smith	..ihnp4!wlcrjs!rhesmith
larry@extel.UUCP (01/24/85)
Another problem is that we have gotten into the habit of using only the
last 2 digits of the year (look at your checkbook).  Even worse, some
business software allows only a 2-character-wide field for the date.
Perhaps the designers did not expect their programs to be in use in the
year 2000, but I would not be surprised to see a considerable amount of
370 code running in the year 2000.

Just think: in a few years you will be able to refer to the year 2002 as
aught-two!  By the way, Webster's Thesaurus also lists "ought" as an
alternate spelling of "aught".

Larry Pajakowski
ihnp4!tellab1!extel!larry
jdb@mordor.UUCP (John Bruner) (01/24/85)
Back in the V6 days at Purdue/EE we purchased the CULC adaptation of
DEC's Fortran-IV-Plus compiler for our PDP-11's.  Part of the package was
a UNIX version of MACRO-11.  I noted with amusement that the output
conversion routine for the date and time (used to produce listings) in
MACRO-11 was never intended to handle a year greater than 1979 -- when I
ran it in 1980 it printed the date as "13-SEP-7:".  Of course, by that
time we only had one PDP-11/45 still running V6.
--
John Bruner (S-1 Project, Lawrence Livermore National Laboratory)
MILNET: jdb@mordor.ARPA [jdb@s1-c]	(415) 422-0758
UUCP: ...!ucbvax!dual!mordor!jdb	...!decvax!decwrl!mordor!jdb
doug@terak.UUCP (Doug Pardee) (01/25/85)
> >> I have a friend who raised an interesting question that I
> >> immediately tried to prove wrong.  He is a programmer and has this
> >> notion that when we reach the year 2000, computers will not accept
> >> the new date.  Will the computers assume that it is 1900, or will
> >> it even cause a problem?...
>
> Your friend is probably alluding to the leap-century correction
> in the Gregorian Calendar.

Oh, dear oh dear.  Folks, there is an outside world out there, and that
world uses computers to do REAL STUFF.

One of the "real stuff" things that computers do out there is to store
data in files, both on tape and on disk.  Things like the balance in your
checking account (or the amount that it's overdrawn :-).  There is SO
MUCH data in those files, and tapes and disks cost SO MUCH to buy and
store, that those files have "expiration dates", at which time a program
(run daily, as a rule) will see that they have expired, will remove all
traces of them from the various directories, and will return the disk
space or reel of tape to the "available" pool.

I imagine you are aware that IBM's System/360/370/30xx machines handle
nearly all such transactions (to the unending dismay of Honeywell,
Burroughs, Univac, etc.).  In the IBM world, the date of December 31,
1999 is the highest (latest) date that can be specified.  So if you have
stuff that you want to keep forever, you put a date of 99365 on it.  I
leave it to your imagination what will happen on 12/31/99 when all of
those computers find that all of those disk files and tapes are to be
scratched.

A variation results from the natural cycle of many such files.  For
example, a monthly backup tape in a 4-month cycle will be kept for four
months, no?  Although IBM doesn't supply any routine to compute such a
date, virtually every site has written or bought one.  So on, say,
10/01/99 a 4-month file will be set to expire on 02/01/00.  Guess what
happens the next morning?  Bye-bye file!

There are a number of other effects which will result, all from the fact
that the computer will NOT be able to compare two dates to find out which
one is later.  Unless the programmer anticipated the problem, the formula
for figuring out how many days elapsed between two dates won't work.  How
do you figure, e.g., interest earned, if you don't know the time period
involved?  Dates and times ARE of the utmost importance to the business
world!

There are minor effects, too.  Like when your company's ten-year forecast
says that you'll be making a good profit in 1903.  Looks really
professional on the ol' annual report.
--
Doug Pardee -- Terak Corp. -- !{hao,ihnp4,decvax}!noao!terak!doug
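[The 99365-style "Julian" format makes the failure concrete.  A sketch,
assuming the dates are compared as plain integers, which is what the
expiration check Doug describes amounts to:]

    /* IBM-style yyddd date: two-digit year * 1000 + day of year. */
    int expired(long expiration_yyddd, long today_yyddd)
    {
        return expiration_yyddd < today_yyddd;
    }

    /* A file created 10/01/99 with a four-month retention expires at
       00032 (Feb 1, 2000).  On 99275 -- the very next morning --
       expired(32L, 99275L) is already "true".  Bye-bye file. */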
mangoe@umcp-cs.UUCP (Charley Wingate) (01/25/85)
Univac EXEC 8 systems store dates as a 36-bit signed offset from Jan. 1,
1964 (I think that's the right date); with 2**35 days in either
direction, I suspect that rollover problems are not likely -- at least
not until your average COBOL program that changes this into MM/DD/YY gets
hold of it, anyway.....  Actually, one system facility DOES provide
MM/DD/YY directly.  Oh, well.

Charley Wingate	umcp-cs!mangoe
tim@callan.UUCP (Tim Smith) (01/26/85)
If you are really worried about timewrap breaking programs in subtle
ways, then set your clock ahead now and find the bugs.  That will give
you several years to fix them.  If you are binary-only, you might NEED
several years to get your vendor to fix them!  :-)
--
Duty Now for the Future
Tim Smith
ihnp4!wlbr!callan!tim or ihnp4!cithep!tim
ron@brl-tgr.ARPA (Ron Natalie <ron>) (01/28/85)
> Spencer L. Bolles:
> "... He is a programmer and has this notion that when we reach the
> year 2000, computers will not accept the new date.  Will the computers
> assume that it is 1900, or will it even cause a problem? ..."
>
> Hey!  No big deal!  So what if every piece of code that prints dates
> with ctime(3) starts believing every year in the 21st century is Year
> 2, thanks to a little parenthesization error?
>
>	cp[2] = '0' + t->tm_year >= 200;

Of course, UNIX time (seconds past midnight GMT 1 Jan 1970 in 32 bits)
falls apart in 2038.
ron@brl-tgr.ARPA (Ron Natalie <ron>) (01/28/85)
> For example, OS/8 (an operating system for the PDP-8 and -12) used 3
> bits for the year and a base date of Jan 1, 1970.  On Jan 1, 1978 it
> broke.  Unix (v7, anyway) uses 32 bits to record the time in seconds
> since 0000Z01JAN70 (midnight GMT, Jan 01, 1970); this will break
> sometime in 2038 (Jan 19, about 3 AM GMT).  Other operating systems
> use different epochs and different resolutions and will break at
> different times.

Uh, huh.  Anyone remember the form letter program from version 6?  It
stopped working around 1979, never to move again.  V6 nroff also used to
have a bug that caused certain strange effects to appear and disappear
every nine hours or so.

-Ron
tim@callan.UUCP (Tim Smith) (01/29/85)
> In IBM's OS/VS1, OS/VS2, and MVS, all files have a time stamp
> associated with them, usually the creation date.  If upon creation
> the file is deemed to be temporary, then the time stamp becomes the
> expiration date and defaults to sometime in the future.  The
> difference between a creation date and an expiration date is that the
> expiration date has the high-order bit set.  [See the problem coming?]
> The problem is that sometime in 2000 (I don't think it's midnight
> Jan 1) the most significant bit in the timestamp will change,
> and the system will then think that all files on all disk drives
> are temporary and should have been deleted a long time ago.
> Net result ... all files get deleted.

Look, if you have a bit that marks a file as temporary or permanent, and
that bit is set at file creation time, then there is no problem with
files created BEFORE the high-order bit of the date is set.  The system
will NOT decide that they are all temporary and delete them!  The only
problems will be with files created after the high-order bit of the date
is set.

[ Unless, of course, they use the AT&T Common Object File Format, which,
according to my copy of the manual, keeps the timestamp as the number of
seconds relative to the CURRENT time!  :-) ]
--
Duty Now for the Future
Tim Smith
ihnp4!wlbr!callan!tim or ihnp4!cithep!tim
henry@utzoo.UUCP (Henry Spencer) (01/29/85)
Forecasting programs are already encountering this sort of problem.
1975 was a bad year for 25-year forecasts...
--
Henry Spencer @ U of Toronto Zoology
{allegra,ihnp4,linus,decvax}!utzoo!henry
robert@cheviot.UUCP (Robert Stroud) (01/29/85)
I don't know about 2000 (I can guess :-), but I do have an anecdote that
relates to a summer job I had back in 1979.  We got a 'phone call from
the suppliers of some application software, along the following lines...

Them: Are you planning to use the machine on August 17th, 1979?
Us:   Probably not - it's a Saturday.
Them: Well, if you do, whatever you do, when you boot the machine, don't
      tell it it's August 17th!  Lie and pretend it's August 18th.

It turned out that the internal coding of "August 17th 1979" matched a
character sequence used by the application to denote EOF!

That's true - honest!  Names of machines, operating systems and software
suppliers are suppressed to protect the guilty.  I wouldn't swear to the
exact date, but it was around that time.

Robert Stroud,
Computing Laboratory,
University of Newcastle upon Tyne.

ARPA robert%cheviot%newcastle.mailnet@mit-multics.arpa
UUCP ...!ukc!cheviot!robert
eugene@ames.UUCP (Eugene Miya) (01/29/85)
<294@callan.UUCP>

Recently, I was one of the operations officers for the 1984 ACM National
Meeting.  The theme of that conference was "The Fifth Generation."  While
putting the conference together, one of the other people (Bob Van Tuyl of
GTE) joked:

	If there is any one thing which is going to hold back the
	'Fifth Generation,' it's going to be the 'Second Generation.'

You have just given evidence to support that conclusion.  ;-)

--eugene miya
  NASA Ames Research Center
  {hplabs,ihnp4,dual,hao,vortex}!ames!aurora!eugene
  emiya@ames-vmsb.ARPA
kimcm@diku.UUCP (Kim Christian Madsen.) (01/29/85)
Well, you can fix the bug(s) on a specific machine, but the main goal
must be to create a standard fix, so that no machine will be affected in
an unpleasant way when 2000 comes (or even before)!!!
--
Kim Chr. Madsen.
Institute of Datalogy, University of Copenhagen
{decvax,philabs,seismo}!mcvax!diku!kimcm
alexis@reed.UUCP (Alexis Dimitriadis) (01/30/85)
> If you are really worried about timewrap breaking programs in subtle
> ways, then set your clock ahead now and find the bugs.  That will give
> you several years to fix them.  If you are binary-only, you might NEED
> several years to get your vendor to fix them!  :-)
>
> Tim Smith

With most library functions, you do not need to reset the machine clock --
just call them with the right number of seconds and see what they do.
(You might even catch some of the overflow problems that have been
discussed here.)  I attached a simple program that does that; just run it
and give it the number of years you want to go forward (or backward, if
< 0), or you can substitute your pet functions for time() and ctime().
E.g., I found that we DO have the bug in ctime that prints every year
after 2000 as Year 2 (and without a trailing newline...).

alexis @ reed
---------------------------
#include <stdio.h>
#include <time.h>

#define YEAR 31536000L	/* seconds per year -- only roughly, but who cares */

main()
{
	long time(), clock;
	float increment;
	char *ctime();

	time(&clock);			/* start from the current time */
	fputs(ctime(&clock), stdout);
	while (scanf("%f", &increment) > 0) {
		clock += (long) (increment * YEAR);	/* jump N years */
		fputs(ctime(&clock), stdout);
	}
}
bruce@ISM780.UUCP (01/30/85)
> From what I've read, many programs broke at the start of 1970 because
> they stored the year as a single digit; fewer, but still a good number,
> broke in 1980.

Not as well known is the fact that many COBOL banking and/or accounting
programs that broke in 1970 were fixed by allowing the year field to be
interpreted as a binary field rather than a decimal field.  This was
intended as a temporary measure until the database records could be
reorganized with a wider date field.  (When you've got several million
records and several hundred programs, adding just one byte to each record
takes a bit of doing, and most records have more than one date field.)
Many of those same systems broke again at the beginning of 1976.  I
recall that when I started working for Western Bancorp in Sept. 1976,
some of my co-workers were, nine months later, still regaling each other
with tales of which banks got caught by that one.

I seriously plan on closing my checking account several months before the
end of the century and hiding all my cash under my mattress until all the
smoke clears.

Bruce Adler	{sdcrdcf,uscvax,ucla-vax,vortex}!ism780!bruce
Interactive Systems Corp.	decvax!yale-co!ima!bruce
nather@utastro.UUCP (Ed Nather) (01/31/85)
For those of you fixing things in your software:

The year 2000 *is* a leap year, despite what many algorithms tell you.
The year 2400 is *not* a leap year.

With minimal effort, you can make things work until 2399.  You may be
subject to complaints after that.
--
Ed Nather
Astronomy Dept, U of Texas @ Austin
{allegra,ihnp4}!{noao,ut-sally}!utastro!nather
ron@brl-tgr.ARPA (Ron Natalie <ron>) (02/01/85)
> For those of you fixing things in your software:
>
> The year 2000 *is* a leap year, despite what many algorithms tell you.
> The year 2400 is *not* a leap year.
>
> With minimal effort, you can make things work until 2399.  You may be
> subject to complaints after that.

Now you've really got me confused.  Why is 2400 not a leap year?
ron@brl-tgr.ARPA (Ron Natalie <ron>) (02/01/85)
What is really amusing about all of this is that if people didn't insist
on putting specific checks in their code for the last year of each
century not being a leap year, everything would have been OK.  It is
doubtful that most simple utilities care about dates before 1901 or
after 2099.

-Ron
chongo@nsc.UUCP (Landon C. Noll) (02/01/85)
In article <301@terak.UUCP> doug@terak.UUCP (Doug Pardee) writes:
> Unless the programmer anticipated
>the problem, the formula for figuring out how many days elapsed
>between two dates won't work.  How do you figure, e.g., interest
>earned, if you don't know the time period involved?

Are you suggesting that people pull their money out of the banks on Dec
31, 1999?  If so, then maybe you should suggest that people avoid the
rush and grab it Dec 30, or maybe Dec 29, ....  I think a date overflow
is far better than an input transaction overflow..  :-)

Soon I will test another area of the 2000 date problems: magazine
subscription dates.  Due to a strange set of events, I have a
subscription to this mag which ends in 1999 (for which I have paid
nothing).  Well, the other day they sent me a renewal notice, so I'm
going to actually pay for another year and ...

In article <771@ames.UUCP> eugene@ames.UUCP (Eugene Miya) writes:
>	If there is any one thing which is going to hold back the
>	'Fifth Generation,' it's going to be the 'Second Generation.'

Oh, you mean MBI and Big Green and Cobol?  Or do you mean Big Mama and
her Fifth Sister?  :-)

chongo <is that why they call it release 2?> /\VV/\
berry@zinfandel.UUCP (Berry Kercheval) (02/01/85)
In article <301@terak.UUCP> doug@terak.UUCP (Doug Pardee) writes:
>In the IBM world, the date
>of December 31, 1999 is the highest (latest) date that can be
>specified.  So if you have stuff that you want to keep forever,
>you put a date of 99365 on it.  I leave it to your imagination
>what will happen on 12/31/99 when all of those computers find
>all of those disk files and tapes are to be scratched.

I once heard an apocryphal story to the effect that a Systems Programmer
at Large Unnamed Corp. was debugging something late one night, and for
some reason it became necessary to set the system date to 99365.

Guess what happened at midnight?

Guess who is now a plumber?
--
Berry Kercheval		Zehntel Inc.	(ihnp4!zehntel!zinfandel!berry)
(415)932-6900
jca@abnji.UUCP (james armstrong) (02/01/85)
>The year 2000 *is* a leap year, despite what many algorithms tell you.
>The year 2400 is *not* a leap year.

Actually, 2400 is a leap year.  2100, 2200, and 2300 are not.
dmmartindale@watcgl.UUCP (Dave Martindale) (02/02/85)
In article <974@utastro.UUCP> nather@utastro.UUCP (Ed Nather) writes:
>The year 2000 *is* a leap year, despite what many algorithms tell you.
>The year 2400 is *not* a leap year.
>
>With minimal effort, you can make things work until 2399.  You may be
>subject to complaints after that.

Are you absolutely sure of this?  (Your trailer DOES say you come from an
astronomy department....)  My understanding was that years divisible by 4
were leap years, except that years divisible by 100 were not, except that
years divisible by 400 were -- giving 97 leap days every 400 years.
According to that pattern, 2000 IS a leap year, and the naive year-mod-4
algorithms will work properly until 2099, not 2399.
ronbe@tekred.UUCP (Ron Bemis ) (02/03/85)
> For those of you fixing things in your software:
> The year 2000 *is* a leap year, despite what many algorithms tell you.

Agreed (by everybody, I think).

> The year 2400 is *not* a leap year.

How do you figure?  Shouldn't that say 2100?

	Leap if divisible by 4
	Unless divisible by 100
	Unless divisible by 400
--
              _____
Ron Bemis    / o o \    Support Bacteria -
Tektronix   | \___/ |   It's the only culture
Redmond, OR  \_____/    Some people have!
tstorm@vu44.UUCP (Theo van der Storm) (02/03/85)
In article <7927@brl-tgr.ARPA> ron@brl-tgr.ARPA (Ron Natalie <ron>) writes:
>> For those of you fixing things in your software:
>>
>> The year 2000 *is* a leap year, despite what many algorithms tell you.
>> The year 2400 is *not* a leap year.
>>
>> With minimal effort, you can make things work until 2399.  You may be
>> subject to complaints after that.
>>
>Now you've really got me confused.  Why is 2400 not a leap year?

(msd = mean solar day)

	1 year = 365.2422 msd = 365 + 1/4 - 1/100 + 1/400 + error

Since 365 + 1/4 - 1/100 + 1/400 = 365.2425, the error term is only about
0.0003 msd per year -- roughly one day in 3300 years.  That's why we have:

	leap year	1 out of 4
	no leap year	1 out of 100
	leap year	1 out of 400

(So 2400 is a leap year.)  Read any basic astronomy book.
--
Theo van der Storm, 52 20'N / 4 52'E, {seismo|decvax|philabs}!mcvax!vu44!tstorm
ndiamond@watdaisy.UUCP (Norman Diamond) (02/04/85)
> The year 2000 *is* a leap year, despite what many algorithms tell you.
> The year 2400 is *not* a leap year.

So, the guy made a mistake.  Why aren't astronomers permitted to make as
many mistakes as programmers?  I even make mistakes occasionally (though
not that one).
--
Norman Diamond
UUCP:  {decvax|utzoo|ihnp4|allegra|clyde}!watmath!watdaisy!ndiamond
CSNET: ndiamond%watdaisy@waterloo.csnet
ARPA:  ndiamond%watdaisy%waterloo.csnet@csnet-relay.arpa
"Opinions are those of the keyboard, and do not reflect on me or higher-ups."