tomm@voodoo.boeing.com (Tom Mackey) (02/02/91)
I've used RCS now for about 6 years, and for the last 4 years we have managed a large software project (~250 files) under RCS. So far, so good. We can go back all the way to the first release of the system (not that we want to), and it is nice to trace the evolution of certain modules through the rlog entries.

We thought we had it all nailed until last week. One of the files we track is our ongoing bug list. Either we have more bugs than normal or a larger system than normal, because it gets revised quite frequently (at least twice a week). Last week one of the developers went to check out "bugs" and got the message:

    RCS/bugs,v  -->  bugs
    co error, line 1201: Hashtable overflow
    co aborted

To make a long story short, there is an internal limit in the hashtable set to 240. The thing that bothers me is that RCS lets you check in the 240th revision with no warning of the impending doom; it just slams the door shut AFTER you have done the damage.

So how about it... has anyone else run into this? How did you cope with it (we obviously have to split the ,v files, but what's the most nifty way)? And have any other computer vendors enhanced RCS to deal with this? We use Silicon Graphics equipment. So far, no critical files have been affected, but several of them currently have 200 or so revisions.
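The obvious (if not nifty) way we've come up with so far is something like the following: freeze the current ,v file under a new name to preserve the history, then start a fresh archive whose 1.1 is the old head revision. This is only a sketch; the "bugs.old,v" naming is just our own convention, not anything RCS requires:

    # Get a working copy of the current head revision (read-only is fine;
    # ci only needs to read it).
    co bugs

    # Freeze the full history under a new name; rlog and co on the old
    # archive still work for archaeology.
    mv RCS/bugs,v RCS/bugs.old,v

    # Start a fresh archive; its revision 1.1 is the old head.  ci will
    # prompt for a new initial description for the file.
    ci -u -m"continued from bugs.old,v; see that file for older history" bugs

The drawback is that revision numbers start over at 1.1 in the new file, so any log entries that refer to old revision numbers are only meaningful against bugs.old,v. Hence the question: is there a niftier way?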