johnl@ima.UUCP (02/12/87)
Does anyone know of any papers on dynamically sized hash tables, i.e., hash tables that grow and shrink in proportion to the number of items stored in them? I want to implement associative arrays efficiently without knowing beforehand how many entries will be placed in the array. The algorithm need not produce tables that are either minimal or perfect, but they should be "reasonable" in space and fairly constant in time (at worst O(log(# of collisions)), not counting time spent in the storage allocator). The algorithm may use space in a stepwise manner, however. For example, an algorithm that uses a conventional hash table of a given size and, when it fills up, simply creates a larger table and dumps the contents of the old one into the new is unacceptable, since any insertion or deletion may then take O(n) time, where n is the number of existing entries.

Gus Fernandez

[This topic is known as extendible or extensible hashing, and has been treated at length in the data base literature. There have been a fair number of articles in TODS. It seems to me, though, that the data base crowd wants to minimize the number of table accesses because the table is on the disk. For an in-core symbol table, I'd think that chaining from hash headers would be a reasonable alternative to rehashing the whole thing. Anybody got any real answers? -John]
--
Send compilers articles to ima!compilers or, in a pinch, to Levine@YALE.EDU
Plausible paths are { ihnp4 | decvax | cbosgd | harvard | yale | cca}!ima
Please send responses to the originator of the message -- I cannot forward
mail accidentally sent back to compilers.  Meta-mail to ima!compilers-request