stephen@temvax.UUCP (Stephen C. Arnold) (10/02/89)
I have just a few :-) questions about object oriented languages and computer language generation.

Has anyone written a lexical analyser or parser generator in an object oriented language? Is it in the public domain?

Has anyone done attribute grammar work in an object oriented language?

Is there a good parallel object oriented language? Has anyone looked into using parallel object oriented languages to capture the parallelism within attribute grammars?

Who, if anybody, is doing research in this area? Who's interested (besides me)?
djones@megatest.UUCP (Dave Jones) (10/03/89)
From article <194@temvax.UUCP>, by stephen@temvax.UUCP (Stephen C. Arnold):

> I have just a few :-) questions about object oriented languages and
> computer language generation.
>
> Has anyone written a lexical analyser or parser generator in an object
> oriented language?

I just finished a parser-generator, written in C. (Does that count? :-)

It uses object-instances extensively. The most frequently used objects are hash-tables and lists. There are ten static hash-table objects, all with different virtual "hash" and "equiv" functions. As it turns out, the most frequently used objects within objects are plain vanilla linked lists. The lists are typically too short to justify fancier methods.

I also make extensive use of memory-management objects -- stacks and heaps. These are very quick compared to plain malloc. Stacks allow you to free up all the memory above a "mark" in one quick operation. "Heaps" contain fixed-size packets. Both use macros (inline member-functions, if you will) for allocation and deallocation. As it is, the program takes about two minutes to do the Pascal grammar I'm working with. If I used plain malloc, every invocation would be a trip to the coffee room. My kidneys couldn't take it.

The most interesting use of hash-tables is in calculating the equivalence classes of LR(1) item-sets (similar to "state-merging"). No kidding. One hash table eventually stores one representative from each equivalence class. The virtual "equiv" function used in that table recursively decides when two item-sets are equivalent, depending on the equivalence of the item-sets referenced in their "shift" and "goto" actions, if any. It instantiates another hash-table for item-set pairs which are currently undergoing investigation for equivalence. It uses that table to halt the recursion when the item-set graph is cyclic.

(Exercise: What does it do when it is asked to determine the equivalence of a pair already under investigation?
A) return true.
B) return false.
C) other (specify) ____________________.)

It's almost magic. 219 lines of code is all it takes to reduce the graph. Hooray for reusability.

> Is it in the public domain?

Nope. Sorry.

> ... Has anyone looked into using parallel object oriented languages to
> capture the parallelism within attribute grammars?

I gave it some thought a few years back. It turns out that there is a direct analogy between parallel evaluation of attribute-grammars and lazy evaluation of lambda calculus expressions.

I'm not too interested in the subject these days. My interest could be rekindled if I were somehow convinced that the topic had practical value.
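[A minimal sketch of the "hash table with virtual functions" idea described above, for readers who haven't seen it done in C. This is a guess at the flavor of the technique, not Dave's actual code: the table takes "hash" and "equiv" function pointers at creation time, and table_intern returns the stored representative of a key's equivalence class, inserting the key if no equivalent entry exists. The case-insensitive string virtuals are purely illustrative.]

```c
#include <stdlib.h>
#include <ctype.h>

/* A hash table parameterized by virtual "hash" and "equiv" functions,
   supplied as function pointers when the table is created. */
typedef struct Entry {
    void *key;
    struct Entry *next;
} Entry;

typedef struct {
    unsigned (*hash)(const void *key);
    int (*equiv)(const void *a, const void *b);
    Entry *buckets[64];
} Table;

static Table *table_new(unsigned (*hash)(const void *),
                        int (*equiv)(const void *, const void *))
{
    Table *t = calloc(1, sizeof *t);
    t->hash = hash;
    t->equiv = equiv;
    return t;
}

/* Return the stored key equivalent to `key`, inserting `key` if no
   equivalent entry exists -- i.e. keep one representative per
   equivalence class. */
static void *table_intern(Table *t, void *key)
{
    unsigned b = t->hash(key) % 64;
    Entry *e;
    for (e = t->buckets[b]; e; e = e->next)
        if (t->equiv(e->key, key))
            return e->key;
    e = malloc(sizeof *e);
    e->key = key;
    e->next = t->buckets[b];
    t->buckets[b] = e;
    return key;
}

/* Example virtuals: case-insensitive C strings. */
static unsigned ci_hash(const void *k)
{
    const unsigned char *s = k;
    unsigned h = 0;
    for (; *s; s++)
        h = h * 31 + (unsigned)tolower(*s);
    return h;
}

static int ci_equiv(const void *a, const void *b)
{
    const unsigned char *x = a, *y = b;
    while (*x && tolower(*x) == tolower(*y)) { x++; y++; }
    return tolower(*x) == tolower(*y);
}
```

With ten static tables, each pair of virtuals gives a different notion of equality without duplicating any bucket/search machinery, which is where the reuse pays off.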
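[The stack allocator Dave mentions can be sketched in a few lines; again this is an assumed shape, not his implementation. Allocation bumps an offset into a preallocated buffer, and releasing everything above a "mark" is a single assignment -- which is why it beats malloc/free in a tight loop. A production version would round sizes up for alignment and avoid evaluating the macro arguments twice.]

```c
#include <stddef.h>

/* Stack allocator: bump-pointer allocation out of one buffer.
   STACK_MARK/STACK_RELEASE free everything allocated since the mark
   in one operation.  NOTE: no alignment padding; `n` is evaluated
   more than once -- fine for a sketch, not for real code. */
typedef struct {
    char *base;
    size_t top;   /* next free offset */
    size_t size;  /* total buffer size */
} Stack;

typedef size_t Mark;

#define STACK_ALLOC(s, n)                                             \
    ((s)->top + (n) <= (s)->size                                      \
        ? ((s)->top += (n), (void *)((s)->base + (s)->top - (n)))     \
        : NULL)

#define STACK_MARK(s)       ((s)->top)
#define STACK_RELEASE(s, m) ((void)((s)->top = (m)))
```

The companion "heap" object would keep fixed-size packets on a free list, so freeing and reallocating a packet is two pointer moves; the macro trick is the same.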