tsc2597@acf4.UUCP (05/14/84)
Does anyone out there use the Digital Research compilers now available for IBM PC-DOS? So far I have used the Pascal MT+86 compiler from DR and found it totally acceptable. Unlike Microsoft Pascal, it is very easy to create large programs in MT+86, as there are extensive overlay and chaining facilities. (I wrote several large programs in MT+86, in excess of 7000 lines.) It also has a Pascal source-level debugger and the Compact memory model.

There are also several packages of "tools" available from DR, which include Access Manager, a B-tree subroutine package; Display Manager, a device-independent subroutine package for creating menus and controlling the attributes of displays (cursor addressing, highlighting, blinking, etc.); and GSX-86, a device-independent extension to PC-DOS supporting device drivers for many color cards, printers and plotters. All these tools are callable from high-level languages such as MT+86.

Has anyone used the DR C compiler? A review in Microsystems claimed that it was the fastest C compiler (among DeSmet, Lattice, CI-C86, etc.) and supported all memory models: Small, Medium, Compact and Large. They also claimed that the fledgling DR compiler was run against a PDP-11 Unix C compiler to ensure Unix compatibility!

Sam Chin (NYU)
micom@houxf.UUCP (D.SYKORA) (05/15/84)
PC World (June '84) compared the Lattice, CI-C86, Digital Research and Mark Williams C compilers. They said Digital Research "compiles slowly and produces large, sluggish object code."

Danny Sykora
AT&T Bell Labs (Holmdel)
(201) 949-7358
houxf!micom
bcw@duke.UUCP (Bruce C. Wright) (05/16/84)
I've been using the Digital Research PL/I compiler, and have found it a fair (I don't think I would say 'good') implementation. The compiler generates very large code (contrary to some of the benchmarks I saw, which were one of the reasons I bought it), especially if you do any string operations at all (it likes to generate lots of temporaries, for example, and NEVER re-uses the space!!!). It claims to be an optimizing compiler, but it looks to me like it must have only a peephole optimizer at most (and I tend to agree with Ullman (?) that a peephole optimizer is simply another way of saying reasonable code generation, not optimization). It is incapable of discovering what it has in registers. For example, j = x (i) + y (i) will generate code like

	MOV  SI,I
	MOV  AL,X[SI]
	MOV  DI,I	; !!!
	ADD  AL,Y[DI]
	MOV  J,AL

and so forth. This is taken literally from a program fragment with the names changed to protect the innocent. From what I have seen in other reviews, this is typical of DR compilers, notably the DR C compiler (I think there's a review of several C compilers in the latest PC World, or maybe PC, and the DR C compiler comes out rather badly in code generation, as might be expected since I have heard that it uses the same code generator as the PL/I compiler).

The compiler has a number of bugs, some serious, some not so serious. It is a rather weak implementation of PL/I. There are a number of bugs in the utilities, notably the assembler. On the other hand, the documentation and packaging are well done. It's nice to have a reasonable manual, although for most of my purposes, except for compiler peculiarities and assembler interfacing conventions, I don't really need a PL/I manual.

On balance I would probably recommend that the current version of the compiler is not worth the rather stiff price unless you have PL/I programs which you need to transport to the 8086 architecture but which don't use much of the language.
In all fairness, I don't know how much of this also applies to other DR languages ... and I'm not sure I want to find out.

Bruce C. Wright
dont@tekig1.UUCP (Don Taylor) (05/18/84)
X In partial response to bcw@duke's comments about the quality of code of Digital's compilers. I think PART of the problem is the processor, not the compiler writer. Looking at early releases of the intel 286 compiler writers guide, and the extremes necessary to generate any sort of decent code, it is amazing we have compilers now at all. Separate compilation is almost out of the question, you just can't tell, without lots of hints from the user if you need to reload registers or not. If you want to stick to 64k of memory, and I thought escaping that limit was what all the bother was about, you can do fairly well. You WILL NOT have much left in the registers to be able to reuse to make better code, the stuff just keeps getting clobbered. With lots of work, 3 passes through the tree, for the code generation alone in the above reference, plus code modification and rewrite at the link/locate point, you can avoid reloading TOO many registers. I agree that his example looks pretty silly from the human point of view, but I wonder what it looks like from the inside of a code generator? (I'd rather have general registers with lots of leftovers, but thats just to make my job easier) Don Taylor