MSRS002@ECNCDC.BITNET ("THE DOCTOR.") (10/27/87)
How big is the program? If it's not too awfully huge, it shouldn't take long to convert it by hand, and I'm sure a person could do a more efficient job than a machine. You could likely spot changes to make in the program to fit your libraries better, while a machine translator would have to blindly simulate the old environment. If the program is too big for that to be practical, how 'bout just running it in Turbo? I do all my new stuff in Modula, but I have a lot of perfectly usable Turbo programs lying around that I still run.

Gotta run ...

Tom Ruby
MSRS002@ECNCDC

P.S. ECNCDC has to be the most finger-tangling thing to type.
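To give a feel for what a hand conversion involves, here is a small invented sketch (module and variable names are mine, not from any real program). The Turbo Pascal original appears in comments; the main mechanical changes are importing I/O from a library module and closing structured statements with END:

```modula-2
MODULE Greet;

(* Turbo:  writeln is built into the language.
   Modula-2:  I/O must be imported from a library module;
   InOut is the standard one from Wirth's book. *)
FROM InOut IMPORT WriteString, WriteInt, WriteLn;

VAR
  i: INTEGER;

BEGIN
  (* Turbo:  for i := 1 to 5 do writeln('pass ', i); *)
  FOR i := 1 TO 5 DO
    WriteString("pass ");
    WriteInt(i, 1);
    WriteLn;
  END;  (* Modula-2 needs an explicit END even for a one-statement body *)
END Greet.
```

Changes like these are tedious but regular, which is why a modest program really can be converted by hand in a sitting or two.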
BOTCHAIR@UOGUELPH.BITNET.UUCP (10/28/87)
Why not use the Turbo to Modula-2 translator from Logitech? It works quite well and comes with a very good manual of 200+ pages.

Alex
bradley@gpu.utcs.toronto.EDU (BRADLEY) (11/03/87)
We are already using the Turbo to Modula-2 translator from Logitech. It is certainly useful -- but we find that a considerable amount of patching after the fact is necessary. I don't think the slowness of the program has anything to do with the use of the automatic conversion -- we've certainly tried to remove the obviously clumsy things the converter introduces because it doesn't really understand the code -- and our code looks (at least to our eyes) like ``good'' Modula-2. The problem really does seem to be the Logitech 2.0 compiler. Our code is text oriented: it uses a lot of string operations, lots of pointers, and lots of records. I'm hoping that the new version 3.0 will produce better code.

.... john bradley
uucp: utgpu!bradley
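I don't know what the Logitech translator actually emits, but the kind of clumsiness a mechanical converter tends to introduce can be illustrated with an invented example (the procedure and the Strings.Length import are assumptions, not real converter output). A translator blindly simulating Turbo's string semantics might re-measure the string on every loop pass, where a human hoists the length out once:

```modula-2
(* Invented illustration, not actual converter output. *)
FROM Strings IMPORT Length;

PROCEDURE CountBlanks(VAR s: ARRAY OF CHAR): CARDINAL;
VAR
  i, len, n: CARDINAL;
BEGIN
  n := 0;
  (* A mechanical translation might test  i < Length(s)  inside the
     loop, paying a full scan of the string on every iteration.
     Hand-tightened: measure once, then loop. *)
  len := Length(s);
  IF len > 0 THEN
    FOR i := 0 TO len - 1 DO
      IF s[i] = ' ' THEN INC(n) END;
    END;
  END;
  RETURN n;
END CountBlanks;
```

Patching out patterns like this is exactly the after-the-fact cleanup mentioned above; whether it helps depends on how much the compiler itself is to blame.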
dick@ccb.ucsf.edu (Dick Karpinski) (11/04/87)
In article <8711031748.AA10838@gpu.utcs.toronto.edu> Info-Modula2 Distribution List <INFO-M2%UCF1VM.bitnet@jade.berkeley.edu> writes:
>I don't think the slowness of the program has anything to do with the
>use of the automatic conversion ....

I have always found at least 20%, usually a factor of 2, and often a factor of 10 available in the speed of execution while working at the source level. The approach I use is to profile the working program (about 1 day, usually) and analyze the results carefully. These first results can be astonishing or mundane. Then I look for the parts that have high cost and find ways to avoid them or to greatly reduce the cost. Usually a single day of recoding can accomplish most of the readily available speedup. The key thing is to SEE where the time goes. But most folks really don't care to find out, since they secretly believe that they will look silly when their program runs faster with so little work.

Dick

Dick Karpinski  Manager of Minicomputer Services, UCSF Computer Center
UUCP: ...!ucbvax!ucsfcgl!cca.ucsf!dick   (415) 476-4529 (11-7)
BITNET: dick@ucsfcca or dick@ucsfvm   Compuserve: 70215,1277
USPS: U-76 UCSF, San Francisco, CA 94143-0704   Telemail: RKarpinski
Domain: dick@cca.ucsf.edu   Home (415) 658-6803 Ans 658-3797
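When no profiling tool is at hand, seeing where the time goes can be approximated with a poor man's profiler: one counter per suspect routine, bumped at entry, dumped at exit. This is only a sketch with invented names (ProfDemo, Parse, Emit), but the technique is general:

```modula-2
MODULE ProfDemo;

FROM InOut IMPORT WriteString, WriteCard, WriteLn;

VAR
  callsParse, callsEmit: CARDINAL;  (* one counter per routine *)

PROCEDURE Parse;
BEGIN
  INC(callsParse);  (* single line of instrumentation *)
  (* ... real work ... *)
END Parse;

PROCEDURE Emit;
BEGIN
  INC(callsEmit);
  (* ... real work ... *)
END Emit;

VAR
  i: CARDINAL;

BEGIN
  callsParse := 0;
  callsEmit  := 0;
  FOR i := 1 TO 1000 DO Parse END;
  Emit;
  (* The dump shows where the calls concentrate; the lopsided
     counts point at the routines worth recoding first. *)
  WriteString("Parse calls: ");  WriteCard(callsParse, 1);  WriteLn;
  WriteString("Emit calls:  ");  WriteCard(callsEmit, 1);   WriteLn;
END ProfDemo.
```

Call counts are a crude stand-in for time, but they are usually enough to find the one or two routines where a day of recoding pays off.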
BOTCHAIR@UOGUELPH.BITNET (Alex Bewley) (11/05/87)
I have had version three for a few weeks now, and it is really good. Mind you, I'm not the biggest developer to hit the earth. Its excluding linker is very good: it removes ANY module that hasn't been referenced in the code, which usually results in anywhere from 1% to 20% code removal. Also, a few more libraries have been added in this version. The utilities are very useful. M2decode can take .OBJ files and convert them into readable assembly language (symbols and all included). There is a cross-referencer, a source formatter, a make utility (except it keeps wanting to recompile the system libraries), and a couple of other useful ones.

Alex