toth@tellab5.tellabs.CHI.IL.US (Joseph G. Toth Jr.) (06/02/89)
What started out as a discussion (plea) regarding High Level Languages that
run on an Apple // computer (//e, //c, ][+, etc.) has rolled into a
discussion of P-Code systems and their merits in general.

I agree, Mr. Lyons, that if you are referring to an implementation on a //gs,
a dreaded IBM, or any of a number of other computers where a processor that
supports large stack frames is used, a P-Code system is dreadfully
inefficient compared to native code compiles.  So, LET'S GET BACK TO 6502
implementations.

On an Apple // (e or c or ][+), the effective difference between a P-Code
system and something compiled to "native code" is not that great when it
comes to execution speed.  Pseudo-stacks must be maintained for any
procedural variables, along with return pointers and anything else the
compiler thinks is important (flags, etc.).  Initialization code is always
created, and (in many cases) hardware drivers are loaded as subroutine
calls.  All this additional code is added, in many cases even when it is not
needed.  A guy I work with bought Aztec 'C' and compiled a program with a
single procedure that did 'printf ( "Hi" );'; this one-statement program
compiled to approximately a 4K-byte program - no disk I/O or any other
functions.

We could get into good programming techniques and implementations, but I
would rather not.  It just seems that (at least on the Apple //) compilers
that purport to be native code compilers must do many of the same things to
support their implementations that supposedly make P-Code systems large and
inefficient.  So, I'm not really pushing P-Code compilers as a perfect
system; they're not.  P-Code just seems to make sense in this application.

Back to the point: a couple of compilers were mentioned, ORCA Small C and
Prolog.  I went to my local Egghead store.  They said, "What are those,
compilers?  We don't sell those.  If you get us the name and address of the
company that created them, we will place an order for you.  However, you
won't be able to return them since it is a special order."  Hell, I could
direct-order them myself if I had that information.

I went to 2 Authorized Apple retailers (one by work, one near my home).
They said, "Why don't you buy a Mac?  We have quite a selection of compilers
for it."  They didn't even bother pushing a //gs.  It's like they were
trained in sales techniques at an IBM sales seminar.
-- 
------------------------------------------------+---------------------
Maybe I shouldn't have done it, sarcasm is so   | Joseph G. Toth Jr.
seldom understood.  Don't FLAME on me, please.  | uunet!tellab5!toth
dlyons@Apple.COM (David Lyons) (06/03/89)
In article <1373@tellab5.tellabs.CHI.IL.US> toth@tellab5.tellabs.CHI.IL.US
(Joseph G. Toth Jr.) writes:

>[...]
>I agree, Mr. Lyons, that if you are referring to an implementation on a
>//gs, a dreaded IBM, or any of a number of other computers where a
>processor that supports large stack frames is used, a P-Code system is
>dreadfully inefficient compared to native code compiles.

I don't recall ever arguing that P-code is dreadfully inefficient compared
to native code.  Actually, I thought my main point was that P-code
efficiency could be pretty decent, and that the code could be a lot more
compact.

>[...] A guy I work with bought Aztec 'C' and compiled a program with a
>single procedure that did 'printf ( "Hi" );'; this one-statement program
>compiled to approximately a 4K-byte program - no disk I/O or any other
>functions.

Not too surprising--printf() is a powerful function, and the compiler didn't
know that his program wasn't going to use that power.  Using puts() instead
of printf() should reduce the size considerably.

--Dave Lyons, Apple Computer, Inc.          |   DAL Systems
  AppleLink--Apple Edition: DAVE.LYONS      |   P.O. Box 875
  AppleLink--Personal Edition: Dave Lyons   |   Cupertino, CA 95015-0875
  GEnie: D.LYONS2 or DAVE.LYONS    CompuServe: 72177,3233
  Internet/BITNET: dlyons@apple.com    UUCP: ...!ames!apple!dlyons

  My opinions are my own, not Apple's.
coy@ssc-vax.UUCP (Stephen B Coy) (06/06/89)
In article <1373@tellab5.tellabs.CHI.IL.US>, toth@tellab5.tellabs.CHI.IL.US
(Joseph G. Toth Jr.) writes:
> All this additional code is added (in many cases, even
> if it is not needed - A guy I work with bought Aztec 'C' and compiled
> a program with a single procedure that did 'printf ( "Hi" );'; this
> one-statement program compiled to approximately a 4K-byte program - no
> disk I/O or any other functions).

printf() is a BIG function.  Try the same code on another system and look at
the size.  My guess is that 4K will all of a sudden seem tiny.  Just for a
sanity check I tried it on a couple of machines: a VAX running BSD (v?) gave
~9K, and a Compaq 286 running MS-DOS and Microsoft C v5.? with full
optimization gave ~7K.

> Maybe I shouldn't have done it, sarcasm is so   | Joseph G. Toth Jr.
> seldom understood.  Don't FLAME on me, please.  | uunet!tellab5!toth

Stephen Coy
uw-beaver!ssc-vax!coy
toth@tellab5.tellabs.CHI.IL.US (Joseph G. Toth Jr.) (06/07/89)
In article <2693@ssc-vax.UUCP>, coy@ssc-vax.UUCP (Stephen B Coy) writes:
> In article <1373@tellab5.tellabs.CHI.IL.US>, toth@tellab5.tellabs.CHI.IL.US
> (Joseph G. Toth Jr.) writes:
> > All this additional code is added (in many cases, even
> > if it is not needed - A guy I work with bought Aztec 'C' and compiled
> > a program with a single procedure that did 'printf ( "Hi" );'; this
> > one-statement program compiled to approximately a 4K-byte program -
> > no disk I/O or any other functions).
>
> printf() is a BIG function.  Try the same code on another system and
> look at the size.  My guess is that 4K will all of a sudden seem tiny.
>
> Stephen Coy

This rebuttal to my point about the size of the printf() function takes it
out of the context of the postings as a whole.  Maybe I didn't go into
detail about all of the complexities of printf() when I referred to
additional code that is not needed, but I had no intention of claiming that,
in the grand scheme of the compilers, the size was unnecessary.  (This could
be used as a great argument for using assembly instead of any compiler, and
it is absolutely valid for any piece of code as simple as the example.)

In order to provide a single function that performs many different
operations (string expansion, data conversion, etc.), the code that must be
generated in an executable is inherently large, even if much of it is never
executed.

The gist of my posting was to point out, to those who were claiming that
P-Code systems are worse than native code compilers because of their
inherently large P-Code interpreters (they're not THAT big), that the code
an available native code compiler generates contains inherent inefficiency
for simple operations, and that the distinction between P-Code systems and
native code is smaller than many would think.
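[The run-time nature of the format string is what forces the bulk in.  Here
is a toy sketch - hypothetical code, not from any of the compilers
discussed - of a printf-style core that formats into a buffer.  Because the
format string is interpreted at run time, the compiler cannot discard the
conversion cases a particular call never reaches:]

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Toy printf-style formatter (hypothetical).  Every case in the
   switch must be present in the executable, even for a call like
   toy_printf(buf, "Hi") that uses none of them.  Assumes a
   well-formed format string; returns the number of characters
   written, like printf(). */
int toy_printf(char *out, const char *fmt, ...)
{
    va_list ap;
    char *p = out;

    va_start(ap, fmt);
    for (; *fmt; fmt++) {
        if (*fmt != '%') {          /* ordinary character: copy it */
            *p++ = *fmt;
            continue;
        }
        switch (*++fmt) {           /* every converter is linked in... */
        case 'd':
            p += sprintf(p, "%d", va_arg(ap, int));
            break;
        case 's': {
            const char *s = va_arg(ap, const char *);
            strcpy(p, s);
            p += strlen(s);
            break;
        }
        case 'c':
            *p++ = (char)va_arg(ap, int);
            break;
        default:                    /* ...even when never executed */
            *p++ = *fmt;
            break;
        }
    }
    *p = '\0';
    va_end(ap);
    return (int)(p - out);
}
```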
To use more 'C' code examples:

Native code: printf(), fprintf(), and sprintf() would each include their own
copy of the code segments that perform conversions, etc.  This duplicated
code creates larger executables.

P-Code system: the same three functions end up using the same conversion
routines, etc., allowing a savings in code space due to the lack of
redundancy.

Another point for P-Code systems: the size of the executable file on disk IS
very small.  You can generate MANY programs and still use up minimal space
on a disk.  This is primarily because the physical executable code is
resident in the P-Code system file, which is stored on the disk once.  The
executable file of a native code program will be MUCH larger on disk.  This
is true for EVERY executable that is generated, since each executable must
contain the code for every function it uses.  This size can really add up.
-- 
------------------------------------------------+---------------------
Maybe I shouldn't have done it, sarcasm is so   | Joseph G. Toth Jr.
seldom understood.  Don't FLAME on me, please.  | uunet!tellab5!toth
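[The sharing a P-Code library gets for free can be sketched in C terms.  In
an ANSI C library, the v* variants play the role of the single shared
conversion core; the wrappers my_printf() and my_sprintf() below - the
names are mine, purely for illustration - both delegate to it, the way every
program under a P-Code system shares the one copy of the conversion routines
stored in the interpreter:]

```c
#include <stdarg.h>
#include <stdio.h>

/* Hypothetical wrappers.  Both delegate to one formatting core
   (vfprintf/vsprintf), so the conversion code exists once.  A native
   linker that instead gave each of printf/fprintf/sprintf a private
   copy of that core would duplicate it in every executable. */

int my_printf(const char *fmt, ...)
{
    va_list ap;
    int n;

    va_start(ap, fmt);
    n = vfprintf(stdout, fmt, ap);   /* shared conversion core */
    va_end(ap);
    return n;
}

int my_sprintf(char *buf, const char *fmt, ...)
{
    va_list ap;
    int n;

    va_start(ap, fmt);
    n = vsprintf(buf, fmt, ap);      /* same core, different sink */
    va_end(ap);
    return n;
}
```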