bjorn@dataio.UUCP (05/08/86)
I'm sitting here, moving our latest software product down to the IBM PC,
waiting for it to compile, and I notice this (odd) fact: the compiler
on the PC is better than the one on the VAX!  It's faster, it produces
better code, and it catches errors that the UNIX C does not.  So I'm
curious:

	The state-of-the-art in compilers has progressed on PCs,
	so why hasn't anyone come up with a better compiler for
	UNIX, or have I just not heard of it?

For your information I'm running UNIX 4.2bsd with the standard C
compiler on a VAX 11/750, and Datalight C 2.04 on an IBM PC/AT under
MSDOS 3.0.  The PC takes 5 minutes 26 seconds to compile 7605 lines of
code in 29 files (plus 735 lines of header in 13 files), whereas the
unloaded VAX (load average 1.13) takes 8 minutes 30 seconds.  All the
outside influences were identical: debugging, optimization, etc.

	Bjorn N Freeman-Benson
	FutureNet, a Data I/O company
chris@umcp-cs.UUCP (Chris Torek) (05/12/86)
In article <989@dataioDataio.UUCP>, bjorn@dataio.UUCP writes:
>... I notice this (odd) fact: the compiler on the PC is better than
>the one on the VAX!  It's faster, it produces better code, and it
>catches errors that the UNIX C does not.  So I'm curious:
>	The state-of-the-art in compilers has progressed on PCs,
>	so why hasn't anyone come up with a better compiler for
>	UNIX, or have I just not heard of it?

1. Your company gets paid for improving your compiler.  Is not
   that so?

2. The 4.3BSD C compiler has a number of improvements in both
   compilation speed and code generation.  (Nothing major, just
   little fixes, but then *we* are *not* paid to do this sort of
   thing....)

3. I believe there are several optimising compilers for Vax Unix
   (which version(s), I do not know) on the market.
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 1415)
UUCP:	seismo!umcp-cs!chris
CSNet:	chris@umcp-cs		ARPA:	chris@mimsy.umd.edu
jim@cs.strath.ac.uk (Jim Reid) (05/14/86)
In article <989@dataioDataio.UUCP> bjorn@dataio.UUCP writes:
>	The state-of-the-art in compilers has progressed on PCs,
>	so why hasn't anyone come up with a better compiler for
>	UNIX, or have I just not heard of it?
>
>For your information I'm running UNIX 4.2bsd with the standard C
>compiler on a VAX 11/750, and Datalight C 2.04 on an IBM PC/AT under
>MSDOS 3.0.  The PC takes 5 minutes 26 seconds to compile 7605 lines of
>code in 29 files (plus 735 lines of header in 13 files), whereas the unloaded
>VAX (load average 1.13) takes 8 minutes 30 seconds.  All the outside
>influences were identical: debugging, optimization, etc.

Whoopee!  I'm glad someone has proved that an AT - floppy disks and all? -
is faster than a VAX.  Maybe now I can convince folk it's a good idea to
scrap our VAX for an AT!!!   :-) :-) :-) :-) :-)

Comparisons like that are *totally meaningless* - What about the quality
of the generated code?  What "optimisations" do the compilers perform?
Do both produce much the same symbolic information for debugging?
What's involved in linking object modules in the two programs?  How
many passes over the source code/expanded code/"parse trees" does each
compiler do?  The 4BSD compiler has at least 5 - 6 if you count linking.
First there's the preprocessor, then the compiler proper has probably
two passes, the assembler has another two for good measure (depending on
how you look at the assembler).

Then there's your configuration - how much memory does each system have?
How much core does each compiler use/need?  How much paging or swapping
goes on during compilation?  How much disk traffic - intermediate files
etc - is done?

Granted, your AT compiler might have faster algorithms for symbol table
lookup and the like, but the only conclusion that can be drawn from the
numbers you gave is that for the conditions you describe, your AT C
compiler is faster.  It doesn't follow that both compilers or CPUs are
doing the same amount of work, so making comparisons or drawing
conclusions is pointless.

		Jim
bright@dataio.UUCP (05/14/86)
In article <1469@umcp-cs.UUCP> chris@umcp-cs.UUCP (Chris Torek) writes:
>In article <989@dataioDataio.UUCP>, bjorn@dataio.UUCP writes:
>>... I notice this (odd) fact: the compiler on the PC is better than
>>the one on the VAX!  It's faster, it produces better code, and it
>>catches errors that the UNIX C does not.  So I'm curious:
>1. Your company gets paid for improving your compiler.  Is not
>   that so?

No, it isn't so.  Datalight, Data I/O, Data Exchange, Data Translation Inc.,
Data Zone Inc., Datacom Northwest Inc., Datatech Enterprises Co. and
Data General are all independent companies.  The similarity in the names
is coincidence.
aglew@ccvaxa.UUCP (05/15/86)
I also noticed that C compilers on PCs are frequently better than the
standard UNIX issue.  Little things like printing a bit of context with
your errors, like:

	cahr stuff;
Line EEE:  ^    ^  Illegal declaration or obsolete initialization

instead of telling you `invalid initialization on line 563'.

Coming back to minicomputers is like going back to the Stone Age - big,
fast, powerful monsters, but not quite as intelligent as their smaller
successors.

Andy "Krazy" Glew.  Gould CSD-Urbana.    USEnet:  ihnp4!uiucdcs!ccvaxa!aglew
1101 E. University, Urbana, IL 61801     ARPAnet: aglew@gswd-vms
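A context-printing diagnostic of the kind described above takes only a few
lines of code.  The sketch below is purely illustrative - it is not taken
from Datalight C or any other compiler mentioned in this thread, and the
function and parameter names are invented for the example:

	#include <stdio.h>

	/* Illustrative only: print the offending source line and put a
	 * caret under the column where the trouble was detected.
	 */
	void
	report_error(char *filename, int lineno, int column,
	             char *line, char *message)
	{
		int i;

		fprintf(stderr, "%s, line %d:\n", filename, lineno);
		fprintf(stderr, "\t%s\n\t", line);
		for (i = 0; i < column && line[i] != '\0'; i++)
			putc(line[i] == '\t' ? '\t' : ' ', stderr);
		fprintf(stderr, "^ %s\n", message);
	}

	int
	main(void)
	{
		report_error("ed.c", 563, 0, "cahr stuff;",
		    "Illegal declaration or obsolete initialization");
		return 0;
	}

The only real cost is that the compiler has to keep the current source line
around until it is sure no diagnostic will be issued for it.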
buls@dataioDataio.UUCP (Rick Buls) (05/15/86)
In article <996@dataioDataio.UUCP> bright@dataio.UUCP (Walter Bright) writes:
>In article <1469@umcp-cs.UUCP> chris@umcp-cs.UUCP (Chris Torek) writes:
>>In article <989@dataioDataio.UUCP>, bjorn@dataio.UUCP writes:
>>>... I notice this (odd) fact: the compiler on the PC is better than
>>>the one on the VAX!  It's faster, it produces better code, and it
>>>catches errors that the UNIX C does not.  So I'm curious:
>>1. Your company gets paid for improving your compiler.  Is not
>>   that so?
>
>No, it isn't so.  Datalight, Data I/O, Data Exchange, Data Translation Inc.,
>Data Zone Inc., Datacom Northwest Inc., Datatech Enterprises Co. and
>Data General are all independent companies.  The
>similarity in the names is coincidence.

Please note, however: Walter Bright is the author of the Datalight
compiler, and Bjorn works on the compiler as well.  Data I/O employs both
of these fellows; the compiler is a side-line for both!  So Chris Torek's
point still holds, even though he may have been under a misconception.
Bjorn's original article, although accurate, did appear to ME as being
somewhat self-serving.
-- 
Rick Buls   (Data I/O; Redmond, Wa)
uw-beaver!entropy!dataio!buls
greg@utcsri.UUCP (Gregory Smith) (05/18/86)
In article <131@stracs.cs.strath.ac.uk> jim@cs.strath.ac.uk (Jim Reid) writes:
>In article <989@dataioDataio.UUCP> bjorn@dataio.UUCP writes:
>>	The state-of-the-art in compilers has progressed on PCs,
>>	so why hasn't anyone come up with a better compiler for
>>	UNIX, or have I just not heard of it?
>>
>Comparisons like that are *totally meaningless* - What about the quality of
                                    ^^^^^
	depending on how many compiles you have to wait through.
>the generated code?  What "optimisations" do the compilers perform?  Do both
>produce much the same symbolic information for debugging?  What's involved in
>linking object modules in the two programs?  How many passes over the source
>code/expanded code/"parse trees" does each compiler do?  The 4BSD compiler has
>at least 5 - 6 if you count linking.  First there's the preprocessor, then
>the compiler proper has probably two passes, the assembler has another two
>for good measure (depending on how you look at the assembler).  Then there's
>your configuration - how much memory does each system have?  How much core
>does each compiler use/need?  How much paging or swapping goes on during
>compilation?  How much disk traffic - intermediate files etc - is done?
>
Give me a break.  Sure, having a separate pre-processor will slow the
compiler down considerably, but is it an advantage??????  It only gives
you a certain amount of convenience in implementing the compiler.
Consider that the cpp has to do lexical analysis as sophisticated as that
done by the compiler, in order to do `#if's.

It makes a *lot* of sense to have the cpp/lexer/parser in a single pass -
Much code can be shared.  When you find an identifier, for example, you
go look it up in the #define table before saying you have found an
identifier/keyword - as opposed to going through everything twice.
Consider the single character i/o that will be saved - even if it is done
through a pipe.  The only disadvantage is that the cpp and compiler
symbol tables must live together in the same process.

If compiler A has more passes than compiler B, it doesn't mean 'A' is
better or more sophisticated - It could just mean that the implementors
of B did a better job.  Your argument that the 4.2 compiler is slower
because it generates better code makes sense, but I haven't the slightest
idea which one is better in this area.

I know of one *big* reason why the UNIX compiler would be easy to beat -
it produces human-readable assembler.  If it produced a binary-coded
assembler, the costs of (1) writing out all that text (2) reading in all
that text [twice] and (3) *lexing* and *parsing* all that &*!@#@ TEXT and
looking up all those mnemonics [twice!] would be saved, and no
functionality would be lost.  Of course, you would want a binary-to-human
assembler translator as a utility...

This makes even more sense for any compiler that may have to run off
floppies - the full assembler text can be considerably larger than the C
program, so you would be rather limited in what you could compile if full
assembler were used.
-- 
"We demand rigidly defined areas of doubt and uncertainty!" - Vroomfondel
----------------------------------------------------------------------
Greg Smith     University of Toronto      UUCP: ..utzoo!utcsri!greg
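To make the single-pass scheme Greg describes concrete, here is a rough
sketch of a lexer routine that consults the #define table as soon as it
has scanned an identifier, so macro expansion and tokenising share one
scan of the text.  All the helper routines (nextch, pushback,
lookup_macro, push_input, keyword, yylex) are assumptions invented for
the sketch, not code from any real compiler, and function-like macros
are ignored entirely:

	#include <ctype.h>
	#include <stddef.h>

	/* Hypothetical helpers assumed by this sketch. */
	extern int   nextch(void);             /* next char of (possibly pushed-back) input */
	extern void  pushback(int c);          /* give one character back */
	extern char *lookup_macro(char *name); /* #define body for a name, or NULL */
	extern void  push_input(char *text);   /* arrange for text to be read before real input */
	extern int   keyword(char *name);      /* keyword token number, or 0 */
	extern int   yylex(void);              /* the lexer proper */

	#define T_IDENT 257

	/* Called when the lexer sees an identifier-start character:
	 * gather the name in the same scan cpp would have made, and
	 * consult the #define table *before* deciding what token it is.
	 * An integrated cpp never writes the expansion back out as text
	 * for a later pass to re-read.
	 */
	int
	lex_identifier(int c, char *buf, int bufsiz)
	{
		int i = 0;
		char *body;

		while (i < bufsiz - 1 && (isalnum(c) || c == '_')) {
			buf[i++] = c;
			c = nextch();
		}
		buf[i] = '\0';
		pushback(c);			/* first character past the name */

		if ((body = lookup_macro(buf)) != NULL) {
			push_input(body);	/* expansion is rescanned, not written out */
			return yylex();		/* re-enter the lexer on the pushed text */
		}
		return keyword(buf) ? keyword(buf) : T_IDENT;
	}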
davidsen@steinmetz.UUCP (Davidsen) (05/22/86)
In article <989@dataioDataio.UUCP> bjorn@dataio.UUCP writes:
>I'm sitting here, moving our latest software product down to the IBM PC,
>waiting for it to compile, and I notice this (odd) fact: the compiler
>on the PC is better than the one on the VAX!  It's faster, it produces
>better code, and it catches errors that the UNIX C does not.  So I'm
>curious:
>	The state-of-the-art in compilers has progressed on PCs,
>	so why hasn't anyone come up with a better compiler for
>	UNIX, or have I just not heard of it?
>
>For your information I'm running UNIX 4.2bsd with the standard C
>compiler on a VAX 11/750, and Datalight C 2.04 on an IBM PC/AT under
>MSDOS 3.0.  The PC takes 5 minutes 26 seconds to compile 7605 lines of
>code in 29 files (plus 735 lines of header in 13 files), whereas the unloaded
>VAX (load average 1.13) takes 8 minutes 30 seconds.  All the outside
>influences were identical: debugging, optimization, etc.
>
>	Bjorn N Freeman-Benson
>	FutureNet, a Data I/O company

I have a UNIX benchmark suite which I run everywhere I can to give me an
idea of the relative speed of various boxes.  The reason that stuff
compiles faster on the AT is that the 750 isn't that much faster than an
AT, and the VAX is probably loaded.  The following info is from
benchmarks I took in the last year, AT running SCO Xenix/286, VAX 750
running SysV (*not* 4.2).

	Test                              AT      750
	integer Kops/sec                358.2    364.1
	float Kops/sec                   23.8     39.5
	Branch and compare K/sec
	   int                          183.8    246.7
	   float                         11.2    138.5
	trig functs op/sec               1159     1020
	avg access 2MB file/ms           23.2      8.2
	pipes Kbytes/sec                304.8    276.8
	systemcalls K/sec                 4.0      3.4
	----------------------------------------------------------------

What this shows is that (a) an AT souped up to 9MHz or so and given a
better disk is a really nice 1-2 user system (given realistic memory),
and (b) that even a stock AT is faster than whatever fraction of a VAX
you can actually get in most places.

As for compiler checking, I completely agree with you.  The Microsoft C
compiler catches things that slip past lint, and the v4.0 (I'm doing a
beta test) is even better!  The nice thing about PCC is that it's
portable, and anyone who's ever moved UNIX code to other compilers
(PCDOS, Xenix, VAX-C) may find that the code either won't compile or
runs much faster but doesn't work.

Hope this explains why the AT looks so good, it is.
-- 
-bill davidsen

  ihnp4!seismo!rochester!steinmetz!--\
                                       \
        unirot ------------->---> crdos1!davidsen
                                       /
           sixhub ---------------------/        (davidsen@ge-crd.ARPA)

"Stupidity, like virtue, is its own reward"
cg@myrias.UUCP (05/23/86)
Jim @ Strathclyde missed the point about the DataLight compiler running
faster on an IBM PC-AT than the standard BSD compiler does on a VAX.  The
result is not a comparison of machines, but a comparison of compilers.
What contortions the compiler goes through is irrelevant - how long it
takes and how good the resulting code is are what's important.  We all
know that most UN*X C compilers are hogs!

I will re-iterate the question that was asked: Why aren't there any
decent C compilers provided with UN*X systems?  (Perhaps there are, and I
just don't know about them.  By decent I mean that they give meaningful
error messages, never crash or abort, generate good code, and run
quickly.)
chris@umcp-cs.UUCP (Chris Torek) (05/28/86)
In article <2786@utcsri.UUCP> greg@utcsri.UUCP (Gregory Smith) writes:
>... having a separate pre-processor will slow the compiler down
>considerably, but is it an advantage??????  It only gives you a
>certain amount of convenience in implementing the compiler.

Not so!  There is another advantage.  The preprocessor can be used
alone, or in combination with programs other than the C compiler.
This is the `software tools' philosophy: if you can make a clean
conceptual break in a task, make that break programmatically; you
then have a set of tools that may be useful in surprising ways.

>It makes a *lot* of sense to have the cpp/lexer/parser in a single
>pass - Much code can be shared.

It makes a lot of sense, in terms of time to code generation, to put
everything in a single process.  But I am not now willing to rewrite
the entire cpp/ccom/c2/as sequence as a single program.  It is simply
not worth the effort to me.  It might be worth the effort to others,
though.  And again, we would still need a separate cpp to make the
kernel, and I would either have to include the work `inline' (asm.sed
for you 4.1 and 4.2 folks) does, or have a separate ccom and c2+as
phase.

>If compiler A has more passes than compiler B, it doesn't mean 'A'
>is better or more sophisticated - It could just mean that the
>implementors of B did a better job.

Or that the implementors of B were aiming for speed, while those of
A were aiming for reusability.  Or that A runs on smaller machines;
this is probably the real reason for those multi-pass PDP-11 compilers.
They just turned out to be a good idea (in some ways).

In article <250@myrias.UUCP> cg@myrias.UUCP writes:
>What contortions the compiler goes through is irrelevant - how long it
>takes and how good the resulting code is are what's important.

Important to whom?  What about those writing the compilers themselves,
or using parts of them?  Did you know that the F77 and Pascal compilers
share the same code generation program, which is itself a modification
of the original second pass of the C compiler proper?  And all the
compilers share the assembler, which does the grunge work of optimising
branch instructions on a machine where conditional branches have a
limited distance, but unconditionals do not.

>We all know that most UN*X C compilers are hogs!  I will re-iterate
>the question that was asked: Why aren't there any decent C compilers
>provided with UN*X systems?

From whom did you buy your UN*X systems?  If Berkeley, what did you
expect from a research institution?  `If it were any good we would
sell it.'  (Who said that?)  If from a `real world' company, complain
away---but not to us!
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 1516)
UUCP:	seismo!umcp-cs!chris
CSNet:	chris@umcp-cs		ARPA:	chris@mimsy.umd.edu
bzs@bu-cs.UUCP (Barry Shein) (05/29/86)
Re: cpp as a separate pass/program

I wonder out loud how hard it would be to turn CPP into a subroutine,
change its main() to cpp_main() and set up the call in the main() of
ccom, and of course figure out a way to get cc to understand that
(probably could just pass the arg list as a single quoted string and
break it up in ccom?)

If that were done it could be a MAKE option when the compiler is built
(would probably build both so you still had the stand-alone cpp
on-line, but at least now you're sure the code is the same.)  Maybe
there would be some externals clashes, but I've done this sort of thing
before and it's usually just idiot work (changing names of symbols that
clash.)

Then again, I agree with Torek, it's not of obvious value, but I think
a compromise like this would be easy enough.  I think the current trend
in technology indicates such efforts are probably a waste of time (by
the time you get it debugged and out, everyone is running a machine
twice as fast anyhow; you can say any saving is worthwhile, but that's
not true - there's a point where I can't tell it's there.)

	-Barry Shein, Boston University
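The mechanical part of what Barry describes might look roughly like the
sketch below.  The cpp_main()/compile() names, the temp-file handshaking
they imply, and the naive argument splitting are all assumptions made for
the illustration; nothing here is from the 4.2BSD sources, and a real cc
driver's argument handling is considerably more careful:

	#include <string.h>

	extern int cpp_main(int argc, char **argv);	/* cpp's old main(), renamed */
	extern int compile(void);			/* stand-in for the rest of ccom */

	/* ccom's main(): split the single quoted argument string that cc
	 * passed down into an argv for the preprocessor, run the
	 * preprocessor in-process, then fall into the compiler proper.
	 */
	int
	main(int argc, char **argv)
	{
		char *cpp_argv[64];
		int   cpp_argc = 0;
		char *p;

		cpp_argv[cpp_argc++] = "cpp";
		if (argc > 1)
			for (p = strtok(argv[1], " ");
			     p != NULL && cpp_argc < 63;
			     p = strtok(NULL, " "))
				cpp_argv[cpp_argc++] = p;
		cpp_argv[cpp_argc] = NULL;

		if (cpp_main(cpp_argc, cpp_argv) != 0)
			return 1;
		return compile();
	}

As the follow-ups note, this saves a process and an exec but the
preprocessing is still logically a separate pass over the text.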
ludemann@ubc-cs.UUCP (Peter Ludemann) (05/30/86)
In article <2786@utcsri.UUCP> greg@utcsri.UUCP (Gregory Smith) writes:
>I know of one *big* reason why the UNIX compiler would be easy to beat
>- it produces human-readable assembler.  If it produced a binary-coded
>assembler, the costs of (1) writing out all that text (2) reading in
>all that text [twice] and ...

Sorry, not true.  The deSmet C compilers (for IBM-PC and Macintosh)
produce human readable assembler and they are still fast.  For example,
on my Mac, I can compile about 3000 lines per minute (including i/o to
and from floppies), although I do use a RAM disk for the temp files
(the compiler is 3 passes, the last being the assembler).

Incidentally, I think that deSmet (or C-ware) makes fine products at a
good price; they are also very responsive to bug reports --- I have
received written replies every time.  (Incidentally, the deSmet C on
the Mac beat almost all the benchmarks in the recent Byte article, both
for compiling and for run-time code.)
aglew@ccvaxa.UUCP (05/30/86)
>/* Written 3:49 pm May 28, 1986 by chris@umcp-cs.UUCP */
>In article <2786@utcsri.UUCP> greg@utcsri.UUCP (Gregory Smith) writes:
>>... having a separate pre-processor will slow the compiler down
>>considerably, but is it an advantage??????  It only gives you a
>>certain amount of convenience in implementing the compiler.
>
>Not so!  There is another advantage.  The preprocessor can be used
>alone, or in combination with programs other than the C compiler.
>This is the `software tools' philosophy: if you can make a clean
>conceptual break in a task, make that break programmatically; you
>then have a set of tools that may be useful in surprising ways.

Unfortunately, there is no longer a clean conceptual break between the
C pre-processor and the compiler: `sizeof' can be used in pre-processor
constant-expressions.  I very much doubt that a cpp that parses enough
of C to understand sizeof will be useful in non-C-related applications.

Andy "Krazy" Glew.  Gould CSD-Urbana.    USEnet:  ihnp4!uiucdcs!ccvaxa!aglew
1101 E. University, Urbana, IL 61801     ARPAnet: aglew@gswd-vms
greg@utcsri.UUCP (Gregory Smith) (05/31/86)
In article <1723@umcp-cs.UUCP> chris@maryland.UUCP (Chris Torek) writes:
>In article <2786@utcsri.UUCP> greg@utcsri.UUCP (I) write:
>>... having a separate pre-processor will slow the compiler down
>>considerably, but is it an advantage??????  It only gives you a
>>certain amount of convenience in implementing the compiler.
>
>Not so!  There is another advantage.  The preprocessor can be used
>alone, or in combination with programs other than the C compiler.
>This is the `software tools' philosophy: if you can make a clean
>conceptual break in a task, make that break programmatically; you
>then have a set of tools that may be useful in surprising ways.

It may also be surprising in useless ways ;-)  The problem with cpp is
that it is rather C-specific - it knows the C comments, and string
formats, and that 123.e+12 does *not* contain an 'e' which is a
candidate for #define expansion ( at least it should :-) ).  Contrast
to m4 which is a much more general beasty.  In general, though, I agree
with this idea.

>>If compiler A has more passes than compiler B, it doesn't mean 'A'
>>is better or more sophisticated - It could just mean that the
>>implementors of B did a better job.
>
>Or that the implementors of B were aiming for speed, while those of
>A were aiming for reusability.  Or that A runs on smaller machines;
>this is probably the real reason for those multi-pass PDP-11 compilers.
>They just turned out to be a good idea (in some ways).

Yes, and yes.

>In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 1516)
-- 
"We demand rigidly defined areas of doubt and uncertainty!" - Vroomfondel
----------------------------------------------------------------------
Greg Smith     University of Toronto      UUCP: ..utzoo!utcsri!greg
guy@sun.uucp (Guy Harris) (06/01/86)
> And again, we would still need a separate cpp to make the kernel,

Well, note that if a "cc" is to be considered a real UNIX "cc", it will
have to support "-P" and "-E".  This means that if you don't have a
separate preprocessor, there will have to be *some* way of running just
the preprocessor part of the lexical analyzer and getting compilable C
output from it.  (Sorry, any of you people who want to bundle "cpp" and
the lexical analyzer, but that's the way it is.)  Given that, you can
either hack "cc" to understand ".S" files as assembler source files to
be run through the preprocessor, or have a "/lib/cpp" command which runs
this wonderful all-passes-integrated C compiler in "preprocessor only"
mode.  So there are ways of doing it without "cpp" being a separate
program, but the important point is that it still isn't just a matter of
hiding "cpp"s functionality in the lexical analyzer.

> and I would either have to include the work `inline' (asm.sed for you 4.1
> and 4.2 folks) does, or have a separate ccom and c2+as phase.

Or borrow the idea AT&T-IS has been talking about (see the note on
"Assembler windows" in the article "The Evolution of C - Past and
Future" in the recent UNIX edition of the AT&T Bell Laboratories
Technical Journal - October 1984, Vol. 63 No. 8 Part 2), where the "asm"
keyword is used differently:

	...An experimental implementation now being evaluated uses the
	keyword "asm" in a different context.  A declaration of the form

		asm f(arg1, arg2, ...)
		{
			...
		}

	defines a function "f" to be compiled in line (without function
	linkages).  The programmer can specify alternate assembly-language
	expansions in the function prototype, depending on the storage
	classes of the actual parameters.

This has the advantage that it makes it easier to drop into assembler
when you absolutely have to, and the disadvantage that it makes it
easier to drop into assembler when you *don't* have to.

> Important to whom?  What about those writing the compilers themselves,
> or using parts of them?  Did you know that the F77 and Pascal
> compilers share the same code generation program, which is itself
> a modification of the original second pass of the C compiler proper?

Which is a very common practice; how do other systems implement this?
Do they have the language-dependent first pass and the
language-independent second pass as separate programs?

> And all the compilers share the assembler, which does the grunge
> work of optimising branch instructions on a machine where conditional
> branches have a limited distance, but unconditionals do not.

And don't forget the grunge work of understanding your particular
machine and UNIX system's object file format....
-- 
Guy Harris
{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
guy@sun.arpa
mash@mips.UUCP (06/02/86)
In article <3844@sun.uucp> guy@sun.uucp (Guy Harris) writes:
>...discussion why cpp functionality must be available...
>So there are ways of doing it without "cpp" being a separate program, but
>the important point is that it still isn't just a matter of hiding "cpp"s
>functionality in the lexical analyzer.

We've found it handy to use "cpp" as pre-pass to FORTRAN, PASCAL, C [of
course] and assembler: greatly eases multi-language software work.

>> and I would either have to include the work `inline' (asm.sed for you 4.1
>> and 4.2 folks) does, or have a separate ccom and c2+as phase.
>
>Or borrow the idea AT&T-IS has been talking about (see the note on
>"Assembler windows" in the article "The Evolution of C - Past and Future" in
>the recent UNIX edition of the AT&T Bell Laboratories Technical Journal -
>October 1984, Vol. 63 No. 8 Part 2), where the "asm" keyword is used
>differently:
>
>	...description of asm f(arg1, arg2, ...) {...}
>
>This has the advantage that it makes it easier to drop into assembler when
>you absolutely have to, and the disadvantage that it makes it easier to drop
>into assembler when you *don't* have to.

Although asm() can be very useful on occasion, it's a sad thing, and it
just about wrecks the use of good optimizing compilers.  Unless I recall
incorrectly, there wasn't much of this in PDP-11 days, but it really got
popular on machines with slow subroutine calls.  From experience, I much
prefer a good optimizing compiler on a machine with fast calls: the wish
for "asm" drops away pretty quickly.

This leads to another set of questions: real data would be appreciated:

	Who uses asm?  On what machines?
	How much performance was it worth?
	Why did you use it?
		Structural reasons? (i.e., getting to privileged ops)
		Performance of small functions (i.e., spl?())
		Other performance (like getting long moves inline).
	Did you use it inside the kernel, in libraries, or in
		application code?
	Have you ever been bitten by compiler changes wrecking the code?
	Do you have equivalent asm() code for 2 or more machines?
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	{decvax,ucbvax,ihnp4}!decwrl!mips!mash, DDD:  	408-720-1700, x253
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086
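For anyone who has not run into it, the construct being asked about is the
traditional PCC-style asm() statement, which passes its string argument
straight through to the assembler output.  The sketch below shows the
classic "small privileged function" use; the VAX register numbers and
priority value are illustrative only, not the actual 4.2BSD definitions,
and the whole thing depends on the old PCC convention that a function's
value is whatever was left in r0:

	/* A sketch only - not real kernel code. */
	splhigh()
	{
		asm("	mfpr	$18,r0");	/* old priority becomes the return value */
		asm("	mtpr	$0x1f,$18");	/* raise the interrupt priority level */
	}

This illustrates exactly the fragility mash asks about: change the
compiler's register usage or calling convention and code like this breaks
silently.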
phil@sequent.UUCP (VP de GSP) (06/03/86)
There is another very good reason to logically separate the phases of a
compile into separate pieces.  A multi-processor computer (like our
BALANCE 21000) for example can do:

	% /lib/cpp ed.c | /lib/ccom | /lib/c2 | as -o ed.o
	89.8u 7.0s 0:56 172% 29+21io 10pf+0w
	                ====

Try doing that with a single process compiler ...

--Phil Hochstetler, Sequent Computer Systems, Inc.
bobr@zeus.UUCP (Robert Reed) (06/04/86)
In article <697@bu-cs.UUCP> bzs@bu-cs.UUCP (Barry Shein) writes:
>
> I wonder out loud how hard it would be to turn CPP into a subroutine,
> change its main() to cpp_main() and set up the call in the main() of
> ccom,...

What's the point?  Sure, it consolidates the C compiler into one less
load module, but the processing is still done as a separate pass, and
should have no effect on performance or functionality.
greg@utcsri.UUCP (Gregory Smith) (06/04/86)
In article <3844@sun.uucp> guy@sun.uucp (Guy Harris) writes:
>> And again, we would still need a separate cpp to make the kernel,
>
>Well, note that if a "cc" is to be considered a real UNIX "cc", it will have
>to support "-P" and "-E".  This means that if you don't have a separate
>preprocessor, there will have to be *some* way of running just the
>preprocessor part of the lexical analyzer and getting compilable C output
>from it.  (Sorry, any of you people who want to bundle "cpp" and the lexical
>analyzer, but that's the way it is.)  Given that, you can either hack "cc"

Bee's Knees.  Any C compiler with an integral cpp will have a function
'inchar()' ( or possibly a macro for speed ) which is called from the
lexer to get the next input char from the cpp.  In effect, 'inchar()'
runs the cpp.  So the -E option is done by the following magnum opus
(which of course is also used to debug the cpp code):

	while( (c=inchar()) != EOF )
		putc( c, outfile );

Modularity in software design does not imply a separate program for
each module.
-- 
"We demand rigidly defined areas of doubt and uncertainty!" - Vroomfondel
----------------------------------------------------------------------
Greg Smith     University of Toronto      UUCP: ..utzoo!utcsri!greg
faustus@cad.BERKELEY.EDU (Wayne A. Christopher) (06/05/86)
In article <2600061@ccvaxa>, aglew@ccvaxa.UUCP writes:
> I very much doubt that a cpp that parses enough of C to understand
> sizeof will be useful in non-C-related applications.

You're wrong: /lib/cpp doesn't recognise sizeof, /lib/ccom does.

	Wayne
aglew@ccvaxa.UUCP (06/06/86)
... > sizeof and /lib/cpp

(1) I apologize to the net.  The latest ANSI C draft explicitly does not
allow sizeof in #if conditionals.  I believe that earlier versions did,
or were vague enough to let me think so.

(2) I am mildly annoyed by people who say "/lib/cpp does not recognize
sizeof, /lib/ccom does".  People who talk in the present tense show a
limited imagination.  Ditto people who talk in a limited geographic or
market-oriented tense.  The C preprocessor is not always /lib/cpp, nor
is the first step of the compiler /lib/ccom; there are many C systems
out there where they have different names, or are merged.  There are
even systems where sizeof is recognized in #ifs.

Enough.  Sorry to, perhaps, have misled anybody about sizeof in ANSI C.
Sorry, also, to be disappointed by this being missing.
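Since sizeof is out of bounds in #if, the usual workaround is to move the
test into C proper, where sizeof is available at compile time.  One
well-worn (if ugly) trick is the negative-array-size assertion; the sketch
below is not tied to any particular compiler and the array name is
arbitrary:

	/* The preprocessor will not accept
	 *	#if sizeof(long) == 4
	 * so test in C proper instead: if the assumption fails, the
	 * array below gets size -1 and the compiler must reject the
	 * declaration - a crude compile-time assertion.
	 */
	static char long_is_four_bytes[sizeof(long) == 4 ? 1 : -1];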
gwyn@brl-smoke.ARPA (Doug Gwyn ) (06/07/86)
In article <275@zeus.UUCP> bobr@zeus.UUCP (Robert Reed) writes:
>What's the point?  Sure, it consolidates the C compiler into one less load
>module, but the processing is still done as a separate pass, and should have
>no effect on performance or functionality.

(a) This discussion pertains to a particular implementation of the
compiler, not to the language itself.

(b) UNIX (the implementation being discussed) actually works better if
you let it multitask, rather than insisting on purely sequential
execution.
guy@sun.uucp (Guy Harris) (06/10/86)
> Modularity in software design does not imply a separate program for each
> module.

Neither did my posting.  Read it again.  It says "there will have to be
some way of running *just the preprocessor part of the lexical analyzer*
and getting compilable C output from it."  The posting was meant as a
warning to people who plan to build their #define-and-#include-handler
(if it doesn't pre-process the source, it can't really be called a
preprocessor) in such a way that you *can't* get the "preprocessed"
source out of it.

Furthermore, I'm not convinced that "any C compiler with an integral cpp
will have a function 'inchar'...".  Somebody may design a C compiler
which *isn't* quite so modular.
-- 
Guy Harris
{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
guy@sun.com (or guy@sun.arpa)