[comp.lang.c++] How compatible are the various C++ implementations?

ts@cup.portal.com (Tim W Smith) (10/09/90)

We are thinking of moving from C to C++ as our primary
development language at work.  Before doing this, we
need to make sure that we can get a set of compilers
across various platforms that are compatible.

Most of our work is with standalone systems.  Right now,
we tend to use Turbo C or Microsoft C to write our stuff
if the target system has an 8086 family CPU, and Think C
if the target has a 68000 family CPU.  We take the output
of the compiler, wrap it in standalone "glue", and stick
it in the standalone system.

How well would this work with C++?  For example, suppose we
tried to use Turbo C++ and Apple MPW C++.  Since C++ is
relatively new, there are bugs in various implementations.
Are we going to spend a lot of time working around various
compiler bugs?  (Nothing is as annoying as a compiler bug
when working on embedded code.  Debugging facilities are
often quite primitive, so you can't just step through with
a source level debugger and see that the compiler has
screwed up (I recently worked on a project where my only
debugging aid was an LED that I could turn on or off.  Oh
well, it provided good practice for reading Morse code :-) ) )

We don't care about various add-on class libraries provided
by the compiler vendor.  We plan to implement our own that
are geared toward the stuff we write, so we only care about
the basic compiler.

In summary, 1) do the various C++ compilers out there now
really conform to the C++ standard?  2) Are they mature enough
that we won't be hit by obscure compiler bugs?  3) Are there
any compilers we need to avoid because they "know" that they
are on a particular system?  For example, a compiler that is
going to implement "new" by outputting a DOS INT instruction
to allocate memory would not be nice.  We need some way to
hook into this sort of thing so that we can provide it on
the standalone system.
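
For instance, here is roughly what we have in mind (just a sketch;
pool_alloc() and pool_free() are made-up names standing in for our own
standalone glue routines):

	#include <cstddef>

	// Hypothetical glue routines -- in the real system these hand out
	// memory from a pool we set up ourselves at startup.
	extern "C" void *pool_alloc(std::size_t nbytes);
	extern "C" void  pool_free(void *p);

	// Route every use of "new"/"delete" through the glue, so the
	// compiler's runtime never gets a chance to issue a DOS INT (or
	// call anybody's malloc) behind our backs.
	void *operator new(std::size_t nbytes)
	{
	    return pool_alloc(nbytes);   // sketch: assume the pool never runs dry
	}

	void operator delete(void *p) noexcept
	{
	    pool_free(p);
	}

As long as a compiler funnels every "new" through a replaceable operator
new like this, we can live with it.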

I'm somewhat pessimistic at the moment, since my very first attempt
to write a somewhat real C++ program found a big bug in Turbo C++.
If you overload the comma operator, the compiler evaluates expressions
involving the overloaded operator incorrectly.  For example, if you have
a type T, and T,T yields a type T, then the expression

	T1,T2,T3,T4

where Ti is of type T is evaluated as

	(T1,T2),(T3,T4)

where the second comma is treated like the ordinary C comma
operator.
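
Here is roughly the kind of test that showed it (reconstructed from
memory, so treat the class and the names as made up):

	#include <iostream>

	struct T {
	    int id;
	    explicit T(int i) : id(i) {}
	};

	// User-defined comma: should be called once for each ',' in the chain.
	T operator,(const T &lhs, const T &rhs)
	{
	    std::cout << "operator,(" << lhs.id << ", " << rhs.id << ")\n";
	    return rhs;
	}

	int main()
	{
	    T t1(1), t2(2), t3(3), t4(4);
	    (t1, t2, t3, t4);   // should print three lines, left to right
	    return 0;
	}

A correct compiler prints three lines, one per comma; with the bug, one
of the commas falls back to the plain C comma operator and a call goes
missing.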

I would really hate to write some big standalone system for a 68000
using MPW C++, make heavy use of overloading the comma operator, then
port to an 80386 with Turbo C++ and find that I have to rewrite
everything to avoid overloading the comma operator.  This is the
kind of thing I want to avoid.

					Tim Smith

ps: the exact details in the above example may be wrong, as I am
doing it from memory.  The basic idea is right, however.  Expressions
with multiple comma operators failed, with the failure being a comma
evaluated like a regular C comma rather than calling the overloaded
operator function.  Changing the comma to << got rid of the problem.

sking@nowhere.uucp (Steven King) (10/12/90)

In Message <34661@cup.portal.com> ts@cup.portal.com (Tim W Smith) writes:

>We are thinking of moving from C to C++ as our primary
>development language at work.  Before doing this, we
>need to make sure that we can get a set of compilers
>across various platforms that are compatible.

>Most of our work is with standalone systems.  Right now,
>we tend to use Turbo C or Microsoft C to write our stuff
>if the target system has an 8086 family CPU, and Think C
>if the target has a 68000 family CPU.  We take the output
>of the compiler, wrap it in standalone "glue", and stick
>it in the standalone system.

	[ other stuff about compatibility between various target platforms ]

   It was for these same reasons (and cost as well) that I chose a translator
as opposed to a native code compiler.  Instead of purchasing separate C++
compilers for each target (even assuming they were available), I have a single
C++ translator running on my host platform (386 Unix) that can produce C source
for 8051, v40 (an 80186 sort of), 68k, this here box, or anything else that has
a C compiler for it.  I don't have to worry about compatibility at the C++ level
since the C output is the same for each target; one only needs to evaluate the
capabilities of the target C compiler.
 
  Also, by using a translator, one has the option of editing the C output to
correct any "deficiencies" -- usually not fun, but occasionally necessary,
especially for embedded systems work where one's execution environment doesn't
match traditional general-purpose computers (code in ROM, discontiguous memory
spaces, etc.).


-- 
-------------------------------------------------------------------------------
new && improved:			 			 sking@nowhere
old && reliable:			...!cs.utexas.edu!ut-emx!nowhere!sking
-------------------------------------------------------------------------------

steve@taumet.com (Stephen Clamage) (10/14/90)

sking@nowhere.uucp (Steven King) writes:

>   It was for these same reasons (and cost as well) that I chose a translator
>as opposed to a native code compiler.  Instead of purchasing separate C++
>compilers for each target (even assuming they were available), I have a single
>C++ translator running on my host platform (386 Unix) that can produce C source
>for 8051, v40 (an 80186 sort of), 68k, this here box, or anything else that has
>a C compiler for it.  I don't have to worry about compatibility at the C++ level
>since the C output is the same for each target; one only needs to evaluate the
>capabilities of the target C compiler.

This seems attractive, but is not as reliable as you make it sound.  For
non-trivial C++ programs, the C code produced by the translator will not
necessarily work on arbitrary target machines.  For example, the
translator knows the sizes of data types (especially ints and pointers),
and generates references which depend on knowing those sizes.  If a
target uses other sizes, the C code will be wrong.  If you use the AT&T
cfront translator, you can purchase the source so that you can recompile
it for use with different targets.  It is, however, expensive.
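
A tiny example of the kind of size assumption I mean (my own
illustration, not output from any particular translator):

	#include <cstddef>
	#include <iostream>

	struct Node {
	    Node *next;   // 2, 4, or 8 bytes depending on the target
	    int   count;  // so its offset depends on the target too
	};

	int main()
	{
	    // The translator bakes numbers like these into the C it emits;
	    // the target C compiler will compute its own, possibly
	    // different, values for the same type.
	    std::cout << "sizeof(Node *) = " << sizeof(Node *)
	              << ", offsetof(Node, count) = " << offsetof(Node, count)
	              << "\n";
	    return 0;
	}

If the target C compiler's answers differ from the translator's, the
generated references are simply wrong.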

In any event, you still need source for the runtime support code used
by the translator so that this code can be compiled for use with the
various targets.  
-- 

Steve Clamage, TauMetric Corp, steve@taumet.com

sking@nowhere.uucp (Steven King) (10/17/90)

In article <475@taumet.com> steve@taumet.com (Stephen Clamage) writes:
>sking@nowhere.uucp (Steven King) writes:
>
>>   It was for these same reasons (and cost as well) that I chose a translator
>>as opposed to a native code compiler.  Instead of purchasing separate C++
>>compilers for each target (even assuming they were available), I have a single
>>C++ translator running on my host platform (386 Unix) that can produce C source
>>for 8051, v40 (an 80186 sort of), 68k, this here box, or anything else that has
>>a C compiler for it.  I don't have to worry about compatibility at the C++ level
>>since the C output is the same for each target; one only needs to evaluate the
>>capabilities of the target C compiler.
>
>This seems attractive, but is not as reliable as you make it sound.  For
>non-trivial C++ programs, the C code produced by the translator will not
>necessarily work on arbitrary target machines.  For example, the
>translator knows the sizes of data types (especially ints and pointers),
>and generates references which depend on knowing those sizes.  If a
>target uses other sizes, the C code will be wrong.  If you use the AT&T
>cfront translator, you can purchase the source so that you can recompile
>it for use with different targets.  It is, however, expensive.

	This could be a problem for others; however, both the implementations
of cfront that I use (Guidelines & Intek) provide a mechanism for specifying
a "size" file that informs the translator of the correct sizes and alignments
for the target.  It has been my experience that the translator, in general, is
quite nice about leaving the determination of the size of items up to the target
C compiler.  An exception to this is in the call to constructors of member
objects; the offset for the member is calculated based upon what the translator
believes the sizes are.  Packing of classes/structures by the target C compiler
also impacts this; I created some additional size files for this....
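
To make the member-constructor case concrete, here is a made-up
illustration (not actual Guidelines or Intek output) of where the
translator's own arithmetic sneaks in:

	#include <cstddef>
	#include <iostream>

	struct Inner {
	    int value;
	    Inner() : value(0) {}
	};

	struct Outer {
	    char  tag;      // padding after 'tag' may differ between compilers
	    Inner member;   // so the byte offset of 'member' may differ too
	    Outer() : tag(0) {}
	};

	int main()
	{
	    // Schematically, the translated constructor for Outer does
	    //     Inner_ctor((Inner *)((char *)this_ + OFFSET));
	    // with OFFSET computed by the translator, not by the target C
	    // compiler.  If the target packs Outer differently, 'member'
	    // gets constructed at the wrong address -- hence the extra
	    // size files.
	    std::cout << "offset of Outer::member here: "
	              << offsetof(Outer, member) << " bytes\n";
	    return 0;
	}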

>In any event, you still need source for the runtime support code used
>by the translator so that this code can be compiled for use with the
>various targets.  

	Again, both versions that I use supplied source for _main() and the
_patch_ mechanism for initialization of global objects.  Source to the task and
complex math class libraries would be nice, but hardly indispensable; I would
find it quite pleasant to have source to the iostream library...

-- 
new && improved:			 			 sking@nowhere
old && reliable:			...!cs.utexas.edu!ut-emx!nowhere!sking

rfg@NCD.COM (Ron Guilmette) (11/03/90)

In article <475@taumet.com> steve@taumet.com (Stephen Clamage) writes:
<sking@nowhere.uucp (Steven King) writes:
<
<>   It was for these same reasons (and cost as well) that I chose a translator
<>as opposed to a native code compiler.  Instead of purchasing separate C++
<>compilers for each target (even assuming they were available), I have a single
<>C++ translator running on my host platform (386 Unix) that can produce C source
<>for 8051, v40 (an 80186 sort of), 68k, this here box, or anything else that has
<>a C compiler for it.  I don't have to worry about compatibility at the C++ level
<>since the C output is the same for each target; one only needs to evaluate the
<>capabilities of the target C compiler.
<
<This seems attractive, but is not as reliable as you make it sound.  For
<non-trivial C++ programs, the C code produced by the translator will not
<necessarily work on arbitrary target machines.  For example, the
<translator knows the sizes of data types (especially ints and pointers),
<and generates references which depend on knowing those sizes.  If a
<target uses other sizes, the C code will be wrong.  If you use the AT&T
<cfront translator, you can purchase the source so that you can recompile
<it for use with different targets.  It is, however, expensive.

Additionally, when considering a translator as opposed to a native code
compiler, consider also the type of symbolic debugging support available.  Some
firms (most notably ParcPlace) have made some very large strides
towards providing symbolic debugging support in conjunction with a
translator, but it ain't necessarily cheap.

Anyway, if you are *seriously* concerned about portability of your code
between various platforms, there are a few things that I would recommend:

	First, try to learn what things make C code non-portable.
	Then avoid such usage.  In particular, avoid things that the
	ANSI C standard says are implementation-defined.  I am often
	amazed at the high level of ignorance which many professional
	C coders have about what the ANSI C standard says.

	Second, try to learn what things make C++ code non-portable.
	For example, don't write code which will break if the format
	of virtual function tables changes a little or a lot (see the
	sketch after this list).

	Third, support the ANSI C++ standardization effort.

	Fourth, beat all suppliers of (so-called) C++ language processors
	about the head and neck over each and every deviation
	from E&S (unless of course that deviation has been approved by
	X3J16).  You will end up getting better C++ language products, and
	people may even thank you for it.
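
As an example of the second point, here is the sort of thing to avoid
(my own invention, nothing from a real project); it depends on details
of how the implementation stores its hidden table pointers inside
objects:

	#include <cstring>

	struct Widget {
	    virtual void draw() { /* ... */ }
	    int x, y;
	    Widget() : x(0), y(0) {}
	};

	void reset_badly(Widget &w)
	{
	    // Non-portable: this also zeroes whatever hidden vtable
	    // machinery the implementation keeps inside the object, so any
	    // later virtual call through w is anybody's guess.
	    std::memset(&w, 0, sizeof w);
	}

	void reset_portably(Widget &w)
	{
	    // Portable: says nothing about object layout, so it keeps
	    // working no matter how the tables are arranged.
	    w = Widget();
	}

	int main()
	{
	    Widget w;
	    reset_portably(w);   // reset_badly(w) would compile just as happily
	    w.draw();            // fine
	    return 0;
	}

The portable version costs nothing and survives any change in the
vtable format.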

<In any event, you still need source for the runtime support code used
<by the translator so that this code can be compiled for use with the
<various targets.  

Ditto for `standard' libraries for compilers.

-- 

// Ron Guilmette  -  C++ Entomologist
// Internet: rfg@ncd.com      uucp: ...uunet!lupine!rfg
// Motto:  If it sticks, force it.  If it breaks, it needed replacing anyway.

steve@taumet.com (Stephen Clamage) (11/04/90)

rfg@NCD.COM (Ron Guilmette) writes:

>In article <475@taumet.com> steve@taumet.com (Stephen Clamage) writes:

|<For non-trivial C++ programs, the C code produced by the translator will not
|<necessarily work on arbitrary target machines.  For example, the
|<translator knows the sizes of data types (especially ints and pointers),
|<and generates references which depend on knowing those sizes.  If a
|<target uses other sizes, the C code will be wrong.  If you use the AT&T
|<cfront translator, you can purchase the source so that you can recompile
|<it for use with different targets.  It is, however expensive.

Since I posted that, I have been informed that AT&T cfront supports a
configuration file which allows you to specify machine-specific
features at C++ translation time.  I apologize for not checking this first.
-- 

Steve Clamage, TauMetric Corp, steve@taumet.com