[comp.std.c++] aliases, ptr coercions, and optimizations

jimad@microsoft.UUCP (Jim ADCOCK) (07/31/90)

Request for Clarification and a General Proposal:

I claim there is a need for clarification in terms of what kinds of ptr
casts are legal in the face of an optimizing compiler.  For example, does:

	foo* fooptr = &(bar.fooMember);

poison only fooMember, the labeled section fooMember is declared in, the class
in bar's derivation that fooMember belongs to, or all of bar?
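
To make the question concrete, here is a minimal sketch; the class Bar, its
labeled section, and the extra member barMember are invented purely for
illustration:

	struct foo { int x; };

	class Bar {
	public:
		int barMember;    // unrelated member
		foo fooMember;
	};

	int f(Bar& bar)
	{
		int before = bar.barMember;      // possibly enregistered
		foo* fooptr = &(bar.fooMember);  // the line in question
		fooptr->x = 42;                  // a store through the alias
		// May the compiler reuse the enregistered bar.barMember here, or
		// must it assume the store through fooptr "poisoned" all of bar?
		return before + bar.barMember;
	}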

My general proposal is that C++ be very restrictive in terms of what pointer
casts are "guaranteed" to work, allowing for maximum compiler optimizations,
while minimizing the pointer hacks that are "guaranteed" to work.  I.e.,
good C++ citizens should get well optimized code, but C++ hacker code may
not work at all.
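
For illustration, a made-up example of the kind of coercion that, under this
proposal, need not be guaranteed to work:

	void hack()
	{
		long l = 0;
		float* fp = (float*)&l;   // coerce a long* into a float*
		*fp = 1.0f;               // an optimizer may assume a float* cannot
		                          // alias a long, keep l enregistered, and
		                          // never see this store
	}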

shap@thebeach.wpd.sgi.com (Jonathan Shapiro) (08/03/90)

In article <56170@microsoft.UUCP>, jimad@microsoft.UUCP (Jim ADCOCK) writes:

> My general proposal is that C++ be very restrictive in terms of what pointer
> casts are "guaranteed" to work, allowing for maximum compiler optimizations,
> while minimizing the pointer hacks that are "guaranteed" to work.  I.e.,
> good C++ citizens should get well optimized code, but C++ hacker code may
> not work at all.

This is not in the spirit of the language at all.  There is nothing to
prevent the optimizer from recognizing the non-optimizable cases and
simply not optimizing them.

Jon

jimad@microsoft.UUCP (Jim ADCOCK) (08/07/90)

In article <11295@odin.corp.sgi.com> shap@sgi.com writes:
|In article <56170@microsoft.UUCP>, jimad@microsoft.UUCP (Jim ADCOCK) writes:
|
|> My general proposal is that C++ be very restrictive in terms of what pointer
|> casts are "guaranteed" to work, allowing for maximum compiler optimizations,
|> while minimizing the pointer hacks that are "guaranteed" to work.  I.e.,
|> good C++ citizens should get well optimized code, but C++ hacker code may
|> not work at all.
|
|This is not in the spirit of the language at all.  There is nothing to
|prevent the optimizer from recognizing the non-optimizable cases and
|simply not optimizing them.
|
|Jon

I disagree.  As I stated in my prefix, there is indeed something preventing
compilers from recognizing the non-optimizable cases.  That something is
the traditional "un*x" model of separate compilation and linking.  If, for
example, a vendor provides a pre-compiled library containing a function, say:

	void doSomething(const FOO& foo);

then an optimizing compiler has two choices:

1) It can assume doSomething really does treat foo as a constant, in which
case the optimizing compiler can safely enregister fields of foo across the
doSomething call.

2) It can pessimistically assume doSomething violates its pledge of const'ness,
in which case any enregistered fields of foo need to be reloaded after the
doSomething call (both cases are sketched below).
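
To make the two choices concrete, here is a minimal sketch; the field x and
the caller are invented purely for illustration:

	struct FOO { int x; };
	void doSomething(const FOO& foo);   // pre-compiled; body not visible here

	int caller(FOO& foo)
	{
		int a = foo.x;        // foo.x loaded, possibly kept in a register
		doSomething(foo);     // does this call leave foo untouched?
		// Choice 1: trust the signature and reuse the enregistered foo.x.
		// Choice 2: assume doSomething may have cast away const'ness and
		//           modified foo, so reload foo.x from memory.
		return a + foo.x;
	}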

Either way, under the traditional "un*x" model of separate compilation and
linking -- including separate libraries delivered precompiled -- there is no
reasonable way for compilers to verify the truthfulness of the const'ness
of any function [short of automatic decompilation and analysis].

Certainly one can imagine adding informational libraries to state "is this
const function *really const*?" or adding additional name mangling to indicate
"this function says it's const but it really isn't"....  But const'ness or
the lack of it is just one flavor of the lies that a prepackaged routine
can present to the outside world.  Shouldn't compilers be able to assume
that functions honour the contract implied by their signatures?  Shouldn't
attempts by programmers to violate their signature contracts be flagged as
errors, or at least warnings?
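
To be concrete about the sort of lie I have in mind, here is a made-up
example: a routine whose signature promises const'ness but whose body casts
it away:

	struct FOO { int x; };              // as in the sketch above

	// The signature promises not to modify foo...
	void doSomething(const FOO& foo)
	{
		// ...but the body quietly casts the const'ness away.
		FOO& writable = (FOO&)foo;      // old-style cast removes const
		writable.x = 0;                 // any enregistered copy of foo.x
		                                // in the caller is now stale
	}

Under choice 1 above, a caller of such a routine goes right on using its
stale, enregistered copy of foo.x.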