[comp.std.c++] X3J16/90-0091 typos and comments

jimad@microsoft.UUCP (Jim ADCOCK) (11/07/90)

TYPOS, COMMENTS, AND A QUICK REVIEW ON X3J16/90-0091

[The following are my opinions only]

1-1 "February 1990"	

	Should be updated.

2-1 "tokens, that is, a file"

	Looks like a misuse of commas, but I can't figure out how to fix 
	it.

2-2 keywords

	Shouldn't this include a list of preprocessor directives and
	"defined" ?

5-5 5.3.3"The type-specifier-list may not contain const, volatile, class
	declarations, or enumeration declarations."

	It would seem that allowing const and/or volatile would not be 
	harmful, and might make for better type calculus, especially when 
	using the placement operator:

	const volatile ClockReg* pread_only_clock_reg = 
		new(0x12345678) const volatile ClockReg;

	or consider:

	#define ROClockReg const volatile ClockReg

	// ....

	ROClockReg* proClockReg = new(0x12345678) ROClockReg;

5-7 5.4 "A yet undefined class may be used in a pointer cast, in which
	case no assumptions will be made about class lattices."

	Actually, in this situation it seems to me that a LOT of 
	assumptions ARE made!  Perhaps it would be better to state what
	those assumptions are?  To wit:  in such a situation, a pointer
	remains bitwise equivalent, and no pointer adjustments are made?
	Or what?
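
	To illustrate the question [a sketch; the class names are made up]:

	class Base { public: int b; };
	class Derived;			// not yet defined here

	void f(Base* pb)
	{
		// Derived's class lattice is unknown at this point, so
		// presumably no base-to-derived pointer adjustment can be
		// applied -- pd is assumed bitwise equal to pb.  Is that
		// what is meant?
		Derived* pd = (Derived*)pb;
	}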

5-8	"A pointer to an object of const type can be cast into a pointer
	to a non-const type....The result of attempting to modify that
	object through such a pointer will either cause an addressing
	exception or be the same as if the original pointer had referred
	to a non-const object.  It is implementation dependent whether 
	the addressing exception occurs."

	In my opinion, this section is oxymoronic.  A standard should not
	sanctify two bipolar implementation choices that so differ that no
	reasonable strictly-conforming program can make use of both 
	behaviors.  Either ONE behavior for const objects should be 
	codified, or the entire issue should be left implementation 
	dependent.  It is silly to insist that compilers accept cast-
	from-const constructs, where that construct is just going to cause
	a runtime error later, because on that machine consts are write-
	protected.  My recommendation is that the entire issue of cast 
	from const be implementation dependent.  In particular, on 
	machines where consts go in read-only memory, compilers should
	not allow cast from const.  And highly-optimizing C++ compilers 
	may wish to prohibit cast-from-const to allow efficiencies by
	making const-ness assumptions part of the function calling 
	protocol.  [i.e., enregistered const parameters need not be reloaded
	over function call boundaries.]
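
	For example [a sketch only], on a machine that write-protects
	consts, the following must be "accepted" at compile time under the
	proposal, yet can do nothing but trap at runtime:

	const int table_size = 100;	// may be placed in read-only memory

	void tweak()
	{
		int* p = (int*)&table_size;	// cast away const
		*p = 200;		// addressing exception here on a
					// write-protecting implementation
	}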

7-5 7.1.6	"Volatile"

	The exact meaning of volatile needs to be codified.  "A hint to the
	compiler not to be overaggressive" is insufficient.
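
	For example [a sketch; the register address is made up], if
	volatile is only a "hint," nothing seems to stop a compiler from
	hoisting this load out of the loop and spinning forever on a stale
	value:

	volatile unsigned* pstatus = (volatile unsigned*)0xFFF00004;

	void wait_for_ready()
	{
		while ((*pstatus & 0x1) == 0)	// must re-read the device
			;			// register on every iteration
	}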

7-8	"it is recommended that the spelling be taken from the document
	defining that language, for example Ada (not ADA) and FORTRAN
	(not Fortran)."

	Recommendations have no place in a standard.  It's either a
	requirement or it's not.  Make this a requirement.

8-11 8.4.3 'A variable declared to be a T&, that is "reference to type 
	T" (&8.2.2), must be initialized by an object of type T or by
	an object that can be converted into a T."

	In my opinion, null references should be explicitly allowed, thus
	maintaining the general symmetry between pointers and references.  
	Also, it seems silly to prohibit such in the language spec when 
	compilers can't detect such usages.  That is, we should explicitly
	allow:

	int& r = *(int*)0;

9-4	"Nonstatic data members of a class declared without an intervening
	access-specifier are allocated so that later members have higher
	addresses within a class object."

	In my opinion, this statement is silly and outside the scope of
	a language definition and instead is the business of compiler
	implementers.  In the language, the only connotation of an ordering
	of addresses is the ordering of the elements of an array.  Therefore,
	it is not possible to write a strictly conforming program that 
	makes use of the above constraint.  Individual compilers
	can still choose to follow the above constraint, and programmers
	can then write non-strictly conforming programs that make use
	of those compilers' "features."  So what is the point of making
	ALL compilers forever-more conform to a constraint that NO 
	STRICTLY-conforming program can use!?

	All this constraint does is confuse customers about what kinds of
	pointer hacks are strictly conforming, and what aren't.

	Among other things, this unnecessary constraint has a
	negative impact on compilers striving for speed, compactness, 
	incremental compiles, schema evolution, etc...  
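
	The only "use" of the ordering I can construct is a hack of the
	following flavor [a sketch], and I see nothing in the draft that
	lets such a comparison appear in a strictly conforming program:

	class Record {			// no intervening access-specifiers
	public:
		int first;
		int second;
	};

	int ordered(const Record& r)
	{
		return &r.second > &r.first;	// the ordering 9-4 appears
						// to promise, but which no
						// conforming program can
						// legally observe
	}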

9-7	"A union may be thought of as a structure whose member objects
	all begin at offset zero and whose size is sufficient to contain 
	any of its member objects."

	A union CAN be thought of like this, or any other way you might
	choose, but if this isn't meant to be a constraint on how 
	compilers are allowed to implement unions, then it'd be best to 
	leave the statement out.  In particular, it is easy to imagine
	on some machines "offset zero" might refer to left packed objects,
	but that machine might prefer to right pack objects into a union,
	leading to a "not offset zero" implementation.  Consider the 
	following trivial example:

	union { long l; char c; } U;

	Are both l and c constrained to be at "offset zero" ? I think 
	not.

10-1	"The commentary sections at the end of this chapter discuss the
	run-time mechanisms necessary to implement inheritance."

	The commentary is gone, and so should this statement.

13-2	"Note that only the second and subsequent array dimensions are
	significant in argument types."

	I think this is bad.  Arrays of fixed dimension should
	always be considered to be of definite type, but be implicitly
	coercible to indefinite type.  The rules for array parameters go
	the wrong way, allowing the below kind of silliness:

	double power(double d[1][3])
	{
		return 
		d[0][0]*d[0][0] +
		d[0][1]*d[0][1] +
		d[0][2]*d[0][2] ;
	}

	void doSomething()
	{
		double M3x3[3][3];

		double pow = power(M3x3);	// oops, we silently sliced 
							// off 2 rows!
	}

	.... or even worse.  Implicit coercion of arrays of definite
	type to arrays of indefinite type is bad enough.  Implicit
	coercions of arrays of indefinite type to arrays of definite
	type must be the weakest point in the C++ type system!


13-5	typo: numbers [1]-[5] are duplicated.

13-6	13.4	"The following operators cannot be overloaded"

	As I have argued before, I believe operator dot, and operator
	dot star, should be overloadable analogous to operator->() and
	operator->*()
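
	For reference [a sketch; the classes are made up], the analogy I
	have in mind is the ordinary smart-pointer use of operator->(),
	which is overloadable today; the same mechanism should be available
	through operator. and operator.* for "smart reference" classes:

	class String {
		int n;
	public:
		String(int len) : n(len) {}
		int length() const { return n; }
	};

	class StringPtr {
		String* p;
	public:
		StringPtr(String* s) : p(s) {}
		String* operator->() { return p; }	// legal today
		// String& operator.() { return *p; }	// the analogous
							// operator I argue
							// should be allowed
	};

	int len(StringPtr sp) { return sp->length(); }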

16-4	"An implementation may impose a limit on the depth of #include
	directives within source files that have been read while 
	processing a #include directive in another source file."

	A sensible lower bound should be stated that all conforming
	implementations must be able to meet.  What does ANSI-C call out?

jgro@lia (Jeremy Grodberg) (11/14/90)

In article <58861@microsoft.UUCP> jimad@microsoft.UUCP (Jim ADCOCK) writes:
>
>TYPOS, COMMENTS, AND A QUICK REVIEW ON X3J16/90-0091
>
>[The following are my opinions only]
>
>[...]
>5-8	"A pointer to an object of const type can be cast into a pointer
>	to a non-const type....The result of attempting to modify that
>	object through such a pointer will either cause an addressing
>	exception or be the same as if the original pointer had referred
>	to a non-const object.  It is implementation dependent whether 
>	the addressing exception occurs."
>
>	In my opinion, this section is oxymoronic.  A standard should not
>	sanctify two bipolar implementation choices that so differ that no
>	reasonable strictly-conforming program can make use of both 
>	behaviors.  Either ONE behavior for const objects should be 
>	codified, or the entire issue should be left implementation 
>	dependent.  It is silly to insist that compilers accept cast-
>	from-const constructs, where that construct is just going to cause
>	a runtime error later, because on that machine consts are write-
>	protected.  My recommendation is that the entire issue of cast 
>	from const be implementation dependent.  In particular, on 
>	machines where consts go in read-only memory, compilers should
>	not allow cast from const.  And highly-optimizing C++ compilers 
>	may wish to prohibit cast-from-const to allow efficiencies by
>	making const-ness assumptions part of the function calling 
>	protocol.  [i.e., enregistered const parameters need not be reloaded
>	over function call boundaries.]

I disagree strongly on this point.  By specifying exactly 2 options
for compiler writers, ANSI gives a needed degree of freedom while minimizing
the difficulty for programmers.  Under the proposal, people who want
ultra-clean code will avoid casting away const, just as they will under
Jim's proposal.  However, under the ANSI proposal, programmers will be able 
to write portable code which takes advantage of casting away const and 
works portably across all machines that allow it (which I expect to be
the majority of them), while (with judicious use of #defines) they could
preserve a second version fairly easily that did not rely on casting away
const.

For example, a class might be designed which caches results of common
operations in private members of the object.  Even if the object is 
semantically const, it should be legal for the object to update these
caches, and this could be accomplished by casting "this" from const.  It
would not be hard to write such classes so that caching (and casting from
const) could be turned off with a #define, as sketched below.
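
Here is the flavor of class I have in mind (a sketch; the class and the
#define are made up), showing both the cast of "this" from const and how
that cast can be compiled out:

	#define CACHE_RESULTS 1		// set to 0 to avoid casting away const

	class Polynomial {
		double coef[10];
		int    cacheValid;
		double cachedRoot;
		double computeRoot() const { return coef[0]; }	// stand-in for
								// the real, expensive work
	public:
		Polynomial() : cacheValid(0) {}
		double root() const		// semantically const
		{
	#if CACHE_RESULTS
			if (!cacheValid) {
				Polynomial* self = (Polynomial*)this;	// cast away const
				self->cachedRoot = computeRoot();
				self->cacheValid = 1;
			}
			return cachedRoot;
	#else
			return computeRoot();	// recompute on every call
	#endif
		}
	};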

I interpret the spec to say "Well, lots of people want const to be an
artificial limitation, to make programs easier to debug, but lots of
other people want to write ROMable code and use const to tell the compiler
it is okay to put this variable in ROM.  Since we don't want to add still
one more keyword, we will just let compiler writers choose which one they
want to implement, since the customers of a particular compiler will likely
agree on which choice is better."

Also note that the spec doesn't say that a compiler has to commit to one
way or the other.  It would not be unreasonable to have the compiler
always generate the same code (which allows casting away const), and have the
address exception only occur when the program is executed from ROM.

I expect the result of the ANSI proposal will be that most compilers would
allow casting away const (in a portable way), while only compilers that
were being used to develop embedded code that would actually live in 
ROM would not allow casting away const.  Compilers could also be written
to support both options, with a command-line option or pragma to select
between them.

>8-11 8.4.3 'A variable declared to be a T&, that is "reference to type 
>	T" (§8.2.2), must be initialized by an object of type T or by
>	an object that can be converted into a T."
>
>	In my opinion, null references should be explicitly allowed, thus
>	maintaining the general symmetry between pointers and references.  
>	Also, it seems silly to prohibit such in the language spec when 
>	compilers can't detect such usages.  That is, we should explicitly
>	allow:
>
>	int& r = *(int*)0;


And I suppose you then would write:

if (&r == 0)  cerr << "r is null";

This strikes me as too error prone.  In general, a reference is just 
another name for a real object.  If you want to allow for there not
being an object, then use a pointer.

>9-4	"Nonstatic data members of a class declared without an intervening
>	access-specifier are allocated so that later members have higher
>	addresses within a class object."
>
>	In my opinion, this statement is silly and outside the scope of
>	a language definition and instead is the business of compiler
>	implementators.  [...]

I would like to see the stronger statement that would provide for 
strict correspondence between C++ class data members and ANSI-C structs.
Perhaps that was what was intended by this requirement, although it
appears in fact to be too weak to have much of any usage.
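
For example (a sketch; the C function is hypothetical), one would like a
guarantee that a plain data-only class can be handed directly to existing
ANSI-C code compiled against the matching struct:

	/* in a C header, compiled by a C compiler:
	 *	struct point { double x, y; };
	 *	void plot(struct point*);
	 */

	struct Point { double x, y; };	// intended to match struct point exactly

	extern "C" void plot(Point*);	// relies on the two layouts agreeing

	void draw()
	{
		Point p;
		p.x = 1.0;  p.y = 2.0;
		plot(&p);
	}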

>9-7	"A union may be thought of as a structure whose member objects
>	all begin at offset zero and whose size is sufficient to contain 
>	any of its member objects."
>
>	A union CAN be thought of like this, or any other way you might
>	choose, but if this isn't meant to be a constraint on how 
>	compilers are allowed to implement unions, then it'd be best to 
>	leave the statement out.  [...]

I think a little clarification and explanation can remain in the spec, 
even if it isn't a strict requirement.  If it is not made explicit 
elsewhere, then I would agree that this statement should be expanded into
a requirement on how unions are implemented.

>16-4	"An implementation may impose a limit on the depth of #include
>	directives within source files that have been read while 
>	processing a #include directive in another source file."
>
>	A sensible lower bound should be stated that all conforming
>	implementations must be able to meet.  What does ANSI-C call out?

I agree.  People are already reporting to comp.lang.c++ problems caused
by one compiler's limit of 11 levels of #include nesting.



-- 
Jeremy Grodberg      "I don't feel witty today.  Don't bug me."
jgro@lia.com          

jimad@microsoft.UUCP (Jim ADCOCK) (11/20/90)

In article <1990Nov13.222700.1194@lia> jgro@lia.com (Jeremy Grodberg) writes:
|In article <58861@microsoft.UUCP> jimad@microsoft.UUCP (Jim ADCOCK) writes:
|>5-8	"A pointer to an object of const type can be cast into a pointer
|>	to a non-const type....The result of attempting to modify that
|>	object through such a pointer will either cause an addressing
|>	exception or be the same as if the original pointer had referred
|>	to a non-const object.  It is implementation dependent whether 
|>	the addressing exception occurs."
|>
|>	In my opinion, this section is oxymoronic.  A standard should not
|>	sanctify two bipolar implementation choices that so differ that no
|>	reasonable strictly-conforming program can make use of both 
|>	behaviors.  

|I disagree strongly on this point.  By specifying exactly 2 options
|for compiler writers, ANSI gives a needed degree of freedom while minimizing
|the difficulty for programmers.  

Hm, let me try to clarify this.  I see at least three good, realistic
ways people might want to implement compilers regarding "const."

1) A compiler that places constant objects without constructors in
honest-to-god read-only memory.

2) A compiler that allows programmers to successfully cast away from const
anywhere.

3) A compiler that considers a "const" declared function param as a contract
with that function's implementer, allowing significant calling optimizations 
in the context of separate module compilations, and with libraries delivered
from multiple vendors sans source code.

The needs of #3 -- optimizing compilers used with separately compiled
modules and libraries -- are not currently supported in the proposed
standard.  This then requires that all of an object's member values be
flushed from registers over function calls, even when a "const" member
function is called.

library from TinCo:
// ....
int OB::intval() const { return this->intmember; }
// ...


used by Bob Customer's program:

	//  ....
	int i = ob->intval();	// oops, we must purge all enregistered 
				// members of ob, since we are not guaranteed
				// intval doesn't violate its const-ness 
				// contract! [Even though, in this case
				// and 99.5% of the rest of the cases,
				// member functions *do* respect their 
				// const-ness contracts.]

Thus the needs of customers desiring well-optimized code are ignored.

Users of a #1-type compiler are ill-served by the present proposal
too -- the proposal says: even though the compiler can recognize that
an object is going into read-only memory, and thus that a cast-from-const
is going to bomb at runtime, the type #1 compiler is not allowed to catch
this error, but rather must "accept" the cast -- even though the compiler
"knows" the cast is going to result in a runtime crash.

Why is this desirable???

Perhaps what the standard should specify is that either of two behaviors
is allowed from a compiler in a given situation: 1) the compiler can
accept the cast-from-const, or 2) the compiler can reject the cast-from-const?
--The end programmer result is the same as you propose:  the programmer
must come up with both a cast-from-const and a non-cast-from-const
way of doing the job, and for a given compiler enable one or the
other.

...

|And I suppose you then would write:
|
|if (&r == 0)  cerr << "r is null";

Yep.

|This strikes me as too error prone.  In general, a reference is just 
|another name for a real object.  

I consider null objects 'real objects.'  You just aren't allowed to do
anything with them other than take their address.  Such type-safe sentinel
objects are quite useful -- in my experience.  Safer than just using NULL
everywhere.
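
For what it's worth, here is the flavor of type-safe sentinel I have in
mind (a sketch only):

	struct Node {
		int   value;
		Node* rest;		// 0 at the end of the list
	};

	int sum(Node& n)
	{
		if (&n == 0)		// the sentinel test; nothing else is
			return 0;	// ever done with the null object
		return n.value + sum(*n.rest);	// *rest forms the null
						// reference when rest is 0
	}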

| If you want to allow for there not
|being an object, then use a pointer.

Again, there's no way in general for compilers to prohibit such
a programming style -- other than putting in expensive runtime checks
everywhere -- so why not declare it "legal", and leave its use or abuse up
to the tastes of the individual programmer?  [give me a few reference abuses
over the present pile of pointer abuses any day :-]

This constraint is the mirror image of a vacuous constraint:  instead of
constraining compilers, this one constrains programmers, keeping them
from doing something they might want to do, with no great benefit to
compilers, and in practice no change in the legality of programs
according to real-world compilers, nor in the quality of code they
typically generate.

|>9-4	"Nonstatic data members of a class declared without an intervening
|>	access-specifier are allocated so that later members have higher
|>	addresses within a class object."
|I would like to see the stronger statement that would provide for 
|strict correspondence between C++ class data members and ANSI-C structs.
|Perhaps that was what was intended by this requirement, although it
|appears in fact to be too weak to have much of any usage.

Again, the meaning of an ordering of addresses is only defined within an
array of like objects, which the members of an object aren't, so why
define a constraint on conforming compilers that can't be used by
conforming programs?  If programmers desire to perform programming
hacks that are outside of the language, let them choose compilers that
support those hacks.  If programmers desire their programs to run on
all conforming compilers, let them then write conforming programs.
If the ordering of members of an object is to mean something, then
add wording to the standard whereby programmers are allowed to do
_something_ legal and conforming based on these constraints.  As it stands
today, there's no legal way for a programmer to make use of the ordering
of members.  So why then constrain compilers?  If you believe that order
of members is an important thing to have in the language, then please add
language to the standards allowing programmers to do _something_ [_anything_ !]
legal based on that ordering.  But as it now stands there is nothing 
legal programmers are allowed to do with member order.  Thus member 
ordering is a vacuous constraint on compilers.

The C++ standard should represent a contract between C++ programmers and
compiler vendors.  Constraints placed on compiler vendors should
be matched by permissions allotted the C++ programmer.  Constraints placed
on C++ programmers should be matched by permissions allotted the C++
compiler vendor.  Constraints on one, without matching permissions for
the other, represent an ill-formed contract that benefits neither,
but rather is to everyone's disadvantage.  Which is the basis of my
above complaints.