tom@sco.COM (Tom Kelly) (08/17/90)
The recent discussion of delete [] in comp.lang.c++ got me to thinking
about some example C++ code I've seen. For example, in Lippman,
_C++ Primer_, p. 234 (with int len; char *str):

	String::String(char *s)
	{
	    len = strlen(s);
	    str = new char[len + 1];
	    strcpy(str, s);
	}

	String::~String()
	{
	    delete str;
	}

The same code (essentially) appears in Dewhurst & Stark,
_Programming in C++_, p. 70.

It seems to me that the delete statement is incorrect; it should be
delete []. According to E&S (ARM), section 5.3.4, p. 21:

	"The effect of deleting an array with the plain delete syntax
	is undefined."

The annotations note that there is a practical difference only for
classes that have destructors, and of course, "char" has no destructor.
But the annotation further points out that it is possible that during
program evolution a built-in type will be replaced (via typedef) by a
class with a destructor, so this is a dangerous practice.

Comments?

Tom Kelly				(416) 922-1937
SCO Canada, Inc. (formerly HCR)
130 Bloor St. W., Toronto, Ontario, Canada
{utzoo, utcsri, uunet}!scocan!tom
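(For reference, the same two member functions with the delete corrected to
the array form; the surrounding class skeleton is reconstructed only to make
the fragment self-contained and may not match Lippman's declaration exactly:)

	#include <string.h>

	class String {
	    int   len;
	    char *str;
	public:
	    String(char *s);
	    ~String();
	};

	String::String(char *s)
	{
	    len = strlen(s);
	    str = new char[len + 1];	// allocated with the array form of new...
	    strcpy(str, s);
	}

	String::~String()
	{
	    delete [] str;		// ...so the array form of delete matches it
	}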
rfg@NCD.COM (Ron Guilmette) (08/18/90)
In article <1990Aug16.162718.14828@sco.COM> tom@sco.COM (Tom Kelly) writes:
<The recent discussion of delete [] in comp.lang.c++ got me to thinking
<about some example C++ code I've seen.
<
...
< delete str;
...
<
<The same code (essentially) appears in Dewhurst & Stark,
<_Programming in C++_, p. 70.
<
<It seems to me that the delete statement is incorrect, it
<should be delete []. According to E&S (ARM), section 5.3.4, p. 21:
<
<"The effect of deleting an array with the plain delete syntax
<is undefined".
I believe that it would be useful to also have a compile-time enforceable
constraint here, e.g.:
It is illegal for the pointer given in a plain delete
statement to be of any pointer-to-array type.
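For concreteness, a hypothetical example of what such a rule would reject
(the typedef and names here are invented purely for illustration):

	typedef int Row[10];

	Row* rp = new Row[5];	// i.e. `new int[5][10]'; rp has the
				// pointer-to-array type int (*)[10]

	delete rp;		// currently undefined behavior; under the
				// proposed constraint, a compile-time error
	delete [] rp;		// the array form remains the correct way
				// (use one or the other, of course, not both)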
--
// Ron Guilmette - C++ Entomologist
// Internet: rfg@ncd.com uucp: ...uunet!lupine!rfg
// Motto: If it sticks, force it. If it breaks, it needed replacing anyway.
drk@athena.mit.edu (David R Kohr) (08/20/90)
In article <1231@lupine.NCD.COM> rfg@NCD.COM (Ron Guilmette) writes:
>In article <1990Aug16.162718.14828@sco.COM> tom@sco.COM (Tom Kelly) writes:
><The recent discussion of delete [] in comp.lang.c++ got me to thinking
><about some example C++ code I've seen.
><
>...
><	delete str;
>...
><
><The same code (essentially) appears in Dewhurst & Stark,
><_Programming in C++_, p. 70.
><
><It seems to me that the delete statement is incorrect, it
><should be delete []. According to E&S (ARM), section 5.3.4, p. 21:
><
><"The effect of deleting an array with the plain delete syntax
><is undefined".
>
>I believe that it would be useful to also have a compile-time enforceable
>constraint here, e.g.:
>
>	It is illegal for the pointer given in a plain delete
>	statement to be of any pointer-to-array type.
>--
>// Ron Guilmette - C++ Entomologist
>// Internet: rfg@ncd.com uucp: ...uunet!lupine!rfg
>// Motto: If it sticks, force it. If it breaks, it needed replacing anyway.

I'm still basically learning C++, so please pardon any obvious instances
of ignorance in the question which follows. Also, I hope this question is
not so basic that it should really have been posted to comp.lang.c++;
unfortunately, this thread of discussion isn't being mirrored in that
newsgroup, so I think I have to post here.

In K&R and ANSI C, of course you must specify the size of a region of
memory to allocate when using malloc(), but when returning that memory
using free() you merely need to pass a pointer to the memory. This implies
that the implementation of the memory allocator must be able to figure out
for itself the size of the memory region when it is returned using free().
This can easily be handled either by using a "hidden" size field in a
header immediately preceding the region returned by malloc(), or by
maintaining a database which records the size of the memory associated
with each pointer returned by malloc() but not yet recovered by free().
(The former technique seems more efficient, and is presented in K&R itself.)

But in C++, the new and delete operators seem to have lost this capability
of calculating the size of allocated memory regions automatically. I was
under the impression that most implementations of C++ would naturally use
malloc() and free() as the underlying mechanisms for implementing new and
delete, so that it would not be unreasonable to require that delete be
able to figure out for itself the size of an allocated region of memory.
Why is this not the case?

--
David R. Kohr	M.I.T. Lincoln Laboratory	Group 45 ("Radars 'R' Us")
email: DRK@ATHENA.MIT.EDU (preferred) or KOHR@LL.LL.MIT.EDU
phone: (617)981-0775 (work) or (617)527-3908 (home)
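(To make the "hidden size field" technique described above concrete, here is
a minimal sketch; the function names are invented for illustration and do not
belong to any particular library. The allocator stores the requested size in
a header just before the region it hands out, so the matching release call
needs only the pointer.)

	#include <stdlib.h>

	void* sized_alloc(size_t n)
	{
	    // A real allocator would pad this header out to the strictest
	    // alignment the machine requires; sizeof(size_t) is used here
	    // only to keep the sketch short.
	    size_t* p = (size_t*) malloc(sizeof(size_t) + n);
	    if (p == 0)
	        return 0;
	    *p = n;		// hidden size field
	    return p + 1;	// caller sees only the region after the header
	}

	void sized_free(void* vp)
	{
	    if (vp == 0)
	        return;
	    size_t* p = (size_t*) vp - 1;	// step back to the header
	    // the original request size is available here as *p if needed
	    free(p);
	}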
rfg@NCD.COM (Ron Guilmette) (08/20/90)
In article <1990Aug19.170256.21630@athena.mit.edu> drk@athena.mit.edu (David R Kohr) writes:
>
>But in C++, the new and delete operators seem to have lost this
>capability of calculating the size of allocated memory regions
>automatically.

This is actually a pretty darn good question.

Back in the good ol' days, if you new'ed up an array of twenty objects of
type T (where the type T required some destruction, either from a
user-supplied (explicit) destructor or from a compiler-supplied (implicit)
destructor), then when you wanted to delete that array, you had to say:

	delete [20] p;

so that the compiler would know that the destructor (either explicit or
implicit) should be called twenty times, i.e. once for each member of the
array of T's.

Well, we don't have to do that anymore. Nowadays we're supposed to be able
to expect the compiler and run-time system to do something intelligent,
like sticking in a length (or count) word at the start of the space
allocated, so that later on we can just say:

	delete [] p;

and have the run-time system figure out that 20 destructor calls are needed.

So why can't we just assume that there will always be such a length word
near the start of all memory areas allocated via new, and then further
assume that:

	delete p;

will always look for such a word and then destruct the proper number of
things? In other words, what do we need the (seemingly redundant) [] for?

Well, there's a case to be made that the [] is not redundant. It's my
assumption (although I'm not at all sure about this) that the current
language definition tries to allow the implementation to save space by
avoiding the addition of a length word when we just allocate a simple
(non-array) object, as in:

	T* tp = new T;

If my supposition is correct, then this explains why we need to have two
different forms of delete operator: the `tp' pointer could (as far as the
compiler is concerned) point either to a single T (with no length word) or
to an array of T's (with a length word). Thus, the two different forms of
delete instruct the compiler to call one of two possible (radically
different) forms of run-time delete routines.

So that's that.

While we are on the subject, however, I'd like to re-raise one point that
I raised some time back in comp.lang.c++. The point is that if new and
delete understood more about types, and if they were more `type correct',
then we could get by with only one form of delete.

In particular, the thing that bothered me (and still bothers me) about the
current C++ language definition with respect to new and delete is that new
seems to be `type incorrect' in the case of allocating arrays. For
example, given the expression:

	new T

where T is some type, what is the type of the value yielded? Answer: you
can't tell. For any "normal" type, the type of the value yielded is (of
course) `T*'; however, if T were defined as:

	typedef some_other_type T[10];

then the type of the value yielded is `some_other_type*'.

Most of the folks I have discussed this inconsistency with see it as `no
big deal'; however, I'd like to see a more consistent rule which says that
`new T' yields a `T*' regardless of the nature of type T.

If such a rule were adopted, then we would need only one form of delete
statement. Given:

	delete p;

the compiler could easily decide which run-time delete routine to call
based strictly upon the static type of `p'. If p's type were some
pointer-to-non-array type, then the simple delete routine would be called.
If however the type of `p' were some pointer-to-array type, then the
array-delete run-time routine would be called.

I believe that getting new to act in a more `type correct' manner would
have other benefits also. I'll probably try to enumerate those in a
separate posting.

Anyway, am I the only one on the planet who thinks that this is even worth
discussing? If so, I'll shut up (about this anyway :-).
--
// Ron Guilmette - C++ Entomologist
// Internet: rfg@ncd.com uucp: ...uunet!lupine!rfg
// Motto: If it sticks, force it. If it breaks, it needed replacing anyway.
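(To make the typedef case above concrete: the following is an illustration
of my own, with `int' standing in for some_other_type.)

	typedef int T[10];

	int* ip = new T;	// what the current rules give: `new T' here
				// means `new int[10]' and yields an int*,
				// not a T* (i.e. not an int (*)[10])
	// T* tp = new T;	// ...so this "obvious" reading is a type error

	delete [] ip;		// and the array form of delete is required

Under the proposed rule, `new T' would instead yield `T*' (that is,
`int (*)[10]'), and a plain `delete' could then select the array-delete
routine purely from the static type of its operand.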
pal@xanadu.wpd.sgi.com (Anil Pal) (08/21/90)
Following up on the discussion of delete[], I wondered about virtual
destructors, and whether they would be called correctly for arrays.

To begin with, I was unable to find a definitive answer in the Annotated
Reference Manual (although I may have missed it - please point me to it if
I am wrong) as to whether or not

	Base * p = new Derived[10];

is legal, given

	class Derived : public Base;

Assuming for the moment that it is (and my cfront 2.0 based translator
does not complain, even with +p and +w enabled), then will

	delete [10] p;

(remember, this is 2.0, not 2.1) call the destructor for Derived, assuming
Base::~Base is virtual? Well, I tried it, and yes, it did indeed call the
derived destructor correctly.

The next step was to invoke some other virtual function on each of the
members of the array p. At this point things failed with a core dump.

Now, I can understand that cfront has trouble determining the size of the
elements in the array pointed to by p. My question is really why there is
no warning whatsoever from the compiler for this situation, nor any
prohibition that I could find against it in the ARM.

Actually, this seems to be just another manifestation of C's confusion
between arrays and pointers. Given that it is not possible to fix C, what
can be done in C++? The simplest solution would be to prohibit

	Base *p = new Derived[20];

which would require differentiating between a pointer to Derived and a
pointer to an array of Derived.

-
Anil Pal			Silicon Graphics
pal@sgi.com			(415)-335-7279

Attached is the program I used to test this. It compiles cleanly, but core
dumps at run time on invoking the virtual print() function through p in
the second loop.

	#include <stdio.h>

	class Base {
	public:
	    int i;
	    virtual void print() { fprintf(stdout, "Base::print(%d)\n", i); }
	    Base() {}
	    virtual ~Base() { i = 0; }
	};

	class Derived : public Base {
	public:
	    int j;
	    void print() { fprintf(stdout, "Derived::print(%d,%d)\n", i, j); }
	    Derived() { j = 0; }
	    ~Derived() { fprintf(stdout, "Derived::~Derived(%d)\n", i); }
	};

	main()
	{
	    Base* p = new Derived[20];

	    for (int i = 0 ; i < 20; ++i) {
	        p[i].i = i;
	    }

	    /* This loop core dumps */
	    for (int j = 0 ; j < 20; ++j) {
	        p[j].print();
	    }

	    delete [20] p;
	    return (0);
	}
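(A small sketch, separate from the posted program, of why the subscripting
goes wrong: subscripting through a Base* uses sizeof(Base) as the stride,
while the objects created by `new Derived[20]' are sizeof(Derived) apart,
so for j > 0, p[j] need not refer to any object at all and a virtual call
through it goes through garbage. The class names below mirror the posted
program but the snippet itself is only an illustration.)

	#include <stdio.h>

	struct Base    { int i; virtual ~Base() { } };
	struct Derived : Base { int j; };

	int main()
	{
	    Derived a[3];
	    Base*   p = a;	// same conversion as  Base* p = new Derived[3];

	    // On implementations where sizeof(Derived) > sizeof(Base) (as on
	    // the cfront targets of the day), &p[1] and &a[1] differ, which is
	    // exactly the mismatch that makes the posted program crash.
	    printf("sizeof(Base) = %lu, sizeof(Derived) = %lu\n",
	           (unsigned long) sizeof(Base), (unsigned long) sizeof(Derived));
	    printf("&p[1] = %p   &a[1] = %p\n", (void*) &p[1], (void*) &a[1]);
	    return 0;
	}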
jimad@microsoft.UUCP (Jim ADCOCK) (08/23/90)
Roughly speaking E&S corresponds to compilers claiming "2.1" compatibility. delete [20] p verses delete [] p is a "2.0" to "2.1" difference. This is discussed on pg 65 E&S. I wouldn't be surprised compilers coming out in the near future allow the old form as an anachronism -- issuing a warning, but accepting the construct. Don't be surprised if "2.0" compatible compilers don't accept the "[]" construct.