nspl@olivey.olivetti.com (Riccardo Mazza) (03/26/90)
Does anybody have information about a commercial or public domain C++ test
suite for the AT&T C++ Translator, Release 2.0? Any help will be appreciated.
Thanks in advance.
-------------------
Ferraro Giovanni
Olivetti N.ICO 2" Piano, Ivrea (To) Italy
+39 125 528154
UUCP : mario@icodap.ico.olivetti.com
bill@contex.UUCP (Bill Phillips) (03/27/90)
In article <57289@oliveb.olivetti.com> nspl@oliveb.UUCP (Riccardo Mazza) writes:
:
: Does anybody have informations about commercial or public domain
: C++ test suite for AT&T C++ Translator, Release 2,0? Any help
: will be appreciated. Thanks in advance.
We'd be interested in this information too; thanks.
--
[Bill | William F] Phillips, [Systems Development | Development Systems] Group
Xyvision Design Systems (in a waterfowl preserve in Wakefield, MA, USA)
uunet!contex!bill (formerly wfp@well & wfp@dasys1)
halley@cs.utexas.edu (Mike Kelley) (03/28/90)
I just called Perennial in Santa Clara about such a beast. $15K for a
site-source license: a little steep for our needs, since we're not writing a
pre-processor/compiler. If anyone knows of a public domain suite, please
post--I'm sure lots of netters would be grateful. If you're interested in
Perennial's product, their number is (408) 727-2255; uucp address is
sun!practic!peren.

Mike Kelley
Tandem Computers, Austin, TX
halley!kelley@cs.utexas.edu
(512) 794-0954
bright@Data-IO.COM (Walter Bright) (03/29/90)
I'd be wary of most compiler test suites. Nearly all that I've looked
at only tested parsing and semantic analysis. Practically *none*
tested things like optimization, aliasing assumptions, code generation,
floating point accuracy, etc.
Another problem I tend to see is tests that are only meant to be
compiled; the tests are not complete enough to actually run. The
following "compiler" program will 'pass' such a test:
	int main() { return 0; }
Things to look for in a test suite are constructs like:
	#include <assert.h>

	void testadd()
	{
	    int i1, i2;

	    assert(sizeof(i1) == 2);	/* for 16-bit ints */
	    i1 = 1;
	    i2 = 2;
	    assert(i1 + i2 == 3);
	    assert(i1 - i2 == -1);
	    ...
	}
rfg@ics.uci.edu (Ronald Guilmette) (03/29/90)
In article <57289@oliveb.olivetti.com> nspl@oliveb.UUCP (Riccardo Mazza) writes:
>
> Does anybody have informations about commercial or public domain
> C++ test suite for AT&T C++ Translator, Release 2,0? Any help
> will be appreciated. Thanks in advance.

I don't know anything about the Perennial test suite, except that it costs
money. For those with empty pockets, I should add that I have been providing
my bug report files for cfront & g++ to people who have requested them. I
suppose that you could call this a "regression test suite" (as opposed to a
"formal validation" suite), but even that would be stretching it a bit.

All my bug report files do is give very short (usually minimal) demonstrations
of all of the bugs that I have found in cfront and g++. If you need to (or
want to) test a brand new C++ compiler or translator of your own design, these
tests will probably not help you much unless you happen to create some of the
exact same bugs as the authors of these other C++ language processors have.
The point is that these bug report files are *not* exhaustive validation
tests. They are just bug reports which all happen to include sufficient C++
code to demonstrate particular bugs. That's all.

Several people have already written to me asking for these files, and most of
them have told me that they only want them because they have to write some C++
code and they want to increase their chances that their code will be (easily)
portable between cfront and g++ (or vice versa). These people seem to think
(incorrectly) that (a) I have already found either *all* of the bugs in these
two processors, or at least the majority of them, and (b) that the best way
for them to learn about the features to be avoided (for portability's sake) is
to read my bug reports. Both of these ideas are wrong.
There is no way for me to know what percentage of the bugs that actually exist
in these two C++ language processors I have found, so there is no way for me
to reliably guess at it. I have not been doing these reports for very long,
however, so I would guess that I have only found a small fraction of the bugs
which actually exist. But it's impossible to know: you can *never* prove the
*absence* of bugs (or, in this case, of any *more* bugs). You can only prove
the inverse (i.e. that some do exist) by giving concrete examples.

Regarding methodologies for ensuring that code is *portable* between g++ and
cfront (or vice versa), reading my bug reports is probably one of the *least*
effective. In contrast, the most effective methodology (and the one I would
recommend to all who need to do this) is to (a) obtain both of these C++
language processors, and (b) frequently recompile *all* of your code with both
of them during development (and fix up any compatibility problems as they
become evident from the errors and warnings produced during these
compilations). In short, if you want to know *exactly* what each of these C++
language processors does and does not currently accept, just ask them!

Oh yes, and if you do follow my advice and do compilations with both, it would
be great if you could send brief descriptions of any differences in behavior
between the two to me, so that I can record them as either cfront bugs or as
g++ bugs. (Differences are usually one or the other.)

So given everything I've said, why do so many people want test suites for C++
all of a sudden? Are a lot of you folks writing your own "from scratch"
compilers? C'mon now. Fess up.

// Ron Guilmette (rfg@ics.uci.edu)
// C++ Entomologist
// Motto: If it sticks, force it.  If it breaks, it needed replacing anyway.
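The compile-with-both workflow described above could be scripted along these lines. This is only a sketch: the `CFRONT` and `GXX` defaults are assumptions, and you would substitute the commands that invoke your local cfront driver and g++.

```shell
# check_both: compile each source file given as an argument with two
# C++ front ends and report any file that either one rejects.
# CFRONT/GXX defaults ("CC" and "g++") are assumptions; override them
# to match your local installation.
check_both() {
    cfront="${CFRONT:-CC}"
    gxx="${GXX:-g++}"
    for src in "$@"; do
        for cxx in "$cfront" "$gxx"; do
            # A nonzero exit status means this front end rejected the file.
            if ! "$cxx" -c "$src" 2>/dev/null; then
                echo "$cxx rejects $src"
            fi
        done
    done
}
```

Run over the whole source tree after each round of changes, any file flagged by only one of the two front ends is a portability problem (and, per the post above, worth reporting as a bug in one or the other).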