[comp.org.acm] Programming Contest Style

booga@ibmpa.awdpa.ibm.com (Steve Jankowski) (05/03/91)

I think the major problems of a programming contest for quality
have been brought up.  There is just no widely accepted measure of
software quality.  But all is not lost; there are several meaningful
measurements of quality that can be run.  And there are other ways
to measure team success.

Starting with the latter, I helped run a "new" style programming
contest at Cal Poly, San Luis Obispo (shortly before I graduated,
which wasn't long ago).  The idea was not mine, but belongs to a Cal
Poly prof, John Dalbey.  The goal was to get students to spend more time
thinking about a problem and their solution.  So, John decided to
measure the number of compiles and runs that the student performed.
Compiles that returned errors "cost" more points than those that
compiled cleanly.  Each run of compiled code cost a point, as did a
failed turn-in.
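
To give the flavor, here's a toy Bourne shell version of the scoring
idea (the point values are invented, just to show the mechanism):

    # Shadow cc with a shell function so compiles get counted.
    score=0
    cc () {
        if /bin/cc "$@"; then
            score=`expr $score + 1`     # clean compile
        else
            score=`expr $score + 3`     # compile errors cost more
        fi
    }
    run () {
        score=`expr $score + 1`         # each run of compiled code
        ./a.out "$@"
    }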

I wrote a small shell to help keep people honest (though it was FAR
from secure) and to automatically track compiles and runs.  This was
done on a Unix platform with the cc and gcc compilers.  One nice
consequence was that the students didn't have to turn in source code.
The turnin command in the shell triggered a piece of email to the
judge (Dalbey) who then just ran the a.out in the contestant's
directory (the shell chdir'd to a "protected" directory).
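
The turnin piece needs little more than this sketch (the judge's
mail address and the directory layout here are placeholders, not the
real setup):

    # Contestants were started chdir'd into their "protected"
    # directory, so the judge could just run the a.out left there.
    turnin () {
        if [ ! -x a.out ]; then
            echo "turnin: no compiled program to judge" 1>&2
            return 1
        fi
        echo "`whoami` has turned in from `pwd`" | mail judge
    }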

Students did spend more time planning their solution to the problem.
To keep life simple, two problems were given (one easy and one hard)
and there were no teams.  There was an obvious three-way split among
the students who knew the language well, those who knew how to
program, and those who were lost on both counts.

There was quite a bit of emphasis on knowing the language really well,
but that requirement could be loosened up without making the contest
too easy.

Another thing I have in mind requires a fair amount of knowledge of
software engineering, far more than would be taught at the high
school level.  How about requiring teams to generate a regression
test suite for their programs and measuring code coverage from it?
There are some free coverage testers for C, so the details shouldn't
be hard to work out.  I/O would likely need to be limited to stdin
and stdout, but that's not a big deal.
If contestants knew they had to generate sufficient test data for
their code to run properly and achieve 85% coverage, I think they
would code much differently.  I can't convince myself they would code
any better, but at least some semblance of s/w engineering would
be included.
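
To make that concrete: gcc's companion tool gcov is one example of
such a tester, and a judging pass could look roughly like this (the
file names and the 85% check are just an illustration, not a
worked-out contest harness):

    # Build the team's program with coverage instrumentation.
    gcc -fprofile-arcs -ftest-coverage -o prog prog.c

    # Run their regression suite; I/O is limited to stdin/stdout.
    for t in tests/*.in
    do
        ./prog < $t > /dev/null
    done

    # gcov reports the percentage of lines executed; the judges
    # would then check it against the 85% threshold.
    gcov prog.c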

Code coverage is just one example; perhaps there are other s/w eng
mechanisms that could be included.

There's still the problem that some would discount regression testing
and coverage testing as a waste of time... but oh well.  Some people think
programming is a waste of time.

booga
booga@ibmpa.awdpa.ibm.com