[comp.databases] Correction of benchmark availability from Oracle.

rbradbur@oracle.UUCP (Robert Bradbury) (07/19/88)

I have recently been informed by a manager at PACBELL that Oracle
does not have the results of the benchmarks they did, that those
results are proprietary, and that I should be shot for suggesting
to the people reading this topic that a public utility could
release information favoring one vendor over another.  All I am
told I can say is that PACBELL has chosen Oracle as one of its
database systems.

Ouch!

Of course that leaves us back where we were before I opened
my mouth and stuck my foot in it.  As I see it, the companies
that have the manpower & machine resources to do these
benchmarks "right" are disinclined to publish the results
due to the potential legal hassles.  From participating in these
benchmarks I know that large companies spend man-months and days of
dedicated computer time (which is expensive for a machine like
an Amdahl) comparing these systems.  I also know of one large
company with a policy of re-evaluating the RDBMSs on the market
every 2 years.  Given that most of these RDBMSs are larger than
the UNIX kernel (some by a factor of 2 or more) and probably
require many times more functional tests than UNIX does, one
is faced with a massive job when trying to compare these
systems.

I think I'm forced to agree with the suggestion that we
need an open forum where all vendors agree to participate
and publish the results.  I think this was tried at UniForum
recently and that one or more of the major UNIX RDBMS vendors
did not participate (I know we did).  I think problems crop up
here when the machines the UNIX vendors are willing to donate
happen not to be the ones on which one RDBMS vendor or another
performs favorably.  The obvious solution is to have
a variety of platforms agreed upon far enough in advance
that the RDBMS vendors have their latest and greatest
running and tuned for those machines.  Also, the RDBMS vendors
would have to agree to publish all of the code used in the
benchmarks, so other vendors would be free to respond to any
overly "creative" approaches to superior performance.

If I recall correctly, there was (or is) a USENIX/UniForum
committee or working group focused on these kinds of things.
Does anyone know its status, and whether the results of the
last RDBMS comparison were published?

eric@pyrps5 (Eric Bergan) (07/19/88)

In article <274@turbo.oracle.UUCP> rbradbur@oracle.UUCP (Robert Bradbury) writes:
>
>I think I'm forced to agree with the suggestion that we
>need an open forum where all vendors agree to participate
>and publish the results.  I think this was tried at UniForum
>recently and that one or more of the major UNIX RDBMS vendors
>did not participate (I know we did).

	Actually, it was held at UNIX Expo in October of 1987. The
participants were Focus, MDBS, Oracle, Progress, and Unify. Other
vendors, based on previous experiences with this event, declined
to attend. The event was run by Neal Nelson, who markets a benchmark
suite that is not database-related.

	The test database was only 14 megabytes of data. The test
machine had 16 megabytes of memory, so large portions of the tests
did no disk accesses. The tests were single-threaded, and somewhat
similar to the DeWitt tests in what they were testing (scans of
various sizes, joins, aggregates, etc.). There were only a few
update tests, and it was not specified whether logging was to be
enabled. There were no OLTP-type tests.
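
	To give a concrete flavor of that style of test, here is a
rough sketch of DeWitt-style queries. This is my own illustration,
not the code actually run at the show; the column names follow the
Wisconsin benchmark convention, and Python with sqlite3 is used
only to keep the example self-contained:

    import sqlite3

    con = sqlite3.connect(":memory:")  # stand-in for the DBMS under test
    con.execute("CREATE TABLE tenk (unique1 INT, unique2 INT, "
                "ten INT, stringu1 TEXT)")
    con.executemany("INSERT INTO tenk VALUES (?, ?, ?, ?)",
                    ((i, (i * 7) % 10000, i % 10, "x" * 48)
                     for i in range(10000)))

    # selections of various sizes (a 1% scan and a 10% scan)
    con.execute("SELECT * FROM tenk WHERE unique1 < 100").fetchall()
    con.execute("SELECT * FROM tenk WHERE unique1 < 1000").fetchall()

    # a join and an aggregate, in the spirit of the DeWitt suite
    con.execute("SELECT COUNT(*) FROM tenk a, tenk b "
                "WHERE a.unique1 = b.unique2 "
                "AND a.unique1 < 1000").fetchone()
    con.execute("SELECT ten, MIN(unique1) FROM tenk GROUP BY ten").fetchall()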

	Of the 16 tests, the vendors completed 10, 11, 8, 7, and
14 of the tests (not in any order of vendor). The reason was
insufficient time for each vendor to complete the full suite.

	All in all, I'm not surprised that the database vendors are
not particularly interested in running in this environment. My understanding
is that Neal Nelson is negotiating with the various DBMS vendors to
provide a better test environment for the upcoming UNIX Expo. But
it will be difficult, in a trade show environment and in the course
of a few days, to do a reasonable test of even a subset of
the 20+ DBMSs available under UNIX. Ideally, you would want to
use a 100+ Mbyte database, which probably kills a day per database
system just to load and build the indices. Then at least another
day should be allotted to allow the database vendor to run the
tests and make "reasonable" tuning adjustments. ("Reasonable"
presumably does not include making
modifications to the database code itself, but does include adjusting
buffering, data layout on spindles, etc.) Sounds like something of
a logistical nightmare.
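
	For a sense of where that setup day goes, here is a minimal
sketch of the load-then-index step one would be timing. The schema
and row count are invented for illustration, and a real run would
use the vendor's bulk loader rather than row-at-a-time inserts:

    import sqlite3, time

    con = sqlite3.connect("bench.db")
    con.execute("CREATE TABLE accounts "
                "(id INTEGER, branch INTEGER, balance REAL)")

    t0 = time.time()
    con.executemany("INSERT INTO accounts VALUES (?, ?, ?)",
                    ((i, i % 100, 0.0) for i in range(1000000)))
    con.commit()
    t1 = time.time()

    # the index build re-reads and sorts everything just loaded,
    # which is why setup alone can dominate at the 100+ Mbyte scale
    con.execute("CREATE INDEX accounts_branch ON accounts(branch)")
    con.commit()
    t2 = time.time()

    print("load: %.1fs  index build: %.1fs" % (t1 - t0, t2 - t1))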

	I really do wish Neal Nelson luck with this effort, but I'm
just not sure that it can be pulled off at a trade show. On the brighter
side, there are apparently efforts underway (some reported on and
talked about at SIGMOD) to try to come up with more meaningful
benchmark tests that are better representations of the "real"
world. (What, you think the average OLTP application does 60%
updates to 40% retrievals like TP1 does?) Also, the conditions
under which the benchmarks are to be run are being more strictly
defined, to try to reduce apples-and-oranges comparisons. So
maybe in the next year or so we
will see more definitive benchmarks being run and published.
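
	For reference, TP1 is the classic debit/credit test: each
transaction updates an account, its teller, and its branch, and
appends a history record, which is where the update-heavy mix
comes from. A rough sketch of the transaction shape (the schema
and scaling here are my own invention for illustration, and
logging and concurrency are ignored):

    import random, sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL);
        CREATE TABLE teller  (id INTEGER PRIMARY KEY, balance REAL);
        CREATE TABLE branch  (id INTEGER PRIMARY KEY, balance REAL);
        CREATE TABLE history (acct INTEGER, delta REAL);
    """)
    con.executemany("INSERT INTO account VALUES (?, 0)",
                    ((i,) for i in range(1000)))
    con.executemany("INSERT INTO teller VALUES (?, 0)",
                    ((i,) for i in range(10)))
    con.executemany("INSERT INTO branch VALUES (?, 0)",
                    ((i,) for i in range(2)))

    def tp1():
        # three updates and an insert for every retrieval --
        # hence the skewed update-to-retrieval ratio
        acct = random.randrange(1000)
        delta = float(random.randint(-100, 100))
        con.execute("UPDATE account SET balance = balance + ? "
                    "WHERE id = ?", (delta, acct))
        con.execute("UPDATE teller SET balance = balance + ? "
                    "WHERE id = ?", (delta, acct % 10))
        con.execute("UPDATE branch SET balance = balance + ? "
                    "WHERE id = ?", (delta, acct % 2))
        con.execute("INSERT INTO history VALUES (?, ?)", (acct, delta))
        con.execute("SELECT balance FROM account WHERE id = ?",
                    (acct,)).fetchone()
        con.commit()

    for _ in range(100):
        tp1()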