[net.database] vendor-supplied benchmarks

hedrick@topaz.RUTGERS.EDU (Charles Hedrick) (08/12/86)

There are seldom clearcut winners and losers in performance.
Generally a user is looking not so much for a globally best product as
for one whose tradeoffs are appropriate for his usage.  Many
vendors take a long-term view, and do not want to get customers whose
application is inappropriate for their products or whose expectations
are bound to be thwarted.  It is obvious that vendors will attempt to
present information in a way that puts their best foot forward.  But
benchmarks and other information from vendors can still be useful.
The majority of sales people with whom I deal are ethical people, and
attempt to present a fair impression of their product.  Those few who
do not operate in this way generally find that their business with my
group is limited.

I understand that many may consider vendor-supplied benchmarks
inappropriate for Usenet.  For this reason many vendors will no doubt
prefer to refrain from posting them.  However I personally would be
happy to see them, as long as the postings are written by technical
people, include descriptions of how they were done and their likely
limitations, and as long as they do not contain overt advertisements.
Performance evaluation is an important topic, and vendor personnel are
often in a position to spend more time doing this sort of testing than
most users are.

metro@asi.UUCP (Metro T. Sauper) (08/14/86)

I feel that vendor benchmarks are valuable.  Obviously vendors will come
up with programs which make their particular software shine above the
competition.  This has great value as long as the benchmark programs are
published as part of the benchmark.  Who would better know how to drive
a database system at its maximum than the vendor?

Examples of high performance benchmarks might give valuable insight to
the best (intended) way of using a database package.
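To make Sauper's point concrete: the value of publishing the benchmark
program comes from readers being able to run and inspect it themselves.
A minimal modern-day sketch of what such a published program might look
like follows; sqlite3 and the table schema here are invented stand-ins
for illustration, not anything from an actual vendor benchmark.

```python
# Hypothetical published benchmark: load rows into a table, then time
# repeated indexed lookups.  All names (table "parts", row counts) are
# invented for illustration.
import sqlite3
import time

def run_benchmark(n_rows=10000, n_queries=1000):
    con = sqlite3.connect(":memory:")
    cur = con.cursor()
    cur.execute("CREATE TABLE parts (id INTEGER PRIMARY KEY, name TEXT)")
    cur.executemany(
        "INSERT INTO parts VALUES (?, ?)",
        ((i, "part-%d" % i) for i in range(n_rows)),
    )
    con.commit()

    # Time the query loop only, not the load phase -- a published
    # benchmark should say exactly what is and is not being measured.
    start = time.perf_counter()
    for i in range(n_queries):
        cur.execute("SELECT name FROM parts WHERE id = ?", (i % n_rows,))
        cur.fetchone()
    elapsed = time.perf_counter() - start
    con.close()
    return n_queries / elapsed  # queries per second

if __name__ == "__main__":
    print("%.0f indexed lookups/second" % run_benchmark())
```

Because the whole program is published, a skeptical reader can see that
only indexed primary-key lookups are measured, and can change the
workload to match his own usage.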

As an aside -- I see no problem with "advertisements" which have technical
value even for readers who do not purchase the product. In the case of
benchmarks, they not only show where some programs are better than others,
they quite often reveal deficiencies even in systems other than the
"favored" one.

Metro T. Sauper, Jr.
ihnp4!ll1!bpa!metro

robinson@ecsvax.UUCP (Gerard Robinson) (08/14/86)

I don't always have my facts straight, but is there not a Database Derby
going on/just completed in which both RTI and Oracle are/were participants?
Is there some way that these results can be posted to the net?
Thanks.

				Gerard Robinson
				UNC-CH School of Medicine