[comp.databases] Benchmarks and DBMS Internals

davek@rtech.rtech.com (Dave Kellogg) (07/24/88)

In article <302@turbo.oracle.UUCP> rbradbur@oracle.UUCP (Robert Bradbury) writes:
>I suspect that the hard numbers I need to back it up are covered by a
>non-disclosure agreement of some sort.  I make the claim based on the fact
>that the sort/merge/join modifications made to Oracle 5.0 were done
>specifically to outperform earlier releases of Informix and Ingres
>which had been improved to outperform Oracle 4.0.
>

Several comments here:

	1. Modifying "sort-merge" join algorithms will do absolutely 
	   *nothing* to speed up performance of TP1-like benchmarks.
	   There's not a single join query in a TP1, or DebitCredit, 
	   test.  For DeWitt (Wisconsin) tests, it should speed up all 
	   queries which involve joins - but only those queries
	   (whether the join is specified externally by the user or 
	   introduced internally).

	2. Robert's claim that "earlier releases of INGRES were improved
	   to outperform his firm's Version 4" presupposes that INGRES
	   needed modification to outperform his firm's product.  I wish
	   Robert would take the sage advice of many readers of this group
	   and stop throwing [erroneous] stones.  By the way, INGRES 
	   Release 4 was primarily a functionality release, with most 
	   enhancements made to user-interfaces, and not to the DBMS.

	3. Robert's erroneous note that enhancing "sort-merge" algorithms
	   will generally enhance performance does, however, bring up
	   an excellent point -- different DBMS modifications affect 
	   different classes of queries.  For example, "fast commit"
	   speeds updates, but won't do a darn thing to speed up selects
	   (or fetches).

	   I'll now state the obvious, which I presume we all already 
	   know in our hearts.  Purported words of wisdom follow:

		Database Systems are very complicated.  Marketing 
		departments are chartered with "differentiating" 
		their products from others.  This often results in
		feature lists, scorecards, advertisements, etc. that
		say 'without feature X, you're up the creek.'

		However, due to the complexity of DBMSs, comparing 
		feature for feature won't do a darn thing to help you
		see whether a system will meet your needs.  Benchmarks
		will do a little more to help, but not much, and the
		circle of confusion surrounding benchmarks is helping
		to make them almost worthless, unless written and run
		by you, yourself.

		However, customer inquiry about 'internals' is natural
		and inevitable.  I personally recommend that "feature
		lists" be avoided.  Rather, I like architectural 
		discussions, which actually let someone get an overall
		feel for how a system functions.

		In evaluating a system for your needs, I recommend the
		following:

			1. Determine your needs, first.  Failing to do so
			   often results in a vendor (typically the first
			   one in the door) defining YOUR needs for you.

			2. Tell the vendors your needs.  If they tell you 
			   "No!  You need the features on this scorecard 
			   that no one else has...," that should give you a 
			   feel for how sincerely they want to help you.

			3. Become technically familiar with the overall 
			   architecture of the system.  You'll be amazed
			   how much you can induce, once you understand
			   the "philosophy" of a product.

			4. If needed and time permitting, develop and 
			   benchmark a prototype application using the 
			   vendor's REGULAR phone support (so you can get a 
			   taste of post-honeymoon support) during development.
			   Furthermore, development time and ease should be a 
			   major factor in evaluating "system performance."

			   I'd recommend allowing their consultants to help
			   you with tuning the benchmark, however, because if
			   you have just started with a product you won't 
			   have sufficient tuning knowledge to see it at its
			   best.
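To make comment 1 above concrete: a sort-merge join sorts both tables on the
join key and then merges the two sorted streams, so improving it can only
help queries that actually join.  A TP1/DebitCredit transaction touches
single rows by key and never executes anything like this.  Here's a minimal
sketch (a hypothetical Python illustration, not any vendor's implementation;
the table and column names are made up):

```python
def sort_merge_join(left, right, key):
    """Join two lists of row-dicts on `key` by sorting, then merging."""
    left = sorted(left, key=lambda r: r[key])
    right = sorted(right, key=lambda r: r[key])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][key], right[j][key]
        if lk < rk:
            i += 1          # left row has no match yet; advance left
        elif lk > rk:
            j += 1          # right row has no match yet; advance right
        else:
            # Equal keys: emit this left row against the whole run of
            # matching right rows, then advance left (handles duplicates).
            j2 = j
            while j2 < len(right) and right[j2][key] == lk:
                out.append({**left[i], **right[j2]})
                j2 += 1
            i += 1
    return out

# Invented example tables:
emps = [{"dept": 2, "name": "kim"}, {"dept": 1, "name": "lee"}]
depts = [{"dept": 1, "dname": "sales"}, {"dept": 2, "dname": "eng"}]
print(sort_merge_join(emps, depts, "dept"))
```

Note that every line of work here is sorting or merging; a benchmark made
entirely of single-row debits and credits exercises none of it.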


Feel free to make additions to the above list.  In my personal opinion, 
the creation of a comp.databases "evaluation methodology" is a *far* more
worthwhile pursuit than the creation of a comp.databases benchmark.  Such
a methodology would also benefit DBMS shoppers far more than yet another
benchmark.  Every benchmark sets out to "clear the smoke" -- see what a good
job they've done? :-).

David Kellogg
|----------------------------------------------------------------------
|Relational Technology (INGRES) New York City
|212-952-1400
|"Opinions above are my own, and not necessarily those of my employer"
|----------------------------------------------------------------------