[comp.databases] Oracle Performance Simulation

honzo@4gl.UUCP (Honzo Svasek) (11/20/88)

From article <3352@newton.praxis.co.uk>, by ben@praxis.co.uk (Ben Dillistone):
> I am in the process of trying to create a performance evaluation
> environment for an interactive/batch Oracle system.
. . .
> The problem is simulating variable numbers of users.
> The only solution so far is to use a script repeatedly running SQL*Forms
> with a pre-created echo file generating the keystrokes. A single such
> script generates about 150 queries a minute on a MicroVAX (can you type
> that fast?).

I have worked on a similar simulation for Unify/Accell on Unisys hardware.
We started off the way you describe. It gave some idea of the speed of the
machine for this application, but was not really realistic.

After quite a bit of experimenting, we ended up with a system that used ptys
and a feedback loop.

Two programs talk to the master end of the pty: a writer program (the
hands of the typist) and a reader program (the eyes of the typist).

The writer 'types' the keystrokes for one transaction and sleeps.
The reader reads the 'screen' until it recognises a token on the
screen that indicates the transaction has been processed. It then
waits some seconds ('thinks') and wakes the writer up with a user
signal.
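
To give an idea, here is a minimal sketch in C of such a reader/writer
pair. It is NOT the actual benchmark: the pty setup via posix_openpt(),
SIGUSR1 as the user signal, an interactive shell standing in for the
real application, and the literal completion token "TOKEN" are all
assumptions for illustration only.

#define _XOPEN_SOURCE 600

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <signal.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/wait.h>

#define TRANSACTIONS 5

static void wake(int sig) { (void)sig; }  /* SIGUSR1 only interrupts sigsuspend() */

int main(void)
{
    int master = posix_openpt(O_RDWR | O_NOCTTY);
    if (master < 0 || grantpt(master) < 0 || unlockpt(master) < 0) {
        perror("pty");
        return 1;
    }

    /* The application under test runs on the slave side of the pty;
     * an interactive shell stands in for it here. */
    if (fork() == 0) {
        setsid();
        int slave = open(ptsname(master), O_RDWR); /* becomes controlling tty */
        dup2(slave, 0); dup2(slave, 1); dup2(slave, 2);
        execlp("sh", "sh", "-i", (char *)0);
        _exit(1);
    }

    /* Keep SIGUSR1 blocked except inside sigsuspend(), so a wakeup
     * sent between write() and the sleep cannot get lost. */
    sigset_t block, empty;
    sigemptyset(&block); sigaddset(&block, SIGUSR1);
    sigprocmask(SIG_BLOCK, &block, NULL);
    sigemptyset(&empty);
    signal(SIGUSR1, wake);

    pid_t writer = fork();
    if (writer == 0) {                        /* the hands of the typist */
        /* The '' keeps the echoed keystrokes from containing the token;
         * only the command's output will match "TOKEN". */
        const char *keys = "echo T''OKEN\n";  /* one transaction's keystrokes */
        for (int i = 0; i < TRANSACTIONS; i++) {
            write(master, keys, strlen(keys));
            sigsuspend(&empty);               /* sleep until the reader signals */
        }
        _exit(0);
    }

    /* The eyes of the typist: watch the 'screen' for the token.
     * A real driver would buffer across reads and log response times. */
    char buf[512];
    int done = 0;
    ssize_t n;
    while (done < TRANSACTIONS && (n = read(master, buf, sizeof buf - 1)) > 0) {
        buf[n] = '\0';
        if (strstr(buf, "TOKEN")) {           /* transaction is processed */
            done++;
            sleep(2);                         /* think time */
            kill(writer, SIGUSR1);            /* wake the typist */
        }
    }
    close(master);                            /* hang up the pty */
    waitpid(writer, NULL, 0);
    return 0;
}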

The actual benchmark was developed from scratch for my client, and I doubt
that they are willing to make it available. The reader/writer part was
developed by me and based on comp.sources programs. I guess this makes
it public domain :)

If there is a lot of interest in it I will post it AS IT IS. This means that
you will have to change some things to make it work for you. For instance: it
has VT220 keys hard-coded in!
If there is not a lot of interest, I will mail it... just let me know.

I_I(           _  
I I ) Honzo Svasek  <honzo@4gl.nl>
---------------------------------------------------
Svasek Software Consultants  - - - - - > Uniconsult
IJsselkade 24, 3401 RC IJsselstein, the Netherlands
---------------------------------------------------

jkrueger@daitc.daitc.mil (Jonathan Krueger) (11/23/88)

In article <557@4gl.UUCP>, honzo@4gl (Honzo Svasek) writes:
>From article <3352@newton.praxis.co.uk>, by ben@praxis.co.uk (Ben Dillistone):
>> I am in the process of trying to create a performance evaluation
>> environment for an interactive/batch Oracle system.
>. . .
>> The problem is simulating variable numbers of users.
>> The only solution so far is to use a script repeatedly running SQL*Forms
>> with a pre-created echo file generating the keystrokes. A single such
>> script generates about 150 queries a minute on a MicroVAX (can you type
>> that fast?).
>
>I have worked on a similar simulation for Unify/Accell on Unisys hardware.
>We started off the way you describe. It gave some idea of the speed of the
>machine for this application, but was not really realistic.
>
>After quite a bit of experimenting, we ended up with a system that used ptys
>and a feedback loop.
>
>Two programs talk to the master end of the pty: a writer program (the
>hands of the typist) and a reader program (the eyes of the typist).

I am currently writing up the results of a similar benchmark performed
here.  The advantages of this type of testing include:

	measures specific software and hardware of interest

	yields clear, unambiguous results

	predicts real loads from real users under real conditions

	finds limits of acceptable performance with parametric testing

	finds system and application bottlenecks, identifying where to
	invest in system upgrades or application optimization

In other words, it's highly empirical and directly relevant.  But the
disadvantages include:

	impossible to be sure how well simulated users mimic actual users,
	yet the accuracy of the predictions depends entirely on this guess

	results are specific to the exact hardware config and environment;
	no way to predict results on a different processor, memory size,
	disk speed, operating system, application, or version of the DBMS

	highly human-intensive, therefore these data are expensive to
	collect, yet due to the above they must be recollected after each
	significant change to hardware, OS, application, or DBMS.
	Worse yet, the definition of `significant' is circular; without
	reference to a model, the only way to know what's significant is
	to re-run the simulation.

	Testing must be done on an unloaded system from driver
	software executing on another system, else artifacts render
	results unreliable; not only is this inconvenient, as it
	removes two shared systems from general availability, but it
	also tells you little about mixed workloads, i.e. what
	response database users will get when other users are active.

In other words, it's not a powerful method.  The ratio of predictions
generated to data collected is low.  The underlying problem seems to be
that you're observing performance but not modeling it.

This topic comes up every now and again on comp.arch.  I recommend John
Mashey's articles on performance measurement there; a recent summary
by Henry Spencer is <1988Nov18.200914.4636@utzoo.uucp>.  They warn
against small, synthetic benchmarks, single figures of merit, and
measures of performance which poorly represent real applications.

-- Jon
-- 

pavlov@hscfvax.harvard.edu (G.Pavlov) (11/23/88)

From article <3352@newton.praxis.co.uk>, by ben@praxis.co.uk (Ben Dillistone):
> I am in the process of trying to create a performance evaluation
> environment for an interactive/batch Oracle system.
>. . .
> The problem is simulating variable numbers of users.

  Whatever method you use will be viewed as suspect.  Hopefully by yourself 
  as well as others.

  Quite a long time ago, we attempted to benchmark systems using an approach
  which may give you some ideas.  We wrote a series of programs requiring
  "user interaction" which ran on the machines to be tested, and used a
  Compaq portable with software that simply provided the appropriate
  "response" to the programs' prompts.  The Compaq portable was used because
  we had to travel to the machines.  It was also a decent match for the
  machines we were looking at, VAX 750s and their peers.


  If I were to try to do something like this again, I would obtain a fast PC
  with Unix and a batch of serial ports.  Though "fast" would probably still
  not be fast enough to drive a transaction-processing simulation against a
  capable host machine.
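
  For what it's worth, the "canned response" end of such a setup can be
  quite small.  Below is a minimal sketch in C, assuming a Unix driver
  box; the device name /dev/ttyS0, the prompt/reply script, and the
  typing delay are all made up, and a real driver would add timeouts
  and response-time logging.  One instance per serial port gives one
  simulated user.

#define _DEFAULT_SOURCE   /* for cfmakeraw() */

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <termios.h>

struct step { const char *prompt, *reply; };

int main(void)
{
    /* Hypothetical script: prompts the host application prints, and
     * the keystrokes to type back at it. */
    static const struct step script[] = {
        { "Customer id:", "10042\r" },
        { "Order qty:",   "7\r"     },
    };
    enum { NSTEPS = sizeof script / sizeof script[0] };

    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("serial"); return 1; }

    struct termios t;                /* raw 8-bit line at 9600 baud */
    tcgetattr(fd, &t);
    cfmakeraw(&t);
    cfsetispeed(&t, B9600);
    cfsetospeed(&t, B9600);
    tcsetattr(fd, TCSANOW, &t);

    char buf[1024];
    size_t len = 0;
    for (int i = 0; i < NSTEPS; ) {
        ssize_t n = read(fd, buf + len, sizeof buf - 1 - len);
        if (n <= 0) break;           /* line dropped, or buffer full */
        len += (size_t)n;
        buf[len] = '\0';
        if (strstr(buf, script[i].prompt)) {   /* host wants input */
            sleep(1);                          /* simulated typing delay */
            write(fd, script[i].reply, strlen(script[i].reply));
            len = 0;                           /* scan afresh for next prompt */
            i++;
        }
    }
    close(fd);
    return 0;
}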

   greg pavlov, fstrf, amherst, ny