[comp.theory] Average running time of a simple algorithm

kolount@portia.Stanford.EDU (Michael Kolountzakis) (12/23/89)

Hi, here is a question:

We have a vector f[1..n] and we fill it like this:
 f[1] = 0 and f[i] = f[i-1] +/- 1 with equal probability.
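
 In C, the filling step might look like this (a minimal sketch; the name
fill_walk and the use of rand() are mine, just for illustration):

    #include <stdlib.h>

    /* Fill f[1..n] with the walk described above: f[1] = 0, and each
       later entry differs from its predecessor by +1 or -1 with equal
       probability.  f[0] is unused so indices match the text. */
    void fill_walk(int f[], int n)
    {
        int i;
        f[1] = 0;
        for (i = 2; i <= n; i++)
            f[i] = f[i-1] + ((rand() & 1) ? 1 : -1);
    }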

 We have the following simple algorithm for finding max{f[i], i=1..n}.

 We keep a variable M, initially 0, and raise it until it reaches the
required value, as follows: a pointer starts at the beginning of the vector
and repeatedly jumps to the right as far as it safely can, namely by
M - f[current position] + 1 steps (since f changes by only 1 per step, none
of the skipped entries can exceed M). After each jump it updates M if
necessary, that is when f[new position] > M. The algorithm is over when the
pointer passes the end of the vector. This is a simple algorithm.
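
 Here is a sketch of how I code it (the names max_single and steps are
mine; steps just counts how many jumps the pointer makes, which is the
cost I am measuring):

    /* One run of the single-pointer algorithm on f[1..n].  Returns the
       maximum of f and, through *steps, the number of pointer jumps
       it took. */
    int max_single(const int f[], int n, long *steps)
    {
        int M = 0;   /* current candidate maximum; f[1] = 0, so M >= f[1] */
        int p = 1;   /* the pointer */

        *steps = 0;
        for (;;) {
            p += M - f[p] + 1;   /* first index that could exceed M */
            (*steps)++;
            if (p > n)
                break;           /* pointer passed the end: done */
            if (f[p] > M)
                M = f[p];        /* a landing value is at most M+1 */
        }
        return M;
    }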

 Another version of it is to split the vector into k equal pieces, i.e.
positions 1 to n/k, n/k+1 to 2*n/k, and so on, and to run the previous
algorithm on all the pieces simultaneously. That is, there is ONE global
variable M which all k copies of the algorithm read and update.

 If we want to compare the two versions (k=1 against some k>1, say k=2),
we must of course slow each pointer down to 1/k of the speed of the single
pointer; in other words, the time of our computer is shared evenly among
the k pointers.
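
 Here is a sketch of how I simulate the k-pointer version (the function
name, the round-robin scheduling, and the assumption that k divides n are
mine): on every round each pointer that is still inside its piece makes one
jump, every jump is charged one time unit, and all pointers share the single
global M. I also look at the first entry of each piece when initializing,
so that every jump is at least 1.

    /* k-pointer variant: split f[1..n] into k equal pieces and run one
       skipping pointer per piece, round-robin, all sharing ONE global
       maximum M.  Every individual jump costs one time unit, which
       models giving each pointer 1/k of the machine.  Assumes k | n. */
    int max_k_pointers(const int f[], int n, int k, long *steps)
    {
        int p[k + 1], last[k + 1];   /* per-piece pointer and piece end
                                        (C99 variable-length arrays)    */
        int M = 0, j, alive = k;

        for (j = 1; j <= k; j++) {
            p[j]    = (j - 1) * (n / k) + 1;   /* start of piece j      */
            last[j] = j * (n / k);
            if (f[p[j]] > M)                   /* look at the first     */
                M = f[p[j]];                   /* entry: M >= f[p[j]]   */
        }
        *steps = 0;
        while (alive > 0) {
            for (j = 1; j <= k; j++) {         /* one round: each live  */
                if (p[j] > last[j])            /* pointer jumps once    */
                    continue;
                p[j] += M - f[p[j]] + 1;
                (*steps)++;
                if (p[j] > last[j])
                    alive--;                   /* this piece is done    */
                else if (f[p[j]] > M)
                    M = f[p[j]];
            }
        }
        return M;
    }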
 
 The strange thing is that my simulations show that the second version is
(on the average, of course) much better than the first. This happens up to
a certain value of k.

 Can anybody tell me if this is true, and if so, WHY?

	Michael Kolountzakis 

	kolount@gauss.stanford.edu