[sci.math] Bit error rates in a transmission

rg@msel.unh.edu (Roger Gonzalez) (03/23/91)

I'm trying to develop a package that exercises various serial devices
(such as RF or underwater acoustic modems) and measures (a) their bit
error rates and (b) the distances between bit errors.  I'm doing this
because another
guy here at our lab needs these numbers so that he can plug them into
some formulae for determining the best ways to send data through the
medium. 

I am using a data stream that looks like this:

:AA:AB:AC:AD .... :zx:zy:zz

I'm calling each three-character ':XX' sequence a packet.
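
For concreteness, here's a sketch of the sort of generator I mean (take
the counter alphabet -- 'A'-'Z' followed by 'a'-'z' here -- as
illustrative; the details don't matter much):

#include <stdio.h>

/* Emit the test pattern :AA:AB:AC ... :zy:zz, assuming the two counter
 * characters each run over 'A'-'Z' followed by 'a'-'z'.
 */
static const char symbols[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

int main(void)
{
    int i, j;

    for (i = 0; i < 52; i++)
        for (j = 0; j < 52; j++)
            printf(":%c%c", symbols[i], symbols[j]);
    putchar('\n');
    return 0;
}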

When I detect an error, I start putting bytes into an "analysis buffer".
If, while in this error condition, I read X consecutive correct packets
(X = 2 at present), I assume I have resynced and analyze the contents of
the buffer.
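
Roughly, the receive loop is shaped like this.  It's only a skeleton:
read_byte(), expected_byte(), resynced(), and analyze_buffer() are
stand-ins for the real routines, and recognizing a "correct packet"
while out of sync is exactly the part I've glossed over:

#define SYNC_PACKETS  2            /* "X correct packets"              */
#define PACKET_LEN    3            /* ':' plus two counter characters  */
#define BUF_SIZE      4096

extern int  read_byte(void);       /* next byte from the serial line   */
extern int  expected_byte(void);   /* next byte of the known pattern   */
extern int  resynced(const unsigned char *buf, int len);
                                   /* do the last SYNC_PACKETS packets
                                      in buf[] match the pattern?      */
extern void analyze_buffer(const unsigned char *buf, int len);

void receive_loop(void)
{
    static unsigned char buf[BUF_SIZE];
    int len = 0;                   /* bytes in the analysis buffer     */
    int in_error = 0;

    for (;;) {
        int c = read_byte();

        if (!in_error) {
            if (c == expected_byte())
                continue;          /* byte is good, nothing to do      */
            in_error = 1;          /* first bad byte: start buffering  */
        }

        if (len < BUF_SIZE)
            buf[len++] = (unsigned char)c;

        if (resynced(buf, len)) {
            /* drop the trailing good packets and analyze the rest;
             * repositioning expected_byte() past the resync point
             * is not shown                                            */
            analyze_buffer(buf, len - SYNC_PACKETS * PACKET_LEN);
            len = 0;
            in_error = 0;
        }
    }
}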

My first cut is this:

If # bytes in buffer == # bytes expected, compare each byte in buffer with
what it should be (bitwise) and update the error rate.
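
The bitwise comparison itself is just a population count of the XOR of
each byte pair, folded into a pair of running totals; something like:

/* Equal-length case: XOR each received byte with the byte that should
 * have arrived and count the 1 bits.  err_bits/total_bits are stand-ins
 * for wherever the running totals actually live.
 */
static long err_bits = 0, total_bits = 0;

static int popcount8(unsigned x)
{
    int n = 0;
    while (x) { n += x & 1; x >>= 1; }
    return n;
}

void tally_equal_length(const unsigned char *got,
                        const unsigned char *expect, int len)
{
    int i;

    for (i = 0; i < len; i++)
        err_bits += popcount8(got[i] ^ expect[i]);
    total_bits += 8L * len;
    /* bit error rate so far = (double)err_bits / total_bits */
}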

If # bytes in buffer < # bytes expected, then there was a dropout somewhere.
I take one minus the current bit error rate, multiply it by the number of
bits in the buffer, and call the result the number of good bits in the
buffer.  I then use this to recalculate the error rate.  (For example, if
there are 400 bits in the buffer, I was expecting 500, the current error
rate is .50, and I have 20000 good bits out of 40000 so far, I recalculate
from

   good fraction = (20000 + ((1 - .50) * 400)) / 40400

and take the error rate to be one minus that.)
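
In code, that bookkeeping looks roughly like the following, run on the
example numbers (note that the 100 missing bits aren't charged to either
total, which is part of what bothers me):

#include <stdio.h>

/* Dropout case, using the example numbers above: 20000 good bits out of
 * 40000 so far, and a 400-bit buffer where 500 bits were expected.
 */
static double good_bits  = 20000.0;
static double total_bits = 40000.0;

static void tally_dropout(long bits_in_buffer)
{
    double good_frac = good_bits / total_bits;   /* = 1 - error rate */

    good_bits  += good_frac * bits_in_buffer;    /* credited, not measured */
    total_bits += bits_in_buffer;
}

int main(void)
{
    tally_dropout(400);                          /* expected 500, got 400 */
    printf("error rate = %f\n", 1.0 - good_bits / total_bits);
    return 0;
}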

I'm not quite sure what to do if there are extra bytes (buffer > expected).

At any rate, the biggest flaws I see with this are 

    1) In the first instance, if there is a dropout near the beginning
       and just enough extra bytes later on (or vice versa), the entire
       sequence will be shifted left or right and the bit comparison
       will be completely bogus.

    2) I'm not sure about the mathematical legality of my method in
       the second case.  It's a fudge factor that I'm hoping will
       never become statistically bothersome.

It seems to me that I need a more general case procedure that combines
both cases, and is more intelligent about dropouts/extras.  Unfortunately,
my best idea to date (other than what I said above) is to keep recursing
on the buffer, trying to make it "score" as low a bit error rate as possible.
This seems computationally painful (and painful to code as well).

Since I'm trying to keep up with 1200-4800 baud on the fly on a 12MHz '286,
I need something that runs fast.
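
For what it's worth, one way to pin down the "score as low as possible"
idea without open-ended recursion might be a standard edit-distance
dynamic program over the buffer and the expected bytes: charge
popcount(expected XOR received) bits for a substitution and, say, 8 bits
for each dropped or extra byte (those per-byte charges are themselves a
judgement call), and take the minimum total.  The full DP is O(m*n), but
the analysis buffer is only a few packets long, and a diagonal band would
cut the cost further.  A sketch, run on the extract quoted at the end of
this post:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define DROP_COST 8     /* bits charged for a dropped byte */
#define XTRA_COST 8     /* bits charged for an extra byte  */

static int popcount8(unsigned x)
{
    int n = 0;
    while (x) { n += x & 1; x >>= 1; }
    return n;
}

/* Minimum total "bit error" cost to explain the received bytes, given
 * the bytes that should have arrived: classic edit-distance DP with
 * substitution cost popcount(a ^ b).  O(m*n) time, O(n) space.
 */
static long score(const unsigned char *expect, int m,
                  const unsigned char *got,    int n)
{
    long *prev = malloc((n + 1) * sizeof *prev);
    long *cur  = malloc((n + 1) * sizeof *cur);
    long best;
    int i, j;

    if (!prev || !cur)
        return -1;

    for (j = 0; j <= n; j++)
        prev[j] = (long)j * XTRA_COST;           /* all extras   */

    for (i = 1; i <= m; i++) {
        cur[0] = (long)i * DROP_COST;            /* all dropouts */
        for (j = 1; j <= n; j++) {
            long sub = prev[j - 1] + popcount8(expect[i - 1] ^ got[j - 1]);
            long del = prev[j] + DROP_COST;
            long ins = cur[j - 1] + XTRA_COST;
            long min = sub < del ? sub : del;

            cur[j] = min < ins ? min : ins;
        }
        memcpy(prev, cur, (n + 1) * sizeof *prev);
    }
    best = prev[n];
    free(prev);
    free(cur);
    return best;
}

int main(void)
{
    /* the raw extract and its intended contents, from the example
     * quoted later in this post                                   */
    const char *got    = "C:C:A!DvFH~A:E:A@zBJ::";
    const char *expect = "B:AC:AD:AE:AF:AG:AH:AI:AJ:AK:";

    printf("minimum bit-error score: %ld\n",
           score((const unsigned char *)expect, (int)strlen(expect),
                 (const unsigned char *)got,    (int)strlen(got)));
    return 0;
}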


Any help, ideas, thoughts, or criticisms would be greatly appreciated.

-Roger


BTW: here's a typical extract from the error analysis buffer, if you're
feeling masochistic and want an example:

C:C:A!DvFH~A:E:A@zBJ::

or,

C:_C:A!DvFH~A_:E_:A@z___BJ:__:  (aligned with below, '_' = dropout, '!' = extra)

B:AC:A D:AE:AF:AG:AH:AI:AJ:AK:  (what it should have been)

So.. what should the error rate be?

Bit comparisons should be done where the characters are vertically
aligned; the '!' indicates an extra character in the buffer that doesn't
correspond to any real position.
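
Given a hand alignment like the one above, the mechanical part of the
tally might look like this; it counts substituted bits, dropped bytes,
and extra bytes separately, and what to do with the last two is the open
question:

#include <stdio.h>
#include <string.h>

/* Tally the hand-aligned example above: '_' marks a dropout in the
 * received line, and a blank in the expected line sits under an extra
 * character.
 */
static int popcount8(unsigned x)
{
    int n = 0;
    while (x) { n += x & 1; x >>= 1; }
    return n;
}

int main(void)
{
    const char *got    = "C:_C:A!DvFH~A_:E_:A@z___BJ:__:";
    const char *expect = "B:AC:A D:AE:AF:AG:AH:AI:AJ:AK:";
    int sub_bits = 0, dropped = 0, extra = 0, compared = 0;
    size_t i, n = strlen(got);

    for (i = 0; i < n; i++) {
        if (got[i] == '_')                /* dropout: nothing to compare */
            dropped++;
        else if (expect[i] == ' ')        /* extra: no real position     */
            extra++;
        else {
            sub_bits += popcount8((unsigned char)got[i] ^
                                  (unsigned char)expect[i]);
            compared++;
        }
    }
    printf("%d bytes compared, %d bits substituted, "
           "%d bytes dropped, %d bytes extra\n",
           compared, sub_bits, dropped, extra);
    return 0;
}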

While there are some places that even a human couldn't disambiguate, I
need a way to get numbers for the parts that are unambiguous.


Thanks!


    
-- 
"The question of whether a computer can think is no more interesting
 than the question of whether a submarine can swim" - Edsger W. Dijkstra 
rg@[msel|unhd].unh.edu        |  UNH Marine Systems Engineering Laboratory
r_gonzalez@unhh.bitnet        |  Durham, NH  03824-3525