[sci.math] Looking for Markov modelling code

aas@brolga.cc.uq.oz.au (Alex Sergejew) (05/26/91)

Gidday!

Back on Thu, 2 May 1991 11:34:25 GMT I asked the net:

>  I'm aware that there is an extensive literature on efficiently estimating
> state transitions and unknown parameters of hidden and semi-Markov signal
> models from time series data, primarily used in the speech processing
> context, but have been making heavy weather of translating the basic
> references I've found into useful working code.
>
> Could anyone please point me in the right direction, whether it be
> algorithmic descriptions or (preferably) PD source code.  If there is enough
> interest I will gladly post a summary.

The following were also interested:

David Haines <haines%hobart@cs.umass.edu>
Robert Wylie <wylie@watnow.uwaterloo.ca>
Krishna Nanda <nanda@linc.cis.upenn.edu>
Sven.Koenig@a.gp.cs.cmu.edu
Wolfgang Nejdl <vexpert!nejdl@relay.eu.net>
Rick@vee.lrz-muenchen.de

and my thanks to the following whose replies, whilst pointing to helpful
references (most of which I was already aware of), unfortunately did not
pin down any working code:

Mou-Yen Chen <mychen@cs.buffalo.edu>
Len Moskowitz <moskowit@paul.rutgers.edu>
Robert Goldman <rpg@rex.cs.tulane.edu>

I append extracts from their replies.  I have informally been reassured that
semi-Markov and HMM algorithms are not all that difficult in themselves, once
one gets past the notation in which they are described.  If anyone *does*
have more information (or code) I'm sure we'd all love to hear about it!

Alex.

 _--_|\    Alex A Sergejew               Internet:  aas@cc.uq.OZ.AU
/      X   University of Queensland      Voice:     +61-7-271-8298
\_.--._/   Brisbane, Qld, Australia      Fax:       +61-7-271-8567

+-+-+-+-+-+-+-+

>From: Mou-Yen Chen <mychen@cs.buffalo.edu>
>Message-Id: <9105030737.AA20871@sirius.cs.buffalo.edu>
>In-Reply-To: <1991May2.113425.802@brolga.cc.uq.oz.au>
>Organization: CEDAR, SUNY at Buffalo

    We are using HMMs (Hidden Markov Models) in handwritten word recognition.
To my knowledge, the most direct way to implement an HMM is the Viterbi
algorithm, which is the statistical version of DP (dynamic programming).
Maybe these references can help you a little:

(1) A. Kundu, Yang He and P. Bahl, "Recognition of Handwritten Word: First and
    Second Order Hidden Markov Model Based Approach", Pattern Recognition,
    Vol. 22, No. 3, pp.283-297, 1989
(2) L.R. Rabiner and B.H. Juang, "An Introduction to Hidden Markov Models",
    IEEE ASSP Magazine, Vol. 3, No. 1, Jan.1986, pp.4-16
(3) G. David Forney, Jr., "The Viterbi Algorithm", Proc. IEEE, Vol. 61, No. 3,
    March 1973, pp. 268-278
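
[Ed. note: since nobody could pin down working code, here is a minimal
Python sketch of the Viterbi dynamic-programming recursion that Chen
describes.  The toy "weather" model (states, probabilities) is invented
purely for illustration and does not come from any of the papers above.]

```python
# Viterbi decoding for a discrete HMM: a minimal illustrative sketch.
# Model parameters below are made-up toy values, not from any reference.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return (most likely state path, its probability) for `obs`."""
    # V[t][s] = probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]  # back-pointers for recovering the path
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # DP step: pick the best predecessor state r for state s
            prob, prev = max(
                (V[t - 1][r] * trans_p[r][s] * emit_p[s][obs[t]], r)
                for r in states)
            V[t][s] = prob
            back[t][s] = prev
    # Trace back from the best final state
    prob, last = max((V[-1][s], s) for s in states)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path, prob

# Toy two-state weather model (invented for illustration)
states = ('Rain', 'Sun')
start_p = {'Rain': 0.6, 'Sun': 0.4}
trans_p = {'Rain': {'Rain': 0.7, 'Sun': 0.3},
           'Sun':  {'Rain': 0.4, 'Sun': 0.6}}
emit_p = {'Rain': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
          'Sun':  {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}}
path, p = viterbi(('walk', 'shop', 'clean'),
                  states, start_p, trans_p, emit_p)
# path -> ['Sun', 'Rain', 'Rain']
```

The whole algorithm really is just a max-product table fill plus a
backtrace, which bears out the claim that the hard part is the notation.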

+-+-+-+-+-+-+-+

>From: Len Moskowitz <moskowit@paul.rutgers.edu>
>Message-Id: <9105031826.AA17586@paul.rutgers.edu>
>In-Reply-To: USENET article <1991May2.113425.802@brolga.cc.uq.oz.au>

It's not Markov Models but there was some work on something similar
that's embodied in a product called AIM (Abductory Induction
Mechanism) available from AbTech Corp (700 Harris St.,
Charlottesville, VA 22901 USA, phone: 804-977-0686).

+-+-+-+-+-+-+-+

>From: Robert Goldman <rpg@rex.cs.tulane.edu>
>Message-Id: <9105032022.AA21658@rex.cs.tulane.edu>
>In-Reply-To: aas@brolga.cc.uq.oz.au's message of 2 May 91 11:34:25 GMT

... the best presentation of the Viterbi algorithm I've found for teaching
students is the survey article by Forney, as follows:

@ARTICLE{Forney,
	AUTHOR = {G. David Forney},
	TITLE = {The Viterbi Algorithm},
	JOURNAL = {Proceedings of the IEEE},
	YEAR = {1973},
	VOLUME = {61},
	NUMBER = {3},
	PAGES = {268--278}
}

I have not found a comparably detailed (i.e., with pseudo-Algol)
treatment of the forward-backward algorithm for HMMs.
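
[Ed. note: lacking a pseudo-Algol treatment, the forward-backward
recursions are short enough to sketch directly.  The following minimal
Python illustration uses an invented toy model, not parameters from any
reference above; it computes smoothed state posteriors, and Baum-Welch
re-estimation is built from the same alpha/beta quantities.]

```python
# Forward-backward smoothing for a discrete HMM: an illustrative sketch.
# All parameters are made-up toy values, not from any cited reference.

def forward_backward(obs, states, start_p, trans_p, emit_p):
    """Return P(state at time t | all observations) for each t."""
    n = len(obs)
    # Forward pass: alpha[t][s] = P(obs[0..t], state_t = s)
    alpha = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    for t in range(1, n):
        alpha.append({s: emit_p[s][obs[t]] *
                         sum(alpha[t - 1][r] * trans_p[r][s]
                             for r in states)
                      for s in states})
    # Backward pass: beta[t][s] = P(obs[t+1..n-1] | state_t = s)
    beta = [dict.fromkeys(states, 1.0) for _ in range(n)]
    for t in range(n - 2, -1, -1):
        beta[t] = {s: sum(trans_p[s][r] * emit_p[r][obs[t + 1]] *
                          beta[t + 1][r] for r in states)
                   for s in states}
    # Combine and normalise by the total evidence P(obs)
    evidence = sum(alpha[n - 1][s] for s in states)
    return [{s: alpha[t][s] * beta[t][s] / evidence for s in states}
            for t in range(n)]

# Toy two-state model (invented for illustration)
states = ('Rain', 'Sun')
start_p = {'Rain': 0.6, 'Sun': 0.4}
trans_p = {'Rain': {'Rain': 0.7, 'Sun': 0.3},
           'Sun':  {'Rain': 0.4, 'Sun': 0.6}}
emit_p = {'Rain': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
          'Sun':  {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}}
post = forward_backward(('walk', 'shop', 'clean'),
                        states, start_p, trans_p, emit_p)
```

Note that a practical implementation would rescale alpha and beta at
each step (or work in log space) to avoid underflow on long sequences.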

+-+-+-+-+-+-+-+