The final tutorial session has been rescheduled for Wednesday 2 June at the same time in the same place, 11:00-12:30 in 40b-AB-05 (CVSSP seminar room).
The purpose of this series of tutorials is to introduce the principles of hidden Markov models in a way that can be applied to many kinds of pattern recognition problem. It is aimed at PhD students, research fellows and academics in the CVSSP group, but other researchers are welcome to attend. The three main elements are likelihood calculation and decoding, parameter re-estimation, and output probability distributions.
A rough outline of the content is given below, and further details will be added as the course proceeds.
#  Date                Topic                                        Slides
1  11am Wed 21 April   Introduction to Markov models and HMMs       hmm_tut1.pdf
2  11am Wed 28 April   Likelihood calculation and Viterbi decoding  hmm_tut2.pdf
3  11am Wed 5 May      Maximum likelihood re-estimation             hmm_tut3.pdf
4  11am Wed 12 May     Output probability distribution functions    hmm_tut4.pdf
5  11am Wed 2 June     Extensions and applications                  hmm_tut5.pdf
None needed, unless you've missed the earlier sessions (in which case you should read through the slides). If you are interested in doing some background reading, I would recommend Rabiner's tutorial article:
L.R. Rabiner. "A tutorial on hidden Markov models and selected applications in speech recognition". Proc. IEEE, Vol. 77, No. 2, pp. 257-286, Feb. 1989.
The first session will provide a basic probabilistic framework and mathematical notation for describing finite-state models, which will introduce the Markov model and the hidden Markov model.
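As a small illustration of this framework, the sketch below defines a Markov model by its initial-state and transition probabilities and computes the probability of a given state sequence. The two-state model and its numbers are invented for the example, not taken from the tutorial slides.

```python
import numpy as np

# Hypothetical 2-state Markov model (states 0 and 1); the probabilities
# are illustrative only.
pi = np.array([0.6, 0.4])          # initial state probabilities
A = np.array([[0.7, 0.3],          # A[i, j] = P(next state j | current state i)
              [0.4, 0.6]])

def sequence_probability(states):
    """Probability of a given state sequence under the Markov model."""
    p = pi[states[0]]
    for s, t in zip(states, states[1:]):
        p *= A[s, t]
    return p

print(sequence_probability([0, 0, 1]))  # 0.6 * 0.7 * 0.3 = 0.126
```

In the hidden Markov model, the state sequence is no longer observed directly, which is what makes the algorithms of the later sessions necessary.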
Using the HMM framework, we will look at how to calculate the probability of a given path through the model's states, and hence find the optimal path to explain a set of observations. I will show how the Viterbi algorithm computes a good approximation to the best path efficiently, which can thus be used for decoding.
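A minimal sketch of Viterbi decoding for a discrete-output HMM is given below, working in the log domain to avoid underflow. The parameters are made up for the example; only the algorithm itself is standard.

```python
import numpy as np

# Illustrative 2-state discrete HMM (parameters invented for this sketch).
pi = np.array([0.6, 0.4])          # initial state probabilities
A = np.array([[0.7, 0.3],          # transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],          # B[i, k] = P(observation k | state i)
              [0.2, 0.8]])

def viterbi(obs):
    """Most likely state path for an observation sequence (log domain)."""
    T, N = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])   # delta_1(i)
    back = np.zeros((T, N), dtype=int)         # backpointers
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)     # scores[i, j]: best via state i
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]                # trace back from the best end state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

print(viterbi([0, 0, 1]))  # → [0, 0, 1]
```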
The third key element shows us how to set the model's parameters. From some initial values, the parameters are updated by an Expectation Maximisation process, called Baum-Welch re-estimation. This will be presented first for discrete observations.
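The update can be sketched as below: one Baum-Welch iteration for a discrete-output HMM, with the forward-backward pass as the E-step and the re-estimation formulae as the M-step. The toy parameters are invented; for real data the recursions would also need scaling, which is omitted here for brevity.

```python
import numpy as np

# Toy 2-state discrete HMM (parameters invented for this sketch).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def baum_welch_step(obs, pi, A, B):
    """One EM (Baum-Welch) re-estimation step from a single sequence."""
    T, N = len(obs), len(pi)
    # E-step: forward and backward probabilities (unscaled, fine for short T)
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t-1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t+1]] * beta[t+1])
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)        # state occupancies
    # xi[t, i, j] = P(state i at t, state j at t+1 | obs)
    xi = alpha[:-1, :, None] * A * (B[:, obs[1:]].T * beta[1:])[:, None, :]
    xi /= xi.sum(axis=(1, 2), keepdims=True)
    # M-step: re-estimate parameters from expected counts
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.stack([gamma[np.array(obs) == k].sum(axis=0)
                      for k in range(B.shape[1])], axis=1)
    new_B /= gamma.sum(axis=0)[:, None]
    return new_pi, new_A, new_B

new_pi, new_A, new_B = baum_welch_step([0, 0, 1], pi, A, B)
```

Iterating this step is guaranteed not to decrease the likelihood of the training data.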
The fourth session will present the corresponding results for continuous observations, beginning with a simple Gaussian probability distribution and building up towards multivariate Gaussian mixtures.
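As a taste of the continuous case, the sketch below evaluates the output probability of an observation vector under a multivariate Gaussian mixture, as used for a continuous-density HMM state. The weights, means and covariances are invented for the example.

```python
import numpy as np

# Toy 2-component, 2-dimensional Gaussian mixture (parameters illustrative).
weights = np.array([0.3, 0.7])                  # mixture weights, sum to 1
means = np.array([[0.0, 0.0], [2.0, 1.0]])      # one mean vector per component
covs = np.array([np.eye(2), 0.5 * np.eye(2)])   # one covariance per component

def gmm_pdf(x, weights, means, covs):
    """p(x) = sum_m w_m N(x; mu_m, Sigma_m)."""
    p = 0.0
    d = len(x)
    for w, mu, S in zip(weights, means, covs):
        diff = x - mu
        norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(S))
        p += w * norm * np.exp(-0.5 * diff @ np.linalg.solve(S, diff))
    return p

print(gmm_pdf(np.array([1.0, 0.5]), weights, means, covs))
```

A single Gaussian is just the one-component special case, so this form subsumes the simpler distribution the session starts from.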
Having presented one particular technique, the hidden Markov model, we will discuss a number of extensions that have been developed to overcome one or another of its limitations. Although I'll use examples throughout the tutorials to illustrate how the algorithms work, the final session will show a couple of longer worked examples. Maybe you'll have tried out a little example yourself by then that you could share with the group!
Matthew has recommended a book which has a worked example of an HMM in chapter 20:
R. Callan. Artificial Intelligence. Basingstoke: Palgrave Macmillan, 2003. [ISBN 0333801369] The library has four copies, held at shelfmark 006.3/CAL.
If you have any other specific questions outside of the tutorials, you can email me and I'll do my best to respond promptly. I'd also be grateful for any handy links that I could add to this page, so send them to me.
© 2004, maintained by Philip Jackson, last updated on 2 June 2004.