Learn More about Hidden Markov Models


Hidden Markov Models (HMMs) have become a standard tool in many different fields of science and beyond. Specific applications include algorithms for recognizing speech and handwriting (one of the earliest uses), identification of transcription factor binding sites in genetics, weather forecasting for predicting wind turbine output, modelling of complex disease dynamics, prediction of financial time series, and exploration of animal movement. According to the Scopus database, the number of papers listing “Hidden Markov Model” as a keyword jumped from 961 in 1996 to over 1300 in 1997 and has remained above this level ever since. In 2019, the 1685 articles listing “Hidden Markov Model” as a keyword included 1061 in computer science, 699 in engineering, 371 in mathematics, 183 in Biochemistry, Genetics, and Molecular Biology, 172 in Medicine, 156 in Physics and Astronomy, and many more across the social sciences, materials science, energy, agricultural and biological sciences, neuroscience, and environmental science.

The structure of the hidden Markov model (HMM) is simple but powerful. The basic assumption is that the system of interest follows a Markov process whose transition probability matrix is unknown and whose states cannot be observed directly. Instead, at each time point we observe one or more random variables whose distribution depends on the current state. The goal is then to make inferences about the hidden process based on what we observe.
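The generative structure described above can be sketched in a few lines of Python. This is a minimal illustration, not a definitive implementation: the two-state transition matrix and the Gaussian emission parameters below are illustrative assumptions, not values taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state HMM (assumed parameters, for demonstration only)
A = np.array([[0.9, 0.1],       # transition probabilities of the hidden Markov chain
              [0.2, 0.8]])
means = np.array([0.0, 3.0])    # emission distribution depends on the current state
sds = np.array([1.0, 1.0])

T = 200
states = np.empty(T, dtype=int)

# Simulate the hidden Markov chain, then the observations given the states
states[0] = 0
for t in range(1, T):
    states[t] = rng.choice(2, p=A[states[t - 1]])
obs = rng.normal(means[states], sds[states])
```

In practice only `obs` would be available to the analyst; `states` is exactly the sequence the inference methods below try to recover.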

One of the key advantages of HMMs is the efficiency with which parameter estimates can be computed. For any set of parameters, the likelihood of the observations, which involves a sum over all possible sequences of hidden states, can be computed efficiently through the forward-backward algorithm. These quantities can then be used to maximize the likelihood through the Baum-Welch algorithm, a specific implementation of the EM algorithm for fitting HMMs, or to conduct Bayesian inference through Markov chain Monte Carlo sampling or approximation of the posterior distribution.
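The efficiency claim can be made concrete with the forward recursion, which computes the likelihood in O(TK²) time for K states rather than summing over all Kᵀ state paths. The sketch below assumes Gaussian emissions (an illustrative choice, not something fixed by the text) and uses the standard scaling trick to avoid numerical underflow.

```python
import numpy as np

def forward_loglik(obs, A, init, means, sds):
    """Scaled forward recursion: log-likelihood of `obs` under a Gaussian-emission HMM.

    obs   : (T,) observed sequence
    A     : (K, K) transition probability matrix
    init  : (K,) initial state distribution
    means, sds : (K,) emission parameters for each hidden state
    """
    # Emission densities dens[t, k] = p(obs[t] | state k)
    dens = np.exp(-0.5 * ((obs[:, None] - means) / sds) ** 2) / (sds * np.sqrt(2 * np.pi))

    loglik = 0.0
    alpha = init * dens[0]
    for t in range(len(obs)):
        if t > 0:
            alpha = (alpha @ A) * dens[t]   # propagate forward one step
        c = alpha.sum()                     # scaling constant
        loglik += np.log(c)                 # accumulate log-likelihood
        alpha /= c                          # rescale to prevent underflow
    return loglik
```

The same recursion, run backward as well, yields the posterior state probabilities used in the E-step of Baum-Welch.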

Many extensions of the simple HMM have also been developed to model more complex systems, including multi-level HMMs, layered HMMs, hierarchical HMMs, and hidden semi-Markov models.
