Implementing EM and Viterbi algorithms for Hidden Markov Model in linear memory

Alexander G. Churbanov, Stephen Winters-Hilt: Implementing EM and Viterbi algorithms for Hidden Markov Model in linear memory. BMC Bioinformatics. doi:10.1186/1471-2105-9-224

Citations: 8
Background: The Baum-Welch learning procedure for Hidden Markov Models (HMMs) provides a powerful tool for tailoring HMM topologies to data for use in knowledge discovery and clustering. A linear memory procedure recently proposed by Miklós, I. and Meyer, I.M. describes a memory-sparse version of the Baum-Welch algorithm with modifications to the original probabilistic table topologies to make memory use independent of sequence length (and linearly dependent on state number). The original description of the technique has some errors that we amend. We then compare the corrected implementation on a variety of data sets with conventional and checkpointing implementations.

Results: We provide a correct recurrence relation for the emission parameter estimate and extend it to parameter estimates of the Normal distribution. To accelerate estimation of the prior state probabilities, and to decrease memory use, we reverse the originally proposed forward sweep. We describe different scaling strategies necessary in all real implementations of the algorithm to prevent underflow. In this paper we also describe our approach to a linear memory implementation of the Viterbi decoding algorithm (with linearity in the sequence length, while memory use is approximately independent of state number). We demonstrate the use of the linear memory implementation on an extended Duration Hidden Markov Model (DHMM) and on an HMM with a spike detection topology. Comparing the various implementations of the Baum-Welch procedure, we find that the checkpointing algorithm produces the best overall tradeoff between memory use and speed. In cases where sequence length is very large (for Baum-Welch), or state number is very large (for Viterbi), the linear memory methods outlined may offer some utility.
Journal: BMC Bioinformatics, vol. 9, no. 1, article 224, 2008
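
Underflow is the practical obstacle the abstract alludes to: raw forward probabilities shrink geometrically with sequence length and vanish below machine precision within a few hundred steps. As an illustration of the most common remedy (not the paper's own code; the function and variable names are ours), here is a minimal NumPy sketch that normalizes each forward column and accumulates the logs of the scaling factors:

```python
import numpy as np

def scaled_forward(pi, A, B, obs):
    """Forward pass with per-step normalization to prevent underflow.

    pi  : (N,)   initial state probabilities
    A   : (N, N) transition matrix, A[i, j] = P(j | i)
    B   : (N, M) emission matrix,  B[i, k] = P(symbol k | state i)
    obs : (T,)   observation symbol indices

    Returns the scaled forward table (T, N) and log P(obs).
    """
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    log_prob = 0.0

    alpha[0] = pi * B[:, obs[0]]
    c = alpha[0].sum()          # scaling factor for step 0
    alpha[0] /= c
    log_prob += np.log(c)

    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c = alpha[t].sum()      # renormalize each column...
        alpha[t] /= c
        log_prob += np.log(c)   # ...and accumulate the log-likelihood

    return alpha, log_prob
```

Rescaling by the column sum keeps every entry well inside floating-point range, while the accumulated log-sum recovers log P(obs) exactly; the paper itself describes how such scaling has to be adapted to the modified linear memory tables.
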
Citation contexts:
    • ...[4] C. C. Moallemi and B. V. Roy, “Consensus propagation,” IEEE Trans...
    • ...For the experiments described here, we consider scenarios that occur in nanopore detector channel current analysis [3], [4], [14], [15]...

    Stephen Winters-Hilt et al. A hidden Markov model with binned duration algorithm

    • ...As Jensen [24] and Khreich et al. [25] describe, computationally more efficient algorithms for Baum-Welch training which render the memory requirement independent of the sequence length have been proposed, first in the communication field by [26-28] and later, independently, in bioinformatics by Miklós and Meyer [29], see also [30]...
    • ...This algorithm was employed by Hobolth and Jensen [31] for comparative gene prediction and has also been implemented, albeit in a modified version, by Churbanov and Winters-Hilt [30] who also compare it to other implementations of Viterbi and Baum-Welch training including checkpointing implementations...

    Tin Y. Lam et al. Efficient algorithms for training the parameters of hidden Markov mode...
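
The checkpointing implementations mentioned in the context above trade bounded recomputation for memory. A hedged sketch of the idea applied to Viterbi decoding (an illustration of the general technique, with names of our own choosing, not the authors' implementation): the forward sweep stores a score column only every `block` steps, and traceback regenerates back-pointers one block at a time, so memory falls from O(T·N) to roughly O(√T·N) when block ≈ √T.

```python
import numpy as np

def viterbi_checkpointed(log_pi, log_A, log_B, obs, block=None):
    """Viterbi decoding that stores only every `block`-th score column.

    log_pi : (N,)   log initial probabilities
    log_A  : (N, N) log transitions, log_A[i, j] = log P(j | i)
    log_B  : (N, M) log emissions
    obs    : (T,)   observation symbol indices

    The full back-pointer table would need O(T*N) memory; checkpointing
    the score column every `block` steps and recomputing back-pointers
    per block needs O((T/block + block) * N) instead.
    """
    T, N = len(obs), len(log_pi)
    if block is None:
        block = max(1, int(np.sqrt(T)))

    # Forward sweep: keep only the checkpointed score columns.
    checkpoints = {0: log_pi + log_B[:, obs[0]]}
    v = checkpoints[0]
    for t in range(1, T):
        scores = v[:, None] + log_A              # scores[i, j]: i at t-1 -> j at t
        v = scores.max(axis=0) + log_B[:, obs[t]]
        if t % block == 0:
            checkpoints[t] = v

    # Traceback: recompute back-pointers one block at a time.
    path = np.empty(T, dtype=int)
    path[T - 1] = int(v.argmax())
    t_end = T - 1
    while t_end > 0:
        t_start = (t_end - 1) // block * block   # nearest checkpoint before t_end
        v_loc = checkpoints[t_start]
        bp = np.empty((t_end - t_start, N), dtype=int)
        for s, t in enumerate(range(t_start + 1, t_end + 1)):
            scores = v_loc[:, None] + log_A
            bp[s] = scores.argmax(axis=0)        # best predecessor of each state
            v_loc = scores.max(axis=0) + log_B[:, obs[t]]
        for s in range(t_end - t_start - 1, -1, -1):
            path[t_start + s] = bp[s, path[t_start + s + 1]]
        t_end = t_start
    return path
```

Usage is the same as plain Viterbi, e.g. path = viterbi_checkpointed(np.log(pi), np.log(A), np.log(B), obs); the output path is identical, only the memory/recomputation tradeoff changes.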

    • ...In recent work [16] we describe the performance of the following HMM EM algorithms (where we studied the last on the list):...
    • ... Linear memory EM [16,21] takes only O(N(Q + E D)) memory and O(T NQ max (Q + E D)) time...
    • ...Similar improvements are also described for the HMM Viterbi implementation in linear memory [16]...
    • ...- Checkpointing algorithm keeps the memory profile low both on server and client sides without compromising the running time [16]...
    • ...The spikes model, as described in [16], could possibly be used to increase prediction accuracy...

    Alexander G. Churbanov et al. Clustering ionic flow blockade toggles with a Mixture of HMMs
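
The linear memory EM cited in the contexts above avoids storing forward (or backward) tables for the whole sequence by carrying expected sufficient statistics along a single sweep. A minimal, unscaled sketch of that recursion for expected transition counts (illustrative only, with names of our own invention; the paper's corrected version also handles emission counts, Normal parameter estimates, and the scaling required for long sequences):

```python
import numpy as np

def linear_memory_transition_counts(pi, A, B, obs):
    """Expected transition counts with memory independent of sequence length.

    Alongside the forward column F[j] = P(o_1..o_t, state_t = j), carry
    G[j, k, l] = sum over paths ending in j of P(path, o_1..o_t) times the
    number of k->l transitions on that path. Only the current column is
    kept, so memory is O(N^3) regardless of T. Unscaled, hence suitable
    only for short sequences.
    """
    N = len(pi)
    F = pi * B[:, obs[0]]                  # forward column at t = 0
    G = np.zeros((N, N, N))                # no transitions taken yet

    for o in obs[1:]:
        F_new = (F @ A) * B[:, o]
        G_new = np.zeros_like(G)
        for j in range(N):
            # propagate previously accumulated counts into state j ...
            G_new[j] = B[j, o] * np.einsum('ikl,i->kl', G, A[:, j])
            # ... and add the newly taken (k -> j) transition for every k
            G_new[j, :, j] += B[j, o] * F * A[:, j]
        F, G = F_new, G_new

    total = F.sum()                        # P(obs)
    return G.sum(axis=0) / total           # E[# of k->l transitions | obs]
```

Nothing proportional to T is ever stored, which is exactly the trade the abstract describes: per-step work grows with state number, but memory stays flat however long the sequence runs.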

    • ...The Machine Learning (ML) algorithms are amenable to distributed computational solutions (for the HMMs, in particular [11]), which permits a computational speedup that ensures real-time operational feedback on the nanopore detector applications studied...

    A. Murat Eren et al. Pattern Recognition-Informed Feedback for Nanopore Detector Cheminform...
