Keywords (7): Normal Distribution, Parameter Estimation, Recurrence Relation, Viterbi Decoder, Baum-Welch, Hidden Markov Model, Viterbi Algorithm
Implementing EM and Viterbi algorithms for Hidden Markov Model in linear memory (Citations: 8)
Alexander G. Churbanov, Stephen Winters-Hilt
Background: The Baum-Welch learning procedure for Hidden Markov Models (HMMs) provides a powerful tool for tailoring HMM topologies to data for use in knowledge discovery and clustering. A linear memory procedure recently proposed by Miklós, I. and Meyer, I.M. describes a memory-sparse version of the Baum-Welch algorithm with modifications to the original probabilistic table topologies to make memory use independent of sequence length (and linearly dependent on state number). The original description of the technique has some errors that we amend. We then compare the corrected implementation on a variety of data sets with conventional and checkpointing implementations.
Results: We provide a correct recurrence relation for the emission parameter estimate and extend it to parameter estimates of the Normal distribution. To accelerate estimation of the prior state probabilities, and decrease memory use, we reverse the originally proposed forward sweep. We describe different scaling strategies necessary in all real implementations of the algorithm to prevent underflow. In this paper we also describe our approach to a linear memory implementation of the Viterbi decoding algorithm (with linearity in the sequence length, while memory use is approximately independent of state number). We demonstrate the use of the linear memory implementation on an extended Duration Hidden Markov Model (DHMM) and on an HMM with a spike detection topology. Comparing the various implementations of the Baum-Welch procedure we find that the checkpointing algorithm produces the best overall tradeoff between memory use and speed. In cases where sequence length is very large (for Baum-Welch), or state number is very large (for Viterbi), the linear memory methods outlined may offer some utility.
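The scaling strategies mentioned in the abstract are standard practice in forward-backward implementations. As an illustrative sketch (not the authors' code), a per-step rescaled forward pass for a discrete-emission HMM can be written as:

```python
import numpy as np

def forward_scaled(pi, A, B, obs):
    """Scaled forward pass for a discrete-emission HMM.

    pi:  (N,) initial state probabilities
    A:   (N, N) transition matrix, A[i, j] = P(state j | state i)
    B:   (N, M) emission matrix, B[i, k] = P(symbol k | state i)
    obs: sequence of observation symbol indices

    Rescaling alpha to sum to 1 at every step keeps the recursion in a
    numerically safe range; the log-likelihood is recovered by summing
    the logs of the scaling factors, so nothing ever underflows.
    """
    T = len(obs)
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()               # scaling factor for t = 0
    alpha = alpha / c
    log_lik = np.log(c)
    for t in range(1, T):
        alpha = (alpha @ A) * B[:, obs[t]]   # propagate, then emit
        c = alpha.sum()                      # scaling factor for step t
        alpha = alpha / c
        log_lik += np.log(c)
    return log_lik, alpha
```

Because only the log of each scaling factor is accumulated, sequences of arbitrary length can be processed without the probabilities themselves shrinking below machine precision.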
Journal: BMC Bioinformatics, vol. 9, no. 1, p. 224, 2008
DOI: 10.1186/1471-2105-9-224
Citation Context (4)
...[4] C. C. Moallemi and B. V. Roy, "Consensus propagation," IEEE Trans...
...For the experiments described here, we consider scenarios that occur in nanopore detector channel current analysis [3], [4], [14], [15]...
Stephen Winters-Hilt, et al.
A hidden Markov model with binned duration algorithm
...As Jensen [24] and Khreich et al. [25] describe, computationally more efficient algorithms for Baum-Welch training which render the memory requirement independent of the sequence length have been proposed, first in the communication field by [26-28] and later, independently, in bioinformatics by Miklós and Meyer [29], see also [30]...
...This algorithm was employed by Hobolth and Jensen [31] for comparative gene prediction and has also been implemented, albeit in a modified version, by Churbanov and Winters-Hilt [30] who also compare it to other implementations of Viterbi and Baum-Welch training including checkpointing implementations...
Tin Y Lam, et al.
Efficient algorithms for training the parameters of hidden Markov mode...
...In recent work [16] we describe the performance of the following HMM EM algorithms (where we studied the last on the list):...
...Linear memory EM [16,21] takes only O(N(Q + E D)) memory and O(T N Q_max (Q + E D)) time...
...Similar improvements are also described for the HMM Viterbi implementation in linear memory [16]...
...Checkpointing algorithm keeps the memory profile low both on server and client sides without compromising the running time [16]...
...The spikes model, as described in [16], could possibly be used to increase prediction accuracy...
Alexander G. Churbanov, et al.
Clustering ionic flow blockade toggles with a Mixture of HMMs
...The Machine Learning (ML) algorithms are amenable to distributed computational solutions (for the HMMs, in particular [11]), which permits a computational speedup that ensures real-time operational feedback on the nanopore detector applications studied...
A. Murat Eren, et al.
Pattern Recognition-Informed Feedback for Nanopore Detector Cheminform...
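For contrast with the linear-memory Viterbi implementation discussed in these citation contexts, a minimal textbook log-space Viterbi decoder (an illustrative sketch, not the paper's algorithm) retains the full T x N backpointer table that the linear-memory variant is designed to avoid:

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Conventional log-space Viterbi decoder for a discrete HMM.

    Works in log space so long sequences do not underflow, but stores a
    T x N backpointer table for the traceback, which is the memory cost
    the linear-memory formulation removes.
    """
    T, N = len(obs), len(pi)
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]        # best log-prob ending in each state
    back = np.zeros((T, N), dtype=int)          # backpointers (full table)
    for t in range(1, T):
        scores = delta[:, None] + logA          # scores[i, j]: best path ... -> i -> j
        back[t] = scores.argmax(axis=0)         # best predecessor for each state j
        delta = scores.max(axis=0) + logB[:, obs[t]]
    # trace the most probable state path backwards through the table
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

The backpointer table makes memory grow as O(T N); the checkpointing and linear-memory schemes compared in the paper trade extra recomputation for a much smaller footprint.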
References (22)
Prediction of Complete Gene Structures in Human Genomic DNA (Citations: 1644)
Christopher B. Burge, Samuel Karlin
Published in 1997.
Reduced space hidden Markov model training (Citations: 23)
C. Tarnas, Richard Hughey
Journal: Bioinformatics, vol. 14, no. 5, pp. 401-406, 1998
A Parallel Implementation of a Hidden Markov Model with Duration Modeling for Speech Recognition (Citations: 11)
C. D. Mitchell, Randall A. Helzerman, Leah H. Jamieson, Mary P. Harper
Conference: Symposium on Parallel and Distributed Processing (SPDP), pp. 298-306, 1993
A linear space algorithm for computing maximal common subsequences (Citations: 494)
Daniel S. Hirschberg
Journal: Communications of the ACM, vol. 18, no. 6, pp. 341-343, 1975
Optimizing reduced-space sequence analysis (Citations: 12)
Raymond Wheeler, Richard Hughey
Journal: Bioinformatics, vol. 16, no. 12, pp. 1082-1090, 2000
Citations (8)
On the memory complexity of the forward-backward algorithm (Citations: 4)
Wael Khreich, Eric Granger, Ali Miri, Robert Sabourin
Journal: Pattern Recognition Letters, vol. 31, no. 2, pp. 91-99, 2010
A hidden Markov model with binned duration algorithm
Stephen Winters-Hilt, Zuliang Jiang
Journal: IEEE Transactions on Signal Processing, vol. 58, no. 2, pp. 948-952, 2010
Efficient algorithms for training the parameters of hidden Markov models using stochastic expectation maximization (EM) training and Viterbi training
Tin Y Lam, Irmtraud M Meyer
Journal: Algorithms for Molecular Biology, vol. 5, no. 1, p. 38, 2010
Nanopore analytics: sensing of single molecules (Citations: 22)
Stefan Howorka, Zuzanna Siwy
Journal: Chemical Society Reviews, vol. 38, no. 8, 2009
HMMCONVERTER 1.0: a toolbox for hidden Markov models (Citations: 1)
Tin Yin Lam, Irmtraud M. Meyer
Journal: Nucleic Acids Research, vol. 37, no. 21, p. e139, 2009