Online incremental EM training of GMM and its application to speech processing applications

Yongxin Zhang, Lixian Chen, Xin R
DOI: 10.1109/ICOSP.2010.5657133

The traditional Expectation-Maximization (EM) training of the Gaussian Mixture Model (GMM) is essentially a batch-mode procedure that requires a set of data samples of sufficient size to update the model parameters. This severely limits the deployment and adaptation of GMMs in many real-time online systems, where newly observed data samples are expected to be incorporated as soon as they become available, which forces the model to be retrained. This paper presents a new online incremental EM training procedure for GMMs, which performs the EM updates incrementally and can therefore adapt a GMM online, sample by sample. The proposed method extends two EM algorithms for GMMs: the traditional EM and Split-and-Merge EM. Experiments on both synthetic data and a speech processing task demonstrate the advantages and efficiency of the new method.
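The abstract describes the idea at a high level only, and the paper's exact update equations are not reproduced here. The following is a minimal sketch of a generic sample-by-sample (stepwise) EM update for a diagonal-covariance GMM, illustrating how a single observation can update the mixture weights, means, and variances without a batch pass. The class name OnlineGMM, the step-size schedule gamma = 1/(t+1), and the diagonal-covariance restriction are illustrative assumptions, not details from the paper.

import numpy as np

class OnlineGMM:
    """Sample-by-sample (stepwise) EM for a diagonal-covariance GMM.

    A generic illustration of online incremental EM, not the paper's
    exact procedure; the step-size schedule and the diagonal-covariance
    restriction are assumptions made for brevity.
    """

    def __init__(self, means, variances, weights):
        self.means = np.asarray(means, dtype=float)      # shape (K, D)
        self.vars = np.asarray(variances, dtype=float)   # shape (K, D)
        self.weights = np.asarray(weights, dtype=float)  # shape (K,)
        self.t = 0                                       # samples seen so far

    def _responsibilities(self, x):
        # E-step for one sample: posterior probability of each component.
        diff = x - self.means
        log_p = (np.log(self.weights)
                 - 0.5 * np.sum(diff**2 / self.vars
                                + np.log(2.0 * np.pi * self.vars), axis=1))
        log_p -= log_p.max()                             # numerical stability
        r = np.exp(log_p)
        return r / r.sum()

    def update(self, x):
        # Incremental M-step: blend the sample's sufficient statistics into
        # the current parameters with a decaying step size.
        x = np.asarray(x, dtype=float)
        self.t += 1
        gamma = 1.0 / (self.t + 1)                       # assumed schedule
        r = self._responsibilities(x)

        # Mixture weights stay normalized because r sums to one.
        self.weights = (1.0 - gamma) * self.weights + gamma * r
        for k in range(len(self.weights)):
            g = gamma * r[k] / self.weights[k]
            delta_old = x - self.means[k]
            self.means[k] += g * delta_old
            delta_new = x - self.means[k]
            # Welford-style online variance update, floored to avoid collapse.
            self.vars[k] = np.maximum(
                self.vars[k] + g * (delta_old * delta_new - self.vars[k]),
                1e-6)

# Usage: adapt a two-component 1-D mixture one sample at a time.
rng = np.random.default_rng(0)
gmm = OnlineGMM(means=[[0.0], [4.0]], variances=[[1.0], [1.0]],
                weights=[0.5, 0.5])
for _ in range(2000):
    x = rng.normal(0.0, 1.0, 1) if rng.random() < 0.5 else rng.normal(5.0, 1.0, 1)
    gmm.update(x)

The decaying step size plays the role of a forgetting factor in stochastic-approximation EM: a slower decay tracks non-stationary data more aggressively, while a faster decay weights earlier samples more heavily.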