Sparse Bayesian Learning and the Relevance Vector Machine

Michael E. Tipping

Citations: 1085
This paper introduces a general Bayesian framework for obtaining sparse solutions to regression and classification tasks utilising models linear in the parameters. Although this framework is fully general, we illustrate our approach with a particular specialisation that we denote the 'relevance vector machine' (RVM), a model of identical functional form to the popular and state-of-the-art 'support vector machine' (SVM). We demonstrate that by exploiting a probabilistic Bayesian learning framework, we can derive accurate prediction models which typically utilise dramatically fewer basis functions than a comparable SVM while offering a number of additional advantages. These include the benefits of probabilistic predictions, automatic estimation of 'nuisance' parameters, and the facility to utilise arbitrary basis functions (e.g. non-'Mercer' kernels). We detail the Bayesian framework and associated learning algorithm for the RVM, and give some illustrative examples of its application along with some comparative benchmarks. We offer some explanation for the exceptional degree of sparsity obtained, and discuss and demonstrate some of the advantageous features, and potential extensions, of Bayesian relevance learning.
Journal: Journal of Machine Learning Research - JMLR , vol. 1, pp. 211-244, 2001
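As a rough illustration of the mechanics the abstract describes, the sketch below implements the core type-II maximum-likelihood loop of RVM regression: compute the weight posterior under the current hyperparameters, re-estimate the per-weight precisions with the fixed-point updates, and prune basis functions whose precision diverges (their weights are pinned to zero, which is the source of the sparsity). This is a simplified reading of the paper's algorithm, not Tipping's implementation; the function names, RBF design matrix, and numerical constants are illustrative.

```python
import numpy as np

def rbf_design(X, centers, width=1.0):
    """Gaussian basis functions centred on `centers`, plus a bias column."""
    d2 = (X[:, None] - centers[None, :]) ** 2
    return np.hstack([np.ones((len(X), 1)), np.exp(-d2 / (2 * width ** 2))])

def rvm_regression(Phi, t, n_iter=200, prune_at=1e6):
    """Sparse Bayesian linear regression via type-II maximum likelihood (ARD).

    Returns the posterior mean weights of the surviving basis functions,
    their column indices in the original design matrix, and the estimated
    noise precision.
    """
    N, M = Phi.shape
    alpha = np.ones(M)      # one prior precision per weight (the ARD prior)
    beta = 1.0              # noise precision
    keep = np.arange(M)     # indices of still-active basis functions
    for _ in range(n_iter):
        # Posterior covariance and mean of the active weights
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
        mu = beta * Sigma @ Phi.T @ t
        # Fixed-point hyperparameter re-estimates
        gamma = 1.0 - alpha * np.diag(Sigma)         # "well-determinedness"
        alpha = gamma / (mu ** 2 + 1e-12)
        beta = (N - gamma.sum()) / (np.sum((t - Phi @ mu) ** 2) + 1e-12)
        # Prune basis functions whose precision has diverged (weight -> 0)
        mask = alpha < prune_at
        Phi, alpha, keep = Phi[:, mask], alpha[mask], keep[mask]
    # Final posterior for the surviving basis functions
    Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
    mu = beta * Sigma @ Phi.T @ t
    return mu, keep, beta
```

On a noisy sin(x)/x benchmark of the kind used in the paper, this loop typically retains only a handful of the candidate basis functions, which is the "exceptional degree of sparsity" the abstract refers to.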
The following excerpts show how other papers cite this work:
    • ...Past research by Tipping (2001) has suggested that SVM probability estimates may not be totally reliable, and that the relevance vector machine (RVM) can provide better probability estimates...

    Brian A. Johnson. High-resolution urban land-cover classification using a competitive mu...

    • ...Motivated by the lack of a formal optimization framework that combines both regularization and ESN parameter adaptation, and inspired by the recent developments of the variational Bayesian methods (Bishop, 2006; Beal, 2003) for sparse Bayesian learning (SBL) (Shutin, Buchgraber, Kulkarni, & Poor, 2011b; Seeger & Wipf, 2010; Tzikas, Likas, & Galatsanos, 2008; Tipping, 2001; Bishop & Tipping, 2000) and variational nonlinear parameter estimation (Shutin & Fleury, 2011), we propose a variational Bayesian ESN training framework...

    Dmitriy Shutin et al. Regularized Variational Bayesian Learning of Echo State Networks with ...

    • ...In the past decade, hierarchical priors have been particularly popular and have been explored and exploited in several research fields, e.g., machine learning (Figueiredo, 2003; Tipping, 2001), compressive sensing (Joseph et al., 2008), and computer experiments...

    Haisong Deng et al. A Bayesian MetaModeling Approach for Gaussian Stochastic Process Model...

    • ...Namely, we assess two different deterministic algorithms to solve the photometric stereo problem: a convex ℓ1-norm based relaxation [2], and a hierarchical Bayesian model derived from an SBL (Sparse Bayesian Learning) framework [12, 14]. SBL [12] assumes the standard Gaussian likelihood function for the first-level, diffuse errors, and then maximizes the resulting likelihood function [12]...

    Satoshi Ikehata et al. Robust Photometric Stereo using Sparse Regression

    • ...The main advantage of RVM (Tipping 2001) as compared with an equivalent support vector machine (SVM) (Vapnik 1995) is that RVM achieves a significant improvement in sparsity; RVM also offers a number of additional advantages which will be described in §3...

    Leke Lin et al. Profiling tropospheric refractivity in real time, based on a relevance...
