Keywords: Covering Number, Gaussian Noise, Learning Algorithm, Learning Rate, Learning Theory, Least Square, Reproducing Kernel Hilbert Space, Sampling Error
Optimal learning rates for least squares regularized regression with unbounded sampling (Citations: 1)
Cheng Wang, Ding-Xuan Zhou
A standard assumption in the theoretical study of learning algorithms for regression is uniform boundedness of the output sample values. This excludes the common case of Gaussian noise. In this paper we investigate the learning algorithm for regression generated by the least squares regularization scheme in reproducing kernel Hilbert spaces, without the assumption of uniform boundedness for sampling. By imposing some incremental conditions on moments of the output variable, we derive learning rates in terms of the regularity of the regression function and the capacity of the hypothesis space. The novelty of our analysis is a new covering number argument for bounding the sample error.
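The least squares regularization scheme in a reproducing kernel Hilbert space H_K referred to in the abstract is the minimizer f_{z,λ} = argmin_{f ∈ H_K} { (1/n) Σ_i (f(x_i) − y_i)² + λ‖f‖²_K }, which by the representer theorem reduces to a linear solve, i.e. kernel ridge regression. The following sketch is not the authors' code; the Gaussian kernel choice, bandwidth sigma, regularization parameter lam, and the toy data are illustrative assumptions. It only shows the estimator being computed on outputs corrupted by unbounded (Gaussian) noise, the setting the paper analyzes.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=0.5):
    # Pairwise Gaussian (RBF) kernel matrix: K_ij = exp(-|x_i - y_j|^2 / (2 sigma^2)).
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def krr_fit(X, y, lam, sigma=0.5):
    # Representer theorem: f_{z,lam}(x) = sum_i c_i K(x, x_i), where the
    # coefficients solve (K + lam * n * I) c = y.
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def krr_predict(X_train, c, X_test, sigma=0.5):
    return gaussian_kernel(X_test, X_train, sigma) @ c

# Toy regression problem: outputs carry Gaussian noise, so they are unbounded.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(200)

c = krr_fit(X, y, lam=1e-3)
X_grid = np.linspace(-1, 1, 50)[:, None]
pred = krr_predict(X, c, X_grid)
```

The choice of lam governs the trade-off the paper's learning rates quantify: smaller lam reduces the approximation error but inflates the sample error that the covering number argument controls.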
Journal: Journal of Complexity, vol. 27, no. 1, pp. 55–67, 2011. DOI: 10.1016/j.jco.2010.10.002
View Publication: www.sciencedirect.com, www.informatik.uni-trier.de, dx.doi.org
Citation Context (1)

"...This standard assumption was abandoned in [3,7,14,17] and error bounds for ‖f_{z,λ} − f_ρ‖_{L²_{ρX}} under some capacity or entropy conditions stated in terms of K and ρX were provided..."

"...The uniform boundedness assumption was replaced in [7,17] by the following moment hypothesis..."

Shao-Gao Lv, et al. Integral Operator Approach to Learning Theory with Unbounded Sampling
References (15)

Probability Inequalities for the Sum of Independent Random Variables (Citations: 244)
George Bennett
Journal: Journal of the American Statistical Association, vol. 57, no. 297, pp. 33–45, 1962

Support Vector Machine Soft Margin Classifiers: Error Analysis (Citations: 61)
Di-Rong Chen, Qiang Wu, Yiming Ying, Ding-Xuan Zhou
Journal: Journal of Machine Learning Research, vol. 5, pp. 1143–1175, 2004

Model Selection for Regularized Least-Squares Algorithm in Learning Theory (Citations: 63)
Ernesto De Vito, Andrea Caponnetto, Lorenzo Rosasco
Journal: Foundations of Computational Mathematics, vol. 5, no. 1, pp. 59–85, 2005

Regularization in kernel learning (Citations: 11)
Shahar Mendelson, Joseph Neeman
Journal: Annals of Statistics, vol. 38, no. 1, pp. 526–565, 2010

Learning Theory Estimates via Integral Operators and Their Approximations (Citations: 88)
Steve Smale, Ding-Xuan Zhou
Journal: Constructive Approximation, vol. 26, no. 2, pp. 153–172, 2007
Citations (1)

Integral Operator Approach to Learning Theory with Unbounded Sampling
Shao-Gao Lv, Yun-Long Feng
Journal: Complex Analysis and Operator Theory, pp. 1–16