Keywords (12)
Cluster Algorithm
Cost Function
Eigenvectors
Generic Model
Image Compression
Natural Extension
Neural Gas
Principal Component
Principal Component Analysis
Self Organized Map
Topographic Map
Generative Topographic Mapping
Local matrix adaptation in topographic neural maps (Citations: 2)
Banchar Arnonkijpanich, Alexander Hasenfuss, Barbara Hammer
The self-organizing map (SOM) and neural gas (NG), and generalizations thereof such as the generative topographic map, are popular algorithms for representing data by means of prototypes arranged on a (hopefully) topology-representing map. Most standard methods rely on the Euclidean metric, so the resulting clusters tend to be isotropic and cannot account for local distortions or correlations of the data. For this reason, several proposals in the literature extend prototype-based clustering towards more general models which, for example, incorporate local principal directions into the winner computation. This makes it possible to represent data faithfully with fewer prototypes. In this contribution, we establish a link between models which rely on local principal components (PCA), matrix learning, and a formal cost function of NG and SOM which allows convergence of the algorithm to be shown. For this purpose, we consider an extension of prototype-based clustering algorithms such as NG and SOM towards a more general metric, given by a full adaptive matrix, such that ellipsoidal clusters are accounted for. The approach is derived from a natural extension of the standard cost functions of NG and SOM (in the form of Heskes). We obtain batch optimization learning rules for prototype and matrix adaptation based on these generalized cost functions, and we show convergence of the algorithm. The batch optimization schemes can be interpreted as local principal component analysis (PCA), and the local eigenvectors correspond to the main axes of the ellipsoidal clusters. Thus, the approach provides a cost function associated with proposals in the literature which combine SOM or NG with local PCA models. We demonstrate the behavior of matrix NG and SOM in several benchmark examples and in an application to image compression.
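The batch scheme summarized in the abstract (rank-based neighborhood weights, weighted-mean prototype updates, and a metric matrix set to the inverse local weighted covariance normalized to determinant one) can be sketched in NumPy. This is a hypothetical simplification rather than the authors' implementation; the function name, annealing schedule, and regularization constant are illustrative assumptions:

```python
import numpy as np

def batch_matrix_ng(X, n_prototypes=5, n_iter=50, seed=0):
    """Sketch of batch neural gas with full matrix adaptation.

    Illustrative simplification: the annealing schedule and the
    regularization constant are assumptions, not the paper's values.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = X[rng.choice(n, n_prototypes, replace=False)].copy()
    # one positive-definite metric matrix per prototype, det = 1
    Lam = np.stack([np.eye(d) for _ in range(n_prototypes)])

    for t in range(n_iter):
        # exponentially annealed neighborhood range
        lam = 0.5 * n_prototypes * (0.01 / (0.5 * n_prototypes)) ** (t / max(n_iter - 1, 1))
        # generalized squared distances d_k(x) = (x - w_k)^T Lambda_k (x - w_k)
        diff = X[:, None, :] - W[None, :, :]              # shape (n, K, d)
        dist = np.einsum('nkd,kde,nke->nk', diff, Lam, diff)
        ranks = np.argsort(np.argsort(dist, axis=1), axis=1)
        h = np.exp(-ranks / lam)                          # neighborhood weights
        # batch prototype update: neighborhood-weighted means
        W = (h.T @ X) / h.sum(axis=0)[:, None]
        # matrix update: inverse of the local weighted covariance,
        # scaled so that det(Lambda_k) = 1 (ellipsoidal clusters)
        diff = X[:, None, :] - W[None, :, :]
        for k in range(n_prototypes):
            S = (h[:, k, None] * diff[:, k, :]).T @ diff[:, k, :] / h[:, k].sum()
            S += 1e-6 * np.eye(d)                         # regularize near-singular S
            Sinv = np.linalg.inv(S)
            Lam[k] = Sinv / np.linalg.det(Sinv) ** (1.0 / d)
    return W, Lam
```

The determinant constraint is what keeps the metric from collapsing: without it, shrinking Λ_k uniformly would reduce every distance, so the normalization fixes the cluster volume while the eigenvectors of Λ_k align with the local principal axes.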
Journal: Neurocomputing – IJON, vol. 74, no. 4, pp. 522–539, 2011
DOI: 10.1016/j.neucom.2010.08.016
View Publication
The following links allow you to view the full publication; they are maintained by sources not affiliated with Microsoft Academic Search:
(www.sciencedirect.com) (www.informatik.uni-trier.de) (dx.doi.org)
Citation Context (2)
...Relevance learning introduced for supervised learning vector quantization [10] and its generalization, the so-called matrix learning [26], were recently extended to unsupervised batch learning in topographic mapping [1]...
...has been suggested, with a positive definite matrix Λ to be adapted by gradient descent learning [1], and reducible to a Euclidean distance if Λ is decomposable into Λ = ΩᵀΩ [5]. For a diagonal Λ the classical relevance learning is obtained...
...Moreover, these relevance profiles look similar to the inverse variance profile of the data sets, see Fig. 2. This behavior is in agreement with the theoretical results published in [1] and [21]...
Marika Kästner, et al.
Relevance Learning in Unsupervised Vector Quantization Based on Divergences
...Alternative strategies dealing with more complex data manifolds or novel metric adaptation techniques in clustering are typically still limited, unable to employ the full potential of a complex modeling [7, 2]...
Frank-Michael Schleif, et al.
Accelerating Kernel Neural Gas
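The reduction quoted in the citation context above can be verified numerically: if Λ factors as ΩᵀΩ, the matrix-weighted distance (x − w)ᵀΛ(x − w) equals the squared Euclidean norm of Ω(x − w). A minimal check (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
Omega = rng.normal(size=(d, d))
Lam = Omega.T @ Omega                      # positive semi-definite by construction
x, w = rng.normal(size=d), rng.normal(size=d)

d_matrix = (x - w) @ Lam @ (x - w)         # (x - w)^T Lambda (x - w)
d_euclid = np.sum((Omega @ (x - w)) ** 2)  # ||Omega (x - w)||^2
assert np.isclose(d_matrix, d_euclid)
```

This is why adapting Ω is equivalent to learning a linear transformation of the input space under an ordinary Euclidean metric.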
References
(29)
High-dimensional data clustering (Citations: 41)
Charles Bouveyron, Stéphane Girard, Cordelia Schmid
Journal: Computational Statistics & Data Analysis – CS&DA, vol. 52, no. 1, pp. 502–519, 2007
Theoretical aspects of the SOM algorithm (Citations: 126)
Marie Cottrell, Jean-Claude Fort, Gilles Pagès
Journal: Neurocomputing – IJON, vol. 21, no. 1–3, pp. 119–138, 1998
Batch and median neural gas (Citations: 52)
Marie Cottrell, Barbara Hammer, Alexander Hasenfuss, Thomas Villmann
Journal: Neural Networks, vol. 19, no. 6–7, pp. 762–771, 2006
Self-organizing maps: Generalizations and new optimization techniques (Citations: 83)
Thore Graepel, Matthias Burger, Klaus Obermayer
Journal: Neurocomputing – IJON, vol. 21, no. 1–3, pp. 173–190, 1998
Self-organizing maps, vector quantization, and mixture modeling (Citations: 84)
Tom Heskes
Journal: IEEE Transactions on Neural Networks, vol. 12, no. 6, pp. 1299–1305, 2001
Citations (2)
Relevance Learning in Unsupervised Vector Quantization Based on Divergences
Marika Kästner, Andreas Backhaus, Tina Geweniger, Sven Haase, Udo Seiffert, Thomas Villmann
Accelerating Kernel Neural Gas
Frank-Michael Schleif, Andrej Gisbrecht, Barbara Hammer