Local matrix adaptation in topographic neural maps

Banchar Arnonkijpanich, Alexander Hasenfuss, Barbara Hammer
Neurocomputing, doi:10.1016/j.neucom.2010.08.016 (Citations: 2)

The self-organizing map (SOM) and neural gas (NG), as well as generalizations thereof such as the generative topographic map, constitute popular algorithms to represent data by means of prototypes arranged on a (hopefully) topology-representing map. Most standard methods rely on the Euclidean metric, hence the resulting clusters tend to be isotropic and cannot account for local distortions or correlations of the data. For this reason, several proposals exist in the literature that extend prototype-based clustering towards more general models which, for example, incorporate local principal directions into the winner computation. This makes it possible to represent data faithfully using fewer prototypes. In this contribution, we establish a link between models which rely on local principal components (PCA), matrix learning, and a formal cost function of NG and SOM that allows us to show convergence of the algorithm. For this purpose, we consider an extension of prototype-based clustering algorithms such as NG and SOM towards a more general metric given by a full adaptive matrix, so that ellipsoidal clusters are accounted for. The approach is derived from a natural extension of the standard cost functions of NG and SOM (in the form proposed by Heskes). We obtain batch optimization learning rules for prototype and matrix adaptation based on these generalized cost functions, and we show convergence of the algorithm. The batch optimization schemes can be interpreted as local principal component analysis (PCA), and the local eigenvectors correspond to the main axes of the ellipsoidal clusters. Thus, this approach provides a cost function associated with proposals in the literature which combine SOM or NG with local PCA models. We demonstrate the behavior of matrix NG and SOM in several benchmark examples and in an application to image compression.
Journal: Neurocomputing (IJON), vol. 74, no. 4, pp. 522-539, 2011
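
The batch scheme described in the abstract lends itself to a compact illustration. The NumPy sketch below is not the authors' reference implementation: the function name matrix_ng_batch, the parameters n_prototypes, n_epochs and lmbda0, and the annealing schedule are assumptions made for this example. It follows the matrix NG recipe outlined above: prototypes are updated as rank-weighted means of the data, and each local metric matrix is taken as the determinant-normalized inverse of a weighted local covariance, which amounts to a local PCA.

```python
import numpy as np

def matrix_ng_batch(X, n_prototypes=5, n_epochs=50, lmbda0=2.0, seed=0):
    """Hypothetical sketch of batch matrix neural gas.

    Each prototype w_j carries a positive definite matrix L_j with
    det L_j = 1, defining a local quadratic distance
    d_j(x) = (x - w_j)^T L_j (x - w_j).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = X[rng.choice(n, n_prototypes, replace=False)].copy()
    L = np.stack([np.eye(d)] * n_prototypes)   # start from the Euclidean metric

    for epoch in range(n_epochs):
        # anneal the neighborhood range (illustrative schedule)
        lmbda = lmbda0 * (0.01 / lmbda0) ** (epoch / max(1, n_epochs - 1))
        # distances of all samples to all prototypes under the local metrics
        diffs = X[:, None, :] - W[None, :, :]                   # (n, k, d)
        dists = np.einsum('nkd,kde,nke->nk', diffs, L, diffs)
        # NG neighborhood weights from the rank of each prototype per sample
        ranks = np.argsort(np.argsort(dists, axis=1), axis=1)
        h = np.exp(-ranks / lmbda)                              # (n, k)
        # batch prototype update: neighborhood-weighted means
        W = (h.T @ X) / h.sum(axis=0)[:, None]
        # matrix update: det-normalized inverse local covariance (local PCA)
        for j in range(n_prototypes):
            c = X - W[j]
            S = (h[:, j, None] * c).T @ c / h[:, j].sum()
            S += 1e-8 * np.eye(d)                               # regularize
            Lj = np.linalg.inv(S)
            L[j] = Lj / np.linalg.det(Lj) ** (1.0 / d)          # det L_j = 1
    return W, L
```

The eigenvectors of each weighted covariance S are the local principal directions, so the main axes of the resulting ellipsoidal clusters can be read off directly via np.linalg.eigh(S).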
Citation contexts from citing publications:
    • ...Relevance learning introduced for supervised learning vector quantization [10] and its generalization, the so-called matrix learning [26], were recently extended to unsupervised batch learning in topographic mapping [1]...
    • ...has been suggested, with a positive definite matrix Λ to be adapted by gradient descent learning [1], and reducible to a Euclidean distance if Λ is decomposable into Λ = Ω^T Ω [5]. For a diagonal Λ the classical relevance learning is obtained... (a minimal sketch of this distance follows below)
    • ...Moreover, these relevance profiles look similar to the inverse variance profile of the data sets, see Fig. 2. This behavior is in agreement with the theoretical results published in [1] and [21]...

    Marika Kästner et al., Relevance Learning in Unsupervised Vector Quantization Based on Diverg...
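
The quadratic distance quoted in the second excerpt reduces to a Euclidean distance after a linear transform whenever Λ factorizes as Ω^T Ω. A minimal sketch (the name omega_distance is made up for this illustration):

```python
import numpy as np

def omega_distance(x, w, omega):
    """d(x, w) = (x - w)^T Lambda (x - w) with Lambda = Omega^T Omega.

    Because Lambda factorizes, the distance equals ||Omega (x - w)||^2,
    so Lambda is positive semi-definite by construction. A diagonal
    Omega recovers classical relevance learning, with one weight per
    input dimension.
    """
    z = omega @ (np.asarray(x) - np.asarray(w))
    return float(z @ z)

# A diagonal Omega acts as a relevance profile over the dimensions:
x = np.array([1.0, 2.0, 3.0])
w = np.zeros(3)
omega = np.diag([1.0, 0.5, 0.0])    # third dimension deemed irrelevant
print(omega_distance(x, w, omega))  # 1.0*1 + 0.25*4 + 0.0*9 = 2.0
```

For a full matrix Ω the same form also captures correlations between dimensions, which is the matrix-learning generalization the excerpt refers to.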

    • ...Alternative strategies dealing with more complex data manifolds, or novel metric adaptation techniques in clustering, are typically still limited, unable to employ the full potential of complex modeling [7,2]...

    Frank-Michael Schleif et al., Accelerating Kernel Neural Gas
