Unsupervised nearest neighbor regression for dimensionality reduction

Author / contributors:
[Oliver Kramer]
Place, publisher, year:
2015
Contained in:
Soft Computing, 19/6(2015-06-01), 1647-1661
Format:
Article (online)
ID: 605468680
LEADER caa a22 4500
001 605468680
003 CHVBK
005 20210128100318.0
007 cr unu---uuuuu
008 210128e20150601xx s 000 0 eng
024 7 0 |a 10.1007/s00500-014-1354-1  |2 doi 
035 |a (NATIONALLICENCE)springer-10.1007/s00500-014-1354-1 
100 1 |a Kramer  |D Oliver  |u Computational Intelligence Group, Department of Computing Science, University of Oldenburg, Uhlhornsweg 84, 26111, Oldenburg, Germany  |4 aut 
245 1 0 |a Unsupervised nearest neighbor regression for dimensionality reduction  |h [Elektronische Daten]  |c [Oliver Kramer] 
520 3 |a Large numbers of high-dimensional patterns are collected in a variety of disciplines, from astronomy to bioinformatics. In this article, we present an approach to non-linear dimensionality reduction based on fitting nearest neighbor regression to the unsupervised regression framework for learning low-dimensional manifolds. For each high-dimensional pattern, a low-dimensional latent point is generated. The dimensionality of the induced optimization problem grows with the number of patterns. To cope with the large solution space, an iterative solution construction scheme is proposed. In this paper, we introduce two strategies to embed high-dimensional data. First, the latent sorting approach allows embeddings in a one-dimensional latent space corresponding to a sorting of the high-dimensional patterns. Second, Gaussian embeddings randomly generate candidate positions by sampling from a Gaussian distribution, employing distances in data space as variances. Kernel functions increase the flexibility of the approach by mapping the patterns to feature spaces. We analyze and compare the algorithms experimentally on a set of test functions. 
540 |a Springer-Verlag Berlin Heidelberg, 2014 
690 7 |a Dimensionality reduction  |2 nationallicence 
690 7 |a Manifold learning  |2 nationallicence 
690 7 |a Unsupervised regression  |2 nationallicence 
690 7 |a Nearest neighbors  |2 nationallicence 
773 0 |t Soft Computing  |d Springer Berlin Heidelberg  |g 19/6(2015-06-01), 1647-1661  |x 1432-7643  |q 19:6<1647  |1 2015  |2 19  |o 500 
856 4 0 |u https://doi.org/10.1007/s00500-014-1354-1  |q text/html  |z Online access via DOI 
898 |a BK010053  |b XK010053  |c XK010000 
900 7 |a Metadata rights reserved  |b Springer special CC-BY-NC licence  |2 nationallicence 
908 |D 1  |a research-article  |2 jats 
949 |B NATIONALLICENCE  |F NATIONALLICENCE  |b NL-springer 
950 |B NATIONALLICENCE  |P 856  |E 40  |u https://doi.org/10.1007/s00500-014-1354-1  |q text/html  |z Online access via DOI 
950 |B NATIONALLICENCE  |P 100  |E 1-  |a Kramer  |D Oliver  |u Computational Intelligence Group, Department of Computing Science, University of Oldenburg, Uhlhornsweg 84, 26111, Oldenburg, Germany  |4 aut 
950 |B NATIONALLICENCE  |P 773  |E 0-  |t Soft Computing  |d Springer Berlin Heidelberg  |g 19/6(2015-06-01), 1647-1661  |x 1432-7643  |q 19:6<1647  |1 2015  |2 19  |o 500
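The abstract (field 520) describes an iterative embedding scheme in which each high-dimensional pattern receives a latent point, with the Gaussian-embedding strategy sampling candidate positions whose variance comes from data-space distances. As a rough illustration only, here is a minimal Python sketch of that idea; the function name `unn_embed`, the candidate count, the greedy neighbor-centered sampling, and the KNN reconstruction criterion are assumptions for this sketch, not the paper's exact algorithm:

```python
import numpy as np

def unn_embed(X, n_candidates=30, k=2, sigma_scale=1.0, seed=0):
    """Iteratively embed patterns X (n x d) into a 1-D latent space.

    For each new pattern, candidate latent positions are sampled from a
    Gaussian centered at the latent position of its nearest already-
    embedded pattern, with the data-space distance as standard deviation
    (a simplified stand-in for the paper's Gaussian-embedding strategy).
    The candidate minimizing the KNN reconstruction error in data space
    is kept.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    latent = np.zeros(n)          # first pattern is placed at the origin
    for i in range(1, n):
        # data-space distance from the new pattern to embedded patterns
        d = np.linalg.norm(X[:i] - X[i], axis=1)
        nearest = int(np.argmin(d))
        # sample candidate latent positions around the nearest neighbor,
        # using its data-space distance as the Gaussian scale
        cands = rng.normal(latent[nearest], sigma_scale * d[nearest] + 1e-12,
                           n_candidates)
        best, best_err = cands[0], np.inf
        for c in cands:
            # KNN regression from latent space back to data space:
            # reconstruct X[i] as the mean of its k latent-nearest patterns
            idx = np.argsort(np.abs(latent[:i] - c))[:k]
            err = np.linalg.norm(X[idx].mean(axis=0) - X[i])
            if err < best_err:
                best, best_err = c, err
        latent[i] = best
    return latent
```

For a pattern set sampled along a curve, the returned latent coordinates give a one-dimensional arrangement of the patterns; the latent-sorting strategy mentioned in the abstract would instead search over orderings directly.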