Sparse pixel sampling for appearance edit propagation

Author(s) / Contributors:
[Tatsuya Yatagawa, Yasushi Yamaguchi]
Place, Publisher, Year:
2015
Contained in:
The Visual Computer, 31/6-8(2015-06-01), 1101-1111
Format:
Article (online)
ID: 605541035
LEADER caa a22 4500
001 605541035
003 CHVBK
005 20210128100914.0
007 cr unu---uuuuu
008 210128e20150601xx s 000 0 eng
024 7 0 |a 10.1007/s00371-015-1094-y  |2 doi 
035 |a (NATIONALLICENCE)springer-10.1007/s00371-015-1094-y 
245 0 0 |a Sparse pixel sampling for appearance edit propagation  |h [Elektronische Daten]  |c [Tatsuya Yatagawa, Yasushi Yamaguchi] 
520 3 |a Edit propagation is an appearance-editing method that propagates sparse user-provided edit strokes across an image. Although edit propagation has a wide variety of applications, it is computationally expensive, owing to the need to solve large linear systems. To reduce the computational cost, interpolation-based approaches have been studied intensively. This study is inspired by an interpolation-based edit-propagation method that uses a clustering algorithm to determine samples. The method uses an interpolant that approximates edit parameters with convex combinations of the samples. However, because the clustering algorithm generates samples that lie inside the set of pixels in a feature space, an interpolant with convex combinations cannot exactly reconstruct pixels outside the convex hull. To address this issue, this paper proposes a novel approximation model that interpolates image colors as well as edit parameters using affine combinations. In addition, this paper introduces sparse pixel sampling to determine the number and positions of samples and the weight coefficients of the affine combinations simultaneously. Sparse pixel sampling is performed by updating candidate pixels: unnecessary pixels are discarded with compressive sensing, and new candidate pixels are greedily resampled according to their approximation errors. This paper demonstrates that the proposed model achieves better approximation in terms of both image colors and edit parameters, and discusses the properties of the proposed model with various experiments. 
540 |a Springer-Verlag Berlin Heidelberg, 2015 
690 7 |a Image and video editing  |2 nationallicence 
690 7 |a Interactive editing  |2 nationallicence 
690 7 |a Edit propagation  |2 nationallicence 
690 7 |a Compressive sensing  |2 nationallicence 
700 1 |a Yatagawa  |D Tatsuya  |u University of Tokyo, 3-8-1 Komaba, Meguro-ku, 153-8902, Tokyo, Japan  |4 aut 
700 1 |a Yamaguchi  |D Yasushi  |u University of Tokyo/JST CREST, 3-8-1 Komaba, Meguro-ku, 153-8902, Tokyo, Japan  |4 aut 
773 0 |t The Visual Computer  |d Springer Berlin Heidelberg  |g 31/6-8(2015-06-01), 1101-1111  |x 0178-2789  |q 31:6-8<1101  |1 2015  |2 31  |o 371 
856 4 0 |u https://doi.org/10.1007/s00371-015-1094-y  |q text/html  |z Onlinezugriff via DOI 
898 |a BK010053  |b XK010053  |c XK010000 
900 7 |a Metadata rights reserved  |b Springer special CC-BY-NC licence  |2 nationallicence 
908 |D 1  |a research-article  |2 jats 
949 |B NATIONALLICENCE  |F NATIONALLICENCE  |b NL-springer 
950 |B NATIONALLICENCE  |P 856  |E 40  |u https://doi.org/10.1007/s00371-015-1094-y  |q text/html  |z Onlinezugriff via DOI 
950 |B NATIONALLICENCE  |P 700  |E 1-  |a Yatagawa  |D Tatsuya  |u University of Tokyo, 3-8-1 Komaba, Meguro-ku, 153-8902, Tokyo, Japan  |4 aut 
950 |B NATIONALLICENCE  |P 700  |E 1-  |a Yamaguchi  |D Yasushi  |u University of Tokyo/JST CREST, 3-8-1 Komaba, Meguro-ku, 153-8902, Tokyo, Japan  |4 aut 
950 |B NATIONALLICENCE  |P 773  |E 0-  |t The Visual Computer  |d Springer Berlin Heidelberg  |g 31/6-8(2015-06-01), 1101-1111  |x 0178-2789  |q 31:6-8<1101  |1 2015  |2 31  |o 371
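The abstract's key idea, interpolating edit parameters as affine (sum-to-one, not necessarily non-negative) combinations of sample pixels, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function names `affine_weights` and `propagate` are hypothetical, and the weights here are found with a plain constrained least-squares solve rather than the paper's compressive-sensing sampling.

```python
import numpy as np

def affine_weights(samples, query):
    """Weights w with sum(w) = 1 minimizing ||samples.T @ w - query||.

    samples: (k, d) array of sample pixels in feature space.
    query:   (d,) feature vector of the pixel to reconstruct.
    Solved via Lagrange multipliers: the KKT system
        [ S S^T  1 ] [w  ]   [ S q ]
        [ 1^T    0 ] [lam] = [  1  ]
    enforces the affine (sum-to-one) constraint exactly.
    """
    k = samples.shape[0]
    K = np.zeros((k + 1, k + 1))
    K[:k, :k] = samples @ samples.T   # Gram matrix of the samples
    K[:k, k] = 1.0                    # constraint column
    K[k, :k] = 1.0                    # constraint row
    rhs = np.append(samples @ query, 1.0)
    sol = np.linalg.lstsq(K, rhs, rcond=None)[0]
    return sol[:k]                    # drop the Lagrange multiplier

def propagate(samples, sample_edits, query):
    """Interpolate an edit parameter at `query` as an affine combination
    of the edit parameters attached to the sample pixels."""
    w = affine_weights(samples, query)
    return w @ sample_edits
```

Because the weights may be negative, a query pixel outside the convex hull of the samples can still be reconstructed exactly, which is the advantage over convex combinations that the abstract highlights.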