<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
 <record>
  <leader>     caa a22        4500</leader>
  <controlfield tag="001">605541434</controlfield>
  <controlfield tag="003">CHVBK</controlfield>
  <controlfield tag="005">20210128100916.0</controlfield>
  <controlfield tag="007">cr unu---uuuuu</controlfield>
  <controlfield tag="008">210128e20151101xx      s     000 0 eng  </controlfield>
  <datafield tag="024" ind1="7" ind2="0">
   <subfield code="a">10.1007/s00371-014-1027-1</subfield>
   <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="035" ind1=" " ind2=" ">
   <subfield code="a">(NATIONALLICENCE)springer-10.1007/s00371-014-1027-1</subfield>
  </datafield>
  <datafield tag="245" ind1="0" ind2="0">
   <subfield code="a">On spatio-temporal feature point detection for animated meshes</subfield>
   <subfield code="h">[Elektronische Daten]</subfield>
   <subfield code="c">[Vasyl Mykhalchuk, Hyewon Seo, Frederic Cordier]</subfield>
  </datafield>
  <datafield tag="520" ind1="3" ind2=" ">
   <subfield code="a">Although automatic feature detection has been a long-sought subject by researchers in computer graphics and computer vision, feature extraction on deforming models remains a relatively unexplored area. In this paper, we develop a new method for automatic detection of spatio-temporal feature points on animated meshes. Our algorithm consists of three main parts. We first define local deformation characteristics, based on strain and curvature values computed for each point at each frame. Next, we construct multi-resolution space-time Gaussians and difference-of-Gaussian (DoG) pyramids on the deformation characteristics representing the input animated mesh, where each level contains 3D smoothed and subsampled representation of the previous level. Finally, we estimate locations and scales of spatio-temporal feature points by using a scale-normalized differential operator. A new, precise approximation of spatio-temporal scale-normalized Laplacian has been introduced, based on the space-time DoG. We have experimentally verified our algorithm on a number of examples and conclude that our technique allows to detect spatio and temporal feature points in a reliable manner.</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
   <subfield code="a">Springer-Verlag Berlin Heidelberg, 2014</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Feature detection</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Animated mesh</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Multi-scale representation</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Difference of Gaussian</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Mykhalchuk</subfield>
   <subfield code="D">Vasyl</subfield>
   <subfield code="u">University of Strasbourg, Strasbourg, France</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Seo</subfield>
   <subfield code="D">Hyewon</subfield>
   <subfield code="u">University of Strasbourg, Strasbourg, France</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Cordier</subfield>
   <subfield code="D">Frederic</subfield>
   <subfield code="u">University of Haute Alsace, Mulhouse, France</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="773" ind1="0" ind2=" ">
   <subfield code="t">The Visual Computer</subfield>
   <subfield code="d">Springer Berlin Heidelberg</subfield>
   <subfield code="g">31/11(2015-11-01), 1471-1486</subfield>
   <subfield code="x">0178-2789</subfield>
   <subfield code="q">31:11&lt;1471</subfield>
   <subfield code="1">2015</subfield>
   <subfield code="2">31</subfield>
   <subfield code="o">371</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2="0">
   <subfield code="u">https://doi.org/10.1007/s00371-014-1027-1</subfield>
   <subfield code="q">text/html</subfield>
   <subfield code="z">Onlinezugriff via DOI</subfield>
  </datafield>
  <datafield tag="898" ind1=" " ind2=" ">
   <subfield code="a">BK010053</subfield>
   <subfield code="b">XK010053</subfield>
   <subfield code="c">XK010000</subfield>
  </datafield>
  <datafield tag="900" ind1=" " ind2="7">
   <subfield code="a">Metadata rights reserved</subfield>
   <subfield code="b">Springer special CC-BY-NC licence</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="908" ind1=" " ind2=" ">
   <subfield code="D">1</subfield>
   <subfield code="a">research-article</subfield>
   <subfield code="2">jats</subfield>
  </datafield>
  <datafield tag="949" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="F">NATIONALLICENCE</subfield>
   <subfield code="b">NL-springer</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">856</subfield>
   <subfield code="E">40</subfield>
   <subfield code="u">https://doi.org/10.1007/s00371-014-1027-1</subfield>
   <subfield code="q">text/html</subfield>
   <subfield code="z">Onlinezugriff via DOI</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Mykhalchuk</subfield>
   <subfield code="D">Vasyl</subfield>
   <subfield code="u">University of Strasbourg, Strasbourg, France</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Seo</subfield>
   <subfield code="D">Hyewon</subfield>
   <subfield code="u">University of Strasbourg, Strasbourg, France</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Cordier</subfield>
   <subfield code="D">Frederic</subfield>
   <subfield code="u">University of Haute Alsace, Mulhouse, France</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">773</subfield>
   <subfield code="E">0-</subfield>
   <subfield code="t">The Visual Computer</subfield>
   <subfield code="d">Springer Berlin Heidelberg</subfield>
   <subfield code="g">31/11(2015-11-01), 1471-1486</subfield>
   <subfield code="x">0178-2789</subfield>
   <subfield code="q">31:11&lt;1471</subfield>
   <subfield code="1">2015</subfield>
   <subfield code="2">31</subfield>
   <subfield code="o">371</subfield>
  </datafield>
 </record>
</collection>
