<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
 <record>
  <leader>     caa a22        4500</leader>
  <controlfield tag="001">605540225</controlfield>
  <controlfield tag="003">CHVBK</controlfield>
  <controlfield tag="005">20210128100910.0</controlfield>
  <controlfield tag="007">cr unu---uuuuu</controlfield>
  <controlfield tag="008">210128e20150101xx      s     000 0 eng  </controlfield>
  <datafield tag="024" ind1="7" ind2="0">
   <subfield code="a">10.1007/s00371-013-0902-5</subfield>
   <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="035" ind1=" " ind2=" ">
   <subfield code="a">(NATIONALLICENCE)springer-10.1007/s00371-013-0902-5</subfield>
  </datafield>
  <datafield tag="245" ind1="0" ind2="0">
   <subfield code="a">Automated human motion segmentation via motion regularities</subfield>
   <subfield code="h">[Elektronische Daten]</subfield>
   <subfield code="c">[Rongyi Lan, Huaijiang Sun]</subfield>
  </datafield>
  <datafield tag="520" ind1="3" ind2=" ">
   <subfield code="a">Analysis and reuse of human motion capture (mocap) data play an important role in animation, games and medical rehabilitation. In various mocap-based animation techniques, motion segmentation is regarded as one of the fundamental functions. Many proposed segmentation methods utilize little or no prior knowledge. However, human motion has its own regularities, so reasonable prior assumptions on these regularities will lead to better performance. In this paper, we focus on the learning of intrinsic regularities of mocap data based on a small set of training data which only contain daily-life motions. By utilizing these learnt motion regularities, we can successfully segment long motion sequences containing motion types that not even include in the training data. First, by assuming that most types of motions can be composed of a small number of typical poses, the motion vocabulary (mo-vocabulary) can be obtained using key pose extraction and clustering analysis, which are regarded as the low-level motion regularity. By replacing each frame with the most similar pose in the mo-vocabulary, mocap data can be transformed into text-like documents. Second, we use latent Dirichlet allocation to capture the patterns of pose combinations that frequently occur in human motions, namely the motion topics (mo-topics), which are regarded as the high-level motion regularities. By representing the target motion as the distribution over the learnt mo-topics, the segmentation task can be naturally turned into a problem of detecting notable changes of this distribution. Finally, we propose local semantic coherence curve to segment motion sequences. Since mo-topics are semantically meaningful and significantly increase the abstraction-level of motion representation, logically correct results can be obtained. The experiments demonstrate that the proposed approach outperforms the available methods on CMU and Bonn mocap database.</subfield>
  </datafield>
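  <!--
   Illustrative note (not part of the bibliographic record): the abstract above outlines a
   pipeline of (1) clustering frames into a pose vocabulary, (2) LDA topics over bag-of-pose
   "documents", and (3) cutting where the topic distribution changes. The Python sketch below
   is a hypothetical illustration of that general idea using scikit-learn; every name, window
   size, and threshold here is an assumption, not the authors' implementation.

   import numpy as np
   from sklearn.cluster import KMeans
   from sklearn.decomposition import LatentDirichletAllocation

   def segment_mocap(frames, n_poses=50, n_topics=10, win=60, step=10, thresh=0.4):
       # Low-level regularity (assumed setup): cluster frames into a "mo-vocabulary" of poses.
       pose_ids = KMeans(n_clusters=n_poses, n_init=10).fit_predict(frames)

       # Turn each sliding window into a bag-of-poses "document".
       starts = list(range(0, len(pose_ids) - win + 1, step))
       docs = np.array([np.bincount(pose_ids[s:s + win], minlength=n_poses) for s in starts])

       # High-level regularity: LDA topics over frequently co-occurring pose combinations.
       lda = LatentDirichletAllocation(n_components=n_topics).fit(docs)
       theta = lda.transform(docs)  # per-window topic distributions

       # Segment where consecutive topic distributions diverge (cosine distance peak).
       sim = (theta[:-1] * theta[1:]).sum(1) / (
           np.linalg.norm(theta[:-1], axis=1) * np.linalg.norm(theta[1:], axis=1))
       return [starts[i + 1] for i in np.where(1 - sim > thresh)[0]]
  -->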
  <datafield tag="540" ind1=" " ind2=" ">
   <subfield code="a">Springer-Verlag Berlin Heidelberg, 2013</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Motion capture</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Motion representation</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Motion segmentation</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Hierarchical clustering</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Latent Dirichlet allocation</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Topic mining</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Lan</subfield>
   <subfield code="D">Rongyi</subfield>
   <subfield code="u">School of Computer Science and Technology, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Sun</subfield>
   <subfield code="D">Huaijiang</subfield>
   <subfield code="u">School of Computer Science and Technology, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="773" ind1="0" ind2=" ">
   <subfield code="t">The Visual Computer</subfield>
   <subfield code="d">Springer Berlin Heidelberg</subfield>
   <subfield code="g">31/1(2015-01-01), 35-53</subfield>
   <subfield code="x">0178-2789</subfield>
   <subfield code="q">31:1&lt;35</subfield>
   <subfield code="1">2015</subfield>
   <subfield code="2">31</subfield>
   <subfield code="o">371</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2="0">
   <subfield code="u">https://doi.org/10.1007/s00371-013-0902-5</subfield>
   <subfield code="q">text/html</subfield>
   <subfield code="z">Onlinezugriff via DOI</subfield>
  </datafield>
  <datafield tag="898" ind1=" " ind2=" ">
   <subfield code="a">BK010053</subfield>
   <subfield code="b">XK010053</subfield>
   <subfield code="c">XK010000</subfield>
  </datafield>
  <datafield tag="900" ind1=" " ind2="7">
   <subfield code="a">Metadata rights reserved</subfield>
   <subfield code="b">Springer special CC-BY-NC licence</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="908" ind1=" " ind2=" ">
   <subfield code="D">1</subfield>
   <subfield code="a">research-article</subfield>
   <subfield code="2">jats</subfield>
  </datafield>
  <datafield tag="949" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="F">NATIONALLICENCE</subfield>
   <subfield code="b">NL-springer</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">856</subfield>
   <subfield code="E">40</subfield>
   <subfield code="u">https://doi.org/10.1007/s00371-013-0902-5</subfield>
   <subfield code="q">text/html</subfield>
   <subfield code="z">Onlinezugriff via DOI</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Lan</subfield>
   <subfield code="D">Rongyi</subfield>
   <subfield code="u">School of Computer Science and Technology, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Sun</subfield>
   <subfield code="D">Huaijiang</subfield>
   <subfield code="u">School of Computer Science and Technology, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">773</subfield>
   <subfield code="E">0-</subfield>
   <subfield code="t">The Visual Computer</subfield>
   <subfield code="d">Springer Berlin Heidelberg</subfield>
   <subfield code="g">31/1(2015-01-01), 35-53</subfield>
   <subfield code="x">0178-2789</subfield>
   <subfield code="q">31:1&lt;35</subfield>
   <subfield code="1">2015</subfield>
   <subfield code="2">31</subfield>
   <subfield code="o">371</subfield>
  </datafield>
 </record>
</collection>
