<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
 <record>
  <leader>     caa a22        4500</leader>
  <controlfield tag="001">605446954</controlfield>
  <controlfield tag="003">CHVBK</controlfield>
  <controlfield tag="005">20210128100129.0</controlfield>
  <controlfield tag="007">cr unu---uuuuu</controlfield>
  <controlfield tag="008">210128e20151101xx      s     000 0 eng  </controlfield>
  <datafield tag="024" ind1="7" ind2="0">
   <subfield code="a">10.1007/s11042-014-2145-5</subfield>
   <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="035" ind1=" " ind2=" ">
   <subfield code="a">(NATIONALLICENCE)springer-10.1007/s11042-014-2145-5</subfield>
  </datafield>
  <datafield tag="245" ind1="0" ind2="0">
   <subfield code="a">Video object segmentation by integrating trajectories from points and regions</subfield>
   <subfield code="h">[Elektronische Daten]</subfield>
   <subfield code="c">[Geng Zhang, Zejian Yuan, Yuehu Liu, Liang Ma, Nanning Zheng]</subfield>
  </datafield>
  <datafield tag="520" ind1="3" ind2=" ">
   <subfield code="a">We describe a novel video object segmentation system based on a conditional random field model with high-order term which is capable of capturing longer-range spatial and temporal grouping information. Our system is able to segment different moving objects effectively from complex background due to integrating the complementary properties of trajectories from points and regions. Although point and region trajectories have already been used in video object segmentation, their complementary properties have not been well investigated. In this paper, we propose an ingenious scheme to transfer the labels of sparse point trajectories to region trajectories. Especially, for region trajectories with few texture, this scheme can automatically predict their label probabilities by using a Gaussian mixture model of appearance and motion given the labels of point trajectories. Meanwhile, we design a reliability measurement for region trajectories based on shape consistency, which helps us to design robust high-order potentials for spatially overlapping region trajectories. Our region trajectories are extracted from hierarchical image over-segmentation, and hence they can capture meaningful regions over time. Additionally, our approach is a streaming process, in which object labels are propagated over a video. We validate the effectiveness of our approach on public challenging datasets, and show that our approach outperforms other competing methods</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
   <subfield code="a">Springer Science+Business Media New York, 2014</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Video object segmentation</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Point tajectory</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Region trajectory</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Complementary property</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">High-order model</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Zhang</subfield>
   <subfield code="D">Geng</subfield>
   <subfield code="u">Institute of Artificial Intelligence and Robotics, #28 West Xianning Road, Xi'an, Shaanxi, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Yuan</subfield>
   <subfield code="D">Zejian</subfield>
   <subfield code="u">Institute of Artificial Intelligence and Robotics, #28 West Xianning Road, Xi'an, Shaanxi, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Liu</subfield>
   <subfield code="D">Yuehu</subfield>
   <subfield code="u">Institute of Artificial Intelligence and Robotics, #28 West Xianning Road, Xi'an, Shaanxi, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Ma</subfield>
   <subfield code="D">Liang</subfield>
   <subfield code="u">Institute of Artificial Intelligence and Robotics, #28 West Xianning Road, Xi'an, Shaanxi, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Zheng</subfield>
   <subfield code="D">Nanning</subfield>
   <subfield code="u">Institute of Artificial Intelligence and Robotics, #28 West Xianning Road, Xi'an, Shaanxi, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="773" ind1="0" ind2=" ">
   <subfield code="t">Multimedia Tools and Applications</subfield>
   <subfield code="d">Springer US; http://www.springer-ny.com</subfield>
   <subfield code="g">74/21(2015-11-01), 9665-9696</subfield>
   <subfield code="x">1380-7501</subfield>
   <subfield code="q">74:21&lt;9665</subfield>
   <subfield code="1">2015</subfield>
   <subfield code="2">74</subfield>
   <subfield code="o">11042</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2="0">
   <subfield code="u">https://doi.org/10.1007/s11042-014-2145-5</subfield>
   <subfield code="q">text/html</subfield>
   <subfield code="z">Onlinezugriff via DOI</subfield>
  </datafield>
  <datafield tag="898" ind1=" " ind2=" ">
   <subfield code="a">BK010053</subfield>
   <subfield code="b">XK010053</subfield>
   <subfield code="c">XK010000</subfield>
  </datafield>
  <datafield tag="900" ind1=" " ind2="7">
   <subfield code="a">Metadata rights reserved</subfield>
   <subfield code="b">Springer special CC-BY-NC licence</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="908" ind1=" " ind2=" ">
   <subfield code="D">1</subfield>
   <subfield code="a">research-article</subfield>
   <subfield code="2">jats</subfield>
  </datafield>
  <datafield tag="949" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="F">NATIONALLICENCE</subfield>
   <subfield code="b">NL-springer</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">856</subfield>
   <subfield code="E">40</subfield>
   <subfield code="u">https://doi.org/10.1007/s11042-014-2145-5</subfield>
   <subfield code="q">text/html</subfield>
   <subfield code="z">Onlinezugriff via DOI</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Zhang</subfield>
   <subfield code="D">Geng</subfield>
   <subfield code="u">Institute of Artificial Intelligence and Robotics, #28 West Xianning Road, Xi'an, Shaanxi, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Yuan</subfield>
   <subfield code="D">Zejian</subfield>
   <subfield code="u">Institute of Artificial Intelligence and Robotics, #28 West Xianning Road, Xi'an, Shaanxi, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Liu</subfield>
   <subfield code="D">Yuehu</subfield>
   <subfield code="u">Institute of Artificial Intelligence and Robotics, #28 West Xianning Road, Xi'an, Shaanxi, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Ma</subfield>
   <subfield code="D">Liang</subfield>
   <subfield code="u">Institute of Artificial Intelligence and Robotics, #28 West Xianning Road, Xi'an, Shaanxi, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Zheng</subfield>
   <subfield code="D">Nanning</subfield>
   <subfield code="u">Institute of Artificial Intelligence and Robotics, #28 West Xianning Road, Xi'an, Shaanxi, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">773</subfield>
   <subfield code="E">0-</subfield>
   <subfield code="t">Multimedia Tools and Applications</subfield>
   <subfield code="d">Springer US; http://www.springer-ny.com</subfield>
   <subfield code="g">74/21(2015-11-01), 9665-9696</subfield>
   <subfield code="x">1380-7501</subfield>
   <subfield code="q">74:21&lt;9665</subfield>
   <subfield code="1">2015</subfield>
   <subfield code="2">74</subfield>
   <subfield code="o">11042</subfield>
  </datafield>
 </record>
</collection>
