<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
 <record>
  <leader>     caa a22        4500</leader>
  <controlfield tag="001">605540578</controlfield>
  <controlfield tag="003">CHVBK</controlfield>
  <controlfield tag="005">20210128100912.0</controlfield>
  <controlfield tag="007">cr unu---uuuuu</controlfield>
  <controlfield tag="008">210128e20151001xx      s     000 0 eng  </controlfield>
  <datafield tag="024" ind1="7" ind2="0">
   <subfield code="a">10.1007/s00371-014-1014-6</subfield>
   <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="035" ind1=" " ind2=" ">
   <subfield code="a">(NATIONALLICENCE)springer-10.1007/s00371-014-1014-6</subfield>
  </datafield>
  <datafield tag="245" ind1="0" ind2="0">
   <subfield code="a">Robust object tracking via multi-feature adaptive fusion based on stability: contrast analysis</subfield>
   <subfield code="h">[Elektronische Daten]</subfield>
   <subfield code="c">[Zhiyong Li, Shuang He, Mervat Hashem]</subfield>
  </datafield>
  <datafield tag="520" ind1="3" ind2=" ">
   <subfield code="a">Object tracking under complex circumstances is a challenging task because of background interference, obstacle occlusion, object deformation, etc. Given such conditions, robustly detecting, locating, and analyzing a target through single-feature representation are difficult tasks. Global features, such as color, are widely used in tracking, but may cause the object to drift under complex circumstances. Local features, such as HOG and SIFT, can precisely represent rigid targets, but these features lack the robustness of an object in motion. An effective method is adaptive fusion of multiple features in representing targets. The process of adaptively fusing different features is the key to robust object tracking. This study uses a multi-feature joint descriptor (MFJD) and the distance between joint histograms to measure the similarity between a target and its candidate patches. Color and HOG features are fused as the tracked object of the joint representation. This study also proposes a self-adaptive multi-feature fusion strategy that can adaptively adjust the joint weight of the fused features based on their stability and contrast measure scores. The mean shift process is adopted as the object tracking framework with multi-feature representation. The experimental results demonstrate that the proposed MFJD tracking method effectively handles background clutter, partial occlusion by obstacles, scale changes, and deformations. The novel method performs better than several state-of-the-art methods in real surveillance scenarios.</subfield>
  </datafield>
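  <!--
  Illustration of the method summarized in the abstract above: color and HOG
  histograms are fused into a joint descriptor whose per-feature weights adapt
  to stability and contrast scores, and candidates are compared by a histogram
  distance inside a mean shift tracker. The Python sketch below is a minimal
  illustration of that idea, not the paper's implementation; adaptive_weight
  and all names are hypothetical stand-ins, and the paper's exact
  stability and contrast scoring is not reproduced here.

  import numpy as np

  def bhattacharyya(p, q):
      # Similarity of two L1-normalized histograms; 1.0 means identical.
      return float(np.sum(np.sqrt(p * q)))

  def adaptive_weight(stab_c, con_c, stab_h, con_h):
      # Hypothetical fusion weight: the color feature's share grows with
      # its stability and contrast scores relative to the HOG feature's.
      # The paper defines its own scoring; this is only a stand-in.
      s_color, s_hog = stab_c * con_c, stab_h * con_h
      return s_color / (s_color + s_hog)

  def joint_descriptor(color_hist, hog_hist, w_color):
      # Multi-feature joint descriptor: scale each normalized histogram
      # by its weight, concatenate, and renormalize to sum to 1.
      joint = np.concatenate([w_color * color_hist,
                              (1.0 - w_color) * hog_hist])
      return joint / joint.sum()

  # Toy usage: score one candidate patch against the target model.
  rng = np.random.default_rng(0)
  def norm(v):
      return v / v.sum()

  w = adaptive_weight(0.8, 0.7, 0.5, 0.6)
  target = joint_descriptor(norm(rng.random(16)), norm(rng.random(9)), w)
  candidate = joint_descriptor(norm(rng.random(16)), norm(rng.random(9)), w)
  print(bhattacharyya(target, candidate))
  -->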
  <datafield tag="540" ind1=" " ind2=" ">
   <subfield code="a">Springer-Verlag Berlin Heidelberg, 2014</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Object tracking</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Feature fusion</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Multi-feature joint descriptor</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Stability</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Contrast</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Li</subfield>
   <subfield code="D">Zhiyong</subfield>
   <subfield code="u">College of Computer Science and Electronic Engineering, Hunan University, 410082, Changsha, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">He</subfield>
   <subfield code="D">Shuang</subfield>
   <subfield code="u">College of Computer Science and Electronic Engineering, Hunan University, 410082, Changsha, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Hashem</subfield>
   <subfield code="D">Mervat</subfield>
   <subfield code="u">College of Computer Science and Electronic Engineering, Hunan University, 410082, Changsha, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="773" ind1="0" ind2=" ">
   <subfield code="t">The Visual Computer</subfield>
   <subfield code="d">Springer Berlin Heidelberg</subfield>
   <subfield code="g">31/10(2015-10-01), 1319-1337</subfield>
   <subfield code="x">0178-2789</subfield>
   <subfield code="q">31:10&lt;1319</subfield>
   <subfield code="1">2015</subfield>
   <subfield code="2">31</subfield>
   <subfield code="o">371</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2="0">
   <subfield code="u">https://doi.org/10.1007/s00371-014-1014-6</subfield>
   <subfield code="q">text/html</subfield>
   <subfield code="z">Onlinezugriff via DOI</subfield>
  </datafield>
  <datafield tag="898" ind1=" " ind2=" ">
   <subfield code="a">BK010053</subfield>
   <subfield code="b">XK010053</subfield>
   <subfield code="c">XK010000</subfield>
  </datafield>
  <datafield tag="900" ind1=" " ind2="7">
   <subfield code="a">Metadata rights reserved</subfield>
   <subfield code="b">Springer special CC-BY-NC licence</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="908" ind1=" " ind2=" ">
   <subfield code="D">1</subfield>
   <subfield code="a">research-article</subfield>
   <subfield code="2">jats</subfield>
  </datafield>
  <datafield tag="949" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="F">NATIONALLICENCE</subfield>
   <subfield code="b">NL-springer</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">856</subfield>
   <subfield code="E">40</subfield>
   <subfield code="u">https://doi.org/10.1007/s00371-014-1014-6</subfield>
   <subfield code="q">text/html</subfield>
   <subfield code="z">Onlinezugriff via DOI</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Li</subfield>
   <subfield code="D">Zhiyong</subfield>
   <subfield code="u">College of Computer Science and Electronic Engineering, Hunan University, 410082, Changsha, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">He</subfield>
   <subfield code="D">Shuang</subfield>
   <subfield code="u">College of Computer Science and Electronic Engineering, Hunan University, 410082, Changsha, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Hashem</subfield>
   <subfield code="D">Mervat</subfield>
   <subfield code="u">College of Computer Science and Electronic Engineering, Hunan University, 410082, Changsha, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">773</subfield>
   <subfield code="E">0-</subfield>
   <subfield code="t">The Visual Computer</subfield>
   <subfield code="d">Springer Berlin Heidelberg</subfield>
   <subfield code="g">31/10(2015-10-01), 1319-1337</subfield>
   <subfield code="x">0178-2789</subfield>
   <subfield code="q">31:10&lt;1319</subfield>
   <subfield code="1">2015</subfield>
   <subfield code="2">31</subfield>
   <subfield code="o">371</subfield>
  </datafield>
 </record>
</collection>
