<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
 <record>
  <leader>     caa a22        4500</leader>
  <controlfield tag="001">606209948</controlfield>
  <controlfield tag="003">CHVBK</controlfield>
  <controlfield tag="005">20210128101031.0</controlfield>
  <controlfield tag="007">cr unu---uuuuu</controlfield>
  <controlfield tag="008">210128e20151201xx      s     000 0 eng  </controlfield>
  <datafield tag="024" ind1="7" ind2="0">
   <subfield code="a">10.1007/s11220-014-0102-z</subfield>
   <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="035" ind1=" " ind2=" ">
   <subfield code="a">(NATIONALLICENCE)springer-10.1007/s11220-014-0102-z</subfield>
  </datafield>
  <datafield tag="100" ind1="1" ind2=" ">
   <subfield code="a">Bai</subfield>
   <subfield code="D">Shuang</subfield>
   <subfield code="u">School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="245" ind1="1" ind2="0">
   <subfield code="a">Human-Centric Image Categorization Based on Poselets</subfield>
   <subfield code="h">[Electronic data]</subfield>
   <subfield code="c">[Shuang Bai]</subfield>
  </datafield>
  <datafield tag="520" ind1="3" ind2=" ">
   <subfield code="a">In daily life, images in which many people are present and performing certain activities are common; we call these human-centric images. As the number of such images grows ever larger, organizing and accessing them efficiently becomes urgent. Since the categories of human-centric images are determined by the human activities they depict, in this paper we propose to classify human-centric images by analyzing the poses of all humans in them. Specifically, we first introduce the notion of poselets, which represent parts of human poses, along with a method to detect humans based on these poselets. Given a human-centric image, to determine its category we use the poselets and the human detection method to detect all possible poselet activations in it and create a statistical representation of the poses of the humans in the image. Additionally, we investigate the influence of contextual information on the categorization of human-centric images. Finally, to evaluate the human-centric image categorization method, five categories of human-centric images are collected from the internet and used for experiments. Experimental results show that the poselet distribution representations are more suitable for representing human-centric images than the popular bag-of-visual-words method.</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
   <subfield code="a">Springer Science+Business Media New York, 2014</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Human-centric images</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Human activity</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Categorization</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Poselet</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="773" ind1="0" ind2=" ">
   <subfield code="t">Sensing and Imaging</subfield>
   <subfield code="d">Springer US; http://www.springer-ny.com</subfield>
   <subfield code="g">16/1(2015-12-01), 1-19</subfield>
   <subfield code="x">1557-2064</subfield>
   <subfield code="q">16:1&lt;1</subfield>
   <subfield code="1">2015</subfield>
   <subfield code="2">16</subfield>
   <subfield code="o">11220</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2="0">
   <subfield code="u">https://doi.org/10.1007/s11220-014-0102-z</subfield>
   <subfield code="q">text/html</subfield>
    <subfield code="z">Online access via DOI</subfield>
  </datafield>
  <datafield tag="898" ind1=" " ind2=" ">
   <subfield code="a">BK010053</subfield>
   <subfield code="b">XK010053</subfield>
   <subfield code="c">XK010000</subfield>
  </datafield>
  <datafield tag="900" ind1=" " ind2="7">
   <subfield code="a">Metadata rights reserved</subfield>
   <subfield code="b">Springer special CC-BY-NC licence</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="908" ind1=" " ind2=" ">
   <subfield code="D">1</subfield>
   <subfield code="a">research-article</subfield>
   <subfield code="2">jats</subfield>
  </datafield>
  <datafield tag="949" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="F">NATIONALLICENCE</subfield>
   <subfield code="b">NL-springer</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">856</subfield>
   <subfield code="E">40</subfield>
   <subfield code="u">https://doi.org/10.1007/s11220-014-0102-z</subfield>
   <subfield code="q">text/html</subfield>
    <subfield code="z">Online access via DOI</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">100</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Bai</subfield>
   <subfield code="D">Shuang</subfield>
   <subfield code="u">School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">773</subfield>
   <subfield code="E">0-</subfield>
   <subfield code="t">Sensing and Imaging</subfield>
   <subfield code="d">Springer US; http://www.springer-ny.com</subfield>
   <subfield code="g">16/1(2015-12-01), 1-19</subfield>
   <subfield code="x">1557-2064</subfield>
   <subfield code="q">16:1&lt;1</subfield>
   <subfield code="1">2015</subfield>
   <subfield code="2">16</subfield>
   <subfield code="o">11220</subfield>
  </datafield>
 </record>
</collection>
