<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
 <record>
  <leader>     caa a22        4500</leader>
  <controlfield tag="001">606159940</controlfield>
  <controlfield tag="003">CHVBK</controlfield>
  <controlfield tag="005">20210128100627.0</controlfield>
  <controlfield tag="007">cr unu---uuuuu</controlfield>
  <controlfield tag="008">210128e20151101xx      s     000 0 eng  </controlfield>
  <datafield tag="024" ind1="7" ind2="0">
   <subfield code="a">10.1007/s00521-015-1863-6</subfield>
   <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="035" ind1=" " ind2=" ">
   <subfield code="a">(NATIONALLICENCE)springer-10.1007/s00521-015-1863-6</subfield>
  </datafield>
  <datafield tag="245" ind1="0" ind2="0">
   <subfield code="a">Fusing hierarchical multi-scale local binary patterns and virtual mirror samples to perform face recognition</subfield>
    <subfield code="h">[Electronic data]</subfield>
   <subfield code="c">[Zi Liu, Xiaoning Song, Zhenmin Tang]</subfield>
  </datafield>
  <datafield tag="520" ind1="3" ind2=" ">
    <subfield code="a">The horizontal axial symmetry of faces is a useful and interesting property that has been successfully applied in face detection. In general, face images are not captured under a strictly frontal and natural pose, which implies that extra training samples can be generated by exploiting facial symmetry. In this paper, we develop a framework that fuses virtual mirror-synthesized training samples, used as bases, with hierarchical multi-scale local binary pattern (LBP) features for classification. More specifically, in the first stage of the proposed method, the sampling uncertainty of the linear approximation model is effectively alleviated by constructing extra synthesized mirror training samples generated from the original images. Subsequently, in the second stage, features are extracted from the resulting dictionary using a hierarchical multi-scale LBP scheme. In the third stage, we combine the synthesized samples with the original ones to describe a test sample and simultaneously determine the relatively informative training samples by exploiting the reconstruction deviation of all the dictionary atoms. In addition, the introduced sparse coding process with a weak l1 constraint is highly competitive: accuracy is improved while complexity is reduced. Finally, a new decomposition coefficient of the remaining samples is determined, which yields the final classification decision. Experimental results on various benchmark face databases demonstrate the effectiveness of our algorithm.</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
   <subfield code="a">The Natural Computing Applications Forum, 2015</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Sparse representation</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Face recognition</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Sparse residuals measurement</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Image classification</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Liu</subfield>
   <subfield code="D">Zi</subfield>
   <subfield code="u">School of Computer Science and Engineering, Nanjing University of Science and Technology, 210094, Nanjing, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Song</subfield>
   <subfield code="D">Xiaoning</subfield>
   <subfield code="u">School of Internet of Things Engineering, Jiangnan University, 214122, Wuxi, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Tang</subfield>
   <subfield code="D">Zhenmin</subfield>
   <subfield code="u">School of Computer Science and Engineering, Nanjing University of Science and Technology, 210094, Nanjing, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="773" ind1="0" ind2=" ">
   <subfield code="t">Neural Computing and Applications</subfield>
   <subfield code="d">Springer London</subfield>
   <subfield code="g">26/8(2015-11-01), 2013-2026</subfield>
   <subfield code="x">0941-0643</subfield>
   <subfield code="q">26:8&lt;2013</subfield>
   <subfield code="1">2015</subfield>
   <subfield code="2">26</subfield>
   <subfield code="o">521</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2="0">
   <subfield code="u">https://doi.org/10.1007/s00521-015-1863-6</subfield>
   <subfield code="q">text/html</subfield>
    <subfield code="z">Online access via DOI</subfield>
  </datafield>
  <datafield tag="898" ind1=" " ind2=" ">
   <subfield code="a">BK010053</subfield>
   <subfield code="b">XK010053</subfield>
   <subfield code="c">XK010000</subfield>
  </datafield>
  <datafield tag="900" ind1=" " ind2="7">
   <subfield code="a">Metadata rights reserved</subfield>
   <subfield code="b">Springer special CC-BY-NC licence</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="908" ind1=" " ind2=" ">
   <subfield code="D">1</subfield>
   <subfield code="a">research-article</subfield>
   <subfield code="2">jats</subfield>
  </datafield>
  <datafield tag="949" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="F">NATIONALLICENCE</subfield>
   <subfield code="b">NL-springer</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">856</subfield>
   <subfield code="E">40</subfield>
   <subfield code="u">https://doi.org/10.1007/s00521-015-1863-6</subfield>
   <subfield code="q">text/html</subfield>
    <subfield code="z">Online access via DOI</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Liu</subfield>
   <subfield code="D">Zi</subfield>
   <subfield code="u">School of Computer Science and Engineering, Nanjing University of Science and Technology, 210094, Nanjing, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Song</subfield>
   <subfield code="D">Xiaoning</subfield>
   <subfield code="u">School of Internet of Things Engineering, Jiangnan University, 214122, Wuxi, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Tang</subfield>
   <subfield code="D">Zhenmin</subfield>
   <subfield code="u">School of Computer Science and Engineering, Nanjing University of Science and Technology, 210094, Nanjing, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">773</subfield>
   <subfield code="E">0-</subfield>
   <subfield code="t">Neural Computing and Applications</subfield>
   <subfield code="d">Springer London</subfield>
   <subfield code="g">26/8(2015-11-01), 2013-2026</subfield>
   <subfield code="x">0941-0643</subfield>
   <subfield code="q">26:8&lt;2013</subfield>
   <subfield code="1">2015</subfield>
   <subfield code="2">26</subfield>
   <subfield code="o">521</subfield>
  </datafield>
 </record>
</collection>
