<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
 <record>
  <leader>     caa a22        4500</leader>
  <controlfield tag="001">467936943</controlfield>
  <controlfield tag="003">CHVBK</controlfield>
  <controlfield tag="005">20180406153000.0</controlfield>
  <controlfield tag="007">cr unu---uuuuu</controlfield>
  <controlfield tag="008">170328e20061001xx      s     000 0 eng  </controlfield>
  <datafield tag="024" ind1="7" ind2="0">
   <subfield code="a">10.1007/s00138-006-0036-0</subfield>
   <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="035" ind1=" " ind2=" ">
   <subfield code="a">(NATIONALLICENCE)springer-10.1007/s00138-006-0036-0</subfield>
  </datafield>
  <datafield tag="245" ind1="0" ind2="0">
   <subfield code="a">Three-dimensional view-invariant face recognition using a hierarchical pose-normalization strategy</subfield>
    <subfield code="h">[Electronic data]</subfield>
   <subfield code="c">[Martin Levine, Ajit Rajwade]</subfield>
  </datafield>
  <datafield tag="520" ind1="3" ind2=" ">
    <subfield code="a">Face recognition from three-dimensional (3D) shape data has been proposed as a biometric identification method that either supplants or reinforces a two-dimensional approach. This paper presents a 3D face recognition system capable of recognizing the identity of an individual from a 3D facial scan in any pose across the view-sphere, by suitably comparing it with a set of models (all in frontal pose) stored in a database. The system makes use of only 3D shape data, ignoring textural information completely. Firstly, we propose a generic learning strategy using support vector regression [Burges, Data Mining Knowl Discov 2(2):121-167, 1998] to estimate the approximate pose of a 3D head. The support vector machine (SVM) is trained on range images in several poses belonging to only a small set of individuals and is able to coarsely estimate the pose of any unseen facial scan. Secondly, we propose a hierarchical two-step strategy to normalize a facial scan to a nearly frontal pose before performing any recognition. The first step consists of either a coarse normalization making use of facial features or the generic learning algorithm using the SVM. This is followed by an iterative technique to refine the alignment to the frontal pose, which is an improved form of the Iterated Closest Point Algorithm [Besl and McKay, IEEE Trans Pattern Anal Mach Intell 14(2):239-256, 1992]. The latter step produces a residual error value, which can be used as a metric to gauge the similarity between two faces. Our two-step approach is experimentally shown to outperform both of the individual normalization methods in terms of recognition rates, over a very wide range of facial poses. Our strategy has been tested on a large database of 3D facial scans in which the training and test images of each individual were acquired at significantly different times, unlike all except two of the existing 3D face recognition methods.</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
   <subfield code="a">Springer-Verlag, 2006</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">3D face recognition</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Support vector regression</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Iterated closest point</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">3D pose estimation</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Pose normalization</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Residual error</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Levine</subfield>
   <subfield code="D">Martin</subfield>
    <subfield code="u">Centre for Intelligent Machines, McGill University, Room 410, 3480 University Street, McConnell Engineering Building, H3A 2A7, Montreal, Canada</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Rajwade</subfield>
   <subfield code="D">Ajit</subfield>
    <subfield code="u">Centre for Intelligent Machines, McGill University, Room 410, 3480 University Street, McConnell Engineering Building, H3A 2A7, Montreal, Canada</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="773" ind1="0" ind2=" ">
   <subfield code="t">Machine Vision and Applications</subfield>
   <subfield code="d">Springer-Verlag</subfield>
   <subfield code="g">17/5(2006-10-01), 309-325</subfield>
   <subfield code="x">0932-8092</subfield>
   <subfield code="q">17:5&lt;309</subfield>
   <subfield code="1">2006</subfield>
   <subfield code="2">17</subfield>
   <subfield code="o">138</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2="0">
   <subfield code="u">https://doi.org/10.1007/s00138-006-0036-0</subfield>
   <subfield code="q">text/html</subfield>
    <subfield code="z">Online access via DOI</subfield>
  </datafield>
  <datafield tag="908" ind1=" " ind2=" ">
   <subfield code="D">1</subfield>
   <subfield code="a">research-article</subfield>
   <subfield code="2">jats</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">856</subfield>
   <subfield code="E">40</subfield>
   <subfield code="u">https://doi.org/10.1007/s00138-006-0036-0</subfield>
   <subfield code="q">text/html</subfield>
    <subfield code="z">Online access via DOI</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Levine</subfield>
   <subfield code="D">Martin</subfield>
    <subfield code="u">Centre for Intelligent Machines, McGill University, Room 410, 3480 University Street, McConnell Engineering Building, H3A 2A7, Montreal, Canada</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Rajwade</subfield>
   <subfield code="D">Ajit</subfield>
    <subfield code="u">Centre for Intelligent Machines, McGill University, Room 410, 3480 University Street, McConnell Engineering Building, H3A 2A7, Montreal, Canada</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">773</subfield>
   <subfield code="E">0-</subfield>
   <subfield code="t">Machine Vision and Applications</subfield>
   <subfield code="d">Springer-Verlag</subfield>
   <subfield code="g">17/5(2006-10-01), 309-325</subfield>
   <subfield code="x">0932-8092</subfield>
   <subfield code="q">17:5&lt;309</subfield>
   <subfield code="1">2006</subfield>
   <subfield code="2">17</subfield>
   <subfield code="o">138</subfield>
  </datafield>
  <datafield tag="900" ind1=" " ind2="7">
   <subfield code="a">Metadata rights reserved</subfield>
   <subfield code="b">Springer special CC-BY-NC licence</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="898" ind1=" " ind2=" ">
   <subfield code="a">BK010053</subfield>
   <subfield code="b">XK010053</subfield>
   <subfield code="c">XK010000</subfield>
  </datafield>
  <datafield tag="949" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="F">NATIONALLICENCE</subfield>
   <subfield code="b">NL-springer</subfield>
  </datafield>
 </record>
</collection>
