<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
 <record>
  <leader>     caa a22        4500</leader>
  <controlfield tag="001">467936811</controlfield>
  <controlfield tag="003">CHVBK</controlfield>
  <controlfield tag="005">20180406152959.0</controlfield>
  <controlfield tag="007">cr unu---uuuuu</controlfield>
  <controlfield tag="008">170328e20060801xx      s     000 0 eng  </controlfield>
  <datafield tag="024" ind1="7" ind2="0">
   <subfield code="a">10.1007/s00138-006-0031-5</subfield>
   <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="035" ind1=" " ind2=" ">
   <subfield code="a">(NATIONALLICENCE)springer-10.1007/s00138-006-0031-5</subfield>
  </datafield>
  <datafield tag="245" ind1="0" ind2="2">
   <subfield code="a">A Robust Approach for Structure from Planar Motion by Stereo Image Sequences</subfield>
   <subfield code="h">[Electronic data]</subfield>
   <subfield code="c">[Tai Chen, Yun-Hui Liu]</subfield>
  </datafield>
  <datafield tag="520" ind1="3" ind2=" ">
   <subfield code="a">This paper proposes a robust method for recovering motion and structure from two image sequences taken by stereo cameras undergoing planar motion. Feature correspondences between images are extracted and refined automatically using the relation of the stereo cameras and the properties of the motion. To improve robustness, an auto-scale random sample consensus (RANSAC) algorithm is adopted in the motion and structure estimation. Unlike other work that recovers epipolar geometry, here we use a random sampling algorithm to recover the 2D motion and to exclude outliers lying both on and off the epipolar lines. Furthermore, the idea of RANSAC is used in structure estimation to exclude outliers from the image sequence. The contribution of this work is an approach that makes structure and motion estimation robust and efficient enough for real applications. With the adoption of the auto-scale technique, the algorithm fully automates the estimation process without any prior information or user-specified parameters such as thresholds. Indoor and outdoor experiments were conducted to verify the performance of the algorithm; the results demonstrate that the proposed algorithm is robust and efficient for planar-motion applications.</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
   <subfield code="a">Springer-Verlag, 2006</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Planar motion</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Random sampling</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Auto-scale</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Stereo cameras</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Chen</subfield>
   <subfield code="D">Tai</subfield>
   <subfield code="u">Automation and Computer-Aided Engineering, The Chinese University of Hong Kong, Hong Kong, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Liu</subfield>
   <subfield code="D">Yun-Hui</subfield>
   <subfield code="u">Automation and Computer-Aided Engineering, The Chinese University of Hong Kong, Hong Kong, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="773" ind1="0" ind2=" ">
   <subfield code="t">Machine Vision and Applications</subfield>
   <subfield code="d">Springer-Verlag</subfield>
   <subfield code="g">17/3(2006-08-01), 197-209</subfield>
   <subfield code="x">0932-8092</subfield>
   <subfield code="q">17:3&lt;197</subfield>
   <subfield code="1">2006</subfield>
   <subfield code="2">17</subfield>
   <subfield code="o">138</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2="0">
   <subfield code="u">https://doi.org/10.1007/s00138-006-0031-5</subfield>
   <subfield code="q">text/html</subfield>
    <subfield code="z">Online access via DOI</subfield>
  </datafield>
  <datafield tag="908" ind1=" " ind2=" ">
   <subfield code="D">1</subfield>
   <subfield code="a">research-article</subfield>
   <subfield code="2">jats</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">856</subfield>
   <subfield code="E">40</subfield>
   <subfield code="u">https://doi.org/10.1007/s00138-006-0031-5</subfield>
   <subfield code="q">text/html</subfield>
    <subfield code="z">Online access via DOI</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Chen</subfield>
   <subfield code="D">Tai</subfield>
   <subfield code="u">Automation and Computer-Aided Engineering, The Chinese University of Hong Kong, Hong Kong, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Liu</subfield>
   <subfield code="D">Yun-Hui</subfield>
   <subfield code="u">Automation and Computer-Aided Engineering, The Chinese University of Hong Kong, Hong Kong, People's Republic of China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">773</subfield>
   <subfield code="E">0-</subfield>
   <subfield code="t">Machine Vision and Applications</subfield>
   <subfield code="d">Springer-Verlag</subfield>
   <subfield code="g">17/3(2006-08-01), 197-209</subfield>
   <subfield code="x">0932-8092</subfield>
   <subfield code="q">17:3&lt;197</subfield>
   <subfield code="1">2006</subfield>
   <subfield code="2">17</subfield>
   <subfield code="o">138</subfield>
  </datafield>
  <datafield tag="900" ind1=" " ind2="7">
   <subfield code="a">Metadata rights reserved</subfield>
   <subfield code="b">Springer special CC-BY-NC licence</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="898" ind1=" " ind2=" ">
   <subfield code="a">BK010053</subfield>
   <subfield code="b">XK010053</subfield>
   <subfield code="c">XK010000</subfield>
  </datafield>
  <datafield tag="949" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="F">NATIONALLICENCE</subfield>
   <subfield code="b">NL-springer</subfield>
  </datafield>
 </record>
</collection>
