<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
 <record>
  <leader>     caa a22        4500</leader>
  <controlfield tag="001">605540276</controlfield>
  <controlfield tag="003">CHVBK</controlfield>
  <controlfield tag="005">20210128100911.0</controlfield>
  <controlfield tag="007">cr unu---uuuuu</controlfield>
  <controlfield tag="008">210128e20150901xx      s     000 0 eng  </controlfield>
  <datafield tag="024" ind1="7" ind2="0">
   <subfield code="a">10.1007/s00371-014-1009-3</subfield>
   <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="035" ind1=" " ind2=" ">
   <subfield code="a">(NATIONALLICENCE)springer-10.1007/s00371-014-1009-3</subfield>
  </datafield>
  <datafield tag="245" ind1="0" ind2="0">
   <subfield code="a">3D entity-based stereo matching with ground control points and joint second-order smoothness prior</subfield>
    <subfield code="h">[Electronic data]</subfield>
   <subfield code="c">[Jing Liu, Chunpeng Li, Feng Mei, Zhaoqi Wang]</subfield>
  </datafield>
  <datafield tag="520" ind1="3" ind2=" ">
    <subfield code="a">Disparity estimation for a scene with complex geometric characteristics such as slanted or highly curved surfaces is a basic and important issue in stereo matching. Traditional methods often use first-order smoothness priors that always lead to low-curvature frontal-parallel disparity maps. We propose a stereo framework that views the scene as a set of 3D entities with compact and smooth disparity distributions. The 3D entity-based representation enables some contributions to obtain a precise disparity estimation. A GCPs-plane constraint based on ground control points is used to strengthen the compact distributions of the disparities in each entity by restricting the scope of the disparity variance and reducing matching ambiguities in repetitive or low-texture areas. Furthermore, we have formulated a joint second-order smoothness prior, which combines a geometric weight with the derivative of disparity values. This prior encourages smooth disparity variations inside each entity and means that each entity is biased towards being a 3D planar surface. Segmentation is incorporated as a soft constraint by effectively fusing the advantages of the image color gradient and GCPs-plane. This avoids blending of the foreground and background and retains only the disparity discontinuities from geometrically smooth regions with strong texture gradients. Our framework is formulated as a maximum a posteriori probability estimation problem that is optimized using the fusion-move approach. Evaluation results on the Middlebury benchmark show that the proposed method ranks second among the approximately 152 listed algorithms. In addition, it performs well in real-world scenes.</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
   <subfield code="a">Springer-Verlag Berlin Heidelberg, 2014</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Stereo matching</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Joint second-order smoothness prior</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">GCPs-plane</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Fusion-move</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">3D entity</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Liu</subfield>
   <subfield code="D">Jing</subfield>
   <subfield code="u">The Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, No.6 Kexueyuan South Road Zhongguancun, Haidian District, 100190, Beijing, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Li</subfield>
   <subfield code="D">Chunpeng</subfield>
   <subfield code="u">The Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, No.6 Kexueyuan South Road Zhongguancun, Haidian District, 100190, Beijing, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Mei</subfield>
   <subfield code="D">Feng</subfield>
   <subfield code="u">The Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, No.6 Kexueyuan South Road Zhongguancun, Haidian District, 100190, Beijing, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Wang</subfield>
   <subfield code="D">Zhaoqi</subfield>
   <subfield code="u">The Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, No.6 Kexueyuan South Road Zhongguancun, Haidian District, 100190, Beijing, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="773" ind1="0" ind2=" ">
   <subfield code="t">The Visual Computer</subfield>
   <subfield code="d">Springer Berlin Heidelberg</subfield>
   <subfield code="g">31/9(2015-09-01), 1253-1269</subfield>
   <subfield code="x">0178-2789</subfield>
   <subfield code="q">31:9&lt;1253</subfield>
   <subfield code="1">2015</subfield>
   <subfield code="2">31</subfield>
   <subfield code="o">371</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2="0">
   <subfield code="u">https://doi.org/10.1007/s00371-014-1009-3</subfield>
   <subfield code="q">text/html</subfield>
    <subfield code="z">Online access via DOI</subfield>
  </datafield>
  <datafield tag="898" ind1=" " ind2=" ">
   <subfield code="a">BK010053</subfield>
   <subfield code="b">XK010053</subfield>
   <subfield code="c">XK010000</subfield>
  </datafield>
  <datafield tag="900" ind1=" " ind2="7">
   <subfield code="a">Metadata rights reserved</subfield>
   <subfield code="b">Springer special CC-BY-NC licence</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="908" ind1=" " ind2=" ">
   <subfield code="D">1</subfield>
   <subfield code="a">research-article</subfield>
   <subfield code="2">jats</subfield>
  </datafield>
  <datafield tag="949" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="F">NATIONALLICENCE</subfield>
   <subfield code="b">NL-springer</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">856</subfield>
   <subfield code="E">40</subfield>
   <subfield code="u">https://doi.org/10.1007/s00371-014-1009-3</subfield>
   <subfield code="q">text/html</subfield>
    <subfield code="z">Online access via DOI</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Liu</subfield>
   <subfield code="D">Jing</subfield>
   <subfield code="u">The Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, No.6 Kexueyuan South Road Zhongguancun, Haidian District, 100190, Beijing, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Li</subfield>
   <subfield code="D">Chunpeng</subfield>
   <subfield code="u">The Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, No.6 Kexueyuan South Road Zhongguancun, Haidian District, 100190, Beijing, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Mei</subfield>
   <subfield code="D">Feng</subfield>
   <subfield code="u">The Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, No.6 Kexueyuan South Road Zhongguancun, Haidian District, 100190, Beijing, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Wang</subfield>
   <subfield code="D">Zhaoqi</subfield>
   <subfield code="u">The Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, No.6 Kexueyuan South Road Zhongguancun, Haidian District, 100190, Beijing, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">773</subfield>
   <subfield code="E">0-</subfield>
   <subfield code="t">The Visual Computer</subfield>
   <subfield code="d">Springer Berlin Heidelberg</subfield>
   <subfield code="g">31/9(2015-09-01), 1253-1269</subfield>
   <subfield code="x">0178-2789</subfield>
   <subfield code="q">31:9&lt;1253</subfield>
   <subfield code="1">2015</subfield>
   <subfield code="2">31</subfield>
   <subfield code="o">371</subfield>
  </datafield>
 </record>
</collection>
