<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
 <record>
  <leader>     caa a22        4500</leader>
  <controlfield tag="001">46903596X</controlfield>
  <controlfield tag="003">CHVBK</controlfield>
  <controlfield tag="005">20180323132756.0</controlfield>
  <controlfield tag="007">cr unu---uuuuu</controlfield>
  <controlfield tag="008">170328e19921001xx      s     000 0 eng  </controlfield>
  <datafield tag="024" ind1="7" ind2="0">
   <subfield code="a">10.1007/BF00163583</subfield>
   <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="035" ind1=" " ind2=" ">
   <subfield code="a">(NATIONALLICENCE)springer-10.1007/BF00163583</subfield>
  </datafield>
  <datafield tag="245" ind1="0" ind2="0">
   <subfield code="a">Panoramic representation for route recognition by a mobile robot</subfield>
   <subfield code="h">[Electronic data]</subfield>
   <subfield code="c">[Jiang Zheng, Saburo Tsuji]</subfield>
  </datafield>
  <datafield tag="520" ind1="3" ind2=" ">
   <subfield code="a">Here we explore a new theme in robot navigation: route recognition. It faces the problems of visual sensing, spatial memory construction, and scene recognition in a global world. The strategy of this work is route description from experience; that is, a robot acquires a route description from route views taken during a trial move and then uses it to guide navigation along the same route. For the cognition phase, we propose a new representation of the scenes along a route, termed the panoramic representation. It is obtained by scanning side views along the route and provides rich information, such as 2D projections of the scenes called the panoramic view (PV) and the generalized panoramic view (GPV), a path-oriented 2 1/2D sketch, and a path description, while containing only a small amount of data. The continuous PV and GPV are efficient to process, compared with fusing discrete views into a complete route model. In the recognition phase, the robot matches the panoramic representation memorized during the trial move against that obtained from incoming images so that it can locate and orient itself. We employ dynamic programming and circular dynamic programming for coarse matching of GPVs and PVs, and feature matching for fine verification. The wide fields of view of the GPV and PV make the scene recognition reliable.</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
   <subfield code="a">Kluwer Academic Publishers, 1992</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Zheng</subfield>
   <subfield code="D">Jiang</subfield>
   <subfield code="u">Department of Control Engineering, Osaka University, 560, Toyonaka, Osaka, Japan</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Tsuji</subfield>
   <subfield code="D">Saburo</subfield>
   <subfield code="u">Department of Control Engineering, Osaka University, 560, Toyonaka, Osaka, Japan</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="773" ind1="0" ind2=" ">
   <subfield code="t">International Journal of Computer Vision</subfield>
   <subfield code="d">Kluwer Academic Publishers</subfield>
   <subfield code="g">9/1(1992-10-01), 55-76</subfield>
   <subfield code="x">0920-5691</subfield>
   <subfield code="q">9:1&lt;55</subfield>
   <subfield code="1">1992</subfield>
   <subfield code="2">9</subfield>
   <subfield code="o">11263</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2="0">
   <subfield code="u">https://doi.org/10.1007/BF00163583</subfield>
   <subfield code="q">text/html</subfield>
    <subfield code="z">Online access via DOI</subfield>
  </datafield>
  <datafield tag="908" ind1=" " ind2=" ">
   <subfield code="D">1</subfield>
   <subfield code="a">research-article</subfield>
   <subfield code="2">jats</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">856</subfield>
   <subfield code="E">40</subfield>
   <subfield code="u">https://doi.org/10.1007/BF00163583</subfield>
   <subfield code="q">text/html</subfield>
    <subfield code="z">Online access via DOI</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Zheng</subfield>
   <subfield code="D">Jiang</subfield>
   <subfield code="u">Department of Control Engineering, Osaka University, 560, Toyonaka, Osaka, Japan</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Tsuji</subfield>
   <subfield code="D">Saburo</subfield>
   <subfield code="u">Department of Control Engineering, Osaka University, 560, Toyonaka, Osaka, Japan</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">773</subfield>
   <subfield code="E">0-</subfield>
   <subfield code="t">International Journal of Computer Vision</subfield>
   <subfield code="d">Kluwer Academic Publishers</subfield>
   <subfield code="g">9/1(1992-10-01), 55-76</subfield>
   <subfield code="x">0920-5691</subfield>
   <subfield code="q">9:1&lt;55</subfield>
   <subfield code="1">1992</subfield>
   <subfield code="2">9</subfield>
   <subfield code="o">11263</subfield>
  </datafield>
  <datafield tag="900" ind1=" " ind2="7">
   <subfield code="a">Metadata rights reserved</subfield>
   <subfield code="b">Springer special CC-BY-NC licence</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="898" ind1=" " ind2=" ">
   <subfield code="a">BK010053</subfield>
   <subfield code="b">XK010053</subfield>
   <subfield code="c">XK010000</subfield>
  </datafield>
  <datafield tag="949" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="F">NATIONALLICENCE</subfield>
   <subfield code="b">NL-springer</subfield>
  </datafield>
 </record>
</collection>
