<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
 <record>
  <leader>     caa a22        4500</leader>
  <controlfield tag="001">606160477</controlfield>
  <controlfield tag="003">CHVBK</controlfield>
  <controlfield tag="005">20210128100630.0</controlfield>
  <controlfield tag="007">cr unu---uuuuu</controlfield>
  <controlfield tag="008">210128e20150201xx      s     000 0 eng  </controlfield>
  <datafield tag="024" ind1="7" ind2="0">
   <subfield code="a">10.1007/s00521-014-1708-8</subfield>
   <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="035" ind1=" " ind2=" ">
   <subfield code="a">(NATIONALLICENCE)springer-10.1007/s00521-014-1708-8</subfield>
  </datafield>
  <datafield tag="245" ind1="0" ind2="0">
   <subfield code="a">Affect-insensitive speaker recognition systems via emotional speech clustering using prosodic features</subfield>
    <subfield code="h">[Electronic data]</subfield>
   <subfield code="c">[Dongdong Li, Yubo Yuan, Zhaohui Wu, Yingchun Yang]</subfield>
  </datafield>
  <datafield tag="520" ind1="3" ind2=" ">
    <subfield code="a">Voice-based biometric security systems involving only neutral speech have achieved promising performance. However, speakers are very likely to fail recognition when the test data exhibit multiple emotions. This paper aims to address the mismatch of emotional states between training and testing speech. We discuss different modeling strategies that incorporate the emotions (affects) of speakers into the training stage of a Mandarin-based speaker recognition system and propose an alternative approach that optimizes the utilization of the limited affective speech. The training speech is partitioned and clustered by the trends of the prosodic variations, and multiple models are built from the clustered speech for a given speaker. The prosodic differences are characterized by a combination of features describing the changes in the fundamental frequency and energy contours. The experiments were carried out on the Mandarin Affective Speech Corpus. The results show a 73.37% relative improvement in recognition rate over traditional speaker verification and a 63.53% relative improvement over structural training-based systems.</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
   <subfield code="a">The Natural Computing Applications Forum, 2014</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Speaker recognition</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Emotional speech clustering</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Prosodic features</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Li</subfield>
   <subfield code="D">Dongdong</subfield>
   <subfield code="u">Department of Computer Science and Engineering, East China University of Science and Technology, 200237, Shanghai, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Yuan</subfield>
   <subfield code="D">Yubo</subfield>
   <subfield code="u">Department of Computer Science and Engineering, East China University of Science and Technology, 200237, Shanghai, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Wu</subfield>
   <subfield code="D">Zhaohui</subfield>
   <subfield code="u">Department of Computer Science and Technology, Zhejiang University, 310027, Hangzhou, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Yang</subfield>
   <subfield code="D">Yingchun</subfield>
   <subfield code="u">Department of Computer Science and Technology, Zhejiang University, 310027, Hangzhou, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="773" ind1="0" ind2=" ">
   <subfield code="t">Neural Computing and Applications</subfield>
   <subfield code="d">Springer London</subfield>
   <subfield code="g">26/2(2015-02-01), 473-484</subfield>
   <subfield code="x">0941-0643</subfield>
   <subfield code="q">26:2&lt;473</subfield>
   <subfield code="1">2015</subfield>
   <subfield code="2">26</subfield>
   <subfield code="o">521</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2="0">
   <subfield code="u">https://doi.org/10.1007/s00521-014-1708-8</subfield>
   <subfield code="q">text/html</subfield>
    <subfield code="z">Online access via DOI</subfield>
  </datafield>
  <datafield tag="898" ind1=" " ind2=" ">
   <subfield code="a">BK010053</subfield>
   <subfield code="b">XK010053</subfield>
   <subfield code="c">XK010000</subfield>
  </datafield>
  <datafield tag="900" ind1=" " ind2="7">
   <subfield code="a">Metadata rights reserved</subfield>
   <subfield code="b">Springer special CC-BY-NC licence</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="908" ind1=" " ind2=" ">
   <subfield code="D">1</subfield>
   <subfield code="a">research-article</subfield>
   <subfield code="2">jats</subfield>
  </datafield>
  <datafield tag="949" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="F">NATIONALLICENCE</subfield>
   <subfield code="b">NL-springer</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">856</subfield>
   <subfield code="E">40</subfield>
   <subfield code="u">https://doi.org/10.1007/s00521-014-1708-8</subfield>
   <subfield code="q">text/html</subfield>
    <subfield code="z">Online access via DOI</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Li</subfield>
   <subfield code="D">Dongdong</subfield>
   <subfield code="u">Department of Computer Science and Engineering, East China University of Science and Technology, 200237, Shanghai, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Yuan</subfield>
   <subfield code="D">Yubo</subfield>
   <subfield code="u">Department of Computer Science and Engineering, East China University of Science and Technology, 200237, Shanghai, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Wu</subfield>
   <subfield code="D">Zhaohui</subfield>
   <subfield code="u">Department of Computer Science and Technology, Zhejiang University, 310027, Hangzhou, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Yang</subfield>
   <subfield code="D">Yingchun</subfield>
   <subfield code="u">Department of Computer Science and Technology, Zhejiang University, 310027, Hangzhou, China</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">773</subfield>
   <subfield code="E">0-</subfield>
   <subfield code="t">Neural Computing and Applications</subfield>
   <subfield code="d">Springer London</subfield>
   <subfield code="g">26/2(2015-02-01), 473-484</subfield>
   <subfield code="x">0941-0643</subfield>
   <subfield code="q">26:2&lt;473</subfield>
   <subfield code="1">2015</subfield>
   <subfield code="2">26</subfield>
   <subfield code="o">521</subfield>
  </datafield>
 </record>
</collection>
