<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
 <record>
  <leader>     caa a22        4500</leader>
  <controlfield tag="001">605477892</controlfield>
  <controlfield tag="003">CHVBK</controlfield>
  <controlfield tag="005">20210128100403.0</controlfield>
  <controlfield tag="007">cr unu---uuuuu</controlfield>
  <controlfield tag="008">210128e20151001xx      s     000 0 eng  </controlfield>
  <datafield tag="024" ind1="7" ind2="0">
   <subfield code="a">10.1007/s10994-014-5447-y</subfield>
   <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="035" ind1=" " ind2=" ">
   <subfield code="a">(NATIONALLICENCE)springer-10.1007/s10994-014-5447-y</subfield>
  </datafield>
  <datafield tag="245" ind1="0" ind2="0">
   <subfield code="a">Measuring the accuracy of currency crisis prediction with combined classifiers in designing early warning system</subfield>
   <subfield code="h">[Electronic data]</subfield>
   <subfield code="c">[Nor Ramli, Mohd Ismail, Hooy Wooi]</subfield>
  </datafield>
  <datafield tag="520" ind1="3" ind2=" ">
   <subfield code="a">Is prediction accuracy affected by the method used to combine classifiers in an ensemble? This paper is a sequel to our earlier experiment, which sought to answer that question. Previously, we compared single machine-learning classifiers against traditional statistical methods and found that the single classifiers performed well relative to the statistical methods. Still, we believe there is a further way to increase the prediction accuracy of these classifiers. In this paper, we conduct another experiment by combining these classifiers to predict currency crises in 25 countries. The combined classifiers are support vector machine with k-nearest neighbor, logistic regression with k-nearest neighbor, and LADTree with k-nearest neighbor. These three combined classifiers are tested on 13 selected macroeconomic indicators, with data spanning the first quarter of 1980 to the third quarter of 2012. The results show that the three combined classifiers achieve similarly high and comparable accuracy on average. Our proposed method, the nearest neighbor tree, has the highest area under the ROC curve among the three combined classifiers, although it requires longer running times than the others.</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
   <subfield code="a">The Author(s), 2014</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Machine learning</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Combined classifiers</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Currency crisis</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">Early warning system</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2="7">
   <subfield code="a">k-Nearest neighbor method</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Ramli</subfield>
   <subfield code="D">Nor</subfield>
   <subfield code="u">School of Mathematical Sciences, Universiti Sains Malaysia, Penang, Malaysia</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Ismail</subfield>
   <subfield code="D">Mohd</subfield>
   <subfield code="u">School of Mathematical Sciences, Universiti Sains Malaysia, Penang, Malaysia</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Wooi</subfield>
   <subfield code="D">Hooy</subfield>
   <subfield code="u">School of Management, Universiti Sains Malaysia, Penang, Malaysia</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="773" ind1="0" ind2=" ">
   <subfield code="t">Machine Learning</subfield>
   <subfield code="d">Springer US; http://www.springer-ny.com</subfield>
   <subfield code="g">101/1-3(2015-10-01), 85-103</subfield>
   <subfield code="x">0885-6125</subfield>
   <subfield code="q">101:1-3&lt;85</subfield>
   <subfield code="1">2015</subfield>
   <subfield code="2">101</subfield>
   <subfield code="o">10994</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2="0">
   <subfield code="u">https://doi.org/10.1007/s10994-014-5447-y</subfield>
   <subfield code="q">text/html</subfield>
    <subfield code="z">Online access via DOI</subfield>
  </datafield>
  <datafield tag="898" ind1=" " ind2=" ">
   <subfield code="a">BK010053</subfield>
   <subfield code="b">XK010053</subfield>
   <subfield code="c">XK010000</subfield>
  </datafield>
  <datafield tag="900" ind1=" " ind2="7">
   <subfield code="a">Metadata rights reserved</subfield>
   <subfield code="b">Springer special CC-BY-NC licence</subfield>
   <subfield code="2">nationallicence</subfield>
  </datafield>
  <datafield tag="908" ind1=" " ind2=" ">
   <subfield code="D">1</subfield>
   <subfield code="a">research-article</subfield>
   <subfield code="2">jats</subfield>
  </datafield>
  <datafield tag="949" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="F">NATIONALLICENCE</subfield>
   <subfield code="b">NL-springer</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">856</subfield>
   <subfield code="E">40</subfield>
   <subfield code="u">https://doi.org/10.1007/s10994-014-5447-y</subfield>
   <subfield code="q">text/html</subfield>
    <subfield code="z">Online access via DOI</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Ramli</subfield>
   <subfield code="D">Nor</subfield>
   <subfield code="u">School of Mathematical Sciences, Universiti Sains Malaysia, Penang, Malaysia</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Ismail</subfield>
   <subfield code="D">Mohd</subfield>
   <subfield code="u">School of Mathematical Sciences, Universiti Sains Malaysia, Penang, Malaysia</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">700</subfield>
   <subfield code="E">1-</subfield>
   <subfield code="a">Wooi</subfield>
   <subfield code="D">Hooy</subfield>
   <subfield code="u">School of Management, Universiti Sains Malaysia, Penang, Malaysia</subfield>
   <subfield code="4">aut</subfield>
  </datafield>
  <datafield tag="950" ind1=" " ind2=" ">
   <subfield code="B">NATIONALLICENCE</subfield>
   <subfield code="P">773</subfield>
   <subfield code="E">0-</subfield>
   <subfield code="t">Machine Learning</subfield>
   <subfield code="d">Springer US; http://www.springer-ny.com</subfield>
   <subfield code="g">101/1-3(2015-10-01), 85-103</subfield>
   <subfield code="x">0885-6125</subfield>
   <subfield code="q">101:1-3&lt;85</subfield>
   <subfield code="1">2015</subfield>
   <subfield code="2">101</subfield>
   <subfield code="o">10994</subfield>
  </datafield>
 </record>
</collection>
