Dynamic classifier selection

Published: 01 May 2018

Highlights

  • An updated taxonomy of Dynamic Selection (DS) techniques is proposed.
  • A review of state-of-the-art dynamic selection techniques is presented.
  • An empirical comparison of 18 dynamic selection techniques is conducted.
  • Recent findings and open research questions in the field are discussed.

Abstract

Multiple Classifier Systems (MCS) have been widely studied as an alternative for increasing accuracy in pattern recognition. One of the most promising MCS approaches is Dynamic Selection (DS), in which the base classifiers are selected on the fly, according to each new sample to be classified. This paper reviews the DS techniques proposed in the literature from both a theoretical and an empirical point of view. We propose an updated taxonomy based on the main characteristics of a dynamic selection system: (1) the methodology used to define the local region in which the competence of the base classifiers is estimated; (2) the source of information used to estimate that competence, such as local accuracy, oracle, ranking, and probabilistic models; and (3) the selection approach, which determines whether a single classifier or an ensemble of classifiers is selected. We categorize the main dynamic selection techniques in the DS literature according to this taxonomy. We also conduct an extensive experimental analysis covering 18 state-of-the-art dynamic selection techniques, as well as static ensemble combination and single classification models; to date, this is the first analysis comparing all the key DS techniques under the same experimental protocol. Finally, we present several perspectives and open research questions that can guide future work in this domain.
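To make the three taxonomy axes concrete, here is a minimal sketch of one of the simplest DCS rules covered by this survey, Overall Local Accuracy (OLA): the local region is the set of k nearest neighbors of the query in a held-out dynamic selection set (DSEL), competence is estimated as each base classifier's accuracy over that region, and a single classifier is selected per query. This is an illustrative sketch assuming a scikit-learn environment; the helper name ola_predict and all parameter values (pool size, k, depths) are ours, not the paper's.

```python
# Minimal OLA (Overall Local Accuracy) sketch; parameter values are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

# Three disjoint sets: training (pool generation), DSEL (competence
# estimation), and test, following the protocol common in the DS literature.
X, y = make_classification(n_samples=1500, random_state=42)
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.5, random_state=42)
X_dsel, X_test, y_dsel, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=42)

# Pool generation: bagged decision trees as base classifiers.
pool = BaggingClassifier(DecisionTreeClassifier(max_depth=5),
                         n_estimators=10, random_state=42).fit(X_train, y_train)

# Axis (1): the local region is the k nearest neighbors of the query in DSEL.
knn = NearestNeighbors(n_neighbors=7).fit(X_dsel)

def ola_predict(X_query):
    """For each query, select the single base classifier with the highest
    accuracy over the query's region of competence (the OLA rule)."""
    _, regions = knn.kneighbors(X_query)
    preds = np.empty(len(X_query), dtype=y.dtype)
    for i, region in enumerate(regions):
        # Axis (2): competence = local accuracy on the region of competence.
        competence = [clf.score(X_dsel[region], y_dsel[region])
                      for clf in pool.estimators_]
        # Axis (3): select a single classifier (DCS) rather than an ensemble.
        best = int(np.argmax(competence))
        preds[i] = pool.estimators_[best].predict(X_query[i:i + 1])[0]
    return preds

print("OLA accuracy:", np.mean(ola_predict(X_test) == y_test))
```

Dynamic ensemble selection (DES) variants differ only on the third axis: instead of the argmax they retain every classifier deemed competent (e.g., all those that correctly classify the entire region, as in KNORA-Eliminate) and combine their votes. Ready-made implementations of many of the reviewed techniques are available, for instance in the DESlib Python library.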



Published In

Information Fusion, Volume 41, Issue C, May 2018, 264 pages

Publisher

Elsevier Science Publishers B. V.

Netherlands


Author Tags

  1. Classifier competence
  2. Dynamic classifier selection
  3. Dynamic ensemble selection
  4. Ensemble of classifiers
  5. Multiple classifier systems
  6. Survey

Qualifiers

  • Research-article


Cited By

  • (2025) DES-AS. Pattern Recognition, 157:C. DOI: 10.1016/j.patcog.2024.110899. Online publication date: 1-Jan-2025.
  • (2025) A meta-heuristic approach to estimate and explain classifier uncertainty. Applied Intelligence, 55:5. DOI: 10.1007/s10489-024-06127-0. Online publication date: 14-Jan-2025.
  • (2024) Regression with multi-expert deferral. Proceedings of the 41st International Conference on Machine Learning, 34738-34759. DOI: 10.5555/3692070.3693483. Online publication date: 21-Jul-2024.
  • (2024) Latest Advancements in Credit Risk Assessment with Machine Learning and Deep Learning Techniques. Cybernetics and Information Technologies, 24:4, 22-44. DOI: 10.2478/cait-2024-0034. Online publication date: 1-Dec-2024.
  • (2024) Learn Together Stop Apart: An Inclusive Approach to Ensemble Pruning. Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1166-1176. DOI: 10.1145/3637528.3672018. Online publication date: 25-Aug-2024.
  • (2024) Adaptive classifier ensemble for multibiometric Verification. Procedia Computer Science, 246:C, 4038-4047. DOI: 10.1016/j.procs.2024.09.242. Online publication date: 1-Jan-2024.
  • (2024) Adaptive regularized ensemble for evolving data stream classification. Pattern Recognition Letters, 180:C, 55-61. DOI: 10.1016/j.patrec.2024.02.026. Online publication date: 1-Apr-2024.
  • (2024) DESReg. Neurocomputing, 580:C. DOI: 10.1016/j.neucom.2024.127487. Online publication date: 1-May-2024.
  • (2024) Meta-learning-based sample discrimination framework for improving dynamic selection of classifiers under label noise. Knowledge-Based Systems, 295:C. DOI: 10.1016/j.knosys.2024.111811. Online publication date: 18-Jul-2024.
  • (2024) Adaptive K values and training subsets selection for optimal K-NN performance on FPGA. Journal of King Saud University - Computer and Information Sciences, 36:5. DOI: 10.1016/j.jksuci.2024.102081. Online publication date: 24-Jul-2024.
