Abstract
Ensemble learning, also known as the multiple classifier system or committee-based learning, trains and combines multiple learners to solve the same learning problem.
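As a concrete illustration of "trains and combines multiple learners", the sketch below builds a small bagging-style ensemble of decision trees and combines their predictions by majority vote. This is a minimal sketch, not material from the chapter: the synthetic dataset, the choice of decision trees as base learners, the ensemble size of 25, and the use of NumPy and scikit-learn are all illustrative assumptions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy binary classification data (illustrative assumption, not from the chapter).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.RandomState(0)
n_learners = 25
learners = []
for _ in range(n_learners):
    # Train each base learner on a bootstrap sample of the training set (bagging-style).
    idx = rng.randint(0, len(X_tr), size=len(X_tr))
    learners.append(DecisionTreeClassifier(random_state=0).fit(X_tr[idx], y_tr[idx]))

# Combine the individual predictions by majority vote.
votes = np.stack([t.predict(X_te) for t in learners])      # shape (n_learners, n_test)
ensemble_pred = (votes.mean(axis=0) >= 0.5).astype(int)    # labels are 0/1 here

print("single tree accuracy:", round(learners[0].score(X_te, y_te), 3))
print("ensemble accuracy:   ", round(float((ensemble_pred == y_te).mean()), 3))

The combined vote typically outperforms any single bootstrap-trained tree on this kind of data, which is the basic motivation for training and combining multiple learners.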
Copyright information
© 2021 Springer Nature Singapore Pte Ltd.
About this chapter
Zhou, Z.-H. (2021). Ensemble Learning. In: Machine Learning. Springer, Singapore. https://doi.org/10.1007/978-981-15-1967-3_8
Print ISBN: 978-981-15-1966-6
Online ISBN: 978-981-15-1967-3