Abstract
In many learning tasks, obtaining labeled instances is expensive, while unlabeled instances can be collected easily. Active learners can significantly reduce labeling cost by selecting only the most informative instances for labeling. Graph-based learning methods have become popular in machine learning in recent years because of their clear mathematical framework and strong performance with suitable models. However, they incur heavy computation when the whole graph is very large. In this paper, we propose a scalable algorithm for graph-based active learning. The proposed method proceeds as follows. First, a backbone graph is constructed instead of the whole graph. Then the instances in the backbone graph are chosen for labeling. Finally, the instances with the maximum expected information gain are sampled repeatedly under the graph regularization model. Experiments show that the proposed method achieves smaller data utilization and average deficiency than other popular active learners on selected datasets from semi-supervised learning benchmarks.
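The selection loop described in the abstract can be sketched in code. The following is an illustrative toy sketch, not the authors' implementation: it replaces the backbone-graph construction with a plain RBF similarity graph over all points, and scores candidates by expected entropy reduction under a harmonic-function (graph regularization) model with binary ±1 labels. All function names are ours.

```python
import numpy as np

def rbf_graph(X, sigma=1.0):
    # Pairwise RBF similarities as edge weights; zero diagonal (no self-loops).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def harmonic_predict(W, labeled_idx, y_labeled):
    # Harmonic (graph regularization) solution: L_uu f_u = W_ul y_l,
    # where L = D - W is the graph Laplacian. Labels are +1 / -1.
    n = W.shape[0]
    labeled = set(labeled_idx)
    u = np.array([i for i in range(n) if i not in labeled])
    l = np.array(labeled_idx)
    L = np.diag(W.sum(1)) - W
    f = np.zeros(n)
    f[l] = y_labeled
    f[u] = np.linalg.solve(L[np.ix_(u, u)], W[np.ix_(u, l)] @ f[l])
    return f, u

def entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def query_max_info_gain(W, labeled_idx, y_labeled):
    # Pick the unlabeled node whose label, in expectation over its
    # predicted label distribution, most reduces total predictive entropy.
    f, u = harmonic_predict(W, labeled_idx, y_labeled)
    p = (f + 1) / 2                 # map scores in [-1, 1] to probabilities
    base = entropy(p[u]).sum()
    best, best_gain = None, -np.inf
    for q in u:
        gain = 0.0
        for y_q, weight in ((+1.0, p[q]), (-1.0, 1 - p[q])):
            f2, u2 = harmonic_predict(W, list(labeled_idx) + [q],
                                      np.append(y_labeled, y_q))
            p2 = (f2 + 1) / 2
            gain += weight * (base - entropy(p2[u2]).sum())
        if gain > best_gain:
            best, best_gain = q, gain
    return best
```

Repeatedly calling `query_max_info_gain`, labeling the returned instance, and refitting realizes the sampling stage; the paper's scalability comes from running this on the much smaller backbone graph rather than, as here, on the full graph.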
Copyright information
© 2008 Springer-Verlag Berlin Heidelberg
Cite this paper
Zhao, W., Long, J., Zhu, E., Liu, Y. (2008). A Scalable Algorithm for Graph-Based Active Learning. In: Preparata, F.P., Wu, X., Yin, J. (eds) Frontiers in Algorithmics. FAW 2008. Lecture Notes in Computer Science, vol 5059. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-69311-6_32
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-69310-9
Online ISBN: 978-3-540-69311-6
eBook Packages: Computer Science (R0)