Abstract
Mixtures of Gaussians are a crucial statistical modeling tool at the heart of many challenging applications in computer vision and machine learning. In this paper, we first describe a novel and efficient algorithm for simplifying Gaussian mixture models (GMMs), based on a generalization of the celebrated k-means quantization algorithm tailored to relative entropy. Experiments show that our method compares favourably with the state of the art in terms of both running time and quality. Second, we propose a practical enhanced approach that yields a hierarchical representation of the simplified GMM while automatically determining the optimal number of Gaussians in the simplified mixture. An application to clustering-based image segmentation is reported.
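To make the clustering idea concrete, the sketch below illustrates, under our own assumptions rather than as the paper's implementation, a Lloyd-style simplification loop over the Gaussian components: each source component is assigned to the closest simplified component under the closed-form Kullback-Leibler divergence between Gaussians, and each cluster is then collapsed by weighted moment matching. The function names `kl_gaussian`, `moment_match`, and `simplify_gmm` are illustrative only.

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """Closed-form KL(N(mu0, S0) || N(mu1, S1)) for multivariate Gaussians."""
    d = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(S0)
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - d
                  + logdet1 - logdet0)

def moment_match(w, mus, Ss):
    """Collapse weighted Gaussian components into a single Gaussian.
    Moment matching minimizes the weighted sum of KL(component || centroid)."""
    w = w / w.sum()
    mu = w @ mus                                  # weighted mean
    S = np.zeros_like(Ss[0])
    for wi, mi, Si in zip(w, mus, Ss):
        diff = (mi - mu)[:, None]
        S += wi * (Si + diff @ diff.T)            # within + between covariance
    return mu, S

def simplify_gmm(weights, mus, Ss, k, n_iter=20, seed=0):
    """Lloyd-style simplification of an n-component GMM down to k components.
    weights: (n,), mus: (n, d), Ss: (n, d, d) NumPy arrays."""
    rng = np.random.default_rng(seed)
    n = len(weights)
    init = rng.choice(n, size=k, replace=False)   # pick k components as seeds
    c_mus, c_Ss = mus[init].copy(), Ss[init].copy()
    c_w = weights[init].copy()
    for _ in range(n_iter):
        # Assignment: closest simplified component in KL divergence.
        labels = np.array([
            np.argmin([kl_gaussian(mus[i], Ss[i], c_mus[j], c_Ss[j])
                       for j in range(k)])
            for i in range(n)
        ])
        # Update: weighted moment matching within each cluster.
        for j in range(k):
            idx = np.flatnonzero(labels == j)
            if idx.size == 0:
                continue                          # keep previous centroid
            c_w[j] = weights[idx].sum()
            c_mus[j], c_Ss[j] = moment_match(weights[idx], mus[idx], Ss[idx])
    return c_w / c_w.sum(), c_mus, c_Ss
```

The sided divergence used here (component-to-centroid KL) admits a closed-form centroid via moment matching, which is why the update step stays cheap; the paper's relative-entropy formulation generalizes this k-means-style scheme.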
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Garcia, V., Nielsen, F., Nock, R. (2010). Levels of Details for Gaussian Mixture Models. In: Zha, H., Taniguchi, Ri., Maybank, S. (eds) Computer Vision – ACCV 2009. ACCV 2009. Lecture Notes in Computer Science, vol 5995. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-12304-7_48
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-12303-0
Online ISBN: 978-3-642-12304-7