Abstract
Finite mixture models are widely used to perform model-based clustering of multivariate data sets. Most existing mixture models work with linear data, whereas real-life applications may involve multivariate data having both circular and linear characteristics. No existing mixture model can accommodate such correlated circular–linear data. In this paper, we consider designing a mixture model for multivariate data having one circular variable. In order to construct a circular–linear joint distribution with proper inclusion of correlation terms, we use the semi-wrapped Gaussian distribution. Further, we construct a mixture model (termed SWGMM) of such joint distributions. This mixture model is capable of approximating the distribution of multi-modal circular–linear data. An unsupervised learning scheme for the mixture parameters, based on the expectation–maximization (EM) method, is proposed. Clustering is performed using the maximum a posteriori criterion. To evaluate the performance of SWGMM, we choose the task of color image segmentation in LCH space. We present comprehensive results and compare SWGMM with existing methods. Our study reveals that the proposed mixture model outperforms the other methods in most cases.
Appendices
Appendix A: MLE for wrapped Gaussian distribution
Let us assume that a wrapped Gaussian distribution has the pdf given in Eq. 6. First, we write the log likelihood for the \(n\) given samples \(\varvec{\Upsilon } = \{\theta _1, \ldots , \theta _n\}\). The log likelihood takes the following form.
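A plausible form, assuming Eq. 6 defines the wrapped Gaussian pdf as an infinite sum of shifted Gaussian densities, is
\[
\Lambda(\varvec{\Upsilon},\mu^{c},\sigma^{c}) \;=\; \sum_{i=1}^{n} \log \sum_{w \in \mathbb{Z}} \mathcal{N}\!\bigl(\theta_i + 2\pi w;\, \mu^{c}, (\sigma^{c})^2\bigr),
\]
where \(\mathcal{N}(\cdot\,;\mu^{c},(\sigma^{c})^2)\) denotes the unwrapped Gaussian density.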
In order to make Eq. 20 more tractable, we use Jensen's inequality. For each sample \(\theta _i\), we compute the following.
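Following standard wrapped-Gaussian EM derivations, the likely quantity is the posterior probability of the wrapping index \(w\),
\[
\alpha(w|\theta_i) \;=\; \frac{\mathcal{N}\!\bigl(\theta_i + 2\pi w;\, \mu^{c}, (\sigma^{c})^2\bigr)}{\sum_{w' \in \mathbb{Z}} \mathcal{N}\!\bigl(\theta_i + 2\pi w';\, \mu^{c}, (\sigma^{c})^2\bigr)},
\]
which satisfies \(\sum_{w} \alpha(w|\theta_i) = 1\) for every sample.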
Next, we include \(\alpha (w|\theta _i)\) inside the log likelihood to obtain the following form.
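Multiplying and dividing each term of the inner sum by \(\alpha(w|\theta_i)\) would give
\[
\Lambda(\varvec{\Upsilon},\mu^{c},\sigma^{c}) \;=\; \sum_{i=1}^{n} \log \sum_{w \in \mathbb{Z}} \alpha(w|\theta_i)\, \frac{\mathcal{N}\!\bigl(\theta_i + 2\pi w;\, \mu^{c}, (\sigma^{c})^2\bigr)}{\alpha(w|\theta_i)} .
\]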
Now we may apply Jensen's inequality for concave functions. Then we have the following modified form of \(\Lambda\).
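Under the reconstruction above, the bound takes the form
\[
\Lambda(\varvec{\Upsilon},\mu^{c},\sigma^{c}) \;\ge\; \sum_{i=1}^{n} \sum_{w \in \mathbb{Z}} \alpha(w|\theta_i) \Bigl[\log \mathcal{N}\!\bigl(\theta_i + 2\pi w;\, \mu^{c}, (\sigma^{c})^2\bigr) \;-\; \log \alpha(w|\theta_i)\Bigr].
\]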
Only the first term of Eq. 23 is significant when maximizing \(\Lambda (\varvec{\Upsilon },\mu ^{c},\sigma ^{c})\). The rest of the work is straightforward: maximizing Eq. 23 with respect to \(\mu ^{c}\) and \(\sigma ^{c}\) yields the following estimates.
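The resulting estimates have the standard wrapped-Gaussian EM form (stated here as a reconstruction):
\[
\hat{\mu}^{c} \;=\; \frac{1}{n}\sum_{i=1}^{n}\sum_{w \in \mathbb{Z}} \alpha(w|\theta_i)\,(\theta_i + 2\pi w), \qquad
(\hat{\sigma}^{c})^2 \;=\; \frac{1}{n}\sum_{i=1}^{n}\sum_{w \in \mathbb{Z}} \alpha(w|\theta_i)\,(\theta_i + 2\pi w - \hat{\mu}^{c})^2 .
\]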
The above procedure can easily be extended to semi-wrapped Gaussian distributions involving one circular variable.
Appendix B: EM for wrapped Gaussian mixture model
The wrapped Gaussian mixture model (WGMM) for a random variable \(\Theta\) has the following form:
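A plausible reconstruction of this density, writing \(K\) for the number of mixture components (the symbol is an assumption here) and \(\mu^{c}_h,\sigma^{c}_h\) for the parameters of the \(h\)th wrapped Gaussian component, is
\[
f(\theta) \;=\; \sum_{h=1}^{K} P_h\, f_h(\theta), \qquad f_h(\theta) \;=\; \sum_{w \in \mathbb{Z}} \mathcal{N}\!\bigl(\theta + 2\pi w;\, \mu^{c}_h, (\sigma^{c}_h)^2\bigr),
\]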
where \(P_h\) is, as before, the mixing proportion of the \(h\)th mixture component.
Let \(\varvec{\Upsilon } = \{\theta _1, \ldots , \theta _n\}\) be a set of \(n\) samples drawn from WGMM. Then the log likelihood for the mixture distribution is given as follows.
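A likely form of this penalized log likelihood is
\[
\Lambda \;=\; \sum_{i=1}^{n} \log \sum_{h=1}^{K} P_h\, f_h(\theta_i) \;+\; \lambda\Bigl(\sum_{h=1}^{K} P_h - 1\Bigr),
\]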
where \(\lambda\) is the Lagrange multiplier enforcing \(\sum _h P_h = 1\).
In order to estimate the mixture parameters, we may apply the EM methodology. Here, the cluster information for a pixel is taken as the hidden variable. Then the distribution of hidden variables can be written as follows:
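In standard EM fashion, and using the component densities \(f_h\) introduced above, the posterior over the hidden cluster label would be
\[
\beta(h|\theta_i) \;=\; \frac{P_h\, f_h(\theta_i)}{\sum_{l=1}^{K} P_l\, f_l(\theta_i)} .
\]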
With the insertion of \(\beta (h|\theta _i)\) and after applying Jensen's inequality, we have the following modified form of the log likelihood.
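With the same conventions, the modified (lower-bound) form would read
\[
\Lambda \;\ge\; \sum_{i=1}^{n} \sum_{h=1}^{K} \beta(h|\theta_i)\Bigl[\log P_h + \log f_h(\theta_i) - \log \beta(h|\theta_i)\Bigr] \;+\; \lambda\Bigl(\sum_{h=1}^{K} P_h - 1\Bigr).
\]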
The mixing proportion \(P_h\) is estimated by Eq. 30.
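Maximizing the bound above with respect to \(P_h\) subject to \(\sum_h P_h = 1\) gives the standard update, which is presumably what Eq. 30 states:
\[
\hat{P}_h \;=\; \frac{1}{n} \sum_{i=1}^{n} \beta(h|\theta_i).
\]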
Let us now turn to estimating the remaining mixture parameters. This can be done by maximizing the log likelihood w.r.t. \(\mu ^c_h\) and \(\sigma ^c_h\). Since the mixing proportions are no longer needed at this stage, we have the following expression for estimating \(\mu ^c_h\) and \(\sigma ^c_h\).
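Collecting the terms of the bound that involve \(\mu^{c}_h\) and \(\sigma^{c}_h\), this expression is presumably of the form
\[
\Phi_{\mu^{c}_h,\sigma^{c}_h}(\varvec{\Upsilon}) \;=\; \sum_{i=1}^{n} \beta(h|\theta_i)\, \log f_h(\theta_i).
\]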
This expression can be simplified with the help of \(\alpha (w|\theta _i)\) defined in Eq. 21. In particular, with \(\alpha (w|\theta _i)\) written in terms of the component parameters \(\mu ^c_h\) and \(\sigma ^c_h\), and using Jensen's inequality again, we can express \(\Phi _{\mu ^c_h,\sigma ^c_h}(\varvec{\Upsilon })\) in the following “sum of logarithm” form:
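A consistent “sum of logarithm” form, under the same reconstruction, is
\[
\Phi_{\mu^{c}_h,\sigma^{c}_h}(\varvec{\Upsilon}) \;\ge\; \sum_{i=1}^{n} \beta(h|\theta_i) \sum_{w \in \mathbb{Z}} \alpha(w|\theta_i) \Bigl[\log \mathcal{N}\!\bigl(\theta_i + 2\pi w;\, \mu^{c}_h, (\sigma^{c}_h)^2\bigr) - \log \alpha(w|\theta_i)\Bigr].
\]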
Now, since \(\beta (h|\theta _i)\) does not depend on \(w\), it can be moved inside the summation over \(w\), which lets us write \(q(h,w|\theta _i) = \beta (h|\theta _i)\,\alpha (w|\theta _i)\). In other words, we take both the cluster and the wrapping index as hidden information, so the hidden-variable distribution becomes \(q(h,w|\theta _i)\). Finally, we have the following log likelihood to maximize for the parameters.
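Substituting \(q(h,w|\theta_i)\), the part that matters for estimating \(\mu^{c}_h\) and \(\sigma^{c}_h\) would be
\[
\sum_{i=1}^{n} \sum_{w \in \mathbb{Z}} q(h,w|\theta_i)\, \log \mathcal{N}\!\bigl(\theta_i + 2\pi w;\, \mu^{c}_h, (\sigma^{c}_h)^2\bigr),
\]
the remaining terms being constant with respect to these parameters.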
On the basis of this, the estimates of \(\mu ^c_h\) and \(\sigma ^c_h\) are obtained as follows:
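In their standard EM form, reconstructed under the above assumptions, these estimates are
\[
\hat{\mu}^{c}_h \;=\; \frac{\sum_{i=1}^{n}\sum_{w \in \mathbb{Z}} q(h,w|\theta_i)\,(\theta_i + 2\pi w)}{\sum_{i=1}^{n} \beta(h|\theta_i)}, \qquad
(\hat{\sigma}^{c}_h)^2 \;=\; \frac{\sum_{i=1}^{n}\sum_{w \in \mathbb{Z}} q(h,w|\theta_i)\,(\theta_i + 2\pi w - \hat{\mu}^{c}_h)^2}{\sum_{i=1}^{n} \beta(h|\theta_i)} .
\]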
This completes the estimation for WGMM. Since SWGMM involves only one circular variable, Eqs. 34 and 35 can be extended to SWGMM in a straightforward way.
Appendix C: Computations for SWGMM
For notational simplicity, we assume that the mean vector \(\mu ^{\rm cl}\) (in Eq. 7) is a zero vector. Then, expanding Eq. 7, the semi-wrapped Gaussian density becomes:
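A likely expanded form, writing \(\varvec{p}\) for the circular–linear observation with the circular coordinate placed first, \(d\) for its dimension, and \(\varvec{e}_1\) for the first standard basis vector (all notational assumptions), is
\[
f(\varvec{p}) \;=\; \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}} \sum_{w \in \mathbb{Z}} \exp\!\Bigl(-\tfrac{1}{2}\,(\varvec{p} + 2\pi w\, \varvec{e}_1)^T\, \Sigma^{-1}\, (\varvec{p} + 2\pi w\, \varvec{e}_1)\Bigr);
\]
the shorthand \(\varvec{p} \pm 2\pi\) used below refers to this shift of the circular coordinate only, and the \(\pm 2\pi\) terms suggest that the sum is truncated to \(w \in \{-1, 0, 1\}\) in computation.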
Let the \((i,j)\)th element of the matrix \(\Sigma ^{-1}\) be denoted by \(a_{i,j}\). The inner products \((\varvec{p}\pm 2\pi )^T \Sigma ^{-1}(\varvec{p}\pm 2\pi )\) can be expressed as follows:
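Assuming, as in the density above, that the \(2\pi\) shift applies only to the circular (first) coordinate, the expansion is
\[
(\varvec{p} \pm 2\pi)^T\, \Sigma^{-1}\, (\varvec{p} \pm 2\pi) \;=\; \varvec{p}^T \Sigma^{-1} \varvec{p} \;\pm\; 4\pi C \;+\; 4\pi^2 a_{1,1},
\]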
where \(C\) is the product \(\varvec{p}^T C_1\) and \(C_1\) is the first column of \(\Sigma ^{-1}\). Hence, we compute \(\varvec{p}^T \Sigma ^{-1}\varvec{p}\) once and obtain the other two inner products by simple additions and subtractions. Note also that \(C\) is a byproduct of computing \(\varvec{p}^T \Sigma ^{-1}\varvec{p}\) and hence requires no extra computation.
Finally, we can express \((\varvec{p}_i \pm 2\pi )(\varvec{p}_i \pm 2\pi )^T\) in terms of \(\varvec{p}_i\varvec{p}_i^T\) by changing only the first row and the first column of \(\varvec{p}_i\varvec{p}_i^T\). This simplifies the estimation of the covariance matrix (Eq. 14).
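As an illustration only (not the authors' implementation), the sketch below computes the three quadratic forms and the shifted outer products from a single evaluation of \(\Sigma^{-1}\varvec{p}\) and \(\varvec{p}\varvec{p}^T\); the function and variable names are placeholders.

import numpy as np

def shifted_quadratic_terms(p, Sigma_inv):
    """Quadratic forms (p + 2*pi*w*e1)^T Sigma^{-1} (p + 2*pi*w*e1) for w in {-1, 0, +1},
    reusing one matrix-vector product, and the shifted outer products
    (p + 2*pi*w*e1)(p + 2*pi*w*e1)^T obtained by editing only the first row and column."""
    two_pi = 2.0 * np.pi
    v = Sigma_inv @ p                   # Sigma^{-1} p, computed once
    q0 = float(p @ v)                   # p^T Sigma^{-1} p
    C = float(v[0])                     # equals p^T C_1 because Sigma^{-1} is symmetric
    a11 = float(Sigma_inv[0, 0])
    quad = {0: q0,
            +1: q0 + 2.0 * two_pi * C + two_pi ** 2 * a11,   # + 4*pi*C + 4*pi^2*a_{1,1}
            -1: q0 - 2.0 * two_pi * C + two_pi ** 2 * a11}   # - 4*pi*C + 4*pi^2*a_{1,1}

    pp = np.outer(p, p)                 # p p^T, computed once
    outer = {0: pp}
    for w in (+1, -1):
        m = pp.copy()
        m[0, :] += w * two_pi * p       # first row picks up 2*pi*w * p^T
        m[:, 0] += w * two_pi * p       # first column picks up 2*pi*w * p
        m[0, 0] += (w * two_pi) ** 2    # the (1,1) entry gains the extra (2*pi*w)^2 term
        outer[w] = m
    return quad, outer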
Cite this article
Roy, A., Parui, S.K. & Roy, U. SWGMM: a semi-wrapped Gaussian mixture model for clustering of circular–linear data. Pattern Anal Applic 19, 631–645 (2016). https://doi.org/10.1007/s10044-014-0418-2