Abstract
We propose a multi-agent algorithm that automatically discovers relevant regularities in a given dataset, while simultaneously determining the set of configurations of the adopted parametric dissimilarity measure that yield compact and separated clusters. Each agent operates independently by performing a Markovian random walk on a weighted graph representation of the input dataset; this representation is induced by the specific parameter configuration of the dissimilarity measure adopted by the agent for its search. During its lifetime, each agent evaluates different parameter configurations and makes decisions autonomously, one cluster at a time. Results show that the algorithm is able to discover parameter configurations that yield a consistent and interpretable collection of clusters. Moreover, we demonstrate that our algorithm achieves performance comparable with similar state-of-the-art algorithms on specific clustering problems. Notably, we compare our method with several graph-based clustering algorithms and a well-known subspace search method.
References
Aggarwal CC, Wolf JL, Yu PS, Procopiuc C, Park JS (1999) Fast algorithms for projected clustering. SIGMOD Rec 28(2):61–72. doi:10.1145/304181.304188
Agogino A, Tumer K (2006) Efficient agent-based cluster ensembles. In: Proceedings of the fifth international joint conference on autonomous agents and multiagent systems, ACM, pp 1079–1086
Alamgir M, von Luxburg U (2010) Multi-agent random walks for local clustering on graphs. In: Proceedings of the IEEE 10th international conference on data mining, pp 18–27. doi:10.1109/ICDM.2010.87
Andersen R, Chung F, Lang K (2006) Local graph partitioning using pagerank vectors. In: Proceedings of the 47th annual IEEE symposium on foundations of computer science. IEEE Computer Society, Washington, DC, USA, pp 475–486. doi:10.1109/FOCS.2006.44
Arora S, Rao S, Vazirani U (2008) Geometry, flows, and graph-partitioning algorithms. Commun ACM 51(10):96–105
Arora S, Rao S, Vazirani U (2009) Expander flows, geometric embeddings and graph partitioning. J ACM 56(2):5
Azran A, Ghahramani Z (2006) Spectral methods for automatic multiscale data clustering. In: Proceedings of the 2006 IEEE computer society conference on computer vision and pattern recognition. IEEE Computer Society, Washington, DC, USA, pp 190–197. doi:10.1109/CVPR.2006.289
Bache K, Lichman M (2013) UCI machine learning repository. http://archive.ics.uci.edu/ml. Accessed 18 Feb 2015
Bereta M, Pedrycz W, Reformat M (2013) Local descriptors and similarity measures for frontal face recognition: a comparative analysis. J Vis Commun Image Represent 24(8):1213–1231. doi:10.1016/j.jvcir.2013.08.004
Bianchi FM, Livi L, Rizzi A (2015) Two density-based k-means initialization algorithms for non-metric data clustering. Pattern Anal Appl. doi:10.1007/s10044-014-0440-4
Bulò SR, Pelillo M (2013) A game-theoretic approach to hypergraph clustering. IEEE Trans Pattern Anal Mach Intell 35(6):1312–1327
Cao L (2009) Data mining and multi-agent integration. Springer, Berlin
Cao J, Wu Z, Wu J, Liu W (2013) Towards information-theoretic K-means clustering for image indexing. Signal Process 93(7):2026–2037. doi:10.1016/j.sigpro.2012.07.030
Chaimontree S, Atkinson K, Coenen F (2012) A framework for multi-agent based clustering. Auton Agents Multi-Agent Syst 25(3):425–446. doi:10.1007/s10458-011-9187-0
Chandrasekhar U, Naga P (2011) Recent trends in ant colony optimization and data clustering: a brief survey. In: 2nd international conference on intelligent agent and multi-agent systems (IAMA), pp 32–36. doi:10.1109/IAMA.2011.6048999
Chang CC (2012) A boosting approach for supervised mahalanobis distance metric learning. Pattern Recogn 45(2):844–862
Chung F (1994) Spectral graph theory. AMS, Providence
De Smet F, Aeyels D (2009) Cluster transitions in a multi-agent clustering model. In: Proceedings of the 48th IEEE conference on decision and control, 2009 held jointly with the 2009 28th Chinese control conference. CDC/CCC 2009, pp 4778–4784. doi:10.1109/CDC.2009.5400314
Delvenne JC, Yaliraki SN, Barahona M (2010) Stability of graph communities across time scales. Proc Natl Acad Sci 107(29):12755–12760. doi:10.1073/pnas.0903215107
Dietterich TG (1997) Machine learning research: four current directions. AI Mag 18(4):97–136
Duin RPW, Pȩkalska E (2012) The dissimilarity space: bridging structural and statistical pattern recognition. Pattern Recogn Lett 33(7):826–832. doi:10.1016/j.patrec.2011.04.019
Ferrer M, Valveny E, Serratosa F, Bardají I, Bunke H (2009) Graph-based k-means clustering: a comparison of the set median versus the generalized median graph. In: Proceedings of the 13th international conference on computer analysis of images and patterns, CAIP ’09. Springer, Berlin, pp 342–350. doi:10.1007/978-3-642-03767-2_42
Forestier G, Gançarski P, Wemmert C (2010) Collaborative clustering with background knowledge. Data Knowl Eng 69(2):211–228
Gallesco C, Mueller S, Popov S (2011) A note on spider walks. ESAIM: Probab Stat 15:390–401
Galluccio L, Michel O, Comon P, Hero AO III (2012) Graph based k-means clustering. Signal Process 92(9):1970–1984. doi:10.1016/j.sigpro.2011.12.009
Galluccio L, Michel O, Comon P, Kliger M, Hero AO III (2013) Clustering with a new distance measure based on a dual-rooted tree. Inf Sci 251:96–113. doi:10.1016/j.ins.2013.05.040
Giannella C, Bhargava R, Kargupta H (2004) Multi-agent systems and distributed data mining. In: Cooperative information agents VIII. Springer, Berlin, pp 1–15
Gisbrecht A, Schleif FM (2015) Metric and non-metric proximity transformations at linear costs. Neurocomputing 167:643–657. doi:10.1016/j.neucom.2015.04.017
Gkantsidis C, Mihail M, Saberi A (2003) Conductance and congestion in power law graphs. ACM SIGMETRICS Perform Eval Rev ACM 31:148–159
Gönen M, Alpaydın E (2011) Multiple kernel learning algorithms. J Mach Learn Res 12:2211–2268
Gorodetsky V, Karsaeyv O, Samoilov V (2003) Multi-agent technology for distributed data mining and classification. In: Proceedings of the IEEE/WIC international conference on intelligent agent technology, IEEE, pp 438–441
Han J, Kamber M, Pei J (2011) Data mining: concepts and techniques. Elsevier, New York
Hoory S, Linial N, Wigderson A (2006) Expander graphs and their applications. Bull Am Math Soc 43(4):439–561
Izakian H, Pedrycz W, Jamal I (2013) Clustering spatio-temporal data: an augmented fuzzy C-means. IEEE Trans Fuzzy Syst 21(5):855–868. doi:10.1109/TFUZZ.2012.2233479
Kannan R, Vempala S, Vetta A (2004) On clusterings: good, bad and spectral. J ACM 51(3):497–515
Kim SW, Duin RPW (2009) A combine-correct-combine scheme for optimizing dissimilarity-based classifiers. In: Bayro-Corrochano E, Eklundh JO (eds) Progress in pattern recognition, image analysis, computer vision, and applications, LNCS, vol 5856. Springer, Berlin, pp 425–432. doi:10.1007/978-3-642-10268-4_49
Klusch M, Lodi S, Moro G (2003) Agent-based distributed data mining: the KDEC scheme. In: Klusch M, Bergamaschi S, Edwards P, Petta P (eds) Intelligent information agents, vol 2586. Springer, Berlin, pp 104–122. doi:10.1007/3-540-36561-3_5
Komorowski J, Zytkow J (1997) Principles of data mining and knowledge discovery. Springer, Berlin
Kriegel HP, Kröger P, Zimek A (2009) Clustering high-dimensional data: a survey on subspace clustering, pattern-based clustering, and correlation clustering. ACM Trans Knowl Discov Data 3(1):1:1–1:58. doi:10.1145/1497577.1497578
Leighton T, Rao S (1999) Multicommodity max-flow min-cut theorems and their use in designing approximation algorithms. J ACM 46:787–832. doi:10.1145/331524.331526
Livi L, Rizzi A, Sadeghian A (2014) Optimized dissimilarity space embedding for labeled graphs. Inf Sci 266:47–64. doi:10.1016/j.ins.2014.01.005
Livi L, Rizzi A, Sadeghian A (2015) Granular modeling and computing approaches for intelligent analysis of non-geometric data. Appl Soft Comput 27:567–574. doi:10.1016/j.asoc.2014.08.072
Lovász L (1996) Random walks on graphs: a survey. In: Miklós D, Sós VT, Szőnyi T (eds) Combinatorics, Paul Erdős is eighty, vol 2. János Bolyai Mathematical Society, Budapest, pp 353–398
Madry A (2010) Fast approximation algorithms for cut-based problems in undirected graphs. In: Proceedings of the 51st annual IEEE symposium on foundations of computer science, pp 245–254. doi:10.1109/FOCS.2010.30
Metropolis N, Rosenbluth AW, Rosenbluth MN, Teller AH, Teller E (1953) Equation of state calculations by fast computing machines. J Chem Phys 21(6):1087–1092. doi:10.1063/1.1699114
Mitra S, Banka H, Pedrycz W (2006) Rough-fuzzy collaborative clustering. IEEE Trans Syst Man Cybern Part B: Cybern 36(4):795–805
Mu Y, Ding W, Tao D (2013) Local discriminative distance metrics ensemble learning. Pattern Recogn 46(8):2337–2349. doi:10.1016/j.patcog.2013.01.010
Negenborn RR, Hug-Glanzmann G, De Schutter B, Andersson G (2010) A novel coordination strategy for multi-agent control using overlapping subnetworks with application to power systems. In: Mohammadpour J, Grigoriadis KM (eds) Efficient modeling and control of large-scale systems. Springer, Norwell, pp 251–278
Nguyen TM, Wu QMJ (2013) Dynamic fuzzy clustering and its application in motion segmentation. IEEE Trans Fuzzy Syst 21(6):1019–1031. doi:10.1109/TFUZZ.2013.2240689
North MJ (2014) A theoretical formalism for analyzing agent-based models. Complex Adapt Syst Model 2(1):3
Parsons L, Haque E, Liu H (2004) Subspace clustering for high dimensional data: a review. ACM SIGKDD Explor Newslett 6(1):90–105
Pedrycz W (2002) Collaborative fuzzy clustering. Pattern Recogn Lett 23(14):1675–1686
Pedrycz W (2005) Knowledge-based clustering: from data to information granules. Wiley, New York
Pedrycz W (2013) Proximity-based clustering: a search for structural consistency in data with semantic blocks of features. IEEE Trans Fuzzy Syst 21(5):978–982. doi:10.1109/TFUZZ.2012.2236842
Prodromidis A, Chan P, Stolfo S (2000) Meta-learning in distributed data mining systems: issues and approaches. Adv Distrib Parallel Knowl Discov 3:81–114
Provost FJ, Hennessy DN (1996) Scaling up: distributed machine learning with cooperation. In: Proceedings of the thirteenth national conference on artificial intelligence, vol 1, pp 74–79
Queiroz S, de Carvalho FDAT, Lechevallier Y (2013) Nonlinear multicriteria clustering based on multiple dissimilarity matrices. Pattern Recogn 46(12):3383–3394. doi:10.1016/j.patcog.2013.06.008
Sarma AD, Gollapudi S, Panigrahy R (2011) Estimating pagerank on graph streams. J ACM 58(3):13
Shen C, Kim J, Liu F, Wang L, van den Hengel A (2014) Efficient dual approach to distance metric learning. IEEE Trans Neural Netw Learn Syst 25(2):394–406. doi:10.1109/TNNLS.2013.2275170
Spielman DA, Teng SH (2013) A local clustering algorithm for massive graphs and its application to nearly linear time graph partitioning. SIAM J Comput 42(1):1–26
Tabrizi SA, Shakery A, Asadpour M, Abbasi M, Tavallaie MA (2013) Personalized pagerank clustering: a graph clustering algorithm based on random walks. Phys A: Stat Mech Appl 392(22):5772–5785. doi:10.1016/j.physa.2013.07.021
Trefethen LN, Bau D III (1997) Numerical linear algebra, vol 50. SIAM, Philadelphia
Vidal R (2010) A tutorial on subspace clustering. IEEE Signal Process Mag 28(2):52–68
Yang L, Jin R, Mummert L, Sukthankar R, Goode A, Zheng B, Hoi SCH, Satyanarayanan M (2010) A boosting framework for visuality-preserving distance metric learning and its application to medical image retrieval. IEEE Trans Pattern Anal Mach Intell 32(1):30–44. doi:10.1109/TPAMI.2008.273
Yin X, Shu T, Huang Q (2012) Semi-supervised fuzzy clustering with metric learning and entropy regularization. Knowl-Based Syst 35:304–311
Zhang H, Yu J, Wang M, Liu Y (2012) Semi-supervised distance metric learning based on local linear regression for data clustering. Neurocomputing 93:100–105
Zhang L, Pedrycz W, Lu W, Liu X, Zhang L (2014) An interval weighed fuzzy c-means clustering by genetically guided alternating optimization. Expert Syst Appl 41(13):5960–5971
Acknowledgments
The work presented in this paper has been partially funded by Telecom Italia S.p.A. The authors wish to thank Corrado Moiso, Software System Architect at Telecom Italia—Future Centre, for his valuable comments, ideas, and assistance in the writing and undertaking of the research summarized here.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Communicated by V. Loia.
Appendix: Graph conductance and related approximation
Given a graph \(G=(\mathcal {V}, \mathcal {E})\), with \(n=|\mathcal {V}|\), the conductance of a cut induced by the subset \(\mathcal {S}\subset \mathcal {V}\) is defined as:
$$\phi (\mathcal {S}) = \frac{\sum _{u \in \mathcal {S}, v \in \overline{\mathcal {S}}} A(u, v)}{\min (A(\mathcal {S}), A(\overline{\mathcal {S}}))}, \qquad (9)$$
where \(\overline{\mathcal {S}}=\mathcal {V}{\setminus }\mathcal {S}\) and \(A(\mathcal {S}) = \sum _{u,v \in \mathcal {S}} A(u, v)\) is the (weighted) number of edges in \(\mathcal {S}\). If the graph is weighted, then A(u, v) contains the weight (i.e., the strength) of the edge between u and v; if it is unweighted, then A(u, v) is equal to one if and only if there is an edge between u and v. While computing the conductance (9) of any subset \(\mathcal {S}\subset \mathcal {V}\) is simple, computing the conductance of the graph, \(\Phi (G)\), consists in solving the following NP-hard problem (Chung 1994):
$$\Phi (G) = \min _{\mathcal {S}\subset \mathcal {V}} \phi (\mathcal {S}). \qquad (10)$$
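As a concrete illustration, the following NumPy sketch evaluates Eq. (9) for a given cut. The helper name `cut_conductance` is hypothetical and not part of the paper's implementation; it simply follows the definitions above.

```python
import numpy as np

def cut_conductance(A, S):
    """Conductance phi(S) of the cut induced by the vertex subset S (Eq. 9).

    A: (weighted) adjacency matrix; S: boolean mask of the subset's vertices.
    """
    S = np.asarray(S, dtype=bool)
    S_bar = ~S
    cut = A[np.ix_(S, S_bar)].sum()            # total weight crossing the cut
    vol_S = A[np.ix_(S, S)].sum()              # A(S): weight of edges inside S
    vol_S_bar = A[np.ix_(S_bar, S_bar)].sum()  # A(S-bar)
    return cut / min(vol_S, vol_S_bar)

# Example: two triangles joined by a single edge; the natural cut is sparse.
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
print(cut_conductance(A, [True, True, True, False, False, False]))
```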
Finding the global optimum by exhaustive search is infeasible for all but very small graphs. As a consequence, many approximation techniques have been proposed (Leighton and Rao 1999; Arora et al. 2009; Madry 2010; Gkantsidis et al. 2003; Sarma et al. 2011).
Among the many techniques, spectral methods (Chung 1994) provide a very powerful approach. Let \(\mathbf {A}\) be the (weighted) adjacency matrix of G, and let \(\mathbf {D}\) be the diagonal matrix containing the vertex degrees:
$$\mathbf {D}(u, u) = \sum _{v \in \mathcal {V}} \mathbf {A}(u, v). \qquad (11)$$
Let us define the transition matrix \(\mathbf {M}\) as:
$$\mathbf {M} = \mathbf {D}^{-1} \mathbf {A}. \qquad (12)$$
The matrix \(\mathbf {M}\) is not always symmetric. Therefore, it does not always admit a spectral representation of the form \(\mathbf {M}=\mathbf {U}\Lambda \mathbf {U}^T\), where \(\Lambda \) is a diagonal matrix containing the n eigenvalues and \(\mathbf {U}\) is a matrix containing the corresponding eigenvectors. Nevertheless, \(\mathbf {M}\) is conjugate to a symmetric matrix, \(\mathbf {N}\), which is defined as follows:
$$\mathbf {N} = \mathbf {D}^{1/2} \mathbf {M} \mathbf {D}^{-1/2} = \mathbf {D}^{-1/2} \mathbf {A} \mathbf {D}^{-1/2}. \qquad (13)$$
\(\mathbf {M}\) and \(\mathbf {N}\) have the same eigenvalues, and their eigenvectors are related by a linear transformation (Lovász 1996; Chung 1994). The eigenvalues of \(\mathbf {N}\) satisfy the following relation:
$$1 = \lambda _1 \ge \lambda _2 \ge \cdots \ge \lambda _n \ge -1. \qquad (14)$$
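A brief NumPy illustration of these definitions on a small synthetic weighted graph (an assumption made only for the example; any symmetric adjacency matrix of a connected graph would do):

```python
import numpy as np

# Small random weighted graph: symmetric adjacency matrix, zero diagonal.
rng = np.random.default_rng(0)
A = rng.random((6, 6))
A = (A + A.T) / 2.0
np.fill_diagonal(A, 0.0)

deg = A.sum(axis=1)                         # vertex degrees, Eq. (11)
M = np.diag(1.0 / deg) @ A                  # transition matrix, Eq. (12)
D_isqrt = np.diag(1.0 / np.sqrt(deg))
N = D_isqrt @ A @ D_isqrt                   # symmetric conjugate of M, Eq. (13)

lam = np.sort(np.linalg.eigvalsh(N))[::-1]  # eigenvalues, descending
assert np.isclose(lam[0], 1.0) and lam[-1] >= -1.0 - 1e-9  # Eq. (14)
```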
The celebrated Cheeger inequality (Lovász 1996) establishes an important relation between the conductance of G (10) and \(\lambda _2\):
$$\frac{\Phi (G)^2}{2} \le 1 - \lambda _2 \le 2\,\Phi (G), \qquad (15)$$
which can be rewritten as:
$$\frac{1-\lambda _2}{2} \le \Phi (G) \le \sqrt{2(1-\lambda _2)}. \qquad (16)$$
Since \(\phi (\mathcal {S}) \ge \Phi (G)\) for any \(\mathcal {S}\subset \mathcal {V}\), Eq. (16) can be used as a local reference for a specific graph. According to Eq. (16), it is possible to define the lower and upper bounds of the graph conductance as
$$l = \frac{1-\lambda _2}{2}, \qquad u = \sqrt{2(1-\lambda _2)}, \qquad (17)$$
which can be used for evaluating how close the conductance of a cut, \(\phi (\mathcal {S})\), is to the conductance of the whole graph, \(\Phi (G)\).
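A minimal sketch of Eq. (17), here computing \(\lambda _2\) exactly via a dense eigendecomposition (cubic cost; the power method discussed next replaces this step at scale). The helper name `conductance_bounds` is an illustrative assumption:

```python
import numpy as np

def conductance_bounds(N):
    """Bounds (l, u) of Eq. (17) on Phi(G), from the exact lambda_2 of N."""
    lam = np.sort(np.linalg.eigvalsh(N))[::-1]
    lam2 = lam[1]
    return (1.0 - lam2) / 2.0, np.sqrt(2.0 * (1.0 - lam2))
```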
To make use of the bounds in Eq. (17), we need to compute the eigenvalue \(\lambda _2\). The QR decomposition (Trefethen and Bau 1997) is the most straightforward numerical technique for this purpose; however, it is characterized by a cubic computational complexity. To overcome this drawback, we can use the power method described in Trefethen and Bau (1997), a fast algorithm that is able to compute in pseudo-linear time the largest eigenvalue and the related eigenvector of a positive semi-definite (PSD) matrix. Notably, the computational complexity of the power method is \(O((|\mathcal {V}| + |\mathcal {E}|) \frac{1}{\epsilon } \log {\frac{|\mathcal {V}|}{\epsilon }})\), where \(\epsilon > 0\) is the approximation tolerance used in computing \(\lambda _2\). Algorithm 1 describes the pseudo-code of the power method. The algorithm starts by randomly initializing a vector, \(\mathbf {x}_0 \in [-1, 1]^n\); it returns the vector \(\mathbf {x}_t = \widetilde{\mathbf {M}}^t \mathbf {x}_0\), where \(\widetilde{\mathbf {M}}\) is the PSD matrix under analysis. The following theorem is an important result for the convergence of the power method (Arora et al. 2008; Hoory et al. 2006).
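Since Algorithm 1's pseudo-code is not reproduced here, the following is a minimal sketch consistent with the description above. The per-step normalization is a numerical safeguard only: it rescales \(\mathbf {x}_t\) without changing its direction.

```python
import numpy as np

def power_method(M_tilde, t, rng=None):
    """Sketch of a power iteration in the spirit of Algorithm 1:
    t multiplications by the PSD matrix M_tilde, starting from a
    random x_0 in [-1, 1]^n; returns a unit vector along M_tilde^t x_0."""
    rng = rng or np.random.default_rng()
    n = M_tilde.shape[0]
    x = rng.uniform(-1.0, 1.0, size=n)  # random start x_0
    for _ in range(t):
        x = M_tilde @ x
        x /= np.linalg.norm(x)          # rescale to avoid overflow
    return x
```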
Theorem 1
For every PSD matrix \(\widetilde{\mathbf {M}}\), positive integer t, parameter \(\epsilon > 0\), and vector \(\mathbf {x}_0\) drawn uniformly at random from \([-1,1]^n\), with probability at least 3/16 over the choice of \(\mathbf {x}_0\) the power method outputs a vector \(\mathbf {x}_t\) such that
$$\frac{\mathbf {x}_t^T \widetilde{\mathbf {M}} \mathbf {x}_t}{\mathbf {x}_t^T \mathbf {x}_t} \ge \lambda _1 (1-\epsilon ) \frac{1}{1+4n(1-\epsilon )^{2t}}, \qquad (18)$$
where \(\lambda _1\) is the largest eigenvalue.
The eigenvector \(\mathbf {v}_1\) related to \(\lambda _1\) can be approximated by \(\frac{\mathbf {x}_t}{\Vert \mathbf {x}_t \Vert }\). Given a PSD matrix \(\widetilde{\mathbf {M}}\) and the (unit-norm) eigenvector \(\mathbf {v}_1\) related to \(\lambda _1\), we can compute \(\lambda _2\) by means of Algorithm 2, which is a variation of Algorithm 1. Algorithm 2 returns a vector \(\mathbf {x}_t \bot \mathbf {v}_1\), such that
$$\frac{\mathbf {x}_t^T \widetilde{\mathbf {M}} \mathbf {x}_t}{\mathbf {x}_t^T \mathbf {x}_t} \ge \lambda _2 (1-\epsilon ) \frac{1}{1+4n(1-\epsilon )^{2t}}. \qquad (19)$$
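Algorithm 2 is likewise not reproduced here; a plausible sketch keeps the iterate orthogonal to \(\mathbf {v}_1\) (assumed unit-norm) by repeated projection, so the iteration converges toward the second eigenvector instead of the first:

```python
import numpy as np

def power_method_deflated(M_tilde, v1, t, rng=None):
    """Sketch in the spirit of Algorithm 2: power iteration restricted
    to the subspace orthogonal to the unit top eigenvector v1."""
    rng = rng or np.random.default_rng()
    n = M_tilde.shape[0]
    x = rng.uniform(-1.0, 1.0, size=n)
    x -= (x @ v1) * v1                  # project x_0 orthogonally to v1
    for _ in range(t):
        x = M_tilde @ x
        x -= (x @ v1) * v1              # re-project to counter round-off drift
        x /= np.linalg.norm(x)
    return x
```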
The power method can only be applied to a PSD matrix, which is not the case of \(\mathbf {N}\), whose eigenvalues are the ones in Eq. (14). Consider now the matrix \(\overline{\mathbf {N}} = \mathbf {N} + \mathbf {I}\). Every eigenvector of \(\mathbf {N}\) with eigenvalue \(\lambda \) is clearly also an eigenvector of \(\overline{\mathbf {N}}\) with eigenvalue \(1+\lambda \) and vice versa; thus \(\overline{\mathbf {N}}\) has eigenvalues \(2 = 1+\lambda _1 > 1+\lambda _2 \ge \cdots \ge 1+\lambda _n \ge 0\) and is therefore PSD.
Using \(\mathbf {v}_1\) (an eigenvector related to \(\lambda _1\), computed with Algorithm 1), and setting \(t = O(\epsilon ^{-1} \log {\frac{n}{\epsilon }})\), Algorithm 2 will find, with probability at least 3/16, a vector \(\mathbf {x}_t \bot \mathbf {1}\) such that
$$\frac{\mathbf {x}_t^T \overline{\mathbf {N}} \mathbf {x}_t}{\mathbf {x}_t^T \mathbf {x}_t} \ge (1+\lambda _2)(1-\epsilon ). \qquad (20)$$
From Eq. (20) it is possible to derive an approximation of \(\lambda _2\), which in turn can be used in Eq. (17).
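Putting the pieces together, reusing `N`, `power_method`, and `power_method_deflated` from the sketches above (the iteration count `t=200` is purely illustrative):

```python
# Shift N so it becomes PSD: eigenvalues become 1 + lambda_i >= 0.
N_bar = N + np.eye(N.shape[0])
v1 = power_method(N_bar, t=200)           # ~ top eigenvector (eigenvalue 2)
x = power_method_deflated(N_bar, v1, t=200)
lam2 = (x @ N_bar @ x) / (x @ x) - 1.0    # Rayleigh quotient of Eq. (20), shifted back
l, u = (1.0 - lam2) / 2.0, np.sqrt(2.0 * (1.0 - lam2))  # bounds of Eq. (17)
```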