US20140047089A1 - System and method for supervised network clustering - Google Patents
- Publication number
- US20140047089A1 (U.S. patent application Ser. No. 13/572,179)
- Authority
- US
- United States
- Prior art keywords
- network
- nodes
- computer
- densities
- threshold
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/46—Cluster building
Definitions
- the most suitable level of supervision may depend upon the underlying data and the task at hand. In most scenarios involving very large networks, a plethora of partial labels may be available in order to supervise the cluster creation process. Some examples are as follows:
- the social network for a given user may be defined in a variety of ways, depending upon their professional contacts, family, school, or alma-mater contacts. Interesting information about the user (such as their school info) could be considered labels. These different communities of the same user are often quite different from one another and represent different segments of the user's social life. The supervision process can help in focusing the community detection approach in a particular direction.
- the present invention addresses these issues by providing a fully adaptive approach, in which the level of supervision for network clustering can be controlled adaptively depending upon the application at hand.
- the two extreme versions of this scheme can perform either purely unsupervised clustering or fully supervised collective classification.
- the present invention demonstrates a density-based approach, which is able to satisfy both goals.
- it provides a highly adaptive network clustering algorithm, which can discover clusters of varying shape and density, and also incorporate different levels of supervision in the clustering process.
- Referring to FIGS. 1-8 , exemplary embodiments of the method and structures of the present invention are explained.
- N is the set of nodes
- A is the set of edges. It is assumed that the number of nodes in N is n.
- the edges in the network may be associated with a weight, which indicates the strength of the relationship.
- the weight of edge (i,j) is denoted by w_{ij}.
- the weights might represent the number of papers authored by a pair of individuals.
- the weight w_{ij} is assumed to be 1, though the present invention allows the use of a weight for greater generality, if needed.
- the subset N_s of nodes is labeled, and there are l different labels denoted by {1 . . . l}. Therefore, all nodes in N_s are labeled with one of the values drawn from {1 . . . l}, whereas the remaining nodes in N − N_s are unlabeled.
- An exemplary goal of the present invention is to partition the nodes in N into k different sets of nodes C_1 . . . C_k (i.e., clusters).
- N = C_1 ∪ C_2 ∪ . . . ∪ C_k ∪ O.
- the labeling of the set of nodes in N_s can be used to supervise the creation of the clusters C_1 . . . C_k at a variety of different levels, depending upon the application at hand.
- For a given node i, the set of edges incident on it is denoted by I(i).
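As a purely illustrative sketch of this notation (the node set N, weighted edges A, the incident-edge set I(i), and a partially labeled subset N_s), with all names and values hypothetical:

```python
# Illustrative sketch (not from the patent): a small labeled network.
# Nodes N, weighted undirected edges A (each weight w_ij set to 1 here),
# and labels on a subset N_s drawn from the available label values.
N = {0, 1, 2, 3, 4, 5}
A = {(0, 1): 1, (1, 2): 1, (0, 2): 1,   # one dense region
     (3, 4): 1, (4, 5): 1, (3, 5): 1,   # another dense region
     (2, 3): 1}                          # weak bridge between them
labels = {0: 1, 4: 2}                    # N_s = {0, 4}; two label values

def neighbors(i):
    """Nodes sharing an edge in I(i) with node i (edges are undirected)."""
    return [b if a == i else a for (a, b) in A if i in (a, b)]

unlabeled = N - set(labels)              # the nodes in N - N_s
```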
- An exemplary overall idea of the supervised clustering approach of the present invention is to design a density-based method in which clusters are defined in terms of density-connected sets of nodes.
- a density-connected pair of nodes is one in which a path of nodes exists between the pair, such that each node has density above a pre-defined threshold.
- One natural advantage of density-based methods is that they are not restricted to particular topological shapes of the clusters, as are implied by distance-based methods.
- the concept of density is much more challenging to define in the context of structural data.
- the density at a node is defined in terms of random walk probabilities.
- a more intuitive way of understanding the clustering process is in the context of a page-rank style random-walk process in which a surfer traverses the different nodes in the network by randomly picking any neighbor of a node during the walk.
- the page-rank random-walk concept has been used in the Google search engine and is described in more detail below.
- the density of a node, as used in the present invention, is essentially defined in an identical way to the page-rank computation.
- the density of a node i is defined as the steady-state probability that a random surfer on the network, with a pre-defined set of reset probabilities, visits node i at any given transition.
- a random surfer on the network (upon entering a cluster), tends to get trapped in the cluster because the nodes in this dense region tend to have much higher visit probability than the surrounding nodes with lower visit probability. Therefore, a natural way of demarcating the boundaries of this cluster would be to exclude nodes from the cluster for which the density is below a given threshold, and then considering only connected regions of these high density nodes as candidates for a cluster.
- the reset probabilities are specified by a probability bias vector (also referred to as a personalization vector).
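One way such densities could be computed (a sketch under stated assumptions, not the patent's specified implementation) is a power iteration for the steady-state visit probabilities of a random walk that, with probability (1 − d), resets according to the bias vector; the function name, damping value, and iteration count below are illustrative choices:

```python
def walk_densities(n, edges, restart, d=0.85, iters=100):
    """Steady-state visit probabilities of a random walk with restarts.

    edges: undirected (i, j) pairs; restart: the reset distribution over
    the n nodes (e.g., uniform over labeled nodes). Sketch only; the
    patent does not fix these parameter names or the damping value.
    """
    nbrs = [[] for _ in range(n)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    p = list(restart)
    for _ in range(iters):
        # reset mass, then spread each node's mass evenly over its neighbors
        q = [(1 - d) * restart[i] for i in range(n)]
        for i in range(n):
            if nbrs[i]:
                share = d * p[i] / len(nbrs[i])
                for j in nbrs[i]:
                    q[j] += share
        p = q
    return p

edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
restart = [1.0, 0, 0, 0, 0, 0]   # reset only at a single labeled node 0
density = walk_densities(6, edges, restart)
```

Nodes near the restart node end up with visibly higher density than remote nodes, which is the property the thresholding step relies on.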
- FIG. 1 is a detailed description of the architecture for the present invention. It is assumed that the processing is performed on a server 5 which contains a CPU 10 , disk 30 and main memory 20 . While other components may also be available on such a system, these particular components are necessary, in one exemplary embodiment of the present invention, for enabling the effective operation of the system.
- the graph is stored on the server end, along with the intermediate training data, which can be stored either on the disk 30 or main memory 20 .
- the CPU processes the graph continuously over time, and uses the main memory 20 for intermediate bookkeeping of the statistics. These statistics may eventually be stored on the disk 30 .
- the clustering of the stream is also performed on the CPU.
- Input A refers to the user inputs for labels for clustering.
- Output B refers to the output of clustering, and output C refers to the network stored on the disk 30 .
- the input to the clustering process is the nodes, links, and labels.
- the output is the grouping of the nodes into labels.
- FIG. 2 is a description of the overall clustering process.
- the overall clustering process consists of three components.
- the first component is that of estimating the densities of the different nodes.
- the densities can be estimated in terms of the random walk probabilities.
- the second step is to separate out the different connected components based on these density values. Once these connected components have been found, we merge the smaller connected components into larger ones in order to provide each cluster with a critical mass.
- the first step of density estimation is performed in step 210 .
- This step is performed with the use of the labels from the underlying data.
- the labels provide help in supervising the density estimation process. This step will be discussed in detail in FIG. 5 .
- the next step is to find connected components from these densities. This is performed in step 220 in FIG. 2 . This step is described in detail in FIG. 3 .
- In step 230 , the smaller connected components are merged, in order to create the larger clusters from the underlying network.
- FIG. 3 is a description of the process of determining the density connected components from the network, once the densities have already been estimated. It can be considered a detailed description of step 220 of FIG. 2 .
- the major challenge in performing the component extraction process is that the densities of different nodes are quite different, and a single threshold cannot be used to extract all the components.
- our approach generates the different thresholds in an iterative process.
- we remove the nodes based on the density threshold qualification and go back to step 320 . This process is continually repeated, until the remaining network does not contain a sufficient number of nodes.
- FIG. 4 is a description of how the nodes are removed, based on the use of a specific density threshold.
- the first step is determining all the nodes in the network for which the density is above this threshold. This step is described by block 410 of FIG. 4 .
- This new set of nodes induces a much smaller subgraph of the original network. This smaller subgraph is typically disconnected into smaller portions, which are dense regions of the data. These connected components are reported as the smaller clusters determined in this step.
- the remaining network is then processed again for removal of nodes after updating the threshold, as described in FIG. 3 .
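Under stated assumptions (the patent does not fix a particular threshold value or graph representation), the extraction step of blocks 410 and 420 might be sketched as follows, with hypothetical names throughout:

```python
from collections import deque

def extract_components(nodes, edges, density, threshold):
    """Sketch of FIG. 4: keep the nodes whose density exceeds the
    threshold, then report each connected component of the induced
    (typically disconnected) subgraph as a smaller cluster."""
    keep = {v for v in nodes if density[v] > threshold}
    adj = {v: [] for v in keep}
    for i, j in edges:
        if i in keep and j in keep:
            adj[i].append(j)
            adj[j].append(i)
    seen, components = set(), []
    for v in keep:                     # BFS over the induced subgraph
        if v in seen:
            continue
        comp, queue = set(), deque([v])
        seen.add(v)
        while queue:
            u = queue.popleft()
            comp.add(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        components.append(comp)
    return components

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
density = {0: 0.4, 1: 0.35, 2: 0.05, 3: 0.3, 4: 0.25}
clusters = extract_components({0, 1, 2, 3, 4}, edges, density, 0.1)
```

Removing the low-density bridge node 2 disconnects the network into two dense regions, mirroring the iterative removal described above.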
- FIG. 5 is a detailed description of the process of performing the density estimation from the nodes and labels. It can be considered a detailed description of step 210 of FIG. 2 .
- the first step is to perform random walks in the network with restart probabilities defined by the node labels. For example, when the labels are of a particular class only, we can set the restart only at nodes which are labeled by that class. This is performed in step 510 .
- In step 520 , we perform the density estimation with the use of the random walk probabilities from this process.
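The class-restricted restarts of step 510 amount to building one restart distribution per label class; the helper below is a hypothetical sketch (the patent does not prescribe this exact form):

```python
def class_restart_vectors(n, labels):
    """One restart distribution per label class: uniform over that
    class's labeled nodes, zero elsewhere, so the random walk restarts
    only at nodes labeled by that class (sketch of step 510).

    labels: dict mapping node -> class for the labeled subset N_s.
    """
    vectors = {}
    for node, cls in labels.items():
        vectors.setdefault(cls, [0.0] * n)[node] = 1.0
    for cls, v in vectors.items():      # normalize each class vector
        total = sum(v)
        vectors[cls] = [x / total for x in v]
    return vectors

r = class_restart_vectors(6, {0: 'a', 1: 'a', 4: 'b'})
```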
- the process of calculating the random-walk-based probabilities, used in the present invention as the basis for calculating clusters, is discussed in the prior art in S. Brin and L. Page, "The anatomy of a large-scale hypertextual Web search engine", WWW Conference, 1998, the contents of which are incorporated herein by reference.
- the citation (link) graph of the web is an important resource that has largely gone unused in existing web search engines.
- Maps containing as many as 518 million of these hyperlinks, a significant sample of the total, allow rapid calculation of a web page's "PageRank", an objective measure of its citation importance that corresponds well with people's subjective idea of importance.
- Because of this correspondence, PageRank is an excellent way to prioritize the results of web keyword searches. For most popular subjects, a simple text matching search that is restricted to web page titles performs admirably when PageRank prioritizes the results (demo available at google.stanford.edu). For the type of full text searches in the main Google system, PageRank also helps a great deal.
- PageRank extends this idea by not counting links from all pages equally, and by normalizing by the number of links on a page.
- PageRank is defined as follows:
- page A has pages T1 . . . Tn which point to it (i.e., are citations).
- the parameter d is a damping factor which can be set between 0 and 1. We usually set d to 0.85. There are more details about d in the next section.
- C(A) is defined as the number of links going out of page A.
- PageRank of a page A is given as follows:
- PR(A) = (1 − d) + d(PR(T1)/C(T1) + . . . + PR(Tn)/C(Tn))
- PageRanks form a probability distribution over web pages, so the sum of all web pages' PageRanks will be one.
- PageRank or PR(A) can be calculated using a simple iterative algorithm, and corresponds to the principal eigenvector of the normalized link matrix of the web. Also, a PageRank for 26 million web pages can be computed in a few hours on a medium size workstation. There are many other details which are beyond the scope of this paper.
- PageRank can be thought of as a model of user behavior. We assume there is a “random surfer” who is given a web page at random and keeps clicking on links, never hitting “back” but eventually gets bored and starts on another random page. The probability that the random surfer visits a page is its PageRank.
- the d damping factor is the probability at each page the “random surfer” will get bored and request another random page.
- One important variation is to only add the damping factor d to a single page, or a group of pages. This allows for personalization and can make it nearly impossible to deliberately mislead the system in order to get a higher ranking.
- For more details on PageRank, again see [Page 98].
- a page can have a high PageRank if there are many pages that point to it, or if there are some pages that point to it and have a high PageRank.
- pages that are well cited from many places around the web are worth looking at.
- pages that have perhaps only one citation from something like the Yahoo! homepage are also generally worth looking at. If a page was not high quality, or was a broken link, it is quite likely that Yahoo's homepage would not link to it.
- PageRank handles both these cases and everything in between by recursively propagating weights through the link structure of the web.”
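The simple iterative algorithm mentioned in the quoted passage can be sketched by applying the quoted formula literally (note that in this form the values converge so that they sum to the number of pages; the sum-to-one reading corresponds to dividing the (1 − d) term by the page count). Names below are illustrative:

```python
def pagerank(links, d=0.85, iters=50):
    """Iteratively apply PR(A) = (1 - d) + d * sum(PR(T)/C(T)) over the
    pages T that point to A. links: page -> list of pages it points to,
    so C(T) is the out-link count len(links[T])."""
    pages = list(links)
    pr = {p: 1.0 for p in pages}
    out_count = {p: len(links[p]) for p in pages}      # C(T)
    for _ in range(iters):
        nxt = {}
        for a in pages:
            cited_by = (t for t in pages if a in links[t])
            nxt[a] = (1 - d) + d * sum(pr[t] / out_count[t]
                                       for t in cited_by)
        pr = nxt
    return pr

# A points to B and C; B points to C; C points back to A
pr = pagerank({'A': ['B', 'C'], 'B': ['C'], 'C': ['A']})
```

Page C outranks page B because it is cited by both A and B, while B is cited by A alone, illustrating the recursive weight propagation described above.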
- the prior art thus teaches calculating the random-walk-based probabilities of nodes for the purpose of ranking pages.
- the present invention, in contrast, uses the random-walk-based probabilities for the purpose of calculating clusters of nodes.
- a user-defined threshold is utilized in order to define the minimum expected size of a cluster. All clusters for which the size is less than this user-defined threshold are merged with clusters for which the size is above the threshold. In each iteration, we determine all clusters for which the size is above the user-defined threshold, and we merge the smaller clusters with these larger clusters. Each smaller component is merged with the larger component with which it has the maximum number of connections.
- In step 610 , we check whether at least one merge occurred in the last iteration. If at least one merge did occur, we go back to step 610 to attempt further merges. Otherwise, we report the current components as the relevant clusters in step 630 , and terminate.
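A hypothetical rendering of this merge loop, with the number of connecting edges deciding where each small component goes (the patent does not prescribe these names or this data layout):

```python
def merge_small(components, edges, min_size):
    """Sketch of FIG. 6: merge each component smaller than min_size into
    the large component with which it shares the most edges; repeat
    until an iteration performs no merge, then report the components."""
    comps = [set(c) for c in components]
    while True:
        merged = False
        for small in [c for c in comps if len(c) < min_size]:
            best, best_links = None, 0
            for big in comps:
                if len(big) < min_size:
                    continue
                # count edges with one endpoint in each component
                links = sum(1 for i, j in edges
                            if (i in small and j in big)
                            or (j in small and i in big))
                if links > best_links:
                    best, best_links = big, links
            if best is not None:
                best |= small          # absorb the small component
                comps.remove(small)
                merged = True
        if not merged:
            return comps

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 2)]
clusters = merge_small([{0, 1}, {2}, {3, 4, 5}], edges, min_size=2)
```

The singleton {2} has one connection to {0, 1} but two to {3, 4, 5}, so it is absorbed by the latter, matching the maximum-connections rule stated above.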
- FIG. 7 illustrates a typical hardware configuration of an information handling/computer system in accordance with the invention and which preferably has at least one processor or central processing unit (CPU) 711 .
- the CPUs 711 are interconnected via a system bus 712 to a random access memory (RAM) 714 , read-only memory (ROM) 716 , input/output (I/O) adapter 718 (for connecting peripheral devices such as disk units 721 and tape drives 740 to the bus 712 ), user interface adapter 722 (for connecting a keyboard 724 , mouse 726 , speaker 728 , microphone 732 , and/or other user interface device to the bus 712 ), a communication adapter 734 for connecting an information handling system to a data processing network, the Internet, an Intranet, a personal area network (PAN), etc., and a display adapter 736 for connecting the bus 712 to a display device 738 and/or printer 739 (e.g., a digital printer or the like).
- a different aspect of the invention includes a computer-implemented method for performing the above method. As an example, this method may be implemented in the particular environment discussed above.
- Such a method may be implemented, for example, by operating a computer, as embodied by a digital data processing apparatus, to execute a sequence of machine-readable instructions. These instructions may reside in various types of signal-bearing storage media.
- this aspect of the present invention is directed to a programmed product, comprising signal-bearing storage media tangibly embodying a program of machine-readable instructions executable by a digital data processor incorporating the CPU 711 and hardware above, to perform the method of the invention.
- This signal-bearing storage media may include, for example, a RAM contained within the CPU 711 , as represented by the fast-access storage.
- the instructions may be contained in another signal-bearing storage media, such as a magnetic data storage diskette 800 ( FIG. 8 ), directly or indirectly accessible by the CPU 711 .
- the instructions may be stored on a variety of machine-readable data storage media, such as DASD storage (e.g., a conventional “hard drive” or a RAID array), magnetic tape, electronic read-only memory (e.g., ROM, EPROM, or EEPROM), an optical storage device (e.g. CD-ROM, WORM, DVD, digital optical tape, etc.), paper “punch” cards, or other suitable signal-bearing storage media including memory devices in transmission hardware, communication links, and wireless, and including different formats such as digital and analog.
- the present invention discusses a new method for supervised network clustering which can be useful for constraining the nodes to belong to specific clusters. This is particularly useful in cases where prior knowledge is available for directing the clustering process.
- the invention could also be implemented as a user-interactive application, in which the user interactively labels nodes as relevant to a particular group.
- the user may even look at the output of the clustering and further modify the labels.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
A method (and system) for supervised network clustering includes receiving and reading node labels from a plurality of nodes on a network, as executed by a processor on a computer having access to the network, the network defined as a group of entities interconnected by links. The node labels are used to define densities associated with the nodes. Node components are extracted from the network, based on using thresholds on densities. Smaller components having a size below a user-defined threshold are merged.
Description
- 1. Field of the Invention
- The present invention relates generally to mining and learning network data. More specifically, the present invention describes methods and systems for supervised network clustering using densities associated with nodes and extracting node components from the network, based on using thresholds on densities.
- 2. Description of the Related Art
- Network data has become increasingly popular, because of the increasing proliferation of social and information networks. A significant amount of research has been devoted to the problem of mining and learning network data. In many scenarios, a subset of the nodes in the network may have labels associated with them, and this information can be effectively used for a variety of clustering and classification applications.
- In the context of the present invention, a network generally refers to a group of entities connected by links. This is a useful abstraction for many real-world scenarios, such as computer routers, pages on a website, or the participants in a social network. The nodes refer to the individual entities (e.g., routers, pages, participants) which are connected by links, which could either be communication links, hyperlinks, or social network friendship links.
- The useful properties of such nodes can be captured by labels, which are essentially drawn from a small set of keywords describing the node. For example, in a social network of researchers, the label on the node could correspond to their topic area of interest. Such labels can provide useful background knowledge for a variety of applications, including directing a clustering process in different ways, depending upon the nature of the underlying application.
- On the other hand, the available labels may often be noisy, incomplete, and are often partially derived from unreliable data sources. Many of the underlying clusters in the network may also not be fully described from such information, and even when the labels for a particular kind of desired category are available, they may represent an extremely small subset of the nodes. Nevertheless, such noisy, sparse, and incomplete information can also be useful in some parts of the network, and should therefore not be ignored during clustering.
- In view of the foregoing and other exemplary problems, drawbacks, and disadvantages of the conventional methods and systems, an exemplary feature of the present invention is to provide a new method and structure for supervised network clustering.
- Another exemplary feature of the present invention is to provide a highly adaptive network clustering algorithm.
- Another exemplary feature of the present invention is to provide an approach for constraining the nodes to belong to specific clusters, which is particularly useful in cases where prior knowledge is available for directing the clustering process.
- In a first exemplary aspect of the present invention, described herein is a method for supervised network clustering, including receiving and reading node labels from a plurality of nodes on a network, the network being defined as a group of entities interconnected by links; using the node labels to define densities associated with the nodes; extracting node components from the network, based on using thresholds on densities; and merging smaller components having a size below a user-defined threshold.
- In a second exemplary aspect of the present invention, also described herein is a method of clustering, including receiving and reading node labels from a plurality of nodes on a network, as executed by a processor on a computer having access to the network, the network defined by a group of entities connected by links; calculating a random-walk-based probability for each node on the network, to define densities associated with the nodes; and defining clusters of nodes in the network based on the densities.
- In a third exemplary aspect of the present invention, also described herein is a method of clustering, including calculating a density associated with a plurality of nodes on a network, as executed by a processor on a computer having access to the network, the network defined by a group of entities connected by links; and defining clusters of nodes in the network based on the densities.
- In a fourth exemplary aspect, also described herein is an apparatus, including a processor, and a memory device, the memory device storing therein a set of machine-readable instructions permitting the processor to execute a method of supervised network clustering, the method including receiving and reading node labels from a plurality of nodes on a network, as executed by a processor on a computer having access to the network; using the node labels to define densities associated with the nodes, extracting node components from the network, based on using thresholds on densities, and merging smaller components having a size below a user-defined threshold.
- In a fifth exemplary aspect, also described herein is a server including an input port to receive information concerning nodes on a network and a processor, wherein the processor receives, via the input port, and reads node labels from a plurality of nodes on the network, the network defined by a group of entities connected by links, calculates a random-walk-based probability for each node on the network, to define densities associated with the nodes, and defines clusters of nodes in the network based on the densities.
- In a sixth exemplary aspect, also described herein is a computer including a processor; and a memory device, the memory device storing a set of computer-readable instructions for the CPU to execute a method of clustering, the method including calculating a density associated with each of a plurality of nodes on a network, as executed by a processor on a computer having access to the network, the network defined by a group of entities connected by links, and defining clusters of nodes in the network based on the densities.
- Other aspects, features and advantages of the invention will be more fully apparent from the ensuing disclosure and appended claims.
- These and other advantages may be achieved with the present invention.
- The foregoing and other exemplary purposes, aspects and advantages will be better understood from the following detailed description of an exemplary embodiment of the invention with reference to the drawings, in which:
-
FIG. 1 is a detailed description of the architecture 100 of an exemplary embodiment of the present invention; -
FIG. 2 is a detailed description in flowchart format 200 of the process for performing the supervised network clustering of an exemplary embodiment; -
FIG. 3 is a detailed description in flowchart format 300 of determining the density-connected components from the densities, as an exemplary detailed description of step 220 of FIG. 2; -
FIG. 4 is a detailed description in flowchart format 400 of the process of separating out a connected component for a density threshold of a specific value, as an exemplary detailed description of step 320 of FIG. 3; -
FIG. 5 is a detailed description in flowchart format 500 of the process for performing the density estimation, as an exemplary detailed description of step 210 of FIG. 2; -
FIG. 6 is a detailed description in flowchart format 600 of the process of putting together the different density-connected components, as an exemplary detailed description of step 230 of FIG. 2; -
FIG. 7 illustrates an exemplary hardware/information handling system 700 for incorporating the present invention therein, such as the exemplary server 5 of FIG. 1; and -
FIG. 8 illustrates a signal-bearing storage medium 800 (e.g., storage medium or memory device) for storing steps of a program of a method according to the present invention. - The learning process for a network, meaning in the context of the present invention a group of entities interconnected by links, can take on many forms, depending on the level of supervision in the learning process:
-
- At one end of the spectrum, no supervision may be available in the form of node labels. This problem is equivalent to the unsupervised network clustering problem.
- In many cases, some labels may be available at the nodes, which provide the partial supervision necessary for the clustering process. However, many other node clusters may also exist, which are not necessarily related to the labels on the nodes. This problem has remained largely unexplored for the case of structural data. The other end of the spectrum consists of the fully supervised learning or collective classification scenario, in which all the unlabeled nodes need to be classified, based on the current pattern of labeling of a small subset of the nodes.
- For a given network application, the most suitable level of supervision may depend upon the underlying data and the task at hand. In most scenarios involving very large networks, a plethora of partial labels may be available in order to supervise the cluster creation process. Some examples are as follows:
-
- In a scientific community network, it may be possible to label a small subset of nodes, depending upon the area of interest of the particular academic.
- In a movie information network such as the Internet Movie Database (IMDb), containing both movies and actors, the genre of the movie may be labeled, whereas the predominant genre of an actor may not be available. This information can be used to direct the clustering process towards a scenario in which actors are clustered together not just by their linkage to one another, but also by their similarity in terms of the genre of the movies in which they act.
- In a social network application, it may be desirable to cluster actor nodes based on their affinity to some set of products. While such labels may not be known across all the nodes, they may be available for some small subset of the nodes.
- Furthermore, the social network for a given user may be defined in a variety of ways, depending upon their professional contacts, family, school, or alma mater contacts. Interesting information about the user (such as their school information) could be considered labels. These different communities of the same user are often quite different from one another and represent different segments of the user's social life. The supervision process can help in focusing the community detection approach in a particular direction.
- It is clear that such labels can be very useful for directing the clustering process in different ways, depending upon the nature of the underlying application. On the other hand, the available labels may often be noisy, incomplete, and are often partially derived from unreliable data sources. Many of the underlying clusters in the network may also not be fully described from such information, and even when the labels for a particular kind of desired category are available, they may represent an extremely small subset of the nodes. Nevertheless, such noisy, sparse, and incomplete information can also be useful in some parts of the network, and should therefore not be ignored during clustering.
- The present invention addresses these issues by providing a fully adaptive approach, in which the level of supervision for network clustering can be controlled depending upon the application at hand. The two extreme versions of this scheme can perform either purely unsupervised clustering or fully supervised collective classification.
- One challenge which has recently been observed with network clustering is that different regions of the data have different levels of density in the network, as a result of which homogeneous clustering algorithms tend to create unusually large and incoherent clusters containing a significant percentage of the nodes from the network. This means that the link densities in different regions of the social network may be quite different. In such a scenario, the use of global analysis can either construct very small communities in sparse local regions, or report large and incoherent communities in dense regions. Therefore, it is important to use local structural analysis for determining the relevance of communities in a social network. Furthermore, the topological shape of the clusters in a graph may be arbitrary, and is not necessarily spherical, as is implied by distance-based clustering algorithms in networks.
- The present invention demonstrates a density-based approach, which is able to satisfy both goals. Thus, it provides a highly adaptive network clustering algorithm, which can discover clusters of varying shape and density, and also incorporate different levels of supervision in the clustering process.
- Referring now to the drawings, and more particularly to
FIGS. 1-8 , exemplary embodiments of the method and structures of the present invention are explained. - Initially, a summary of the present invention on supervised network clustering is provided, including a discussion of how a density-based model can be used for supervised network clustering. In this discussion, an undirected network G=(N, A) is assumed, in which N is the set of nodes, and A is the set of edges. It is assumed that the number of nodes in N is n. In many applications, the edges in the network may be associated with a weight, which indicates the strength of the relationship.
- We further assume that the weight of edge (i,j) is denoted by w_{ij}. For example, in an author-relationship network, the weights might represent the number of papers authored by a pair of individuals. In many network applications, the weight w_{ij} is assumed to be 1, though the present invention allows the use of a weight for greater generality, if needed. It is assumed that the subset N_s of nodes is labeled, and that there are l different labels denoted by {1 . . . l}. Therefore, all nodes in N_s are labeled with one of the values drawn from {1 . . . l}, whereas the remaining nodes in N-N_s are unlabeled.
- An exemplary goal of the present invention is to partition the nodes in N into k different sets of nodes C_1 . . . C_k (i.e., clusters). In addition, we have a set of small subgraphs O which are referred to as the outlier set. Thus, we have N=C_1 U C_2 U . . . U C_k U O. We note that the labeling of the set of nodes in N_s can be used to supervise the creation of the clusters C_1 . . . C_k at a variety of different levels, depending upon the application at hand. For a given node i, we assume that the edges incident on it are denoted by I(i).
- An exemplary overall idea of the supervised clustering approach of the present invention is to design a density-based method in which clusters are defined in terms of density-connected sets of nodes. A density-connected pair of nodes is one in which a path of nodes exists between the pair, such that each node has density above a pre-defined threshold. One natural advantage of density-based methods is that they are not restricted to particular topological shapes of the clusters, as are implied by distance-based methods. On the other hand, the concept of density is much more challenging to define in the context of structural data.
- The density at a node is defined in terms of random walk probabilities. A more intuitive way of understanding the clustering process is in the context of a page-rank style random-walk process in which a surfer traverses the different nodes in the network by randomly picking any neighbor of a node during the walk. The page-rank random-walk concept has been used in the Google search engine and is described in more detail below. The density of a node, as used in the present invention, is essentially defined in an identical way to the page-rank computation.
- Specifically, the density of a node i is defined as the steady-state probability that a random surfer on the network, with a pre-defined set of reset probabilities, visits node i at any given transition. Intuitively, a random surfer on the network (upon entering a cluster) tends to get trapped in the cluster, because the nodes in this dense region tend to have a much higher visit probability than the surrounding nodes with lower visit probability. Therefore, a natural way of demarcating the boundaries of this cluster is to exclude nodes from the cluster for which the density is below a given threshold, and then to consider only connected regions of these high-density nodes as candidates for a cluster.
- Before discussing the clustering process in more detail, we will introduce the fundamentals of random-walk computation. In the random-walk process, at any given step, the random surfer either transitions to a node j adjacent to i with probability p_{ij}=w_{ij}/Σ_{j∈I(i)} w_{ij}, or it resets to a random node in the network, chosen according to the bias vector (or personalization vector) γ. Thus, the conditional probability of transition to node i (in the case of a reset) is denoted by γ(i). By picking the values of γ(i) to be consistent with the different class labels, it is possible to perform the supervision process effectively.
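The steady-state probabilities described above can be approximated with a simple power iteration. The following sketch is illustrative only; the function name, the uniform starting vector, the iteration count, and the 0.15 restart probability are assumptions, not taken from the patent:

```python
import numpy as np

def node_densities(n, edges, gamma, restart=0.15, iters=200):
    """Approximate the steady-state visit probability (density) of each
    node under a random walk with personalized restarts.

    n       -- number of nodes
    edges   -- list of undirected weighted edges (i, j, w_ij)
    gamma   -- length-n restart (personalization) vector summing to 1
    """
    # Row-stochastic transition matrix: p_ij = w_ij / sum over incident edges
    W = np.zeros((n, n))
    for i, j, w in edges:
        W[i, j] += w
        W[j, i] += w
    row_sums = W.sum(axis=1, keepdims=True)
    P = np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)

    pi = np.full(n, 1.0 / n)  # arbitrary starting distribution
    for _ in range(iters):
        # With prob. (1 - restart) follow an edge; otherwise reset per gamma
        pi = (1 - restart) * (pi @ P) + restart * gamma
    return pi
```

With a uniform gamma this reduces to an ordinary PageRank-style walk; concentrating gamma on labeled nodes yields the supervised densities used later in the clustering.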
-
FIG. 1 is a detailed description of the architecture for the present invention. It is assumed that the processing is performed on a server 5 which contains a CPU 10, a disk 30, and a main memory 20. While other components may also be available on such a system, these particular components are necessary, in one exemplary embodiment of the present invention, for enabling the effective operation of the system. The graph is stored on the server end, along with the intermediate training data, which can be stored either on the disk 30 or in the main memory 20. - The CPU processes the graph continuously over time, and uses the
main memory 20 for intermediate bookkeeping of the statistics. These statistics may sometimes eventually be stored on the disk. The clustering of the stream, as discussed in FIG. 2, is also performed on the CPU. Input A refers to the user inputs for labels for clustering. Output B refers to the output of clustering, and output C refers to the network stored on the disk 30. The input to the clustering process is the nodes, links, and labels. The output is the grouping of the nodes into labels. -
FIG. 2 is a description of the overall clustering process, which consists of three components. The first component is estimating the densities of the different nodes. The densities can be estimated in terms of the random walk probabilities. The second step is to separate out the different connected components based on these density values. Once these connected components have been found, we merge the smaller connected components into larger ones in order to provide each cluster with a critical mass. - The first step of density estimation is performed in
step 210. This step is performed with the use of the labels from the underlying data. The labels help in supervising the density estimation process. This step will be discussed in detail in FIG. 5. - The next step is to find connected components from these densities. This is performed in
step 220 in FIG. 2. This step is described in detail in FIG. 3. - Finally in
step 230, the smaller connected components are merged, in order to create the larger clusters from the underlying network. -
FIG. 3 is a description of the process of determining the density-connected components from the network, once the densities have already been estimated. It can be considered a detailed description of step 220 of FIG. 2. The major challenge in performing the density estimation process is that the densities of different nodes are quite different, and a single threshold cannot be used to remove all the components. - Therefore, in one exemplary embodiment, our approach generates the different thresholds in an iterative process. In order to generate the threshold for the density-connected components, we use the average density over all the nodes as a threshold for the estimation process. In
step 320, we remove all the nodes in the network based on this density-threshold qualification. The removal of these nodes reduces the number of remaining nodes in the network. In step 330, we check if a sufficient number of nodes are still remaining in the network. If this is indeed the case, then we update the density threshold in step 340. This updated density threshold is the average density across the remaining nodes in the network. After updating the density threshold, we remove the nodes based on the density-threshold qualification, and go back to step 320. This process is continually repeated, until the remaining network does not contain a sufficient number of nodes. -
FIG. 4 is a description of how the nodes are removed, based on the use of a specific density threshold. The first step is determining all the nodes in the network for which the density is above this threshold. This step is described by block 410 of FIG. 4. This new set of nodes induces a much smaller subgraph of the original network. This smaller subgraph is typically disconnected into smaller portions, which are dense regions of the data. These connected components are reported as the smaller clusters determined in this step. The remaining network is then processed again for removal of nodes after updating the threshold, as described in FIG. 3. -
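Read together, FIGS. 3 and 4 suggest a loop of the following shape. This is a sketch reconstructed from the flowchart descriptions, not the patented implementation; the helper structure and the min_remaining parameter are assumptions:

```python
from collections import deque

def dense_components(densities, neighbors, min_remaining=5):
    """Iteratively peel off dense connected components.

    densities -- dict: node -> estimated density
    neighbors -- dict: node -> set of adjacent nodes
    """
    remaining = set(densities)
    clusters = []
    while len(remaining) > min_remaining:
        # Threshold = average density over the remaining nodes (FIG. 3)
        threshold = sum(densities[v] for v in remaining) / len(remaining)
        dense = {v for v in remaining if densities[v] > threshold}
        if not dense:
            break
        # Connected components of the induced dense subgraph (FIG. 4)
        unvisited = set(dense)
        while unvisited:
            seed = unvisited.pop()
            component, queue = {seed}, deque([seed])
            while queue:
                v = queue.popleft()
                for u in neighbors[v] & dense:
                    if u not in component:
                        component.add(u)
                        queue.append(u)
            unvisited -= component
            clusters.append(component)
        remaining -= dense  # repeat on the sparser remainder
    return clusters
```

Because each pass recomputes the threshold as the average over what remains, progressively sparser regions of the network get their own, lower thresholds, which is the adaptivity the text describes.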
FIG. 5 is a detailed description of the process of performing the density estimation from the nodes and labels. It can be considered a detailed description of step 210 of FIG. 2. The first step is to perform random walks in the network with restart probabilities defined by the node labels. For example, when the labels are of a particular class only, we can set the restart only at nodes which are labeled by that class. This is performed in step 510. In step 520, we perform the density estimation with the use of the random walk probabilities from this process. The process of calculating the random-walk-based probabilities, used in the present invention as the basis for calculating clusters, is discussed in the prior art in S. Brin and L. Page, "The anatomy of a large-scale hypertextual Web search engine", WWW Conference, 1998, the contents of which are incorporated herein by reference. - This paper, by Google's two cofounders, describes the PageRank algorithm of the Google search engine as follows:
- “2.1 PageRank: Bringing Order to the Web
- The citation (link) graph of the web is an important resource that has largely gone unused in existing web search engines. We have created maps containing as many as 518 million of these hyperlinks, a significant sample of the total. These maps allow rapid calculation of a web page's "PageRank", an objective measure of its citation importance that corresponds well with people's subjective idea of importance. Because of this correspondence, PageRank is an excellent way to prioritize the results of web keyword searches. For most popular subjects, a simple text matching search that is restricted to web page titles performs admirably when PageRank prioritizes the results (demo available at google.stanford.edu). For the type of full text searches in the main Google system, PageRank also helps a great deal.
- 2.1.1 Description of PageRank Calculation
- Academic citation literature has been applied to the web, largely by counting citations or backlinks to a given page. This gives some approximation of a page's importance or quality. PageRank extends this idea by not counting links from all pages equally, and by normalizing by the number of links on a page.
- PageRank is defined as follows:
- We assume page A has pages T1 . . . Tn which point to it (i.e., are citations). The parameter d is a damping factor which can be set between 0 and 1. We usually set d to 0.85. There are more details about d in the next section. Also C(A) is defined as the number of links going out of page A. The PageRank of a page A is given as follows:
-
PR(A)=(1−d)+d(PR(T1)/C(T1)+ . . . +PR(Tn)/C(Tn)) - Note that the PageRanks form a probability distribution over web pages, so the sum of all web pages' PageRanks will be one.
- PageRank or PR(A) can be calculated using a simple iterative algorithm, and corresponds to the principal eigenvector of the normalized link matrix of the web. Also, a PageRank for 26 million web pages can be computed in a few hours on a medium size workstation. There are many other details which are beyond the scope of this paper.
- 2.1.2 Intuitive Justification
- PageRank can be thought of as a model of user behavior. We assume there is a “random surfer” who is given a web page at random and keeps clicking on links, never hitting “back” but eventually gets bored and starts on another random page. The probability that the random surfer visits a page is its PageRank.
- And, the d damping factor is the probability at each page the “random surfer” will get bored and request another random page. One important variation is to only add the damping factor d to a single page, or a group of pages. This allows for personalization and can make it nearly impossible to deliberately mislead the system in order to get a higher ranking. We have several other extensions to PageRank, again see [Page 98].
- Another intuitive justification is that a page can have a high PageRank if there are many pages that point to it, or if there are some pages that point to it and have a high PageRank. Intuitively, pages that are well cited from many places around the web are worth looking at. Also, pages that have perhaps only one citation from something like the Yahoo! homepage are also generally worth looking at. If a page was not high quality, or was a broken link, it is quite likely that Yahoo's homepage would not link to it.
- PageRank handles both these cases and everything in between by recursively propagating weights through the link structure of the web.”
- Thus, from the above-recited passages describing the Google search engine, it can be seen that the prior art teaches calculating the random-walk-based probabilities of nodes for the purpose of ranking pages. In contrast, the present invention uses the random-walk-based probabilities for the purpose of calculating clusters of nodes.
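In the present invention's setting, the supervision of step 510 of FIG. 5 amounts to concentrating the restart (personalization) vector on the nodes carrying the label of interest. The sketch below is illustrative; the function name and the uniform fallback when no labels are given are assumptions:

```python
import numpy as np

def restart_vector(n, labeled_nodes):
    """Build the personalization vector gamma for a supervised walk.
    Restarts land only on the nodes labeled with the class of interest."""
    gamma = np.zeros(n)
    for v in labeled_nodes:
        gamma[v] = 1.0
    if gamma.sum() == 0:
        gamma[:] = 1.0  # no labels: fall back to unsupervised (uniform) restarts
    return gamma / gamma.sum()
```

Feeding such a gamma into the random-walk computation biases the resulting densities toward the regions surrounding the labeled nodes, which is what steers the clustering toward the supervised categories.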
- Returning now to the present invention, we note that many of the component clusters found by the present invention's algorithm may be smaller than a user-defined threshold. Such clusters need to be merged into larger clusters, as discussed in
step 230 of FIG. 2. This step is described in detail in FIG. 6. A user-defined threshold is utilized in order to define the minimum expected size of a cluster. All clusters for which the size is less than this user-defined threshold are merged with clusters for which the size is above the user-defined threshold. In each iteration, we determine all clusters for which the size is above the user-defined threshold, and we merge the smaller clusters with these larger clusters. Each smaller component is merged with a larger component with which it has the maximum number of connections. - This is performed in
step 610. In step 620, we check if at least one merge occurred in the last iteration. If at least one merge did occur, we go back to step 610. Otherwise, we report the current components as relevant clusters in step 630, and terminate. -
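The merge pass of FIG. 6 can be sketched as follows. This is an illustrative reconstruction from the flowchart description, not the patented code; the function signature and min_size parameter are assumptions. Each undersized cluster is absorbed by the large cluster to which it has the most edges, and the pass repeats until no merge occurs:

```python
def merge_small_clusters(clusters, neighbors, min_size):
    """Merge clusters smaller than min_size into larger ones.

    clusters  -- list of sets of nodes
    neighbors -- dict: node -> set of adjacent nodes
    """
    clusters = [set(c) for c in clusters]
    merged = True
    while merged:  # step 620: repeat while at least one merge occurred
        merged = False
        large = [c for c in clusters if len(c) >= min_size]
        for s in [c for c in clusters if len(c) < min_size]:
            # Count edges from the small cluster into each large cluster
            best, best_links = None, 0
            for c in large:
                links = sum(1 for v in s for u in neighbors[v] if u in c)
                if links > best_links:
                    best, best_links = c, links
            if best is not None:  # step 610: merge into best-connected cluster
                best |= s
                clusters.remove(s)
                merged = True
    return clusters
```

Small clusters with no connections to any large cluster are simply left as they are, which matches the text's treatment of small subgraphs as an outlier set.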
-
FIG. 7 illustrates a typical hardware configuration of an information handling/computer system in accordance with the invention and which preferably has at least one processor or central processing unit (CPU) 711. - The CPUs 711 are interconnected via a
system bus 712 to a random access memory (RAM) 714, read-only memory (ROM) 716, input/output (I/O) adapter 718 (for connecting peripheral devices such asdisk units 721 and tape drives 740 to the bus 712), user interface adapter 722 (for connecting akeyboard 724,mouse 726,speaker 728,microphone 732, and/or other user interface device to the bus 712), acommunication adapter 734 for connecting an information handling system to a data processing network, the Internet, an Intranet, a personal area network (PAN), etc., and adisplay adapter 736 for connecting thebus 712 to adisplay device 738 and/or printer 739 (e.g., a digital printer or the like). - In addition to the hardware/software environment described above, a different aspect of the invention includes a computer-implemented method for performing the above method. As an example, this method may be implemented in the particular environment discussed above.
- Such a method may be implemented, for example, by operating a computer, as embodied by a digital data processing apparatus, to execute a sequence of machine-readable instructions. These instructions may reside in various types of signal-bearing storage media.
- Thus, this aspect of the present invention is directed to a programmed product, comprising signal-bearing storage media tangibly embodying a program of machine-readable instructions executable by a digital data processor incorporating the CPU 711 and hardware above, to perform the method of the invention.
- This signal-bearing storage media may include, for example, a RAM contained within the CPU 711, as represented by the fast-access storage for example. Alternatively, the instructions may be contained in another signal-bearing storage media, such as a magnetic data storage diskette 800 (
FIG. 8 ), directly or indirectly accessible by the CPU 711. - Whether contained in the
diskette 800, the computer/CPU 711, or elsewhere, the instructions may be stored on a variety of machine-readable data storage media, such as DASD storage (e.g., a conventional “hard drive” or a RAID array), magnetic tape, electronic read-only memory (e.g., ROM, EPROM, or EEPROM), an optical storage device (e.g. CD-ROM, WORM, DVD, digital optical tape, etc.), paper “punch” cards, or other suitable signal-bearing storage media including memory devices in transmission hardware, communication links, and wireless, and including different formats such as digital and analog. In an illustrative embodiment of the invention, the machine-readable instructions may comprise software object code. - As is readily apparent from the above description, the present invention discusses a new method for supervised network clustering which can be useful for constraining the nodes to belong to specific clusters. This is particularly useful in cases where prior knowledge is available for directing the clustering process.
- Although the present invention has been described as an exemplary embodiment, it should be apparent that variations of this exemplary embodiment are possible and considered as included in the present invention.
- For example, rather than using the
server 5, the invention could also be implemented as a user-interactive application, in which the user interactively labels nodes as relevant to a particular group. - As another example of a possible modification, if desired, the user may even look at the output of the clustering and further modify the labels.
- Therefore, it is noted that Applicant's intent is to encompass equivalents of all claim elements, even if amended later during prosecution.
Claims (20)
1. A method of supervised network clustering, said method comprising:
receiving and reading node labels from a plurality of nodes on a network, as executed by a processor on a computer having access to said network, said network being defined as a group of entities interconnected by links;
using said node labels to define densities associated with said nodes;
extracting node components from the network, based on using thresholds on densities; and
merging smaller components having a size below a user-defined threshold.
2. The method of claim 1 , wherein a random walk process is used to define the densities.
3. The method of claim 2 , wherein a restart vector associated with the random walk is defined on a basis of node labels.
4. The method of claim 1 , wherein density-connected nodes above a given threshold are determined as initial components.
5. The method of claim 4, wherein the density-connected nodes are defined as nodes between which a path exists in which all nodes have densities greater than the threshold.
6. The method of claim 1 , wherein smaller components having a size less than the user-defined threshold are merged with larger components.
7. The method of claim 6 , wherein each smaller component is merged to the component with which it has a largest number of connections.
8. The method of claim 4 , wherein said clustering is iterative and continues until no further merging of small clusters occurs.
9. The method of claim 8 , wherein a threshold for an iteration comprises an average density over all remaining nodes.
10. The method of claim 1 , as embodied as a set of computer-readable instructions tangibly embodied on a non-transitory storage medium.
11. A method of clustering, said method comprising:
receiving and reading node labels from a plurality of nodes on a network, as executed by a processor on a computer having access to said network, said network defined as a group of entities interconnected by links;
calculating a random-walk-based probability for each said node on said network, to define densities associated with said nodes; and
defining clusters of nodes in said network based on said densities.
12. The method of claim 11 , wherein said clusters are extracted based on using a threshold on said densities.
13. The method of claim 12 , further comprising merging smaller clusters with sizes below a user-defined threshold.
14. The method of claim 12 , wherein said cluster extraction comprises an iterative process.
15. The method of claim 14 , wherein said threshold initially comprises an average density of said network.
16. The method of claim 11 , as embodied as a set of computer-readable instructions tangibly embodied on a non-transitory storage medium.
17. The method of claim 16 , wherein said non-transitory storage medium comprises one of:
a Random Access Memory (RAM) device of a computer, as storing said computer-readable instructions for a program currently executing on said computer;
a Read Only Memory (ROM) device of a computer, as storing said computer-readable instructions for a program that can selectively be executed by said computer;
a standalone memory device storing said computer-readable instructions for a program that can selectively be uploaded onto a memory device in a computer; and
a memory device associated with a computer on the network, as storing said computer-readable instructions for a program that can selectively be downloaded to a memory device of another computer on said network.
18. A method of clustering, said method comprising:
calculating a density associated with a plurality of nodes on a network, as executed by a processor on a computer having access to said network, said network defined as a group of entities interconnected by links; and
defining clusters of nodes in said network based on said densities.
19. The method of claim 18 , wherein said densities are calculated as a random-walk-based probability for each said node on said network.
20. The method of claim 18 , further comprising merging smaller cluster components having a size below a user-defined threshold.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/572,179 US20140047089A1 (en) | 2012-08-10 | 2012-08-10 | System and method for supervised network clustering |
US13/610,092 US10135723B2 (en) | 2012-08-10 | 2012-09-11 | System and method for supervised network clustering |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/572,179 US20140047089A1 (en) | 2012-08-10 | 2012-08-10 | System and method for supervised network clustering |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/610,092 Continuation US10135723B2 (en) | 2012-08-10 | 2012-09-11 | System and method for supervised network clustering |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140047089A1 true US20140047089A1 (en) | 2014-02-13 |
Family
ID=50067035
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/572,179 Abandoned US20140047089A1 (en) | 2012-08-10 | 2012-08-10 | System and method for supervised network clustering |
US13/610,092 Active 2033-04-14 US10135723B2 (en) | 2012-08-10 | 2012-09-11 | System and method for supervised network clustering |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/610,092 Active 2033-04-14 US10135723B2 (en) | 2012-08-10 | 2012-09-11 | System and method for supervised network clustering |
Country Status (1)
Country | Link |
---|---|
US (2) | US20140047089A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110544108A (en) * | 2019-04-18 | 2019-12-06 | 国家计算机网络与信息安全管理中心 | social user classification method and device, electronic equipment and medium |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10901996B2 (en) | 2016-02-24 | 2021-01-26 | Salesforce.Com, Inc. | Optimized subset processing for de-duplication |
US10956450B2 (en) * | 2016-03-28 | 2021-03-23 | Salesforce.Com, Inc. | Dense subset clustering |
US10949395B2 (en) | 2016-03-30 | 2021-03-16 | Salesforce.Com, Inc. | Cross objects de-duplication |
US10841321B1 (en) * | 2017-03-28 | 2020-11-17 | Veritas Technologies Llc | Systems and methods for detecting suspicious users on networks |
CN108880846B (en) * | 2017-05-16 | 2020-10-09 | 清华大学 | Method and device for determining vector representation form for nodes in network |
CN110738577B (en) * | 2019-09-06 | 2022-02-22 | 平安科技(深圳)有限公司 | Community discovery method, device, computer equipment and storage medium |
US11565185B2 (en) * | 2020-03-31 | 2023-01-31 | Electronic Arts Inc. | Method and system for automatic and interactive model training using domain knowledge in video games |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5444796A (en) * | 1993-10-18 | 1995-08-22 | Bayer Corporation | Method for unsupervised neural network classification with back propagation |
US5857169A (en) * | 1995-08-28 | 1999-01-05 | U.S. Philips Corporation | Method and system for pattern recognition based on tree organized probability densities |
US6556983B1 (en) * | 2000-01-12 | 2003-04-29 | Microsoft Corporation | Methods and apparatus for finding semantic information, such as usage logs, similar to a query using a pattern lattice data space |
US20070162473A1 (en) * | 2000-10-16 | 2007-07-12 | University Of North Carolina At Charlotte | Incremental Clustering Classifier and Predictor |
US20070239694A1 (en) * | 2006-02-27 | 2007-10-11 | Singh Ambuj K | Graph querying, graph motif mining and the discovery of clusters |
US7333998B2 (en) * | 1998-06-25 | 2008-02-19 | Microsoft Corporation | Apparatus and accompanying methods for visualizing clusters of data and hierarchical cluster classifications |
US20090083222A1 (en) * | 2007-09-21 | 2009-03-26 | Microsoft Corporation | Information Retrieval Using Query-Document Pair Information |
US20100198837A1 (en) * | 2009-01-30 | 2010-08-05 | Google Inc. | Identifying query aspects |
US20110029519A1 (en) * | 2003-04-25 | 2011-02-03 | Leland Stanford Junior University | Population clustering through density-based merging |
US7937264B2 (en) * | 2005-06-30 | 2011-05-03 | Microsoft Corporation | Leveraging unlabeled data with a probabilistic graphical model |
US8055664B2 (en) * | 2007-05-01 | 2011-11-08 | Google Inc. | Inferring user interests |
US20110289025A1 (en) * | 2010-05-19 | 2011-11-24 | Microsoft Corporation | Learning user intent from rule-based training data |
US8359238B1 (en) * | 2009-06-15 | 2013-01-22 | Adchemy, Inc. | Grouping user features based on performance measures |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5561768A (en) | 1992-03-17 | 1996-10-01 | Thinking Machines Corporation | System and method for partitioning a massively parallel computer system |
US6594694B1 (en) * | 2000-05-12 | 2003-07-15 | Hewlett-Packard Development Company, L.P. | System and method for near-uniform sampling of web page addresses |
JP2004535023A (en) | 2001-07-06 | 2004-11-18 | Computer Associates Think, Inc. | System and method for managing an object-based cluster |
US6870846B2 (en) | 2002-04-29 | 2005-03-22 | Harris Corporation | Hierarchical mobile ad-hoc network and methods for performing reactive routing therein using dynamic source routing (DSR) |
US7809704B2 (en) | 2006-06-15 | 2010-10-05 | Microsoft Corporation | Combining spectral and probabilistic clustering |
US20080004959A1 (en) * | 2006-06-30 | 2008-01-03 | Tunguz-Zawislak Tomasz J | Profile advertisements |
US8108413B2 (en) | 2007-02-15 | 2012-01-31 | International Business Machines Corporation | Method and apparatus for automatically discovering features in free form heterogeneous data |
US8438189B2 (en) * | 2007-07-23 | 2013-05-07 | Microsoft Corporation | Local computation of rank contributions |
US20090089285A1 (en) * | 2007-09-28 | 2009-04-02 | Yahoo! Inc. | Method of detecting spam hosts based on propagating prediction labels |
US7840662B1 (en) | 2008-03-28 | 2010-11-23 | EMC(Benelux) B.V., S.A.R.L. | Dynamically managing a network cluster |
US20090313286A1 (en) * | 2008-06-17 | 2009-12-17 | Microsoft Corporation | Generating training data from click logs |
US8423538B1 (en) * | 2009-11-02 | 2013-04-16 | Google Inc. | Clustering query refinements by inferred user intent |
US8533134B1 (en) * | 2009-11-17 | 2013-09-10 | Google Inc. | Graph-based fusion for video classification |
EP2355593B1 (en) | 2010-01-28 | 2015-09-16 | Alcatel Lucent | Network node control |
US20110295845A1 (en) * | 2010-05-27 | 2011-12-01 | Microsoft Corporation | Semi-Supervised Page Importance Ranking |
US8583669B2 (en) * | 2011-05-30 | 2013-11-12 | Google Inc. | Query suggestion for efficient legal E-discovery |
2012
- 2012-08-10 US US13/572,179 patent/US20140047089A1/en not_active Abandoned
- 2012-09-11 US US13/610,092 patent/US10135723B2/en active Active
Non-Patent Citations (18)
Title |
---|
"How Computers Work: The CPU and Memory" December 15, 2003 http://web.archive.org/web/20031215230244/http://homepage.cs.uri.edu/faculty/wolfe/book/Readings/Reading04.htm * |
Aggarwal, Charu C.; Wang, Haixun et al. "Managing and Mining Graph Data;" Advances in Database Systems, Volume 40, 2010 http://link.springer.com/book/10.1007/978-1-4419-6045-0 *
Backstrom, Lars, and Jure Leskovec. "Supervised random walks: predicting and recommending links in social networks." Proceedings of the fourth ACM international conference on Web search and data mining. ACM, 2011. * |
Bar-Yossef, Ziv, et al. "Approximating aggregate queries about web pages via random walks." (2000). *
Chawathe, Yatin, et al. "A case study in building layered DHT applications." ACM SIGCOMM Computer Communication Review 35.4 (2005): 97-108. * |
Denton, Anne, "Kernel-Density-Based Clustering of Time Series Subsequences Using a Continuous Random-Walk Noise Model;" November 27, 2005 http://ieeexplore.ieee.org/document/1565670/?arnumber=1565670&tag=1 *
Eick, Christoph F., Nidal Zeidat, and Zhenghong Zhao. "Supervised clustering-algorithms and benefits." Tools with Artificial Intelligence, 2004. ICTAI 2004. 16th IEEE International Conference on. IEEE, 2004. * |
El-Yaniv, Ran; Souroujon, Oren, "Iterative Double Clustering for Unsupervised and Semi-supervised Learning;" Machine Learning ECML 2001 Volume 2167; August 30, 2001 http://link.springer.com/book/10.1007/978-1-4419-6045-0 * |
Ester, Martin; Kriegel, Hans-Peter; Sander, Jorg; Xu, Xiaowei; "A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise" 1996 http://www.aaai.org/Papers/KDD/1996/KDD96-037 * |
Garcia-Luna-Aceves, J. J., and Dhananjay Sampath. "Scalable integrated routing using prefix labels and distributed hash tables for MANETs." Mobile Adhoc and Sensor Systems, 2009. MASS'09. IEEE 6th International Conference on. IEEE, 2009. * |
Harel, David, and Yehuda Koren. "On clustering using random walks." FST TCS 2001: Foundations of Software Technology and Theoretical Computer Science. Springer Berlin Heidelberg, 2001. 18-41. * |
Kim, Min-Soo, and Jiawei Han. "A particle-and-density based evolutionary clustering method for dynamic networks." Proceedings of the VLDB Endowment 2.1 (2009): 622-633. * |
Pons, Pascal, and Matthieu Latapy. "Computing communities in large networks using random walks." Computer and Information Sciences-ISCIS 2005. Springer Berlin Heidelberg, 2005. 284-293. * |
Ramabhadran, Sriram, et al. "Brief announcement: prefix hash tree." Proceedings of the twenty-third annual ACM symposium on Principles of distributed computing. ACM, 2004. * |
Ramabhadran, Sriram, et al. "Prefix hash tree: An indexing data structure over distributed hash tables." Proceedings of the 23rd ACM symposium on principles of distributed computing. Vol. 37. 2004. * |
Rycroft, C. H., and M. Z. Bazant. "Lecture 1: Introduction to Random Walks and Diffusion." Technical Report, Department of Mathematics-MIT 2 (2005). *
Stoica, Ion, et al. "Chord: A scalable peer-to-peer lookup service for internet applications." ACM SIGCOMM Computer Communication Review. Vol. 31. No. 4. ACM, 2001. * |
Tong, Hanghang, Christos Faloutsos, and Jia-Yu Pan. "Fast random walk with restart and its applications." (2006). * |
Also Published As
Publication number | Publication date |
---|---|
US20140047091A1 (en) | 2014-02-13 |
US10135723B2 (en) | 2018-11-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10135723B2 (en) | System and method for supervised network clustering | |
WO2022041979A1 (en) | Information recommendation model training method and related device | |
Konstas et al. | On social networks and collaborative recommendation | |
Qian et al. | Social event classification via boosted multimodal supervised latent dirichlet allocation | |
JP5454357B2 (en) | Information processing apparatus and method, and program | |
US20080168070A1 (en) | Method and apparatus for classifying multimedia artifacts using ontology selection and semantic classification | |
CN108647322B (en) | Method for identifying similarity of mass Web text information based on word network | |
CN111259220B (en) | Data acquisition method and system based on big data | |
CN110929046A (en) | Knowledge entity recommendation method and system based on heterogeneous network embedding | |
CN103761286B (en) | A kind of Service Source search method based on user interest | |
Wang et al. | Link prediction in heterogeneous collaboration networks | |
Torres-Tramón et al. | Topic detection in Twitter using topology data analysis | |
Jaffali et al. | Survey on social networks data analysis | |
Eda et al. | The effectiveness of latent semantic analysis for building up a bottom-up taxonomy from folksonomy tags | |
Papadopoulos et al. | Image clustering through community detection on hybrid image similarity graphs | |
Tang et al. | Sketch the storyline with charcoal: a non-parametric approach | |
López-Sánchez et al. | Dynamic detection of radical profiles in social networks using image feature descriptors and a case-based reasoning methodology | |
JP4544047B2 (en) | Web image search result classification presentation method and apparatus, program, and storage medium storing program | |
WO2015178758A1 (en) | A system and method for analyzing concept evolution using network analysis | |
CN112269877A (en) | Data labeling method and device | |
CN112434174A (en) | Method, device, equipment and medium for identifying issuing account of multimedia information | |
CN104731867B (en) | A kind of method and apparatus that object is clustered | |
CN107292750B (en) | Information collection method and information collection device for social network | |
Bide et al. | Cross event detection and topic evolution analysis in cross events for man-made disasters in social media streams | |
JP2006285419A (en) | Information processor, processing method and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AGGARWAL, CHARU C.;REEL/FRAME:028767/0401 Effective date: 20120801 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |