CN109617989B - Method, apparatus, system, and computer readable medium for load distribution - Google Patents
- Publication number
- CN109617989B CN201811621089.8A CN201811621089A
- Authority
- CN
- China
- Prior art keywords
- node
- load
- nodes
- hash
- light
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/104—Peer-to-peer [P2P] networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/104—Peer-to-peer [P2P] networks
- H04L67/1061—Peer-to-peer [P2P] networks using node-based peer discovery mechanisms
- H04L67/1065—Discovery involving distributed pre-established resource-based relationships among peers, e.g. based on distributed hash tables [DHT]
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Computer And Data Communications (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The present disclosure relates to methods, apparatuses, systems, and computer readable media for load distribution. The method comprises the following steps: acquiring the predecessor arc length and node capability information of each node in a physical network; determining the average load rate of the physical network from the predecessor arc lengths and node capability information of the nodes; determining heavy-load nodes, whose load rates are greater than the average load rate, and light-load nodes, whose load rates are less than the average load rate, by comparing each node's load rate, obtained from its predecessor arc length and node capability information, with the average load rate; determining, among the hash IDs managed by one of the heavy-load nodes, at least one hash ID to be transferred to one of the light-load nodes; and indicating to nodes within the physical network that the at least one hash ID is managed by that light-load node. By transferring part of the hash IDs managed by heavy-load nodes to light-load nodes, the method achieves load balancing, avoids bottlenecks, and optimizes resource utilization.
Description
Technical Field
The present disclosure relates to the field of peer-to-peer networks, and more particularly, to methods, apparatuses, systems, and computer readable media for load distribution in a peer-to-peer network employing a distributed hash algorithm.
Background
Existing digital resource systems, such as video systems, typically employ centralized management. In this approach, digital resources such as video and audio files are stored on a dedicated server, which distributes the corresponding files in response to client requests. This not only requires dedicated hardware investment for the server, but also introduces a single point of failure: once the server is damaged, the distribution service is suspended. Moreover, because the large number of files is stored only on the dedicated server, the server becomes a bottleneck for network performance.
To deal with the dedicated hardware investment of centralized management and the resulting single-point-of-failure and bottleneck problems, an effective approach is to employ peer-to-peer networks for distributed storage. Specifically, in a peer-to-peer network, a hash ID (hash number) is assigned to each node and file in the network using a distributed hash algorithm, and files are stored on different nodes in the peer-to-peer network according to the hash IDs of the nodes and the hash IDs of the files, thereby realizing distributed storage. A node in the network locates a file on the corresponding node using the file's hash ID, following a routing table established over the hash ID space.
For example, a Chord ring can be established in a peer-to-peer network using the Chord algorithm. In the Chord algorithm, the consistent hash function uses the secure hash algorithm SHA-1 to assign an m-bit identifier, as its hash ID, to each node's IP address or MAC address and to each file's key, with m large enough to make collisions unlikely. In this way, nodes and files exist in the same hash ID space. Arranging the nodes clockwise on a ring in ascending order of hash ID yields a logical ring corresponding to the peer-to-peer network, also called a hash ring. A file is allocated, according to the hash ID obtained from its key, to the first node whose hash ID is larger than the hash ID of the file.
In establishing routing tables, each node stores m table entries, where entry i (1 ≤ i ≤ m) contains the value n + 2^(i-1) and a field indicating a next-hop node; here n is the hash ID of the node, and the next-hop node is the first node whose hash ID is not less than n + 2^(i-1). Upon receiving a message, a node searches its routing table entries according to the target hash ID in the message and forwards the message via the entry with the largest value n + 2^(i-1) that does not exceed the target hash ID, thereby implementing routing.
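As an illustration of the routing-table rule just described, the following Python sketch builds a finger table under the stated definitions. It is illustrative only: the names hash_id, successor, and finger_table are invented here, and M = 7 matches the small 7-bit example used later in this description rather than a realistic m = 160.

```python
import hashlib

M = 7                      # bits in the hash ID space; a real Chord ring would use m = 160
RING = 2 ** M

def hash_id(key: str) -> int:
    # SHA-1 digest reduced modulo 2^M, as in the consistent hash described above
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % RING

def successor(node_ids, target):
    # First node whose hash ID is not less than target, wrapping around the ring
    for nid in sorted(node_ids):
        if nid >= target:
            return nid
    return min(node_ids)   # wrapped past the largest ID

def finger_table(n, node_ids):
    # Entry i holds the value n + 2^(i-1) (mod 2^M) and its next-hop node
    table = []
    for i in range(1, M + 1):
        value = (n + 2 ** (i - 1)) % RING
        table.append((value, successor(node_ids, value)))
    return table

# With the example ring of fig. 1B (hash IDs 10, 20, ..., 120), finger_table(10, ids)
# reproduces the next-hop pattern of table 1 below.
ids = [10, 20, 26, 40, 60, 80, 85, 90, 100, 110, 120]
print(finger_table(10, ids))
```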
When a distributed hash algorithm other than the Chord algorithm is adopted in a peer-to-peer network, nodes and files can likewise be assigned hash IDs, so that a hash ring analogous to a Chord ring is logically constructed and storage and route forwarding according to file hash IDs are realized.
In existing peer-to-peer networks employing distributed hash algorithms, as described above, hash IDs are randomly generated for nodes and files by a hash function, and nodes select the next routing hop based on the hash ID in a message. The path of a distributed hash route consists of application-level routing hops between an originating node and a destination node, and only the hash ID is considered for message forwarding, so the same message may be forwarded back and forth between different physical networks, causing large end-to-end delay. In addition, since the routing table of the distributed hash model is established based on hash IDs, the routing path, although optimized in the hash ID space, is random with respect to host capability.
Because host capability is not considered during file storage, a large number of files may be stored on nodes with weak capability while nodes with strong capability store too few. File allocation and routing based purely on the hash ID therefore place great pressure on weak nodes and leave the resources of strong nodes underutilized, resulting in low resource utilization and network bottlenecks.
Disclosure of Invention
The present disclosure presents a method, apparatus, system, and computer-readable storage medium for load distribution that enables load balancing in a peer-to-peer network utilizing a distributed hash algorithm.
According to an aspect of the invention, there is provided a method for load distribution, the method comprising: acquiring the predecessor arc length and node capability information of each node in a physical network; determining the average load rate of the physical network from the predecessor arc lengths and node capability information of the nodes; determining heavy-load nodes, whose load rates are greater than the average load rate, and light-load nodes, whose load rates are less than the average load rate, by comparing each node's load rate, obtained from its predecessor arc length and node capability information, with the average load rate; determining, among the hash IDs managed by one of the heavy-load nodes, at least one hash ID to be transferred to one of the light-load nodes; and indicating to nodes within the physical network that the at least one hash ID is managed by that light-load node.
According to another aspect of the present invention, there is provided an apparatus for load distribution, the apparatus comprising: an acquisition unit for acquiring the predecessor arc length and node capability information of each node in a physical network; an average load rate determining unit for determining the average load rate of the physical network from the predecessor arc lengths and node capability information of the nodes; a heavy- and light-load node determining unit for determining heavy-load nodes, whose load rates are greater than the average load rate, and light-load nodes, whose load rates are less than the average load rate, by comparing each node's load rate, obtained from its predecessor arc length and node capability information, with the average load rate; an adjusting unit for determining, among the hash IDs managed by one of the heavy-load nodes, at least one hash ID to be transferred to one of the light-load nodes; and an indicating unit for indicating to nodes within the physical network that the at least one hash ID is managed by that light-load node.
According to still another aspect of the present invention, there is provided a network system including a plurality of nodes communicating with each other, one of the plurality of nodes being the above apparatus.
According to yet another aspect of the present invention, there is provided a system for load distribution, the system comprising: a memory storing computer-executable instructions; and a processor coupled to the memory, the processor configured to perform the above-described method when the computer-executable instructions are executed.
According to yet another aspect of the invention, there is provided a computer-readable storage medium having stored thereon computer-executable instructions which, when executed, cause a processor to perform the above-described method.
In the technical solution of the embodiments of the invention, no dedicated server is needed to centrally manage the digital resources. By determining from each node's predecessor arc length and node capability information whether it is a heavy-load node or a light-load node, part of the hash IDs managed by the heavy-load nodes can be transferred to light-load nodes, so that the burden on heavy-load nodes is reduced, the resources of light-load nodes are utilized more fully, and load balancing is achieved. The method avoids the bottleneck caused by a node carrying more load than it can bear, and also avoids the resource waste caused by a node carrying too little load.
Other features of the present disclosure and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure may be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
fig. 1A and 1B show an example of a physical network and an example of a hash ring corresponding to the physical network, respectively, and show an example of a message forwarding path according to the related art.
FIG. 2 illustrates the relationship between a node's predecessor arc length and its load in a distributed hash network.
Fig. 3 shows a flow chart of a method for load distribution according to an embodiment of the invention.
FIG. 4 illustrates a schematic diagram of adjusting node predecessor arc lengths in the process of distributing the load of a heavy-load node j to a light-load node k within an exemplary physical network.
FIG. 5 shows a flow diagram of a method for adjusting node predecessor arc lengths according to an embodiment of the invention.
Fig. 6 shows a schematic diagram of predecessor arc length adjustment, taking a light-load node a and a heavy-load node b as an example.
Fig. 7 illustrates a block diagram of an apparatus for load distribution according to an embodiment of the present invention.
Fig. 8 shows another structural block diagram of an apparatus for load distribution according to an embodiment of the present invention.
Fig. 9 shows a block diagram of a system for load distribution according to an embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that the relative arrangement of the components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
Meanwhile, it should be understood that, for convenience of description, the sizes of the respective portions shown in the drawings are not drawn to actual scale.

The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the disclosure, its application, or uses. Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but, where appropriate, are intended to be part of the specification. In all examples shown and discussed herein, any particular value should be construed as merely illustrative, not limiting; other examples of the exemplary embodiments may therefore have different values.

It should be noted that like reference numbers and letters refer to like items in the following figures; once an item is defined in one figure, it need not be discussed further in subsequent figures.
With the development of computer networks, several physical networks, each composed of host nodes connected through hubs, switches, and/or routers, can communicate with each other via the interconnection devices between the physical networks. Three physical networks 1_1, 1_2, and 1_3 are specifically shown in fig. 1A. Each of these physical networks may be a local area network, a metropolitan area network, a corporate network, or a building network. The network in a certain area may be set as one physical network according to a preset setting or specification. The three physical networks shown in FIG. 1A include nodes a, b, c, d, e; nodes f, g, h, i; and nodes j, k, respectively. These nodes may include computers, gateway servers, mobile phones, and other computing devices with information processing capabilities.
Using the distributed hash algorithm, each node is assigned an m-bit hash ID (if the value obtained by the hash algorithm is greater than 2^m, a modulo-2^m operation converts it to a value less than 2^m, which serves as the hash ID), and the nodes are placed clockwise around a ring in ascending order of hash ID, yielding the hash ring shown in fig. 1B. For a node, the adjacent node located upstream of it in the clockwise direction is called its predecessor node, the arc between the node and its predecessor node is called the node's predecessor arc, and the predecessor arc length equals the difference between the node's hash ID and its predecessor node's hash ID. If the node's hash ID is smaller than that of its predecessor node, the node's hash ID is first increased by 2^m before taking the difference. For example, if the hash ID of node k is 60 and the hash ID of its predecessor node b is 40, the predecessor arc length bk of node k is 60 - 40 = 20. As another example, if the hash ID of node a is 10, the hash ID of node h is 120, and hash IDs are represented by 7-bit strings, the predecessor arc length ha of node a is 10 + 2^7 - 120 = 18.
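The wraparound rule for the predecessor arc length can be restated in a few lines of Python; this minimal sketch (function name invented here) simply reproduces the two worked examples above.

```python
M = 7  # hash IDs are 7-bit strings in this example

def predecessor_arc_length(node_id: int, predecessor_id: int) -> int:
    # Difference of the two hash IDs, adding 2^M when the arc wraps past zero
    diff = node_id - predecessor_id
    return diff if diff > 0 else diff + 2 ** M

assert predecessor_arc_length(60, 40) == 20   # arc bk of node k
assert predecessor_arc_length(10, 120) == 18  # arc ha of node a, wrapping around
```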
Each node in the hash ring has its own routing table. The routing table contains m entries, and the next-hop node pointed to by entry i is the node whose hash ID is the smallest hash ID not less than n + 2^(i-1). For example, assuming the hash IDs, represented by 7-bit strings, of the eleven nodes in fig. 1B are 10, 20, 26, 40, 60, 80, 85, 90, 100, 110, and 120, the routing table of node a (hash ID 10) is shown in table 1:
Table 1 Routing table of node a

Hash ID | Next-hop node
10+2^0 | i
10+2^1 | i
10+2^2 | i
10+2^3 | i
10+2^4 | e
10+2^5 | k
10+2^6 | d
By hashing a key of a digital resource such as a video or audio file (for example, the file name or the first 20 bytes of the file) with the same distributed hash algorithm, a hash ID is obtained (again, if the value obtained by the hash operation is greater than 2^m, a modulo-2^m operation yields a hash ID in the range below 2^m). The file is distributed to the first node having a hash ID greater than the hash ID of the file. Distributing the file may mean storing the file itself on the corresponding node, or storing information related to the file on that node, such as the IP address of the database or server where the file is stored, or other index information for the file.
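As a sketch of this placement rule, the following illustrative Python function (names invented here) returns the node responsible for a given file hash ID: the first node clockwise whose hash ID is greater than the file's.

```python
def assign_file(file_hash: int, node_ids: list[int]) -> int:
    # First node clockwise whose hash ID is greater than the file's hash ID
    for nid in sorted(node_ids):
        if nid > file_hash:
            return nid
    return min(node_ids)  # wrap around the ring

# A file hashing to 55 lands on the node with hash ID 60 in the fig. 1B example
assert assign_file(55, [10, 20, 26, 40, 60, 80, 85, 90, 100, 110, 120]) == 60
```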
In the example shown in fig. 1A, node a sends a request message to look up file "XX". According to the hash ID of file "XX", node a sends the request message to node k via its own routing table; node k forwards the request to node d according to its routing table; and each subsequent node forwards the request in turn to g, j, and h through its own routing table. Node h finds that it stores information about file "XX" and returns a response message to node a along the reverse path. It can be seen that although the route indicated by the thick solid line is optimal on the hash ring of fig. 1B, which logically corresponds to the physical network of fig. 1A, the distributed hash path represented by the dashed line in fig. 1A shuttles back and forth between different physical networks, increasing network latency and making routing inefficient.
To improve the routing efficiency of the distributed hash table, embodiments of the present invention take into account the proximity of peer nodes in both the hash ID space and the physical network when routing. One or more super nodes are designated within a predefined physical network (such as a local area network or corporate network) to assist the other nodes within that physical network in updating routing information. A super node can serve as the next-hop node of the other nodes in the physical network, and those nodes can in turn serve as next-hop nodes of the super node in the hash ID space. The super node may be a gateway node of the physical network, or a predetermined node in the physical network that can communicate directly with all other nodes of the physical network in the hash ID space.
The meaning of the super node can be understood more clearly through the connection matrix, which is described below.
The distributed hash model may be represented as a directed graph G = (V, E), where V is the set of peer nodes and E is the set of directed edges. The connection matrix of the graph G = (V, E) is represented by formula (1):
R = {r_1, r_2, ..., r_i, ..., r_n}^T, i = 1, ..., n (1)
where n is the number of nodes in the entire network formed by the at least one physical network, and the vector r_i belongs to node v_i, v_i ∈ V, and is maintained by that node. Vector r_i contains m entries (m being the number of bits representing the hash ID) and can be represented by equation (2):
r_i = {r_i1, r_i2, ..., r_ij, ..., r_im}, i = 1, ..., n; j = 1, ..., m (2)
where r_ij is the j-th routing table entry maintained by node v_i, identifying the corresponding next-hop node.
Based on the above definitions, the connection relationships among the nodes in fig. 1B can be represented as the connection matrix R shown by equation (3):
the nodes which are physically adjacent and belong to the same management domain are gathered into one type, one or more super nodes can be manually arranged in each cluster or mutually negotiated among the nodes, the super nodes can point to all client nodes in the cluster (the client nodes are the nodes except the super nodes in the cluster), and meanwhile, each client node can point to the super nodes. The clustered physical network is defined as follows:
for a distributed hash model directed graph G ═ (V, E), there are k clusters V1,V2,…,VkAnd if so:
(2)v is thenl,vmPhysical proximity (where l and m are different, with the size of cluster V at 1 and ithiBetween the total number of nodes in).
Then the distributed hash network contains clusters V_1, V_2, ..., V_k, each corresponding to a different physical network; these are also called clustered physical networks. For any V_i, V_i has one or more super nodes that maintain the client node information, and each client node of V_i is associated with a super node. Representing the routing table of a client node as t_i, all routing table information R' of the clustered physical network is given by formula (4):
R' = {t_1, t_2, ..., t_i, ..., t_n}^T, i = 1, ..., n (4)
where n is the number of nodes included in the clustered physical network.
t_i is further represented by formula (5):
t_i = {t_i1, t_i2, ..., t_ij, ..., t_in}, i = 1, ..., n; j = 1, ..., n (5)
where t_ij = 1 if node j is a next-hop node in the routing table of node i or j = i, and t_ij = 0 otherwise.
the connection matrix T of the physical networks 1_1, 1_2, and 1_3 in fig. 1A can be represented by equation (6):
from T, the physical network has clusters of three neighboring nodes. The three clusters are: v1={a,b,c,d,e},V2={f,g,h,i},V3J, k. In these three clusters, nodes e, i, and k are supernodes. Matrix T2And (4) representing that full connection is realized between the nodes in the cluster, namely each node and other nodes in the cluster are neighbors. t is t2 ijIs a matrix T2Element of row i, column j, t2 ijThe value is represented by equation (7):
the addition operation and the multiplication operation in expression (7) are boolean operations. For multiple physical networks as shown in FIG. 1A, T is shown as equation (8)2Expressed as:
thus, each physical network shown in fig. 1A is provided with a super node, and the next hop nodes of other nodes in the physical network may be the super node, and the other nodes in the physical network may also be the next hop nodes of the super node. Full connectivity of all nodes can be achieved through super nodes within each physical network.
The above describes arranging super nodes in a physical network or cluster. Through the super nodes, the hash IDs managed by the nodes can then be adjusted within the physical network, so that the load distribution is adjusted to realize load balancing. This is described in detail below.
As described above, the distributed hash algorithm employs a hash technique to uniformly distribute nodes and files in a hash ID space of m bits (e.g., 160 bits), with a file being stored on the first node having a hash ID greater than the hash ID of the file. Since the hash IDs are randomly distributed throughout the hash ID space, the load of a node increases as the hash ID space managed by the node increases.
In the distributed hash algorithm, the hash ID space managed by a peer node is the difference between the node's hash ID and the hash ID of its predecessor node. Since nodes are logically distributed on a hash ring in the distributed hash network, the size of the hash ID space managed by a node is the arc length between the node and its predecessor node; this arc is referred to herein as the node's predecessor arc, and its length as the predecessor arc length. For example, as shown in fig. 1B, the predecessor node of node k is b, so the predecessor arc of node k is the arc from b to k, whose length is denoted bk.
Since the hash IDs of the distributed hash algorithm are randomly generated, the predecessor arc lengths of the peer nodes are not exactly equal. If the total length of the distributed hash ring is L (for example, where the hash ID is represented by 160 bits, L = 2^160), the n nodes are randomly distributed over the hash ring. In a peer-to-peer network, the longer a peer node's predecessor arc, the higher that node's load.
The inventor experimentally investigated the relationship between predecessor arc length and load. The experimental environment was a distributed hash network with 28800 nodes, each assigned a 160-bit hash ID by the distributed hash algorithm. Each node randomly issued 10 search requests, the load of each node was counted, and the number of messages routed by a node was used as the measure of its load. Fig. 2 shows the experimentally obtained relationship between the predecessor arc length and the load of the peer nodes, with the predecessor arc length of a node on the abscissa and the load of the corresponding node on the ordinate.
The relationship between predecessor arc length and load obtained from the above experiment shows that: (1) although the expected predecessor arc length is 2^160/28800, the arc lengths of the nodes are not exactly equal, because the node hash IDs are obtained by a hash algorithm; (2) the load varies greatly between nodes; and (3) probabilistically, the load of a node is proportional to its predecessor arc length.
Let the predecessor arc length of node s be A_s, and let A be the total predecessor arc length of the distributed hash ring (A is the size of the hash ID space; e.g., in a distributed hash network where the hash ID is represented by 160 bits, A = 2^160). Let W be the sum of the loads of all nodes in the distributed hash model (hash ring). Then, probabilistically, the relationship between the load W_s of node s and its predecessor arc length A_s is given by formula (9):

W_s = (A_s / A) × W (9)

That is, as described above, the load W_s of a node is proportional to its predecessor arc length A_s, written W_s ∝ A_s.
Each node may have a different node capability due to differences in hardware configuration, network connection conditions, and the like. The node capability of a node may refer to its processing capability, storage capability, network interface read/write capability, etc., and can be measured using conventional techniques. Let the node capability of node s be C_s; then the load rate λ_s of node s is given by equation (10):

λ_s = W_s / C_s (10)

Since W_s ∝ A_s, the load rate can be characterized as:

λ_s = A_s / C_s (11)

where the common proportionality factor W/A is omitted, as it cancels when load rates are compared. The average load rate λ_0 of all nodes in the distributed hash model can be represented by equation (12), where C is the sum of the node capabilities of all nodes:

λ_0 = W / C (12)

Since W ∝ A, correspondingly:

λ_0 = A / C (13)

If λ_s > λ_0, node s is overloaded and may be referred to as a heavy-load node. If λ_s < λ_0, node s is underloaded and may be referred to as a light-load node. In general, equations (11) and (13) are used to characterize the node load rate and the average load rate in order to determine whether a node is a heavy-load node or a light-load node.
In order to fully utilize the resources of each node and to prevent a node from becoming a bottleneck because its load exceeds its capability, it is desirable to realize load balancing in the distributed hash model by transferring the excess load carried by heavy-load nodes to light-load nodes that can still bear additional load.
Let W_s′ be the load of node s under load balancing; then W_s′ satisfies formula (14):

W_s′ = λ_0 × C_s (14)
The load of a node is adjusted by adjusting its predecessor arc length. Let A_s′ be the adjusted predecessor arc length of node s. Since load is proportional to predecessor arc length in the distributed hash model, A_s′ can be obtained from formula (15):

A_s′ = (W_s′ / W) × A (15)
Since peer nodes dynamically join and leave the network, it is difficult to obtain the total capability of all nodes in the system; the present invention therefore performs load balancing in units of preset or specified physical networks or clusters, and the total node capability can be determined periodically. Let C_0 be the sum of the node capabilities within a cluster and A_0 the sum of the predecessor arc lengths of the nodes in the cluster; the cluster average load rate is then given by formula (16), and the predecessor arc length of node s under load balancing in the distributed hash model is obtained from equation (17):

λ_0′ = A_0 / C_0 (16)

A_s′ = λ_0′ × C_s (17)
ΔA_s = A_s - A_s′ (18)
ΔA_s can thus be obtained from formula (18). If ΔA_s is positive, s is a heavy-load node; if ΔA_s is negative, s is a light-load node. To optimize the load distribution between heavy-load and light-load nodes, each node in the cluster can publish its own ΔA to the super node, and the super node makes the load-balancing decision by matching the ΔA values. Alternatively, the nodes within a cluster may publish their own predecessor arc length and node capability information to the super nodes, so that the super nodes determine the load rates of the nodes and the average load rate of the network according to formulas (11) and (13) and calculate each node's ΔA according to formulas (16) to (18).
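Equations (16) to (18) amount to a single pass over the cluster. The following illustrative Python sketch (names and sample values invented here) computes each node's ΔA as a super node might:

```python
def arc_adjustments(arcs: dict[str, float], caps: dict[str, float]) -> dict[str, float]:
    lam0 = sum(arcs.values()) / sum(caps.values())      # eq. (16): lambda_0' = A_0 / C_0
    return {s: arcs[s] - lam0 * caps[s] for s in arcs}  # eqs. (17)-(18): A_s - lambda_0' * C_s

# Positive values mark heavy-load nodes, negative values light-load nodes:
print(arc_adjustments({'a': 25, 'b': 60}, {'a': 1.0, 'b': 1.0}))  # {'a': -17.5, 'b': 17.5}
```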
As described above, in embodiments of the present invention, the excess load managed by a heavy-load node is distributed to light-load nodes, according to the load each light-load node can still bear, by changing node predecessor arc lengths, so that the load rate of the heavy-load node approaches or equals the average load rate of the network, thereby implementing load balancing. Fig. 3 illustrates a flow diagram of a method 300 for load distribution in a peer-to-peer network employing a distributed hash algorithm according to an embodiment of the present invention. The method 300 of fig. 3 may be performed by any node inside or outside the physical network, and preferably by a super node within the physical network.
In S310, the predecessor arc length and node capability information of each node in the physical network are acquired.
In the example shown in FIG. 1A, for the physical network 1_1, the super node e can obtain the predecessor arc lengths A_a, A_b, A_c, A_d, and A_e of nodes a, b, c, d, and itself, as well as the node capability information C_a, C_b, C_c, C_d, and C_e.
In S320, the average load rate of the physical network is determined from the predecessor arc lengths and node capability information of the nodes.
In the example shown in FIG. 1A, super node e may obtain the total predecessor arc length of the physical network A = A_a + A_b + A_c + A_d + A_e from the nodes' predecessor arc lengths, and the total node capability C = C_a + C_b + C_c + C_d + C_e from the nodes' capability information, thereby determining A/C as the average load rate λ_0 of the network.
In S330, based on comparing each node's load rate, obtained from its predecessor arc length and node capability information, with the average load rate determined in S320, heavy-load nodes whose load rates are greater than the average load rate and light-load nodes whose load rates are less than the average load rate are determined.
In the example shown in fig. 1A, super node e may obtain the load rate of each node by computing the ratio of the acquired predecessor arc length to the node capability for each of nodes a, b, c, d, and e, and then determine whether a node is heavy-loaded or light-loaded by comparing its load rate with the average load rate. For example, node e determines A_a/C_a as the load rate λ_a of node a; if λ_a is greater than λ_0, node a is a heavy-load node, and if λ_a is less than λ_0, node a is a light-load node. Alternatively, super node e may send the average load rate λ_0 to nodes a, b, c, and d. Each of these nodes then computes its own load rate as the ratio of its predecessor arc length to its node capability, determines whether it is a heavy-load or light-load node from the relationship between its load rate and the average load rate, and reports the result to super node e.
In S340, among the hash IDs managed by one of the heavy-load nodes, at least one hash ID to be transferred to one of the light-load nodes is determined.
In the example shown in fig. 1A, assuming that node a was determined in S330 to be a light-load node and node b a heavy-load node, super node e determines that part of the hash IDs managed by node b can be transferred to node a.
Taking the physical network 1_3 shown in fig. 1A as an example, fig. 4 shows a schematic diagram of adjusting node predecessor arc lengths in the process of distributing the load of heavy-load node j to light-load node k. Fig. 4(a) shows the locations of nodes j and k in physical network 1_3 of the IP network; the total node capability of the network (again denoted C) equals the sum of the node capability C_j of node j and the node capability C_k of node k, where node capability refers in this example to processing capability. Fig. 4(b) shows the processing capabilities C_j and C_k of nodes j and k. Fig. 4(c) shows the predecessor arc lengths A_j and A_k of nodes j and k. Fig. 4(d) shows the change A_j - A_j′ of the predecessor arc length, corresponding to the portion of hash IDs to be transferred from heavy-load node j to light-load node k, as determined by one of nodes j and k or by an external node, and its effect on the predecessor arc lengths of both nodes, so that the load-balanced predecessor arc lengths of nodes j and k are A_j′ and A_k′.
Fig. 5 specifically shows a flow diagram of a method 500 for adjusting node predecessor arc lengths according to an embodiment of the invention. Method 500 determines which heavy-load nodes transfer how much arc length to which light-load nodes and will be described in detail below; before that, the description of method 300 continues.
In S350, it is indicated to the nodes within the physical network that the at least one hash ID determined in S340 is managed by the light-load node determined in S340.
In the example shown in fig. 1A, since super node e determined in S340 to transfer part of the hash IDs managed by heavy-load node b to light-load node a, in this step super node e indicates to nodes a, b, c, and d in physical network 1_1 that this part of the hash IDs is transferred to light-load node a. Light-load node a can then manage these hash IDs, while the other nodes b, c, and d learn, as super node e does, that this part of the hash IDs is managed by light-load node a. In this way, messages addressed to these hash IDs are subsequently forwarded not to heavy-load node b but to light-load node a, so that load is transferred from the heavy-load node to the light-load node and load balancing is achieved.
According to an embodiment of the present invention, the light-load node of S340 may be notified to store the information about the files corresponding to the at least one hash ID of S340, and the nodes within the physical network may be notified to send messages addressed to the at least one hash ID to that light-load node.
In the example shown in fig. 1A, since light-load node a is to manage part of the hash IDs originally managed by heavy-load node b, light-load node a stores the information about the files corresponding to those hash IDs, including the files themselves, the storage addresses of the files, or other information indicating where the files are stored. In addition, nodes b, c, and d need to modify their respective routing tables upon receiving the message from super node e, changing the next-hop node corresponding to this part of the hash IDs to node a. Since the information related to the files corresponding to the transferred hash IDs is stored by light-load node a, the corresponding information stored on node b may be deleted; alternatively, the information on node b may be left untouched to provide redundancy. Because other nodes no longer forward messages addressed to these hash IDs to node b, the processing load of node b is reduced even if the information is not deleted.
Through method 300, whether a node is a heavy-load node or a light-load node can be determined from its predecessor arc length and node capability information, and the load managed by each node can then be adjusted by changing its predecessor arc length, thereby realizing load balancing and avoiding both resource waste and bottlenecks.
Next, the process of adjusting the load distribution based on the division into heavy-load and light-load nodes is described, as shown in method 500 of fig. 5.
In S510, a sorting step is performed, in which all heavy-load nodes are sorted by overload amount to obtain a heavy-load node list, and all light-load nodes are sorted by underload amount to obtain a light-load node list.
For example, to enable calculation of the average load rate of the nodes within a cluster, each client node (e.g., node s) submits its node capability information C_s and predecessor arc length A_s to the super node when joining the physical network. The super node calculates the average load rate λ_0′ of the nodes in the cluster and calculates the predecessor arc length ΔA_s to be adjusted at node s from equation (19):

ΔA_s = A_s - λ_0′ × C_s (19)

If ΔA_s > 0, node s is probabilistically a heavy-load node, and its overload amount equals ΔA_s; if ΔA_s < 0, node s is probabilistically a light-load node, and its underload amount equals -ΔA_s. The overloaded portion of a heavy-load node's predecessor arc, corresponding to its overload amount, can be partially or entirely transferred to light-load nodes.
After the overload or underload amount of each node is determined, the heavy-load nodes are sorted by overload amount and the light-load nodes are sorted by underload amount, yielding the heavy-load node list and the light-load node list.
In S520, a first determination step is performed, in which the smaller of (i) the overload amount of the heavy-load node with the largest overload amount in the heavy-load node list and (ii) the underload amount of the light-load node with the largest underload amount in the light-load node list is determined.
For example, assume a is a light-load node and b is a heavy-load node. Let ΔA_a be the arc length to be adjusted at node a obtained from equation (19), and ΔA_b the arc length to be adjusted at node b obtained from equation (19); then a predecessor arc of length min(|ΔA_a|, |ΔA_b|) can be transferred from node b to node a.
In S530, a second determination step is performed, in which hash IDs managed by the heavy-load node with the largest overload amount are determined as the hash IDs to be transferred to the light-load node with the largest underload amount, the number of hash IDs to be transferred being equal to the smaller amount determined in S520.
FIG. 6 shows an example of the distribution of nodes on a distributed hash ring, and also shows the adjustment of the predecessor arc lengths of a and b for the case where a is a light-load node and b is a heavy-load node. The solid horizontal line is the unrolled predecessor arc of node a, and the dashed horizontal line is the unrolled predecessor arc of node b. A_a is the initial predecessor arc length of node a, ΔA_a is the arc length transferred from node b to node a, A_a′ is the adjusted predecessor arc length of node a, A_b is the initial predecessor arc length of node b, ΔA_b is the overloaded arc length of node b, and A_b′ is the adjusted predecessor arc length of node b. As can be seen from fig. 6, since the underload amount of node a is less than the overload amount of node b, only a portion of ΔA_b is converted into ΔA_a, while the rest of ΔA_b is transferred to other light-load nodes (not shown in fig. 6) on the hash ring.
For simplicity of illustration, assume the hash ID space size of the peer-to-peer network is 256. In fig. 6, the arc lengths associated with light-load node a and heavy-load node b are: A_a = 25, ΔA_a = -10, A_b = 60, ΔA_b = 35. The node performing method 500 (e.g., a super node or another designated node in the network) determines that the overloaded arc length node b needs to transfer to node a is min(|ΔA_a|, |ΔA_b|) = min(10, 35) = 10. After node b transfers this arc length to node a, the remaining ΔA_b = 35 - 10 = 25 and the remaining ΔA_a = 0; node b is still an overloaded node, and the node performing method 500 will continue to determine the load node b needs to transfer to other nodes.
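The matching of steps S510 to S560 can be sketched as a greedy pairing of the two sorted lists. The following simplified Python model (names invented here) takes the ΔA values of equation (19) and reproduces the worked example above:

```python
def balance(delta: dict[str, float]):
    """Greedy pairing of steps S510-S560; `delta` holds each node's ΔA from eq. (19)."""
    heavy = [s for s in delta if delta[s] > 0]
    light = [s for s in delta if delta[s] < 0]
    transfers = []
    while heavy and light:                        # S550: stop when either list is empty
        heavy.sort(key=lambda s: delta[s])        # S510/S560: order by overload amount
        light.sort(key=lambda s: -delta[s])       # and by underload amount
        h, l = heavy[-1], light[-1]               # largest overload meets largest underload
        amount = min(delta[h], -delta[l])         # S520: the smaller of the two amounts
        transfers.append((h, l, amount))          # S530: arc of this length moves h -> l
        delta[h] -= amount
        delta[l] += amount
        if delta[h] == 0: heavy.pop()             # S540: drop whichever node is exhausted
        if delta[l] == 0: light.pop()
    return transfers

# Worked example above: node b overloaded by 35, node a underloaded by 10
print(balance({'a': -10.0, 'b': 35.0}))           # [('b', 'a', 10.0)]; b keeps 25 to give away
```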
Let A_s.BID denote the starting hash ID of the predecessor arc A_s of node s, and A_s.EID the ending hash ID of A_s. Assume, for example, that the hash IDs associated with the predecessor arcs of nodes a and b are initially A_a.BID = 35, A_a.EID = 60, A_b.BID = 128, A_b.EID = 188. Since, as described above, the number of hash IDs transferred from node b to node a is 10, any 10 hash IDs can be selected from the hash ID space 128-188 managed by node b as the hash IDs to be transferred; preferably, the hash IDs to be transferred are consecutive points in the hash ID space. Assuming the portion 153-162 of the hash IDs managed by node b is transferred to node a, the hash IDs managed by node a after adjustment comprise 35-60 and 153-162, and the hash IDs managed by node b after adjustment comprise 128-152 and 163-188.
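Carving a contiguous block of hash IDs out of a managed arc can be sketched as follows; the function below is illustrative only, with ranges represented as inclusive (BID, EID) pairs as above:

```python
def split_transfer(bid: int, eid: int, count: int, start: int):
    # Carve `count` consecutive hash IDs out of the arc [bid, eid], starting at `start`
    moved = (start, start + count - 1)
    kept = [(bid, start - 1), (start + count, eid)]
    return moved, [r for r in kept if r[0] <= r[1]]  # drop empty leftover ranges

# Node b cedes 153-162 out of its arc 128-188, keeping 128-152 and 163-188:
print(split_transfer(128, 188, 10, 153))  # ((153, 162), [(128, 152), (163, 188)])
```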
The node performing method 500 may record the adjusted hash IDs and the corresponding target nodes. For the above example, the transfer information may be recorded as shown in table 2:
TABLE 2 Transfer information for hash IDs

Transferred hash IDs | Target node
153-162 | a
Of course, as more hash ID transfers occur, the transfer information recorded in the table grows accordingly.
In S540, a deletion step is performed: if the smaller amount determined in S520 is the overload amount of the heavy-load node with the largest overload amount, that node is deleted from the heavy-load node list; if the smaller amount is the underload amount of the light-load node with the largest underload amount, that node is deleted from the light-load node list.
In S550, it is determined whether one of the heavy-load node list and the light-load node list is empty. If neither list is empty, method 500 proceeds to S560. If one of the two lists is empty, method 500 proceeds to S570.
In S560, a modification step is performed, in which the list (heavy-load node list or light-load node list) from which no node was deleted in S540 is re-sorted, since the remaining overload or underload amount of its top node has changed. The method 500 then returns to S520.
In S570, a notification step is performed, in which all the determined hash IDs to be transferred, together with the light-load nodes to which they are respectively to be transferred, are notified to the nodes in the physical network, so that those nodes modify, in their respective routing tables, the target nodes corresponding to the hash IDs to be transferred into the corresponding light-load nodes.
For example, in the example shown in FIG. 1A, for physical network 1_1, super node e notifies nodes a, b, c, and d of the recorded hash ID transfer information shown in table 3 below, so that the nodes modify their respective routing tables such that the next-hop node corresponding to hash IDs 153-162 becomes a, the next-hop node corresponding to hash IDs 60-72 becomes d, and the next-hop node corresponding to hash IDs 73-80 becomes e. Super node e may also modify its own routing table based on the transfer information. Alternatively, the reassignment of hash IDs may not involve the super node itself as a target node.
TABLE 3 Transfer information for hash IDs

Transferred hash IDs | Target node
153-162 | a
60-72 | d
73-80 | e
When a node modifies its routing table based on the hash ID transfer information, the transferred hash IDs and their corresponding target nodes may be recorded in the routing table in various ways. For example, a node may add a redirect field to its existing routing table and indicate in that field the transferred hash IDs and their corresponding target nodes.
According to an embodiment of the invention, when node b receives a request from node p to look up a certain hash ID, node b judges whether it stores information about that hash ID; if so, it returns a response to node p, and otherwise it forwards the lookup request. During forwarding, node b determines whether the hash ID belongs to the transferred hash IDs, e.g., those recorded in the redirect field. If so, node b not only forwards according to the new next-hop node, but also notifies node p that the corresponding hash IDs have been transferred to the new node, so that node p updates its routing table.
For example, in the case of the transfers in table 3, if node b determines that the target hash ID in node p's lookup request falls within 60-72 of table 3, node b forwards according to the new next-hop node d corresponding to 60-72 in its routing table. Moreover, node b determines that node p does not belong to the same physical network as itself; since node p is unaware that this part of the hash IDs has been transferred, node b notifies node p that these hash IDs have been transferred to node d. Node p updates its routing table according to the notification, likewise changing the next-hop node corresponding to hash IDs in the range 60-72 to node d.
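How a node might combine an ordinary next-hop lookup with the redirect field is sketched below. This is a schematic Python model only: the class, its fields, and the returned tuples are all invented for illustration and are not taken from the patent.

```python
class RedirectingNode:
    def __init__(self, local_ids, next_hop, redirects):
        self.local_ids = set(local_ids)   # hash IDs this node currently manages
        self.next_hop = next_hop          # callable: target hash ID -> next-hop node
        self.redirects = redirects        # {(bid, eid): new_target_node}, the redirect field

    def lookup(self, target, requester):
        if target in self.local_ids:
            return ('respond', requester)                 # return stored file info to requester
        for (bid, eid), new_node in self.redirects.items():
            if bid <= target <= eid:
                # Forward via the new manager AND tell the requester the range moved,
                # so the requester updates its own routing table (as node p does above).
                return ('forward', new_node, ('notify', requester, (bid, eid), new_node))
        return ('forward', self.next_hop(target))

# Node b after the table 3 transfers: IDs 60-72 now redirect to node d
b = RedirectingNode(local_ids=range(128, 153), next_hop=lambda t: 'k',
                    redirects={(60, 72): 'd'})
print(b.lookup(65, 'p'))  # ('forward', 'd', ('notify', 'p', (60, 72), 'd'))
```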
According to the method of the embodiment of the present invention, while the hash IDs managed by the nodes are adjusted according to their predecessor arc lengths and node capabilities for load balancing, the nodes in the physical network can update their routing tables according to the adjustment, optimizing route forwarding; nodes outside the physical network can likewise update their routing tables. Thus, not only can load balancing be realized to optimize resource utilization and reduce network bottlenecks, but nodes inside and outside the physical network can also adjust their routes based on the load balancing result, so that routing is optimized.
Having described a method for load distribution in a peer-to-peer network employing a distributed hash algorithm in accordance with an embodiment of the present invention, an apparatus, system, and computer-readable storage medium for load distribution will be described next.
Fig. 7 shows a block diagram of an apparatus 700 for load distribution according to an embodiment of the present invention.
The apparatus 700 may include an obtaining unit 710, an average load rate determining unit 720, a heavy- and light-load node determining unit 730, an adjusting unit 740, and an indicating unit 750. These units may be implemented by electronic circuits such as a processor, a programmable application-specific integrated circuit, or a single-chip device, or by functional modules such as program segments, routines, and the like. The obtaining unit 710 may be configured to obtain the predecessor arc length and node capability information of each node in the physical network. The average load rate determining unit 720 may be configured to determine the average load rate of the physical network from the predecessor arc lengths and node capability information of the nodes. The heavy- and light-load node determining unit 730 may be configured to determine, by comparing each node's load rate, obtained from its predecessor arc length and node capability information, with the average load rate, heavy-load nodes whose load rates are greater than the average load rate and light-load nodes whose load rates are less than the average load rate. The adjusting unit 740 may be configured to determine, among the hash IDs managed by one of the heavy-load nodes, at least one hash ID to be transferred to one of the light-load nodes. The indicating unit 750 may be configured to indicate to nodes within the physical network that the at least one hash ID is managed by that light-load node.
The above and other operations and/or functions of the obtaining unit 710, the average load rate determining unit 720, the heavy load and light load node determining unit 730, the adjusting unit 740, and the indicating unit 750 may refer to the above description related to fig. 3 to 6, and are not described herein again.
Fig. 8 shows another block diagram of an apparatus 800 for load distribution according to an embodiment of the present invention.
The obtaining unit 810, average load rate determining unit 820, heavy- and light-load node determining unit 830, adjusting unit 840, and indicating unit 850 included in the apparatus 800 are substantially the same as the obtaining unit 710, average load rate determining unit 720, heavy- and light-load node determining unit 730, adjusting unit 740, and indicating unit 750 included in the apparatus 700.
According to an embodiment of the present invention, the adjusting unit 840 in the apparatus 800 may include a sorting subunit 841, a first determining subunit 842, and a second determining subunit 843. The sorting subunit 841 may be configured to sort all heavy-load nodes by overload amount to obtain a heavy-load node list, and to sort all light-load nodes by underload amount to obtain a light-load node list. The first determining subunit 842 may be configured to determine the smaller of the overload amount of the heavy-load node with the largest overload amount in the heavy-load node list and the underload amount of the light-load node with the largest underload amount in the light-load node list. The second determining subunit 843 may be configured to determine, from the hash IDs managed by the heavy-load node with the largest overload amount, the hash IDs to be transferred to the light-load node with the largest underload amount, the number of hash IDs to be transferred being equal to the smaller amount.
According to an embodiment of the present invention, the adjusting unit 840 may further include a deleting subunit 844 and a modifying subunit 845. The deleting subunit 844 may be configured to delete the heavy-load node with the largest overload amount from the heavy-load node list if the smaller amount is its overload amount, and to delete the light-load node with the largest underload amount from the light-load node list if the smaller amount is its underload amount. The modifying subunit 845 may be configured, when neither the heavy-load node list nor the light-load node list is empty, to re-sort whichever of the two lists the deleting subunit 844 did not delete a node from. When the heavy-load node list or the light-load node list has changed, the first determining subunit 842, the second determining subunit 843, and the deleting subunit 844 operate again.
According to an embodiment of the present invention, the adjusting unit 840 may further include a notifying subunit 846. The notifying subunit 846 may be configured, when at least one of the heavy-load node list and the light-load node list is empty, to notify the nodes in the physical network of all the determined hash IDs to be transferred and of the light-load nodes to which they are respectively to be transferred, so that the nodes in the physical network modify, in their respective routing tables, the target nodes corresponding to the hash IDs to be transferred into the corresponding light-load nodes.
According to an embodiment of the present invention, the indicating unit 850 may be further configured to notify the light-load node to store the information about the files corresponding to the at least one hash ID, and to notify nodes within the physical network to send messages addressed to the at least one hash ID to that light-load node.
The sorting subunit 841, the first determination subunit 842, the second determination subunit 843, the deletion subunit 844, the modification subunit 845, and the notification subunit 846 may be implemented by electronic circuits such as a processor, or may be implemented by functional modules such as program segments. The above and other operations and/or functions of these sub-units and the indicating unit 850 can refer to the above description related to fig. 3 to 6, and are not described herein again.
By adopting the apparatus 700 or 800 according to an embodiment of the invention, part of the hash IDs managed by a heavy-load node can be transferred to light-load nodes, so that the burden on the heavy-load node is reduced, the resources of the light-load nodes are utilized more fully, and load balancing is realized. Furthermore, by changing the routing tables of the nodes based on the load balancing, routing can be optimized.
According to an embodiment of the present invention, a network system employing a distributed hash algorithm may include the apparatus 700 or 800 described above, so that load balancing over the hash ID space can advantageously be realized in such a peer-to-peer network.
Fig. 9 shows a block diagram of a system 900 for load distribution according to an embodiment of the invention.
System 900 may be any information processing capable device now or in the future. The system 900 includes a memory 910 and a processor 920. The memory 910 may be a read-only memory, an optical disk, a hard disk, a magnetic disk, a flash memory, or any other non-volatile storage medium. The memory may store computer-executable instructions for implementing one or more steps in method 300 and/or method 500.
Coupled to memory 910, processor 920 may be implemented as one or more integrated circuits, such as a microprocessor or microcontroller. The processor 920 is configured to execute computer-executable instructions stored in the memory 910 for implementing one or more steps of the method 300 and/or the method 500, thereby implementing load balancing in a peer-to-peer network and facilitating balanced distribution of digital resources, such as video files, in the peer-to-peer network, avoiding the drawbacks of centralized management.
The processor 920 may be coupled to the memory 910 by a bus, as in conventional computer devices. The system 900 may be connected to an external storage device through a read/write interface for accessing external data, and may also be connected to a network or other computer device through a network interface, which will not be described in detail herein.
According to an embodiment of the present invention, a computer-readable storage medium may store computer-executable instructions for performing one or more steps of the method 300 and/or the method 500. When executed by a processor, these instructions cause the processor to perform the corresponding steps, thereby enabling load balancing based on the hash ID space in a peer-to-peer network employing a distributed hash algorithm.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, apparatus, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Thus far, the present disclosure has been described in detail. Some details that are well known in the art have not been described in order to avoid obscuring the concepts of the present disclosure. It will be fully apparent to those skilled in the art from the foregoing description how to practice the presently disclosed embodiments.
The method and system of the present disclosure may be implemented in a number of ways. For example, the methods and systems of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the foregoing examples are for purposes of illustration only and are not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.
Claims (11)
1. A method for load distribution, comprising:
acquiring the predecessor arc length and node capacity information of each node in the physical network, wherein the predecessor arc length of a node is the difference between the hash ID of the node and the hash ID of the node's predecessor, and the predecessor of a node is the node whose hash ID, among all nodes in the physical network, is adjacent to and smaller than the hash ID of the node;
determining the average load rate of the physical network as the ratio of the sum of the predecessor arc lengths of all nodes to the sum of the node capacity information of all nodes;
determining heavy-load nodes, whose load rate is greater than the average load rate, and light-load nodes, whose load rate is less than the average load rate, by comparing the load rate of each node, obtained as the ratio of the node's predecessor arc length to its node capacity information, with the average load rate;
determining, among the hash IDs managed by one of the heavy-load nodes, at least one hash ID to be transferred to one of the light-load nodes; and
indicating to the nodes within the physical network that the at least one hash ID is managed by the one of the light-load nodes,
wherein the determining, among the hash IDs managed by one of the heavy-load nodes, of at least one hash ID to be transferred to one of the light-load nodes comprises:
a sorting step, in which all heavy-load nodes are sorted by overload amount to obtain a heavy-load node list and all light-load nodes are sorted by underload amount to obtain a light-load node list, wherein the overload amount of each heavy-load node equals its predecessor arc length minus the share of the sum of the predecessor arc lengths of all nodes in the physical network that is proportional to the heavy-load node's node capacity information, and the underload amount of each light-load node equals that capacity-proportional share minus the light-load node's predecessor arc length;
a first determination step, in which the smaller of the overload amount of the heavy-load node with the largest overload amount in the heavy-load node list and the underload amount of the light-load node with the largest underload amount in the light-load node list is determined;
a second determination step, in which, among the hash IDs managed by the heavy-load node with the largest overload amount, the hash IDs to be transferred to the light-load node with the largest underload amount are determined, the number of hash IDs to be transferred being equal to the smaller amount;
a deletion step, in which the heavy-load node with the largest overload amount is deleted from the heavy-load node list if the smaller amount is its overload amount, and the light-load node with the largest underload amount is deleted from the light-load node list if the smaller amount is its underload amount; and
a modification step, in which, when neither the heavy-load node list nor the light-load node list is empty, the list from which no node was deleted in the deletion step is reordered, so that the first determination step, the second determination step, and the deletion step are performed again.
2. The method of claim 1, further comprising, after the deletion step:
a notification step, in which, when at least one of the heavy-load node list and the light-load node list is empty, the nodes in the physical network are notified of all the determined hash IDs to be transferred and of the light-load nodes to which they are respectively to be transferred, so that the nodes in the physical network modify, in their respective routing tables, the target nodes corresponding to the hash IDs to be transferred into the light-load nodes to which those hash IDs are to be transferred.
3. The method of claim 1, wherein the indicating to the nodes within the physical network that the at least one hash ID is managed by the one of the light-load nodes comprises:
notifying the one of the light-load nodes to store information about the files corresponding to the at least one hash ID, and notifying the nodes within the physical network to send messages addressed to the at least one hash ID to the one of the light-load nodes.
4. The method of claim 1, wherein, when a node inside the physical network receives, from a node outside the physical network, a message addressed to one of the at least one hash ID, the node inside the physical network notifies the node outside the physical network that the target node corresponding to that hash ID is the one of the light-load nodes, so that the node outside the physical network modifies, in its routing table, the target node corresponding to that hash ID into the one of the light-load nodes.
5. The method of claim 1, wherein the method is performed by a gateway node within the physical network, or by a predetermined node within the physical network that can communicate directly, in the hash ID space, with all other nodes within the physical network.
6. An apparatus for load distribution, comprising:
an acquisition unit configured to acquire the predecessor arc length and node capacity information of each node within the physical network, wherein the predecessor arc length of a node is the difference between the hash ID of the node and the hash ID of the node's predecessor, and the predecessor of a node is the node whose hash ID, among all nodes within the physical network, is adjacent to and smaller than the hash ID of the node;
an average load rate determining unit configured to determine the average load rate of the physical network as the ratio of the sum of the predecessor arc lengths of all nodes to the sum of the node capacity information of all nodes;
a heavy-load and light-load node determining unit configured to determine heavy-load nodes, whose load rate is greater than the average load rate, and light-load nodes, whose load rate is less than the average load rate, by comparing the load rate of each node, obtained as the ratio of the node's predecessor arc length to its node capacity information, with the average load rate;
an adjusting unit configured to determine, among the hash IDs managed by one of the heavy-load nodes, at least one hash ID to be transferred to one of the light-load nodes; and
an indication unit configured to indicate to the nodes within the physical network that the at least one hash ID is managed by the one of the light-load nodes,
wherein the adjusting unit comprises:
a sorting subunit configured to sort all heavy-load nodes by overload amount to obtain a heavy-load node list and to sort all light-load nodes by underload amount to obtain a light-load node list, wherein the overload amount of each heavy-load node equals its predecessor arc length minus the share of the sum of the predecessor arc lengths of all nodes in the physical network that is proportional to the heavy-load node's node capacity information, and the underload amount of each light-load node equals that capacity-proportional share minus the light-load node's predecessor arc length;
a first determination subunit configured to determine the smaller of the overload amount of the heavy-load node with the largest overload amount in the heavy-load node list and the underload amount of the light-load node with the largest underload amount in the light-load node list;
a second determination subunit configured to determine, among the hash IDs managed by the heavy-load node with the largest overload amount, the hash IDs to be transferred to the light-load node with the largest underload amount, the number of hash IDs to be transferred being equal to the smaller amount;
a deletion subunit configured to delete the heavy-load node with the largest overload amount from the heavy-load node list if the smaller amount is its overload amount, and to delete the light-load node with the largest underload amount from the light-load node list if the smaller amount is its underload amount; and
a modification subunit configured to, when neither the heavy-load node list nor the light-load node list is empty, reorder the list from which the deletion subunit deleted no node, so that the first determination subunit, the second determination subunit, and the deletion subunit operate again.
7. The apparatus of claim 6, wherein the adjusting unit further comprises:
a notification subunit configured to notify, when at least one of the heavy-load node list and the light-load node list is empty, the nodes in the physical network of all the determined hash IDs to be transferred and of the light-load nodes to which they are respectively to be transferred, so that the nodes in the physical network modify, in their respective routing tables, the target nodes corresponding to the hash IDs to be transferred into the light-load nodes to which those hash IDs are to be transferred.
8. The apparatus of claim 6, wherein the indication unit is further configured to notify the one of the light-load nodes to store information about the files corresponding to the at least one hash ID, and to notify the nodes within the physical network to send messages addressed to the at least one hash ID to the one of the light-load nodes.
9. A network system, comprising:
a plurality of nodes in communication with each other, wherein one of the plurality of nodes is the apparatus of claim 6.
10. A system for load distribution, the system comprising:
a memory storing computer-executable instructions; and
a processor coupled to the memory, the processor being configured to perform the method of any one of claims 1-5 when the computer-executable instructions are executed.
11. A computer-readable storage medium having stored thereon computer-executable instructions that, when executed, cause a processor to perform the method of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811621089.8A CN109617989B (en) | 2018-12-28 | 2018-12-28 | Method, apparatus, system, and computer readable medium for load distribution |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109617989A CN109617989A (en) | 2019-04-12 |
CN109617989B true CN109617989B (en) | 2021-11-26 |
Family
ID=66012140
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811621089.8A Active CN109617989B (en) | 2018-12-28 | 2018-12-28 | Method, apparatus, system, and computer readable medium for load distribution |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109617989B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112887345A (en) * | 2019-11-29 | 2021-06-01 | 上海交通大学 | Node load balancing scheduling method for edge computing environment |
TWI729606B (en) * | 2019-12-05 | 2021-06-01 | 財團法人資訊工業策進會 | Load balancing device and method for an edge computing network |
CN112015552A (en) * | 2020-08-27 | 2020-12-01 | 平安科技(深圳)有限公司 | Hash ring load balancing method and device, electronic equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101610287A (en) * | 2009-06-16 | 2009-12-23 | 浙江大学 | A kind of load-balancing method that is applied to distributed mass memory system |
CN101883113A (en) * | 2010-06-25 | 2010-11-10 | 中兴通讯股份有限公司 | Method and physical nodes for realizing overlay network load balance |
WO2014047902A1 (en) * | 2012-09-28 | 2014-04-03 | 华为技术有限公司 | Load balancing method, device, system and computer readable medium |
CN103955404A (en) * | 2014-03-28 | 2014-07-30 | 哈尔滨工业大学 | Load judgment method based on NoC multi-core homogeneous system and task immigration method based on method |
CN104378412A (en) * | 2014-10-15 | 2015-02-25 | 东南大学 | Dynamic load balancing method taking user periodical resource demand into account in cloud environment |
CN107895111A (en) * | 2017-10-11 | 2018-04-10 | 西安电子科技大学 | Internet of things equipment supply chain trust systems management method, computer program, computer |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9749286B2 (en) * | 2014-07-17 | 2017-08-29 | Brocade Communications Systems, Inc. | Method and system for optimized load balancing across distributed data plane processing entities for mobile core network |
CN106330747B (en) * | 2015-06-30 | 2019-09-13 | 富士通株式会社 | Forward destination selecting method and communication device |
CN108337170B (en) * | 2018-01-30 | 2021-08-17 | 浙江省公众信息产业有限公司 | Distributed resource searching method and system |
- 2018-12-28: CN application CN201811621089.8A filed; granted as CN109617989B (status: Active)
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |