CN104184813B - Load balancing method for virtual machines, related device, and cluster system - Google Patents
- Publication number: CN104184813B
- Application number: CN201410412412.6A
- Authority: CN (China)
- Legal status: Expired - Fee Related
Abstract
The invention discloses a load balancing method for virtual machines, related devices, and a cluster system, so as to improve the network performance of the cluster system after load balancing. In some feasible embodiments of the invention, the method includes: a management node determines a plurality of virtual machines in the cluster system that need load balancing; according to the network traffic relationships among the plurality of virtual machines, the plurality of virtual machines are divided into at least one virtual machine cluster, so that the network traffic between each virtual machine and at least one other virtual machine in the same virtual machine cluster is greater than or equal to a network traffic threshold, and the network traffic between each virtual machine and any other virtual machine in a different virtual machine cluster is less than the network traffic threshold; a migration suggestion is generated, the migration suggestion being used to indicate that all virtual machines included in each virtual machine cluster are to be migrated to, or hosted on, the same target computing node; and the migration suggestion is sent to the source computing nodes hosting the one or more virtual machines included in the virtual machine cluster.
Description
Technical Field
The invention relates to the technical field of computers and communication, and in particular to a load balancing method for virtual machines, related devices, and a cluster system.
Background
Server virtualization is a key technology of cloud computing: by virtualizing a physical server (also called a physical host), multiple Virtual Machines (VMs) can be deployed on a single physical server, thereby improving the resource utilization of the physical server. A plurality of virtualized physical servers can form a virtualized cluster. The virtualized cluster abstracts the physical resources in the cluster into a resource pool composed of resources such as storage and computing, and provides virtual machines to users on demand.
An important feature of virtualized clusters is Dynamic Resource Scheduling (DRS). DRS presents the computing, storage, and other resources of the physical hosts in the cluster to the user in a unified manner, and can migrate virtual machines between physical hosts by means of virtual machine migration technology without affecting user services or being noticed by the user, thereby eliminating resource usage hot spots and improving resource utilization.
In the prior art, for example, VMware DRS measures the computing resource load of a computing node (i.e., the aforementioned physical server or physical host) in a cluster by calculating the resource load of two dimensions: the Central Processing Unit (CPU) and the Memory (MEM). By measuring the computing resource load in a virtualized cluster, VMware DRS implements the following functions: (1) initial placement: when a virtual machine is started, it is placed on a computing node with a low computing resource load; (2) load balancing: the computing resource load of each computing node in the virtualized cluster is monitored in real time, virtual machine migration suggestions are generated, and virtual machines on high-load computing nodes are migrated to low-load computing nodes; (3) energy consumption optimization: in combination with the Distributed Power Management (DPM) function, computing nodes are powered off or powered on according to the load of each computing node, so that energy consumption is optimized.
Practice shows that the prior-art DRS function, which performs load balancing only according to the computing resource load, has the following drawbacks: if a performance bottleneck occurs on a network link of the cluster, the network performance of the virtual machines in the cluster is easily affected, and inappropriate virtual machine migration suggestions can degrade virtual machine network performance.
Disclosure of Invention
The embodiment of the invention provides a load balancing method of a virtual machine, related equipment and a cluster system, and aims to improve the network performance of the virtualized cluster system after load balancing.
The first aspect of the present invention provides a load balancing method for a virtual machine, where the method is used in a cluster system including a management node and a plurality of computing nodes, each computing node in the plurality of computing nodes includes a hardware layer, a host running on the hardware layer, and a virtual machine running on the host; the method comprises the following steps: the management node determines a plurality of virtual machines which need load balancing in the cluster system, wherein the virtual machines are distributed to run on part or all of the computing nodes in the cluster system; dividing the virtual machines into at least one virtual machine cluster according to the network traffic relation among the virtual machines, so that the network traffic between each virtual machine and at least one other virtual machine in the same virtual machine cluster is greater than or equal to a network traffic threshold value, and the network traffic between each virtual machine and any other virtual machine in different virtual machine clusters is less than the network traffic threshold value; generating a migration suggestion for each virtual machine cluster, wherein the migration suggestion is used for indicating that all virtual machines included in each virtual machine cluster are migrated to or hosted on the same target computing node; sending the migration suggestion to source computing nodes of one or more virtual machine hosts included in the virtual machine cluster so that all virtual machines included in the virtual machine cluster are migrated to or hosted on the target computing node.
With reference to the first aspect of the present invention, in a first possible implementation manner, the dividing, according to the network traffic relationships among the virtual machines, the virtual machines into at least one virtual machine cluster includes: constructing a network communication connection topology of the virtual machines according to the network traffic relationships among the virtual machines; and dividing the network communication connection topology, determining each group of virtual machines whose mutual network traffic is greater than or equal to a network traffic threshold as one virtual machine cluster, and determining each isolated virtual machine as a virtual machine cluster on its own, where the network traffic between an isolated virtual machine and any other virtual machine is less than the network traffic threshold.
With reference to the first aspect or the first possible implementation manner of the first aspect, in a second possible implementation manner, the determining, by the management node, the plurality of virtual machines that need to perform load balancing in the cluster system includes: screening out virtual machines with a network traffic relationship from all the virtual machines running in the cluster system, and determining the virtual machines with the network traffic relationship as a plurality of virtual machines needing load balancing in the cluster system.
With reference to the first aspect or the first or second possible implementation manner of the first aspect, in a third possible implementation manner, before the determining, by the management node, that a plurality of virtual machines in the cluster system need to perform load balancing, the method further includes: the management node acquires resource use data of the cluster system, wherein the resource use data comprises the CPU utilization rate and the memory utilization rate of each computing node in the cluster system and the network resource utilization rate of each link between the computing nodes; calculating a load balancing index value according to the resource usage data, wherein the load balancing index value is used for representing a load balancing state of the cluster system; and if the load balancing index value is larger than a load balancing threshold value, judging that the cluster system needs to carry out load balancing.
With reference to the third possible implementation manner of the first aspect of the present invention, in a fourth possible implementation manner, the calculating a load balancing index value according to the resource usage data includes: calculating the load balancing index value by using the following formula:
T(S) = ω1×σ(Util_CPU) + ω2×σ(Util_MEM) + ω3×σ(Util_NET) + ω4×δ_VM(S)
where S represents the topological mapping relationship between the virtual machines and the computing nodes, T(S) represents the load balancing index value, σ(Util_CPU) represents the variance of the CPU utilization of the computing nodes, σ(Util_MEM) represents the variance of the memory utilization of the computing nodes, σ(Util_NET) represents the variance of the network resource utilization of the links between the computing nodes, δ_VM(S) represents the average weighted delay of all virtual machines running in the cluster system, and ω1, ω2, ω3, and ω4 are weighting coefficients.
With reference to the first aspect or the first or second possible implementation manner of the first aspect, in a fifth possible implementation manner, the generating a migration suggestion for each virtual machine cluster, where the migration suggestion is used to instruct all virtual machines included in each virtual machine cluster to be migrated or hosted on a same target computing node includes: selecting a plurality of candidate computing nodes for each virtual machine cluster, and simulating and calculating a load balance index value of the cluster system after each virtual machine cluster is migrated to or hosted by one candidate computing node in the plurality of candidate computing nodes; for each virtual machine cluster, determining a target computing node from the candidate computing nodes, so that the load balance index value of the cluster system after all virtual machines in each virtual machine cluster are migrated to or hosted by the selected target computing node is minimum; generating a migration suggestion for each virtual machine cluster, wherein the migration suggestion is used for indicating that all virtual machines included in each virtual machine cluster are migrated to or hosted on the selected target computing node.
With reference to the fifth possible implementation manner of the first aspect of the present invention, in a sixth possible implementation manner, the simulating and calculating a load balancing index value of the cluster system after each virtual machine cluster is migrated to or hosted by one candidate computing node of the plurality of candidate computing nodes includes: calculating a load balance index value by adopting the following formula;
T(S) = ω1×σ(Util_CPU) + ω2×σ(Util_MEM) + ω3×σ(Util_NET) + ω4×δ_VM(S) + ω5×cost(S)
where S represents the topological mapping relationship between the virtual machines and the computing nodes, T(S) represents the load balancing index value, σ(Util_CPU) represents the variance of the CPU utilization of the computing nodes, σ(Util_MEM) represents the variance of the memory utilization of the computing nodes, σ(Util_NET) represents the variance of the network resource utilization of the links between the computing nodes, δ_VM(S) represents the average weighted delay of all virtual machines running in the cluster system, cost(S) represents the cluster system mapping cost, and ω1, ω2, ω3, ω4, and ω5 are weighting coefficients.
A second aspect of the present invention provides a management node for a cluster system including a plurality of computing nodes and the management node, where each computing node in the plurality of computing nodes includes a hardware layer, a host running on the hardware layer, and a virtual machine running on the host; the management node includes:
a determining module, configured to determine a plurality of virtual machines that need load balancing in the cluster system, where the plurality of virtual machines run on some or all of the plurality of computing nodes in the cluster system in a distributed manner;
the clustering module is used for dividing the virtual machines into at least one virtual machine cluster according to the network traffic relation among the virtual machines, so that the network traffic between each virtual machine and at least one other virtual machine in the same virtual machine cluster is greater than or equal to a network traffic threshold value, and the network traffic between each virtual machine and any other virtual machine in different virtual machine clusters is less than the network traffic threshold value;
the system comprises a suggestion module, a migration module and a migration module, wherein the suggestion module is used for generating a migration suggestion for each virtual machine cluster, and the migration suggestion is used for indicating that all virtual machines included in each virtual machine cluster are migrated to or hosted on the same target computing node;
a sending module, configured to send the migration suggestion to a source computing node of one or more virtual machine hosts included in the virtual machine cluster, so that all virtual machines included in the virtual machine cluster are migrated to or hosted on the target computing node.
With reference to the second aspect of the present invention, in a first possible implementation manner, the clustering module includes:
the construction unit is used for constructing the network communication connection topology of the virtual machines according to the network traffic relation among the virtual machines;
the dividing unit is used for dividing the network communication connection topology, determining each group of virtual machines whose mutual network traffic is greater than or equal to a network traffic threshold as one virtual machine cluster, and determining each isolated virtual machine as a virtual machine cluster on its own, where the network traffic between an isolated virtual machine and any other virtual machine is less than the network traffic threshold.
With reference to the second aspect or the first possible implementation manner of the second aspect, in a second possible implementation manner, the determining module is specifically configured to: screen out the virtual machines having a network traffic relationship from all the virtual machines running in the cluster system, and determine the virtual machines having a network traffic relationship as the plurality of virtual machines needing load balancing in the cluster system.
With reference to the second aspect or the first or second possible implementation manner of the second aspect of the present invention, in a third possible implementation manner, the management node further includes: an obtaining module, configured to obtain resource usage data of the cluster system, where the resource usage data includes a CPU utilization rate and a memory utilization rate of each computing node in the cluster system, and a network resource utilization rate of each link between each computing node; a calculation module, configured to calculate a load balancing index value according to the resource usage data, where the load balancing index value is used to represent a load balancing state of the cluster system; and the judging module is used for judging that the cluster system needs to carry out load balancing if the load balancing index value is greater than a load balancing threshold value.
With reference to the third possible implementation manner of the second aspect of the present invention, in a fourth possible implementation manner, the calculating module is specifically configured to calculate the load balancing index value by using the following formula;
T(S) = ω1×σ(Util_CPU) + ω2×σ(Util_MEM) + ω3×σ(Util_NET) + ω4×δ_VM(S)
where S represents the topological mapping relationship between the virtual machines and the computing nodes, T(S) represents the load balancing index value, σ(Util_CPU) represents the variance of the CPU utilization of the computing nodes, σ(Util_MEM) represents the variance of the memory utilization of the computing nodes, σ(Util_NET) represents the variance of the network resource utilization of the links between the computing nodes, δ_VM(S) represents the average weighted delay of all virtual machines running in the cluster system, and ω1, ω2, ω3, and ω4 are weighting coefficients.
With reference to the second aspect or the first or second possible implementation manner of the second aspect, in a fifth possible implementation manner, the suggesting module includes:
the simulation calculation unit is used for selecting a plurality of candidate calculation nodes for each virtual machine cluster, and simulating and calculating the load balance index value of the cluster system after each virtual machine cluster is migrated to or hosted at one candidate calculation node in the plurality of candidate calculation nodes;
a determining unit, configured to determine, for each virtual machine cluster, a target computing node from the multiple candidate computing nodes, so that a load balancing index value of the cluster system after all virtual machines in each virtual machine cluster are migrated to or hosted by the selected target computing node is minimum;
and an advice generating unit, configured to generate a migration advice for each virtual machine cluster, where the migration advice is used to instruct all virtual machines included in each virtual machine cluster to be migrated to or hosted in the selected target computing node.
With reference to the fifth possible implementation manner of the second aspect of the present invention, in a sixth possible implementation manner, the analog computation unit is specifically configured to: calculating a load balance index value by adopting the following formula;
T(S) = ω1×σ(Util_CPU) + ω2×σ(Util_MEM) + ω3×σ(Util_NET) + ω4×δ_VM(S) + ω5×cost(S)
where S represents the topological mapping relationship between the virtual machines and the computing nodes, T(S) represents the load balancing index value, σ(Util_CPU) represents the variance of the CPU utilization of the computing nodes, σ(Util_MEM) represents the variance of the memory utilization of the computing nodes, σ(Util_NET) represents the variance of the network resource utilization of the links between the computing nodes, δ_VM(S) represents the average weighted delay of all virtual machines running in the cluster system, cost(S) represents the cluster system mapping cost, and ω1, ω2, ω3, ω4, and ω5 are weighting coefficients.
A third aspect of the present invention provides a virtualized cluster comprising: a plurality of computing nodes and a management node according to the second aspect of the invention; the computing node is used for receiving the migration suggestion sent by the management node and executing the virtual machine migration operation according to the indication of the migration suggestion.
As can be seen from the above, in some embodiments of the present invention, according to a network traffic relationship between virtual machines, a plurality of virtual machines that need to be load-balanced are divided into at least one virtual machine cluster, so that a network traffic between each virtual machine and at least one other virtual machine in the same virtual machine cluster is greater than or equal to a network traffic threshold, and a network traffic between each virtual machine and any other virtual machine in different virtual machine clusters is less than the network traffic threshold; generating a migration suggestion for each virtual machine cluster in the plurality of virtual machine clusters, wherein the migration suggestion is used for indicating that all virtual machines included in each virtual machine cluster are migrated to or hosted on the same target computing node; the following technical effects are achieved:
when load balancing is carried out, the influence of network dimensionality is considered, and a plurality of virtual machines in the same virtual machine cluster with a network flow relation are concentrated on the same computing node, so that the network flow among the virtual machines in the same virtual machine cluster does not occupy the load of links among the computing nodes, the network load of each link in a cluster system can be reduced, and the performance bottleneck of the links is prevented; moreover, by concentrating a plurality of virtual machines in the same virtual machine cluster on the same computing node, the network bandwidth and the communication reliability between the virtual machines in the same virtual machine cluster can be improved, so that the network performance of the virtual machine cluster with dense flow can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments and the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a cluster system according to an embodiment of the present invention;
fig. 2a is a schematic flowchart of a load balancing method for a virtual machine according to an embodiment of the present invention;
fig. 2b is a schematic flowchart of another load balancing method for a virtual machine according to an embodiment of the present invention;
fig. 3a is a schematic structural diagram of a management node according to an embodiment of the present invention;
fig. 3b is a schematic structural diagram of another management node according to an embodiment of the present invention;
FIG. 4a is a diagram of the structure of a prior art cluster system and the distribution of virtual machines therein;
FIG. 4b is a diagram illustrating a structure of a cluster system and a distribution of virtual machines therein according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a data center to which embodiments of the present invention are applied;
fig. 6a is a schematic structural diagram of a load balancing apparatus deployed on a management node according to an embodiment of the present invention;
FIG. 6b is a schematic diagram of a cluster resource management system operating on a management node according to an embodiment of the present invention;
FIG. 7 is a flow chart of a management node performing load balancing scheduling according to an embodiment of the present invention;
fig. 8 is a flowchart of the management node determining whether to perform load balancing and generate a virtual machine migration suggestion according to the embodiment of the present invention;
FIG. 9 is a flowchart illustrating a process of dividing a virtual machine cluster according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating partitioning virtual machine clusters according to an exemplary scenario;
fig. 11 is a schematic structural diagram of another management node according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a load balancing method of a virtual machine, related equipment and a cluster system, and aims to improve the network performance of the cluster system after load balancing.
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
To facilitate an understanding of the embodiments of the present invention, a system to which the embodiments of the present invention are applied and several elements that will be introduced in the description are first introduced herein.
The technical solution of the embodiment of the present invention is applied to a virtualized cluster system (referred to as a virtualized cluster or a cluster system or a cluster for short), as shown in fig. 1, the cluster system may include a management node 310 and a computation node 320; one or more management nodes can be provided, for example, two management nodes can be provided, and the management nodes are divided into a main management node and a standby management node; there may be a plurality of compute nodes. The management node 310 and the computing node 320 are both computer devices, the management node 310 may also be referred to as a management server, and the computing node 320 may also be referred to as a physical host.
The computing node 320 may include a hardware layer 3201, a host 3202 running on the hardware layer, and at least one virtual machine VM running on the host 3202. The following is a detailed description:
virtual Machine (VM):
A virtual machine is a complete, software-emulated computer system with full hardware system functionality, running in a completely isolated environment. One or more virtual machines can be simulated on a single physical host by virtual machine software. A virtual machine works like a real computer: an operating system and application programs can be installed on it, and it can access network resources. To the applications running in it, the virtual machine behaves just like a real computer.
Hardware layer:
The hardware platform on which the virtualized environment runs. The hardware layer may include various hardware; for example, the hardware layer of a computing node or a management node may include a processor (e.g., a CPU) and a memory, and may further include a network card, storage, other high-speed/low-speed Input/Output (I/O) devices, and devices with specific processing functions such as an Input/Output Memory Management Unit (IOMMU), where the IOMMU may be used for translation between virtual machine physical addresses and Host physical addresses.
Host (Host):
As the management layer, the Host is used to manage and allocate hardware resources, present a virtual hardware platform to the virtual machines, and implement scheduling and isolation of the virtual machines. The Host may be a Virtual Machine Monitor (VMM); in some cases the VMM cooperates with one privileged virtual machine, and the combination of the two constitutes the Host. The virtual hardware platform provides various hardware resources, such as a virtual CPU, memory, a virtual disk, and a virtual network card, to each virtual machine running on it. The virtual disk may correspond to a file or a logical block device of the Host. The virtual machines run on the virtual hardware platform prepared for them by the Host, and one or more virtual machines run on each Host.
The following are detailed descriptions of the respective embodiments.
Referring to fig. 2a, an embodiment of the invention provides a load balancing method for a virtual machine. The method may be used in a cluster system comprising a management node and a plurality of compute nodes, wherein the compute nodes may comprise a hardware layer, a host running on top of the hardware layer, and a virtual machine running on top of the host.
In the prior art, the virtualized cluster system performs load balancing of the virtual machines in the cluster only according to the computing resource load, so the network performance of the virtual machines in the cluster cannot be improved. To address this problem, the technical solution of the embodiment of the present invention introduces network resource utilization and considers the influence of network factors when performing load balancing, so as to improve the network performance of the virtualized cluster after load balancing.
Referring to fig. 2a, a method according to an embodiment of the invention may include:
110. the management node determines a plurality of virtual machines needing load balancing in the cluster system; the plurality of virtual machines are distributively run on part or all of the plurality of computing nodes in the cluster system;
120. the management node divides the virtual machines into at least one virtual machine cluster according to the network traffic relation among the virtual machines; causing network traffic between each virtual machine and at least one other virtual machine in the same virtual machine cluster to be greater than or equal to a network traffic threshold, the network traffic between each virtual machine and any other virtual machine in a different virtual machine cluster being less than the network traffic threshold;
130. the management node generates a migration suggestion aiming at each virtual machine cluster, wherein the migration suggestion is used for indicating that all virtual machines included in each virtual machine cluster are migrated to or hosted on the same target computing node;
140. sending the migration suggestion to source computing nodes of one or more virtual machine hosts included in the virtual machine cluster so that all virtual machines included in the virtual machine cluster are migrated to or hosted on the target computing node.
Referring to fig. 2b, in some embodiments of the present invention, 110 may further include:
100. and the management node judges whether the cluster system needs to perform load balancing or not.
The following is further detailed:
100. and the management node judges whether the cluster system needs to perform load balancing or not.
As the manager of the cluster system, the management node provides functions such as life cycle management, resource scheduling management, and operation and maintenance of the computing nodes and virtual machines, and is the brain of the whole cluster system. The management node may collect, in real time, various performance data of the cluster system, including resource usage data and configuration parameters, and may store them as historical data. Resource usage data refers to usage data (such as utilization rates) of each type of resource (including computing resources, network resources, storage resources, and the like), and may include, for example, usage data of the computing resources (such as CPU and memory) of each computing node, usage data of the computing resources (such as CPU and memory) of each virtual machine, network resource usage data of each virtual machine, the network traffic relationships between virtual machines, and the network load of the links between computing nodes.
The management node can realize a Dynamic Resource Scheduling (DRS) function according to the system performance data acquired in real time, and load balance is carried out on the virtual machines in the cluster system. Firstly, a load balancing algorithm can be executed to judge whether the cluster system needs to carry out load balancing, and if so, the subsequent steps are started to be executed. There are various methods for determining whether load balancing is required, and generally, the determining step may include:
acquiring resource use data of the cluster system; the resource usage data comprises the CPU utilization rate and the memory utilization rate of each computing node in the cluster system, and the network resource utilization rate of each link between each computing node; calculating a load balancing index value according to the resource usage data, wherein the load balancing index value is used for representing a load balancing state of the cluster system; and if the load balancing index value is larger than the load balancing threshold value, judging that the cluster system needs to carry out load balancing.
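As an illustration of this decision step, the following Python sketch computes a load balancing index from collected utilization data and compares it against a threshold. It is a simplified illustration only: the data structures, function names, default weights, and the use of a plain weighted sum of variances (with the delay and cost terms omitted) are assumptions, not part of the patent text.

```python
from statistics import pvariance

def needs_load_balancing(cpu_util, mem_util, link_util,
                         weights=(1.0, 1.0, 1.0), threshold=0.05):
    """Decide whether the cluster needs load balancing.

    cpu_util, mem_util: per-compute-node utilization values in [0, 1].
    link_util: per-link network resource utilization values in [0, 1].
    The index here is a weighted sum of variances (delay/cost terms omitted).
    """
    w_cpu, w_mem, w_net = weights
    index = (w_cpu * pvariance(cpu_util)
             + w_mem * pvariance(mem_util)
             + w_net * pvariance(link_util))
    return index > threshold, index

# Example: one node is fully loaded while the others are nearly idle.
balance_needed, t_s = needs_load_balancing(
    cpu_util=[1.0, 0.0, 0.0], mem_util=[0.6, 0.2, 0.1],
    link_util=[0.7, 0.1, 0.1, 0.1])
print(balance_needed, round(t_s, 4))  # True, since the variances are large
```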
In one embodiment, the management node may specifically obtain the CPU utilization of each compute node in the resource usage data, and further calculate, according to the CPU utilization of each compute node, a variance of the CPU utilization of each compute node in the cluster system, as a load balancing index value; and if the load balancing index value is larger than the load balancing threshold value, judging that the load balancing is required. In general, the variance is the degree of deviation from the central value, and the greater the variance is, the greater the difference between the CPU utilization rates of the respective computing nodes is, the more load balancing is required. For example, assuming that the CPU capacity of a computing node is 10GHz, the average CPU resource occupied by each virtual machine running on the computing node in the past period is 2GHz, and if 3 such virtual machines run on the computing node, the CPU utilization of the computing node is 60%. In the limit, assuming that the CPU utilization of a certain compute node reaches 100% and the CPU utilization of another compute node is 0, the variance may be large at this time, and exceeds the threshold, and load balancing is required.
In another embodiment, the management node may specifically obtain the CPU utilization and the memory (MEM) utilization of each computing node from the resource usage data, calculate the variance of the CPU utilization of the computing nodes and the variance of the MEM utilization of the computing nodes, and use a weighted sum of the two variances as the load balancing index value. Different weights can be set for the CPU and MEM dimensions according to the actual application scenario. For example, assuming that the memory capacity of a computing node is 10 GB and the average memory resource occupied by each virtual machine running on it over a past period is 2 GB, then if 3 such virtual machines run on the computing node, its memory resource utilization is 60%. Let the weights of the CPU and MEM dimensions be a and b respectively, let σ(Util_CPU) denote the variance of the CPU utilization of the computing nodes, and let σ(Util_MEM) denote the variance of the MEM utilization of the computing nodes; the load balancing index value may then be a×σ(Util_CPU) + b×σ(Util_MEM).
In a preferred embodiment of the present invention, the management node may specifically obtain the following resource usage data of the cluster system: the CPU utilization rate and the memory utilization rate of each computing node, and the network resource utilization rate of each link between each computing node; the load balancing index value may be calculated using the following formula;
T(S) = ω1×σ(Util_CPU) + ω2×σ(Util_MEM) + ω3×σ(Util_NET) + ω4×δ_VM(S) + ω5×cost(S)
where T(S) represents the load balancing index value, and S represents the topological mapping relationship between the virtual machines and the computing nodes, that is, which virtual machines run on which computing nodes;
σ denotes the variance: σ(Util_CPU) represents the variance of the CPU utilization of the computing nodes, σ(Util_MEM) represents the variance of the memory utilization of the computing nodes, and σ(Util_NET) represents the variance of the network resource utilization of the links between the computing nodes, where a link refers to a link connecting any two computing nodes in the cluster system;
δ_VM(S) represents the average weighted delay of all virtual machines running in the cluster system;
cost(S) represents the mapping cost of the cluster system, and specifically may refer to the mapping cost of the cluster system after a virtual machine migration relative to before the migration. In this step, when the load balancing index value is calculated in order to determine whether load balancing is necessary, no virtual machine migration has been performed yet, so cost(S) is 0; equivalently, the formula may simply omit the term ω5×cost(S).
ω1, ω2, ω3, ω4, and ω5 are weighting coefficients, which can be set manually according to actual needs.
Calculating the load balancing index value from the five parameters σ(Util_CPU), σ(Util_MEM), σ(Util_NET), δ_VM(S), and cost(S) reflects the load balancing degree of the cluster system more accurately. The smaller the T(S) calculated by this formula, the better the load balance; the larger the T(S), the worse the load balance. When T(S) is greater than the load balancing threshold, load balancing is required.
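A minimal sketch of evaluating this formula, assuming the variances, the average weighted delay, and the mapping cost have already been computed as separate values (the function name, signature, and default weights are illustrative, not from the patent):

```python
def load_balancing_index(var_cpu, var_mem, var_net,
                         avg_weighted_delay, mapping_cost,
                         w=(0.3, 0.3, 0.2, 0.1, 0.1)):
    """T(S) = w1*sigma(Util_CPU) + w2*sigma(Util_MEM) + w3*sigma(Util_NET)
              + w4*delta_VM(S) + w5*cost(S).

    Before any migration has been simulated, mapping_cost is 0, so the
    cost term vanishes, matching the pre-migration form of the formula.
    """
    w1, w2, w3, w4, w5 = w
    return (w1 * var_cpu + w2 * var_mem + w3 * var_net
            + w4 * avg_weighted_delay + w5 * mapping_cost)
```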
In some embodiments of the present invention, δ_VM(S) and cost(S) may be left out, and the load balancing index value may be calculated without one or both of these parameters, for example: T(S) = ω1×σ(Util_CPU) + ω2×σ(Util_MEM) + ω3×σ(Util_NET). In other embodiments, other parameters may be added to the above calculation formula according to actual needs. In some embodiments, the standard deviation may be used instead of the variance, in which case the symbol σ in the above formula refers to the standard deviation.
Next, how to calculate the load balancing index value according to the above formula is further described by taking the example that σ in the above formula represents the standard deviation.
(1) Calculating the standard deviation σ(Util_CPU) of the CPU utilization of the computing nodes.
Let the actual CPU usage of the computing node PM_i numbered i be a_i and its CPU capacity be c_i; the CPU utilization of computing node PM_i is then ρ_i = a_i / c_i. Assuming that there are n computing nodes in the cluster system, the average CPU utilization μ_cpu of the n computing nodes is:
μ_cpu = (1/n) × Σ_{i=1..n} ρ_i
Correspondingly, the standard deviation of the CPU utilization of the n computing nodes is:
σ(Util_CPU) = sqrt( (1/n) × Σ_{i=1..n} (ρ_i − μ_cpu)² )
(2) Calculating the standard deviation σ(Util_MEM) of the memory utilization of the computing nodes.
Let the actual memory usage of the computing node PM_i numbered i be a_i and its memory capacity be c_i; the memory utilization of computing node PM_i is then ρ_i = a_i / c_i. Assuming that there are n computing nodes in the cluster system, the average memory utilization μ_mem of the n computing nodes is:
μ_mem = (1/n) × Σ_{i=1..n} ρ_i
Correspondingly, the standard deviation of the memory utilization of the n computing nodes is:
σ(Util_MEM) = sqrt( (1/n) × Σ_{i=1..n} (ρ_i − μ_mem)² )
(3) Calculating the standard deviation σ(Util_NET) of the resource utilization of the links.
The set of links in the cluster system can be divided into core links and edge links. Let LK_core denote the set of core links and LK_edge denote the set of edge links, and assume that LK_core contains n links and LK_edge contains m links. Let the resource utilization of the i-th link in LK_core be v_i; the average link resource utilization of LK_core is then:
μ_core = (1/n) × Σ_{i=1..n} v_i
and the standard deviation of the link resource utilization of LK_core is:
σ(Util_core) = sqrt( (1/n) × Σ_{i=1..n} (v_i − μ_core)² )
Similarly, let the resource utilization of the i-th link in LK_edge be v_i; the average link resource utilization of LK_edge is:
μ_edge = (1/m) × Σ_{i=1..m} v_i
and the standard deviation of the link resource utilization of LK_edge is:
σ(Util_edge) = sqrt( (1/m) × Σ_{i=1..m} (v_i − μ_edge)² )
The standard deviation of the total link utilization is:
σ(Util_NET) = σ(Util_core) + σ(Util_edge)
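For illustration, a small Python sketch of this step, assuming the link sets are plain lists of per-link utilization values and that the population standard deviation is used (that choice is an assumption):

```python
from statistics import pstdev

def network_utilization_stddev(core_links, edge_links):
    """sigma(Util_NET) = sigma(Util_core) + sigma(Util_edge), where each
    term is the standard deviation of the per-link resource utilization
    within that link set."""
    return pstdev(core_links) + pstdev(edge_links)

print(network_utilization_stddev(
    core_links=[0.8, 0.6, 0.7],        # utilization of the core links
    edge_links=[0.2, 0.1, 0.3, 0.2]))  # utilization of the edge links
```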
(4) Calculating the average weighted delay δ_VM(S) of all virtual machines in the cluster system.
Let the network communication delay between virtual machine vm_i, running on computing node pm(vm_i), and virtual machine vm_j, running on computing node pm(vm_j), be d_{i,j}, and let λ_{i,j} be an empirically set weighting coefficient for this pair. The average weighted delay of all virtual machines in the cluster system is then the average of the weighted delays λ_{i,j}×d_{i,j} taken over all pairs (vm_i, vm_j) of virtual machines in the cluster system.
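A sketch of this calculation in Python. The plain arithmetic mean over all virtual machine pairs is an assumption (the patent text, as translated here, only states that the pairwise delays are weighted and averaged), and the dictionaries and names are illustrative:

```python
from itertools import combinations

def average_weighted_delay(vm_ids, delay, weight):
    """delta_VM(S): average of weight[(i, j)] * delay[(i, j)] over all
    virtual machine pairs. The arithmetic mean over pairs is an assumed
    normalization."""
    pairs = list(combinations(vm_ids, 2))
    if not pairs:
        return 0.0
    total = sum(weight[(i, j)] * delay[(i, j)] for i, j in pairs)
    return total / len(pairs)

# Two VMs on the same node (low delay) and one on a remote node.
delay = {("vm1", "vm2"): 0.1, ("vm1", "vm3"): 2.0, ("vm2", "vm3"): 2.0}
weight = {pair: 1.0 for pair in delay}
print(average_weighted_delay(["vm1", "vm2", "vm3"], delay, weight))
```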
(5) Calculating the cluster system mapping cost cost(S).
The mapping cost is calculated from PC_vm, which represents the performance loss of a virtual machine after migration, PMC_vm, which represents the performance loss of a virtual machine during migration, and PB_vm, which represents the performance improvement of a virtual machine after migration, where α and β are weighting coefficients.
The cluster system mapping cost cost(S) characterizes the influence of the virtual machine migration operations on the performance of the whole cluster system.
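Because the exact combination of these terms is not reproduced in this translation, the following sketch only illustrates one plausible reading: migration penalties weighted by α minus the post-migration benefit weighted by β, summed over the migrated virtual machines. The formula in the code is an assumption, not the patent's formula, and all names are illustrative:

```python
def mapping_cost(migrated_vms, alpha=1.0, beta=1.0):
    """Assumed illustrative combination of the cost terms:
    cost(S) = sum over migrated VMs of alpha * (PC_vm + PMC_vm) - beta * PB_vm,
    where PC_vm is the post-migration performance loss, PMC_vm the loss
    during migration, and PB_vm the post-migration improvement."""
    return sum(alpha * (vm["pc"] + vm["pmc"]) - beta * vm["pb"]
               for vm in migrated_vms)

print(mapping_cost([{"pc": 0.1, "pmc": 0.3, "pb": 0.5}]))  # -0.1
```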
After σ(Util_CPU), σ(Util_MEM), σ(Util_NET), δ_VM(S), and cost(S) have been calculated as above, they are substituted into the formula
T(S) = ω1×σ(Util_CPU) + ω2×σ(Util_MEM) + ω3×σ(Util_NET) + ω4×δ_VM(S) + ω5×cost(S)
to obtain the load balancing index value T(S).
In other embodiments of the present invention, the load balancing index value may also be calculated in other manners, which is not limited herein. Any parameter that can be used to indicate the load balancing degree of the cluster system may be used as the load balancing index value. If the calculated load balance index value is larger than the load balance threshold value, the load balance is required. This step of determining whether load balancing is required is an optional step.
110. The management node determines a plurality of virtual machines which need to be subjected to load balancing in the cluster system, and the virtual machines are distributed to run on part or all of the computing nodes in the cluster system.
When the load balancing is judged to be needed, the management node can determine a plurality of virtual machines needing the load balancing in the cluster system according to the strategy. There are various methods for determining the multiple virtual machines that need load balancing, for example:
in one embodiment, virtual machines that need load balancing may be screened from a computing resource perspective. The management node may specifically obtain computing resource usage data of each node, for example, the CPU utilization rate and the memory utilization rate may be obtained, a computing node whose computing resource utilization rate exceeds a set threshold (that is, a computing node with a higher computing resource load) may be screened out first, and a part of virtual machines selected from the screened computing nodes is determined as virtual machines that need load balancing and added to the list of virtual machines to be migrated. For example, one or more virtual machines with the highest utilization rate of computing resources may be selected from the screened computing nodes, and the virtual machines may be determined as the virtual machines that need load balancing.
In another embodiment, virtual machines that need load balancing may be screened from a network resource perspective. The management node can screen out virtual machines having a network traffic relationship from all the virtual machines running in the cluster system according to the acquired network resource usage data, and determines the virtual machines as the virtual machines needing load balancing, namely: and for all the virtual machines in the cluster system, eliminating the virtual machines which have no network traffic relation with any other virtual machine, determining the rest virtual machines as the virtual machines needing load balancing, and adding the virtual machines into a virtual machine list to be migrated.
In the embodiment of the present invention, preferably, a combination of the two manners is adopted to determine a plurality of virtual machines that need to perform load balancing in a cluster system, that is: and adding the virtual machines needing load balancing and screened from the view of computing resources and the virtual machines needing load balancing and screened from the view of network resources into a virtual machine list to be migrated to serve as the virtual machines needing load balancing finally.
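The following Python sketch combines the two screening views described above into a single to-migrate set. The data structures, threshold, and the "busiest VM per overloaded node" selection rule are illustrative assumptions used to keep the example concrete:

```python
def select_vms_for_balancing(nodes, traffic, cpu_threshold=0.8):
    """Build the set of VMs needing load balancing from both views.

    nodes: {node_id: {"cpu_util": float, "vms": {vm_id: vm_cpu_util}}}
    traffic: {(vm_a, vm_b): bytes_per_sec} observed VM-to-VM traffic.
    """
    to_migrate = set()

    # View 1: computing resources - on overloaded nodes, pick the busiest VM.
    for node in nodes.values():
        if node["cpu_util"] > cpu_threshold and node["vms"]:
            busiest = max(node["vms"], key=node["vms"].get)
            to_migrate.add(busiest)

    # View 2: network resources - keep every VM that exchanges traffic
    # with at least one other VM.
    for vm_a, vm_b in traffic:
        to_migrate.update((vm_a, vm_b))

    return to_migrate
```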
Of course, in other embodiments of the present invention, there may be other screening manners, for example, virtual machines that need to be load balanced may also be screened from the perspective of storage resources, and the virtual machines may also be added to the list of virtual machines to be migrated. The manner in which the plurality of virtual machines that need to be load balanced is determined is not limited herein.
120. The management node divides the virtual machines into at least one virtual machine cluster according to the network traffic relation among the virtual machines, so that the network traffic between each virtual machine and at least one other virtual machine in the same virtual machine cluster is greater than or equal to a network traffic threshold value, and the network traffic between each virtual machine and any other virtual machine in different virtual machine clusters is less than the network traffic threshold value.
In the embodiment of the invention, before the management node generates the migration suggestion for the virtual machines in the virtual machine list to be migrated, the management node can cluster a plurality of virtual machines needing load balancing into one or more virtual machine clusters according to the network traffic relation among the virtual machines, wherein each virtual machine cluster comprises at least one virtual machine.
The virtual machine cluster refers to a group of virtual machines with specific performance requirements, and herein, the network traffic between each virtual machine and at least one other virtual machine in the same virtual machine cluster is greater than or equal to a network traffic threshold, and the network traffic between each virtual machine and any other virtual machine in different virtual machine clusters is less than the network traffic threshold.
Equivalently, this can be understood as follows: the divided virtual machine clusters can be classified into two types according to the number of virtual machines they contain. A virtual machine cluster containing at least two virtual machines is called a first virtual machine cluster, and a virtual machine cluster containing only one virtual machine is called a second virtual machine cluster. The network traffic between any virtual machine in a first virtual machine cluster and at least one other virtual machine in the same first virtual machine cluster is greater than or equal to the network traffic threshold, whereas the network traffic between the single virtual machine in a second virtual machine cluster and any other virtual machine among the plurality of virtual machines is less than the network traffic threshold.
In the embodiment of the present invention, a network traffic affinity algorithm is adopted to divide a plurality of virtual machines requiring load balancing into at least one virtual machine cluster, which may specifically include: constructing a network communication connection topology of a plurality of virtual machines according to a network traffic relation among the plurality of virtual machines needing load balancing; then, network communication connection topology is divided, each group of virtual machines with network flow larger than or equal to a network flow threshold value among the virtual machines are divided into a virtual machine cluster, each isolated virtual machine is independently determined as a virtual machine cluster, and the network flow of each isolated virtual machine and the network flow of any other virtual machine are smaller than the network flow threshold value. Or, setting a network traffic threshold value for a network communication connection topology, and if the network traffic between two virtual machines is smaller than the network traffic threshold value, determining that a network traffic relationship does not exist between the two virtual machines, and deleting the connection relationship between the two virtual machines; after all the virtual machines in the network communication connection topology are processed, all the virtual machines are naturally divided into a plurality of groups, wherein each group is a virtual machine cluster. Multiple virtual machines in the same virtual machine cluster may run on the same compute node or on different compute nodes. After the virtual machine clusters are divided, a corresponding relation list of the virtual machine clusters and the host can be further generated, wherein the list comprises the corresponding relation between each virtual machine in the virtual machine clusters and the computing node of each virtual machine host, and therefore preparation is made for subsequently generating a migration suggestion.
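A minimal Python sketch of this thresholding-and-grouping step. A plain connected-components traversal over the traffic graph is assumed as the grouping method, and the names and sample values are illustrative:

```python
from collections import defaultdict

def partition_into_vm_clusters(vms, traffic, threshold):
    """Drop VM-to-VM edges whose traffic is below the threshold, then take
    connected components; each component (including each isolated VM) is
    one virtual machine cluster."""
    adjacency = defaultdict(set)
    for (a, b), flow in traffic.items():
        if flow >= threshold:
            adjacency[a].add(b)
            adjacency[b].add(a)

    clusters, seen = [], set()
    for vm in vms:
        if vm in seen:
            continue
        component, stack = set(), [vm]   # depth-first traversal
        while stack:
            cur = stack.pop()
            if cur in component:
                continue
            component.add(cur)
            stack.extend(adjacency[cur] - component)
        seen |= component
        clusters.append(component)
    return clusters

traffic = {("vm1", "vm2"): 90, ("vm2", "vm3"): 120, ("vm4", "vm5"): 5}
print(partition_into_vm_clusters(["vm1", "vm2", "vm3", "vm4", "vm5"],
                                 traffic, threshold=50))
# -> [{'vm1', 'vm2', 'vm3'}, {'vm4'}, {'vm5'}]  (order within sets may vary)
```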
130. The management node generates a migration suggestion for each virtual machine cluster, wherein the migration suggestion is used for indicating that all virtual machines included in each virtual machine cluster are migrated to or hosted on the same target computing node.
In the embodiment of the present invention, when the management node generates the virtual machine migration suggestion, the generated migration suggestion is used to instruct all virtual machines included in each virtual machine cluster to be migrated to or hosted on the same target computing node, with the virtual machine cluster as a unit. In general, the step of generating a migration suggestion for each virtual machine cluster may include:
selecting a plurality of candidate computing nodes for each virtual machine cluster, and simulating and calculating a load balance index value of the cluster system after each virtual machine cluster is migrated to or hosted by one candidate computing node in the plurality of candidate computing nodes; for each virtual machine cluster, determining a target computing node from the candidate computing nodes, so that the load balance index value of the cluster system after all virtual machines in the virtual machine cluster are migrated to or hosted at the selected target computing node is minimum; generating a migration suggestion for each virtual machine cluster, wherein the migration suggestion is used for indicating that all virtual machines included in each virtual machine cluster are migrated to or hosted on the selected target computing node.
In one embodiment, selecting a plurality of candidate compute nodes for each virtual machine cluster may include: screening out at least one computing node with the computing resource utilization rate lower than the computing resource load threshold value as a candidate computing node. In another embodiment, selecting a plurality of candidate compute nodes for each virtual machine cluster may include: the virtual machine cluster is supposed to be migrated to or hosted in one of the plurality of random computing nodes, simulation computation is carried out, and at least one computing node with a load balance index value smaller than a load threshold value after the virtual machine cluster is migrated is screened out and serves as a candidate computing node. In other embodiments, other methods may be used to screen candidate computing nodes, which are not limited herein.
For example, assume that the cluster system includes three computing nodes, denoted A, B, and C, and that one virtual machine cluster needing load balancing includes virtual machines VM1, VM2, VM3, VM4, and VM5, where VM1 and VM2 run on computing node A, VM4 and VM5 run on computing node B, and VM3 runs on computing node C. Candidate computing nodes, such as computing nodes B and C, may be screened first. Then, following the principle of concentrating all virtual machines of the same virtual machine cluster on the same computing node, the following candidate migration suggestions can be generated: a candidate migration suggestion to migrate virtual machines VM1, VM2, and VM3 onto computing node B, and a candidate migration suggestion to migrate VM1, VM2, VM4, and VM5 onto computing node C. The management node may then perform a simulation calculation for each candidate migration suggestion: the load balancing index value of the cluster system after executing each candidate migration suggestion is calculated by simulation, and the candidate migration suggestion with the smallest load balancing index value is determined as the migration suggestion to be executed. For instance, the migration cost of the candidate migration suggestion that migrates virtual machines VM1, VM2, and VM3 onto computing node B may be lower than that of the other candidate migration suggestion, and its post-migration load balancing index value may be smaller, so computing node B is determined as the target computing node and that candidate migration suggestion is determined as the migration suggestion to be executed. Finally, the determined migration suggestion is executed and the virtual machines to be migrated are migrated to the target computing node, so that some virtual machines on high-load computing nodes are migrated to low-load computing nodes while the virtual machines of the same virtual machine cluster are concentrated onto the same computing node, reducing the load on the communication links between computing nodes.
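The selection of a target computing node can be sketched as the following loop. The evaluation callback stands in for the simulated T(S) calculation described above; all names and the data structures are illustrative assumptions:

```python
def choose_target_node(vm_cluster, candidate_nodes, placement, evaluate_index):
    """For each candidate node, simulate placing every VM of the cluster on
    that node and evaluate the load balancing index T(S); the candidate
    with the smallest simulated index becomes the migration target.

    placement: {vm_id: node_id} - the current VM-to-node mapping.
    evaluate_index: callable taking a simulated placement and returning T(S).
    """
    best_node, best_index = None, float("inf")
    for node in candidate_nodes:
        simulated = dict(placement)
        for vm in vm_cluster:           # host the whole cluster on one node
            simulated[vm] = node
        index = evaluate_index(simulated)
        if index < best_index:
            best_node, best_index = node, index
    return best_node, best_index
```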
140. Sending the migration suggestion to source computing nodes of one or more virtual machine hosts included in the virtual machine cluster so that all virtual machines included in the virtual machine cluster are migrated to or hosted on the target computing node.
After the management node generates the migration suggestion for each virtual machine cluster, the management node may send the migration suggestion for each virtual machine cluster to each source computing node of one or more virtual machine hosts included in the virtual machine cluster, and each source computing node executes the migration suggestion to migrate or host all virtual machines included in one virtual machine cluster to the same target computing node.
It is understood that the foregoing solution of the embodiment of the present invention may be embodied in, for example, a computer device serving as a management node of a cluster system.
The embodiment of the invention discloses a load balancing method of virtual machines, which divides a plurality of virtual machines needing load balancing into at least one virtual machine cluster according to the network traffic relation among the virtual machines, so that the network traffic between each virtual machine and at least one other virtual machine in the same virtual machine cluster is greater than or equal to a network traffic threshold value, and the network traffic between each virtual machine and any other virtual machine in different virtual machine clusters is less than the network traffic threshold value; generating a migration suggestion for each virtual machine cluster in the plurality of virtual machine clusters, wherein the migration suggestion is used for indicating a technical scheme that all virtual machines included in each virtual machine cluster are migrated to or hosted on the same target computing node; the following technical effects are achieved:
when load balancing is performed, the influence of the network dimension is taken into account: by monitoring the network traffic relationship among the virtual machines in real time, virtual machine clusters having a network traffic relationship are identified and used as the scheduling units for subsequently generating virtual machine migration suggestions;
the virtual machines in the same virtual machine cluster can be concentrated on the same computing node, so that the traffic among the virtual machines of one virtual machine cluster no longer loads the links between the computing nodes, which reduces the network load of each link in the cluster system and helps prevent the links from becoming performance bottlenecks;
moreover, by concentrating the virtual machines of the same virtual machine cluster on the same computing node, the network bandwidth and the communication reliability among those virtual machines can be improved, so that the network performance of traffic-intensive virtual machine clusters is improved.
In order to better implement the above-mentioned aspects of the embodiments of the present invention, the following also provides related devices for implementing the above-mentioned aspects cooperatively.
Referring to fig. 3a, an embodiment of the present invention provides a management node 200, which is applied to a cluster system including a plurality of computing nodes and the management node 200, where each computing node in the plurality of computing nodes includes a hardware layer, a host running on the hardware layer, and at least one virtual machine running on the host; the management node 200 may include:
a determining module 210, configured to determine a plurality of virtual machines that need load balancing in the cluster system, where the plurality of virtual machines run on some or all of the plurality of computing nodes in the cluster system in a distributed manner;
a clustering module 220, configured to divide the multiple virtual machines into at least one virtual machine cluster according to a network traffic relationship among the multiple virtual machines, so that a network traffic between each virtual machine and at least one other virtual machine in the same virtual machine cluster is greater than or equal to a network traffic threshold, and a network traffic between each virtual machine and any other virtual machine in different virtual machine clusters is less than the network traffic threshold;
a suggestion module 230, configured to generate a migration suggestion for each virtual machine cluster, where the migration suggestion is used to indicate that all virtual machines included in each virtual machine cluster are migrated to or hosted on the same target computing node;
a sending module 240, configured to send the migration suggestion to a source computing node of one or more virtual machine hosts included in the virtual machine cluster, so that all virtual machines included in the virtual machine cluster are migrated to or hosted on the target computing node.
In some embodiments of the present invention, the clustering module 220 may include:
the construction unit is used for constructing the network communication connection topology of the virtual machines according to the network traffic relation among the virtual machines;
the dividing unit is used for dividing the network communication connection diagram, dividing each group of virtual machines with the network flow larger than or equal to a network flow threshold value into a virtual machine cluster, and independently determining each isolated virtual machine as a virtual machine cluster, wherein the network flows of the isolated virtual machine and any other virtual machine are smaller than the network flow threshold value.
In some embodiments of the present invention, the determining module 210 may be specifically configured to: screening out virtual machines with a network flow relationship from all the virtual machines running in the cluster system, and determining the virtual machines with the network flow relationship as a plurality of virtual machines needing load balancing in the cluster system.
Referring to fig. 3b, in some embodiments of the present invention, the management node 200 may further include:
an obtaining module 250, configured to obtain resource usage data of the cluster system, where the resource usage data includes a CPU utilization rate and a memory utilization rate of each computing node in the cluster system, and a network resource utilization rate of each link between each computing node;
a calculating module 260, configured to calculate a load balancing index value according to the resource usage data, where the load balancing index value is used to represent a load balancing state of the cluster system;
a determining module 270, configured to determine that the cluster system needs to perform load balancing if the load balancing index value is greater than a load balancing threshold.
In some embodiments of the present invention, the calculating module 260 may specifically be configured to calculate the load balancing index value by using the following formula;
T(S) = ω_1·σ(Util_CPU) + ω_2·σ(Util_MEM) + ω_3·σ(Util_NET) + ω_4·δ_VM(S)
where S denotes the topological mapping relationship between the virtual machines and the computing nodes, T(S) denotes the load balancing index value, σ(Util_CPU) denotes the variance of the CPU utilization of the computing nodes, σ(Util_MEM) denotes the variance of the memory utilization of the computing nodes, σ(Util_NET) denotes the variance of the network resource utilization of the links between the computing nodes, δ_VM(S) denotes the average weighted delay of all virtual machines running in the cluster system, and ω_1, ω_2, ω_3 and ω_4 are weighting coefficients.
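As a rough illustration of how this index could be evaluated, the sketch below computes T(S) from per-node and per-link utilization samples; the input layout, the default weights and the optional mapping-cost term are assumptions made for the example, not values prescribed by the embodiment.

```python
# Minimal sketch of the load balancing index T(S); inputs and weights are assumed.
from statistics import pvariance


def load_balancing_index(cpu_util, mem_util, link_util, vm_delays,
                         weights=(0.25, 0.25, 0.25, 0.25),
                         mapping_cost=0.0, cost_weight=0.0):
    """T(S) = w1*var(CPU) + w2*var(MEM) + w3*var(NET) + w4*avg_delay [+ w5*cost(S)].

    cpu_util, mem_util : utilization of each computing node
    link_util          : network resource utilization of each inter-node link
    vm_delays          : weighted delay of each virtual machine under mapping S
    """
    w1, w2, w3, w4 = weights
    index = (w1 * pvariance(cpu_util)
             + w2 * pvariance(mem_util)
             + w3 * pvariance(link_util)
             + w4 * sum(vm_delays) / len(vm_delays))
    # The variant used when simulating candidate migrations adds a mapping-cost term.
    return index + cost_weight * mapping_cost


# Example: three computing nodes, four links, five virtual machines.
t = load_balancing_index(cpu_util=[0.8, 0.3, 0.5],
                         mem_util=[0.6, 0.4, 0.5],
                         link_util=[0.7, 0.2, 0.4, 0.3],
                         vm_delays=[2.0, 1.5, 3.0, 1.0, 2.5])
```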
In some embodiments of the present invention, the suggestion module 230 may include:
the simulation calculation unit is used for selecting a plurality of candidate calculation nodes for each virtual machine cluster, and simulating and calculating the load balance index value of the cluster system after each virtual machine cluster is migrated to or hosted at one candidate calculation node in the plurality of candidate calculation nodes;
a determining unit, configured to determine, for each virtual machine cluster, a target computing node from the multiple candidate computing nodes, so that a load balancing index value of the cluster system after all virtual machines in each virtual machine cluster are migrated to or hosted by the selected target computing node is minimum;
and an advice generating unit, configured to generate a migration advice for each virtual machine cluster, where the migration advice is used to instruct all virtual machines included in each virtual machine cluster to be migrated to or hosted in the selected target computing node.
In some embodiments of the present invention, the analog computation unit may be specifically configured to compute the load balancing index value by using the following formula;
T(S) = ω_1·σ(Util_CPU) + ω_2·σ(Util_MEM) + ω_3·σ(Util_NET) + ω_4·δ_VM(S) + ω_5·cost(S)
where S denotes the topological mapping relationship between the virtual machines and the computing nodes, T(S) denotes the load balancing index value, σ(Util_CPU) denotes the variance of the CPU utilization of the computing nodes, σ(Util_MEM) denotes the variance of the memory utilization of the computing nodes, σ(Util_NET) denotes the variance of the network resource utilization of the links between the computing nodes, δ_VM(S) denotes the average weighted delay of all virtual machines running in the cluster system, cost(S) denotes the cluster system mapping cost, and ω_1, ω_2, ω_3, ω_4 and ω_5 are weighting coefficients.
It can be understood that the functions of each functional module of the management node in the embodiment of the present invention may be specifically implemented according to the method in the method embodiment shown in fig. 2a or 2b, and the specific implementation process may refer to the related description in the above method embodiment, which is not described herein again.
The management node of the embodiment of the present invention may specifically be a computer device.
As can be seen from the above, in some possible embodiments of the present invention, a management node may divide a plurality of virtual machines that need load balancing into at least one virtual machine cluster according to a network traffic relationship between the virtual machines, so that a network traffic between each virtual machine and at least one other virtual machine in the same virtual machine cluster is greater than or equal to a network traffic threshold, and a network traffic between each virtual machine and any other virtual machine in different virtual machine clusters is less than the network traffic threshold; generating a migration suggestion for each virtual machine cluster in the plurality of virtual machine clusters, wherein the migration suggestion is used for indicating that all virtual machines included in each virtual machine cluster are migrated to or hosted on the same target computing node; the following technical effects are achieved:
when load balancing is performed, the influence of the network dimension is taken into account: by monitoring the network traffic relationship among the virtual machines in real time, virtual machine clusters having a network traffic relationship are identified and used as the scheduling units for subsequently generating virtual machine migration suggestions;
the virtual machines in the same virtual machine cluster can be concentrated on the same computing node, so that the traffic among the virtual machines of one virtual machine cluster no longer loads the links between the computing nodes, which reduces the network load of each link in the cluster system and helps prevent the links from becoming performance bottlenecks;
moreover, by concentrating the virtual machines of the same virtual machine cluster on the same computing node, the network bandwidth and the communication reliability among those virtual machines can be improved, so that the network performance of traffic-intensive virtual machine clusters is improved.
Referring to fig. 1, an embodiment of the present invention further provides a cluster system, where the cluster system includes a plurality of computing nodes 320 and a management node 310, and the management node 310 is a management node as shown in the embodiment of fig. 3a or fig. 3 b. The compute node 320 includes a hardware layer 3201, a host 3202 running on the hardware layer, and at least one virtual machine VM running on the host. The computing node 320 is configured to receive the migration suggestion sent by the management node 310, and execute a virtual machine migration operation according to an indication of the migration suggestion. The cluster system 300 may specifically be a data center or a virtualized cluster.
When the cluster system performs load balancing, the influence of the network dimension is taken into account: by monitoring the network traffic relationship among the virtual machines in real time, virtual machine clusters having a network traffic relationship are identified and used as the scheduling units for subsequently generating virtual machine migration suggestions;
the virtual machines in the same virtual machine cluster can be concentrated on the same computing node, so that the traffic among the virtual machines of one virtual machine cluster no longer loads the links between the computing nodes, which reduces the network load of each link in the cluster system and helps prevent the links from becoming performance bottlenecks;
moreover, by concentrating the virtual machines of the same virtual machine cluster on the same computing node, the network bandwidth and the communication reliability among those virtual machines can be improved, so that the network performance of traffic-intensive virtual machine clusters is improved.
In order to better understand the technical solutions provided by the embodiments of the present invention, the following describes the technical solutions of the embodiments of the present invention by taking an implementation mode in a specific scenario as an example.
Referring to fig. 4a, it is assumed that the virtualized cluster includes four computing nodes, on each of which a host runs, denoted host1, host2, host3 and host4 respectively; the computing nodes of host1 and host2 are connected through switch2, the computing nodes of host3 and host4 are connected through switch3, and switch2 is connected with switch3 through switch1. Assuming that there is high traffic among the App Server, DB Server, Mail Server and Web Server of four different applications, and that these four virtual machines are distributed on different computing nodes, the network loads of the core links switch2-switch1 and switch1-switch3 are prone to become bottlenecks, and the network performance between the virtual machines is also affected.
According to the core idea of the present invention, if a virtual machine cluster with a traffic relationship is migrated to the same computing node, computing resources permitting, for example as shown in fig. 4b where the four virtual machines with high mutual traffic, namely the App Server, DB Server, Mail Server and Web Server, are regarded as one virtual machine cluster and are all migrated to Host1, then not only can the load on the core links be reduced, but the network performance between the virtual machines can also be improved.
Referring to fig. 5, a data center 500 according to an embodiment of the present invention is shown, where the data center 500 includes a management node 502 and a plurality of computing nodes 501. In which several computing nodes 501 may form a Cluster (Cluster) system 510 (as shown by 510a or 510 b), and the functional entity of the management node 502 for managing the Cluster system 510 may be considered as a part of the Cluster system 510, in other words, each Cluster system may also be considered to have its own management node. In addition, there may be two management nodes 502, one as the active management node, and the other as the standby management node. The method provided by the embodiment of the invention can be used for the whole data center (at this time, the whole data center is regarded as a cluster system), and can also be used for one cluster system.
A cluster system is a logical concept: it consists of multiple computing nodes and provides advanced functions at cluster granularity, such as resource scheduling and high availability. A computing node is a physical host (i.e. a physical server) that provides stand-alone virtualization functions, on which multiple virtual machines typically run. The management node manages the whole data center, provides functions such as life cycle management of the computing nodes and the virtual machines, resource scheduling management, and operation and maintenance, and is the brain of the whole data center. Both the computing nodes and the management node are computer devices.
As shown in fig. 6a, which is a schematic diagram of a load balancing apparatus 600 deployed on a management node 502, the load balancing apparatus 600 may include:
the performance data collection module 601 is configured to collect performance data of the entire data center in real time, such as resource usage data, which may include usage data of computing resources of each computing node, such as CPUs and memories, usage data of computing resources of each virtual machine, network resource usage data of each virtual machine, a network traffic relationship between the virtual machines, a network load of a link between each two computing nodes, and the like, and may be stored as historical data. The performance data acquisition module 601 corresponds to the acquisition module 250 described in the embodiment of fig. 3 b.
A DRS control module 602, configured to control and schedule the other modules, so as to dynamically manage the resources of the data center or of a virtualized cluster within it. For example, it obtains the performance data collected by the performance data acquisition module 601, and issues control instructions and the corresponding performance data to the other modules to instruct them to perform the corresponding processing. The DRS control module 602 corresponds to the determining module 210, the calculating module 260 and the determining module 270 described in the embodiment of fig. 3a or 3b.
The virtual machine traffic affinity identifying module 603 is configured to identify all virtual machines having a network traffic relationship with each other in the cluster system according to the performance data acquired by the performance data acquiring module 601, and further configured to divide the plurality of virtual machines requiring load balancing into at least one virtual machine cluster according to the network traffic relationship between the plurality of virtual machines requiring load balancing, so that a network traffic between each virtual machine and at least one other virtual machine in the same virtual machine cluster is greater than or equal to a network traffic threshold, and a network traffic between each virtual machine and any other virtual machine in different virtual machine clusters is smaller than the network traffic threshold. The virtual machine traffic affinity identification module 603 corresponds to the clustering module 220 described in the embodiment of fig. 3a or 3 b.
The load balancing suggestion generation module 604 may specifically include a network dimension migration suggestion generation module 604a and a computing resource dimension suggestion generation module 604b, where: a computing resource dimension suggestion generation module 604b, configured to filter virtual machines that need load balancing from a computing resource perspective, and generate candidate migration suggestions; a network dimension migration suggestion generation module 604a, configured to filter virtual machines that need load balancing from a network resource perspective, and generate candidate migration suggestions; for a detailed description, refer to the description of the embodiment of FIG. 1. The load balancing suggestion generation module 604 may correspond to the suggestion module 230 described in the embodiments of fig. 3a or 3 b.
The load balancing suggestion execution module 605 performs a benefit evaluation on the generated candidate migration suggestions, for example by checking whether the calculated load balancing index value is minimal or is smaller than a set threshold; if the benefit evaluation is passed, it executes the migration suggestion, that is, performs the virtual machine migration operation and migrates the virtual machines to be migrated to the target computing node specified by the migration suggestion. The load balancing suggestion execution module 605 corresponds to the sending module 240 described in the embodiment of fig. 3a or 3b.
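A minimal sketch of this benefit-evaluation step, under the assumption that the simulated index of each candidate suggestion has already been computed, might look as follows; the function names are illustrative stand-ins, not part of the embodiment.

```python
# Illustrative benefit evaluation: execute a candidate migration suggestion only if
# its simulated load balancing index value is the smallest and falls below the
# configured threshold. Names are hypothetical stand-ins for module 605.

def evaluate_and_execute(candidates, threshold, execute_migration):
    """candidates: list of (suggestion, simulated_index_value) pairs."""
    if not candidates:
        return None
    suggestion, index_value = min(candidates, key=lambda item: item[1])
    if index_value < threshold:          # benefit evaluation passed
        execute_migration(suggestion)    # migrate the VMs to the target node
        return suggestion
    return None                          # no worthwhile migration this round
```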
Fig. 6b is a schematic diagram of a cluster resource management system running on a management node 502 of the cluster system in this embodiment of the present invention. The performance data acquisition module 601 of the load balancing apparatus 600 described above corresponds to the input data processing subsystem in fig. 6b, the DRS control module 602 corresponds to the control subsystem in fig. 6b, the load balancing suggestion generation module 604 corresponds to the algorithm subsystem in fig. 6b (in particular, the load balancing algorithm therein), and the load balancing suggestion execution module 605 corresponds to the output subsystem in fig. 6 b. The virtual machine traffic affinity identifying module 603 is a module newly added to the cluster resource management system in the embodiment of the present invention, and also corresponds to the algorithm subsystem in fig. 6 b.
As can be seen from the above, by deploying the load balancing device 600, the management node 502 can implement resource scheduling management on the cluster system, and when load balancing is performed, the influence of network dimensionality is considered, and by monitoring the network traffic relationship between the virtual machines in real time, a virtual machine cluster having the network traffic relationship is partitioned and used as a scheduling unit for subsequently generating a virtual machine migration suggestion.
Referring to fig. 6a and fig. 7, a process of executing load balancing scheduling by the management node 502 deployed with the load balancing apparatus 600 is shown, for example, for a virtualized cluster system, where the load balancing scheduling process may include:
701: the management node 502 of the virtualization cluster enables the DRS function of the load balancing apparatus 600;
702: the DRS control module 602 obtains performance data from the performance data acquisition module 601 periodically or at scheduled times, including, for example, the topology information of the cluster network, the performance data of the CPU, memory and network, and the configured parameters and rules;
703: the DRS control module 602 executes the load balancing algorithm and calculates the load balancing index value of the virtualized cluster according to performance data such as the CPU, memory, network topology and bandwidth;
704: the DRS control module 602 judges whether load balancing is required according to the calculated load balancing index value: if the load balancing index value of the cluster is less than or equal to the load balancing threshold, the cluster is in a balanced state and the control flow ends; if the load balancing index value is greater than the load balancing threshold, the cluster system is in an unbalanced state, load balancing is required, and the next step is performed;
705: the DRS control module 602 instructs the load balancing suggestion generation module 604, together with the virtual machine traffic affinity identification module 603, to screen out the virtual machines to be load balanced from the computing resource dimension and the network resource dimension respectively, so as to form a list of virtual machines to be migrated;
706: the DRS control module 602 executes the virtual machine traffic affinity identification algorithm by using the virtual machine traffic affinity identification module 603, and partitions the virtual machines in the list to be migrated into a plurality of virtual machine clusters to be migrated;
707: the DRS control module 602 instructs the load balancing suggestion generation module 604 to screen out at least one computing node whose computing resource utilization rate is lower than the computing resource load threshold as candidate computing nodes, and to select a target computing node from the candidate computing nodes for each virtual machine cluster, so that the load balancing index value of the cluster system is minimal after all virtual machines in the virtual machine cluster are migrated to or hosted at the selected target computing node;
708: the DRS control module 602 instructs the load balancing suggestion generation module 604 to generate a migration suggestion for each virtual machine cluster, where the migration suggestion instructs that all virtual machines of each virtual machine cluster be migrated to or hosted on the same target computing node;
709: the DRS control module 602 instructs the load balancing suggestion execution module 605 to send the migration suggestion of each virtual machine cluster to the source computing nodes hosting the virtual machines of that cluster, so that all virtual machines of each virtual machine cluster are migrated to the same target computing node;
then, the next iteration control is entered. In the embodiment of the present invention, the management node 502 performs load balancing periodically or at regular time, and performs one round of load balancing in each iteration control period.
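The following sketch summarizes one iteration of this control loop in Python; every helper passed in is a hypothetical stand-in for the corresponding module of the load balancing apparatus, and the periodic scheduling is simplified to a sleep loop.

```python
# Simplified sketch of the periodic load balancing control loop (steps 702-709).
import time


def drs_iteration(collect_performance_data, compute_index, balance_threshold,
                  select_vms_to_migrate, cluster_by_traffic_affinity,
                  choose_target_node_for, send_migration_suggestion):
    perf = collect_performance_data()                        # 702
    if compute_index(perf) <= balance_threshold:             # 703-704: already balanced
        return
    vm_list = select_vms_to_migrate(perf)                    # 705: compute + network dims
    for vm_cluster in cluster_by_traffic_affinity(vm_list, perf):      # 706
        target, suggestion = choose_target_node_for(vm_cluster, perf)  # 707-708
        send_migration_suggestion(suggestion, target)        # 709


def drs_loop(period_seconds, **callbacks):
    while True:                                              # one round per control period
        drs_iteration(**callbacks)
        time.sleep(period_seconds)
```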
Through this process, the management node can concentrate the virtual machines of the same virtual machine cluster, which have a network traffic relationship, onto the same computing node, so that the network traffic among the virtual machines of the same virtual machine cluster no longer loads the links between the computing nodes; this reduces the network load of each link in the cluster system and helps prevent the links from becoming performance bottlenecks. Moreover, by concentrating the virtual machines of the same virtual machine cluster on the same computing node, the network bandwidth and the communication reliability among those virtual machines can be improved, so that the network performance of traffic-intensive virtual machine clusters is improved.
Referring to fig. 8, in some embodiments of the present invention, in step 703, the process by which the DRS control module 602 executes the load balancing algorithm to determine whether to perform load balancing and to generate the virtual machine migration suggestion includes:
801: converting the load balancing problem into an optimization problem, establishing a uniform optimization objective function for calculating a load balancing index value, wherein the optimization objective function is as follows:
T(S) = ω_1·σ(Util_CPU) + ω_2·σ(Util_MEM) + ω_3·σ(Util_NET) + ω_4·δ_VM(S) + ω_5·cost(S)
where S denotes the topological mapping relationship between the virtual machines and the computing nodes, T(S) denotes the load balancing index value, σ(Util_CPU) denotes the variance of the CPU utilization of the computing nodes, σ(Util_MEM) denotes the variance of the memory utilization of the computing nodes, σ(Util_NET) denotes the variance of the network resource utilization of the links between the computing nodes, δ_VM(S) denotes the weighted delay of the virtual machines, cost(S) denotes the cluster system mapping cost, and ω_1, ω_2, ω_3, ω_4 and ω_5 are weighting coefficients.
802: acquiring various performance data through a timer or manual triggering, and calculating a load balance index value by using the optimization objective function;
803: when judging that the load balancing index value is smaller than the load balancing threshold value, quitting load balancing scheduling, otherwise, judging that load balancing is needed, and continuing to execute the following steps;
804: selecting the virtual machines that need load balancing from the computing resource dimension (for example, the CPU (Central Processing Unit)) and adding them to the list of virtual machines to be migrated;
805: selecting the virtual machines that need load balancing from the network resource dimension and adding them to the list of virtual machines to be migrated;
806: traversing the list of virtual machines to be migrated, dividing one or more virtual machine clusters by using the virtual machine traffic affinity identification algorithm, and selecting a target computing node for each virtual machine cluster according to the optimization objective function;
807: selecting the virtual machine cluster and the target computing node with the minimum post-migration load balancing index value, and generating the virtual machine migration suggestion to be executed.
Through this process, the management node can concentrate the virtual machines of the same virtual machine cluster, which have a network traffic relationship, onto the same computing node, so that the network traffic among the virtual machines of the same virtual machine cluster no longer loads the links between the computing nodes; this reduces the network load of each link in the cluster system and helps prevent the links from becoming performance bottlenecks. Moreover, by concentrating the virtual machines of the same virtual machine cluster on the same computing node, the network bandwidth and the communication reliability among those virtual machines can be improved, so that the network performance of traffic-intensive virtual machine clusters is improved.
Referring to fig. 9, in some embodiments of the present invention, the step 806 of traversing the list of virtual machines to be migrated and using a virtual machine traffic affinity identification algorithm to partition one or more virtual machine clusters may specifically include:
s1: constructing a communication matrix of all virtual machines in the list of virtual machines to be migrated according to the network traffic relationship among the virtual machines;
s2: constructing a virtual machine communication connection graph according to the communication matrix of the virtual machines, for example as shown in fig. 10(a), where each circle in the graph represents a virtual machine VM, two virtual machines having a network traffic relationship are connected by a line segment, and the number on the line segment represents the amount of traffic;
s3: converting the virtual machine traffic affinity identification problem into a weighted graph partitioning problem;
s4: setting a partition threshold, cutting the line segments whose weights fall in the interval [0, h], and thereby partitioning the virtual machine communication connection graph; for example, the interval [0, 5] is set, the line segments with traffic values within [0, 5] are cut off and only the line segments with traffic values greater than 5 are retained, and, as shown in fig. 10(b), four virtual machine clusters (Cluster) are obtained, where Cluster1 includes virtual machines VM1, VM2, VM3, VM5 and VM6, Cluster2 includes virtual machine VM4, Cluster3 includes virtual machines VM9 and VM10, and Cluster4 includes virtual machines VM7, VM8, VM11 and VM12;
s5: according to the hosting relationship between the virtual machines and the computing nodes, performing a secondary division of the virtual machine communication connection graph and, as shown in fig. 10(c), generating a correspondence list between the virtual machine clusters and the computing nodes.
The above flow introduces a method for dividing virtual machine clusters, and the method can divide a plurality of virtual machines needing load balancing into a plurality of virtual machine clusters by constructing a virtual machine communication connection graph and dividing the virtual machine communication connection graph.
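As a concrete illustration of steps s1 to s4, the sketch below removes all edges whose traffic does not exceed the threshold and takes the connected components of the remaining graph as virtual machine clusters; the traffic dictionary used here is an assumed input format, not one specified by the embodiment.

```python
# Illustrative traffic-affinity clustering: cut edges with traffic in [0, threshold]
# and return the connected components of the remaining communication graph.
from collections import defaultdict


def split_into_vm_clusters(vms, traffic, threshold):
    """traffic maps a VM pair (a, b) to the network traffic between the two VMs."""
    adjacency = defaultdict(set)
    for (a, b), flow in traffic.items():
        if flow > threshold:              # keep only edges above the cut interval
            adjacency[a].add(b)
            adjacency[b].add(a)
    clusters, seen = [], set()
    for vm in vms:
        if vm in seen:
            continue
        component, queue = set(), [vm]    # an isolated VM becomes its own cluster
        while queue:
            current = queue.pop()
            if current not in component:
                component.add(current)
                queue.extend(adjacency[current] - component)
        seen |= component
        clusters.append(component)
    return clusters

# With threshold 5 and traffic data matching fig. 10, this would yield clusters such
# as {VM1, VM2, VM3, VM5, VM6} and the isolated {VM4}.
```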
Subsequently, the management node can generate a virtual machine migration suggestion according to the divided virtual machine clusters and the correspondence list between the virtual machine clusters and the computing nodes. For example, for virtual machine Cluster1, which includes virtual machines VM1, VM2 and VM3 running on computing node Host1 and virtual machines VM5 and VM6 running on computing node Host3, the following candidate migration suggestions can be generated: migrating VM1, VM2 and VM3 to computing node Host3; migrating VM5 and VM6 to computing node Host1; or migrating all of VM1, VM2, VM3, VM5 and VM6 to another computing node, such as Host2.
In summary, the technical solution of the embodiment of the present invention is introduced by taking the implementation in a specific scenario as an example, and for unclear points, reference may be made to the description of other parts in the foregoing.
As can be seen from the above, in some feasible embodiments of the present invention, when load balancing is performed the influence of the network dimension is taken into account, and the virtual machines of the same virtual machine cluster, which have a network traffic relationship, are concentrated on the same computing node, so that the network traffic among them no longer loads the links between the computing nodes; this reduces the network load of each link in the cluster system and helps prevent the links from becoming performance bottlenecks. Moreover, by concentrating the virtual machines of the same virtual machine cluster on the same computing node, the network bandwidth and the communication reliability among those virtual machines can be improved, so that the network performance of traffic-intensive virtual machine clusters is improved.
An embodiment of the present invention further provides a computer storage medium, where the computer storage medium may store a program, and the program, when executed, performs some or all of the steps of the load balancing method for a virtual machine described in the method embodiment shown in fig. 2a or 2b.
Referring to fig. 11, an embodiment of the present invention further provides a management node 900.
The management node 900 of the embodiment of the present invention may be applied to a cluster system including the management node 900 and a plurality of computing nodes, and is used as a management node in the cluster system, and it should be understood that the computing nodes in the cluster system may include a processor and a memory; in some embodiments, the memory included in the compute nodes in the cluster system stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof:
Host: serves as the management layer to perform management and allocation of the hardware resources, present a virtual hardware platform to the virtual machines, and implement scheduling and isolation of the virtual machines. The Host may be a virtual machine monitor (VMM); in addition, the VMM sometimes cooperates with one privileged virtual machine, and the combination of the two constitutes the Host. The virtual hardware platform provides various hardware resources for each virtual machine running on it, such as a virtual processor, memory, a virtual disk and a virtual network interface card. The virtual disk may correspond to a file of the Host or to a logical block device. The virtual machines run on the virtual hardware platform prepared for them by the Host, and one or more virtual machines run on each Host.
One or more virtual machines: one or more virtual computers can be simulated on one physical computer through virtual machine software, the virtual machines work like real computers, an operating system and an application program can be installed on the virtual machines, and the virtual machines can also access network resources. For applications running in a virtual machine, the virtual machine operates as if it were a real computer.
Management node 900 of embodiments of the present invention may include an input device 901 (optional), an output device 904 (optional), a processor 902, and a memory 903. Memory 903 may include both read-only memory and random access memory, among other things, and provides instructions and data to processor 902. A portion of the memory 903 may also include non-volatile random access memory (NVRAM).
The memory 903 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof:
and (3) operating instructions: including various operational instructions for performing various operations.
Operating the system: including various system programs for implementing various basic services and for handling hardware-based tasks.
In the embodiment of the present invention, the processor 902 performs the following operations by calling an operation instruction (which may be stored in an operating system) stored in the memory 903: determining a plurality of virtual machines which need to be subjected to load balancing in the cluster system, wherein the plurality of virtual machines are operated on part or all of the plurality of computing nodes in the cluster system in a distributed manner; dividing the virtual machines into at least one virtual machine cluster according to the network traffic relation among the virtual machines, so that the network traffic between each virtual machine and at least one other virtual machine in the same virtual machine cluster is greater than or equal to a network traffic threshold value, and the network traffic between each virtual machine and any other virtual machine in different virtual machine clusters is less than the network traffic threshold value; generating a migration suggestion for each virtual machine cluster, wherein the migration suggestion is used for indicating that all virtual machines included in each virtual machine cluster are migrated to or hosted on the same target computing node; and sending the migration suggestion to source computing nodes of one or more virtual machine hosts included in the virtual machine cluster, so that all virtual machines included in the virtual machine cluster are migrated to or hosted on the target computing node.
In the embodiment of the present invention, when load balancing is performed the influence of the network dimension is taken into account, and the virtual machines of the same virtual machine cluster, which have a network traffic relationship, are concentrated on the same computing node, so that the network traffic among them no longer loads the links between the computing nodes; this reduces the network load of each link in the cluster system and helps prevent the links from becoming performance bottlenecks. Moreover, by concentrating the virtual machines of the same virtual machine cluster on the same computing node, the network bandwidth and the communication reliability among those virtual machines can be improved, so that the network performance of traffic-intensive virtual machine clusters is improved.
The processor 902 controls the operation of the management node 900, and the processor 902 may also be referred to as a CPU (Central Processing Unit). The memory 903 may include both read-only memory and random access memory, and provides instructions and data to the processor 902. A portion of the memory 903 may also include non-volatile random access memory (NVRAM). In a specific application, the various components of the management node 900 are coupled together by a bus system 905, where the bus system 905 may include a power bus, a control bus and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled in the figure as the bus system 905.
The method disclosed in the above embodiments of the present invention may be applied to the processor 902, or implemented by the processor 902. The processor 902 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 902 or by instructions in the form of software. The processor 902 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or a register. The storage medium is located in the memory 903, and the processor 902 reads the information in the memory 903 and performs the steps of the above method in combination with its hardware.
Optionally, in terms of dividing the multiple virtual machines into at least one virtual machine cluster according to a network traffic relationship between the multiple virtual machines, the processor 902 is specifically configured to: constructing a network communication connection topology of the virtual machines according to the network traffic relation among the virtual machines; the network communication connection topology is divided, each group of virtual machines with the network flow larger than or equal to a network flow threshold value among the virtual machines are determined as a virtual machine cluster, each isolated virtual machine is independently determined as a virtual machine cluster, and the network flow of each isolated virtual machine and the network flow of any other virtual machine are smaller than the network flow threshold value.
Optionally, in terms of determining a plurality of virtual machines that need to be load balanced in the cluster system, the processor 902 is specifically configured to screen out virtual machines that have a network traffic relationship with each other from all virtual machines running in the cluster system, and determine that the virtual machines that have the network traffic relationship with each other are the plurality of virtual machines that need to be load balanced in the cluster system.
Optionally, before the processor 902 determines a plurality of virtual machines in the cluster system that need load balancing, the processor is further configured to: acquiring resource use data of the cluster system, wherein the resource use data comprises the CPU utilization rate and the memory utilization rate of each computing node in the cluster system and the network resource utilization rate of each link between the computing nodes; calculating a load balancing index value according to the resource usage data, wherein the load balancing index value is used for representing a load balancing state of the cluster system; and if the load balancing index value is larger than a load balancing threshold value, judging that the cluster system needs to carry out load balancing.
Optionally, in terms of calculating a load balancing index value according to the resource usage data, the processor 902 is specifically configured to:
calculating a load balance index value by adopting the following formula;
T(S) = ω_1·σ(Util_CPU) + ω_2·σ(Util_MEM) + ω_3·σ(Util_NET) + ω_4·δ_VM(S)
where S denotes the topological mapping relationship between the virtual machines and the computing nodes, T(S) denotes the load balancing index value, σ(Util_CPU) denotes the variance of the CPU utilization of the computing nodes, σ(Util_MEM) denotes the variance of the memory utilization of the computing nodes, σ(Util_NET) denotes the variance of the network resource utilization of the links between the computing nodes, δ_VM(S) denotes the average weighted delay of all virtual machines running in the cluster system, and ω_1, ω_2, ω_3 and ω_4 are weighting coefficients.
Optionally, in terms of generating a migration suggestion for each virtual machine cluster, where the migration suggestion is used to indicate that all virtual machines included in each virtual machine cluster are migrated to or hosted on the same target computing node, the processor 902 is specifically configured to: selecting a plurality of candidate computing nodes for each virtual machine cluster, and simulating and calculating a load balancing index value of the cluster system after each virtual machine cluster is migrated to or hosted by one candidate computing node; for each virtual machine cluster, determining a target computing node from the candidate computing nodes, so that the load balancing index value of the cluster system after all virtual machines in each virtual machine cluster are migrated to or hosted by the selected target computing node is minimum; and generating a migration suggestion for each virtual machine cluster, where the migration suggestion is used to indicate that all virtual machines included in each virtual machine cluster are migrated to or hosted on the selected target computing node.
Optionally, in terms of simulating and calculating a load balancing index value of the cluster system after each virtual machine cluster is migrated to or hosted by one candidate computing node, the processor 902 is specifically configured to:
calculating a load balance index value by adopting the following formula;
T(S) = ω_1·σ(Util_CPU) + ω_2·σ(Util_MEM) + ω_3·σ(Util_NET) + ω_4·δ_VM(S) + ω_5·cost(S)
where S denotes the topological mapping relationship between the virtual machines and the computing nodes, T(S) denotes the load balancing index value, σ(Util_CPU) denotes the variance of the CPU utilization of the computing nodes, σ(Util_MEM) denotes the variance of the memory utilization of the computing nodes, σ(Util_NET) denotes the variance of the network resource utilization of the links between the computing nodes, δ_VM(S) denotes the average weighted delay of all virtual machines running in the cluster system, cost(S) denotes the cluster system mapping cost, and ω_1, ω_2, ω_3, ω_4 and ω_5 are weighting coefficients.
As can be seen from the above, in some embodiments of the present invention, a management node is applied to a cluster system including the management node and a plurality of computing nodes, and can divide a plurality of virtual machines that need load balancing into at least one virtual machine cluster according to a network traffic relationship between the virtual machines in the cluster system, so that a network traffic between each virtual machine and at least one other virtual machine in the same virtual machine cluster is greater than or equal to a network traffic threshold, and a network traffic between each virtual machine and any other virtual machine in different virtual machine clusters is less than the network traffic threshold; generating a migration suggestion for each virtual machine cluster in the plurality of virtual machine clusters, wherein the migration suggestion is used for indicating that all virtual machines included in each virtual machine cluster are migrated to or hosted on the same target computing node; the following technical effects are achieved:
when load balancing is performed, the influence of the network dimension is taken into account, and the virtual machines of the same virtual machine cluster, which have a network traffic relationship, are concentrated on the same computing node, so that the network traffic among them no longer loads the links between the computing nodes; this reduces the network load of each link in the cluster system and helps prevent the links from becoming performance bottlenecks. Moreover, by concentrating the virtual machines of the same virtual machine cluster on the same computing node, the network bandwidth and the communication reliability among those virtual machines can be improved, so that the network performance of traffic-intensive virtual machine clusters is improved.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program to instruct associated hardware (e.g., a processor), the program may be stored in a computer readable storage medium, and the storage medium may include: ROM, RAM, magnetic or optical disks, and the like.
The load balancing method of the virtual machine, the related device and the cluster system provided by the embodiment of the present invention are introduced in detail, and a specific example is applied in the present document to explain the principle and the implementation of the present invention, and the description of the above embodiment is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
Claims (13)
1. A load balancing method of a virtual machine is characterized in that the method is used for a cluster system comprising a management node and a plurality of computing nodes, each computing node in the plurality of computing nodes comprises a hardware layer, a host machine running on the hardware layer and a virtual machine running on the host machine;
the method comprises the following steps:
the management node determines a plurality of virtual machines which need load balancing in the cluster system, wherein the virtual machines are distributed to run on part or all of the computing nodes in the cluster system;
dividing the virtual machines into at least one virtual machine cluster according to the network traffic relation among the virtual machines, so that the network traffic between each virtual machine and at least one other virtual machine in the same virtual machine cluster is greater than or equal to a network traffic threshold value, and the network traffic between each virtual machine and any other virtual machine in different virtual machine clusters is less than the network traffic threshold value;
generating a migration suggestion for each virtual machine cluster, wherein the migration suggestion is used for indicating that all virtual machines included in each virtual machine cluster are migrated to or hosted on the same target computing node;
sending the migration suggestion to source computing nodes of one or more virtual machine hosts included in the virtual machine cluster so that all virtual machines included in the virtual machine cluster are migrated to or hosted on the target computing node;
wherein the dividing the plurality of virtual machines into at least one virtual machine cluster according to the network traffic relationship among the plurality of virtual machines comprises:
constructing a network communication connection topology of the virtual machines according to the network traffic relation among the virtual machines;
the network communication connection topology is divided, each group of virtual machines with the network flow larger than or equal to a network flow threshold value among the virtual machines are determined as a virtual machine cluster, each isolated virtual machine is independently determined as a virtual machine cluster, and the network flow of each isolated virtual machine and the network flow of any other virtual machine are smaller than the network flow threshold value.
2. The method of claim 1, wherein the management node determining a plurality of virtual machines in the cluster system that need load balancing comprises:
screening out virtual machines with a network traffic relationship from all the virtual machines running in the cluster system, and determining the virtual machines with the network traffic relationship as a plurality of virtual machines needing load balancing in the cluster system.
3. The method according to any one of claims 1 to 2, wherein before the management node determines a plurality of virtual machines in the cluster system that need load balancing, the method further comprises:
the management node acquires resource use data of the cluster system, wherein the resource use data comprises the CPU utilization rate and the memory utilization rate of each computing node in the cluster system and the network resource utilization rate of each link between the computing nodes;
calculating a load balancing index value according to the resource usage data, wherein the load balancing index value is used for representing a load balancing state of the cluster system;
and if the load balancing index value is larger than a load balancing threshold value, judging that the cluster system needs to carry out load balancing.
4. The method of claim 3, wherein said calculating a load balancing index value based on said resource usage data comprises:
calculating a load balance index value by adopting the following formula;
T(S) = ω_1·σ(Util_CPU) + ω_2·σ(Util_MEM) + ω_3·σ(Util_NET) + ω_4·δ_VM(S)
where S denotes the topological mapping relationship between the virtual machines and the computing nodes, T(S) denotes the load balancing index value, σ(Util_CPU) denotes the variance of the CPU utilization of the computing nodes, σ(Util_MEM) denotes the variance of the memory utilization of the computing nodes, σ(Util_NET) denotes the variance of the network resource utilization of the links between the computing nodes, δ_VM(S) denotes the average weighted delay of all virtual machines running in the cluster system, and ω_1, ω_2, ω_3 and ω_4 are weighting coefficients.
5. The method according to any one of claims 1 to 2, wherein the generating, for each virtual machine cluster, a migration suggestion for instructing to migrate or host all virtual machines included in each virtual machine cluster to the same target computing node comprises:
selecting a plurality of candidate computing nodes for each virtual machine cluster, and simulating and calculating a load balance index value of the cluster system after each virtual machine cluster is migrated to or hosted by one candidate computing node in the plurality of candidate computing nodes;
for each virtual machine cluster, determining a target computing node from the candidate computing nodes, so that the load balance index value of the cluster system after all virtual machines in each virtual machine cluster are migrated to or hosted by the selected target computing node is minimum;
generating a migration suggestion for each virtual machine cluster, wherein the migration suggestion is used for indicating that all virtual machines included in each virtual machine cluster are migrated to or hosted on the selected target computing node.
6. The method of claim 5, wherein simulating the computation of the load balancing metric value for the cluster system after each virtual machine cluster is migrated to or hosted by a candidate compute node of the plurality of candidate compute nodes comprises:
calculating the load balancing index value using the following formula:
T(S) = ω_1×σ(Util_CPU) + ω_2×σ(Util_MEM) + ω_3×σ(Util_NET) + ω_4×δ_VM(S) + ω_5×cost(S)
wherein S represents the topological mapping relationship between the virtual machines and the computing nodes, T(S) represents the load balancing index value, σ(Util_CPU) represents the variance of the CPU utilization of the computing nodes, σ(Util_MEM) represents the variance of the memory utilization of the computing nodes, σ(Util_NET) represents the variance of the network resource utilization of the links between the computing nodes, δ_VM(S) represents the average weighted delay of all virtual machines running in the cluster system, cost(S) represents the mapping cost of the cluster system, and ω_1, ω_2, ω_3, ω_4 and ω_5 are weighting coefficients.
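A minimal sketch of the selection described in claims 5 and 6, assuming a caller-supplied `simulate_index(cluster, node)` callback that returns the simulated T(S), including the ω_5×cost(S) mapping-cost term, for a hypothetical placement; all names here are illustrative only:

```python
def choose_target_node(cluster, candidate_nodes, simulate_index):
    """Pick the candidate node that minimizes the simulated load balancing
    index T(S) after migrating or hosting every VM of the cluster on it.

    `simulate_index(cluster, node)` is assumed to return T(S) for the
    hypothetical placement; its internals (resource bookkeeping, cost(S)
    estimation) are deployment specific and not shown here."""
    best_node, best_index = None, float("inf")
    for node in candidate_nodes:
        index_value = simulate_index(cluster, node)
        if index_value < best_index:
            best_node, best_index = node, index_value
    return best_node, best_index

# Usage sketch with a fake simulator that simply prefers the least-loaded node.
fake_load = {"host-a": 0.8, "host-b": 0.3}
node, t = choose_target_node({"vm1", "vm2"}, ["host-a", "host-b"],
                             lambda c, n: fake_load[n] + 0.01 * len(c))
print(node)   # host-b (lowest simulated index)
```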
7. A management node for a cluster system comprising a plurality of compute nodes and the management node, each compute node in the plurality of compute nodes comprising a hardware layer, a host running on top of the hardware layer, and a virtual machine running on top of the host; the management node includes:
a determining module, configured to determine a plurality of virtual machines that need load balancing in the cluster system, where the plurality of virtual machines run on some or all of the plurality of computing nodes in the cluster system in a distributed manner;
the clustering module is used for dividing the virtual machines into at least one virtual machine cluster according to the network traffic relation among the virtual machines, so that the network traffic between each virtual machine and at least one other virtual machine in the same virtual machine cluster is greater than or equal to a network traffic threshold value, and the network traffic between each virtual machine and any other virtual machine in different virtual machine clusters is less than the network traffic threshold value;
the system comprises a suggestion module, a migration module and a migration module, wherein the suggestion module is used for generating a migration suggestion for each virtual machine cluster, and the migration suggestion is used for indicating that all virtual machines included in each virtual machine cluster are migrated to or hosted on the same target computing node;
a sending module, configured to send the migration suggestion to the source computing node hosting one or more virtual machines included in the virtual machine cluster, so that all virtual machines included in the virtual machine cluster are migrated to or hosted on the target computing node;
wherein the clustering module comprises:
the construction unit is used for constructing the network communication connection topology of the virtual machines according to the network traffic relation among the virtual machines;
the dividing unit is used for dividing the network communication connection topology, determining each group of virtual machines whose mutual network traffic is greater than or equal to a network traffic threshold as one virtual machine cluster, and determining each isolated virtual machine as a virtual machine cluster on its own, wherein the network traffic between the isolated virtual machine and any other virtual machine is less than the network traffic threshold.
8. The management node of claim 7, wherein the determining module is specifically configured to: screen out the virtual machines that have a network traffic relationship from all the virtual machines running in the cluster system, and determine the virtual machines with the network traffic relationship as the plurality of virtual machines needing load balancing in the cluster system.
9. The management node according to any of claims 7 to 8, further comprising:
an obtaining module, configured to obtain resource usage data of the cluster system, where the resource usage data includes a CPU utilization rate and a memory utilization rate of each computing node in the cluster system, and a network resource utilization rate of each link between each computing node;
a calculation module, configured to calculate a load balancing index value according to the resource usage data, where the load balancing index value is used to represent a load balancing state of the cluster system;
and the judging module is used for judging that the cluster system needs to carry out load balancing if the load balancing index value is greater than a load balancing threshold value.
10. The management node according to claim 9, wherein the calculation module is specifically configured to calculate the load balancing index value using the following formula:
T(S) = ω_1×σ(Util_CPU) + ω_2×σ(Util_MEM) + ω_3×σ(Util_NET) + ω_4×δ_VM(S)
wherein S represents the topological mapping relationship between the virtual machines and the computing nodes, T(S) represents the load balancing index value, σ(Util_CPU) represents the variance of the CPU utilization of the computing nodes, σ(Util_MEM) represents the variance of the memory utilization of the computing nodes, σ(Util_NET) represents the variance of the network resource utilization of the links between the computing nodes, δ_VM(S) represents the average weighted delay of all virtual machines running in the cluster system, and ω_1, ω_2, ω_3 and ω_4 are weighting coefficients.
11. The management node according to any of claims 7 to 8, wherein the recommendation module comprises:
the simulation calculation unit is used for selecting a plurality of candidate computing nodes for each virtual machine cluster, and simulating and calculating the load balancing index value of the cluster system after each virtual machine cluster is migrated to or hosted on one candidate computing node in the plurality of candidate computing nodes;
a determining unit, configured to determine, for each virtual machine cluster, a target computing node from the multiple candidate computing nodes, so that a load balancing index value of the cluster system after all virtual machines in each virtual machine cluster are migrated to or hosted by the selected target computing node is minimum;
and an advice generating unit, configured to generate a migration advice for each virtual machine cluster, where the migration advice is used to instruct all virtual machines included in each virtual machine cluster to be migrated to or hosted in the selected target computing node.
12. The management node of claim 11, wherein the simulation calculation unit is specifically configured to calculate the load balancing index value using the following formula:
T(S) = ω_1×σ(Util_CPU) + ω_2×σ(Util_MEM) + ω_3×σ(Util_NET) + ω_4×δ_VM(S) + ω_5×cost(S)
wherein S represents the topological mapping relationship between the virtual machines and the computing nodes, T(S) represents the load balancing index value, σ(Util_CPU) represents the variance of the CPU utilization of the computing nodes, σ(Util_MEM) represents the variance of the memory utilization of the computing nodes, σ(Util_NET) represents the variance of the network resource utilization of the links between the computing nodes, δ_VM(S) represents the average weighted delay of all virtual machines running in the cluster system, cost(S) represents the mapping cost of the cluster system, and ω_1, ω_2, ω_3, ω_4 and ω_5 are weighting coefficients.
13. A cluster system comprising a plurality of compute nodes and a management node according to any one of claims 7 to 12;
the computing node is used for receiving the migration suggestion sent by the management node and executing the virtual machine migration operation according to the indication of the migration suggestion.
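As a purely illustrative sketch of how a source computing node might act on a received migration suggestion (claim 13); the `MigrationSuggestion` structure and the `migrate` callback are hypothetical and stand in for a hypervisor-specific live-migration interface:

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass(frozen=True)
class MigrationSuggestion:
    cluster_vms: Tuple[str, ...]   # all virtual machines of one VM cluster
    target_node: str               # the single target computing node

def apply_suggestion(suggestion: MigrationSuggestion,
                     locally_hosted: set,
                     migrate: Callable[[str, str], None]) -> None:
    """The source node migrates only the cluster VMs it currently hosts;
    VMs already on the target node (or hosted elsewhere) are left alone."""
    for vm in suggestion.cluster_vms:
        if vm in locally_hosted:
            migrate(vm, suggestion.target_node)

# Usage sketch with a stub migration call.
apply_suggestion(MigrationSuggestion(("vm1", "vm2"), "host-b"),
                 locally_hosted={"vm1"},
                 migrate=lambda vm, node: print(f"migrating {vm} -> {node}"))
```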
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410412412.6A CN104184813B (en) | 2014-08-20 | 2014-08-20 | The load-balancing method and relevant device and group system of virtual machine |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104184813A CN104184813A (en) | 2014-12-03 |
CN104184813B (en) | 2018-03-09 |
Family
ID=51965542
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410412412.6A Expired - Fee Related CN104184813B (en) | 2014-08-20 | 2014-08-20 | The load-balancing method and relevant device and group system of virtual machine |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104184813B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112711465A (en) * | 2021-03-23 | 2021-04-27 | 腾讯科技(深圳)有限公司 | Data processing method and device based on cloud platform, electronic equipment and storage medium |
Families Citing this family (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104507150B (en) * | 2014-12-23 | 2018-07-31 | 西安电子科技大学 | Virtual resource cluster-dividing method in a kind of baseband pool |
WO2016134542A1 (en) * | 2015-02-28 | 2016-09-01 | 华为技术有限公司 | Virtual machine migration method, apparatus and device |
EP3317763B1 (en) * | 2015-06-30 | 2023-11-29 | Telefonaktiebolaget LM Ericsson (PUBL) | Commissioning of virtualized entities |
CN105045667B (en) * | 2015-07-13 | 2018-11-30 | 中国科学院计算技术研究所 | A kind of resource pool management method for virtual machine vCPU scheduling |
CN105207856A (en) * | 2015-10-28 | 2015-12-30 | 广州西麦科技股份有限公司 | Load balancing system and method based on SDN virtual switch |
CN106775918A (en) * | 2015-11-23 | 2017-05-31 | 中国电信股份有限公司 | Dispatching method of virtual machine, virtual machine manager and SDN systems |
CN105760227B (en) * | 2016-02-04 | 2019-03-12 | 中国联合网络通信集团有限公司 | Resource regulating method and system under cloud environment |
CN108206838B (en) * | 2016-12-16 | 2019-10-22 | 中国移动通信有限公司研究院 | A kind of SiteServer LBS, method and device |
CN108667859A (en) * | 2017-03-27 | 2018-10-16 | 中兴通讯股份有限公司 | A kind of method and device for realizing scheduling of resource |
CN108664496B (en) * | 2017-03-29 | 2022-03-25 | 腾讯科技(深圳)有限公司 | Data migration method and device |
CN109213566B (en) * | 2017-06-29 | 2022-05-13 | 华为技术有限公司 | Virtual machine migration method, device and equipment |
CN107733701B (en) * | 2017-09-29 | 2019-11-08 | 中国联合网络通信集团有限公司 | A kind of method and apparatus of deploying virtual machine |
CN110119300A (en) * | 2018-02-06 | 2019-08-13 | 北京京东尚科信息技术有限公司 | The load-balancing method and device of dummy unit cluster |
CN109117227A (en) * | 2018-08-01 | 2019-01-01 | 长沙拓扑陆川新材料科技有限公司 | A kind of method that user interface generates block function chain |
CN109254829A (en) * | 2018-08-29 | 2019-01-22 | 郑州云海信息技术有限公司 | A kind of compatibility rule verification method and device based on load balancing |
CN109039943B (en) * | 2018-09-14 | 2022-03-18 | 迈普通信技术股份有限公司 | Flow allocation method and device, network system and SDN controller |
CN110958182B (en) * | 2018-09-26 | 2023-04-28 | 华为技术有限公司 | Communication method and related equipment |
US11182362B2 (en) * | 2019-01-16 | 2021-11-23 | Kabushiki Kaisha Toshiba | Calculating device, data base system, calculation system, calculation method, and storage medium |
CN109936627A (en) * | 2019-02-21 | 2019-06-25 | 山东浪潮云信息技术有限公司 | A kind of automaticdata equalization methods and tool based on hadoop |
CN110275759A (en) * | 2019-06-21 | 2019-09-24 | 长沙学院 | A kind of virtual machine cluster dynamic deployment method |
CN110928661B (en) * | 2019-11-22 | 2023-06-16 | 北京浪潮数据技术有限公司 | Thread migration method, device, equipment and readable storage medium |
CN111619473B (en) * | 2020-05-29 | 2023-04-25 | 重庆长安汽车股份有限公司 | Automobile static power supply management system and management method |
CN111737083A (en) * | 2020-06-22 | 2020-10-02 | 中国银行股份有限公司 | VMware cluster resource monitoring method and device |
CN112866131B (en) * | 2020-12-30 | 2023-04-28 | 神州绿盟成都科技有限公司 | Traffic load balancing method, device, equipment and medium |
CN113630383B (en) * | 2021-07-08 | 2023-03-28 | 杨妍茜 | Edge cloud cooperation method and device |
CN113535411B (en) * | 2021-09-17 | 2022-03-01 | 阿里云计算有限公司 | Resource scheduling method, equipment and system |
CN113821349A (en) * | 2021-09-28 | 2021-12-21 | 维沃移动通信有限公司 | Load balancing method and device |
CN114048006B (en) * | 2021-11-29 | 2024-10-29 | 中国电信股份有限公司 | Virtual machine dynamic migration method, device and storage medium |
CN114035906B (en) * | 2021-12-13 | 2024-09-13 | 中国电信股份有限公司 | Virtual machine migration method and device, electronic equipment and storage medium |
CN114422517A (en) * | 2022-01-24 | 2022-04-29 | 广东三合电子实业有限公司 | Server load balancing system and method thereof |
CN114726909A (en) * | 2022-03-15 | 2022-07-08 | 阿里云计算有限公司 | Cloud service migration information processing method, device, equipment, medium and product |
CN114466019B (en) * | 2022-04-11 | 2022-09-16 | 阿里巴巴(中国)有限公司 | Distributed computing system, load balancing method, device and storage medium |
CN114996022B (en) * | 2022-07-18 | 2024-03-08 | 山西华美远东科技有限公司 | Multi-channel available big data real-time decision-making system |
CN117632519B (en) * | 2024-01-24 | 2024-05-03 | 深圳市活力天汇科技股份有限公司 | Method and device for equalizing and adjusting fragmented data, medium and electronic equipment |
CN118316881B (en) * | 2024-05-11 | 2024-10-29 | 陕西东泽瑞科技开发有限公司 | Big data network communication coordination method and system |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102082692A (en) * | 2011-01-24 | 2011-06-01 | 华为技术有限公司 | Method and equipment for migrating virtual machines based on network data flow direction, and cluster system |
EP2717158A1 (en) * | 2012-08-21 | 2014-04-09 | Huawei Technologies Co., Ltd. | Method and device for integrating virtualized cluster, and virtualized cluster system |
CN103812895A (en) * | 2012-11-12 | 2014-05-21 | 华为技术有限公司 | Scheduling method, management nodes and cloud computing cluster |
CN102984137A (en) * | 2012-11-14 | 2013-03-20 | 江苏南开之星软件技术有限公司 | Multi-target server scheduling method based on multi-target genetic algorithm |
CN103677958A (en) * | 2013-12-13 | 2014-03-26 | 华为技术有限公司 | Virtualization cluster resource scheduling method and device |
Also Published As
Publication number | Publication date |
---|---|
CN104184813A (en) | 2014-12-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104184813B (en) | The load-balancing method and relevant device and group system of virtual machine | |
US11226846B2 (en) | Systems and methods of host-aware resource management involving cluster-based resource pools | |
ES2939689T3 (en) | Method and device for performing resource planning | |
US9329889B2 (en) | Rapid creation and reconfiguration of virtual machines on hosts | |
US10474488B2 (en) | Configuration of a cluster of hosts in virtualized computing environments | |
Chen et al. | RIAL: Resource intensity aware load balancing in clouds | |
US10205771B2 (en) | System and method for deploying an application in a computer system | |
EP3253027B1 (en) | Resource allocation method and apparatus for virtual machines | |
EP2629490A1 (en) | Optimizing traffic load in a communications network | |
CN104270416A (en) | Load balancing control method and management node | |
WO2016134542A1 (en) | Virtual machine migration method, apparatus and device | |
KR20140022922A (en) | Native cloud computing via network segmentation | |
CN104216784B (en) | Focus balance control method and relevant apparatus | |
US20210373971A1 (en) | Cross-cluster load balancer | |
US20210211391A1 (en) | Automated local scaling of compute instances | |
US20150350055A1 (en) | Shared resource contention | |
US11093288B2 (en) | Systems and methods for cluster resource balancing in a hyper-converged infrastructure | |
US20180026853A1 (en) | System and method for determining resources utilization in a virtual network | |
US20160094424A1 (en) | Virtual Machine Processor & Memory Resource Coordinator | |
Kakakhel et al. | Virtualization at the network edge: A technology perspective | |
CN111953732A (en) | Resource scheduling method and device in cloud computing system | |
Kanniga Devi et al. | Load monitoring and system-traffic-aware live VM migration-based load balancing in cloud data center using graph theoretic solutions | |
Bhardwaj et al. | Impact of factors affecting pre-copy virtual machine migration technique for cloud computing | |
US20220138001A1 (en) | Measuring host utilization in a datacenter | |
Wood | Improving data center resource management, deployment, and availability with virtualization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20180309; Termination date: 20190820 |