CN114518933A - Method and system for realizing hot migration of virtual machine across CPUs (central processing units) - Google Patents
- Publication number
- CN114518933A (application number CN202111611039.3A)
- Authority
- CN
- China
- Prior art keywords
- cpu
- model
- node
- configuration
- nova
- Prior art date
- 2021-12-27
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
Abstract
The invention discloses a method and a system for realizing live migration of virtual machines across CPUs, belonging to the field of cloud computing. The method comprises the following specific steps: S1, configuring the set of added computing nodes that support live migration across CPU models; S2, automatically deploying the virtual machine management program Libvirt and other basic dependent software for the computing nodes in the cluster according to the specified software source and version; S3, acquiring node CPU configuration information and performing adaptive calculation and analysis; S4, automatically deploying and initializing the nova-compute service for the computing nodes in the cluster. The method realizes migration of virtual machines between computing nodes with different CPU models in the cloud management platform, both when a platform is newly deployed and when computing nodes are newly added to an existing platform.
Description
Technical Field
The invention discloses a method and a system for realizing live migration of virtual machines across CPUs, and relates to the technical field of cloud computing.
Background
Most current cloud computing and cloud service providers rely on the open source project OpenStack as the cloud management base platform to deploy and manage public and private clouds. The OpenStack Nova component, a core component of the OpenStack project, provides the computing capacity of the cloud platform. A conventional cloud computing cluster generally consists of management nodes, computing nodes and storage nodes. Each computing node is a physical machine that runs the OpenStack Nova-compute service and related services, providing upper-layer compute management for virtual machines. Underneath the Nova-compute service, different virtual machine management tools are invoked to drive virtualization technologies such as the kernel KVM, Xen, VMware ESX and QEMU. Taking Libvirt as an example, it calls the kernel KVM to manage the virtual machines in the cloud platform.
With the continuous expansion of cluster scale and growing service complexity, a cluster may contain computing-node physical machines with several different CPU models. In this case, physical machines of the same model are generally placed in the same host aggregate, so that virtual machines can be live-migrated among the nodes of that aggregate. However, a host aggregate carries more attribute information than just the CPU model; as aggregate information is refined further, a large number of host aggregates must be planned, which increases maintenance cost, and if the cluster is small, such refined aggregates each contain only a few hosts and lose their classification value. Therefore, in actual production, the CPU information is usually not added to host aggregates as an attribute, which means computing nodes with different CPU models may coexist in the same host aggregate. When a virtual machine is live-migrated, the Nova-compute service calls the Libvirt compareCPU API to verify whether the CPU features of the virtual machine allow migration to the target computing node, so live migration between two computing nodes with different CPU models may fail.
The live migration function of a virtual machine is affected by the CPU configuration in the virtual machine startup file. Libvirt mainly supports three CPU modes: host-passthrough, host-model and custom. In principle, the host-passthrough mode requires the instruction sets of the source and destination nodes to be completely consistent; the host-model mode allows slight differences between the instruction sets of the source and destination nodes; the custom mode allows larger differences between the instruction sets of the source and destination nodes (subject to the actual configuration in the virtual machine startup xml file). When live migration is not needed, the host-passthrough mode is recommended; for the host-model mode, if CPUs of different models can be divided by host aggregates, a host-aggregate filter can be used to match physical machines of the same model during migration; when there are too many CPU models to partition conveniently with host aggregates, the custom mode is suggested. When using the custom mode, the CPU model to be used and the required/disabled CPU features must be explicitly specified, but different host CPUs support different model types and physical CPU characteristics, so automatically and accurately selecting and configuring the CPU model and CPU features becomes increasingly important.
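For reference, the three modes take roughly the following forms in the domain xml (a minimal illustrative sketch; the model name and feature flags are placeholders, not values prescribed by the invention):
<!-- host-passthrough: expose the host CPU to the guest verbatim -->
<cpu mode='host-passthrough'/>
<!-- host-model: the closest named CPU model plus extra host features -->
<cpu mode='host-model'/>
<!-- custom: an explicitly named model with required/disabled features -->
<cpu mode='custom' match='exact'>
<model fallback='forbid'>Westmere</model>
<feature policy='require' name='aes'/>
<feature policy='disable' name='hle'/>
</cpu>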
The libvirt group of the OpenStack computing service Nova-compute configuration file supports parameters such as cpu_mode, cpu_models and cpu_model_extra_flags, which correspond respectively to the mode attribute of domain/cpu, the domain/cpu/model element, and the domain/cpu/model/feature elements in the domain xml configuration file. By adding these three configurations, the CPU-related configuration used when the virtual machine starts is specified.
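As an illustrative sketch (the model name and flags below are hypothetical placeholders), the [libvirt] section of nova.conf could then look like:
[libvirt]
cpu_mode = custom
cpu_models = Cascadelake-Server
cpu_model_extra_flags = pcid, ssbd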
Aiming mainly at the above problems, the invention provides a method and a system for realizing live migration of virtual machines across CPUs.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a method and a system for realizing live migration of virtual machines across CPUs (central processing units). The adopted technical scheme is as follows: a method for realizing live migration of virtual machines across CPUs comprises the following specific steps:
S1, configuring the set of added computing nodes that support live migration across CPU models;
S2, automatically deploying the virtual machine management program Libvirt and other basic dependent software for the computing nodes in the cluster according to the specified software source and version;
S3, acquiring node CPU configuration information and performing adaptive calculation and analysis;
S4, automatically deploying and initializing the nova-compute service for the computing nodes in the cluster.
The specific steps of S1, configuring the set of added computing nodes that support live migration across CPU models, are as follows:
S101, docking with the cloud management platform deployed by OpenStack by configuring the computing node IP;
S102, connecting to the Libvirt client of the computing node and calling the Libvirt command line to acquire node configuration information.
The step of S3, acquiring node CPU configuration information and performing adaptive calculation and analysis, comprises the following steps:
S301, executing the Libvirt/QEMU-related APIs on the computing nodes in the set to acquire node CPU configuration information: the configuration initialization program automatically acquires the CPU-related configuration xml files of all computing nodes in the cluster;
S302, parsing the CPU xml files collected from the nodes and calculating the available CPU model and CPU features information adapted to the computing node set;
S303, synchronizing the parsed configuration into the Nova-compute configuration file of each node in the set.
In S301, the configuration initialization program calls virsh -c <compute-node-ip> domcapabilities or the Libvirt API getDomainCapabilities to acquire the mode node with name='host-model' under domainCapabilities/cpu of the target computing node and saves it as an xml file.
The specific steps of S302, parsing the CPU xml files collected from the nodes and calculating the available CPU model and CPU features information adapted to the computing node set, are as follows:
S3021, selecting the lowest-configuration CPU model that all nodes can support, by combining the model name in the mode of name='host-model' in the xml file of each computing node in the cluster with an ordered CPU model list;
S3022, calculating the common CPU features set supported by the computing nodes according to the features lists in the xml files, and rendering it into the corresponding CPU xml file;
S3023, checking whether the name of the model is within the minimum available model set selected by the computing nodes.
In S3023, whether the name of the model is in the minimum available model set selected by the computing nodes is checked; if so, the Libvirt API CompareHypervisorCPU is executed on each computing node to verify whether the constructed CPU xml file is adapted to each computing node, ensuring that the configured model meets the requirements of every computing node.
The steps of S303, synchronizing the parsed configuration into the Nova-compute configuration file of each node in the set, are as follows:
S3031, verifying again on each node in the set whether the selected CPU model and features are available;
S3032, adding the selected CPU model and features configuration, converted into the cpu_mode, cpu_models and cpu_model_extra_flags configurations usable by the nova-compute service, into the nova.conf configuration file.
The specific steps of S4, automatically deploying and initializing the nova-compute service for the computing nodes in the cluster, are as follows:
S401, converting the model and features in the xml file into the corresponding parameters in the nova-compute configuration file: the cpu_mode, cpu_models and cpu_model_extra_flags configurations are added to the nova.conf configuration file;
S402, the configuration distributor overwrites /etc/nova/nova.conf and /etc/nova/nova-compute.conf on each computing node in the set, completing the initialization of the nova-compute configuration file.
A system for realizing live migration of virtual machines across CPUs specifically comprises a node configuration module, a node deployment module, a data processing module and a service deployment module:
the node configuration module: configures the set of added computing nodes that support live migration across CPU models;
the node deployment module: automatically deploys the virtual machine management program Libvirt and other basic dependent software for the computing nodes in the cluster according to the specified software source and version;
the data processing module: acquires node CPU configuration information and performs adaptive calculation and analysis;
the service deployment module: automatically deploys and initializes the nova-compute service for the computing nodes in the cluster.
The invention has the following beneficial effects: the method targets virtual machine migration between computing nodes with different CPU models, and realizes this function in the cloud management platform in two scenarios, namely a newly deployed platform and newly added computing nodes of an existing platform. On one hand, the host aggregate attributes can be simplified (there is no longer a need to avoid live migration failures caused by different CPU models by adding CPU model information to host aggregates, i.e. by assigning the same CPU model to the same host aggregate); on the other hand, the flexibility of virtual machine live migration is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic flow chart of the call chain in the cloud management platform scenario in the embodiment; FIG. 2 is a diagram illustrating the implementation mechanism of configuration initialization according to the embodiment; FIG. 3 is a flow chart of the method of the present invention.
Detailed Description
The present invention is further described below in conjunction with the following figures and specific examples so that those skilled in the art may better understand the present invention and practice it, but the examples are not intended to limit the present invention.
The first embodiment is as follows:
A method for realizing live migration of virtual machines across CPUs (central processing units) comprises the following specific steps:
S1, configuring the set of added computing nodes that support live migration across CPU models;
S2, automatically deploying the virtual machine management program Libvirt and other basic dependent software for the computing nodes in the cluster according to the specified software source and version;
S3, acquiring node CPU configuration information and performing adaptive calculation and analysis;
S4, automatically deploying and initializing the nova-compute service for the computing nodes in the cluster;
firstly, the set of computing nodes supporting live migration across CPU models is configured according to S1, adding computing nodes with different CPU attributes (suitable for newly building a cloud management platform or expanding the computing nodes of an existing cloud management platform); then, according to S2, the virtual machine management program Libvirt and other basic dependent software (kvm, qemu, openvswitch and the like) are automatically deployed for the computing nodes in the cluster according to the specified software source and version;
next, the node CPU configuration information is acquired according to S3 and adaptive calculation and analysis are performed; finally, the nova-compute service is automatically deployed and initialized for the computing nodes in the cluster: the nova-compute service is started with the CPU configuration analyzed in S3, completing the initialization of the nova service on the computing nodes;
as shown in FIG. 1, for a newly built cloud management platform, the Libvirt service of each computing node is initialized first; after the Libvirt service starts successfully, the configuration initialization system is called to initialize the Nova-compute service configuration file of each computing node; finally, the startup of the Nova-compute service is completed. For the scenario of expanding existing computing nodes, the Libvirt service of the newly expanded computing node is initialized first; after the Libvirt service starts successfully, the configuration initialization system is called to initialize the Nova-compute service configuration files of the newly expanded computing node and of the related computing nodes in the host aggregate it joins; finally, the startup of the Nova-compute service on the computing node is completed;
further, the specific steps of S1, configuring the set of added computing nodes that support live migration across CPU models, are as follows:
S101, docking with the cloud management platform deployed by OpenStack by configuring the computing node IP;
S102, connecting to the Libvirt client of the computing node and calling the Libvirt command line to acquire node configuration information;
that is, by configuring the computing node IP, the system connects to the cloud management platform deployed by OpenStack, connects to the Libvirt client of the computing node, and calls the Libvirt command line to acquire node configuration information;
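A minimal sketch of this connection step with the libvirt-python binding, assuming a qemu+tcp transport (the transport and the node IP are assumptions; qemu+ssh or TLS transports would serve equally):

import libvirt  # libvirt-python binding

def connect(node_ip):
    # Open a read-only connection to the compute node's libvirtd,
    # equivalent to: virsh -c qemu+tcp://<node-ip>/system
    return libvirt.openReadOnly('qemu+tcp://%s/system' % node_ip)

conn = connect('192.0.2.11')  # hypothetical node IP
print(conn.getHostname())     # basic node information over the Libvirt API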
the step of S3, acquiring node CPU configuration information and performing adaptive calculation and analysis, comprises the following steps:
S301, executing the Libvirt/QEMU-related APIs on the computing nodes in the set to acquire node CPU configuration information: the configuration initialization program automatically acquires the CPU-related configuration xml files of all computing nodes in the cluster;
S302, parsing the CPU xml files collected from the nodes and calculating the available CPU model and CPU features information adapted to the computing node set;
S303, synchronizing the parsed configuration into the Nova-compute configuration file of each node in the set;
as shown in FIG. 2, with this method, after the virtual machine management program is initialized successfully, the configuration initialization program automatically obtains the CPU-related configuration of all computing nodes of the cluster, analyzes a CPU configuration suitable for every computing node in the cluster, and adapts it into the Nova-compute configuration file of each computing node; when a computing node is newly added to an existing cloud management platform, by configuring the names of the newly added computing nodes and the corresponding host aggregate or mapping of the existing computing node set, the CPU configuration can be collected and analyzed for the computing node set in the mapping and distributed to the computing nodes in the set by the configuration distributor, thereby realizing mutual live migration of the virtual machines of the computing nodes within the mapping set;
in S301, the configuration initialization program calls virsh -c <compute-node-ip> domcapabilities or the Libvirt API getDomainCapabilities to acquire the mode node with name='host-model' under domainCapabilities/cpu of the target computing node and saves it as an xml file;
specifically, by calling the Libvirt API getDomainCapabilities on each computing node in the set, the mode with name='custom' under domainCapabilities/cpu is acquired and the models usable on the node (usable='yes') are obtained, namely example one; at the same time, the model name and the feature information inside the mode node with name='host-model' under domainCapabilities/cpu are acquired, namely example two; the xml is then processed and architecture information is added, and the result is saved as an xml file, namely example three;
example one:
<mode name='custom' supported='yes'>
<model usable='no'>qemu64</model>
<model usable='yes'>qemu32</model>
<model usable='no'>phenom</model>
<model usable='yes'>pentium3</model>
<model usable='yes'>pentium2</model>
<model usable='yes'>pentium</model>
<model usable='yes'>n270</model>
<model usable='yes'>kvm64</model>
<model usable='yes'>kvm32</model>
<model usable='yes'>coreduo</model>
<model usable='yes'>core2duo</model>
<model usable='no'>athlon</model>
<model usable='yes'>Westmere-IBRS</model>
<model usable='yes'>Westmere</model>
<model usable='yes'>Skylake-Server-noTSX-IBRS</model>
<model usable='yes'>Skylake-Server-IBRS</model>
<model usable='yes'>Skylake-Server</model>
<model usable='yes'>Skylake-Client-noTSX-IBRS</model>
<model usable='yes'>Skylake-Client-IBRS</model>
<model usable='yes'>Skylake-Client</model>
<model usable='yes'>SandyBridge-IBRS</model>
<model usable='yes'>SandyBridge</model>
<model usable='yes'>Penryn</model>
<model usable='no'>Opteron_G5</model>
<model usable='no'>Opteron_G4</model>
<model usable='no'>Opteron_G3</model>
<model usable='no'>Opteron_G2</model>
<model usable='yes'>Opteron_G1</model>
<model usable='yes'>Nehalem-IBRS</model>
<model usable='yes'>Nehalem</model>
<model usable='yes'>IvyBridge-IBRS</model>
<model usable='yes'>IvyBridge</model>
<model usable='no'>Icelake-Server-noTSX</model>
<model usable='no'>Icelake-Server</model>
<model usable='no'>Icelake-Client-noTSX</model>
<model usable='no'>Icelake-Client</model>
<model usable='yes'>Haswell-noTSX-IBRS</model>
<model usable='yes'>Haswell-noTSX</model>
<model usable='yes'>Haswell-IBRS</model>
<model usable='yes'>Haswell</model>
<model usable='no'>EPYC-Rome</model>
<model usable='no'>EPYC-IBPB</model>
<model usable='no'>EPYC</model>
<model usable='no'>Dhyana</model>
<model usable='yes'>Conroe</model>
<model usable='no'>Cascadelake-Server-noTSX</model>
<model usable='no'>Cascadelake-Server</model>
<model usable='yes'>Broadwell-noTSX-IBRS</model>
<model usable='yes'>Broadwell-noTSX</model>
<model usable='yes'>Broadwell-IBRS</model>
<model usable='yes'>Broadwell</model>
<model usable='yes'>486</model>
</mode>
example two:
<mode name='host-model' supported='yes'>
<model fallback='forbid'>Cascadelake-Server</model>
<vendor>Intel</vendor>
<feature policy='require' name='ss'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='tsc_adjust'/>
<feature policy='require' name='pku'/>
<feature policy='require' name='md-clear'/>
<feature policy='require' name='stibp'/>
<feature policy='require' name='xsaves'/>
<feature policy='require' name='invtsc'/>
<feature policy='disable' name='avx512vnni'/>
</mode>
example three:
<cpu>
<arch>x86_64</arch>
<model fallback='forbid'>Cascadelake-Server</model>
<vendor>Intel</vendor>
<feature policy='require' name='ss'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='tsc_adjust'/>
<feature policy='require' name='pku'/>
<feature policy='require' name='md-clear'/>
<feature policy='require' name='stibp'/>
<feature policy='require' name='xsaves'/>
<feature policy='require' name='invtsc'/>
<feature policy='disable' name='avx512vnni'/>
</cpu>
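A sketch of how the collection of examples one to three could be driven through the Libvirt API, assuming libvirt-python and a qemu+tcp transport (the URI, architecture and virt type are assumptions):

import libvirt
import xml.etree.ElementTree as ET

def collect_host_model(node_ip):
    """Fetch domainCapabilities from one compute node and return the
    <mode name='host-model'> element under /domainCapabilities/cpu."""
    conn = libvirt.openReadOnly('qemu+tcp://%s/system' % node_ip)
    caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm')
    conn.close()
    root = ET.fromstring(caps_xml)
    return root.find("./cpu/mode[@name='host-model']")

mode = collect_host_model('192.0.2.11')   # hypothetical node IP
print(mode.find('model').text)            # e.g. Cascadelake-Server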
the specific steps of S302, parsing the CPU xml files collected from the nodes and calculating the available CPU model and CPU features information adapted to the computing node set, are as follows:
S3021, selecting the lowest-configuration CPU model that all nodes can support, by combining the model name in the mode of name='host-model' in the xml file of each computing node in the cluster with an ordered CPU model list;
S3022, calculating the common CPU features set supported by the computing nodes according to the features lists in the xml files, and rendering it into the corresponding CPU xml file;
S3023, checking whether the name of the model is within the minimum available model set selected by the computing nodes;
the minimum available model set that all computing nodes in the set can match is calculated from the models with usable='yes' reported by the data collector for each computing node;
meanwhile, the data analyzer maintains an ordered CPU model list according to the release years of different CPUs (representing CPU release time and performance);
the data analyzer then takes the CPU xml file of each computing node reported by the data collector and executes the Libvirt API BaselineHypervisorCPU against the corresponding computing node to expand all the features supported by the node, namely example four;
example four:
<cpu mode='custom' match='exact'>
<model fallback='forbid'>Cascadelake-Server</model>
<vendor>Intel</vendor>
<feature policy='require' name='3dnowprefetch'/>
<feature policy='require' name='abm'/>
<feature policy='require' name='adx'/>
<feature policy='require' name='aes'/>
<feature policy='require' name='apic'/>
<feature policy='require' name='arat'/>
<feature policy='require' name='avx'/>
<feature policy='require' name='avx2'/>
<feature policy='require' name='avx512bw'/>
<feature policy='require' name='avx512cd'/>
<feature policy='require' name='avx512dq'/>
<feature policy='require' name='avx512f'/>
<feature policy='require' name='avx512vl'/>
<feature policy='disable' name='avx512vnni'/>
<feature policy='require' name='bmi1'/>
<feature policy='require' name='bmi2'/>
<feature policy='require' name='clflush'/>
<feature policy='require' name='clflushopt'/>
<feature policy='require' name='clwb'/>
<feature policy='require' name='cmov'/>
<feature policy='require' name='cx16'/>
<feature policy='require' name='cx8'/>
<feature policy='require' name='de'/>
<feature policy='require' name='erms'/>
<feature policy='require' name='f16c'/>
<feature policy='require' name='fma'/>
<feature policy='require' name='fpu'/>
<feature policy='require' name='fsgsbase'/>
<feature policy='require' name='fxsr'/>
<feature policy='require' name='hle'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='invpcid'/>
<feature policy='require' name='invtsc'/>
<feature policy='require' name='lahf_lm'/>
<feature policy='require' name='lm'/>
<feature policy='require' name='mca'/>
<feature policy='require' name='mce'/>
<feature policy='require' name='md-clear'/>
<feature policy='require' name='mmx'/>
<feature policy='require' name='movbe'/>
<feature policy='require' name='mpx'/>
<feature policy='require' name='msr'/>
<feature policy='require' name='mtrr'/>
<feature policy='require' name='nx'/>
<feature policy='require' name='pae'/>
<feature policy='require' name='pat'/>
<feature policy='require' name='pcid'/>
<feature policy='require' name='pclmuldq'/>
<feature policy='require' name='pdpe1gb'/>
<feature policy='require' name='pge'/>
<feature policy='require' name='pku'/>
<feature policy='require' name='pni'/>
<feature policy='require' name='popcnt'/>
<feature policy='require' name='pse'/>
<feature policy='require' name='pse36'/>
<feature policy='require' name='rdrand'/>
<feature policy='require' name='rdseed'/>
<feature policy='require' name='rdtscp'/>
<feature policy='require' name='rtm'/>
<feature policy='require' name='sep'/>
<feature policy='require' name='smap'/>
<feature policy='require' name='smep'/>
<feature policy='require' name='spec-ctrl'/>
<feature policy='require' name='ss'/>
<feature policy='require' name='ssbd'/>
<feature policy='require' name='sse'/>
<feature policy='require' name='sse2'/>
<feature policy='require' name='sse4.1'/>
<feature policy='require' name='sse4.2'/>
<feature policy='require' name='ssse3'/>
<feature policy='require' name='stibp'/>
<feature policy='require' name='syscall'/>
<feature policy='require' name='tsc'/>
<feature policy='require' name='tsc-deadline'/>
<feature policy='require' name='tsc_adjust'/>
<feature policy='require' name='vme'/>
<feature policy='require' name='x2apic'/>
<feature policy='require' name='xgetbv1'/>
<feature policy='require' name='xsave'/>
<feature policy='require' name='xsavec'/>
<feature policy='require' name='xsaveopt'/>
<feature policy='require' name='xsaves'/>
</cpu>
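A sketch of the two Libvirt API calls used in this analysis step, assuming libvirt-python against libvirt 4.4 or later (the connection URIs, architecture and virt type are assumptions):

import libvirt

def baseline_and_verify(node_ips, cpu_xmls):
    """Intersect the per-node CPU definitions, expand the full feature
    list of the result, then verify every node can actually run it."""
    conn = libvirt.open('qemu+tcp://%s/system' % node_ips[0])
    baseline = conn.baselineHypervisorCPU(
        None, 'x86_64', None, 'kvm', cpu_xmls,
        libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)
    conn.close()
    for ip in node_ips:
        c = libvirt.open('qemu+tcp://%s/system' % ip)
        # IDENTICAL or SUPERSET means the node can run the baseline CPU.
        result = c.compareHypervisorCPU(None, 'x86_64', None, 'kvm',
                                        baseline, 0)
        c.close()
        if result == libvirt.VIR_CPU_COMPARE_INCOMPATIBLE:
            raise RuntimeError('baseline CPU is not usable on ' + ip)
    return baseline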
further, in S3023, whether the name of the model is in the minimum available model set selected by the computing nodes is checked; if so, the Libvirt API CompareHypervisorCPU is executed on each computing node to verify whether the constructed CPU xml file is adapted to each computing node, ensuring that the configured model meets the requirements of every computing node;
further, the steps of S303, synchronizing the parsed configuration into the Nova-compute configuration file of each node in the set, are as follows:
S3031, verifying again on each node in the set whether the selected CPU model and features are available;
S3032, adding the selected CPU model and features configuration, converted into the cpu_mode, cpu_models and cpu_model_extra_flags configurations usable by the nova-compute service, into the nova.conf configuration file;
still further, the specific steps of S4, automatically deploying and initializing the nova-compute service for the computing nodes in the cluster, are as follows:
S401, converting the model and features in the xml file into the corresponding parameters in the nova-compute configuration file: the cpu_mode, cpu_models and cpu_model_extra_flags configurations are added to the nova.conf configuration file;
S402, the configuration distributor overwrites /etc/nova/nova.conf and /etc/nova/nova-compute.conf on each computing node in the set, completing the initialization of the nova-compute configuration file.
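A sketch of the S401 conversion, assuming the computed <cpu> xml as input; the '-' prefix for disabled flags is a hypothetical convention of this sketch, not a documented nova.conf syntax:

import xml.etree.ElementTree as ET

def to_nova_conf(cpu_xml):
    """Render a computed <cpu> definition as a [libvirt] section that the
    configuration distributor can write into nova.conf (S401/S402)."""
    cpu = ET.fromstring(cpu_xml)
    model = cpu.find('model').text
    flags = []
    for f in cpu.findall('feature'):
        # Hypothetical convention: '-' marks a feature to disable.
        prefix = '' if f.get('policy') == 'require' else '-'
        flags.append(prefix + f.get('name'))
    return ('[libvirt]\n'
            'cpu_mode = custom\n'
            'cpu_models = %s\n'
            'cpu_model_extra_flags = %s\n' % (model, ','.join(flags)))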
The second embodiment is as follows:
A system for realizing live migration of virtual machines across CPUs specifically comprises a node configuration module, a node deployment module, a data processing module and a service deployment module:
the node configuration module: configures the set of added computing nodes that support live migration across CPU models;
the node deployment module: automatically deploys the virtual machine management program Libvirt and other basic dependent software for the computing nodes in the cluster according to the specified software source and version;
the data processing module: acquires node CPU configuration information and performs adaptive calculation and analysis;
the service deployment module: automatically deploys and initializes the nova-compute service for the computing nodes in the cluster.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (9)
1. A method for realizing live migration of virtual machines across CPUs, characterized by comprising the following steps:
S1, configuring the set of added computing nodes that support live migration across CPU models;
S2, automatically deploying the virtual machine management program Libvirt and other basic dependent software for the computing nodes in the cluster according to the specified software source and version;
S3, acquiring node CPU configuration information and performing adaptive calculation and analysis;
S4, automatically deploying and initializing the nova-compute service for the computing nodes in the cluster.
2. The method as claimed in claim 1, characterized in that the specific steps of S1, configuring the set of added computing nodes that support live migration across CPU models, are as follows:
S101, docking with the cloud management platform deployed by OpenStack by configuring the computing node IP;
S102, connecting to the Libvirt client of the computing node and calling the Libvirt command line to acquire node configuration information.
3. The method as claimed in claim 2, characterized in that the step of S3, acquiring node CPU configuration information and performing adaptive calculation and analysis, comprises the following steps:
S301, executing the Libvirt/QEMU-related APIs on the computing nodes in the set to acquire node CPU configuration information: the configuration initialization program automatically acquires the CPU-related configuration xml files of all computing nodes in the cluster;
S302, parsing the CPU xml files collected from the nodes and calculating the available CPU model and CPU features information adapted to the computing node set;
S303, synchronizing the parsed configuration into the Nova-compute configuration file of each node in the set.
4. The method as claimed in claim 3, characterized in that in S301 the configuration initialization program calls virsh -c <compute-node-ip> domcapabilities or the Libvirt API getDomainCapabilities to acquire the mode node with name='host-model' under domainCapabilities/cpu of the target computing node and saves it as an xml file.
5. The method as claimed in claim 4, characterized in that the specific steps of S302, parsing the CPU xml files collected from the nodes and calculating the available CPU model and CPU features information adapted to the computing node set, are as follows:
S3021, selecting the lowest-configuration CPU model that all nodes can support, by combining the model name in the mode of name='host-model' in the xml file of each computing node in the cluster with an ordered CPU model list;
S3022, calculating the common CPU features set supported by the computing nodes according to the features lists in the xml files, and rendering it into the corresponding CPU xml file;
S3023, checking whether the name of the model is within the minimum available model set selected by the computing nodes.
6. The method as claimed in claim 5, characterized in that in S3023 whether the name of the model is in the minimum available model set selected by the computing nodes is checked; if so, the Libvirt API CompareHypervisorCPU is executed on each computing node to verify whether the constructed CPU xml file is adapted to each computing node, ensuring that the configured model meets the requirements of every computing node.
7. The method as claimed in claim 6, characterized in that the steps of S303, synchronizing the parsed configuration into the Nova-compute configuration file of each node in the set, are as follows:
S3031, verifying again on each node in the set whether the selected CPU model and features are available;
S3032, adding the selected CPU model and features configuration, converted into the cpu_mode, cpu_models and cpu_model_extra_flags configurations usable by the nova-compute service, into the nova.conf configuration file.
8. The method as claimed in claim 7, characterized in that the specific steps of S4, automatically deploying and initializing the nova-compute service for the computing nodes in the cluster, are as follows:
S401, converting the model and features in the xml file into the corresponding parameters in the nova-compute configuration file: the cpu_mode, cpu_models and cpu_model_extra_flags configurations are added to the nova.conf configuration file;
S402, the configuration distributor overwrites /etc/nova/nova.conf and /etc/nova/nova-compute.conf on each computing node in the set, completing the initialization of the nova-compute configuration file.
9. A system for realizing live migration of virtual machines across CPUs, characterized in that the system specifically comprises a node configuration module, a node deployment module, a data processing module and a service deployment module:
the node configuration module: configures the set of added computing nodes that support live migration across CPU models;
the node deployment module: automatically deploys the virtual machine management program Libvirt and other basic dependent software for the computing nodes in the cluster according to the specified software source and version;
the data processing module: acquires node CPU configuration information and performs adaptive calculation and analysis;
the service deployment module: automatically deploys and initializes the nova-compute service for the computing nodes in the cluster.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111611039.3A | 2021-12-27 | 2021-12-27 | Method and system for realizing hot migration of virtual machine across CPUs (central processing units)
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111611039.3A | 2021-12-27 | 2021-12-27 | Method and system for realizing hot migration of virtual machine across CPUs (central processing units)
Publications (1)
Publication Number | Publication Date |
---|---|
CN114518933A (en) | 2022-05-20
Family
ID=81596200
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111611039.3A | Method and system for realizing hot migration of virtual machine across CPUs (central processing units) | 2021-12-27 | 2021-12-27
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114518933A (en) |
- 2021-12-27: application CN202111611039.3A filed in China; published as CN114518933A, status pending
Similar Documents
Publication | Title
---|---
CN104410672B (en) | Method, the method and device of forwarding service of network function virtualization applications upgrading
US9720682B2 | Integrated software and hardware system that enables automated provisioning and configuration of a blade based on its physical location
JP7391862B2 | AUTOMATICALLY DEPLOYED INFORMATION TECHNOLOGY (IT) SYSTEMS AND METHODS
US10050850B2 | Rack awareness data storage in a cluster of host computing devices
US9563459B2 | Creating multiple diagnostic virtual machines to monitor allocated resources of a cluster of hypervisors
US9329889B2 | Rapid creation and reconfiguration of virtual machines on hosts
US12045642B2 | Virtual machine management method and apparatus for cloud platform
US8892945B2 | Efficient application management in a cloud with failures
CN115280728A | Software defined network coordination in virtualized computer systems
US20120291028A1 | Securing a virtualized computing environment using a physical network switch
JP2016507100A | Master Automation Service
US20190377592A1 | System and method for provisioning devices of a decentralized cloud
US10860375B1 | Singleton coordination in an actor-based system
WO2017121153A1 | Software upgrading method and device
CN109002354B | OpenStack-based computing resource capacity elastic expansion method and system
US11586447B2 | Configuration after cluster migration
CN114518933A (en) | Method and system for realizing hot migration of virtual machine across CPUs (central processing units)
WO2017206092A1 | Life cycle management method and management unit
US11405277B2 | Information processing device, information processing system, and network communication confirmation method
US9348672B1 | Singleton coordination in an actor-based system
US20230315506A1 | Support of virtual network and non-virtual network connectivity on the same virtual machine
US10419283B1 | Methods, systems, and computer readable mediums for template-based provisioning of distributed computing systems
US20240028322A1 | Coordinated upgrade workflow for remote sites of a distributed container orchestration system
CN110795201B | Management method and device for servers in cloud platform
CN116302926A | Test environment deployment method, system, device, electronic equipment and medium
Legal Events
Date | Code | Title | Description |
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |