CN113382077A - Micro-service scheduling method and device, computer equipment and storage medium - Google Patents
Micro-service scheduling method and device, computer equipment and storage medium
- Publication number
- CN113382077A (application number CN202110680340.3A)
- Authority
- CN
- China
- Prior art keywords
- node
- micro
- service
- target
- resources
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/02—Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
- H04L67/025—Protocols based on web technology, e.g. hypertext transfer protocol [HTTP] for remote control or remote monitoring of applications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/547—Remote procedure calls [RPC]; Web services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The application relates to a micro-service scheduling method and device, computer equipment and a storage medium. The method comprises the following steps: acquiring a node state directory of a container management system, wherein the node state directory comprises the available operating resources of each node in the container management system; acquiring the target operating resources required for normal operation of the micro-service to be scheduled; and determining a target node from the nodes in the container management system according to the target operating resources and the node state directory, and deploying the micro-service to be scheduled to the target node to run. When the micro-service is scheduled, the target node is determined by combining two influencing factors: the operating resources required for normal operation of the micro-service to be scheduled and the available operating resources of each node. In this way, micro-service scheduling efficiency is improved, the micro-services on the nodes of the container management system are distributed in a balanced manner, and each micro-service can run normally.
Description
Technical Field
The present application relates to the field of micro-service scheduling technologies, and in particular, to a micro-service scheduling method, an apparatus, a computer device, and a storage medium.
Background
With the continuous development of power grid monitoring technology, the traditional monolithic architecture can no longer fully meet certain requirements, and system applications based on the micro-service architecture have emerged in response.
A system built on the micro-service architecture is a distributed system that is divided into independent service units by business function; each micro-service focuses on completing a single business task, and multiple micro-services work together to support the normal operation of an application. The container orchestration system Kubernetes (K8s) is a container-based cluster management platform used to manage containerized applications on multiple hosts in a cloud platform. A micro-service, as a small service within an application, can be deployed and run in the containers of the nodes in the K8s system.
However, in the K8s system, when a large number of micro-service containers are placed on the same node at the same time, the node may become congested or even crash, making the micro-services carried by its containers unavailable.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a method, an apparatus, a computer device, and a storage medium for scheduling micro services, which can efficiently schedule micro services and balance load distribution of each node.
In a first aspect, a method for scheduling micro-services is provided, where the method includes:
acquiring a node state directory of the container management system, wherein the node state directory comprises available operating resources of each node in the container management system;
acquiring target operation resources required by normal operation of the micro-service to be scheduled;
and determining a target node from each node in the container management system according to the target running resource and the node state directory, and deploying the micro-service to be scheduled to the target node for running.
In one embodiment, obtaining a node state directory of a container management system includes:
acquiring node operation information of each node in a container management system;
determining available operating resources of each node according to the node operating information;
and arranging the available running resources of each node according to a preset specification to generate a node state directory.
In one embodiment, obtaining node operation information of each node in the container management system includes:
sending a state reporting instruction to each node, wherein the state reporting instruction is used for indicating each node to report node operation information;
and receiving the node operation information reported by each node.
In one embodiment, determining a target node from each node in the container management system according to a target running resource and a node state directory, and deploying the micro service to be scheduled to the target node for running includes:
acquiring at least one candidate node from a node state directory according to the available running resources and target running resources of each node in the container management system;
and determining a target node from at least one candidate node, and deploying the micro-service to be scheduled to the target node for operation.
In one embodiment, the node operation information further includes a node link, where the node link is used to indicate a transfer path between a node where the micro service to be scheduled is located and each node;
determining a target node from the at least one candidate node, comprising:
acquiring a transfer path between a node where the micro service to be scheduled is located and each candidate node;
and determining the candidate node of which the transfer path meets the preset condition as a target node.
In one embodiment, acquiring at least one candidate node from the node status directory according to the available operating resources and target operating resources of each node in the container management system includes:
and determining the nodes with the available operating resources larger than the target operating resources as candidate nodes according to the available operating resources and the target operating resources of each node in the container management system.
In one embodiment, the node operation information includes a CPU occupancy rate and a memory occupancy rate, and the available operating resources include available CPU resources and available memory resources.
In a second aspect, a micro-service scheduling apparatus is provided, the apparatus including:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a node state directory of the container management system, and the node state directory comprises available operating resources of each node in the container management system;
the second acquisition module is used for acquiring target operation resources required by normal operation of the micro-service to be scheduled;
and the scheduling module is used for determining a target node from each node in the container management system according to the target running resource and the node state directory and deploying the micro service to be scheduled to the target node for running.
In a third aspect, a computer device is provided, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps of any one of the above-mentioned microservice scheduling methods when executing the computer program.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, implements the steps of any one of the above-mentioned microservice scheduling methods in the first aspect.
The micro-service scheduling method and device, the computer equipment and the storage medium acquire a node state directory of the container management system, wherein the node state directory comprises the available operating resources of each node in the container management system; acquire the target operating resources required for normal operation of the micro-service to be scheduled; and determine a target node from the nodes in the container management system according to the target operating resources and the node state directory, and deploy the micro-service to be scheduled to the target node to run. In this method, to avoid the situation in which the micro-service to be scheduled cannot run normally after being deployed on a node, or the node becomes congested, because the required operating resources are too large and the node's available resources are too few, two influencing factors are considered comprehensively when scheduling the micro-service: the operating resources required for normal operation of the micro-service to be scheduled and the available operating resources of each node; the target node is determined on this basis. In this way, micro-service scheduling efficiency is improved, the micro-services on the nodes of the container management system are distributed in a balanced manner, and each micro-service can run normally.
Drawings
Fig. 1 is a schematic structural diagram of a container management system according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for scheduling micro services according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a method for scheduling micro services according to another embodiment of the present application;
fig. 4 is a schematic flowchart of a method for scheduling micro services according to another embodiment of the present application;
fig. 5 is a schematic flowchart of a method for scheduling micro services according to another embodiment of the present application;
fig. 6 is a schematic flowchart of a method for scheduling microservice according to another embodiment of the present application;
fig. 7 is a block diagram illustrating a micro-service scheduling apparatus according to an embodiment of the present application;
FIG. 8 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Before explaining the micro-service scheduling method of the present application, the terms and the application environment related to the present application will be explained.
Kubernetes system: the container service scheduler (kubernets, K8s) provides functions such as application deployment, maintenance and extension mechanisms, the K8s system can be used for conveniently managing the cross-machine operation of containerized applications, and the K8s system can be operated on a physical machine.
The K8s system is composed of control nodes and operation nodes according to node functions.
The control node is responsible for scheduling and managing the whole system, and comprises an API Server (service interface) component, a Scheduler (Scheduler) component and a Controller Manager (control center) component.
The API Server serves as the entry point of the K8s system; it encapsulates the create, delete, update and query operations on core objects, exposes them to external clients and internal components through RESTful (Representational State Transfer) interfaces, and persists the REST objects to etcd (a key-value storage system mainly used for shared configuration and service discovery). The Scheduler is responsible for resource scheduling of the cluster and allocates machines for newly created pods; separating this work into its own component means it can easily be replaced by other schedulers. The Controller Manager is the manager of the various controllers in the K8s system; it is the control center inside the cluster and is responsible for executing the controllers. There are generally two types of controllers: the endpoint controller and the replication controller. The Endpoint Controller regularly associates services and pods (the association information is maintained by Endpoint objects) and ensures that the mapping from services to pods is always up to date; the Replication Controller regularly associates replication controllers with pods, ensuring that the replica count defined by a replication controller always matches the number of actually running pods.
The running node may be a physical host or a Virtual Machine (VM) and is responsible for running the service containers. A service for starting and managing pods, the kubelet, runs on each running node and can be managed by the control node. The service processes running on a running node include the kubelet, kube-proxy and the docker daemon.
The kubelet is responsible for managing and controlling docker containers, for example starting/stopping them and monitoring their running state; it regularly obtains the pods assigned to its machine from etcd and starts or stops the corresponding containers according to the pod information. At the same time, it also receives HTTP requests from the API Server and reports the running status of the pods. Kube-proxy is responsible for providing proxies for pods; it can periodically obtain all services from etcd and create proxies according to the service information, and when a client pod needs to access other pods, the access request can be forwarded through the local proxy. The docker daemon listens for client requests and manages docker objects such as images, containers, networks and disks. A docker image is a read-only template that can be used to create a docker container. An image is a lightweight, executable, stand-alone software package that bundles a software runtime environment and the software developed on top of it; it contains everything needed to run the software, including code, runtime, libraries, environment variables and configuration files.
Micro-service: a variation of a software development technology Service Oriented Architecture (SOA) architectural style constructs an application as a set of loosely coupled services. In the microservice architecture, services are fine-grained and protocols are lightweight. The micro-service architecture is to split a complex application into a plurality of service modules, and each module is dedicated to a single service function to provide services to the outside. The service modules can be independently compiled and deployed, and simultaneously, the service modules can be mutually communicated and combined into a whole to provide services to the outside, so that the micro-service architecture has the advantages of flexible deployment, convenient updating and maintenance and the like, and is widely applied to a large number of applications and services of the current Internet.
The micro-service scheduling method provided by the application can be applied to the application environment shown in fig. 1. The container management system 100 includes a control center 110 and a plurality of nodes 120, and the control center 110 communicates with the plurality of nodes 120 through a network. For the K8s system architecture, the node 120 is a running node in the K8s system, and may be a virtual machine or a physical server; the control center 110 is a control node in the K8s system, and may be a terminal device.
Pods are allocated on each node 120; in a containerized environment, a pod can be viewed as a "logical host" at the application level. A pod may include multiple containers, and the applications corresponding to the containers in a pod are typically tightly coupled; each container carries a micro-service, and pods are created, started or destroyed on the node 120.
The control center 110 is responsible for resource quota management in the K8s system, ensuring that a given object does not occupy excessive system resources at any time and preventing the whole system from running abnormally or even crashing unexpectedly due to design or implementation defects in certain business processes; this plays an important role in the stable operation of the whole cluster.
Each node is provided with a plurality of containers, and each container carries a micro-service. When a large number of containers are placed on a node at the same time to carry more micro-services, the node may become congested or even crash, and the micro-services carried in the node's containers become unavailable.
Based on this, in the embodiment of the present application, when any micro service needs to be scheduled, the control center 110 may determine, according to the available operating resources on each node 120 and the operating resources required for normal operation of the micro service to be scheduled, a node to deploy the micro service to be scheduled.
In one embodiment, as shown in fig. 2, a method for scheduling micro services is provided, which is described by taking the method as an example for being applied to the control center 110 in fig. 1, and includes the following steps:
step 210: and acquiring a node state directory of the container management system, wherein the node state directory comprises available operating resources of each node in the container management system.
The container management system comprises nodes, each node is provided with micro services, and different micro services occupy different node resources. Thus, the number of micro-services that can be deployed on a node depends on the available operating resources of the node, including available Central Processing Unit (CPU) resources and available memory resources.
In a possible implementation manner, when the control center is connected with a plurality of nodes, the running resources of each node are stored, and the corresponding relation between the nodes and the running resources of the nodes is established, so that the node state directory of the container management system is obtained.
The correspondence between the nodes and the node operating resources may be expressed as a list or a topology structure, which is not limited in the present application. The control center updates the operating resources of each node as the number of micro-services deployed on the node changes, so as to determine the available operating resources of each node.
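For illustration only, the following Go sketch shows one way the node state directory described above could be represented in memory; the struct fields, units and example values are assumptions and are not specified by the present application.

```go
package main

import "fmt"

// NodeState records the available operating resources of one node.
type NodeState struct {
	Name            string
	IP              string
	AvailableCPU    int64 // assumed unit: millicores
	AvailableMemory int64 // assumed unit: MiB
}

// NodeStateDirectory is the correspondence between nodes and their available resources.
type NodeStateDirectory []NodeState

func main() {
	dir := NodeStateDirectory{
		{Name: "node-1", IP: "10.0.0.1", AvailableCPU: 1500, AvailableMemory: 2048},
		{Name: "node-2", IP: "10.0.0.2", AvailableCPU: 800, AvailableMemory: 1024},
	}
	for _, n := range dir {
		fmt.Printf("%s (%s): cpu=%dm mem=%dMiB\n", n.Name, n.IP, n.AvailableCPU, n.AvailableMemory)
	}
}
```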
Step 220: and acquiring target operation resources required by normal operation of the micro-service to be scheduled.
The micro service to be scheduled may be a newly added micro service that needs to be deployed in the container management system, or an original micro service on any node in the container management system, but the micro service needs to be scheduled to another node to run.
The target operation resources are the CPU resources and the memory resources which are occupied when the micro-service to be scheduled is normally operated.
Step 230: and determining a target node from each node in the container management system according to the target running resource and the node state directory, and deploying the micro-service to be scheduled to the target node for running.
It can be understood that if the available operating resources of a node can ensure that the micro-service to be scheduled runs normally, the micro-service to be scheduled may be deployed on that node. In other words, if the available operating resources of the node cannot guarantee the normal operation of the micro-service to be scheduled, the micro-service to be scheduled cannot be deployed on the node; otherwise, the node may become congested or even crash, and the normal operation of the other micro-services carried by the node's containers may also be affected.
In a possible implementation manner, the implementation procedure of step 230 may be: according to the CPU resources and memory resources required for running the micro-service to be scheduled and the node state directory of the container management system, a node in the node state directory whose available CPU resources are greater than the CPU resources required by the micro-service to be scheduled and whose available memory resources are greater than the memory resources required by the micro-service to be scheduled is determined as the target node, and the micro-service to be scheduled is deployed to the target node to run.
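A minimal sketch of the selection rule just described, keeping only nodes whose available CPU and memory both exceed what the micro-service to be scheduled needs; the types, field names and units are illustrative assumptions.

```go
package main

import "fmt"

// NodeState holds the available operating resources of a node (assumed units: millicores, MiB).
type NodeState struct {
	Name            string
	AvailableCPU    int64
	AvailableMemory int64
}

// ResourceRequest is the target operating resource of the micro-service to be scheduled.
type ResourceRequest struct {
	CPU    int64
	Memory int64
}

// candidates returns every node whose available resources exceed the request.
func candidates(dir []NodeState, req ResourceRequest) []NodeState {
	var fit []NodeState
	for _, n := range dir {
		if n.AvailableCPU > req.CPU && n.AvailableMemory > req.Memory {
			fit = append(fit, n)
		}
	}
	return fit
}

func main() {
	dir := []NodeState{
		{Name: "node-1", AvailableCPU: 1500, AvailableMemory: 2048},
		{Name: "node-2", AvailableCPU: 300, AvailableMemory: 512},
	}
	fmt.Println(candidates(dir, ResourceRequest{CPU: 500, Memory: 1024})) // only node-1 qualifies
}
```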
In the embodiment of the application, a node state directory of the container management system is acquired, wherein the node state directory comprises the available operating resources of each node in the container management system; the target operating resources required for normal operation of the micro-service to be scheduled are acquired; and a target node is determined from the nodes in the container management system according to the target operating resources and the node state directory, and the micro-service to be scheduled is deployed to the target node to run. In this method, to avoid the situation in which the micro-service to be scheduled cannot run normally after being deployed on a node, or the node becomes congested, because the required operating resources are too large and the node's available resources are too few, two influencing factors are considered comprehensively when scheduling the micro-service: the operating resources required for normal operation of the micro-service to be scheduled and the available operating resources of each node; the target node is determined on this basis. In this way, micro-service scheduling efficiency is improved, the micro-services on the nodes of the container management system are distributed in a balanced manner, and each micro-service can run normally.
In one embodiment, as shown in fig. 3, the implementation process of obtaining the node state directory of the container management system (step 210 above) includes the following steps:
step 310: and acquiring node operation information of each node in the container management system.
In one possible implementation manner, the node operation information includes a CPU occupancy rate and a memory occupancy rate; in another implementation manner, the node operation information includes occupied CPU resources and occupied memory resources. Both types of node operation information reflect how much of the node's resources are already occupied.
The implementation process of step 310 may be: the control center sends a state reporting instruction to each node, and the state reporting instruction is used for indicating each node to report node operation information; and the control center receives the node operation information reported by each node.
As an example, the status report information request terminal acquires the IP address information of each node in the K8s system and then sends a state reporting instruction to each node based on that IP address information. After receiving the state reporting instruction, each node in the K8s system sends its node operation information to the status report information request terminal in response, so that the status report information request terminal obtains the node operation information of each node in the K8s system.
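The following Go sketch illustrates the reporting step under stated assumptions: it supposes each node exposes a hypothetical HTTP /status endpoint on port 8080 that returns JSON occupancy figures, which the application does not prescribe; only the general pattern of "send a state reporting instruction to each IP and collect the replies" comes from the text above.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// NodeReport is an assumed shape for the reported node operation information.
type NodeReport struct {
	CPUOccupancy    float64 `json:"cpuOccupancy"`    // e.g. 0.42 means 42 %
	MemoryOccupancy float64 `json:"memoryOccupancy"` // e.g. 0.61 means 61 %
}

// collectReports sends a status request to every node IP and gathers the replies.
func collectReports(nodeIPs []string) map[string]NodeReport {
	client := &http.Client{Timeout: 3 * time.Second}
	reports := make(map[string]NodeReport)
	for _, ip := range nodeIPs {
		resp, err := client.Get("http://" + ip + ":8080/status") // hypothetical endpoint and port
		if err != nil {
			continue // unreachable node: skip it in this reporting round
		}
		var r NodeReport
		if json.NewDecoder(resp.Body).Decode(&r) == nil {
			reports[ip] = r
		}
		resp.Body.Close()
	}
	return reports
}

func main() {
	fmt.Println(collectReports([]string{"10.0.0.1", "10.0.0.2"}))
}
```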
It should be noted that the IP address information of the node may be a physical IP address of the node in the K8s system or a virtual IP address of the node, which is not limited in this application.
It can be understood that the status report information request terminal may be a control node in the K8s system (i.e., the control center 110 in the container management system 100 shown in fig. 1 of the present application), or may be a control terminal outside the K8s system, which is not limited in this application.
Step 320: and determining the available operation resources of each node according to the node operation information.
The available operation resources of the nodes comprise available CPU resources and available memory resources.
In one possible implementation, the total operating resources of each node are stored when the control center 110 establishes communication with each node. Thus, after receiving the node operation information reported by the node, the available operation resource of the node can be determined according to the total operation resource of the node and the resource occupancy rate/occupied resource reported by the node.
As an example, for any node, the available CPU resources are determined as follows:
available CPU resources = total CPU resources - total CPU resources × CPU occupancy rate; or
available CPU resources = total CPU resources - occupied CPU resources.
Similarly, for any node, the available memory resources are determined as follows:
available memory resources = total memory resources - total memory resources × memory occupancy rate; or
available memory resources = total memory resources - occupied memory resources.
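A small sketch of the two equivalent calculations above, depending on whether a node reports an occupancy rate or the amount of occupied resources; variable names and units are illustrative.

```go
package main

import "fmt"

// availableFromOccupancy applies: available = total - total × occupancy rate.
func availableFromOccupancy(total int64, occupancy float64) int64 {
	return total - int64(float64(total)*occupancy)
}

// availableFromOccupied applies: available = total - occupied.
func availableFromOccupied(total, occupied int64) int64 {
	return total - occupied
}

func main() {
	fmt.Println(availableFromOccupancy(4000, 0.25)) // 3000 millicores of CPU left
	fmt.Println(availableFromOccupied(8192, 3072))  // 5120 MiB of memory left
}
```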
Step 330: and arranging the available running resources of each node according to a preset specification to generate a node state directory.
The preset specification may include, but is not limited to, the following:
(1) the available operating resources are sorted in descending order;
(2) the available operating resources are sorted in ascending order;
(3) the available operating resources are sorted according to the order in which their corresponding nodes established connections with the control center.
The node state directory includes node information and available operating resources of the node, where the node information may include information such as a node name, a node IP address, a node identifier, and a node deployment container, and this is not limited in this application.
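As an illustration of preset specification (1), the sketch below sorts directory entries by available resources in descending order, using available CPU as the primary key and available memory as a tie-breaker; the choice of sort key is an assumption, since the application leaves it open.

```go
package main

import (
	"fmt"
	"sort"
)

// NodeState holds one directory entry (assumed units: millicores, MiB).
type NodeState struct {
	Name            string
	AvailableCPU    int64
	AvailableMemory int64
}

// sortDescending arranges the node state directory from most to least available resources.
func sortDescending(dir []NodeState) {
	sort.Slice(dir, func(i, j int) bool {
		if dir[i].AvailableCPU != dir[j].AvailableCPU {
			return dir[i].AvailableCPU > dir[j].AvailableCPU
		}
		return dir[i].AvailableMemory > dir[j].AvailableMemory
	})
}

func main() {
	dir := []NodeState{{"node-1", 800, 1024}, {"node-2", 1500, 2048}}
	sortDescending(dir)
	fmt.Println(dir) // [{node-2 1500 2048} {node-1 800 1024}]
}
```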
In the embodiment of the application, the available running resources of each node are obtained in a node reporting mode, and the available running resources of all the nodes are sorted according to the preset specification to obtain the node state directory. Therefore, the available resources of each node in the container management system can be clearly known through the node state directory.
Based on any of the embodiments described above, in one possible implementation, see fig. 4. If the micro service to be scheduled is the newly added micro service, the implementation process of deploying the micro service to be scheduled to the target node for operation (step 230) according to the target operation resource and the node state directory includes the following steps:
step 410: and acquiring at least one candidate node from the node state directory according to the available running resources and the target running resources of each node in the container management system.
The available running resources of the nodes comprise available CPU resources and available memory resources, and the target running resources of the micro service to be scheduled comprise target CPU resources and target memory resources.
In one possible implementation manner, the implementation procedure of step 410 is: and determining the nodes with the available operating resources larger than the target operating resources as candidate nodes according to the available operating resources and the target operating resources of each node in the container management system.
As an example, the available CPU resources and the available memory resources of each node are checked through the node status directory, and a node whose available CPU resources are greater than the target CPU resources and whose available memory resources are greater than the target memory resources is determined as a candidate node, so as to obtain at least one candidate node.
Step 420: and determining a target node from at least one candidate node, and deploying the micro-service to be scheduled to the target node for operation.
When there is one candidate node, that candidate node is taken as the target node; when there are two or more candidate nodes, one of them is selected as the target node.
As an example, the process of selecting a target node from two or more candidate nodes may be: taking the candidate node with the largest difference between its available operating resources and the target operating resources as the target node; or taking the candidate node with the largest difference between its available CPU resources and the target CPU resources as the target node; or taking the candidate node with the largest difference between its available memory resources and the target memory resources as the target node; or arbitrarily designating one candidate node as the target node, which is not limited in the present application.
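A sketch of one of the selection rules listed above, namely picking the candidate with the largest gap between its available CPU resources and the target CPU resources; the other rules (overall resource gap, memory gap, arbitrary choice) follow the same shape. Names and units are illustrative.

```go
package main

import "fmt"

// Candidate holds the available CPU of a candidate node (assumed unit: millicores).
type Candidate struct {
	Name         string
	AvailableCPU int64
}

// pickByCPUHeadroom returns the candidate whose available CPU exceeds the target by the most.
func pickByCPUHeadroom(cands []Candidate, targetCPU int64) (Candidate, bool) {
	if len(cands) == 0 {
		return Candidate{}, false
	}
	best := cands[0]
	for _, c := range cands[1:] {
		if c.AvailableCPU-targetCPU > best.AvailableCPU-targetCPU {
			best = c
		}
	}
	return best, true
}

func main() {
	target, ok := pickByCPUHeadroom([]Candidate{{"node-1", 1500}, {"node-2", 900}}, 500)
	fmt.Println(target, ok) // {node-1 1500} true
}
```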
After the target node is determined, a container for running the micro-service to be scheduled is created on the target node, so that the micro-service to be scheduled is deployed to the target node to run.
In the embodiment of the application, two influence factors, namely the running resources required by the normal running of the micro-service to be scheduled and the available running resources of each node, are comprehensively considered, a target node is determined from the node state directory, and the micro-service to be scheduled is deployed to the target node to run. Therefore, when the micro service to be scheduled is deployed, the micro service on each node in the container management system is ensured to be distributed evenly, and each micro service can run normally.
Based on the above embodiment, in another possible implementation process, see fig. 5. If the micro service to be scheduled is the original micro service of any node in the container management system, the implementation process of deploying the micro service to be scheduled to the target node for operation (step 230) according to the target operation resource and the node state directory includes the following steps:
step 510: and acquiring at least one candidate node from the node state directory according to the available running resources and the target running resources of each node in the container management system.
Step 510 is the same as step 410, and the specific implementation process refers to step 410, which is not described herein again.
Step 520: and acquiring a transfer path between the node where the micro service to be scheduled is located and each candidate node.
It should be noted that when the control center 110 establishes communication with the nodes, that is, when it records the micro-service deployment status of each node, it may also add the transfer paths between each node and the other nodes to the node state directory while generating the directory from the node operation information reported by each node.
That is, the node state directory of the container management system includes: a plurality of nodes, available operating resources of each node, and a transfer path between each node.
In a possible implementation manner, the implementation process of step 520 may be: and acquiring a node state directory of the container management system, and inquiring transfer paths between the node and other candidate nodes from the node state directory according to the node where the micro service to be scheduled is located.
Step 530: and determining the candidate node with the transfer path meeting the preset condition as a target node, and deploying the micro-service to be scheduled to the target node for operation.
The preset condition may be that the transfer path is shortest, or that time consumption is shortest when the micro service is scheduled based on the transfer path, which is not limited in the present application.
As an example, the process of selecting a target node from two or more candidate nodes may be: and taking the candidate node corresponding to the path with the shortest transfer path as a target node.
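A sketch of the shortest-transfer-path rule above: given a path cost from the node where the micro-service currently runs to each candidate node, choose the candidate with the smallest cost. How the cost is measured (hop count, latency, etc.) is not fixed by the application; an integer cost is assumed here.

```go
package main

import "fmt"

// shortestPathCandidate returns the candidate node with the smallest transfer-path cost.
func shortestPathCandidate(pathCost map[string]int) (string, bool) {
	best, bestCost, found := "", 0, false
	for node, cost := range pathCost {
		if !found || cost < bestCost {
			best, bestCost, found = node, cost, true
		}
	}
	return best, found
}

func main() {
	costs := map[string]int{"node-1": 3, "node-2": 1, "node-3": 2}
	fmt.Println(shortestPathCandidate(costs)) // node-2 true
}
```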
After the target node is determined, a container for running the micro-service to be scheduled is created on the target node, so that the micro-service to be scheduled is deployed to the target node to run.
In the embodiment of the application, at least one candidate node is determined from the node state directory according to the running resources required by the normal running of the micro service to be scheduled and the available running resources of each node, and the micro service to be scheduled can run normally on the candidate node. Furthermore, considering the micro-service scheduling path and time, determining a target node from the candidate nodes according to the transfer path between the node where the micro-service to be scheduled is located and each candidate node, and deploying the micro-service to be scheduled to the target node for operation. Therefore, the micro-service scheduling efficiency is improved, the micro-services on each node in the container management system are guaranteed to be distributed evenly, and each micro-service can run normally.
Based on the foregoing embodiment, as shown in fig. 6, the present application further provides another micro service scheduling method, which is described by taking the method as an example for being applied to the control center 110 in fig. 1, and includes the following steps:
step 610: sending a state reporting instruction to each node, wherein the state reporting instruction is used for indicating each node to report node operation information;
step 620: receiving node operation information reported by each node;
step 630: determining available operating resources of each node according to the node operating information;
step 640: sorting the available operating resources of each node according to a preset specification to generate a node state directory;
step 650: acquiring target operation resources required by normal operation of the micro-service to be scheduled;
step 660: acquiring at least one candidate node from a node state directory according to the available running resources and target running resources of each node in the container management system;
step 670: and determining a target node from at least one candidate node, and deploying the micro-service to be scheduled to the target node for operation.
Step 680: acquiring a transfer path between a node where the micro service to be scheduled is located and each candidate node;
step 690: and determining the candidate node with the transfer path meeting the preset condition as a target node, and deploying the micro-service to be scheduled to the target node for operation.
The specific implementation process of the above steps can refer to the embodiments corresponding to fig. 2 to 5 and is not described here again. With the above micro-service scheduling method, when scheduling a micro-service, two influencing factors, i.e., the running resources required for normal running of the micro-service to be scheduled and the available running resources of each node, are considered comprehensively so as to determine the target node. In this way, micro-service scheduling efficiency is improved, the micro-services on the nodes of the container management system are distributed in a balanced manner, and each micro-service can run normally.
It should be understood that although the various steps in the flow charts of fig. 2-6 are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least some of the steps in fig. 2-6 may include multiple steps or multiple stages, which are not necessarily performed at the same time, but may be performed at different times, which are not necessarily performed in sequence, but may be performed in turn or alternately with other steps or at least some of the other steps.
In one embodiment, as shown in fig. 7, there is provided a micro-service scheduling apparatus 700, the apparatus including: a first obtaining module 710, a second obtaining module 720, and a scheduling module 730, wherein:
a first obtaining module 710, configured to obtain a node status directory of the container management system, where the node status directory includes available operating resources of each node in the container management system;
a second obtaining module 720, configured to obtain a target operation resource required for normal operation of the micro service to be scheduled;
and the scheduling module 730 is configured to determine a target node from each node in the container management system according to the target running resource and the node state directory, and deploy the micro service to be scheduled to the target node for running.
In one embodiment, the first obtaining module 710 includes:
the first acquisition unit is used for acquiring node operation information of each node in the container management system;
the first determining unit is used for determining the available operating resources of each node according to the node operating information;
and the directory generation unit is used for sorting the available running resources of each node according to a preset specification to generate a node state directory.
In one embodiment, the first obtaining unit is further configured to:
sending a state reporting instruction to each node, wherein the state reporting instruction is used for indicating each node to report node operation information;
and receiving the node operation information reported by each node.
In one embodiment, the scheduling module 730 includes:
the second acquisition unit is used for acquiring at least one candidate node from the node state directory according to the available running resources and the target running resources of each node in the container management system;
and the second determining unit is used for determining a target node from the at least one candidate node and deploying the micro service to be scheduled to the target node for operation.
In one embodiment, the node operation information further includes a node link, where the node link is used to indicate a transfer path between a node where the micro service to be scheduled is located and each node;
a second determining unit, further configured to:
acquiring a transfer path between a node where the micro service to be scheduled is located and each candidate node;
and determining the candidate node of which the transfer path meets the preset condition as a target node.
In one embodiment, the second obtaining unit is further configured to:
and determining the nodes with the available operating resources larger than the target operating resources as candidate nodes according to the available operating resources and the target operating resources of each node in the container management system.
In one embodiment, the node operation information includes a CPU occupancy rate and a memory occupancy rate, and the available operating resources include available CPU resources and available memory resources.
In the embodiment of the present application, the micro-service scheduling apparatus 700 acquires a node state directory of the container management system, wherein the node state directory comprises the available operating resources of each node in the container management system; acquires the target operating resources required for normal operation of the micro-service to be scheduled; and determines a target node from the nodes in the container management system according to the target operating resources and the node state directory, and deploys the micro-service to be scheduled to the target node to run. In this way, the situation in which the micro-service to be scheduled cannot run normally after being deployed on a node, or the node becomes congested, because the required operating resources are too large and the node's available resources are too few, is avoided: two influencing factors, namely the operating resources required for normal operation of the micro-service to be scheduled and the available operating resources of each node, are considered comprehensively when scheduling the micro-service, and the target node is determined on this basis. Micro-service scheduling efficiency is thus improved, the micro-services on the nodes of the container management system are distributed in a balanced manner, and each micro-service can run normally.
For specific limitations of the micro service scheduling apparatus, reference may be made to the above limitations of the micro service scheduling method, which is not described herein again. The modules in the micro service scheduling apparatus may be implemented in whole or in part by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 8. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a microservice scheduling method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of any of the micro-service scheduling methods shown in the embodiments of the present application when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when executed by a processor, implements the steps of any of the microservice scheduling methods illustrated by the embodiments of the present application.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical storage, or the like. Volatile Memory can include Random Access Memory (RAM) or external cache Memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.
Claims (10)
1. A method for scheduling microservices, the method comprising:
acquiring a node state directory of a container management system, wherein the node state directory comprises available operating resources of each node in the container management system;
acquiring target operation resources required by normal operation of the micro-service to be scheduled;
and determining a target node from each node in the container management system according to the target operation resource and the node state directory, and deploying the micro service to be scheduled to the target node for operation.
2. The micro-service scheduling method of claim 1, wherein the obtaining of the node state directory of the container management system comprises:
acquiring node operation information of each node in the container management system;
determining available operating resources of each node according to the node operating information;
and sorting the available running resources of each node according to a preset specification to generate the node state directory.
3. The micro-service scheduling method according to claim 2, wherein the obtaining of the node operation information of each node in the container management system comprises:
sending a state reporting instruction to each node, wherein the state reporting instruction is used for indicating each node to report node operation information;
and receiving the node operation information reported by each node.
4. The micro-service scheduling method according to any one of claims 1 to 3, wherein the determining a target node from each node in the container management system according to the target running resource and the node state directory, and deploying the micro-service to be scheduled to the target node for running comprises:
acquiring at least one candidate node from the node state directory according to the available operating resources of each node in the container management system and the target operating resources;
and determining the target node from the at least one candidate node, and deploying the micro service to be scheduled to the target node for running.
5. The micro-service scheduling method according to claim 4, wherein the node operation information further includes a node link, and the node link is used for indicating a transfer path between the node where the micro-service to be scheduled is located and each node;
said determining said target node from said at least one candidate node comprises:
acquiring a transfer path between a node where the micro service to be scheduled is located and each candidate node;
and determining the candidate node of which the transfer path meets the preset condition as the target node.
6. The micro-service scheduling method according to claim 4, wherein the obtaining at least one candidate node from the node status directory according to the available operating resources of each node in the container management system and the target operating resources comprises:
and determining the node with the available operating resource larger than the target operating resource as the candidate node according to the available operating resource and the target operating resource of each node in the container management system.
7. The micro-service scheduling method according to any one of claims 1 to 3, wherein the node operation information includes a CPU occupancy rate and a memory occupancy rate, and the available operating resources include available CPU resources and available memory resources.
8. A micro-service scheduling apparatus, the apparatus comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a node state directory of a container management system, and the node state directory comprises available operating resources of each node in the container management system;
the second acquisition module is used for acquiring target operation resources required by normal operation of the micro-service to be scheduled;
and the scheduling module is used for determining a target node from each node in the container management system according to the target operation resource and the node state directory, and deploying the micro service to be scheduled to the target node for operation.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the micro-service scheduling method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the microservice scheduling method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110680340.3A CN113382077B (en) | 2021-06-18 | 2021-06-18 | Micro-service scheduling method, micro-service scheduling device, computer equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110680340.3A CN113382077B (en) | 2021-06-18 | 2021-06-18 | Micro-service scheduling method, micro-service scheduling device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113382077A (en) | 2021-09-10
CN113382077B CN113382077B (en) | 2023-05-23 |
Family
ID=77577796
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110680340.3A Active CN113382077B (en) | 2021-06-18 | 2021-06-18 | Micro-service scheduling method, micro-service scheduling device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113382077B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114546812A (en) * | 2022-04-27 | 2022-05-27 | 网思科技股份有限公司 | Energy consumption measuring method and device for application service, computer equipment and storage medium |
CN114816440A (en) * | 2022-05-09 | 2022-07-29 | 杭州云合智网技术有限公司 | Method for constructing distributed micro-service network controller architecture |
CN115379019A (en) * | 2022-08-19 | 2022-11-22 | 济南浪潮数据技术有限公司 | Service scheduling method, device, equipment and storage medium |
CN115460215A (en) * | 2022-08-12 | 2022-12-09 | 国网浙江省电力有限公司电力科学研究院 | Edge gateway extension method, system, device, equipment and medium |
WO2024051236A1 (en) * | 2022-09-05 | 2024-03-14 | 华为云计算技术有限公司 | Resource scheduling method and related device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180349121A1 (en) * | 2017-05-30 | 2018-12-06 | International Business Machines Corporation | Dynamic deployment of an application based on micro-services |
CN111104227A (en) * | 2019-12-28 | 2020-05-05 | 北京浪潮数据技术有限公司 | Resource control method and device of K8s platform and related components |
CN111464659A (en) * | 2020-04-27 | 2020-07-28 | 广州虎牙科技有限公司 | Node scheduling method, node pre-selection processing method, device, equipment and medium |
CN112214321A (en) * | 2020-10-10 | 2021-01-12 | 中国联合网络通信集团有限公司 | Node selection method and device for newly-added micro service and micro service management platform |
CN112269641A (en) * | 2020-11-18 | 2021-01-26 | 网易(杭州)网络有限公司 | Scheduling method, scheduling device, electronic equipment and storage medium |
CN112631680A (en) * | 2020-12-28 | 2021-04-09 | 南方电网数字电网研究院有限公司 | Micro-service container scheduling system, method, device and computer equipment |
CN112685153A (en) * | 2020-12-25 | 2021-04-20 | 广州奇盾信息技术有限公司 | Micro-service scheduling method and device and electronic equipment |
US20210182117A1 (en) * | 2019-12-17 | 2021-06-17 | Citrix Systems, Inc. | Systems and methods for service resource allocation and deployment |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180349121A1 (en) * | 2017-05-30 | 2018-12-06 | International Business Machines Corporation | Dynamic deployment of an application based on micro-services |
US20210182117A1 (en) * | 2019-12-17 | 2021-06-17 | Citrix Systems, Inc. | Systems and methods for service resource allocation and deployment |
CN111104227A (en) * | 2019-12-28 | 2020-05-05 | 北京浪潮数据技术有限公司 | Resource control method and device of K8s platform and related components |
CN111464659A (en) * | 2020-04-27 | 2020-07-28 | 广州虎牙科技有限公司 | Node scheduling method, node pre-selection processing method, device, equipment and medium |
CN112214321A (en) * | 2020-10-10 | 2021-01-12 | 中国联合网络通信集团有限公司 | Node selection method and device for newly-added micro service and micro service management platform |
CN112269641A (en) * | 2020-11-18 | 2021-01-26 | 网易(杭州)网络有限公司 | Scheduling method, scheduling device, electronic equipment and storage medium |
CN112685153A (en) * | 2020-12-25 | 2021-04-20 | 广州奇盾信息技术有限公司 | Micro-service scheduling method and device and electronic equipment |
CN112631680A (en) * | 2020-12-28 | 2021-04-09 | 南方电网数字电网研究院有限公司 | Micro-service container scheduling system, method, device and computer equipment |
Non-Patent Citations (1)
Title |
---|
左灿; 刘晓洁: "An improved Kubernetes dynamic resource scheduling method" *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114546812A (en) * | 2022-04-27 | 2022-05-27 | 网思科技股份有限公司 | Energy consumption measuring method and device for application service, computer equipment and storage medium |
CN114816440A (en) * | 2022-05-09 | 2022-07-29 | 杭州云合智网技术有限公司 | Method for constructing distributed micro-service network controller architecture |
CN115460215A (en) * | 2022-08-12 | 2022-12-09 | 国网浙江省电力有限公司电力科学研究院 | Edge gateway extension method, system, device, equipment and medium |
CN115379019A (en) * | 2022-08-19 | 2022-11-22 | 济南浪潮数据技术有限公司 | Service scheduling method, device, equipment and storage medium |
CN115379019B (en) * | 2022-08-19 | 2024-07-09 | 济南浪潮数据技术有限公司 | Service scheduling method, device, equipment and storage medium |
WO2024051236A1 (en) * | 2022-09-05 | 2024-03-14 | 华为云计算技术有限公司 | Resource scheduling method and related device |
Also Published As
Publication number | Publication date |
---|---|
CN113382077B (en) | 2023-05-23 |
Similar Documents
Publication | Title
---|---
CN113382077B (en) | Micro-service scheduling method, micro-service scheduling device, computer equipment and storage medium | |
US11704144B2 (en) | Creating virtual machine groups based on request | |
CN111880936B (en) | Resource scheduling method, device, container cluster, computer equipment and storage medium | |
CN107145380B (en) | Virtual resource arranging method and device | |
CN107547596B (en) | Cloud platform control method and device based on Docker | |
EP3522013A1 (en) | Method and system for migration of containers in a container orchestration platform between compute nodes | |
JP6658882B2 (en) | Control device, VNF placement destination selection method and program | |
US20220329651A1 (en) | Apparatus for container orchestration in geographically distributed multi-cloud environment and method using the same | |
US11467874B2 (en) | System and method for resource management | |
CN108139935A (en) | The extension of the resource constraint of service definition container | |
WO2012068867A1 (en) | Virtual machine management system and using method thereof | |
CN112231049A (en) | Computing equipment sharing method, device, equipment and storage medium based on kubernets | |
EP3442201B1 (en) | Cloud platform construction method and cloud platform | |
Eidenbenz et al. | Latency-aware industrial fog application orchestration with kubernetes | |
WO2022188578A1 (en) | Method and system for multiple services to share same gpu, and device and medium | |
US8104038B1 (en) | Matching descriptions of resources with workload requirements | |
CN113485830A (en) | Micro-service automatic capacity expansion method for power grid monitoring system | |
CN112965817B (en) | Resource management method and device and electronic equipment | |
US20190004844A1 (en) | Cloud platform construction method and cloud platform | |
CN112910937A (en) | Object scheduling method and device in container cluster, server and container cluster | |
CN112631680A (en) | Micro-service container scheduling system, method, device and computer equipment | |
CN114615268B (en) | Service network, monitoring node, container node and equipment based on Kubernetes cluster | |
CN114924888A (en) | Resource allocation method, data processing method, device, equipment and storage medium | |
JP7483059B2 (en) | DEFAULT GATEWAY MANAGEMENT METHOD, GATEWAY MANAGER, SERVER AND STORAGE MEDIUM | |
CN112261125B (en) | Centralized unit cloud deployment method, device and system |
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant