CN106407007B - Cloud resource configuration optimization method for elastic analysis process - Google Patents
- Publication number
- CN106407007B CN201610790447.2A
- Authority
- CN
- China
- Prior art keywords
- component
- response time
- node
- components
- average response
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention provides a cloud resource configuration optimization method for an elastic analysis process, comprising the following steps. Step 1: perform performance modeling of the elastic analysis process using open queuing network theory, i.e., model the whole analysis process as an open queuing network, where each component in the process corresponds to a sub-queue in the queuing network system and the output of one component is the input of another. Step 2: estimate the average response time of the whole open queuing network through queuing theory, and allocate cloud resources to each component according to the estimated average response time, so that the total number of resources is minimized while meeting the average response time required by the user. The invention can allocate resources for an analysis process whose requests arrive continuously, estimate the average response time of the system using queuing theory (yielding a more accurate estimate), derive the set of feasible server allocations for each component from queuing theory, and then obtain an approximately optimal solution with a heuristic algorithm.
Description
Technical Field
The invention relates to the technical field of cloud resource configuration optimization, in particular to a cloud resource configuration optimization method for an elastic analysis process.
Background
In recent years, the popularization and development of cloud computing and the mobile internet have driven the generation of massive data in many fields, such as bioinformatics, social networks, and intelligent transportation systems. The analysis and processing of these massive data have become a research hotspot: efficient analysis of the data enables faster and more accurate decisions. Generally, data analysis is a computation-intensive application. For a continuously arriving stream of data analysis tasks, enough resources must be allocated to handle the peak load if all requests are to be processed in time. However, this leaves resources idle and wasted when the request load is low. Cloud computing platforms share computing resources over the network and can provision resources on demand. As a result, more and more applications are deployed on cloud platforms.
Generally speaking, a data analysis application is made up of multiple components, each of which can be deployed independently; workflow is a good tool for coordinating these components. To scale the system dynamically, the conventional method is to scale the whole process as a unit, adding or removing resource instances for the entire flow. But different components in the flow have different processing capabilities: under a continuously arriving request load, one resource instance may be sufficient for one component, while another component needs more instances or becomes the system bottleneck.
With the advent of the big data age, more and more data analysis applications emerge, some of which can be modeled as workflows. Requests for such applications typically arrive continuously and carry strict response-time requirements. When these analysis processes are deployed on a cloud platform, a key problem is how to allocate cloud resources so that the number of leased virtual machines is minimized while the response time requirement is met.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a cloud resource configuration optimization method facing an elastic analysis process.
The cloud resource configuration optimization method for the elastic analysis process comprises the following steps:
step 1: performing performance modeling on the elastic analysis process, namely modeling the whole analysis process as an open queuing network, wherein each component in the process corresponds to an M/M/m sub-queue in the queuing network system, and the output of one component is the input of another component;
step 2: and estimating the average response time of the whole open queuing network, and performing cloud resource allocation on each component according to the estimated average response time, so that the total number of resources is minimum on the premise of meeting the average response time required by a user.
Preferably, the elasticity analysis process in step 1 can be represented by a weighted directed acyclic graph G, where G = (C, E), C is the node set of graph G and represents the set of all components in the process, and E is the edge set of graph G and represents the dependency relationships between components; the weight on a node represents the average service time of the component for executing a user request; an edge between two nodes represents a data flow relationship between two components: let e(c_i, c_j) ∈ E; e(c_i, c_j) denotes that nodes c_i and c_j have a dependency relationship, c_i being a predecessor node of c_j and c_j a successor node of c_i; when processing a request, a component cannot begin processing until all of its predecessor nodes have completed; specifically, the method comprises the following steps:
the elasticity analysis flow is represented by EW, EW = <G, f(C)>, where <G, f(C)> denotes the optimization operation performed on graph G and f(C) is the function used to determine the number of instances of each component.
Preferably, the step 2 includes:
step 2.1: determining a critical path in an open queuing network;
step 2.2: optimizing the resources of the components on the critical path;
step 2.3: resource optimization is performed on the remaining components, i.e., components that are not on the critical path.
Preferably, said step 2.1 comprises: updating the critical path once per pricing period and allocating virtual machines to the components on the critical path, wherein the critical path is the path from the start node to the end node of the directed acyclic graph with the maximum sum of node weights; the sum of node weights represents the total average service time of the components on the path.
Preferably, said step 2.2 comprises: and optimizing the resources of the components on the critical path.
Preferably, said step 2.3 comprises: using the average response times of all components on the critical path obtained in step 2.2, allocating virtual machines to the remaining components according to the parallel blocks of the weighted directed acyclic graph, such that the average response time of each non-critical parallel branch in a parallel block is less than or equal to the average response time of the critical-path branch of that block; the number of virtual machines is then calculated from the relation between average response time and server count, and the virtual machines are allocated.
Compared with the prior art, the invention has the following beneficial effects:
the invention can allocate resources to the analysis process that the request arrives continuously, concretely, the whole system is deployed independently by taking the assembly as a unit to form an elastic process; and each component is regarded as a queuing system, the whole process is regarded as a queuing network, the average response time of the system is estimated by using a queuing theory, and the response time is estimated more accurately. The invention can also estimate the distributable server solution set of each component according to the queuing theory, and then obtains an approximate optimal solution by utilizing a heuristic algorithm.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic diagram of a component performance analysis model.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit it in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
Aiming at the elastic analysis process, the invention adopts a cloud resource configuration optimization strategy based on queuing network theory. Generally, an analysis flow includes a plurality of components, each having an independent function and deployable independently, with interdependencies between components. Existing cloud resource configuration optimization methods for workflows regard the whole process as a unit and allocate cloud resources to it as a whole. Thus, when the performance of one component degrades and becomes insufficient to process the continuously arriving requests, resources must be added to the whole application, which makes the other components over-provisioned and wastes resources. In other words, each component of an analysis process has a different processing capability and resource demand; if the process is allocated as a whole, resources are sized for the bottleneck component (the component with the lowest processing capability) and every other component receives excess resources. The invention instead allocates resources to each component independently and on demand, saving resources and cost. However, this problem is complicated and difficult: the user cares about the average response time of each completed request rather than the processing time of any individual component, while the resource allocation scheme must decide how many resources to assign to each individual component. This is an NP-hard problem.
First, open queuing network theory is used to model the performance of the elastic analysis process: the whole analysis process is modeled as an open queuing network system, each component in the process is an M/M/m sub-queue in the queuing network system, and the output of one component is the input of another. The average response time of the whole system is then estimated through queuing theory. Based on the estimated average response time, two heuristic algorithms are provided to allocate cloud resources to each component, so that the total number of resources is minimized while meeting the average response time required by the user.
Elastic analysis process model:
typically, the analysis flow can be represented by a weighted directed acyclic graph (DAG), G = (C, E), where a node represents a component and the weight on the node represents the average service time of the component for executing a user request; an edge e(c_i, c_j) represents a data flow relationship between components, and e(c_i, c_j) ∈ E denotes that vertex c_i is a predecessor node of c_j and c_j is a successor node of c_i. When processing a request, a component cannot start processing until all its predecessor nodes have completed. Each component in the elasticity analysis flow may deploy multiple instances according to load.
In the present invention, the elasticity analysis flow is represented by EW = <G, f(C)>, where f(C) represents a function for determining the number of instances of each component.
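As an illustration of this model (a sketch with made-up component names and weights, not part of the patent), the weighted DAG and the per-component instance counts f(c) might be represented as:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    service_time: float   # node weight: average service time per request
    instances: int = 1    # value of f(c): number of deployed instances

@dataclass
class ElasticWorkflow:
    components: dict = field(default_factory=dict)  # node set C
    edges: set = field(default_factory=set)         # edge set E of (pred, succ)

    def add_component(self, name, service_time):
        self.components[name] = Component(name, service_time)

    def add_edge(self, pred, succ):
        # e(c_i, c_j): c_i is a predecessor of c_j (data flows c_i -> c_j)
        self.edges.add((pred, succ))

    def predecessors(self, name):
        # a component may start only after all of these have completed
        return sorted(u for (u, v) in self.edges if v == name)

ew = ElasticWorkflow()
ew.add_component("extract", 0.2)
ew.add_component("analyze", 0.5)
ew.add_component("report", 0.1)
ew.add_edge("extract", "analyze")
ew.add_edge("analyze", "report")
print(ew.predecessors("report"))   # ['analyze']
```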
Resource model
Cloud resources are typically provided externally in the form of virtual machines and include a variety of specifications. To simplify the problem, the present invention standardizes virtual machines with virtual machine units. Furthermore, the present invention employs an exclusive resource provisioning model, i.e., each virtual machine can only be assigned to one component at a time. The pricing model for virtual machines is on-demand charging.
Performance analysis model
The performance of the analysis process is modeled with queuing theory: the whole elastic analysis process is modeled as an open queuing network, and each component as an M/M/m queue. Following queuing theory, let λ denote the average request arrival rate of the whole analysis process, μ the average service rate of each virtual machine on a component, and m the number of virtual machines allocated to the component. The flow intensity is defined as ρ = λ/(mμ). The average response time of each component is R = 1/μ + P_Q/(mμ − λ), where P_Q denotes the average probability that a request has to queue at the component (the queueing probability of the M/M/m queue).
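A hedged sketch of this performance model: the queueing probability P_Q of an M/M/m queue is given by the standard Erlang C formula, and the mean response time then follows from R = 1/μ + P_Q/(mμ − λ). The numeric values below are illustrative, not from the patent.

```python
import math

def erlang_c(m, lam, mu):
    """Probability that an arriving request must wait in an M/M/m queue."""
    a = lam / mu                      # offered load
    rho = a / m                       # flow intensity rho = lam/(m*mu)
    assert rho < 1, "steady state requires m*mu > lam"
    p = a**m / (math.factorial(m) * (1 - rho))
    return p / (sum(a**k / math.factorial(k) for k in range(m)) + p)

def mean_response_time(m, lam, mu):
    """R = 1/mu + P_Q / (m*mu - lam): the formula given above."""
    return 1 / mu + erlang_c(m, lam, mu) / (m * mu - lam)

# adding a server to a loaded component shortens its mean response time
print(mean_response_time(3, 4.0, 2.0))   # ~0.722
print(mean_response_time(4, 4.0, 2.0))   # ~0.543
```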
the resource allocation optimization strategy of the invention is divided into three steps: the method comprises the steps of firstly, determining a key path, secondly, performing resource optimization on components on the key path, and thirdly, performing resource optimization on the rest components. Program 1 is pseudo code of the general flow of the policy. The first row is to calculate each update period based on the tariff charge duration. Because the virtual machines are charged according to the needs, in each charging duration, the virtual machines need to be charged once the virtual machines are rented, whether idle or busy, and all the virtual machines are actively updated once every other charging duration. Then, in each update cycle, a critical path is determined, and the algorithm of the calling program 2 or the program 3 allocates a virtual machine to the component on the critical path. And distributing virtual machines for the rest of the components according to the parallel blocks, wherein the strategy is that the average response time of other parallel branches on each parallel block is not more than that of the components of the critical path branches.
Program 1 (cloud resource allocation optimization algorithm overview)
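Program 1's pseudocode is not reproduced in this text; the overall loop it describes might be sketched as follows. All helper callables here are placeholders for the patent's routines, not actual implementations.

```python
import time

def optimization_loop(workflow, charging_period_s, rounds,
                      find_critical_path, allocate_critical, allocate_rest):
    """One proactive allocation update per charging period, `rounds` times."""
    for _ in range(rounds):
        path = find_critical_path(workflow)   # step 1: determine critical path
        allocate_critical(workflow, path)     # step 2: Program 2 or Program 3
        allocate_rest(workflow, path)         # step 3: per parallel block
        time.sleep(charging_period_s)         # wait out the charging period

# trivial stubs to show the calling convention
calls = []
optimization_loop(
    workflow={}, charging_period_s=0, rounds=2,
    find_critical_path=lambda w: calls.append("cp") or ["a", "b"],
    allocate_critical=lambda w, p: calls.append("crit"),
    allocate_rest=lambda w, p: calls.append("rest"),
)
print(calls)   # ['cp', 'crit', 'rest', 'cp', 'crit', 'rest']
```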
The components on the critical path form a sequential flow. According to queuing theory, the average response time of the whole critical path is the sum of the response times of all its components:
R_path = R_1 + R_2 + … + R_k
where R_i is the response time of the i-th component and k is the number of components.
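The critical path itself is the start-to-end path of the weighted DAG with the maximum sum of node weights. A sketch of that computation (component names and weights are made up for illustration):

```python
from functools import lru_cache

def critical_path(weights, edges):
    """weights: {node: service_time}; edges: set of (pred, succ) pairs.
    Returns (total_weight, path) of the heaviest start-to-end path."""
    succs = {n: [] for n in weights}
    for u, v in edges:
        succs[u].append(v)

    @lru_cache(maxsize=None)
    def best(node):
        # heaviest path starting at `node`, measured in node weights
        tails = [best(s) for s in succs[node]]
        if not tails:
            return weights[node], (node,)
        w, path = max(tails)
        return weights[node] + w, (node,) + path

    starts = set(weights) - {v for _, v in edges}   # nodes with no predecessor
    return max(best(s) for s in starts)

w = {"a": 1.0, "b": 3.0, "c": 2.0, "d": 1.0}
e = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}
print(critical_path(w, e))   # (5.0, ('a', 'b', 'd'))
```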
The resource allocation optimization strategy of the components on the critical path has two optimization algorithms:
packet Knapsack based optimization Algorithm (Group Knapack-based Algorithm, GKA)
Program 2 presents the pseudocode of the grouped-knapsack-based optimization algorithm. Given the average arrival rate of requests and the average service time of a component, the relationship between average response time and number of virtual machines can be calculated from the component's performance analysis model. The more virtual machines a component is allocated, the smaller its average response time. But when the number of virtual machines reaches a saturation value, i.e., every newly arrived request is served without waiting in a queue, the average response time no longer changes; the number of virtual machines in this saturated state is denoted m_max_{i,t} and the corresponding response time r_max_{i,t}. From the steady-state condition of the system, m_min_{i,t} = ⌈λ_t/μ_{i,t}⌉, where m_min_{i,t} is the minimum number of virtual machines that keeps the system stable at time t, λ_t is the average request arrival rate at time t, and μ_{i,t} is the average service rate of a virtual machine on component i at time t. The feasible solution set is defined as all integers between m_min_{i,t} and m_max_{i,t}. The critical path is treated as a knapsack whose capacity is the user-given constraint: the average response time R_π. Each feasible solution is regarded as an item with two attributes, cost and average response time. All items are divided into k groups (i.e., the k components), and exactly one item must be selected from each group. Items are loaded into the knapsack so that the total cost is minimized while the total average response time fits within the capacity.
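The feasible-set construction just described might be sketched as follows. The saturation test here uses a small tolerance eps on the response-time improvement, which is an assumption of this sketch rather than the patent's exact criterion.

```python
import math

def mmm_response(m, lam, mu):
    """Mean response time of an M/M/m queue (Erlang C based)."""
    a, rho = lam / mu, lam / (m * mu)
    p = a**m / (math.factorial(m) * (1 - rho))
    p_queue = p / (sum(a**k / math.factorial(k) for k in range(m)) + p)
    return 1 / mu + p_queue / (m * mu - lam)

def feasible_set(lam, mu, eps=1e-4):
    """All VM counts from m_min (stability) up to m_max (saturation)."""
    m_min = math.ceil(lam / mu)
    if m_min * mu <= lam:               # steady state needs m*mu strictly > lam
        m_min += 1
    m = m_min
    while mmm_response(m, lam, mu) - mmm_response(m + 1, lam, mu) >= eps:
        m += 1                          # response time still improving
    return list(range(m_min, m + 1))    # integers in [m_min, m_max]

fs = feasible_set(4.0, 2.0)
print(fs[0])   # 3: the smallest VM count with 3 * 2.0 > 4.0
```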
In Program 2, the number of feasible solutions for each component is first calculated, and then the grouped-knapsack algorithm is executed using a dynamic-programming-based method.
Program 2 (grouped-knapsack-based optimization algorithm)
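The pseudocode of Program 2 is not reproduced here; a grouped-knapsack dynamic program over discretized response times might look like this sketch. The tick size and the example items are assumptions of the sketch, not values from the patent.

```python
import math

def grouped_knapsack(groups, budget, tick=0.01):
    """groups: per-component lists of (vm_count, resp_time) feasible items.
    Select exactly one item per group so that the total response time stays
    within `budget` while the total number of VMs is minimized."""
    cap = int(budget / tick)            # knapsack capacity in time ticks
    INF = float("inf")
    dp = [0] + [INF] * cap              # dp[t]: min VMs using exactly t ticks
    for items in groups:                # one DP pass per component group
        nxt = [INF] * (cap + 1)
        for cost, rt in items:
            ticks = math.ceil(rt / tick)
            for t in range(ticks, cap + 1):
                if dp[t - ticks] + cost < nxt[t]:
                    nxt[t] = dp[t - ticks] + cost
        dp = nxt
    best = min(dp)
    return None if best == INF else best

# two components, each with two feasible (VM count, response time) items
groups = [[(2, 0.6), (3, 0.3)], [(1, 0.5), (2, 0.2)]]
print(grouped_knapsack(groups, 1.0))   # 4, e.g. picking (2, 0.6) and (2, 0.2)
```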
Proportional Adaptive Heuristic Algorithm (PAHA)
Program 3 presents the pseudocode of the adaptive heuristic algorithm. The idea is simple: allocate virtual machines in proportion to the service times of the components, so that components with longer service times (i.e., slower service) receive more virtual machines. The first step of the algorithm is to compute the feasible solution sets. Then R_π is divided over the components in proportion to their service times, such that the per-component targets sum to R_π. If the target time allocated to a component is less than its minimum achievable configuration time, that component is assigned its minimum configuration, and the remaining budget is redistributed proportionally over the other components so that every component can be served normally. Finally, the minimum number of virtual machines for each component that meets its target service time is calculated from the performance analysis model.
Program 3 (adaptive heuristic algorithm)
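Program 3's pseudocode is likewise not reproduced; the proportional idea can be sketched as below. The sketch assumes the budget is large enough that every component's proportional share is achievable, so the reallocation step for under-budgeted components described above is omitted.

```python
import math

def mmm_response(m, lam, mu):
    """Mean response time of an M/M/m queue (Erlang C based)."""
    a, rho = lam / mu, lam / (m * mu)
    p = a**m / (math.factorial(m) * (1 - rho))
    p_queue = p / (sum(a**k / math.factorial(k) for k in range(m)) + p)
    return 1 / mu + p_queue / (m * mu - lam)

def paha(service_times, lam, budget):
    """service_times: average service time 1/mu_i of each critical-path
    component; lam: request arrival rate; budget: target sum R_pi of the
    per-component response times. Returns a VM count per component."""
    total = sum(service_times)
    counts = []
    for s in service_times:
        target = budget * s / total       # share of R_pi, proportional to s
        mu = 1 / s
        m = math.floor(lam / mu) + 1      # smallest m with m*mu > lam
        while mmm_response(m, lam, mu) > target:
            m += 1                        # assumes target is achievable
        counts.append(m)
    return counts

print(paha([0.5, 0.25], lam=4.0, budget=2.0))   # [3, 2]
```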
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.
Claims (3)
1. A cloud resource configuration optimization method for an elastic analysis process is characterized by comprising the following steps:
step 1: performing performance modeling on the elastic analysis process, namely modeling the whole analysis process as an open queuing network, wherein each component in the process corresponds to an M/M/m sub-queue in the queuing network system, and the output of one component is the input of another component;
step 2: estimating the average response time of the whole open queuing network, and performing cloud resource allocation on each component according to the estimated average response time, so that the total number of resources is minimum on the premise of meeting the average response time required by a user;
the elasticity analysis process in step 1 can be represented by a weighted directed acyclic graph G, where G = (C, E), C is the node set of graph G and represents the set of all components in the process, and E is the edge set of graph G and represents the dependency relationships between components; the weight on a node represents the average service time of the component for executing a user request; an edge between two nodes represents a data flow relationship between two components: let e(c_i, c_j) ∈ E; e(c_i, c_j) denotes that nodes c_i and c_j have a dependency relationship, c_i being a predecessor node of c_j and c_j a successor node of c_i; when processing a request, a component cannot begin processing until all of its predecessor nodes have completed; specifically, the method comprises the following steps:
representing the elasticity analysis flow by EW, where EW = <G, f(C)>, <G, f(C)> denotes the optimization operation performed on graph G, and f(C) is a function for determining the number of instances of each component;
the step 2 comprises the following steps:
step 2.1: determining a critical path in an open queuing network;
step 2.2: optimizing the resources of the components on the critical path;
step 2.3: performing resource optimization on the residual components, namely the components which are not on the critical path;
the critical path is as follows: the path with the maximum sum of all node weights in the path from the starting node to the ending node in the directed acyclic graph; the sum of all node weights can represent the average service time of the component.
2. The elasticity analysis process-oriented cloud resource configuration optimization method according to claim 1, wherein step 2.1 comprises: updating the critical path once per pricing period and allocating virtual machines to the components on the critical path, wherein the critical path is the path from the start node to the end node of the directed acyclic graph with the maximum sum of node weights; the sum of node weights represents the total average service time of the components on the path.
3. The elasticity analysis process-oriented cloud resource configuration optimization method according to claim 1, wherein step 2.3 comprises: using the average response times of all components on the critical path obtained in step 2.2, allocating virtual machines to the remaining components according to the parallel blocks of the weighted directed acyclic graph, such that the average response time of each non-critical parallel branch in a parallel block is less than or equal to the average response time of the critical-path branch of that block; and calculating the number of virtual machines from the relation between average response time and server count, and allocating the virtual machines.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610790447.2A CN106407007B (en) | 2016-08-31 | 2016-08-31 | Cloud resource configuration optimization method for elastic analysis process |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106407007A CN106407007A (en) | 2017-02-15 |
CN106407007B true CN106407007B (en) | 2020-06-12 |
Family
ID=58001630
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610790447.2A Active CN106407007B (en) | 2016-08-31 | 2016-08-31 | Cloud resource configuration optimization method for elastic analysis process |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106407007B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107633125B (en) * | 2017-09-14 | 2021-08-31 | 北京仿真中心 | Simulation system parallelism identification method based on weighted directed graph |
CN108196948A (en) * | 2017-12-28 | 2018-06-22 | 东华大学 | A cloud instance type combination optimal selection method based on dynamic programming |
CN108521352B (en) * | 2018-03-26 | 2022-07-22 | 天津大学 | Online cloud service tail delay prediction method based on random return network |
CN110278125B (en) * | 2019-06-21 | 2022-03-29 | 山东省计算中心(国家超级计算济南中心) | Cloud computing resource elasticity evaluation method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1610311A (en) * | 2003-10-20 | 2005-04-27 | 国际商业机器公司 | Method and apparatus for automatic modeling building using inference for IT systems |
CN102043674A (en) * | 2009-10-16 | 2011-05-04 | Sap股份公司 | Estimating service resource consumption based on response time |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7054943B1 (en) * | 2000-04-28 | 2006-05-30 | International Business Machines Corporation | Method and apparatus for dynamically adjusting resources assigned to plurality of customers, for meeting service level agreements (slas) with minimal resources, and allowing common pools of resources to be used across plural customers on a demand basis |
US7146353B2 (en) * | 2003-07-22 | 2006-12-05 | Hewlett-Packard Development Company, L.P. | Resource allocation for multiple applications |
US8538740B2 (en) * | 2009-10-14 | 2013-09-17 | International Business Machines Corporation | Real-time performance modeling of software systems with multi-class workload |
-
2016
- 2016-08-31 CN CN201610790447.2A patent/CN106407007B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1610311A (en) * | 2003-10-20 | 2005-04-27 | 国际商业机器公司 | Method and apparatus for automatic modeling building using inference for IT systems |
CN102043674A (en) * | 2009-10-16 | 2011-05-04 | Sap股份公司 | Estimating service resource consumption based on response time |
Also Published As
Publication number | Publication date |
---|---|
CN106407007A (en) | 2017-02-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Van den Bossche et al. | Cost-efficient scheduling heuristics for deadline constrained workloads on hybrid clouds | |
Kaur et al. | Load balancing optimization based on hybrid Heuristic-Metaheuristic techniques in cloud environment | |
US10474504B2 (en) | Distributed node intra-group task scheduling method and system | |
CN111427679B (en) | Computing task scheduling method, system and device for edge computing | |
Nabi et al. | DRALBA: Dynamic and resource aware load balanced scheduling approach for cloud computing | |
EP3770774B1 (en) | Control method for household appliance, and household appliance | |
US8997107B2 (en) | Elastic scaling for cloud-hosted batch applications | |
US8612987B2 (en) | Prediction-based resource matching for grid environments | |
US20080104605A1 (en) | Methods and apparatus for dynamic placement of heterogeneous workloads | |
Zhu et al. | A cost-effective scheduling algorithm for scientific workflows in clouds | |
CN106407007B (en) | Cloud resource configuration optimization method for elastic analysis process | |
CN103701886A (en) | Hierarchic scheduling method for service and resources in cloud computation environment | |
CN115134371A (en) | Scheduling method, system, equipment and medium containing edge network computing resources | |
Pasdar et al. | Hybrid scheduling for scientific workflows on hybrid clouds | |
Jagadish Kumar et al. | Hybrid gradient descent golden eagle optimization (HGDGEO) algorithm-based efficient heterogeneous resource scheduling for big data processing on clouds | |
Biswas et al. | Multi-level queue for task scheduling in heterogeneous distributed computing system | |
CN111309472A (en) | Online virtual resource allocation method based on virtual machine pre-deployment | |
CN117407160A (en) | Mixed deployment method for online task and offline task in edge computing scene | |
Le et al. | ITA: the improved throttled algorithm of load balancing on cloud computing | |
Choi et al. | Gpsf: general-purpose scheduling framework for container based on cloud environment | |
Abrishami et al. | Scheduling in hybrid cloud to maintain data privacy | |
Kim et al. | Design of the cost effective execution worker scheduling algorithm for faas platform using two-step allocation and dynamic scaling | |
CN113254200B (en) | Resource arrangement method and intelligent agent | |
Toporkov et al. | Budget and Cost-aware Resources Selection Strategy in Cloud Computing Environments | |
Sharma et al. | Multi-Faceted Job Scheduling Optimization Using Q-learning With ABC In Cloud Environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||