
CN113962415B - Pipeline optimization method and device for continuous integration environment - Google Patents

Pipeline optimization method and device for continuous integration environment

Info

Publication number
CN113962415B
CN113962415B
Authority
CN
China
Prior art keywords
job
jobs
pipeline
operators
stage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010629800.5A
Other languages
Chinese (zh)
Other versions
CN113962415A (en)
Inventor
杨海波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN202010629800.5A
Publication of CN113962415A
Application granted
Publication of CN113962415B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/08 Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/087 Inventory or stock management, e.g. order filling, procurement or balancing against orders
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/10 Office automation; Time management
    • G06Q10/103 Workflow collaboration or project management
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/2866 Architectures; Arrangements
    • H04L67/30 Profiles

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Game Theory and Decision Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the application discloses a pipeline optimization method and device for a continuous integration environment, wherein the method comprises the following steps: creating a project in a warehouse management system, and acquiring at least one operator configured for the project and the number of parallel running jobs supported by each of the at least one operator; creating a first configuration file in the root directory of the project, and pushing the first configuration file to the warehouse management system to construct a pipeline of the continuous integration environment; determining an optimization strategy for the construction time of the pipeline according to the number of operators contained in the at least one operator and the number of parallel running jobs supported by each operator, and executing the optimization strategy. The embodiment of the application thus helps optimize and reduce the construction time of the pipeline.

Description

Pipeline optimization method and device for continuous integration environment
Technical Field
The application relates to the technical field of computers, in particular to a pipeline optimization method and device for a continuous integration environment.
Background
Warehouse management system continuous integration (gitlab continuous integration, gitlab CI) is a continuous integration service provided by gitlab, in which code is continuously integrated into the main branch to ensure that quality problems do not arise after merging into the main branch.
In gitlab CI, each push of code to the gitlab server triggers the configuration file to build a pipeline (pipeline) of the continuous integration environment, so that the code is automatically built, unit tested, and code checked to implement automated testing. However, when the test volume is large and the test requirements are complex, the construction time of the pipeline becomes a bottleneck that prevents developers from iterating quickly, and this problem needs to be further addressed.
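For context, the configuration file referred to throughout this description is, in gitlab CI, conventionally a file named .gitlab-ci.yml placed in the root directory of the repository; pushing code together with this file causes the gitlab server to build a pipeline from it. A minimal illustrative sketch is shown below (the stage names, job names, and commands are assumptions for illustration, not part of the patent):

```yaml
# Minimal illustrative .gitlab-ci.yml; pushing it to the gitlab server
# triggers construction of a pipeline with two sequential stages.
stages:
  - build
  - test

build_job:
  stage: build
  script:
    - echo "compile the project"      # placeholder build command

unit_test_job:
  stage: test
  script:
    - echo "run the unit tests"       # placeholder test command
```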
Disclosure of Invention
The embodiment of the application provides a pipeline optimization method and device for a continuous integration environment, which are used to optimize and reduce the construction time of a pipeline.
In a first aspect, an embodiment of the present application provides a pipeline optimization method for a continuous integration environment, including:
Creating a project in a warehouse management system, and acquiring at least one operator configured for the project and the number of parallel running jobs supported by each of the at least one operator, wherein the at least one operator is used for running the jobs of the project and returning the running results to the warehouse management system;
Creating a first configuration file in a root directory of the project, and pushing the first configuration file to the warehouse management system to construct a pipeline of a continuous integration environment;
And determining an optimization strategy for the construction time of the pipeline according to the number of the operators contained in the at least one operator and the number of the parallel running jobs supported by each of the at least one operator, and executing the optimization strategy.
In a second aspect, an embodiment of the present application provides a pipeline optimization apparatus for a continuous integration environment, where the apparatus includes a processing unit, and the processing unit is configured to:
creating a project in a warehouse management system, and acquiring at least one operator configured for the project and the number of parallel running jobs supported by each of the at least one operator, wherein the at least one operator is used for running the jobs of the project and returning the running results to the warehouse management system;
Creating a first configuration file in a root directory of the project, and pushing the first configuration file to the warehouse management system to construct a pipeline of a continuous integration environment;
And determining an optimization strategy for the construction time of the pipeline according to the number of the operators contained in the at least one operator and the number of the parallel running jobs supported by each of the at least one operator, and executing the optimization strategy.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a communication interface, where the memory stores one or more programs, and where the one or more programs are executed by the processor, where the one or more programs are configured to execute instructions of the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program for electronic data exchange, the computer program being operable to cause a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, wherein the computer program product comprises a computer program operable to cause a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application, the computer program product may be a software installation package.
It can be seen that in the embodiment of the present application, first, by acquiring at least one operator configured for a project created in a warehouse management system and the number of jobs that are parallel-run supported by each of the at least one operator; then, a first configuration file is created for the project, and the first configuration file is pushed to a warehouse management system to construct a pipeline; finally, an optimization strategy for the build time of the pipeline is determined based on the number of operators contained by the at least one operator and the number of parallel running jobs supported by each of the at least one operator. According to the embodiment of the application, the optimization strategy for the construction time of the pipeline is determined according to the number of the configured operators and the number of the parallel operation jobs supported by the operators, and the optimization strategy is executed to realize the optimization of the construction time of the pipeline, so that the construction time of the pipeline is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is evident that the figures described below are only some embodiments of the application, from which other figures can be obtained without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a communication system according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a pipeline optimization method for a continuous integrated environment according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a pipeline according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a structure after optimizing a pipeline according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a further optimized pipeline according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a further optimized pipeline according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a further optimized pipeline according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a further optimized pipeline according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a further optimized pipeline according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a further optimized pipeline according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a further optimized pipeline according to an embodiment of the present application;
FIG. 12 is a functional block diagram of a pipeline optimization apparatus for a continuous integrated environment according to an embodiment of the present application.
Detailed Description
The following describes the technical scheme in the embodiment of the present application in detail with reference to the accompanying drawings.
For a better understanding of the solutions of the embodiments of the present application, a communication system that may be involved in the embodiments of the present application is first described, as shown in fig. 1. The communication system 100 may include an electronic device 110, a warehouse management system server 120, and a runner server 130. A user may create a project in a warehouse management system running on the electronic device 110 and create a configuration file in the root directory of the project; the warehouse management system server 120 may configure one or more operators for the project created in the warehouse management system, find the operator specified in the configuration file under the project, and construct a pipeline according to the configuration file; the runner server 130 may be used to load the operators configured for the project, and the loaded operators may be used to run the jobs of the project and return the running results to the warehouse management system server 120. It should be noted that one or more operators may be loaded on the runner server 130, or the runner server 130 may be a set of one or more servers.
Specifically, the electronic device 110 in the embodiment of the present application may be any of various handheld devices, vehicle-mounted devices, wearable devices, user equipment (UE), terminal devices, personal digital assistants (PDA), personal computers (PC), relay devices, computers supporting the 802.11 protocol, terminal devices supporting 5G systems, terminal devices in a future evolved public land mobile network (PLMN), and the like that have the continuous integration environment pipeline optimization function.
Specifically, the warehouse management system server 120 in the embodiment of the present application may be any of various cloud servers providing warehouse management service functions, internet of things devices, data center network devices, user equipment, terminal devices, personal computers, relay devices, computers supporting the 802.11 protocol, network devices supporting 5G systems, network devices in a future evolved PLMN, and the like, which is not specifically limited in the embodiment of the present application.
Specifically, the runner server 130 in the embodiment of the present application may be any of various cloud servers for loading operators, internet of things devices, data center network devices, user equipment, terminal devices, personal computers, relay devices, computers supporting the 802.11 protocol, network devices supporting 5G systems, network devices in a future evolved PLMN, and the like, which is not specifically limited.
The following describes the execution steps of pipeline optimization for a continuous integration environment from the perspective of a method example, see fig. 2. FIG. 2 is a flow chart of a pipeline optimization method for a continuous integration environment, which is provided in an embodiment of the present application and can be applied to the electronic device 110; the method comprises the following steps:
S210, creating a project in a warehouse management system, and acquiring at least one operator configured for the project and the number of parallel running jobs supported by each of the at least one operator.
Wherein at least one of the operators is operable to run the job of the project and return the result of the run to the warehouse management system (gitlab).
It should be noted that, first, the electronic device 110 may receive an instruction from a user and create a project in the warehouse management system according to the instruction; the warehouse management system server 120 may then configure one or more operators for the project; finally, the operators configured by the warehouse management system server 120 for the project may be loaded onto the runner server 130, where the operators are used to process the jobs of the project and return the processed running results to the warehouse management system server 120, and the running results are returned to the electronic device 110 by the warehouse management system server 120. Further, the warehouse management system server 120 may configure at least one operator for each created project, and each operator may process one or more jobs simultaneously, that is, the number of jobs supported by each operator to run in parallel may be one or more. Meanwhile, the at least one operator configured for a project may include several operators of the same type as well as operators of different types, and operators of the same type support the same number of parallel running jobs.
For example, a project is configured with a runner1 (gitlab runner1), a runner2 (gitlab runner2), and a runner3 (gitlab runner3). Here, gitlab runner1 and gitlab runner2 are operators of the same type, while gitlab runner3 is an operator of a different type from gitlab runner1 and gitlab runner2. Accordingly, the number of parallel running jobs supported by gitlab runner1 and gitlab runner2 is the same, while the number of parallel running jobs supported by gitlab runner3 may or may not be the same as that of gitlab runner1 and gitlab runner2.
S220, creating a first configuration file in the root directory of the project, and pushing the first configuration file to a warehouse management system to construct a pipeline of the continuous integration environment.
It will be appreciated that the electronic device 110 may receive an instruction from a user and create a first profile in the root directory of the project based on the instruction and push the first profile to the warehouse management system server 120; warehouse management system server 120 may then build a pipeline of the persistent integration environment based on the code in the first configuration file.
In particular, the pipeline may include multiple flows, such as flows for automatic building, automated unit testing, automated code checking, and so on. Referring to FIG. 3, a single pipeline may include multiple stages, and each stage may include multiple jobs; that is, each stage is logically divided into multiple jobs. All stages in the pipeline run in sequence, and failure of any one stage will result in every stage following that stage not being executed; the pipeline is built successfully only after all stages have completed successfully. Furthermore, all jobs in the same stage run in parallel, and a stage is built successfully only if all jobs in that stage run successfully.
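To make the stage/job relationship concrete, the following sketch shows a hypothetical .gitlab-ci.yml fragment in which two jobs share one stage and therefore run in parallel, while the next stage waits for the previous one to succeed (all names and commands are illustrative assumptions):

```yaml
# Illustrative pipeline structure: stages run sequentially,
# jobs within a stage run in parallel.
stages:
  - stage_1
  - stage_2

job_a:
  stage: stage_1
  script:
    - echo "job_a"        # runs in parallel with job_b

job_b:
  stage: stage_1
  script:
    - echo "job_b"

job_c:
  stage: stage_2          # starts only after every job in stage_1 succeeds
  script:
    - echo "job_c"
```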
S230, determining an optimization strategy for the construction time of the pipeline according to the number of the operators contained in the at least one operator and the number of the parallel operation jobs supported by each of the at least one operator, and executing the optimization strategy.
It should be noted that the number of parallel running jobs supported by each operator configured by the warehouse management system server 120 for a project may be 1 or more, and the number of operators configured for each project may also be 1 or more; both are configured according to the requirements of the project created in the warehouse management system, and the embodiment of the present application does not specifically limit them.
Further, if n jobs run on operators of the same type at the same stage, the number of parallel running jobs supported by each operator of that type is greater than or equal to n.
For example, a project is configured with a runner1 (gitlab runner1) and a runner2 (gitlab runner2), where gitlab runner1 and gitlab runner2 are operators of the same type. If 2 jobs run on this type of operator at the same stage, then the number of parallel running jobs supported by each of gitlab runner1 and gitlab runner2 is greater than or equal to 2.
The pipeline constructed by the continuous integration system based on the warehouse management system enables continuously integrated automated testing; however, when the test volume is large and the test requirements are complex, the construction time of the pipeline increases, which prolongs the running time of the continuous integration system and hinders the rapid iteration of software development.
In one possible example, determining an optimization strategy for the construction time of the pipeline according to the number of operators contained in the at least one operator and the number of parallel running jobs supported by each of the at least one operator may include the following operations: dividing the independent use case set into at least one independent use case subset according to the number of operators contained in the at least one operator, where the independent use case set represents a logical set composed of independent use cases, and an independent use case represents an execution unit that cannot be subdivided and does not depend on the output of any other use case as a precondition; combining each subset of the at least one independent use case subset into one first job to obtain at least one first job, where each of the at least one first job corresponds to one of the at least one operator; and programming the at least one first job into stages in the pipeline according to the number of parallel running jobs supported by each operator.
It should be noted that different independent use cases may correspond to different types of operators, and in the embodiment of the present application the types of the operators configured by the warehouse management system server 120 for a project differ from one another, so the number of operator types indicates the number of operators; therefore, when optimizing which jobs each stage of the pipeline may contain, the independent use case set needs to be divided according to the number of operators, and each divided subset is combined into one job. Further, since a single pipeline may include multiple stages, each stage may include multiple jobs, and each job must be run by an operator, the number of parallel running jobs supported by the operators must be considered when optimizing which jobs each stage in the pipeline may contain. Meanwhile, because all stages in the pipeline run in sequence, all jobs in the same stage can run in parallel, and the pipeline is built successfully only after all stages complete successfully, the embodiment of the present application programs the jobs into the stages of the pipeline according to the number of parallel running jobs supported by the operators and ensures that as many jobs as possible run in parallel in the same stage, thereby reducing the construction time of the pipeline.
For example, first, a project is configured with a runner1 (gitlab runner1) and a runner2 (gitlab runner2), and gitlab runner1 and gitlab runner2 each support a single job or multiple jobs running in parallel; secondly, the independent use case set is divided into independent use case set 1 and independent use case set 2 according to gitlab runner1 and gitlab runner2, where independent use case set 1 corresponds to gitlab runner1 and independent use case set 2 corresponds to gitlab runner2; then, independent use case set 1 is combined into job 1 and independent use case set 2 is combined into job 2, where the job corresponding to gitlab runner1 is job 1 and the job corresponding to gitlab runner2 is job 2; finally, job 1 and job 2 are programmed into the stages in the pipeline.
Therefore, in the process of optimizing the number of jobs contained in the stages in the pipeline, the embodiment of the application considers not only the influence of the number of the operators and the number of the parallel running jobs supported by the operators on the optimization, but also the influence of different use case types on the optimization, thereby ensuring the efficiency and the accuracy of the pipeline optimization process and reducing the time for constructing the pipeline.
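As an illustration of the two-runner example above, the following sketch shows what the resulting arrangement could look like in a .gitlab-ci.yml, assuming the jobs are routed to the two operators by runner tags (the tag names, job names, and commands are assumptions for illustration):

```yaml
# Hypothetical optimized arrangement: one job per operator,
# both jobs placed in the same stage so they run in parallel.
stages:
  - stage_n

job1:
  stage: stage_n
  tags: [runner1]                       # routes the job to gitlab runner1
  script:
    - ./run_cases.sh independent_set_1  # hypothetical test driver

job2:
  stage: stage_n
  tags: [runner2]                       # routes the job to gitlab runner2
  script:
    - ./run_cases.sh independent_set_2
```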
The following embodiments of the present application describe specifically how the relationship between the number of parallel running jobs supported by an operator and the number of jobs corresponding to that operator affects the optimization of the construction time of the pipeline.
In one possible example, the stage of programming at least one first job into the pipeline according to the number of concurrently running jobs supported by each of the operators may include the following operations: and when the number of the parallel running jobs supported by each of the operators is greater than or equal to the number of the jobs in the first jobs corresponding to each of the operators, programming at least one first job into a first stage in the pipeline, wherein the first stage is one stage in the pipeline.
It will be appreciated that when the number of parallel running jobs supported by a given operator is greater than or equal to the number of jobs corresponding to that operator, the jobs corresponding to that operator may all be programmed into the same stage. In addition, jobs corresponding to different operators can also be programmed into the same stage. Since jobs programmed into the same stage can run in parallel, this helps reduce the construction time of the pipeline.
For example, referring to fig. 4, first, a project is configured with a runner1 (gitlab runner1) and a runner2 (gitlab runner2), and gitlab runner1 and gitlab runner2 each support only a single running job. Next, the independent use case set is divided into independent use case set 1 and independent use case set 2 according to gitlab runner1 and gitlab runner2, where independent use case set 1 corresponds to gitlab runner1 and independent use case set 2 corresponds to gitlab runner2. Then, independent use case set 1 is combined into job1 (job 1) and independent use case set 2 is combined into job2 (job 2), where the job corresponding to gitlab runner1 is job 1 and the job corresponding to gitlab runner2 is job 2. Finally, since the number of running jobs supported by each of gitlab runner1 and gitlab runner2 is 1 and the number of jobs corresponding to each of them is also 1, job 1 and job 2 are programmed into the same stage, i.e., stage n.
Referring to fig. 5, first, a project is configured with a runner1 (gitlab runner1), and the number of parallel running jobs supported by gitlab runner1 is 3. Next, the independent use case set is divided into independent use case set 1 and independent use case set 2 according to gitlab runner1. Then, independent use case set 1 is combined into job 1, and independent use case set 2 is combined into job 2. Finally, since the number of parallel running jobs supported by gitlab runner1 (3) is greater than the number of jobs corresponding to gitlab runner1 (2), job 1 and job 2 are programmed into the same stage, i.e., stage n.
Referring to fig. 6, first, a project is configured with a runner1 (gitlab runner1) and a runner2 (gitlab runner2), and the number of parallel running jobs supported by each of gitlab runner1 and gitlab runner2 is 3. Next, the independent use case set is divided into independent use case set 1, independent use case set 2, independent use case set 3, and independent use case set 4 according to gitlab runner1 and gitlab runner2, where independent use case sets 1 and 2 correspond to gitlab runner1, and independent use case sets 3 and 4 correspond to gitlab runner2. Then, independent use case set 1 is combined into job 1, independent use case set 2 into job 2, independent use case set 3 into job 3, and independent use case set 4 into job 4. Finally, since the number of parallel running jobs supported by gitlab runner1 and gitlab runner2 is greater than the number of jobs corresponding to each of them, job 1, job 2, job 3, and job 4 are programmed into the same stage, i.e., stage n.
It should be noted that the technical solutions in the embodiments of the present application are not limited to the examples in fig. 4, 5 and 6; those skilled in the art will understand that the technical solutions in the embodiments of the present application may also include other examples, which are not specifically limited.
In one possible example, the stage of programming at least one first job into the pipeline according to the number of concurrently running jobs supported by each of the operators may include the operations of: when the number of the parallel operation jobs supported by each operator is smaller than or equal to the number of the parallel operation jobs in the first job corresponding to each operator, starting from a second stage in the pipeline, and programming the first number of the first jobs selected from the first jobs corresponding to each operator into the second stage, wherein the first number is smaller than or equal to the number of the parallel operation jobs supported by each operator; the remainder of the at least one first job is programmed from the next stage of the second stage.
It should be noted that, by programming the first number of jobs into the same stage, and programming the remaining jobs from the next stage, the jobs programmed into the same stage may run in parallel, thereby advantageously reducing the construction time of the pipeline to optimize the construction time of the pipeline. In addition, the rest first jobs in the at least one first job are programmed into the next stage of the second stage according to a second number of first jobs selected from the first jobs corresponding to each of the operators, and so on until the at least one first job is taken out, and the second number is less than or equal to the number of parallel running jobs supported by each of the operators.
For example, referring to fig. 7, first, a project is configured with a runner1 (gitlab runner1), and the number of parallel running jobs supported by gitlab runner1 is 2. Next, the independent use case set is divided into independent use case set 1, independent use case set 2, and independent use case set 3 according to gitlab runner1. Then, independent use case set 1 is combined into job 1, independent use case set 2 into job 2, and independent use case set 3 into job 3. Finally, since the number of parallel running jobs supported by gitlab runner1 (2) is smaller than the number of jobs corresponding to gitlab runner1 (3), job 1 and job 2 are selected from job 1, job 2, and job 3 corresponding to gitlab runner1 and programmed into the same stage, i.e., stage n, while job 3 is programmed into the next stage, i.e., stage n+1.
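The arrangement of this example could look as follows in a .gitlab-ci.yml sketch (the tag name, job names, and commands are illustrative assumptions):

```yaml
# gitlab runner1 supports 2 parallel jobs, so only two of its three jobs
# share stage n; the third is deferred to stage n+1.
stages:
  - stage_n
  - stage_n_plus_1

job1:
  stage: stage_n
  tags: [runner1]
  script: ["./run_cases.sh set_1"]   # hypothetical test driver

job2:
  stage: stage_n
  tags: [runner1]
  script: ["./run_cases.sh set_2"]

job3:
  stage: stage_n_plus_1              # exceeds runner1's concurrency, so it waits
  tags: [runner1]
  script: ["./run_cases.sh set_3"]
```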
Referring to fig. 8, first, a project is configured with a runner1 (gitlab runner1) and a runner2 (gitlab runner2), and the number of parallel running jobs supported by each of gitlab runner1 and gitlab runner2 is 1. Next, the independent use case set is divided into independent use case set 1, independent use case set 2, independent use case set 3, and independent use case set 4 according to gitlab runner1 and gitlab runner2, where independent use case sets 1 and 2 correspond to gitlab runner1, and independent use case sets 3 and 4 correspond to gitlab runner2. Next, independent use case set 1 is combined into job 1, independent use case set 2 into job 2, independent use case set 3 into job 3, and independent use case set 4 into job 4. Then, since the number of parallel running jobs supported by gitlab runner1 and gitlab runner2 is smaller than the number of jobs corresponding to each of them, job 1 is selected from job 1 and job 2 corresponding to gitlab runner1, job 3 is selected from job 3 and job 4 corresponding to gitlab runner2, and job 1 and job 3 are programmed into the same stage, i.e., stage n. Finally, job 2 and job 4 are programmed into the next stage, i.e., stage n+1.
It should be noted that the technical solutions in the embodiments of the present application are not limited to the examples in fig. 7 and 8; those skilled in the art will understand that the technical solutions in the embodiments of the present application may also include other examples, which are not specifically limited.
The above description covers the case where only the independent use case set exists; the following description specifically covers the case where only the pre-use case set and the conditional use case set exist. The pre-use case set represents a logical set composed of pre-use cases, and the conditional use case set represents a logical set composed of conditional use cases. A pre-use case may be an independent use case or a conditional use case, and a conditional use case represents an execution unit that depends on the output of a pre-use case as a precondition.
In one possible example, determining an optimization strategy for the construction time of the pipeline according to the number of operators contained in the at least one operator and the number of parallel running jobs supported by each of the at least one operator may include the following operations: dividing the pre-use case set into at least one pre-use case subset and dividing the conditional use case set into at least one conditional use case subset according to the number of operators contained in the at least one operator, where each subset of the at least one conditional use case subset corresponds one-to-one with a subset of the at least one pre-use case subset; combining each subset of the at least one pre-use case subset into one second job to obtain at least one second job, and combining each subset of the at least one conditional use case subset into one third job to obtain at least one third job; and programming the at least one second job and the at least one third job into stages in the pipeline according to the number of parallel running jobs supported by each operator, where the stage into which the second job combined from a first pre-use case subset is programmed precedes the stage into which the third job combined from a first conditional use case subset is programmed, the first pre-use case subset being one subset of the at least one pre-use case subset, and the first conditional use case subset being the subset of the at least one conditional use case subset that depends on the output of the first pre-use case subset as a precondition.
For example, first, a project is configured with a runner1 (gitlab runner1) and a runner2 (gitlab runner2); the pre-use case set belongs to gitlab runner1, the conditional use case set belongs to gitlab runner2, and gitlab runner1 and gitlab runner2 each support a single job or multiple jobs running in parallel. Next, the pre-use case set is divided into pre-use case set 1 and pre-use case set 2 according to gitlab runner1, and the conditional use case set is divided into conditional use case set 1 and conditional use case set 2 according to gitlab runner2, where pre-use case set 1 is the set on which conditional use case set 1 depends, and pre-use case set 2 is the set on which conditional use case set 2 depends. Then, pre-use case set 1 is combined into job 1, pre-use case set 2 into job 2, conditional use case set 1 into job 3, and conditional use case set 2 into job 4, where the jobs corresponding to gitlab runner1 are job 1 and job 2, and the jobs corresponding to gitlab runner2 are job 3 and job 4. Finally, job 1, job 2, job 3, and job 4 are programmed into stages in the pipeline such that the stage of job 1 precedes the stage of job 3 and the stage of job 2 precedes the stage of job 4.
Therefore, in the process of optimizing the number of jobs contained in the stages in the pipeline, the embodiment of the application considers not only the influence of the number of the operators and the number of the parallel running jobs supported by the operators on the optimization, but also the influence of the front use case and the front-back relation of the condition use case on the optimization, thereby ensuring the efficiency and the accuracy of the pipeline optimization process.
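One possible concrete arrangement for the example above is sketched below, assuming the two operators are addressed by runner tags and the pre jobs simply occupy an earlier stage than the conditional jobs that depend on them (all names and commands are illustrative):

```yaml
# Pre jobs run in an earlier stage; conditional jobs that depend on their
# output run in a later stage.
stages:
  - pre_stage
  - conditional_stage

job1:                        # pre-use case set 1, on gitlab runner1
  stage: pre_stage
  tags: [runner1]
  script: ["./run_cases.sh pre_set_1"]

job2:                        # pre-use case set 2, on gitlab runner1
  stage: pre_stage
  tags: [runner1]
  script: ["./run_cases.sh pre_set_2"]

job3:                        # conditional use case set 1, depends on job1's output
  stage: conditional_stage
  tags: [runner2]
  script: ["./run_cases.sh cond_set_1"]

job4:                        # conditional use case set 2, depends on job2's output
  stage: conditional_stage
  tags: [runner2]
  script: ["./run_cases.sh cond_set_2"]
```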
In one possible example, all of the second jobs combined from the at least one pre-use case subset correspond to first operators among the at least one operator, and all of the third jobs combined from the at least one conditional use case subset correspond to second operators among the at least one operator; the relationship between the first operators and the second operators is one of the following: the first operators and the second operators are the same operators; the first operators and the second operators include at least one identical operator; and each of the first operators is different from each of the second operators.
For example, the first operators may include gitlab runner1 and gitlab runner2, while the second operators may include gitlab runner2 and gitlab runner3. In this case, the first operators and the second operators include one identical operator, i.e., gitlab runner2.
It is to be appreciated that the pre-use case set and the conditional use case set may correspond to exactly the same operators, may partially correspond to the same operators, or may correspond to entirely different operators. Therefore, by taking into account the operators to which the pre-use case set and the conditional use case set correspond, the accuracy of the pipeline optimization process is ensured.
The following embodiments of the present application describe specifically how the relationship between the number of parallel running jobs supported by an operator and the number of jobs corresponding to that operator affects the pipeline optimization process.
In one possible example, the stage of programming the at least one second job and the at least one third job into the pipeline according to the number of concurrently running jobs supported by each of the operators may include the operations of:
And in the case that the number of the jobs which are supported by each of the first operators and run in parallel is greater than or equal to the number of the jobs in the second jobs corresponding to each of the first operators, and the number of the jobs which are supported by each of the second operators and run in parallel is greater than or equal to the number of the jobs in the third jobs corresponding to each of the second operators, programming at least one second job into a third stage in the pipeline, and programming at least one third job into a fourth stage in the pipeline, the third stage being one stage before the fourth stage.
For example, referring to fig. 9, first, a project is configured with gitlab runner1, gitlab runner2, and gitlab runner3; the pre-use case set belongs to gitlab runner1 and gitlab runner2, the conditional use case set belongs to gitlab runner2 and gitlab runner3, and the number of parallel running jobs supported by each of gitlab runner1, gitlab runner2, and gitlab runner3 is 2. Next, part of the pre-use case set is divided into pre-use case set 1 according to gitlab runner1, the remaining pre-use case set is divided into pre-use case set 2 according to gitlab runner2, part of the conditional use case set is divided into conditional use case set 1 according to gitlab runner2, and the remaining conditional use case set is divided into conditional use case set 2 according to gitlab runner3. Then, pre-use case set 1 is combined into job 1, pre-use case set 2 into job 2, conditional use case set 1 into job 3, and conditional use case set 2 into job 4, where the job corresponding to gitlab runner1 is job 1, the jobs corresponding to gitlab runner2 are job 2 and job 3, and the job corresponding to gitlab runner3 is job 4. Finally, since the number of parallel running jobs supported by gitlab runner1, gitlab runner2, and gitlab runner3 is greater than or equal to the number of jobs corresponding to each of them, job 1 and job 2 are programmed into the same stage, i.e., stage n. Furthermore, since the stage of job 1 must precede that of job 3 and the stage of job 2 must precede that of job 4, job 3 and job 4 are programmed into the next stage after stage n, i.e., stage n+1.
Therefore, by taking into account the operators to which the pre-use case set and the conditional use case set correspond, the accuracy of the pipeline optimization process is ensured.
In one possible example, the stage of programming the at least one second job and the at least one third job into the pipeline according to the number of concurrently running jobs supported by each of the operators may include the operations of:
When the number of the parallel running jobs supported by each of the first operators is greater than or equal to the number of the parallel running jobs in the second jobs corresponding to each of the first operators, and the number of the parallel running jobs supported by each of the second operators is greater than or equal to the number of the parallel running jobs in the third jobs corresponding to each of the second operators, sequentially programming each of the at least one second jobs into stages in the pipeline in the order of stages from a fifth stage in the pipeline; the job programming stages corresponding to the same operator in at least one second job are different, and the job programming stages corresponding to different operators in at least one second job are the same; starting from a sixth stage in the pipeline, orderly programming each third job in at least one third job into the stages in the pipeline according to the sequence of the stages, wherein the fifth stage is one stage before the sixth stage; the job programming stages corresponding to the same operator in at least one third job are different, and the job programming stages corresponding to different operators in at least one third job are the same.
For example, referring to fig. 10, first, a project is configured with gitlab runner1, gitlab runner2, and gitlab runner3; the pre-use case set belongs to gitlab runner1 and gitlab runner2, the conditional use case set belongs to gitlab runner2 and gitlab runner3, and the number of parallel running jobs supported by each of gitlab runner1, gitlab runner2, and gitlab runner3 is 2. Next, part of the pre-use case set is divided into pre-use case set 1 and pre-use case set 2 according to gitlab runner1, and the remaining pre-use case set is divided into pre-use case set 3 according to gitlab runner2; part of the conditional use case set is divided into conditional use case set 1 according to gitlab runner2, and the remaining conditional use case set is divided into conditional use case set 2 and conditional use case set 3 according to gitlab runner3. Then, pre-use case set 1 is combined into job 1, pre-use case set 2 into job 2, pre-use case set 3 into job 3, conditional use case set 1 into job 4, conditional use case set 2 into job 5, and conditional use case set 3 into job 6, where the jobs corresponding to gitlab runner1 are job 1 and job 2, the jobs corresponding to gitlab runner2 are job 3 and job 4, and the jobs corresponding to gitlab runner3 are job 5 and job 6. Finally, since the number of parallel running jobs supported by gitlab runner1, gitlab runner2, and gitlab runner3 is greater than or equal to the number of jobs corresponding to each of them, job 1, job 2, and job 3 are programmed into stages in sequence starting from stage n. Among them, job 1 and job 2 belong to gitlab runner1 while job 3 belongs to gitlab runner2, so job 2 and job 3 are programmed into stage n+1. Similarly, starting from stage n+1, job 4 and job 5 would be programmed into the same stage; however, job 5 must be programmed into a stage after job 2, so job 4 and job 5 are programmed into stage n+2, and job 6 is then programmed into stage n+3.
Therefore, by taking into account the operators to which the pre-use case set and the conditional use case set correspond, the accuracy of the pipeline optimization process is ensured.
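A possible .gitlab-ci.yml sketch of the staggered arrangement in the example above (FIG. 10) is shown below; jobs that share an operator are spread over successive stages, and each conditional job is placed in a stage after the pre job it depends on (all names and commands are illustrative assumptions):

```yaml
# job1/job2 -> gitlab runner1, job3/job4 -> gitlab runner2, job5/job6 -> gitlab runner3
stages: [stage_n, stage_n_plus_1, stage_n_plus_2, stage_n_plus_3]

job1: { stage: stage_n,        tags: [runner1], script: ["./run_cases.sh pre_1"] }
job2: { stage: stage_n_plus_1, tags: [runner1], script: ["./run_cases.sh pre_2"] }
job3: { stage: stage_n_plus_1, tags: [runner2], script: ["./run_cases.sh pre_3"] }
job4: { stage: stage_n_plus_2, tags: [runner2], script: ["./run_cases.sh cond_1"] }  # depends on job1's output
job5: { stage: stage_n_plus_2, tags: [runner3], script: ["./run_cases.sh cond_2"] }  # depends on job2's output
job6: { stage: stage_n_plus_3, tags: [runner3], script: ["./run_cases.sh cond_3"] }  # depends on job3's output
```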
In one possible example, the stage of programming the at least one second job and the at least one third job into the pipeline according to the number of concurrently running jobs supported by each of the operators may include the operations of: when the number of the parallel operation jobs supported by each of the first operators is smaller than or equal to the number of the parallel operation jobs in the second jobs corresponding to each of the first operators, and the number of the parallel operation jobs supported by each of the second operators is smaller than or equal to the number of the parallel operation jobs in the third jobs corresponding to each of the second operators, starting from a seventh stage in the pipeline, sequentially programming the jobs in at least one second job into stages in the pipeline in the order of stages; the job programming stages corresponding to the same operator in at least one second job are different, and the job programming stages corresponding to different operators in at least one second job are the same; starting from an eighth stage in the pipeline, orderly programming the operations in at least one third operation into stages in the pipeline according to the sequence of the stages, wherein the seventh stage is one stage before the eighth stage; the job programming stages corresponding to the same operator in at least one third job are different, and the job programming stages corresponding to different operators in at least one third job are the same.
For example, referring to fig. 11, first, a project is configured with gitlab runner1 and gitlab runner2; the pre-use case set belongs to gitlab runner1, the conditional use case set belongs to gitlab runner2, and the number of parallel running jobs supported by each of gitlab runner1 and gitlab runner2 is 1. Next, the pre-use case set is divided into pre-use case set 1, pre-use case set 2, and pre-use case set 3 according to gitlab runner1, and the conditional use case set is divided into conditional use case set 1, conditional use case set 2, and conditional use case set 3 according to gitlab runner2. Then, pre-use case set 1 is combined into job 1, pre-use case set 2 into job 2, pre-use case set 3 into job 3, conditional use case set 1 into job 4, conditional use case set 2 into job 5, and conditional use case set 3 into job 6, where the jobs corresponding to gitlab runner1 are job 1, job 2, and job 3, and the jobs corresponding to gitlab runner2 are job 4, job 5, and job 6. Finally, since the number of parallel running jobs supported by gitlab runner1 and gitlab runner2 is smaller than the number of jobs corresponding to each of them, job 1, job 2, and job 3 are programmed into stages in sequence starting from stage n. Because job 1, job 2, and job 3 all belong to gitlab runner1, job 2 is programmed into stage n+1 and job 3 into stage n+2. Similarly, starting from stage n+1, job 4, job 5, and job 6 are programmed into different stages in sequence.
Therefore, by taking into account the operators to which the pre-use case set and the conditional use case set correspond, the accuracy of the pipeline optimization process is ensured.
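A corresponding sketch for this last example (FIG. 11) is shown below: each operator supports only one job at a time, so the pre jobs on gitlab runner1 occupy successive stages and the conditional jobs on gitlab runner2 each follow one stage behind the pre job they depend on (all names and commands are illustrative assumptions):

```yaml
# job1..job3 -> gitlab runner1 (pre), job4..job6 -> gitlab runner2 (conditional)
stages: [stage_n, stage_n_plus_1, stage_n_plus_2, stage_n_plus_3]

job1: { stage: stage_n,        tags: [runner1], script: ["./run_cases.sh pre_1"] }
job2: { stage: stage_n_plus_1, tags: [runner1], script: ["./run_cases.sh pre_2"] }
job3: { stage: stage_n_plus_2, tags: [runner1], script: ["./run_cases.sh pre_3"] }
job4: { stage: stage_n_plus_1, tags: [runner2], script: ["./run_cases.sh cond_1"] }  # after job1
job5: { stage: stage_n_plus_2, tags: [runner2], script: ["./run_cases.sh cond_2"] }  # after job2
job6: { stage: stage_n_plus_3, tags: [runner2], script: ["./run_cases.sh cond_3"] }  # after job3
```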
The above description describes specific embodiments in the case where only the independent use case set exists or only the pre-use case set and the conditional use case set exist, respectively. It should be noted that, based on the understanding of those skilled in the art, the technical solution and the diagrams provided by the embodiments of the present application may obtain a specific embodiment when the independent use case set, the pre-use case set and the conditional use case set exist at the same time, which is not described herein.
It can be seen that in the embodiment of the present application, first, by acquiring at least one operator configured for a project created in a warehouse management system and the number of jobs that are supported by each of the at least one operator and run in parallel; then, a first configuration file is created for the project, and the first configuration file is pushed to a warehouse management system to construct a pipeline; finally, an optimization strategy for the build time of the pipeline is determined based on the number of operators contained by the at least one operator and the number of parallel running jobs supported by each of the at least one operator. The embodiment of the application optimizes the construction time of the pipeline in the continuous integrated environment by the number of the configured operators and the number of the parallel operation jobs supported by the operators, thereby being beneficial to realizing the optimization of the construction time of the pipeline and reducing the construction time of the pipeline.
The foregoing description of the embodiments of the present application has been presented primarily in terms of a method-side implementation. It is understood that the electronic device 110 includes corresponding hardware structures and/or software modules that perform the functions described above. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the present application may divide the functional units of the electronic device 110 according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated units may be implemented in hardware or in software functional units. It should be noted that, in the embodiment of the present application, the division of the units is schematic, but only one logic function is divided, and another division manner may be adopted in actual implementation.
In the case of integrated units, fig. 12 shows a functional unit block diagram of a pipeline optimization device of a continuous integrated environment. The pipeline optimization apparatus 1200 of the continuous integrated environment is applied to the electronic device 110, and specifically includes a processing unit 1220 and a communication unit 1230. The processing unit 1220 is configured to control and manage actions of the electronic device 110, e.g., the processing unit 1220 is configured to support the electronic device 110 to perform some or all of the steps of fig. 2, as well as other processes for the techniques described herein. The communication unit 1230 is used to support communication of the electronic device 110 with the warehouse management system server 120. The pipeline optimization apparatus 1200 of the continuous integrated environment may further comprise a storage unit 1210 for storing program code and data of the electronic device 110.
The processing unit 1220 may be a processor or a controller, such as a central processing unit (CPU), a general purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or perform the various exemplary logical blocks, modules, and circuits described in connection with the present disclosure. The processing unit 1220 may also be a combination implementing computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication unit 1230 may be a communication interface, transceiver circuitry, or the like. The storage unit 1210 may be a memory, and the memory may include a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a compact disc read-only memory (CD-ROM).
In particular implementations, the processing unit 1220 is configured to perform any of the steps performed by the electronic device 110 in the method embodiments described above, and when performing data transmission such as sending, the communication unit 1230 is optionally invoked to complete the corresponding operation. The following is a detailed description.
The processing unit 1220 is configured to: creating a project in a warehouse management system, and acquiring at least one operator configured for the project and the number of parallel running jobs supported by each of the at least one operator, the at least one operator being configured to run jobs for the project and return a result of the running to the warehouse management system; creating a first configuration file in a root directory of the item and pushing the first configuration file to the warehouse management system to build a pipeline of a continuous integration environment; an optimization strategy for the build time of the pipeline is determined based on the number of operators contained by the at least one operator and the number of parallel running jobs supported by each of the at least one operator, and the optimization strategy is performed.
It can be seen that in the embodiment of the present application, first, by acquiring at least one operator configured for a project created in a warehouse management system and the number of jobs that are parallel-run supported by each of the at least one operator; then, a first configuration file is created for the project, and the first configuration file is pushed to a warehouse management system to construct a pipeline; finally, an optimization strategy for the build time of the pipeline is determined based on the number of operators contained by the at least one operator and the number of parallel running jobs supported by each of the at least one operator. The embodiment of the application optimizes the construction time of the pipeline in the continuous integrated environment by the number of the configured operators and the number of the parallel operation jobs supported by the operators, thereby being beneficial to realizing the optimization of the construction time of the pipeline and reducing the construction time of the pipeline.
In one possible example, in terms of determining an optimization strategy for the build time of the pipeline based on the number of operators contained in the at least one operator and the number of parallel running jobs supported by each of the at least one operator, the processing unit 1220 is configured to: divide the independent use-case set into at least one independent use-case subset according to the number of operators contained in the at least one operator, where the independent use-case set represents a logical set composed of independent use cases, and an independent use case represents an execution unit that cannot be subdivided and does not take the output of any use case as a precondition; combine each of the at least one independent use-case subset into one first job to obtain at least one first job, where each of the at least one first job corresponds to an operator of the at least one operator; and program the at least one first job into stages of the pipeline according to the number of parallel running jobs supported by each of the operators.
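A sketch of this partitioning step is given below; the case names, the round-robin split, and the generated job names are illustrative assumptions, since the embodiment only requires one subset per operator.

def split_independent_cases(cases, num_runners):
    """Divide the independent use-case set into at most `num_runners` subsets."""
    subsets = [[] for _ in range(min(num_runners, len(cases)))]
    for i, case in enumerate(cases):
        subsets[i % len(subsets)].append(case)  # round-robin assignment
    return subsets

def make_first_jobs(subsets):
    """Combine each subset into one first job, keyed by a generated job name."""
    return {"independent_job_%d" % i: subset for i, subset in enumerate(subsets)}

cases = ["case_1", "case_2", "case_3", "case_4", "case_5"]
first_jobs = make_first_jobs(split_independent_cases(cases, num_runners=2))
# first_jobs == {'independent_job_0': ['case_1', 'case_3', 'case_5'],
#                'independent_job_1': ['case_2', 'case_4']}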
In one possible example, in terms of programming the at least one first job into stages of the pipeline according to the number of parallel running jobs supported by each of the operators, the processing unit 1220 is configured to: when the number of parallel running jobs supported by each of the operators is greater than or equal to the number of first jobs corresponding to that operator, program the at least one first job into a first stage in the pipeline, the first stage being one stage in the pipeline.
In one possible example, in terms of programming the at least one first job into stages of the pipeline according to the number of parallel running jobs supported by each of the operators, the processing unit 1220 is configured to: when the number of parallel running jobs supported by each of the operators is less than or equal to the number of first jobs corresponding to that operator, starting from a second stage in the pipeline, program a first number of jobs selected from the first jobs corresponding to each operator into the second stage, the first number being less than or equal to the number of parallel running jobs supported by that operator; and program the remainder of the at least one first job from the next stage after the second stage.
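A combined sketch of the two cases above: if each operator can run all of its first jobs in parallel, a single stage suffices; otherwise the jobs of that operator spill into later stages in batches no larger than its parallel capacity. The names and the batching choice are illustrative assumptions.

def assign_stages(jobs_per_runner, capacity):
    """Return a list of stages; each stage is the list of jobs scheduled into it."""
    stages = []
    for runner, jobs in jobs_per_runner.items():
        cap = capacity[runner]
        for offset in range(0, len(jobs), cap):  # batches of size <= cap
            stage_index = offset // cap
            while len(stages) <= stage_index:
                stages.append([])
            stages[stage_index].extend(jobs[offset:offset + cap])
    return stages

stages = assign_stages(
    {"runner-a": ["job_0", "job_1", "job_2"], "runner-b": ["job_3"]},
    capacity={"runner-a": 2, "runner-b": 4},
)
# stages == [['job_0', 'job_1', 'job_3'], ['job_2']]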
In one possible example, in terms of determining an optimization strategy for the build time of the pipeline based on the number of operators contained in the at least one operator and the number of parallel running jobs supported by each of the at least one operator, the processing unit 1220 is configured to: divide the pre-use-case set into at least one pre-use-case subset and divide the conditional use-case set into at least one conditional use-case subset according to the number of operators contained in the at least one operator; where the pre-use-case set represents a logical set composed of pre-use cases, the conditional use-case set represents a logical set composed of conditional use cases, a pre-use case is an independent use case or a conditional use case, an independent use case represents an execution unit that cannot be subdivided and does not take the output of any use case as a precondition, a conditional use case represents an execution unit that depends on the output of a pre-use case as a precondition, and each subset of the at least one conditional use-case subset corresponds one-to-one with a subset of the at least one pre-use-case subset; combine each of the at least one pre-use-case subset into a second job to obtain at least one second job, and combine each of the at least one conditional use-case subset into a third job to obtain at least one third job; and program the at least one second job and the at least one third job into stages of the pipeline according to the number of parallel running jobs supported by each of the operators; where the stage into which the second job formed from a first pre-use-case subset is programmed precedes the stage into which the third job formed from a first conditional use-case subset is programmed, the first pre-use-case subset being a subset of the at least one pre-use-case subset, and the first conditional use-case subset being the subset of the at least one conditional use-case subset that depends on the output of the first pre-use-case subset as a precondition.
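The dependent-case path can be sketched as follows; the pairing of each conditional subset with the pre-use-case subset it depends on is the essential point, while the case names, the dictionary recording which pre-use case each conditional case needs, and the round-robin split are illustrative assumptions.

def split_paired_cases(pre_cases, cond_of, num_runners):
    """Return (pre_subset, conditional_subset) pairs; subset i of the conditional
    cases depends only on the output of subset i of the pre-use cases."""
    n = max(1, min(num_runners, len(pre_cases)))
    pairs = [([], []) for _ in range(n)]
    for i, pre in enumerate(pre_cases):
        pairs[i % n][0].append(pre)
        for cond, needed_pre in cond_of.items():
            if needed_pre == pre:
                pairs[i % n][1].append(cond)
    return pairs

pairs = split_paired_cases(
    pre_cases=["pre_1", "pre_2"],
    cond_of={"cond_1": "pre_1", "cond_2": "pre_2"},  # conditional case -> required pre-use case
    num_runners=2,
)
second_jobs = {"pre_job_%d" % i: p for i, (p, _) in enumerate(pairs)}
third_jobs = {"cond_job_%d" % i: c for i, (_, c) in enumerate(pairs)}
# pre_job_i must be programmed into an earlier stage than cond_job_i.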
In one possible example, all second jobs in the at least one pre-use-case subset correspond to a first operator of the at least one operator, and all third jobs in the at least one conditional use-case subset correspond to a second operator of the at least one operator; the relationship between the first operator and the second operator is one of the following: the first and second operators are the same operator; the first and second operators comprise at least one identical operator; or the first and second operators are entirely different.
In one possible example, in terms of programming the at least one second job and the at least one third job into stages of the pipeline according to the number of parallel running jobs supported by each of the operators, the processing unit 1220 is configured to: when the number of parallel running jobs supported by each of the first operators is greater than or equal to the number of second jobs corresponding to that operator, and the number of parallel running jobs supported by each of the second operators is greater than or equal to the number of third jobs corresponding to that operator, program the at least one second job into a third stage in the pipeline and program the at least one third job into a fourth stage in the pipeline, the third stage being one stage before the fourth stage; or, starting from a fifth stage in the pipeline, program each of the at least one second job into stages of the pipeline in stage order, where second jobs corresponding to the same operator are programmed into different stages and second jobs corresponding to different operators are programmed into the same stage; and, starting from a sixth stage in the pipeline, program each of the at least one third job into stages of the pipeline in stage order, the fifth stage being one stage before the sixth stage, where third jobs corresponding to the same operator are programmed into different stages and third jobs corresponding to different operators are programmed into the same stage.
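The first branch above reduces to a very small sketch: all second jobs share one stage and all third jobs share the immediately following stage. Stage indices and job names are illustrative.

def two_stage_plan(second_jobs, third_jobs, start_stage=0):
    """Place all second jobs in `start_stage` and all third jobs in the next stage."""
    return {start_stage: list(second_jobs), start_stage + 1: list(third_jobs)}

plan = two_stage_plan(["pre_job_0", "pre_job_1"], ["cond_job_0", "cond_job_1"])
# plan == {0: ['pre_job_0', 'pre_job_1'], 1: ['cond_job_0', 'cond_job_1']}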
In one possible example, in terms of programming the at least one second job and the at least one third job into stages of the pipeline according to the number of parallel running jobs supported by each of the operators, the processing unit 1220 is configured to: when the number of parallel running jobs supported by each of the first operators is less than or equal to the number of second jobs corresponding to that operator, and the number of parallel running jobs supported by each of the second operators is less than or equal to the number of third jobs corresponding to that operator, starting from a seventh stage in the pipeline, program the jobs of the at least one second job into stages of the pipeline in stage order, where second jobs corresponding to the same operator are programmed into different stages and second jobs corresponding to different operators are programmed into the same stage; and, starting from an eighth stage in the pipeline, program the jobs of the at least one third job into stages of the pipeline in stage order, the seventh stage being one stage before the eighth stage, where third jobs corresponding to the same operator are programmed into different stages and third jobs corresponding to different operators are programmed into the same stage.
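The limited-capacity branch can be sketched as a staggered plan: jobs of the same set that are bound to the same operator go into consecutive stages, jobs bound to different operators may share a stage, and the third jobs start one stage after the second jobs so that their precondition output is already available. The grouping by runner name is an illustrative assumption.

def staggered_plan(second_by_runner, third_by_runner, start_stage=0):
    """Map stage index -> jobs scheduled in that stage."""
    plan = {}
    for jobs in second_by_runner.values():
        for i, job in enumerate(jobs):  # same runner -> consecutive stages
            plan.setdefault(start_stage + i, []).append(job)
    for jobs in third_by_runner.values():
        for i, job in enumerate(jobs):  # third jobs trail the second jobs by one stage
            plan.setdefault(start_stage + 1 + i, []).append(job)
    return plan

plan = staggered_plan(
    {"runner-a": ["pre_0", "pre_1"], "runner-b": ["pre_2"]},
    {"runner-a": ["cond_0", "cond_1"], "runner-b": ["cond_2"]},
)
# plan == {0: ['pre_0', 'pre_2'], 1: ['pre_1', 'cond_0', 'cond_2'], 2: ['cond_1']}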
The present application also provides a computer-readable storage medium storing a computer program for electronic data exchange, the computer program being operable to cause a computer to perform part or all of the steps of any one of the methods described in the method embodiments above.
Embodiments of the present application also provide a computer program product, wherein the computer program product comprises a computer program operable to cause a computer to perform part or all of the steps of any one of the methods described in the method embodiments above. The computer program product may be a software installation package.
For simplicity of description, the foregoing method embodiments are described as a series of combined actions. Those skilled in the art will appreciate that the application is not limited by the described order of actions, as some steps in the embodiments of the application may be performed in other orders or concurrently. Moreover, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be appreciated that the described apparatus may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only a logical function division, and there may be other division manners in practice; multiple units or components may be combined or integrated into another system, and some features may be omitted or not performed. In addition, the illustrated or discussed mutual coupling, direct coupling, or communication connection may be implemented through some interfaces, devices, or units, and may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the above embodiments.
In addition, the functional units in the embodiments may be integrated into one processing unit, or each unit may exist physically on its own, or two or more functional units may be integrated into one physical unit. The above units may be implemented in the form of hardware or in the form of software functional units.
If implemented in the form of software functional units and sold or used as independent products, the above units may be stored in a computer-readable memory. Based on this understanding, the part of the technical solution of the application that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a computer software product. The computer software product is stored in a memory and includes instructions for causing a computer device (a personal computer, a server, a network device, or the like) to perform all or part of the steps of an embodiment of the application. The memory includes various media capable of storing program code, such as a USB flash drive, a ROM, a RAM, a removable hard disk, a magnetic disk, or an optical disc.
Those skilled in the art will appreciate that all or part of the steps of embodiments of the application may be performed by a program to instruct related hardware, and the program may be stored in a memory, where the memory may include a flash disk, a ROM, a RAM, a magnetic disk, an optical disk, or the like.
The embodiments of the present application are described in detail above only to help understand the method of the present application and its core idea. Those skilled in the art may make changes to the specific implementations and the scope of application according to the idea of the present application; accordingly, the content of this specification should not be construed as limiting the application.

Claims (10)

1. A pipeline optimization method for a continuous integrated environment, comprising:
Creating a project in a warehouse management system, and acquiring at least one operator configured for the project and the number of parallel running jobs supported by each of the at least one operator, wherein the at least one operator is used for running the jobs of the project and returning the running results to the warehouse management system;
Creating a first configuration file in a root directory of the item, and pushing the first configuration file to the warehouse management system to construct a pipeline of a continuous integration environment;
Determining an optimization strategy for the construction time of the pipeline according to the number of the operators contained in the at least one operator and the number of the parallel running jobs supported by each of the at least one operator, and executing the optimization strategy;
Wherein the optimization strategy comprises: dividing an independent use-case set into at least one independent use-case subset according to the number of operators contained in the at least one operator; the independent use-case set is used for representing a logical set composed of independent use cases, and an independent use case is used for representing an execution unit which cannot be subdivided and is not premised on the output of any use case; combining each of the at least one independent use-case subset into one first job to obtain at least one first job; and programming the at least one first job into stages of the pipeline according to the number of parallel running jobs supported by each of the operators;
Or the optimization strategy comprises: dividing a pre-use-case set into at least one pre-use-case subset and dividing a conditional use-case set into at least one conditional use-case subset according to the number of operators contained in the at least one operator; the pre-use-case set is used for representing a logical set composed of pre-use cases, the conditional use-case set is used for representing a logical set composed of conditional use cases, a pre-use case is an independent use case or a conditional use case, an independent use case is used for representing an execution unit which cannot be subdivided and is not premised on the output of any use case, and a conditional use case is used for representing an execution unit which depends on the output of a pre-use case as a precondition; combining each of the at least one pre-use-case subset into a second job to obtain at least one second job, and combining each of the at least one conditional use-case subset into a third job to obtain at least one third job; and programming the at least one second job and the at least one third job into stages of the pipeline according to the number of parallel running jobs supported by each of the operators.
2. The method of claim 1, wherein each of the at least one first job corresponds to an operator of the at least one operator.
3. The method of claim 2, wherein the step of programming the at least one first job into the pipeline based on the number of concurrently running jobs supported by each of the operators comprises:
When the number of parallel running jobs supported by each of the operators is greater than or equal to the number of first jobs corresponding to that operator, programming the at least one first job into a first stage in the pipeline, the first stage being one stage in the pipeline.
4. The method of claim 2, wherein the step of programming the at least one first job into the pipeline based on the number of concurrently running jobs supported by each of the operators comprises:
When the number of parallel running jobs supported by each of the operators is less than or equal to the number of first jobs corresponding to that operator, starting from a second stage in the pipeline, programming a first number of jobs selected from the first jobs corresponding to each operator into the second stage, the first number being less than or equal to the number of parallel running jobs supported by that operator;
and programming the remainder of the at least one first job starting from the next stage after the second stage.
5. The method of claim 1, wherein each subset of the at least one conditional use-case subset corresponds one-to-one with a subset of the at least one pre-use-case subset;
The stage into which the second job formed from a first pre-use-case subset is programmed precedes the stage into which the third job formed from a first conditional use-case subset is programmed, the first pre-use-case subset being a subset of the at least one pre-use-case subset, and the first conditional use-case subset being the subset of the at least one conditional use-case subset that depends on the output of the first pre-use-case subset as a precondition.
6. The method of claim 1, wherein all second jobs in the at least one pre-use-case subset correspond to a first operator of the at least one operator, and all third jobs in the at least one conditional use-case subset correspond to a second operator of the at least one operator; wherein the relationship between the first operator and the second operator is one of the following: the first and second operators are the same operator; the first and second operators comprise at least one identical operator; or the first and second operators are entirely different.
7. The method of claim 6, wherein the step of programming the at least one second job and the at least one third job into the pipeline according to the number of concurrently running jobs supported by each of the operators comprises:
in the case where the number of parallel running jobs supported by each of the first operators is greater than or equal to the number of second jobs corresponding to that operator, and the number of parallel running jobs supported by each of the second operators is greater than or equal to the number of third jobs corresponding to that operator,
programming the at least one second job into a third stage in the pipeline and programming the at least one third job into a fourth stage in the pipeline, the third stage being one stage before the fourth stage; or
Starting from a fifth stage in the pipeline, programming each second job of the at least one second job into stages of the pipeline in stage order, wherein stages into which second jobs corresponding to the same operator are programmed are different, and stages into which second jobs corresponding to different operators are programmed are the same; and starting from a sixth stage in the pipeline, programming each third job of the at least one third job into stages of the pipeline in stage order, the fifth stage being one stage before the sixth stage, wherein stages into which third jobs corresponding to the same operator are programmed are different, and stages into which third jobs corresponding to different operators are programmed are the same.
8. The method of claim 6, wherein the step of programming the at least one second job and the at least one third job into the pipeline according to the number of concurrently running jobs supported by each of the operators comprises:
In the case where the number of parallel running jobs supported by each of the first operators is less than or equal to the number of second jobs corresponding to that operator, and the number of parallel running jobs supported by each of the second operators is less than or equal to the number of third jobs corresponding to that operator,
Starting from a seventh stage in the pipeline, programming the jobs of the at least one second job into stages of the pipeline in stage order, wherein stages into which second jobs corresponding to the same operator are programmed are different, and stages into which second jobs corresponding to different operators are programmed are the same;
Starting from an eighth stage in the pipeline, programming the jobs of the at least one third job into stages of the pipeline in stage order, the seventh stage being one stage before the eighth stage, wherein stages into which third jobs corresponding to the same operator are programmed are different, and stages into which third jobs corresponding to different operators are programmed are the same.
9. A pipeline optimization apparatus for a continuous integrated environment, the apparatus comprising a processing unit configured to:
creating a project in a warehouse management system, and acquiring at least one operator configured for the project and the number of parallel running jobs supported by each of the at least one operator, wherein the at least one operator is used for running the jobs of the project and returning the running results to the warehouse management system;
Creating a first configuration file in a root directory of the item, and pushing the first configuration file to the warehouse management system to construct a pipeline of a continuous integration environment;
Determining an optimization strategy for the pipeline according to the number of the operators contained in the at least one operator and the number of the parallel running jobs supported by each of the at least one operator, and executing the optimization strategy of the pipeline;
Wherein the optimization strategy comprises: dividing an independent use-case set into at least one independent use-case subset according to the number of operators contained in the at least one operator; the independent use-case set is used for representing a logical set composed of independent use cases, and an independent use case is used for representing an execution unit which cannot be subdivided and is not premised on the output of any use case; combining each of the at least one independent use-case subset into one first job to obtain at least one first job; and programming the at least one first job into stages of the pipeline according to the number of parallel running jobs supported by each of the operators;
Or the optimization strategy comprises: dividing a pre-use-case set into at least one pre-use-case subset and dividing a conditional use-case set into at least one conditional use-case subset according to the number of operators contained in the at least one operator; the pre-use-case set is used for representing a logical set composed of pre-use cases, the conditional use-case set is used for representing a logical set composed of conditional use cases, a pre-use case is an independent use case or a conditional use case, an independent use case is used for representing an execution unit which cannot be subdivided and is not premised on the output of any use case, and a conditional use case is used for representing an execution unit which depends on the output of a pre-use case as a precondition; combining each of the at least one pre-use-case subset into a second job to obtain at least one second job, and combining each of the at least one conditional use-case subset into a third job to obtain at least one third job; and programming the at least one second job and the at least one third job into stages of the pipeline according to the number of parallel running jobs supported by each of the operators.
10. A computer readable storage medium storing a computer program for electronic data exchange, wherein the computer program is operable to cause a computer to perform the method of any one of claims 1-8.
CN202010629800.5A 2020-07-02 2020-07-02 Pipeline optimization method and device for continuous integrated environment Active CN113962415B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010629800.5A CN113962415B (en) 2020-07-02 2020-07-02 Pipeline optimization method and device for continuous integrated environment


Publications (2)

Publication Number Publication Date
CN113962415A CN113962415A (en) 2022-01-21
CN113962415B true CN113962415B (en) 2024-10-08

Family

ID=79459381

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010629800.5A Active CN113962415B (en) 2020-07-02 2020-07-02 Pipeline optimization method and device for continuous integrated environment

Country Status (1)

Country Link
CN (1) CN113962415B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109783090A (en) * 2019-01-18 2019-05-21 成都宝瓜科技有限公司 A kind of method for visualizing, device and server for persistently delivering
CN110597552A (en) * 2019-09-04 2019-12-20 浙江大搜车软件技术有限公司 Configuration method, device and equipment of project continuous integration pipeline and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10332073B2 (en) * 2016-09-08 2019-06-25 International Business Machines Corporation Agile team structure and processes recommendation
CN111144839B (en) * 2019-12-17 2024-02-02 深圳市优必选科技股份有限公司 Project construction method, continuous integration system and terminal equipment


Also Published As

Publication number Publication date
CN113962415A (en) 2022-01-21

Similar Documents

Publication Publication Date Title
CN114546738B (en) Universal test method, system, terminal and storage medium for server
CN102937904A (en) Multi-node firmware updating method and device
CN111045933A (en) Regression strategy updating method and device, storage medium and terminal equipment
CN112925587B (en) Method and device for initializing applications
CN104035747A (en) Method and device for parallel computing
CN112181522A (en) Data processing method and device and electronic equipment
CN111988429A (en) Algorithm scheduling method and system
CN113204385B (en) Plug-in loading method and device, computing equipment and readable storage medium
CN114117973A (en) Logic synthesis method, device and storage medium
CN114841323A (en) Processing method and processing device of neural network computation graph
CN114841322A (en) Processing method and processing device of neural network computation graph
CN115190010B (en) Distributed recommendation method and device based on software service dependency relationship
CN110362394B (en) Task processing method and device, storage medium and electronic device
US11163594B2 (en) Rescheduling JIT compilation based on jobs of parallel distributed computing framework
CN113962415B (en) Pipeline optimization method and device for continuous integrated environment
CN111079390B (en) Method and device for determining selection state of check box list
CN109947564B (en) Service processing method, device, equipment and storage medium
CN113342512B (en) IO task silencing and driving method and device and related equipment
CN117743145A (en) Test script generation method and device based on coding template and processing equipment
CN115951985A (en) Task execution method and device
CN111124413B (en) Method and device for compiling Maven project
CN112184027A (en) Task progress updating method and device and storage medium
CN117290113B (en) Task processing method, device, system and storage medium
CN111399969B (en) Virtual resource arranging system, method, device, medium and equipment
CN107818048B (en) Computer code branch integrated quality inspection method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant