
CN119065817A - Multi-process dynamic scheduling method and system under large and small core architecture CPU - Google Patents

Multi-process dynamic scheduling method and system under large and small core architecture CPU Download PDF

Info

Publication number
CN119065817A
CN119065817A
Authority
CN
China
Prior art keywords
scheduling
core
list
cpu
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411547727.1A
Other languages
Chinese (zh)
Inventor
王丰羽
唐峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kirin Software Co Ltd
Original Assignee
Kirin Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kirin Software Co Ltd filed Critical Kirin Software Co Ltd
Priority to CN202411547727.1A priority Critical patent/CN119065817A/en
Publication of CN119065817A publication Critical patent/CN119065817A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract


The present invention relates to the technical field of process scheduling, and specifically provides a multi-process dynamic scheduling method and system for a CPU with a large and small core architecture. The method includes: S1, monitoring the current CPU load of the system and determining whether it is greater than a CPU load threshold, obtaining a third scheduling parameter; S2, deciding whether to perform dynamic scheduling based on the third scheduling parameter: when dynamic scheduling is suspended, returning to step S1 after waiting k times the preset monitoring interval (k > 1), and when dynamic scheduling is performed, obtaining a first scheduling parameter and a second scheduling parameter; S3, determining the dynamic scheduling mode according to the first and second scheduling parameters; S4, dynamically scheduling processes according to that mode and returning to step S1 after waiting the preset monitoring interval. This scheme solves the technical problems of untimely and unreasonable scheduling that existing system scheduling exhibits on CPUs with a hybrid large and small core architecture.

Description

Multi-process dynamic scheduling method and system under large and small core architecture CPU
Technical Field
The invention relates to the technical field of process scheduling, in particular to a multi-process dynamic scheduling method and system under a large and small core architecture CPU.
Background
Scheduling is an important means of resolving the mismatch between resources and demands. The time a process can run on a CPU is an important system resource, and the operating system must manage it through effective scheduling according to the current system state. The current Linux kernel supports the following scheduling techniques:
1. First-in first-out scheduling
As shown in fig. 1, all processes are queued in order of arrival; the process that arrives first is scheduled and executed by the CPU, and the CPU is released to the next process once execution completes. For example, if three processes A, B, and C arrive at the CPU in that order, the corresponding scheduling sequence is process A → process B → process C.
2. Time slice rotation scheduling
Time slice round-robin scheduling is a common scheduling algorithm that divides CPU time into fixed-size time slices (also called time quanta). Each process executes within one time slice; if the time slice runs out, the process is moved to the end of the ready queue to wait for the next round of scheduling.
3. Multi-level queue feedback scheduling
Multi-level feedback queue scheduling is a process scheduling algorithm that combines priority scheduling with time slice round-robin scheduling.
As shown in fig. 2, the multi-level feedback queue scheduling algorithm sets up several ready queues and assigns a different priority to each: queue 1 has the highest priority, and the priorities of queues 2, 3, ..., n decrease in turn. Processes in different queues are given time slices of different sizes; the higher a queue's priority, the smaller its time slice, so that S1 < S2 < S3 < ...
Each queue uses first-in first-out scheduling. When a new process enters memory, it is placed at the end of the first queue to wait for scheduling. If the process completes within its time slice, it exits; otherwise the scheduler moves it to the end of the next queue to wait for scheduling. When a process finally sinks to the n-th queue, it runs under time slice round-robin scheduling within that queue.
By arranging processes across multiple ready queues with different priorities and time slices, multi-level feedback queue scheduling achieves more flexible and efficient scheduling.
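The multi-level feedback queue mechanism described above can be sketched in a few lines of Python; the job lengths and per-queue quanta below are illustrative only:

```python
from collections import deque

def mlfq_run(jobs, quanta):
    """Tiny simulation of multi-level feedback queue scheduling.

    jobs   : {name: remaining execution time}
    quanta : time slice per queue level, smallest (highest priority) first
    Every new job enters queue 0; a job that exhausts its slice drops one
    level; the last level behaves as plain round-robin.
    Returns the order in which jobs complete.
    """
    queues = [deque() for _ in quanta]
    for name in jobs:
        queues[0].append(name)          # all new jobs start in queue 0
    done = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name = queues[level].popleft()
        run = min(quanta[level], jobs[name])
        jobs[name] -= run
        if jobs[name] == 0:
            done.append(name)
        else:
            # demote one level, or stay round-robin at the lowest level
            queues[min(level + 1, len(quanta) - 1)].append(name)
    return done
```

With `jobs = {"A": 3, "B": 1}` and `quanta = [2, 4]`, A runs its 2-unit slice first and is demoted, B then completes inside queue 0, and A finishes in queue 1, so B completes before A.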
The three scheduling techniques above consider only the policy by which processes acquire the CPU as a whole resource: the system controls just how long a process runs on the CPU, without taking the CPU's internal characteristics into account. On earlier CPUs every core was equal under the scheduling policy, with no distinction between large and small cores, and under that CPU design architecture the existing scheduling approaches performed well.
At present, however, a new CPU design architecture has appeared: the large and small core architecture. It has been adopted by mainstream personal-computing chips, including those from Intel and most Rockchip parts. To optimize the CPU's performance and energy efficiency, the cores on such a CPU have different clock frequencies, microarchitectures, and functional orientations. The large core (Performance core, also called P core) generally has a higher clock frequency and strong computing power, suited to tasks with high performance requirements such as running large applications and graphics-heavy games; the small core (Efficiency core, also called E core) is a low-power, high-efficiency design that handles low-load and multi-core workloads such as everyday applications and web browsing.
This design architecture allows the CPU to use different types of cores under different conditions, improving its overall working efficiency and performance. For embedded systems, the large and small core design helps improve user experience, extend battery life, and maintain smooth multitasking.
However, when facing a CPU with a hybrid large and small core architecture, the conventional scheduling approach of the Linux system suffers from untimely and unreasonable scheduling. There are three main problems:
1. The Linux system may schedule important high-priority processes on the small cores while scheduling some low-priority processes on the large cores, so that high-priority processes cannot obtain sufficient CPU resources while low-priority processes occupy idle resources, which harms system performance and reduces the overall performance of the system;
2. Under low system load, the current scheduling approach fails to schedule the system's high-priority processes on the large cores and instead uses the small cores throughout; the phenomenon of "one core struggling while the other cores look on" can even appear, so high-priority processes cannot obtain higher performance and the response speed of key system components suffers;
3. Under high system load, the current scheduling approach reacts too slowly to high-load applications: the system's high-load processes end up scheduled across several small cores while other low-load processes run on the large cores, so the high-load processes cannot finish their tasks in time, the overall load of the system stays high, subsequent system tasks cannot be processed promptly, and the system may even go down.
Disclosure of Invention
To overcome these shortcomings, the present invention aims to solve the technical problem that existing system scheduling is untimely and unreasonable on CPUs with a hybrid large and small core architecture.
In a first aspect, the present invention proposes a multi-process dynamic scheduling method under a large and small core architecture CPU, comprising the following steps:
S1, monitoring the current CPU load of the system and determining whether it is greater than a CPU load threshold, obtaining a third scheduling parameter;
S2, deciding whether to perform dynamic scheduling based on the third scheduling parameter: when dynamic scheduling is suspended, returning to step S1 after waiting k times the preset monitoring interval, where k > 1; when dynamic scheduling is performed, obtaining a first scheduling parameter and a second scheduling parameter;
S3, determining a dynamic scheduling mode according to the first scheduling parameter and the second scheduling parameter;
S4, performing dynamic scheduling on processes according to the dynamic scheduling mode, and returning to step S1 after waiting the preset monitoring interval.
Obtaining the first scheduling parameter comprises obtaining a partitioned scheduling reference list, which comprises a system core process list, a system high-priority process list, a user-defined list, and a high-priority process blacklist; the category of a process within the scheduling reference list serves as the first scheduling parameter.
Obtaining the second scheduling parameter comprises updating an LRU linked list, generated by an LRU algorithm, according to the current CPU occupancy, and calculating a scheduling score P_score for each process in the updated LRU linked list as the second scheduling parameter.
Further, when the current CPU load is greater than the CPU load threshold, the third scheduling parameter is set to 0 and dynamic scheduling is suspended; when the current CPU load is less than or equal to the CPU load threshold, the third scheduling parameter is set to 1 and dynamic scheduling is performed.
The system core process list maintains the processes corresponding to the core components of the operating system, including the system graphics server process, the system audio server process, and the system's core process together with its maintenance processes;
the system high-priority process list maintains the system's critical component processes, important auxiliary processes, and detection-and-response processes;
the user-defined list maintains processes created for applications the user considers important, as well as standard benchmark program processes;
the high-priority process blacklist maintains processes that must reside in the system but are not important.
Further, the system core process list and the system high-priority process list are provided by the operating system designer and built into the operating system; the user has no permission to modify them.
The user-defined list and the high-priority process blacklist are entered line by line by the user into an empty configuration file newly created in a specific system directory during startup; during scheduling, the system detects the configuration file and converts it into the user-defined list and the high-priority process blacklist.
Further, updating the LRU linked list generated by the LRU algorithm according to the current CPU occupancy comprises the following steps:
obtaining the N processes with the highest current CPU occupancy in the system, where N is the length of the LRU linked list;
for each of these highest-occupancy processes, in order from high to low CPU occupancy, judging whether it already exists in the current LRU linked list; if so, incrementing that process's occurrence count i by one; if not, traversing the current LRU linked list to find the processes that do not belong to the N highest-occupancy processes and have the fewest occurrences;
judging whether a unique process with the fewest occurrences exists; if so, replacing it with the corresponding highest-occupancy process and setting the occurrence count of that process to 1; if not, replacing, among the processes with the fewest occurrences, the one that entered the LRU linked list most recently, and setting the occurrence count of the corresponding highest-occupancy process to 1;
rearranging all processes in the LRU linked list so that processes with more occurrences, and among ties processes that entered the list earlier, occupy positions with smaller position values.
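One update round of this procedure can be sketched in Python as follows; the Entry record and the monotonic "entered" stamp used for tie-breaking are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Entry:
    pid: str      # process identifier (a name here, for illustration)
    count: int    # occurrence count among the top-N samples
    entered: int  # monotonically increasing "entered the list" stamp

def update_lru(lru, top_n, clock):
    """One update round of the LRU linked list.

    lru    : list of Entry, at most N long
    top_n  : ids of the N highest CPU-occupancy processes, highest first
    clock  : next entry stamp to hand out; the updated value is returned
    """
    by_pid = {e.pid: e for e in lru}
    for pid in top_n:                       # highest CPU occupancy first
        if pid in by_pid:
            by_pid[pid].count += 1          # already tracked: bump its count
        else:
            # eviction candidates: tracked processes not in the top-N
            cands = [e for e in lru if e.pid not in top_n]
            if not cands:
                continue                    # nothing evictable this round
            least = min(c.count for c in cands)
            ties = [c for c in cands if c.count == least]
            # tie-break: evict the most recent entrant among the fewest-seen
            victim = max(ties, key=lambda c: c.entered)
            del by_pid[victim.pid]
            victim.pid, victim.count, victim.entered = pid, 1, clock
            by_pid[pid] = victim
            clock += 1
    # rearrange: more occurrences first; ties broken by earlier entry
    lru.sort(key=lambda e: (-e.count, e.entered))
    return clock
```

Running this on a list holding B, C, E, D, A (counts 10, 10, 6, 5, 3) with top processes A, B, D, F, G bumps A, B, and D, evicts E in favour of F, and evicts C in favour of G, matching the update rules above.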
Further, the scheduling score of a process is
P_score = (10 - P_ni/5 + (P_pr - 20)^(-1)) * log_(N-1)(2(N-1)) + log_(N-1)((N - P_lru) + N - 1) * P_cpu/99,
where P_ni is the NI value of the process in the LRU linked list, P_pr is its PR value, N is the length of the LRU linked list, P_lru is the value corresponding to the process's position in the LRU linked list, and P_cpu is the process's CPU occupancy.
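The published rendering of this formula is garbled, so the grouping of terms below is an assumed reading; under that assumption the score can be computed as:

```python
import math

def scheduling_score(p_ni, p_pr, p_lru, p_cpu, n):
    """Scheduling score P_score of one process in the LRU linked list.

    Assumed reading of the patent's formula:
      p_ni  : NI (nice) value of the process
      p_pr  : PR (priority) value; note this reading is undefined at p_pr == 20
      p_lru : position value of the process in the LRU list (1..n)
      p_cpu : CPU occupancy of the process, in percent
      n     : length of the LRU list (must exceed 2 so the log base is valid)
    """
    priority_term = 10 - p_ni / 5 + 1 / (p_pr - 20)
    # Logarithms of base N-1, matching the "log_(N-1)" factors of the formula.
    w1 = math.log(2 * (n - 1), n - 1)          # constant weight, slightly > 1
    w2 = math.log((n - p_lru) + n - 1, n - 1)  # larger for front-of-list processes
    return priority_term * w1 + w2 * p_cpu / 99
```

Under this reading, a higher CPU occupancy and a position nearer the front of the LRU list both raise the score, which is consistent with the mode-selection rules of step S3 below.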
Further, step S3 comprises:
judging the first scheduling parameter;
for a process whose first scheduling parameter is the system core process list, setting the scheduling mode to dedicated large-core scheduling; for a process whose first scheduling parameter is the system high-priority process list, setting the scheduling mode to full large-core scheduling; for a process whose first scheduling parameter is the high-priority process blacklist, setting the scheduling mode to full small-core scheduling; and for a process whose first scheduling parameter is the user-defined list, judging the second scheduling parameter;
when the second scheduling parameters are all less than or equal to the lower scheduling-score threshold, setting the scheduling mode of processes whose first scheduling parameter is the user-defined list to full large-core scheduling, and setting the scheduling mode of all processes in the LRU linked list to large and small core mixed scheduling;
when the second scheduling parameter is greater than the lower scheduling-score threshold and less than or equal to the upper scheduling-score threshold, setting the scheduling mode of processes whose first scheduling parameter is the user-defined list to large and small core mixed scheduling, setting the scheduling mode of processes in the LRU linked list whose scheduling score is greater than the lower threshold to full large-core scheduling, and setting the scheduling mode of the other processes in the LRU linked list to large and small core mixed scheduling;
when the second scheduling parameter is greater than the upper scheduling-score threshold, setting the scheduling mode of processes whose first scheduling parameter is the user-defined list to large and small core mixed scheduling, setting the scheduling mode of processes in the LRU linked list whose scheduling score is greater than the upper threshold to dedicated large-core scheduling, and setting the scheduling mode of the other processes in the LRU linked list to large and small core mixed scheduling.
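A compact sketch of this decision follows; it assumes the three cases are distinguished by the maximum scheduling score in the LRU linked list, and that a process appearing both in a reference list and in the LRU linked list takes its mode from the reference list:

```python
def choose_modes(categories, lru_scores, low, high):
    """S3 sketch: map each process to a scheduling mode.

    categories : {pid: "core" | "high_priority" | "blacklist" | "user"},
                 the first scheduling parameter per process
    lru_scores : {pid: scheduling score} for processes in the LRU list
    low, high  : lower / upper scheduling-score thresholds
    """
    top = max(lru_scores.values(), default=float("-inf"))
    if top <= low:
        user_mode = "all_big"                # user-defined list: full large-core
        lru_mode = lambda s: "mixed"
    elif top <= high:
        user_mode = "mixed"
        lru_mode = lambda s: "all_big" if s > low else "mixed"
    else:
        user_mode = "mixed"
        lru_mode = lambda s: "dedicated_big" if s > high else "mixed"
    modes = {}
    for pid, cat in categories.items():
        modes[pid] = {"core": "dedicated_big",
                      "high_priority": "all_big",
                      "blacklist": "all_little",
                      "user": user_mode}[cat]
    for pid, score in lru_scores.items():
        modes.setdefault(pid, lru_mode(score))  # reference lists take precedence
    return modes
```

The four mode names correspond to dedicated large-core, full large-core, full small-core, and large and small core mixed scheduling as defined next.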
Further, dedicated large-core scheduling means the process may be scheduled on all large cores of the CPU; full large-core scheduling means the process may only be scheduled on the large cores other than the dedicated large cores; full small-core scheduling means the process may only be scheduled on the small cores of the CPU; and large and small core mixed scheduling means the process may be scheduled on the large cores other than the dedicated large cores, or on the small cores of the CPU.
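On Linux, these four modes map naturally onto CPU affinity masks. The 4+4 core layout below and the choice of CPU 7 as the dedicated large core are purely illustrative:

```python
import os

# Assumed layout: CPUs 0-3 are small cores, CPUs 4-7 are large cores,
# and CPU 7 is reserved as the dedicated large core.
SMALL = {0, 1, 2, 3}
LARGE = {4, 5, 6, 7}
DEDICATED = {7}

MODE_MASKS = {
    "dedicated_big": LARGE,                # all large cores, incl. the dedicated one
    "all_big": LARGE - DEDICATED,          # large cores except the dedicated one
    "all_little": SMALL,                   # small cores only
    "mixed": (LARGE - DEDICATED) | SMALL,  # non-dedicated large cores + small cores
}

def pin(pid, mode):
    """Pin process `pid` to the CPU set of `mode` via sched_setaffinity."""
    os.sched_setaffinity(pid, MODE_MASKS[mode])
```

On a real system the core topology would be discovered (e.g. from sysfs) rather than hard-coded, and `pin` would be applied to each process chosen in step S3.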
Further, the method also comprises the steps of:
obtaining a user-defined monitoring interval entered by the user;
judging whether the user-defined monitoring interval satisfies the monitoring interval threshold; if so, using the user-defined monitoring interval as the preset monitoring interval, and if not, using the default monitoring interval as the preset monitoring interval.
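This validation step can be sketched as follows; the bound values and the default interval are assumptions, since no concrete numbers are given:

```python
MIN_INTERVAL, MAX_INTERVAL = 1.0, 60.0  # assumed bounds of the interval threshold
DEFAULT_INTERVAL = 5.0                  # assumed default interval, in seconds

def choose_interval(user_interval):
    """Return the preset monitoring interval: the user's value if it lies
    within the allowed bounds, otherwise the default."""
    if user_interval is not None and MIN_INTERVAL <= user_interval <= MAX_INTERVAL:
        return user_interval
    return DEFAULT_INTERVAL
```

An out-of-range or missing value falls back to the default rather than being rejected, so the scheduling loop always has a usable interval.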
In a second aspect, the present invention provides a multi-process dynamic scheduling system under a large and small core architecture CPU, comprising a scheduling reference list layer, a process scheduling score algorithm layer, a system dynamic sensing layer, and a large and small core dynamic scheduling layer connected in sequence for information exchange. The system dynamic sensing layer comprises a system state sensing module and a condition checking module connected for information exchange; the condition checking module is also connected to the scheduling reference list layer, the process scheduling score algorithm layer, and the large and small core dynamic scheduling layer. The large and small core dynamic scheduling layer comprises a scheduling mode selection module and a persistent monitoring module connected in sequence for information exchange, and the scheduling mode selection module is connected for information exchange to the condition checking module of the system dynamic sensing layer.
The scheduling reference list layer provides the partitioned scheduling reference list, which comprises a system core process list, a system high-priority process list, a user-defined list, and a high-priority process blacklist; the category within the scheduling reference list serves as the first scheduling parameter.
The process scheduling score algorithm layer updates the LRU linked list generated by the LRU algorithm according to the current CPU occupancy and calculates the scheduling score P_score of each process in the updated LRU linked list, using P_score as the second scheduling parameter.
The system state sensing module monitors the current CPU load of the system and judges whether it is greater than the CPU load threshold, obtaining the third scheduling parameter.
The condition checking module obtains, directly or indirectly, the third, first, and second scheduling parameters from the system state sensing module, the scheduling reference list layer, and the process scheduling score algorithm layer respectively, and judges whether to perform dynamic scheduling based on the third scheduling parameter. When dynamic scheduling is suspended, it waits k times the preset monitoring interval, where k > 1, and returns to the system state sensing module; when dynamic scheduling is performed, it obtains the first and second scheduling parameters and sends them to the scheduling mode selection module.
The scheduling mode selection module determines the dynamic scheduling mode according to the first and second scheduling parameters obtained from the condition checking module.
The persistent monitoring module performs dynamic scheduling on processes according to the dynamic scheduling mode, and after the preset monitoring interval returns to the system state sensing module to perform dynamic scheduling again.
Working principle and beneficial effects of the invention:
In the technical scheme of the invention, processes in the system are monitored in real time; the CPU load determines whether process scheduling is performed, and the process scheduling score algorithm together with the scheduling reference list category serves as the basis for determining the dynamic scheduling mode, replacing the default scheduling policy of the Linux operating system. Appropriate large cores and/or small cores are selected for different processes, which raises the priority with which high-priority and high-load processes are scheduled onto large cores, optimizes the performance of key system processes, and further improves the overall performance of the operating system.
Drawings
The present disclosure will become more readily understood with reference to the accompanying drawings. It is to be understood by persons of ordinary skill in the art that the drawings are designed solely for the purposes of illustration and not as a definition of the limits of the invention. Moreover, like numerals in the figures are used to designate like parts, wherein:
FIG. 1 is a flow chart of first-in first-out scheduling in the background art of the invention;
FIG. 2 is a schematic flow chart of a multi-stage feedback queue scheduling in the background of the invention;
FIG. 3 is a flow chart illustrating the main steps of a multi-process dynamic scheduling method under a large and small core architecture CPU according to the present invention;
FIG. 4 is a schematic diagram of a multi-process dynamic scheduling system under a large and small core architecture CPU according to the present invention;
FIG. 5 is a schematic diagram showing GLMark of undeployed multi-process dynamic scheduling in embodiment 3 of the present invention;
FIG. 6 is a schematic diagram showing GLMark for deploying multi-process dynamic scheduling in embodiment 3 of the present invention.
Detailed Description
Some embodiments of the invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
Example 1
FIG. 3 is a flowchart illustrating main steps of a multi-process dynamic scheduling method under a large and small core architecture CPU according to the present invention. As shown in fig. 3, the multi-process dynamic scheduling method under the large and small core architecture CPU of the present embodiment mainly includes the following steps S1 to S4.
S1, monitoring the current CPU load of the system and judging whether it is greater than the CPU load threshold, obtaining the third scheduling parameter.
In one embodiment, the third scheduling parameter is set to 0 if the current CPU load is greater than the CPU load threshold, and to 1 if it is less than or equal to the threshold.
S2, judging whether to perform dynamic scheduling based on the third scheduling parameter: if it is 0, dynamic scheduling is suspended; if it is 1, dynamic scheduling is performed.
In general, when the system is overloaded and system resources are strained, the overhead of performing process scheduling would aggravate the resource shortage; this runs counter to this embodiment's goals of preventing the risk of system downtime and improving overall system performance through reasonable large and small core scheduling. Whether to perform process scheduling must therefore be decided through the third scheduling parameter.
When dynamic scheduling is suspended, the method returns to step S1 after waiting k times the preset monitoring interval, where k > 1.
In one embodiment, k is 3; extending the wait to a multiple of the original monitoring interval reduces the influence of process scheduling on the system load.
When the dynamic scheduling is executed, a first scheduling parameter and a second scheduling parameter are acquired.
In one embodiment, the first scheduling parameter is obtained as follows: a partitioned scheduling reference list is obtained, comprising a system core process list, a system high-priority process list, a user-defined list, and a high-priority process blacklist, and the category of a process within the scheduling reference list is used as the first scheduling parameter.
The system core process list is provided by the operating system designer, is built into the operating system, and cannot be modified by the user. It maintains the list of processes belonging to the system's core components, such as the system's graphics server (Xorg, Wayland, etc.), the system's audio server (PulseAudio, etc.), and the system's core process together with its maintenance processes (which vary by system; ukui-session on the domestic Kylin operating system). As core processes of the system, these processes participate in scheduling decisions and are guaranteed the highest-priority large-core scheduling under any system condition.
The system high-priority process list is likewise provided by the operating system designer, built into the operating system, and not modifiable by the user. It maintains the system's critical components other than the core processes, the system's important auxiliary processes, and the system's detection-and-response processes (which vary by system; ukui-settings-daemon and lightdm on the domestic Kylin operating system). As key and important processes of the system, these participate in scheduling decisions and, under most system conditions, obtain high-priority large-core scheduling second only to the system core processes.
The user-defined list is defined by the user. An empty configuration file is created for the user in a specific system directory during startup, and the user fills it in line by line, usually with processes created for the user's personally important applications and with standard benchmark program processes. During scheduling, the system detects this file and converts it into the user-defined list, which participates in score calculation; the scheduling mode selection module then chooses a corresponding scheduling policy to ensure high-priority large-core scheduling under normal conditions.
The high-priority process blacklist is defined by the user. An empty configuration file is created for the user in a specific system directory during startup and is usually left empty; when the user confirms that a process is unimportant yet must reside in the system, the user enters it manually. Processes in the high-priority process blacklist can only be scheduled onto the CPU's small cores.
In one embodiment, the second scheduling parameter is obtained by updating the LRU linked list generated by the LRU algorithm according to the current CPU occupancy and calculating the scheduling score P_score of each process in the updated LRU linked list as the second scheduling parameter.
In this embodiment, the LRU linked list is generated and updated by the following steps.
Obtain the N processes with the highest current CPU occupancy in the system, where N is the length of the LRU linked list.
For each of these highest-occupancy processes, in order from high to low CPU occupancy, judge whether it already exists in the current LRU linked list; if so, increment that process's occurrence count i by one; if not, traverse the current LRU linked list and find the processes that do not belong to the N highest-occupancy processes of this calculation cycle and have the fewest occurrences.
Judge whether such a process is unique; if so, replace it with the corresponding highest-occupancy process and set that process's occurrence count to 1; if not, replace, among the processes with the fewest occurrences, the one that entered the LRU linked list most recently, and set the occurrence count of the corresponding highest-occupancy process to 1.
Rearrange all processes in the LRU linked list so that processes with more occurrences, and among ties processes that entered the list earlier, occupy positions with smaller position values, i.e., rank earlier in the LRU linked list.
In one embodiment, assume there are 5 processes in the current LRU linked list with occurrence counts process A-3, process B-10, process C-10, process D-5, and process E-6, ordered B, C, E, D, A in the LRU linked list. For this initial LRU linked list, the second scheduling parameter is obtained as follows:
The five processes with the highest CPU occupancy in the system, obtained with the top command, are A, B, D, F, G. Therefore:
For process A, which appears in the current LRU linked list, the process is retained and its occurrence count is incremented by 1 to 4;
For process B, which appears in the current LRU linked list, the process is retained and its occurrence count is incremented by 1 to 11;
For process D, which appears in the current LRU linked list, the process is retained and its occurrence count is incremented by 1 to 6;
For process F, which does not appear in the current LRU linked list, the process not belonging to the N highest-occupancy processes and with the fewest occurrences is found to be E; E is replaced by F, and F's occurrence count is set to 1;
For process G, which does not appear in the current LRU linked list, the process not belonging to the N highest-occupancy processes and with the fewest occurrences is found to be C; C is replaced by G, and G's occurrence count is set to 1;
The 5 processes in the finally updated LRU linked list are process A-4, process B-11, process G-1, process D-6, and process F-1;
The more occurrences a process has, the earlier it ranks in the LRU linked list, and the later a process entered the list, the later it ranks; the resulting order is B, D, A, F, G.
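The update procedure illustrated above can be sketched in Python as follows; the representation of the list as a name -> (occurrence count, entry order) map, and the function name, are conveniences of this sketch rather than part of the embodiment:

```python
def update_lru(lru, top_n):
    """Update the LRU linked list (sketch).

    lru maps process name -> (occurrence_count, entry_order);
    top_n is the list of N highest-CPU-occupancy processes.
    Returns the process names reordered by count (desc), entry order (asc).
    """
    tick = max(order for _, order in lru.values()) + 1
    for proc in top_n:
        if proc in lru:
            count, order = lru[proc]
            lru[proc] = (count + 1, order)   # already tracked: bump its count
        else:
            # Evict: fewest occurrences among processes not in top_n,
            # ties broken by the most recent entry into the list.
            victims = [p for p in lru if p not in top_n]
            victim = min(victims, key=lambda p: (lru[p][0], -lru[p][1]))
            del lru[victim]
            lru[proc] = (1, tick)
            tick += 1
    # More occurrences rank earlier; earlier entry ranks earlier on ties.
    return sorted(lru, key=lambda p: (-lru[p][0], lru[p][1]))
```

Run against the worked example above, this reproduces the final order B, D, A, F, G.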
In this embodiment, the scheduling score of a process is P_score = (10^(-P_ni/5 + (P_pr - 20)) - 1) · log_(N-1)(2(N-1)) + log_(N-1)((N - P_lru) + N - 1) · P_cpu · 99.
where P_ni is the NI value of the process in the LRU linked list, P_pr is its PR value, N is the length of the LRU linked list, P_lru is the value corresponding to the process's position in the LRU linked list, and P_cpu is the process's CPU occupancy.
The NI value of a process, i.e., its nice value, represents the static priority of the process, where nice denotes a correction to the process priority. It is initially generated by the system by default. Every process has a nice value in the range -20 to 19, with a default of 0; the lower the value, the higher the scheduling priority.
The PR value of a process is its priority value, indicating the real-time priority of the process, in the range 0-99. The value is computed in real time by the operating system; the greater the PR value, the higher the priority. The system default PR value is typically 20.
The CPU occupancy of a process is the proportion of CPU resources it occupies over a period of time, typically expressed as a percentage. A high CPU occupancy may mean the process is resource-intensive or performing complex computations; conversely, a low CPU occupancy may indicate that the process does not need much computing resource, or that it is waiting for other resources (e.g., disk I/O).
The position of a process in the LRU linked list, i.e., its latest LRU occupancy rank, is a scheduling indicator provided in this embodiment: it is the position of the process after the list is updated with the N highest-CPU-occupancy processes. The LRU algorithm, i.e., the Least Recently Used algorithm, is widely used in cache mechanisms: when the space used by a cache reaches its upper limit, part of the existing data must be evicted to keep the cache available, and the choice of what to evict is made by the LRU algorithm. Here, the LRU algorithm maintains an LRU linked list of length N (i.e., N processes in the list) and stores the corresponding occurrence count of each process.
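As a check on the formula, a minimal Python sketch of the scoring function follows; it reads the exponent as -P_ni/5 + (P_pr - 20) and takes P_cpu as a fraction in [0, 1], both of which are assumptions of this sketch:

```python
import math

def p_score(p_ni, p_pr, p_cpu, p_lru, n):
    """Scheduling score of a process in the LRU linked list (sketch).

    p_ni: nice value (-20..19); p_pr: PR value; p_cpu: CPU occupancy as
    a fraction (0.35 for 35%); p_lru: position value in the LRU list
    (smaller = earlier rank); n: LRU list length.
    """
    # Priority term: zero at system defaults (p_ni = 0, p_pr = 20),
    # growing exponentially as nice drops or PR rises.
    prio = (10 ** (-p_ni / 5 + (p_pr - 20)) - 1) * math.log(2 * (n - 1), n - 1)
    # Load term: CPU occupancy rescaled to 0..99, weighted by LRU position.
    load = math.log((n - p_lru) + n - 1, n - 1) * p_cpu * 99
    return prio + load
```

At the system defaults with zero CPU occupancy the score is 0; lowering the nice value, raising PR, or moving earlier in the LRU list all raise the score.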
S3: determining a dynamic scheduling mode according to the first scheduling parameter and the second scheduling parameter.
In one embodiment, the method for determining the dynamic scheduling mode according to the first scheduling parameter and the second scheduling parameter specifically includes:
judging the first scheduling parameter;
For a process whose first scheduling parameter is the system core process list, the scheduling mode is set to exclusive large-core scheduling; for a process whose first scheduling parameter is the system high-priority process list, the scheduling mode is set to full large-core scheduling; for a process whose first scheduling parameter is the high-priority process blacklist, the scheduling mode is set to full small-core scheduling; for a process whose first scheduling parameter is the user-defined list, the second scheduling parameter is further judged;
When the second scheduling parameters are all less than or equal to the lower scheduling-score threshold, the scheduling mode of processes whose first scheduling parameter is the user-defined list is set to full large-core scheduling, and the scheduling mode of all processes in the LRU linked list is set to large-small-core mixed scheduling;
When a second scheduling parameter is greater than the lower scheduling-score threshold and all are less than or equal to the upper scheduling-score threshold, the scheduling mode of processes whose first scheduling parameter is the user-defined list is set to large-small-core mixed scheduling, the scheduling mode of processes in the LRU linked list whose scheduling score is greater than the lower threshold is set to full large-core scheduling, and the scheduling mode of the other processes in the LRU linked list is set to large-small-core mixed scheduling;
When a second scheduling parameter is greater than the upper scheduling-score threshold, the scheduling mode of processes whose first scheduling parameter is the user-defined list is set to large-small-core mixed scheduling, the scheduling mode of processes in the LRU linked list whose scheduling score is greater than the upper threshold is set to exclusive large-core scheduling, and the scheduling mode of the other processes in the LRU linked list is set to large-small-core mixed scheduling.
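The decision logic above can be sketched in Python; the list-kind labels and mode names below are placeholders invented for this sketch, and the treatment of user-defined processes that also appear in the LRU list follows the override order described for the implementation:

```python
def schedule_modes(ref_lists, lru_scores, m, n):
    """Combine the first parameter (reference-list kind per process) with
    the second parameter (LRU scheduling scores) into a per-process mode.
    m / n are the lower / upper scheduling-score thresholds (m < n)."""
    peak = max(lru_scores.values(), default=0)
    modes = {}
    # LRU-based decisions first (weaker; may be overridden below).
    for pid, s in lru_scores.items():
        if peak <= m:
            modes[pid] = "mixed"
        elif peak <= n:
            modes[pid] = "all-big" if s > m else "mixed"
        else:
            modes[pid] = "exclusive-big" if s > n else "mixed"
    # Reference-list decisions override the LRU-based ones.
    for pid, kind in ref_lists.items():
        if kind == "core":
            modes[pid] = "exclusive-big"
        elif kind == "high_priority":
            modes[pid] = "all-big"
        elif kind == "blacklist":
            modes[pid] = "all-little"
        elif kind == "user_defined":
            modes[pid] = "all-big" if peak <= m else "mixed"
    return modes
```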
In the actual code implementation, the invention first determines the scheduling mode for processes in the LRU linked list according to the LRU scheduling scheme, and then determines the scheduling mode for processes in the system core process list, the system high-priority process list, and the high-priority process blacklist according to the scheduling-reference-list scheme. In this way, for processes in the LRU linked list that also appear in those three lists, the mode determined by the list scheme overrides the mode determined by the LRU scheme, so the scheduling-reference-list scheme is still applied when such processes are tracked in the LRU linked list, avoiding contradictory scheduling modes.
That is, in the overall scheduling scheme of the present invention, the LRU-linked-list scheduling scheme (i.e., the dynamic scheduling scheme distinguished by the second scheduling parameter) has a weaker influence than the scheduling-reference-list scheme (i.e., the dynamic scheduling scheme distinguished by the first scheduling parameter).
Exclusive large-core scheduling means the process can be scheduled on all large cores of the CPU; full large-core scheduling means the process can only be scheduled on large cores other than the exclusive large cores; full small-core scheduling means the process can only be scheduled on the small cores of the CPU; and large-small-core mixed scheduling means the process can be scheduled on large cores other than the exclusive large cores or on the small cores of the CPU.
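One way to realize these four modes on Linux is with CPU affinity masks. The sketch below computes the allowed CPU set per mode; the topology arguments, mode names, and function name are assumptions of this sketch, and applying a mode to a live process would then use os.sched_setaffinity(pid, cpus):

```python
def cpu_set(mode, little, big, reserved_big):
    """Return the CPU set a process in the given mode may run on (sketch).

    little / big are the little- and big-core ids; reserved_big is the
    subset of big cores reserved for exclusive large-core scheduling."""
    shared_big = [c for c in big if c not in reserved_big]
    return {
        "exclusive-big": set(big),                  # all big cores
        "all-big":       set(shared_big),           # big minus reserved
        "all-little":    set(little),               # little cores only
        "mixed":         set(shared_big) | set(little),
    }[mode]
```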
S4: performing dynamic scheduling on the processes according to the dynamic scheduling mode, and returning to step S1 after waiting a preset monitoring interval.
In this embodiment, the preset monitoring interval defaults to the monitoring interval t_s and is also user-definable; when the user customizes it, the system's default configuration is modified to the custom monitoring interval t_c. To ensure that the user-defined value meets the requirements, the custom monitoring interval entered by the user is checked, specifically as follows.
Acquire the custom monitoring interval entered by the user;
Judge whether the custom monitoring interval conforms to the monitoring interval threshold: if so, use the custom monitoring interval as the preset monitoring interval; if not, use the default monitoring interval as the preset monitoring interval.
The monitoring interval threshold is a preset range with an upper limit and a lower limit.
Based on steps S1-S4, processes in the system are monitored in real time; the CPU load condition is used as the basis for deciding whether to perform process scheduling, and the process scheduling-score algorithm together with each process's scheduling-reference-list kind is used as the basis for determining the dynamic scheduling mode, replacing the default scheduling policy of the Linux operating system. A suitable large or small core is selected for each process, raising the priority with which high-priority and high-load processes are scheduled onto large cores, optimizing the performance of key system processes, and thereby improving the overall performance of the operating system.
It should be noted that, although the foregoing embodiments describe the steps in a specific order, it will be understood by those skilled in the art that, in order to achieve the effects of the present invention, the steps are not necessarily performed in such an order, and may be performed simultaneously (in parallel) or in other orders, and these variations are within the scope of the present invention.
Example 2
Fig. 4 is a schematic frame diagram of a multi-process dynamic scheduling system under a large and small core architecture CPU according to the present invention. As shown in fig. 4, the multi-process dynamic scheduling system under a large and small core architecture CPU in this embodiment includes a scheduling reference list layer, a process scheduling scoring algorithm layer, a system dynamic sensing layer, and a large-small-core dynamic scheduling layer connected in sequence by information flow.
1. Scheduling reference list layer
The scheduling reference list layer maintains four lists: a system core process list, a system high-priority process list, a user-defined list, and a high-priority process blacklist.
The system core process list is provided by the operating system designer, built into the operating system, and not modifiable by the user. It maintains the list of processes belonging to the system's core components, such as the system's graphics server (Xorg, Wayland, etc.), the system's audio server (PulseAudio, etc.), and the system's core process and its maintenance processes (these differ from system to system; on the domestic Kylin operating system it is ukui-session). As core processes of the system, they participate in scheduling decisions and are guaranteed the highest-priority large-core scheduling under any system condition.
The system high-priority process list is likewise provided by the operating system designer, built into the operating system, and not modifiable by the user. It maintains the key system components other than the system core processes, the system's important auxiliary processes, and the system detection and response processes (these differ from system to system; on the domestic Kylin operating system they are ukui-settings-daemon and lightdm). As key and important system processes, they participate in scheduling decisions and, under most system conditions, obtain high-priority large-core scheduling second only to the system core processes.
The user-defined list is defined by the user. During startup, an empty configuration file is created for the user under a specific system directory, and the user fills it in line by line, typically with processes created for the user's personally important applications and with standard benchmark program processes. During scheduling, the system reads this file and converts it into the user-defined list, which participates in score calculation; the scheduling mode selection module then selects a corresponding scheduling policy to ensure that these processes obtain high-priority large-core scheduling under normal conditions.
The high-priority process blacklist is also defined by the user. During startup, an empty configuration file is created for the user under a specific system directory; it is usually left empty. When the user confirms that a process is unimportant but must remain resident in the system, the user enters it manually. Processes in the high-priority process blacklist can only be scheduled onto the small cores of the CPU.
2. Process scheduling scoring algorithm layer
The process scheduling scoring algorithm layer is used for updating the LRU linked list generated based on the LRU algorithm according to the current CPU occupancy rate, and calculating the scheduling score P score of each process in the updated LRU linked list.
The LRU algorithm first queries the N processes with the highest CPU occupancy in the system and compares each of them against the LRU linked list: if the process is already in the list, its occurrence count is incremented by 1; if not, the process with the fewest occurrences that entered the LRU linked list most recently is selected and replaced, and the new process's occurrence count is set to 1.
The scheduling score of a process is calculated mainly from four dimensions: the NI value of the process, the PR value of the process, the CPU occupancy of the process, and the position of the process in the LRU linked list. Combining these four dimensions, this embodiment designs the scheduling score function P_score = (10^(-P_ni/5 + (P_pr - 20)) - 1) · log_(N-1)(2(N-1)) + log_(N-1)((N - P_lru) + N - 1) · P_cpu · 99.
In the score function, P_score is the scheduling score of the process, P_ni its NI value, P_pr its PR value, N the length of the LRU linked list, P_lru the value corresponding to the process's position in the LRU linked list, and P_cpu its CPU occupancy, typically as a percentage.
In the scoring function, the four dimensions have different effects on the process score.
P_ni and P_pr are the primary priority factors of the score and are computed with an exponential function. At the system defaults, when P_ni equals 0 and P_pr is 20, (10^(-P_ni/5 + (P_pr - 20)) - 1) · log_(N-1)(2(N-1)) is 0, meaning the degree of influence on scheduling is 0. When P_ni decreases or P_pr increases, the exponential function grows rapidly, so changes in the PR and NI values influence scheduling far more than the LRU list does; the priority of a system process is the primary consideration in large-small-core scheduling.
P_cpu is the second consideration of the scoring function. In this function, P_cpu is first multiplied by 99, converting it from a percentage into a corresponding score value. In general, the higher a process's CPU occupancy, the greater its demand for large-core resources, and the more its prompt completion relieves the system's operating pressure; hence it is the second consideration in the large-small-core scheduling score.
The last influencing factor is P_lru, which represents the position of the process in the linked list of the LRU algorithm. The list is ordered by occurrence count: the more occurrences, the smaller the value corresponding to the position in the list. The smaller the value, the longer the process has run as a high-load process in the operating system, and the more it requires timely scheduling to completion.
3. System dynamic perception layer
Generally, when the system is overloaded and system resources are overly strained, the overhead of performing process scheduling would aggravate the resource shortage; this conflicts with the aims of this embodiment, namely preventing system downtime risk and improving overall system performance through reasonable large-small-core scheduling. The main task of the system dynamic sensing layer is therefore to sense whether the current system state is suitable for performing dynamic scheduling.
The system dynamic sensing layer comprises a system state sensing module and a condition check module connected by information flow; the condition check module is simultaneously connected to the scheduling reference list layer, the process scheduling scoring algorithm layer, and the large-small-core dynamic scheduling layer.
The system state sensing module mainly polls and monitors the current CPU state of the system at the preset monitoring interval, generates the third scheduling parameter, and passes it to the condition check module. The condition check module directly or indirectly obtains the third, first, and second scheduling parameters from the system state sensing module, the scheduling reference list layer, and the process scheduling scoring algorithm layer respectively, and judges whether to perform dynamic scheduling based on the third scheduling parameter. When dynamic scheduling is suspended, it waits k times the preset monitoring interval (k > 1) and returns to the system state sensing module; this slows the polling frequency and reduces the impact of the scheduling scripts on system load, and after the system's CPU load drops below the specific threshold, operation of the scheduling mode selection module continues through this delayed-start mechanism. When dynamic scheduling is performed, the first and second scheduling parameters are acquired and sent to the scheduling mode selection module.
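The pause-and-back-off behaviour of the condition check module can be sketched as follows; the function name and return convention are conveniences of this sketch:

```python
def condition_check(third_param, base_interval, k=3):
    """Condition check module (sketch): decide whether this cycle runs
    dynamic scheduling and how long to wait before the next poll.

    third_param is 0 (system overloaded, pause scheduling) or 1
    (proceed); k > 1 slows polling while scheduling is paused."""
    if third_param == 0:
        return False, k * base_interval   # skip this cycle, back off
    return True, base_interval            # schedule, normal polling
```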
4. Size core dynamic scheduling layer
The large-small-core dynamic scheduling layer comprises a scheduling mode selection module, a persistence monitoring module, and a user input verification module connected in sequence by information flow, with the scheduling mode selection module in information connection with the condition check module of the system dynamic sensing layer.
The scheduling mode selection module determines the dynamic scheduling mode according to the first and second scheduling parameters acquired from the condition check module. When the first scheduling parameter corresponds to the system core process list, the scheduling mode of the process is exclusive large-core scheduling; when it corresponds to the system high-priority process list, the scheduling mode is full large-core scheduling; and when it corresponds to the high-priority process blacklist, the scheduling mode is full small-core scheduling.
When the first scheduling parameter corresponds to the user-defined list, the second scheduling parameter must also be considered. If the second scheduling parameters are all less than or equal to a specific value m (m is the lower score threshold), the scheduling mode of all processes in the user-defined list is full large-core scheduling. If the second scheduling parameters are all less than or equal to another specific value n (n is the upper score threshold, n > m), the system must preferentially schedule the high-load processes: the scheduling mode of processes in the user-defined list is large-small-core mixed scheduling, the scheduling mode of processes in the LRU linked list with a scheduling score greater than m is full large-core scheduling, and the other processes in the LRU linked list use large-small-core mixed scheduling. If a second scheduling parameter is greater than n, the high-load processes in the LRU linked list need to obtain exclusive scheduling resources: processes in the LRU linked list with a scheduling score greater than n adopt exclusive large-core scheduling, while the other processes in the LRU linked list, and processes in the user-defined list, adopt large-small-core mixed scheduling.
The four scheduling modes match different processes: full small-core scheduling means the process can only be scheduled on the small cores of the CPU; large-small-core mixed scheduling means the process can be scheduled on large cores other than the exclusive large cores and on the small cores; full large-core scheduling means the process can only be scheduled on large cores other than the exclusive large cores; and exclusive large-core scheduling means the process can be scheduled on all large cores, including the exclusive large cores.
The persistence monitoring module is responsible for performing dynamic scheduling on processes according to the dynamic scheduling mode; after completing one scheduling pass, it re-monitors the process state of the system after the preset monitoring interval and reschedules as needed.
The preset monitoring interval defaults to t_s in this embodiment and is also user-definable; when customized, the system's default configuration is modified to the custom monitoring interval t_c. The user input verification module obtains the user's custom monitoring interval, judges whether it conforms to the monitoring interval threshold, and uses it as the preset monitoring interval if so, or the default monitoring interval otherwise. The validity and reasonableness of the user-entered custom interval are judged by the user input verification module: specifically, the first parameter t_c entered by the user through a specific shell command is checked, integer values from t_min to t_max (i.e., the monitoring interval threshold) are supported, and otherwise the system default monitoring interval t_s is adopted.
The above-mentioned multi-process dynamic scheduling system under the large and small core architecture CPU is implemented by a multi-process dynamic scheduling method under the large and small core architecture CPU for executing the embodiment 1, and the technical principles, the technical problems to be solved and the technical effects to be produced are similar, and those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working process and related description of a multi-process dynamic scheduling system under the large and small core architecture CPU may refer to what is described in the embodiments of a multi-process dynamic scheduling method under the large and small core architecture CPU, and will not be repeated herein.
Example 3
Based on the technical solutions of the foregoing embodiments, this embodiment proposes a specific implementation manner.
This embodiment is based on the Rockchip RK3588 platform and the Galaxy Kylin desktop operating system (national defense version) V10, and specifically implements the multi-process dynamic scheduling technique under a CPU large-small-core architecture on the domestic Kylin operating system. The RK3588 is a recent-generation industrial processor with a large-small-core architecture of four ARM Cortex-A76 cores @ 2.4 GHz plus four ARM Cortex-A55 cores @ 1.8 GHz. The Galaxy Kylin desktop operating system is based on the Linux kernel; it aims to provide a secure, stable, and easy-to-use operating system and is mainly oriented to the informatization construction of industries such as finance and electric power.
1. Scheduling reference list layer implementation
The system core process list and the system high-priority process list are embedded in the code and cannot be modified by the user. The system core process list contains the graphics service process Xorg run by Kylin, the audio service process PulseAudio, and the system core process ukui-session; the system high-priority process list contains key system processes such as lightdm and ukui-settings-daemon. The file node of the user-defined list is located at /etc/processList.cnf and is user-writable; the file node of the high-priority process blacklist is located at /etc/processBlackList.cnf and is user-writable. After the corresponding file lists are read, the four lists are integrated and passed to the condition check module as the first scheduling parameter.
2. Process size kernel scheduling scoring algorithm layer implementation
In this example, N is 10; that is, the system maintains an LRU linked list of size 10 and obtains the top 10 processes with the highest CPU occupancy through the top command as the latest input to the LRU linked list. Correspondingly, for the processes in the LRU linked list, P_ni, P_cpu, and P_pr are recorded and the corresponding large-small-core scheduling score is calculated via P_score = (10^(-P_ni/5 + (P_pr - 20)) - 1) · log_9(18) + log_9(19 - P_lru) · P_cpu · 99. The processes in the LRU linked list and their corresponding scheduling scores are passed to the condition check module as the second scheduling parameter.
3. System state dynamic perception layer implementation
The system state sensing module obtains the current CPU load through the top command, polling at the preset monitoring interval. If the current CPU load is greater than the CPU load threshold, the third scheduling parameter is set to 0; if it is less than or equal to the threshold, the third scheduling parameter is set to 1; the third scheduling parameter is then passed to the condition check module. Preferably, the CPU load threshold in this example is 85%.
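A sketch of this sensing step follows, assuming the CPU load is derived from `top -bn1`-style output by subtracting the idle percentage; the parsing details and function names are assumptions of this sketch:

```python
import re

def cpu_load_from_top(top_output):
    """Extract overall CPU load (percent) from top batch-mode output:
    100 minus the idle percentage on the %Cpu(s) line (sketch)."""
    idle = float(re.search(r"([\d.]+)\s*id", top_output).group(1))
    return 100.0 - idle

def third_param(load, threshold=85.0):
    """Third scheduling parameter: 0 pauses dynamic scheduling when the
    load exceeds the threshold, 1 allows it."""
    return 0 if load > threshold else 1
```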
The condition check module receives the third scheduling parameter from the system state sensing module. When the third scheduling parameter is 0, dynamic scheduling is suspended, i.e., scheduling for this cycle is skipped directly, and the preset monitoring interval is changed to k times the original; preferably, k is 3 in this example. When the third scheduling parameter is 1, the condition check module acquires the first and second scheduling parameters from the scheduling reference list layer and the process scheduling scoring algorithm layer respectively and passes them to the scheduling mode selection module.
4. Size core dynamic scheduling layer implementation
The CPU large and small cores of the Rockchip RK3588 are distributed with CPU0-CPU3 as small cores and CPU4-CPU7 as large cores. Preferably, the four scheduling modes of the scheduling mode selector in this embodiment are grouped as follows: the CPU cores for full small-core scheduling are CPU0-CPU3, the CPU cores for full large-core scheduling are CPU4-CPU5, the CPU cores for exclusive large-core scheduling are CPU4-CPU7, and the CPU cores for large-small-core mixed scheduling are CPU0-CPU5. Likewise, based on the LRU linked list length N of 10 in this example, the process score thresholds are preferably taken as n = 70 and m = 50.
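Under this grouping, pinning a process to a scheduling mode could be done with the taskset utility; the dictionary below restates the example's CPU groups, and the mode names are placeholders of this sketch:

```python
# CPU groups for the RK3588 example: little cores 0-3, big cores 4-7,
# with cores 6-7 reserved for exclusive large-core scheduling.
RK3588_MODE_CPUS = {
    "full-little":   [0, 1, 2, 3],
    "mixed":         [0, 1, 2, 3, 4, 5],
    "full-big":      [4, 5],
    "exclusive-big": [4, 5, 6, 7],
}

def taskset_cmd(pid, mode):
    """Build the taskset invocation that pins pid to the CPU group
    of the given scheduling mode (sketch)."""
    cpus = ",".join(str(c) for c in RK3588_MODE_CPUS[mode])
    return ["taskset", "-pc", cpus, str(pid)]
```

For example, `taskset -pc 4,5 <pid>` would restrict a full large-core process to CPU4 and CPU5.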
For the persistence monitoring module, preferably, the default monitoring interval t_s in this example is 5 seconds, and the bounds t_min and t_max of the custom monitoring interval t_c are 3 seconds and 20 seconds respectively.
The user input verification module is implemented by exposing the initialization script in the service start procedure as a shell start command to the user. The user's custom monitoring interval t_c is obtained by capturing the first parameter of the command; if the user runs the command without a parameter, or the entered custom monitoring interval t_c is illegal (i.e., t_c > 20 seconds or t_c < 3 seconds), the preset monitoring interval is 5 seconds; if the input is legal, the preset monitoring interval is the user's custom monitoring interval t_c.
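A minimal sketch of this input check, using the example's bounds of 3-20 seconds and a 5-second default; the function name is an assumption:

```python
import sys

T_MIN, T_MAX, T_DEFAULT = 3, 20, 5  # seconds, from this example

def monitoring_interval(argv):
    """Validate the first command-line parameter as the custom monitoring
    interval; fall back to the default when it is missing, non-integer,
    or out of range (sketch)."""
    if len(argv) < 2:
        return T_DEFAULT
    try:
        t_c = int(argv[1])
    except ValueError:
        return T_DEFAULT
    return t_c if T_MIN <= t_c <= T_MAX else T_DEFAULT

# In the start script this would be called as monitoring_interval(sys.argv).
```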
Based on the above scheme, since the large-core frequency of the CPU is higher than the small-core frequency, large-core performance is generally better than small-core performance. By raising the priority with which high-priority and high-load processes are scheduled onto large cores, critical system processes and high-load processes are given suitable scheduling modes, improving the performance of critical system processes, increasing the system's allocation of CPU resources toward high-load processes, and improving the overall performance of the operating system under normal operation.
GLMark2 is a standard test tool for measuring the graphics performance provided by system hardware. Taking GLMark2 as an example, fig. 5 shows the GLMark2 result without the multi-process dynamic scheduling of embodiment 3 of the present invention deployed; as shown in fig. 5, the final score of the Galaxy Kylin desktop operating system is 936 points. Fig. 6 shows the GLMark2 result with multi-process dynamic scheduling deployed; as shown in fig. 6, the final score of the Galaxy Kylin desktop operating system is 1065 points. It can be seen that by deploying the multi-process dynamic scheduling technique under the CPU large-small-core architecture, the graphics performance of the operating system improves by about 14%.
As a standard test tool, GLMark2 reflects the overall graphics performance of the system to a certain extent, so it can be considered that deploying the invention improves the overall graphics performance of the system by about 14% compared with the same software and hardware without this function. Aimed at the pain point of large-and-small-core scheduling in the market, the system can dynamically perceive, dynamically detect and dynamically schedule: by selecting an appropriate scheduling mode according to the load of each process, its importance within the system, and the user's process requirements, the smoothness of the overall operating system is increased and the performance of the operating system is improved.
Thus far, the technical solution of the present invention has been described with reference to the preferred embodiments shown in the drawings, but those skilled in the art will readily understand that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications or substitutions of the related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions fall within the scope of the present invention.

Claims (10)

1. A multi-process dynamic scheduling method under a large and small core architecture CPU, characterized by comprising the following steps:
S1: monitoring the current CPU load of the system and judging whether the current CPU load is greater than a CPU load threshold, to obtain a third scheduling parameter;
S2: judging, based on the third scheduling parameter, whether to perform dynamic scheduling: when dynamic scheduling is suspended, waiting for k times the preset monitoring interval, k>1, and then returning to step S1; when dynamic scheduling is performed, obtaining a first scheduling parameter and a second scheduling parameter;
S3: determining a dynamic scheduling mode according to the first scheduling parameter and the second scheduling parameter;
S4: dynamically scheduling processes according to the dynamic scheduling mode, waiting for the preset monitoring interval, and then returning to step S1;
wherein obtaining the first scheduling parameter comprises: obtaining a pre-divided scheduling reference list, the scheduling reference list comprising a system core process list, a system high-priority process list, a user-defined list and a high-priority process blacklist, and taking the category to which a process belongs within the scheduling reference list as the first scheduling parameter;
and obtaining the second scheduling parameter comprises: updating, according to the current CPU occupancy, an LRU linked list generated based on the LRU algorithm, and calculating the scheduling score P_score of each process in the updated LRU linked list as the second scheduling parameter.

2. The multi-process dynamic scheduling method under a large and small core architecture CPU according to claim 1, characterized in that when the current CPU load is greater than the CPU load threshold, the third scheduling parameter is set to 0 and dynamic scheduling is suspended; when the current CPU load is less than or equal to the CPU load threshold, the third scheduling parameter is set to 1 and dynamic scheduling is performed.

3. The multi-process dynamic scheduling method under a large and small core architecture CPU according to claim 1, characterized in that:
the system core process list is used to maintain the processes corresponding to the core components of the operating system, including the system graphics server process, the system audio server process, and the system core processes together with their maintenance processes;
the system high-priority process list is used to maintain system relational component processes, important system auxiliary processes, and system detection-and-response processes;
the user-defined list is used to maintain processes created for important applications and standard test program processes;
the high-priority process blacklist is used to maintain processes that must reside in the system but are not important.

4. The multi-process dynamic scheduling method under a large and small core architecture CPU according to claim 3, characterized in that the system core process list and the system high-priority process list are both provided by the operating system designer and built into the operating system, and the user has no permission to modify them; the user-defined list and the high-priority process blacklist are created as empty configuration files in a specific system directory during startup and are filled in line by line by the user; during scheduling, the system detects the configuration files and converts them into the user-defined list and the high-priority process blacklist.

5. The multi-process dynamic scheduling method under a large and small core architecture CPU according to claim 1, characterized in that updating, according to the current CPU occupancy, the LRU linked list generated based on the LRU algorithm comprises the steps of:
obtaining the N processes with the highest current CPU occupancy in the system, N being the length of the LRU linked list;
judging, in descending order of CPU occupancy, whether each of the highest-occupancy processes exists in the current LRU linked list; if so, increasing the occurrence count i of the corresponding process by one; if not, traversing the current LRU linked list to find the process that does not belong to the N highest-occupancy processes and has the fewest occurrences;
judging whether a unique process with the fewest occurrences exists; if so, replacing that process with the corresponding highest-occupancy process and setting the occurrence count of the latter to 1; if not, replacing the process that has the fewest occurrences and entered the LRU linked list latest with the corresponding highest-occupancy process and setting the occurrence count of the latter to 1;
rearranging all processes in the LRU linked list such that a process with more occurrences and an earlier entry time corresponds to a smaller position value in the LRU linked list.

6. The multi-process dynamic scheduling method under a large and small core architecture CPU according to claim 5, characterized in that the scheduling score of a process is
P_score = (10 − P_ni/5 + (P_pr − 20)^(−1)) · log_(N−1)(2(N−1)) + log_(N−1)((N − P_lru) + N − 1) · P_cpu/99,
where P_ni is the NI value of the process in the LRU linked list, P_pr is the PR value of the process in the LRU linked list, N is the length of the LRU linked list, P_lru is the value corresponding to the position of the process in the LRU linked list, and P_cpu is the CPU occupancy of the process in the LRU linked list.

7. The multi-process dynamic scheduling method under a large and small core architecture CPU according to claim 1, characterized in that step S3 comprises:
judging the first scheduling parameter;
for a process whose first scheduling parameter is the system core process list, setting the scheduling mode to exclusive large-core scheduling; for a process whose first scheduling parameter is the system high-priority process list, setting the scheduling mode to all-large-core scheduling; for a process whose first scheduling parameter is the high-priority process blacklist, setting the scheduling mode to all-small-core scheduling; for a process whose first scheduling parameter is the user-defined list, judging the second scheduling parameter;
when all second scheduling parameters are less than or equal to the lower scheduling-score threshold, setting the scheduling mode of processes whose first scheduling parameter is the user-defined list to all-large-core scheduling, and setting the scheduling mode of all processes in the LRU linked list to mixed large-and-small-core scheduling;
when there exists a second scheduling parameter that is greater than the lower scheduling-score threshold and less than or equal to the upper scheduling-score threshold, setting the scheduling mode of processes whose first scheduling parameter is the user-defined list to mixed large-and-small-core scheduling, setting the scheduling mode of processes in the LRU linked list whose scheduling score is greater than the lower scheduling-score threshold to all-large-core scheduling, and setting the scheduling mode of the other processes in the LRU linked list to mixed large-and-small-core scheduling;
when there exists a second scheduling parameter that is greater than the upper scheduling-score threshold, setting the scheduling mode of processes whose first scheduling parameter is the user-defined list to mixed large-and-small-core scheduling, setting the scheduling mode of processes in the LRU linked list whose scheduling score is greater than the upper scheduling-score threshold to exclusive large-core scheduling, and setting the scheduling mode of the other processes in the LRU linked list to mixed large-and-small-core scheduling.

8. The multi-process dynamic scheduling method under a large and small core architecture CPU according to claim 7, characterized in that exclusive large-core scheduling means the process may be scheduled on all large cores of the CPU; all-large-core scheduling means the process may only be scheduled on the large cores of the CPU other than the exclusive large cores; all-small-core scheduling means the process may only be scheduled on the small cores of the CPU; and mixed large-and-small-core scheduling means the process may be scheduled on the large cores of the CPU other than the exclusive large cores, or on the small cores of the CPU.

9. The multi-process dynamic scheduling method under a large and small core architecture CPU according to claim 1, characterized by further comprising the steps of:
obtaining a user-defined monitoring interval input by the user;
judging whether the user-defined monitoring interval meets the monitoring-interval threshold; if so, taking the user-defined monitoring interval as the preset monitoring interval; if not, taking the default monitoring interval as the preset monitoring interval.

10. A multi-process dynamic scheduling system under a large and small core architecture CPU, characterized by comprising a scheduling reference list layer, a process scheduling score algorithm layer, a system dynamic perception layer and a large-and-small-core dynamic scheduling layer that are connected in sequence for information exchange; the system dynamic perception layer comprises a system state perception module and a condition verification module connected for information exchange, the condition verification module being simultaneously connected to the scheduling reference list layer, the process scheduling score algorithm layer and the large-and-small-core dynamic scheduling layer; the large-and-small-core dynamic scheduling layer comprises a scheduling mode selection module and a persistent monitoring module connected in sequence for information exchange, the scheduling mode selection module being connected for information exchange to the condition verification module of the system dynamic perception layer; wherein:
the scheduling reference list layer is used to provide a pre-divided scheduling reference list, the scheduling reference list comprising a system core process list, a system high-priority process list, a user-defined list and a high-priority process blacklist, the category within the scheduling reference list serving as the first scheduling parameter;
the process scheduling score algorithm layer is used to update, according to the current CPU occupancy, the LRU linked list generated based on the LRU algorithm, and to calculate the scheduling score P_score of each process in the updated LRU linked list, the scheduling score P_score serving as the second scheduling parameter;
the system state perception module is used to monitor the current CPU load of the system and judge whether the current CPU load is greater than the CPU load threshold, to obtain the third scheduling parameter;
the condition verification module is used to obtain, directly or indirectly, the third scheduling parameter, the first scheduling parameter and the second scheduling parameter from the system state perception module, the scheduling reference list layer and the process scheduling score algorithm layer respectively, and to judge, based on the third scheduling parameter, whether to perform dynamic scheduling; when dynamic scheduling is suspended, it waits for k times the preset monitoring interval, k>1, and then returns to the system state perception module; when dynamic scheduling is performed, it obtains the first scheduling parameter and the second scheduling parameter and sends them to the scheduling mode selection module;
the scheduling mode selection module is used to determine the dynamic scheduling mode according to the first scheduling parameter and the second scheduling parameter obtained from the condition verification module;
the persistent monitoring module is used to dynamically schedule processes according to the dynamic scheduling mode, and to return to the system state perception module after the preset monitoring interval to perform dynamic scheduling again.
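The LRU-list maintenance described in claim 5 can be sketched as follows (illustrative only; the entry layout `[name, count, entry_order]` and the function name are assumptions, not the patented implementation):

```python
def update_lru(lru, top_n):
    """Update the LRU process list per claim 5 (a sketch).

    `lru` entries are `[name, count, entry_order]` (layout assumed);
    `top_n` lists the N highest-CPU-occupancy processes, highest first.
    """
    names = {e[0] for e in lru}
    order = max((e[2] for e in lru), default=0)
    for proc in top_n:
        if proc in names:
            for e in lru:               # already tracked: bump its count
                if e[0] == proc:
                    e[1] += 1
                    break
        else:
            # replacement candidates: entries not among the current top-N
            cand = [e for e in lru if e[0] not in top_n]
            if not cand:
                continue
            least = min(e[1] for e in cand)
            ties = [e for e in cand if e[1] == least]
            # unique least-frequent entry, else the latest arrival
            victim = ties[0] if len(ties) == 1 else max(ties, key=lambda e: e[2])
            names.discard(victim[0])
            order += 1
            victim[0], victim[1], victim[2] = proc, 1, order
            names.add(proc)
    # more occurrences / earlier entry -> smaller position value
    lru.sort(key=lambda e: (-e[1], e[2]))
    return [e[0] for e in lru]
```

Each entry's resulting index in the sorted list corresponds to the position value P_lru used by the scheduling-score formula of claim 6.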
CN202411547727.1A 2024-11-01 2024-11-01 Multi-process dynamic scheduling method and system under large and small core architecture CPU Pending CN119065817A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411547727.1A CN119065817A (en) 2024-11-01 2024-11-01 Multi-process dynamic scheduling method and system under large and small core architecture CPU


Publications (1)

Publication Number Publication Date
CN119065817A 2024-12-03

Family

ID=93637250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411547727.1A Pending CN119065817A (en) 2024-11-01 2024-11-01 Multi-process dynamic scheduling method and system under large and small core architecture CPU

Country Status (1)

Country Link
CN (1) CN119065817A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104572272A (en) * 2013-10-12 2015-04-29 杭州华为数字技术有限公司 Task scheduling method, device and system
US20150301858A1 (en) * 2014-04-21 2015-10-22 National Tsing Hua University Multiprocessors systems and processes scheduling methods thereof
CN107066326A (en) * 2017-03-27 2017-08-18 深圳市金立通信设备有限公司 The method and terminal of a kind of scheduler task
CN110489228A (en) * 2019-07-16 2019-11-22 华为技术有限公司 A kind of method and electronic equipment of scheduling of resource
CN111052083A (en) * 2017-08-16 2020-04-21 三星电子株式会社 Method and apparatus for managing scheduling of services during startup
US20200379804A1 (en) * 2019-06-01 2020-12-03 Apple Inc. Multi-level scheduling


Similar Documents

Publication Publication Date Title
KR101029414B1 (en) Apparatus and method provided for detecting processor state transition and machine accessible media and computing system
Xu et al. Adaptive task scheduling strategy based on dynamic workload adjustment for heterogeneous Hadoop clusters
JP5324934B2 (en) Information processing apparatus and information processing method
KR101953906B1 (en) Apparatus for scheduling task
US9973512B2 (en) Determining variable wait time in an asynchronous call-back system based on calculated average sub-queue wait time
KR101155202B1 (en) Method for managing power for multi-core processor, recorded medium for performing method for managing power for multi-core processor and multi-core processor system for performing the same
EP3278220B1 (en) Power aware scheduling and power manager
JP2013527948A (en) Method, system and computer program for dispatching tasks in a computer system
CN110795238B (en) Load calculation method and device, storage medium and electronic equipment
CN112925616A (en) Task allocation method and device, storage medium and electronic equipment
Han et al. Resource sharing in multicore mixed-criticality systems: Utilization bound and blocking overhead
RU2453901C2 (en) Hard-wired method to plan tasks (versions), system to plan tasks and machine-readable medium
CN112346836B (en) Preemption method, device, user equipment and storage medium for shared computing resources
US8862786B2 (en) Program execution with improved power efficiency
CN114461365A (en) Process scheduling processing method, device, equipment and storage medium
RU2450330C2 (en) Hardware-implemented method of executing programs
CN119065817A (en) Multi-process dynamic scheduling method and system under large and small core architecture CPU
CN117608850A (en) A multi-task computing resource allocation method and device for neural network processors
CN116755888A (en) High-performance computing cloud platform-oriented job scheduling device and method
CN116795503A (en) Task scheduling method, task scheduling device, graphics processor and electronic equipment
CN116755855A (en) Distributed container scheduling method based on Kubernetes cluster
CN115689855A (en) Scheduling method, scheduling device, chip and computer readable storage medium
Yazdanpanah et al. A comprehensive view of MapReduce aware scheduling algorithms in cloud environments
CN111444001A (en) Cloud platform task scheduling method and system
CN113448705B (en) Unbalanced job scheduling algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination