Disclosure of Invention
In order to overcome the above defects, the invention addresses the technical problem that, in conventional system scheduling, process scheduling on a CPU with a mixed large-core and small-core architecture is neither timely nor reasonable.
In a first aspect, the present invention proposes a multi-process dynamic scheduling method under a large and small core architecture CPU, comprising the steps of,
S1, monitoring the current CPU load of a system, judging whether the current CPU load is larger than a CPU load threshold value, and obtaining a third scheduling parameter;
S2, judging whether to execute dynamic scheduling based on the third scheduling parameter; when dynamic scheduling is suspended, returning to step S1 after waiting k times the preset monitoring interval, wherein k is greater than 1; when dynamic scheduling is executed, acquiring a first scheduling parameter and a second scheduling parameter;
S3, determining a dynamic scheduling mode according to the first scheduling parameter and the second scheduling parameter;
S4, performing dynamic scheduling on the processes according to the dynamic scheduling mode, and returning to step S1 after waiting for a preset monitoring interval;
Obtaining the first scheduling parameter comprises: obtaining a divided scheduling reference list, the scheduling reference list comprising a system core process list, a system high-priority process list, a user-defined list and a high-priority process blacklist; the category of a process within the scheduling reference list serves as the first scheduling parameter;
obtaining the second scheduling parameter comprises: updating an LRU linked list generated based on an LRU algorithm according to the current CPU occupancy rate, and calculating a scheduling score P_score of each process in the updated LRU linked list as the second scheduling parameter.
Further, when the current CPU load is larger than the CPU load threshold, the third scheduling parameter is set to 0, dynamic scheduling is suspended, and when the current CPU load is smaller than or equal to the CPU load threshold, the third scheduling parameter is set to 1, and dynamic scheduling is executed.
The system core process list is used for maintaining processes corresponding to core components of the operating system, and comprises a system graphic server process, a system audio server process, a system core process and a maintenance process thereof;
the system high-priority process list is used for maintaining a system relational component process, a system important auxiliary process and a system detection response process;
The user-defined list is used for maintaining a process created for the important application and a standard test program process;
The high-priority process blacklist is used to maintain processes that need to reside in the system but are not important.
Further, the system core process list and the system high-priority process list are provided by the operating system designer and built into the operating system, and the user has no modification authority;
the user-defined list and the high-priority process blacklist are entered line by line by the user into an empty configuration file newly created in a specific system directory during startup; during scheduling, the system detects the configuration file and converts it into the user-defined list and the high-priority process blacklist.
Further, updating the LRU linked list generated based on the LRU algorithm according to the current CPU occupancy rate comprises the steps of:
acquiring the N processes with the highest current CPU occupancy in the system, wherein N represents the length of the LRU linked list;
judging, in order of CPU occupancy from high to low, whether each of the highest-occupancy processes exists in the current LRU linked list; if so, incrementing that process's occurrence count i by 1 (i = i + 1); if not, traversing the current LRU linked list to find the process that does not belong to the N highest-occupancy processes and has the fewest occurrences;
judging whether such a least-occurrence process is unique; if so, replacing it with the corresponding highest-occupancy process and setting that process's occurrence count to 1; if not, replacing the least-occurrence process that entered the LRU linked list most recently with the corresponding highest-occupancy process, and setting that process's occurrence count to 1;
and rearranging all processes in the LRU linked list such that a process with more occurrences, or with an earlier entry into the list among equal occurrence counts, occupies a position with a smaller numerical value (i.e., ranks earlier) in the LRU linked list.
Further, the scheduling score of a process is P_score = (10 - P_ni/5 + (P_pr - 20)^(-1)) · log_(N-1)(2(N-1)) + log_(N-1)((N - P_lru) + N - 1) · P_cpu/99,
wherein P_ni denotes the NI value of the process in the LRU linked list, P_pr denotes its PR value, N denotes the length of the LRU linked list, P_lru denotes the numerical value of the process's position in the LRU linked list, and P_cpu denotes the process's CPU occupancy.
Further, the step S3 includes:
judging the first scheduling parameter;
For the process of which the first scheduling parameter is a system core process list, setting a scheduling mode as exclusive large-core scheduling, for the process of which the first scheduling parameter is a system high-priority process list, setting a scheduling mode as full-large-core scheduling, for the process of which the first scheduling parameter is a high-priority process blacklist, setting a scheduling mode as full-small-core scheduling, and for the process of which the first scheduling parameter is a user-defined list, judging the second scheduling parameter;
when the second scheduling parameters are all smaller than or equal to the lower scheduling score threshold, for the process of which the first scheduling parameter is the user-defined list, setting the scheduling mode as full-large-core scheduling, and setting the scheduling modes of all processes in the LRU linked list as large and small core mixed scheduling;
when the second scheduling parameter is larger than the lower scheduling score threshold and smaller than or equal to the upper scheduling score threshold, for the process of which the first scheduling parameter is the user-defined list, setting the scheduling mode as large and small core mixed scheduling, setting the scheduling mode of the processes in the LRU linked list whose scheduling score is larger than the lower scheduling score threshold as full-large-core scheduling, and setting the scheduling modes of the other processes in the LRU linked list as large and small core mixed scheduling;
when the second scheduling parameter is larger than the upper scheduling score threshold, for the process of which the first scheduling parameter is the user-defined list, setting the scheduling mode as large and small core mixed scheduling, setting the scheduling mode of the processes in the LRU linked list whose scheduling score is larger than the upper scheduling score threshold as exclusive large-core scheduling, and setting the scheduling modes of the other processes in the LRU linked list as large and small core mixed scheduling.
Further, exclusive large-core scheduling means that the process can be scheduled on all large cores of the CPU; full-large-core scheduling means that the process can only be scheduled on large cores other than the exclusive large cores; full-small-core scheduling means that the process can only be scheduled on small cores of the CPU; and large and small core mixed scheduling means that the process can be scheduled on large cores other than the exclusive large cores, or on small cores of the CPU.
Further, the method also comprises the steps of:
acquiring a user-defined monitoring interval input by a user;
judging whether the custom monitoring interval falls within the monitoring interval threshold; if so, taking the custom monitoring interval as the preset monitoring interval, and if not, taking the default monitoring interval as the preset monitoring interval.
In a second aspect, the invention provides a multi-process dynamic scheduling system under a large and small core architecture CPU, comprising a scheduling reference list layer, a process scheduling scoring algorithm layer, a system dynamic sensing layer and a large and small core dynamic scheduling layer which are sequentially connected by information. The system dynamic sensing layer comprises a system state sensing module and a condition checking module connected by information; the condition checking module is simultaneously connected with the scheduling reference list layer, the process scheduling scoring algorithm layer and the large and small core dynamic scheduling layer. The large and small core dynamic scheduling layer comprises a scheduling mode selection module and a persistence monitoring module sequentially connected by information, and the scheduling mode selection module is connected by information with the condition checking module of the system dynamic sensing layer.
The scheduling reference list layer is used for providing a divided scheduling reference list, the scheduling reference list comprising a system core process list, a system high-priority process list, a user-defined list and a high-priority process blacklist; the category of a process in the scheduling reference list is used as the first scheduling parameter;
The process scheduling scoring algorithm layer is used for updating an LRU linked list generated based on an LRU algorithm according to the current CPU occupancy rate, calculating the scheduling score P_score of each process in the updated LRU linked list, and taking the scheduling score P_score as the second scheduling parameter;
The system state sensing module is used for monitoring the current CPU load of the system, judging whether the current CPU load is larger than a CPU load threshold value or not, and obtaining a third scheduling parameter;
The condition checking module is used for acquiring, directly or indirectly, the third, first and second scheduling parameters from the system state sensing module, the scheduling reference list layer and the process scheduling scoring algorithm layer respectively, and for judging whether to execute dynamic scheduling based on the third scheduling parameter; when dynamic scheduling is suspended, it waits k times the preset monitoring interval (k > 1) and returns control to the system state sensing module; when dynamic scheduling is executed, it acquires the first and second scheduling parameters and sends them to the scheduling mode selection module;
the scheduling mode selection module is used for determining a dynamic scheduling mode according to the first scheduling parameter and the second scheduling parameter which are acquired from the condition checking module;
the persistence monitoring module is used for executing dynamic scheduling on the process according to the dynamic scheduling mode, and returning to the system state sensing module to execute dynamic scheduling again after a preset monitoring interval.
The working principle and beneficial effects of the invention are as follows:
In the technical scheme of the invention, processes in the system are detected in real time, the CPU load condition is used as the basis for judging whether to execute process scheduling, and the process scheduling score algorithm together with the process's scheduling reference list category is used as the basis for determining the dynamic scheduling mode, replacing the default scheduling strategy of the Linux operating system. Corresponding large cores and/or small cores are selected for different processes, so that high-priority and high-load processes are scheduled onto large cores with higher priority, the performance of key system processes is optimized, and the overall performance of the operating system is further improved.
Detailed Description
Some embodiments of the invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
Example 1
FIG. 3 is a flowchart illustrating main steps of a multi-process dynamic scheduling method under a large and small core architecture CPU according to the present invention. As shown in fig. 3, the multi-process dynamic scheduling method under the large and small core architecture CPU of the present embodiment mainly includes the following steps S1 to S4.
S1, monitoring the current CPU load of the system, and judging whether the current CPU load is larger than a CPU load threshold value or not to obtain a third scheduling parameter.
In one embodiment, the third scheduling parameter is set to 0 if the current CPU load is greater than the CPU load threshold and to 1 if the current CPU load is less than or equal to the CPU load threshold.
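As a concrete illustration, the load check of step S1 can be sketched as follows. This is a minimal sketch rather than the patented implementation: the embodiment does not specify the load metric or the threshold value, so the 1-minute load average normalized by CPU count and the parameterized threshold are assumptions.

```python
import os

def third_scheduling_param(load_threshold: float) -> int:
    """Return the third scheduling parameter: 0 (suspend dynamic
    scheduling) when the current CPU load exceeds the threshold,
    1 (execute dynamic scheduling) otherwise."""
    # Hypothetical load metric: 1-minute load average per online CPU.
    load1, _, _ = os.getloadavg()
    current_load = load1 / os.cpu_count()
    return 0 if current_load > load_threshold else 1
```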
S2, judging whether to execute dynamic scheduling or not based on the third scheduling parameter, if the third scheduling parameter is 0, suspending dynamic scheduling, and if the third scheduling parameter is 1, executing dynamic scheduling.
Generally, when the system is under excessive load and system resources are strained, the overhead of executing process scheduling aggravates the resource shortage; this conflicts with the goals of this embodiment, namely preventing the risk of system downtime and improving overall performance through reasonable large and small core scheduling. Therefore, whether to execute process scheduling must be determined by the third scheduling parameter.
When dynamic scheduling is suspended, the method returns to step S1 after waiting k times the preset monitoring interval, where k > 1.
In one embodiment, k is 3; the waiting time is thus extended to k times the preset monitoring interval, reducing the influence of process scheduling on the system load.
When the dynamic scheduling is executed, a first scheduling parameter and a second scheduling parameter are acquired.
In one embodiment, the method for obtaining the first scheduling parameter includes obtaining a divided scheduling reference list, wherein the scheduling reference list comprises a system core process list, a system high priority process list, a user-defined list and a high priority process blacklist, and the category of the process in the scheduling reference list is used as the first scheduling parameter.
The system core process list is provided by the operating system designer, built into the operating system, and cannot be modified by the user. It maintains the list of processes belonging to the system's core components, such as the system graphics server (Xorg, Wayland, etc.), the system audio server (PulseAudio, etc.), and the system core processes and their maintenance processes (these vary by system; e.g., ukui-session on the domestic Kylin operating system). As core processes of the system, they participate in scheduling decisions and are guaranteed the highest-priority large-core scheduling under any system condition.
The system high-priority process list is likewise provided by the operating system designer, built into the operating system, and cannot be modified by the user. It maintains the system's relational component processes other than the core processes, the system's important auxiliary processes, and the system detection and response processes (these vary by system; e.g., ukui-settings-daemon and lightdm on the domestic Kylin operating system). As key and important system processes, they participate in scheduling decisions and, under most system conditions, obtain high-priority large-core scheduling second only to the system core processes.
The user-defined list is defined by the user: an empty configuration file is newly created in a specific system directory during startup, and the user fills it in line by line, usually with processes created by the user's personally important applications and standard test program processes. The system detects this file during scheduling and converts it into the user-defined list to participate in score calculation, and the scheduling mode selection module selects a corresponding scheduling strategy to ensure high-priority large-core scheduling under normal conditions.
The high-priority process blacklist is likewise defined by the user: an empty configuration file is newly created in a specific system directory during startup and is usually left empty. When the user confirms that a process is unimportant but needs to reside in the system, the user enters it manually; processes in the high-priority process blacklist can only be scheduled onto the small cores of the CPU.
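The two user-maintained lists above are plain configuration files read at scheduling time. A minimal sketch of the conversion step, assuming one process name per line (the file format beyond "input line by line" is not specified in the embodiment; the handling of blank lines and '#' comments is an added convenience, not part of the original description):

```python
from pathlib import Path

def load_process_list(path: str) -> set[str]:
    """Parse a one-process-name-per-line configuration file into a set.
    Blank lines and '#' comments are ignored; a missing file yields an
    empty set, matching the empty file created at boot."""
    p = Path(path)
    if not p.exists():
        return set()
    names = set()
    for line in p.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            names.add(line)
    return names
```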
In one embodiment, the method for obtaining the second scheduling parameter is: updating the LRU linked list generated based on the LRU algorithm according to the current CPU occupancy rate, and calculating the scheduling score P_score of each process in the updated LRU linked list as the second scheduling parameter.
In this embodiment, the LRU linked list is generated and updated by the following steps.
Acquiring the N processes with the highest current CPU occupancy in the system, wherein N represents the length of the LRU linked list;
judging, in order of CPU occupancy from high to low, whether each of the highest-occupancy processes exists in the current LRU linked list; if so, incrementing that process's occurrence count i by 1 (i = i + 1); if not, traversing the current LRU linked list to find the process that does not belong to the N highest-occupancy processes and has the fewest occurrences within the calculation period (for brevity, the least-occurrence process);
judging whether the least-occurrence process is unique; if so, replacing it with the corresponding highest-occupancy process and setting that process's occurrence count to 1; if not, replacing the least-occurrence process that entered the LRU linked list most recently with the corresponding highest-occupancy process, and setting that process's occurrence count to 1;
and rearranging all processes in the LRU linked list such that a process with more occurrences, or with an earlier entry into the list among equal occurrence counts, occupies a smaller position value and thus ranks earlier in the LRU linked list.
In one embodiment, it is assumed that there are 5 processes in the current LRU linked list, with occurrence counts of process A-3, process B-10, process C-10, process D-5 and process E-6 respectively, ordered B, C, E, D, A in the LRU linked list. Starting from this initial LRU linked list, the second scheduling parameter is obtained as follows:
The five processes with the highest CPU occupancy in the system, obtained with the top command, are A, B, D, F and G. Then:
For process A, which appears in the current LRU linked list, the process is retained and its occurrence count is increased by 1 to 4;
for process B, which appears in the current LRU linked list, the process is retained and its occurrence count is increased by 1 to 11;
for process D, which appears in the current LRU linked list, the process is retained and its occurrence count is increased by 1 to 6;
for process F, which does not appear in the current LRU linked list, the listed process that does not belong to the N highest-occupancy processes and has the fewest occurrences is E, so E is replaced by F and F's occurrence count is set to 1;
for process G, which does not appear in the current LRU linked list, the listed process that does not belong to the N highest-occupancy processes and has the fewest occurrences is C, so C is replaced by G and G's occurrence count is set to 1.
The 5 processes in the updated LRU linked list are process A-4 times, process B-11 times, process G-1 time, process D-6 times and process F-1 time;
the more occurrences a process has, the earlier it ranks in the LRU linked list, and the later a process entered the LRU linked list, the later it ranks; the final order is therefore B, D, A, F, G.
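The update steps and the worked example above can be sketched as follows. Each entry is kept as [name, occurrence count, entry sequence number]; the tie-break that evicts the most recently added among equally rare processes, and the entry-order tie-break when ranking, follow the reading of the steps given above and are assumptions where the text is ambiguous.

```python
def update_lru(lru, top_n):
    """Update an LRU linked list in place.
    lru:   list of [name, count, entry_seq] entries, ordered by rank.
    top_n: the N process names with the highest CPU occupancy,
           ordered from high to low."""
    seq = max(e[2] for e in lru) + 1          # next entry sequence number
    for name in top_n:
        entry = next((e for e in lru if e[0] == name), None)
        if entry is not None:
            entry[1] += 1                     # i = i + 1
        else:
            # Candidates: listed processes not among the current top N.
            cand = [e for e in lru if e[0] not in top_n]
            least = min(c[1] for c in cand)
            least_cand = [c for c in cand if c[1] == least]
            # Unique least -> evict it; tie -> evict the most recently added.
            victim = max(least_cand, key=lambda c: c[2])
            victim[0], victim[1], victim[2] = name, 1, seq
            seq += 1
    # More occurrences rank first; ties broken by earlier entry.
    lru.sort(key=lambda e: (-e[1], e[2]))
    return lru
```

Run against the worked example (initial list B-10, C-10, E-6, D-5, A-3 and top processes A, B, D, F, G), this reproduces the final order B, D, A, F, G with counts 11, 6, 4, 1, 1.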
In this embodiment, the scheduling score of a process is P_score = (10 - P_ni/5 + (P_pr - 20)^(-1)) · log_(N-1)(2(N-1)) + log_(N-1)((N - P_lru) + N - 1) · P_cpu/99.
Wherein P_ni denotes the NI value of the process in the LRU linked list, P_pr denotes its PR value, N denotes the length of the LRU linked list, P_lru denotes the numerical value of the process's position in the LRU linked list, and P_cpu denotes the process's CPU occupancy.
The NI value of a process, i.e., its nice value, represents the static priority of the process, where nice denotes a correction to the process priority; it is initially generated by the system by default. Each process has a nice value in the range -20 to 19, with a default of 0; the lower the value, the higher the scheduling priority.
The PR value of a process is its priority value, indicating the real-time priority of the process. The real-time priority ranges from 0 to 99 and is calculated in real time by the operating system. The greater the PR value, the higher the priority; the PR value in the system is typically 20.
The CPU occupancy rate of a process refers to the proportion of CPU resources occupied by a process in a certain time. It is typically expressed in percent. If the CPU occupancy of a process is high, it may mean that it is resource intensive or is performing complex computations. Conversely, if the CPU occupancy is low, it may indicate that the process does not require too much computing resources, or that it is waiting for other resources (e.g., disk I/O).
The position of a process in the LRU linked list, that is, the latest LRU occupancy level of the process, is a scheduling indicator provided in this embodiment; it refers to the process's position after the list has been updated according to the N processes with the highest CPU occupancy. The LRU algorithm, i.e., the Least Recently Used algorithm, is widely used in cache mechanisms: when the space used by a cache reaches its upper limit, part of the existing data must be evicted to keep the cache available, and the choice of data to evict is made by the LRU algorithm. Here, the LRU algorithm maintains an LRU linked list of length N (i.e., holding N processes) and stores the corresponding occurrence count of each process.
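Under the reconstruction of the scoring formula given above, the computation can be sketched as follows. The grouping of terms, in particular the reciprocal of (P_pr - 20), is a best-effort reading of the published formula; that term is undefined when P_pr = 20, so the sketch simply assumes P_pr differs from 20.

```python
import math

def p_score(p_ni: int, p_pr: int, p_lru: int, p_cpu: float, n: int) -> float:
    """Scheduling score as reconstructed from the published formula.
    p_ni: nice value; p_pr: PR value (assumed != 20 here); p_lru:
    position in the LRU list (1 = head); p_cpu: CPU occupancy in
    percent; n: length of the LRU list."""
    max_pos_factor = math.log(2 * (n - 1), n - 1)        # log_(N-1)(2(N-1))
    static_part = (10 - p_ni / 5 + 1 / (p_pr - 20)) * max_pos_factor
    # Position factor ranges from 1 (tail, P_lru = N) up to max_pos_factor
    # (head, P_lru = 1), weighting the CPU occupancy term.
    dynamic_part = math.log((n - p_lru) + n - 1, n - 1) * p_cpu / 99
    return static_part + dynamic_part
```

Note how the position term is bounded: at the head of the list its logarithm equals log_(N-1)(2(N-1)), the same factor that scales the static part, and at the tail it equals 1.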
And S3, determining a dynamic scheduling mode according to the first scheduling parameter and the second scheduling parameter.
In one embodiment, the method for determining the dynamic scheduling mode according to the first scheduling parameter and the second scheduling parameter specifically includes:
judging a first scheduling parameter;
For the process of which the first scheduling parameter is a system core process list, the scheduling mode is set as exclusive large-core scheduling, for the process of which the first scheduling parameter is a system high-priority process list, the scheduling mode is set as full-large-core scheduling, for the process of which the first scheduling parameter is a high-priority process blacklist, the scheduling mode is set as full-small-core scheduling, and for the process of which the first scheduling parameter is a user-defined list, the second scheduling parameter is judged;
when the second scheduling parameters are all smaller than or equal to the lower scheduling score threshold, for the process of which the first scheduling parameter is the user-defined list, setting the scheduling mode as full-large-core scheduling, and setting the scheduling modes of all processes in the LRU linked list as large and small core mixed scheduling;
when the second scheduling parameter is larger than the lower scheduling score threshold and smaller than or equal to the upper scheduling score threshold, for the process of which the first scheduling parameter is the user-defined list, setting the scheduling mode as large and small core mixed scheduling, setting the scheduling mode of the processes in the LRU linked list whose scheduling score is larger than the lower scheduling score threshold as full-large-core scheduling, and setting the scheduling modes of the other processes in the LRU linked list as large and small core mixed scheduling;
when the second scheduling parameter is larger than the upper scheduling score threshold, for the process of which the first scheduling parameter is the user-defined list, setting the scheduling mode as large and small core mixed scheduling, setting the scheduling mode of the processes in the LRU linked list whose scheduling score is larger than the upper scheduling score threshold as exclusive large-core scheduling, and setting the scheduling modes of the other processes in the LRU linked list as large and small core mixed scheduling.
In the actual code implementation, the invention first determines the scheduling mode for processes in the LRU linked list according to the LRU scheduling scheme, and then determines the scheduling mode for processes in the system core process list, the system high-priority process list and the high-priority process blacklist according to the scheduling reference list scheme. In this way, for processes that appear both in the LRU linked list and in one of those three lists, the mode determined by the reference list scheme overrides the mode determined by the LRU scheme, so that the reference list scheme is still applied when such processes appear in the LRU linked list and the problem of contradictory scheduling modes is avoided.
That is, in the overall scheduling scheme of the invention, the LRU-based scheduling scheme (i.e., the dynamic scheduling distinguished by the second scheduling parameter) has a weaker influence than the scheduling reference list scheme (i.e., the dynamic scheduling distinguished by the first scheduling parameter).
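The two-pass precedence described above (LRU-based modes assigned first, reference-list modes overriding them) can be sketched as follows. The list category names, mode names, and threshold parameters are illustrative placeholders, not identifiers from the actual implementation.

```python
EXCLUSIVE_BIG = "exclusive-big"   # all large cores, incl. the exclusive ones
ALL_BIG = "all-big"               # large cores except the exclusive ones
ALL_LITTLE = "all-little"         # small cores only
MIXED = "mixed"                   # any core except the exclusive large cores

def decide_modes(lru_scores, ref_lists, low, high):
    """lru_scores: {pid: P_score} for the LRU list; ref_lists:
    {pid: category} for the scheduling reference lists.
    Returns {pid: mode}, with reference-list modes taking precedence."""
    modes = {}
    max_score = max(lru_scores.values())
    # Pass 1: LRU-based modes (second scheduling parameter).
    for pid, s in lru_scores.items():
        if max_score <= low:
            modes[pid] = MIXED
        elif max_score <= high:
            modes[pid] = ALL_BIG if s > low else MIXED
        else:
            modes[pid] = EXCLUSIVE_BIG if s > high else MIXED
    # Pass 2: reference-list modes (first scheduling parameter) override.
    fixed = {"core": EXCLUSIVE_BIG, "high": ALL_BIG, "blacklist": ALL_LITTLE}
    for pid, category in ref_lists.items():
        if category in fixed:
            modes[pid] = fixed[category]
        elif category == "user":
            modes[pid] = ALL_BIG if max_score <= low else MIXED
    return modes
```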
Exclusive large-core scheduling means that the process can be scheduled on all large cores of the CPU; full-large-core scheduling means that the process can only be scheduled on large cores other than the exclusive large cores; full-small-core scheduling means that the process can only be scheduled on small cores of the CPU; and large and small core mixed scheduling means that the process can be scheduled on large cores other than the exclusive large cores, or on small cores of the CPU.
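On Linux, the four scheduling modes map naturally onto CPU affinity masks. A sketch under an assumed 8-CPU layout (which CPUs are small cores, shared large cores, or exclusive large cores is platform-specific; the numbering here is chosen purely for illustration):

```python
import os

# Hypothetical core layout: CPUs 0-3 little, 4-6 shared big, 7 exclusive big.
LITTLE = {0, 1, 2, 3}
BIG_SHARED = {4, 5, 6}
BIG_EXCLUSIVE = {7}

AFFINITY = {
    "exclusive-big": BIG_SHARED | BIG_EXCLUSIVE,  # all large cores
    "all-big": BIG_SHARED,                        # large cores minus exclusive
    "all-little": LITTLE,                         # small cores only
    "mixed": BIG_SHARED | LITTLE,                 # anything but exclusive big
}

def apply_mode(pid: int, mode: str) -> None:
    """Pin a process to the CPU set implied by its scheduling mode."""
    os.sched_setaffinity(pid, AFFINITY[mode])
```

`os.sched_setaffinity` wraps the Linux `sched_setaffinity(2)` system call; applying it requires that the target CPUs actually exist and that the caller has permission over the target process.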
And S4, performing dynamic scheduling on the process according to the dynamic scheduling mode, and returning to the step S1 after waiting for a preset monitoring interval.
In this embodiment, the preset monitoring interval is a default monitoring interval t_s and is also user-definable; when the user customizes it, the system's default configuration is modified to the custom monitoring interval t_c. To ensure that the user-defined value meets the requirement, the custom monitoring interval entered by the user is checked, specifically as follows.
Acquiring a user-defined monitoring interval input by a user;
judging whether the custom monitoring interval falls within the monitoring interval threshold; if so, taking the custom monitoring interval as the preset monitoring interval, and if not, taking the default monitoring interval as the preset monitoring interval.
The monitoring interval threshold is an interval with preset upper and lower limits.
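The interval check can be sketched as follows; the default interval and the threshold bounds are illustrative values, since the embodiment leaves them open.

```python
def choose_interval(custom, default=1.0, lower=0.1, upper=60.0):
    """Use the user-supplied monitoring interval only if it falls inside
    the preset [lower, upper] window; otherwise fall back to the default.
    All values are in seconds and are assumed bounds, not from the text."""
    if custom is not None and lower <= custom <= upper:
        return custom
    return default
```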
Based on the steps S1-S4, processes in the system are detected in real time, the CPU load condition is used as a basis for judging whether to execute process scheduling, a scheduling score algorithm of the processes and scheduling reference list types of the processes are used as a basis for determining a dynamic scheduling mode, and a default scheduling strategy of a Linux operating system is replaced. The corresponding large core or small core is selected for different processes to schedule, so that the priority of scheduling the high-priority process and the high-load process on the large core is improved, the performance of the key process of the system is optimized, and the overall performance of the operating system is further improved.
It should be noted that, although the foregoing embodiments describe the steps in a specific order, it will be understood by those skilled in the art that, in order to achieve the effects of the present invention, the steps are not necessarily performed in such an order, and may be performed simultaneously (in parallel) or in other orders, and these variations are within the scope of the present invention.
Example 2
FIG. 4 is a schematic diagram of the framework of a multi-process dynamic scheduling system under a large and small core architecture CPU according to the present invention. As shown in FIG. 4, the multi-process dynamic scheduling system under a large and small core architecture CPU in this embodiment includes a scheduling reference list layer, a process scheduling scoring algorithm layer, a system dynamic sensing layer and a large and small core dynamic scheduling layer that are sequentially connected by information.
1. Scheduling reference list layer
The dispatch reference list layer maintains four lists of a system core process list, a system high priority process list, a user-defined list and a high priority process blacklist.
The system core process list is provided by the operating system designer, built into the operating system, and cannot be modified by the user. It maintains the list of processes belonging to the system's core components, such as the system graphics server (Xorg, Wayland, etc.), the system audio server (PulseAudio, etc.), and the system core processes and their maintenance processes (these vary by system; e.g., ukui-session on the domestic Kylin operating system). As core processes of the system, they participate in scheduling decisions and are guaranteed the highest-priority large-core scheduling under any system condition.
The system high-priority process list is likewise provided by the operating system designer and built into the operating system; the user has no modification authority. It maintains the key system components other than the system core processes, the important auxiliary processes of the system, and the system detection and response processes (these differ from system to system; on the domestic Kylin operating system they are ukui-settings-daemon and lightdm). These processes participate in scheduling decisions as key and important processes of the system and, under most system conditions, obtain high-priority large-core scheduling second only to the system core processes.
The user-defined list is defined by the user. During startup, an empty configuration file is created for the user in a specific directory of the system; the user fills it in line by line, typically with processes created by the user's personally important applications and standard test program processes. During scheduling, the system reads this file and converts it into the user-defined list, which participates in score calculation; the scheduling mode selection module then selects a corresponding scheduling strategy to ensure that these processes obtain high-priority large-core scheduling under normal conditions.
The high-priority process blacklist is defined by the user. During startup, an empty configuration file is created for the user in a specific directory of the system. This list is usually left empty; when the user confirms that a certain process is unimportant but needs to reside in the system, the user enters it manually. Processes in the high-priority process blacklist can only be scheduled onto the small cores of the CPU.
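The four lists described above can be modelled as a small lookup sketch. This is a minimal illustration, assuming in-memory sets and a blacklist-first precedence order; the process names and category labels are illustrative and not taken from the actual implementation.

```python
# Hypothetical sketch of the scheduling reference list layer: each list is a
# set of process names, and a process is mapped to its list category, which
# serves as the first scheduling parameter.

CORE = {"Xorg", "pulseaudio", "ukui-session"}          # system core process list
HIGH_PRIORITY = {"lightdm", "ukui-settings-daemon"}    # system high-priority list
USER_DEFINED = set()                                   # read from the user's config file
BLACKLIST = set()                                      # high-priority process blacklist

def first_scheduling_parameter(name):
    """Return the reference-list category of a process (assumed precedence)."""
    if name in BLACKLIST:
        return "blacklist"
    if name in CORE:
        return "core"
    if name in HIGH_PRIORITY:
        return "high_priority"
    if name in USER_DEFINED:
        return "user_defined"
    return None  # not in any reference list; the default policy applies
```

A process outside all four lists returns no category and is left to the system's default scheduling.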
2. Process scheduling scoring algorithm layer
The process scheduling scoring algorithm layer updates the LRU linked list generated based on the LRU algorithm according to the current CPU occupancy rate, and calculates the scheduling score P_score of each process in the updated LRU linked list.
The LRU algorithm first queries the N processes with the highest CPU occupancy rate in the system and compares each of them against the LRU linked list. If a process is already in the linked list, its occurrence count is increased by 1; if it is not, the process with the fewest occurrences that entered the linked list most recently is selected for replacement, and the occurrence count of the new process is set to 1.
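The update rule above can be sketched as follows. This is an assumption-laden sketch: the list is modelled as a dict of process id to (occurrence count, insertion order), and the eviction tie-break (among least-occurrence entries, the one that entered most recently) follows the wording of the text.

```python
# Minimal sketch of the LRU-list update: bump the count of processes already
# tracked, otherwise evict the entry with the fewest occurrences (most
# recently inserted among ties, per the description) and insert with count 1.

N = 10  # LRU linked-list length (illustrative)

def update_lru(lru, top_processes, seq):
    """lru maps pid -> [count, insert_seq]; top_processes are the N
    highest-CPU pids; seq is a one-element list used as an insertion counter."""
    for pid in top_processes:
        if pid in lru:
            lru[pid][0] += 1                            # seen again: count + 1
        else:
            if len(lru) >= N:                           # full: evict per rule above
                victim = min(lru, key=lambda p: (lru[p][0], -lru[p][1]))
                del lru[victim]
            seq[0] += 1
            lru[pid] = [1, seq[0]]                      # new entry starts at count 1
    return lru
```

Feeding in ten processes fills the list; a later sample containing a known pid raises its count, while an unknown pid displaces the weakest entry.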
The scheduling score of a process is calculated from four dimensions: the NI value of the process, the PR value of the process, the CPU occupancy rate of the process, and the position of the process in the LRU linked list. Combining these four dimensions, this embodiment designs the scheduling score function

P_score = (10^(-P_ni/5 + (P_pr - 20)) - 1) · log_(N-1)(2(N-1)) + log_(N-1)((N - P_lru) + N - 1) · P_cpu · 99
In the score function, P_score represents the scheduling score of the process, P_ni represents the NI value of the process, P_pr represents the PR value of the process, N represents the length of the LRU linked list, P_lru represents the value corresponding to the position of the process in the LRU linked list, and P_cpu represents the CPU occupancy rate of the process, typically expressed as a percentage.
In the scoring function, the four dimensions have different effects on the process score.
P_ni and P_pr are the primary priority factors of the score and are calculated with an exponential function. At the system defaults, when P_ni equals 0 and P_pr equals 20, the term (10^(-P_ni/5 + (P_pr - 20)) - 1) · log_(N-1)(2(N-1)) is 0, meaning its influence on scheduling is 0. When P_ni decreases or P_pr increases, the exponential function grows rapidly, so changes in the PR and NI values influence scheduling far more than the LRU list does; the priority of a system process is thus the primary consideration in large and small core scheduling.
P_cpu is the second consideration of the scoring function. In this function, P_cpu is multiplied by 99 to convert the occupancy rate into a corresponding score value. In general, the higher the CPU occupancy of a process, the greater its demand for large-core resources, and the sooner its execution finishes, the more the system's operating pressure is relieved; CPU occupancy is therefore the second consideration in the large and small core scheduling score.
The last influencing factor is P_lru, which represents the position of the process in the LRU linked list. The list is ordered by occurrence count: the more occurrences, the smaller the value corresponding to the position in the linked list. The smaller the value, the longer the process has been running as a high-load process in the operating system, and the more it requires timely scheduling to completion.
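The scoring function can be sketched directly. Two points are assumptions of this sketch rather than confirmed constants: the exponent is taken as -P_ni/5 + (P_pr - 20), consistent with the term vanishing at the defaults NI = 0, PR = 20, and P_cpu is treated as a fraction in 0..1 before the factor of 99 is applied.

```python
import math

# Sketch of the scheduling-score function: an exponential priority term plus
# an LRU-position-weighted CPU-occupancy term, both with logarithm base N - 1.

def p_score(p_ni, p_pr, p_cpu, p_lru, n=10):
    """Scheduling score of a process (p_cpu assumed to be a 0..1 fraction)."""
    base = n - 1
    priority = (10 ** (-p_ni / 5 + (p_pr - 20)) - 1) * math.log(2 * base, base)
    load = math.log((n - p_lru) + base, base) * p_cpu * 99
    return priority + load
```

At the system defaults (NI = 0, PR = 20) the priority term is exactly zero, so only the CPU/LRU term contributes; lowering NI makes the priority term dominate quickly, matching the description above.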
3. System dynamic sensing layer
Generally, when the system is under excessive load and system resources are too scarce, the overhead of executing process scheduling aggravates the resource shortage, which conflicts with the aims of this embodiment, namely preventing the risk of system downtime and improving overall system performance through reasonable large and small core scheduling. The main task of the system dynamic sensing layer is therefore to sense whether the current system state is suitable for executing dynamic scheduling.
The system dynamic sensing layer comprises a system state sensing module and a condition checking module connected by information flow; the condition checking module is simultaneously connected with the scheduling reference list layer, the process scheduling scoring algorithm layer and the large and small core dynamic scheduling layer.
The system state sensing module polls and monitors the current CPU state of the system at a preset monitoring interval, generates the third scheduling parameter and transmits it to the condition checking module. The condition checking module obtains, directly or indirectly, the third, first and second scheduling parameters from the system state sensing module, the scheduling reference list layer and the process scheduling scoring algorithm layer respectively, and judges whether to execute dynamic scheduling based on the third scheduling parameter. When dynamic scheduling is suspended, the module waits k times the preset monitoring interval (k greater than 1) before returning to the system state sensing module; this slows the polling frequency and reduces the influence of the scheduling scripts on the system load, and once the system CPU load falls back below the specific threshold, operation of the scheduling mode selection module resumes through this delayed-start mechanism. When dynamic scheduling is executed, the first and second scheduling parameters are acquired and sent to the scheduling mode selection module.
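The sensing and condition-check flow can be sketched as a polling loop. The helper callables `read_cpu_load` and `dispatch` stand in for the real modules and are assumptions of this sketch, as are the default threshold and backoff factor.

```python
import time

# Illustrative sensing loop: poll the CPU load each interval; if it exceeds the
# threshold, set the third scheduling parameter to 0 and back off for k
# intervals; otherwise hand control to the scheduling step.

def sensing_loop(read_cpu_load, dispatch, threshold=0.85, interval=5, k=3, cycles=1):
    for _ in range(cycles):
        load = read_cpu_load()
        third_param = 0 if load > threshold else 1   # third scheduling parameter
        if third_param == 0:
            time.sleep(interval * k)                 # suspend: slow the polling
            continue
        dispatch()                                   # fetch 1st/2nd params and schedule
        time.sleep(interval)
```

With a stubbed load reader, a load above the threshold produces no dispatch calls, while a load below it dispatches once per cycle.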
4. Large and small core dynamic scheduling layer
The large and small core dynamic scheduling layer comprises a scheduling mode selection module, a persistent monitoring module and a user input checking module connected in sequence by information flow; the scheduling mode selection module is connected by information flow with the condition checking module of the system dynamic sensing layer.
The scheduling mode selection module determines the dynamic scheduling mode according to the first and second scheduling parameters acquired from the condition checking module. When the first scheduling parameter corresponds to the system core process list, the scheduling mode of the process is dedicated large-core scheduling; when it corresponds to the system high-priority process list, the scheduling mode is full large-core scheduling; and when it corresponds to the high-priority process blacklist, the scheduling mode is full small-core scheduling.
When the first scheduling parameter corresponds to the user-defined list, the second scheduling parameter must also be considered. If all scheduling scores are smaller than or equal to a specific value m (m being the lower score threshold), the scheduling mode of all processes in the user-defined list is full large-core scheduling. If all scheduling scores are smaller than or equal to another specific value n (n being the upper score threshold, n > m), the high-load processes of the system must also be scheduled preferentially: the scheduling mode of the processes in the user-defined list is full large-core scheduling, the scheduling mode of the processes in the LRU linked list with a scheduling score greater than m is full large-core scheduling, and the scheduling mode of the other processes in the LRU linked list is large and small core mixed scheduling. If any scheduling score is greater than n, the high-load processes in the LRU linked list need to obtain dedicated system scheduling resources: the processes in the LRU linked list with a scheduling score greater than n adopt the dedicated large-core scheduling mode, and the other processes in the LRU linked list adopt the large and small core mixed scheduling mode.
These four scheduling modes are matched to different processes. Full small-core scheduling means the process can only be scheduled onto the small cores of the CPU; large and small core mixed scheduling means the process can be scheduled onto the small cores and onto the large cores other than the dedicated large cores; full large-core scheduling means the process can only be scheduled onto the large cores other than the dedicated large cores; and dedicated large-core scheduling means the process can be scheduled onto all large cores, including the dedicated large cores.
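One possible reading of the selection rules above can be sketched as a decision function. The category labels, the per-process score versus full score set distinction, and the default thresholds are assumptions of this sketch, not the authoritative implementation.

```python
# Hypothetical mode-selection sketch: list categories map directly to a mode;
# LRU-only processes are placed by comparing their score against the lower
# threshold m and upper threshold n (n > m).

def select_mode(category, score=None, all_scores=(), m=50, n=70):
    """Return the scheduling mode for one process."""
    if category == "core":
        return "dedicated_big"
    if category == "high_priority":
        return "all_big"
    if category == "blacklist":
        return "all_small"
    if category == "user_defined":
        return "all_big"                      # user-list processes stay on big cores
    # Process appears only in the LRU linked list:
    if max(all_scores, default=0) <= m:
        return "mixed"                        # light load everywhere
    if score is not None and score > n:
        return "dedicated_big"                # very high load: dedicated big cores
    if score is not None and score > m:
        return "all_big"                      # high load: shared big cores
    return "mixed"
```

The reference-list category always wins; only unlisted, LRU-tracked processes fall through to the score comparison.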
The persistent monitoring module executes dynamic scheduling on processes according to the dynamic scheduling mode; after completing one round of scheduling, it re-monitors the process state of the system after the preset monitoring interval and re-schedules as needed.
In this embodiment, the preset monitoring interval defaults to a monitoring interval t_s but is also user-definable; when customized by the user, the default configuration of the system is modified to the custom monitoring interval t_c. The user input checking module acquires the user-input custom monitoring interval and judges whether it conforms to the monitoring interval threshold: if so, the custom monitoring interval is taken as the preset monitoring interval; if not, the default monitoring interval is used. The user input checking module thus verifies the validity and reasonableness of the user-input custom monitoring interval. Specifically, the validity and reasonableness of the first parameter t_c entered by the user through a specific shell command are checked; integer values from t_min to t_max (i.e., the monitoring interval threshold) are supported, and otherwise the system default monitoring interval t_s is adopted.
The multi-process dynamic scheduling system under a large and small core architecture CPU described above executes the multi-process dynamic scheduling method under a large and small core architecture CPU of embodiment 1. The technical principles, the technical problems solved and the technical effects produced are similar, and those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working process and related description of the system may refer to the description in the method embodiment and are not repeated here.
Example 3
Based on the technical solutions of the foregoing embodiments, this embodiment proposes a specific implementation manner.
This embodiment is based on the Rockchip RK3588 platform and the Galaxy Kylin desktop operating system (national defense edition) V10, and implements the multi-process dynamic scheduling technology under a CPU large and small core architecture on the domestic Kylin operating system. The RK3588 is a recent-generation industrial processor with a large and small core architecture of four ARM Cortex-A76 cores at 2.4 GHz plus four ARM Cortex-A55 cores at 1.8 GHz. The Galaxy Kylin desktop operating system is based on the Linux kernel, aims to provide a safe, stable and easy-to-use operating system, and is mainly oriented toward informatization construction in industries such as finance and electric power.
1. Scheduling reference list layer implementation
The system core process list and the system high-priority process list are embedded in the code and cannot be modified by the user. The system core process list comprises the graphics service process Xorg run by Kylin, the audio service process PulseAudio and the core system process ukui-session; the system high-priority process list comprises key system processes such as lightdm and ukui-settings-daemon. The file node of the user-defined list is located at /etc/processList.cnf and is writable by the user, and the file node of the high-priority process blacklist is located at /etc/processBlackList.cnf and is writable by the user. After the corresponding file lists are read, the four lists are integrated and transmitted to the condition checking module as the first scheduling parameter.
2. Process large and small core scheduling scoring algorithm layer implementation
In this example, N is 10, that is, the system maintains an LRU linked list of length 10 and obtains the 10 processes with the highest CPU occupancy rate through the top command as the latest input of the LRU linked list. Correspondingly, for the processes in the LRU linked list, P_ni, P_cpu and P_pr are recorded respectively, and the corresponding process large and small core scheduling score is calculated through

P_score = (10^(-P_ni/5 + (P_pr - 20)) - 1) · log_9(18) + log_9(19 - P_lru) · P_cpu · 99

The processes in the LRU linked list and their corresponding scheduling scores are transmitted to the condition checking module as the second scheduling parameter.
3. System dynamic sensing layer implementation
By polling with the top command at the preset monitoring interval, the system state sensing module obtains the current CPU load. If the current CPU load is greater than the CPU load threshold, the third scheduling parameter is set to 0; if it is smaller than or equal to the CPU load threshold, the third scheduling parameter is set to 1. The third scheduling parameter is then transmitted to the condition checking module. Preferably, the CPU load threshold in this example is 85%.
The condition checking module receives the third scheduling parameter transmitted by the system state sensing module. When the third scheduling parameter is 0, dynamic scheduling is suspended, that is, scheduling in this cycle is skipped directly and the preset monitoring interval is changed to k times the original; preferably, k in this example is 3. When the third scheduling parameter is 1, the condition checking module acquires the first and second scheduling parameters from the scheduling reference list layer and the process scheduling scoring algorithm layer respectively, and transmits them to the scheduling mode selection module.
4. Large and small core dynamic scheduling layer implementation
The CPU large and small cores of the Rockchip RK3588 are distributed as CPU0-CPU3 for the small cores and CPU4-CPU7 for the large cores. Preferably, the four scheduling modes of the scheduling mode selection module in this embodiment are grouped as follows: the CPU cores corresponding to full small-core scheduling are CPU0-CPU3, the CPU cores corresponding to full large-core scheduling are CPU4-CPU5, the CPU cores corresponding to dedicated large-core scheduling are CPU4-CPU7, and the CPU cores in the large and small core mixed scheduling mode are CPU0-CPU5. Also, based on the LRU linked list length N being 10 in this example, the process score thresholds are preferably taken as n being 70 and m being 50.
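The core groupings for this example can be captured as affinity sets and applied with the standard Linux affinity call. The mode labels are illustrative; the implied dedicated large cores (CPU6 and CPU7, i.e. the large cores outside the full large-core group) are an inference from the grouping above, and `os.sched_setaffinity` is Linux-only.

```python
import os

# Illustrative RK3588 core groups: CPU0-CPU3 are small cores, CPU4-CPU7 large
# cores, with CPU6/CPU7 reserved as the dedicated large cores in this example.

CORE_GROUPS = {
    "all_small":     {0, 1, 2, 3},          # full small-core scheduling
    "mixed":         {0, 1, 2, 3, 4, 5},    # large and small core mixed scheduling
    "all_big":       {4, 5},                # full large-core scheduling
    "dedicated_big": {4, 5, 6, 7},          # dedicated large-core scheduling
}

def apply_mode(pid, mode):
    """Pin a process to the CPU set of its scheduling mode (Linux only)."""
    os.sched_setaffinity(pid, CORE_GROUPS[mode])
```

Note that the full large-core group is a strict subset of the dedicated large-core group, so dedicated large-core processes can always run wherever full large-core processes can.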
For the persistent monitoring module, preferably, the default monitoring interval t_s in this example is 5 seconds, and the allowable range of the custom monitoring interval t_c is t_min = 3 seconds to t_max = 20 seconds.
The user input checking module is implemented by exposing the initialization script in the service startup process to the user's shell start command. The user's custom monitoring interval t_c is obtained by capturing the first parameter of the command; if the user runs the command with no parameter, or the input custom monitoring interval t_c is illegal (i.e., t_c greater than 20 seconds or t_c smaller than 3 seconds), the preset monitoring interval is 5 seconds; if the user input is legal, the preset monitoring interval is the custom monitoring interval t_c.
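The input check above can be sketched in a few lines, using this example's concrete bounds (3 to 20 seconds, default 5); the function name and the None-for-missing-argument convention are assumptions of the sketch.

```python
# Sketch of the custom-monitoring-interval check: accept an integer number of
# seconds within [T_MIN, T_MAX], otherwise fall back to the default.

T_MIN, T_MAX, T_DEFAULT = 3, 20, 5

def monitoring_interval(arg):
    """arg is the first command-line parameter, or None when absent."""
    try:
        t_c = int(arg)
    except (TypeError, ValueError):
        return T_DEFAULT          # missing or non-integer input: use the default
    return t_c if T_MIN <= t_c <= T_MAX else T_DEFAULT
```

Out-of-range, missing and malformed inputs all fall back to the 5-second default, matching the behaviour described above.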
Based on the above scheme, since the large-core frequency of the CPU is higher than the small-core frequency, large-core performance is generally better than small-core performance. By raising the priority with which high-priority and high-load processes are dispatched to the large cores, the key system processes and high-load processes are given appropriate scheduling modes, the performance of the key system processes is improved, the system's inclination of CPU resources toward high-load processes is increased, and the overall performance of the operating system under normal running conditions is improved.
GLMark2 is a standard test tool for testing the graphics performance provided by the system hardware. Taking GLMark2 as an example, fig. 5 is a schematic diagram of a GLMark2 run in embodiment 3 of the present invention without multi-process dynamic scheduling deployed, where the final score of the Galaxy Kylin desktop operating system is 936 points as shown in fig. 5, and fig. 6 is a schematic diagram of a GLMark2 run in embodiment 3 of the present invention with multi-process dynamic scheduling deployed, where the final score of the Galaxy Kylin desktop operating system is 1065 points as shown in fig. 6. It can be seen that by deploying the multi-process dynamic scheduling technology under a CPU large and small core architecture, the graphics performance of the operating system is improved by about 14%.
With GLMark2 as a standard test tool, the overall graphics performance of the system can be reflected to a certain extent, and it can be considered that by deploying the invention, compared with the same set of software and hardware without this function, the overall graphics performance of the system is improved by about 14%. Aiming at the pain point of large and small core scheduling in the market, the system can be dynamically sensed, dynamically detected and dynamically scheduled; according to the load condition of each process and its importance in the system, and with reference to the user's process requirements, an appropriate scheduling mode is selected, improving the smoothness of the overall operating system and the performance of the operating system.
Thus far, the technical solution of the present invention has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will fall within the scope of the present invention.