CN110704187B - System resource adjusting method and device and readable storage medium - Google Patents
System resource adjusting method and device and readable storage medium
- Publication number
- CN110704187B (application CN201910916214.6A / CN201910916214A)
- Authority
- CN
- China
- Prior art keywords
- resource
- thread
- optimized
- preset
- priority
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources to service a request
- G06F9/5027—Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5011—Allocation of resources to service a request, the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources to service a request, the resource being the memory
- G06F9/5022—Mechanisms to release resources
- G06F9/52—Program synchronisation; Mutual exclusion, e.g. by means of semaphores
- G06F9/526—Mutual exclusion algorithms
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5021—Priority
Abstract
The invention discloses a system resource adjusting method, a system resource adjusting device, and a storage medium, wherein the system resource adjusting method comprises the following steps: S1: acquiring a resource to be optimized corresponding to a current system resource; S2: adjusting the resource to be optimized according to a preset strategy. The priority of the resource to be optimized, on which the current system resource depends, is raised so that the resource to be optimized runs faster, the terminal is prevented from becoming stuck, and the fluency of application running is improved.
Description
Technical Field
The present invention relates to the field of system resources, and in particular, to a method and apparatus for adjusting system resources, and a readable storage medium.
Background
In the prior art, a foreground application of a terminal often depends on data returned by a background application in order to run, and optimizing the resource configuration of the operating system carried by the terminal has always been a problem to be solved during terminal operation. Some prior solutions compare resource consumption information with available computing-resource information and determine the task whose comparison result meets a preset task-scheduling condition as a target task, thereby optimizing resource utilization during task scheduling and improving the processing performance of a data warehouse. Others start timing from the moment a target application is switched to the background; if, within a first preset duration after timing starts, the target application is not relied upon by the foreground application, resource-limitation processing is applied to the target application, which prevents limiting the target application from affecting the foreground application and improves the flexibility with which the electronic device handles applications. In other patent documents, the CPU frequency-modulation mode of the terminal is adjusted using an acquired process group and a prestored frequency-modulation mode list, optimizing the existing frequency-modulation adjustment technique, reducing the power consumption of the mobile terminal and improving its performance. In still other patent documents, an SDK is embedded in a target application so that the application can report its own running information to the operating system through an API provided by the SDK; the operating system then formulates a corresponding resource-configuration policy based on that information and allocates system resources to the target application accordingly. Compared with simply improving the hardware performance of the terminal, such an operating system can allocate resources according to the running state of the application, so that the application achieves good running effects in different states while reducing its dependence on terminal hardware.
However, in the prior art, the display thread of a terminal's foreground application usually needs data returned by a background thread in order to run, and the background thread in turn has to wait for other processes to finish before it can execute, obtain the corresponding data, and return it to the display thread. As a result, the display thread cannot execute in time and the foreground application appears stuck. The prior art cannot solve this blocking of the foreground application's display thread.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The invention mainly aims to provide a system resource adjusting method, so as to solve the problem in the prior art that a foreground application becomes stuck.
In order to achieve the above object, the present invention provides a method for adjusting system resources, the adjustment of system resources comprising the following steps:
S1: acquiring a resource to be optimized corresponding to a current system resource;
S2: adjusting the resource to be optimized according to a preset strategy.
Optionally, the step S2 includes:
When communication exists between the current system resource and the resource to be optimized, acquiring the priority of the current system resource and a corresponding communication request;
acquiring the priority of the resource to be optimized responding to the communication request;
And adjusting the priority of the current system resource and/or the resource to be optimized so that the priority of the resource to be optimized is higher than or equal to the priority of the current system resource.
Optionally, the step S2 further includes:
storing the resource information of the resource to be optimized into a stack of the current system resource;
and after the resource to be optimized finishes running, when the resource information of the resource to be optimized matches the top-of-stack information of the stack, the top-of-stack information is taken out to restore the initial priority of the resource to be optimized.
Optionally, the step S2 includes:
And when the resource to be optimized is a preset resource, adjusting the priority of the preset resource to be higher than or equal to the priority of the current system resource.
Optionally, the step S2 further includes:
Storing the resource information of the preset resource into a stack of the current system resource;
And after the preset resource is operated, controlling the preset resource to release the corresponding resource, and when the resource information of the preset resource is matched with the stack top information of the stack, taking out the stack top information to restore the initial priority of the preset resource.
Optionally, the adjusting method further includes:
When a switching instruction corresponding to the current system resource is received and/or an error occurs in the current system resource, the resource information stored in the stack is taken out.
Optionally, before the step S1, the method further includes:
Within a preset time period and/or a preset geofence, taking a system resource whose usage duration is greater than or equal to a preset duration and/or whose launch frequency is greater than or equal to a preset frequency as a preset resource;
And executing the step of acquiring the resource to be optimized corresponding to the current system resource when the resource corresponding to the current system resource is the preset resource.
Optionally, the current system resource and/or the resource to be optimized includes at least one of an application, a process, and a thread.
In order to achieve the above object, the present invention further provides a system resource adjustment device, where the system resource adjustment device includes a memory, a processor, and a system resource adjustment program stored in the memory and capable of running on the processor, and the system resource adjustment program, when executed by the processor, implements the steps of the system resource adjustment method described above.
In order to achieve the above object, the present invention further provides a readable storage medium having stored thereon a system resource adjustment program which, when executed by a processor, implements the steps of the system resource adjustment method described above.
The technical scheme of the invention acquires the resource to be optimized corresponding to the current system resource and adjusts the resource to be optimized according to a preset strategy, so as to raise the priority of the resource to be optimized on which the current system resource depends, thereby enabling the resource to be optimized to run faster, preventing the terminal from becoming stuck, and improving the fluency of application running.
Drawings
FIG. 1 is a schematic diagram of a device architecture of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating an embodiment of a method for adjusting system resources according to the present invention;
FIG. 3 is a flowchart illustrating an embodiment of the step S2 of the system resource adjustment method according to the present invention;
FIG. 4 is a flowchart illustrating another embodiment of a system resource adjustment method according to the present invention;
FIG. 5 is a flowchart illustrating another embodiment of the step S2 of the system resource adjustment method according to the present invention;
Fig. 6 is a flowchart illustrating a system resource adjustment method according to another embodiment of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The technical scheme of the invention mainly comprises the following steps:
S1: acquiring a resource to be optimized corresponding to a current system resource;
S2: adjusting the resource to be optimized according to a preset strategy.
In the prior art, the display thread of a terminal's foreground application usually needs data returned by a background thread in order to run, and the background thread in turn has to wait for other processes to finish before it can execute, obtain the corresponding data, and return it to the display thread. As a result, the display thread cannot execute in time and the foreground application appears stuck. The prior art cannot solve this blocking of the foreground application's display thread.
The technical scheme of the invention acquires the resource to be optimized corresponding to the current system resource and adjusts the resource to be optimized according to a preset strategy, so as to raise the priority of the resource to be optimized on which the current system resource depends, thereby enabling the resource to be optimized to run faster, preventing the terminal from becoming stuck, and improving the fluency of application running.
As shown in fig. 1, fig. 1 is a schematic diagram of a hardware operating environment of a terminal according to an embodiment of the present invention.
The terminal in the embodiment of the invention is a mobile terminal or a fixed terminal, such as a mobile phone. As shown in fig. 1, the terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002, wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard) or a remote controller, and the optional user interface 1003 may further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wi-Fi interface). The memory 1005 may be a high-speed RAM or a non-volatile memory (non-volatile memory), such as disk storage; the memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
It will be appreciated by those skilled in the art that the structure of the terminal shown in fig. 1 is not limiting of the terminal and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and a control program of the apparatus may be included in the memory 1005, which is one type of computer storage medium.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server and performing data communication with the background server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be used to call a control program of the device stored in the memory 1005 and perform the following operations:
S1: acquiring a resource to be optimized corresponding to a current system resource;
S2: adjusting the resource to be optimized according to a preset strategy.
Further, the processor 1001 may call a control program of the device stored in the memory 1005, and also perform the following operations:
When communication exists between the current system resource and the resource to be optimized, acquiring the priority of the current system resource and a corresponding communication request;
acquiring the priority of the resource to be optimized responding to the communication request;
and adjusting the priority of the current system resource and/or the resource to be optimized so that the priority of the resource to be optimized is higher than or equal to the priority of the current system resource.
Further, the processor 1001 may call a control program of the device stored in the memory 1005, and also perform the following operations:
storing the resource information of the resource to be optimized into a stack of the current system resource;
and after the resource to be optimized finishes running, when the resource information of the resource to be optimized matches the top-of-stack information of the stack, the top-of-stack information is taken out to restore the initial priority of the resource to be optimized.
Further, the processor 1001 may call a control program of the device stored in the memory 1005, and also perform the following operations:
And when the resource to be optimized is a preset resource, adjusting the priority of the preset resource to be higher than or equal to the priority of the current system resource.
Further, the processor 1001 may call a control program of the device stored in the memory 1005, and also perform the following operations:
Storing the resource information of the preset resource into a stack of the current system resource;
And after the preset resource is operated, controlling the preset resource to release the corresponding resource, and when the resource information of the preset resource is matched with the stack top information of the stack, taking out the stack top information to restore the initial priority of the preset resource.
Further, the processor 1001 may call a control program of the device stored in the memory 1005, and also perform the following operations:
When a switching instruction corresponding to the current system resource is received and/or an error occurs in the current system resource, the resource information stored in the stack is taken out.
Further, the processor 1001 may call a control program of the device stored in the memory 1005, and also perform the following operations:
Within a preset time period and/or a preset geofence, taking a system resource whose usage duration is greater than or equal to a preset duration and/or whose launch frequency is greater than or equal to a preset frequency as a preset resource;
And executing the step of acquiring the resource to be optimized corresponding to the current system resource when the resource corresponding to the current system resource is the preset resource.
Further, the processor 1001 may call a control program of the device stored in the memory 1005, and also perform the following operations:
the current system resource and/or the resource to be optimized comprises at least one of an application, a process and a thread.
Referring to fig. 2, in embodiment 1 of the present invention, the method for adjusting system resources includes the following steps:
Step S10, acquiring a resource to be optimized corresponding to a current system resource.
In this embodiment, the current system resource and/or the resource to be optimized includes at least one of an application, a process, and a thread. In the following description, the current system resource is taken to be the currently running thread and the resource to be optimized is taken to be the thread to be optimized. It can be understood that, in all of the following embodiments, the currently running thread may be replaced by a currently running application or its corresponding currently running process, and the thread to be optimized may be replaced by an application to be optimized or a corresponding process to be optimized.
In this embodiment, applications may be classified into foreground applications and background applications according to their running state. A foreground application is an application running in the foreground of the terminal; it is displayed in the foreground and can interact with the user. A background application is an application running in the background of the terminal; it is not displayed in the foreground and does not carry out an interaction process with the user. The terminal can control switching between foreground and background operation for different applications. The running of an application (APP) is typically embodied by the running of a number of associated processes. A process is a running activity of a program on a certain data set in a computer, is the basic unit of resource allocation and scheduling by the system, and is the basis of the operating system structure. A thread is an entity of a process and the basic unit scheduled and dispatched by the central processing unit (CPU); it is a unit smaller than a process that can run independently. A process run by an application comprises at least one thread, one thread can create and cancel another thread, and multiple threads in the same process may execute concurrently. The application in this embodiment may be a game application, a music application, a social application, a payment application, or another type of application. For example, a user may play a game through a game application, watch video through a video application, play music through a music application, and so on.
In this embodiment, a dependency indicates a relationship in which one application needs to use the data of another application or applications in order to execute successfully. A dependency relationship involves two roles, namely the depending application and the depended-on application. Since the execution of an application is typically represented by the execution of a number of related processes, dependencies between applications also appear as dependencies between processes. For example, if a certain process a in application A depends on a certain process b in application B, i.e. process b is depended on by process a, then process a needs to use the data of process b to implement its execution; this also indicates that application A depends on application B, or that application B is depended on by application A, with application A needing the data of application B to implement its execution, so that application A is the depending application and application B is the depended-on application. It will be appreciated that the depending application may be a foreground application and the depended-on application may be a background application. The terminal may search the set of background applications to query whether there is a background application that the foreground application depends on.
In this embodiment, the currently running thread may be a display thread in a process corresponding to a running foreground application, or a background thread in a process corresponding to a running background application; in the description below, the currently running thread is taken as the display thread of the terminal's display interface as an example. In an embodiment, the thread to be optimized corresponding to the currently running thread is a certain thread of a background application on which the currently running thread depends, and the currently running thread needs the thread to be optimized to return the corresponding data before it can continue to execute. It can be understood that, because of this, the thread to be optimized executes before the currently running thread continues; at that moment the thread to be optimized is the thread the terminal is actually executing, and after the depended-on thread finishes executing and returns the corresponding data to the depending thread, the depending thread can continue to run. The currently running thread is therefore simply the thread the terminal is currently executing, and it may be a thread that depends on other threads or a thread that other threads depend on.
In an embodiment, the thread to be optimized corresponding to the currently running thread may be one thread or several threads. For example, when taking a photo through the WeChat application, the currently displayed WeChat thread needs to call a camera thread; when video-chatting through the WeChat application, it may be necessary to invoke a camera thread, a microphone thread, and so on at the same time.
And step S20, adjusting the resource to be optimized according to a preset strategy. In this embodiment, the preset strategy aims to raise the system priority of the resource to be optimized so that its priority is higher than it was before the adjustment, allowing the resource to be optimized to run faster and return the data required for running to the current system resource. For example, when the current system resource is the currently running thread and the resource to be optimized is the thread to be optimized, the thread to be optimized is a thread on which the currently running thread depends, and the result of adjusting its priority is that the adjusted priority of the thread to be optimized is equal to or higher than the priority of the currently running thread. The priority represents the precedence with which an application thread occupies resources and reflects the importance of the corresponding thread. The priority may include multiple levels, the priorities of different threads are not necessarily the same, and the priority of each thread may be set in advance; for example, threads of system-level applications may be set to a higher priority and threads of third-party applications to a lower priority. The resources here are the software or hardware resources the terminal needs to process an application event, such as the terminal's CPU (Central Processing Unit), memory (Memory), hardware, network resources, IO (Input-Output), and the like.
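To make the adjustment of step S20 concrete, the following is a minimal sketch using standard Java thread priorities; the class name, the method name, and the use of Java's 1-10 priority range are illustrative assumptions and not part of the disclosed implementation:

```java
// Minimal illustrative sketch (not the patented implementation): raise the
// depended-on (to-be-optimized) thread so its priority is no longer lower
// than that of the currently running thread.
public final class PriorityBooster {

    /** Returns the original priority of toOptimize so it can be restored later. */
    public static int boostToAtLeast(Thread toOptimize, Thread currentlyRunning) {
        int original = toOptimize.getPriority();
        int target = currentlyRunning.getPriority();
        if (original < target) {
            toOptimize.setPriority(target);   // raising to equal is preferred (see below)
        }
        return original;
    }
}
```

The returned original priority corresponds to the initial priority that the later embodiments store in a stack and restore after the depended-on thread finishes.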
In an embodiment, the priority of the thread to be optimized reflects the relative amount of resources the thread can obtain, and is generally represented by a value or a proportion. For example, if the priority value corresponding to the currently running thread is 100 and the priority value of the thread to be optimized is 80, the priority of the currently running thread is higher than that of the thread to be optimized. As another example, if the priority proportion corresponding to the currently running thread is 1% and that of the thread to be optimized is 2%, the priority of the currently running thread is lower than that of the thread to be optimized.
In this embodiment, after the thread to be optimized corresponding to the currently running thread, i.e. the thread on which the currently running thread depends, is obtained, the priority of the thread to be optimized is adjusted so that its adjusted priority is equal to or higher than the priority of the currently running thread. For example, when the priority value of the currently running thread is 100 and that of the thread to be optimized is 80, the priority of the thread to be optimized is adjusted to be equal to or greater than 100. When the currently running thread enters a sleep state and the thread to be optimized is controlled to execute first, the system gives the thread to be optimized the resources corresponding to its adjusted priority, so that when the thread to be optimized (the depended-on thread) executes it owns resources equal to or greater than the original resources of the depending thread, completes its run faster, and returns the corresponding data to the depending thread. When the currently running thread is the display thread, the display thread can obtain the data required for execution in time, which prevents the display process from becoming stuck.
Preferably, the priority of the thread to be optimized is adjusted to be equal to the priority of the currently running thread; that is, when the thread to be optimized (the depended-on thread) runs, it owns resources equal to the original resources of the depending thread, so that it finishes executing quickly, while avoiding the influence on the priorities of other threads in the system that would arise if the depended-on thread owned more resources than the depending thread.
In another embodiment, the thread to be optimized corresponding to the currently running thread is not a thread on which the currently running thread depends, but a thread that has no dependency relationship with it. By lowering the priority of such a thread to be optimized so that it is lower than or equal to the priority of the currently running thread, and because priority is a relative concept, the priority of the threads the currently running thread does depend on is effectively raised. The background thread on which the currently running thread depends can then execute quickly and return the corresponding data to the currently running thread. When the currently running thread is the display thread, the display thread can likewise obtain the data required for execution in time, preventing the display process from becoming stuck. In the following embodiments, the thread to be optimized corresponding to the currently running thread is described as a background thread on which the currently running thread depends. In yet another embodiment, the priority of the thread to be optimized may also be raised relatively by lowering the priority of the currently running thread.
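A hedged sketch of the two alternative strategies just described, again with plain Java priorities (all names are illustrative assumptions; which strategy to apply is a policy decision outside this snippet):

```java
import java.util.List;

// Illustrative sketch of the alternative adjustments described above.
public final class RelativePriority {

    /** Lower threads that the currently running thread does not depend on,
     *  which relatively raises the threads it does depend on. */
    public static void demoteUnrelated(List<Thread> unrelated, Thread currentlyRunning) {
        int cap = currentlyRunning.getPriority();
        for (Thread t : unrelated) {
            if (t.getPriority() > cap) {
                t.setPriority(cap);   // now lower than or equal to the current thread
            }
        }
    }

    /** Or lower the currently running thread itself so it does not outrank
     *  the thread to be optimized. */
    public static void demoteCurrent(Thread currentlyRunning, Thread toOptimize) {
        int target = Math.min(currentlyRunning.getPriority(), toOptimize.getPriority());
        currentlyRunning.setPriority(target);
    }
}
```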
In summary, according to the technical scheme of the invention, the thread to be optimized corresponding to the currently running thread is obtained and its priority is adjusted so that it is equal to or higher than the priority of the currently running thread. When the thread to be optimized runs, it owns CPU resources equal to or greater than those of the currently running thread, so it can complete its run faster and return the corresponding data to the currently running thread; when the currently running thread is the display thread of the terminal, the display thread can obtain the data required for execution in time, which prevents the display process from becoming stuck and improves the running fluency of the foreground application. Similarly, in the technical scheme of the invention, a resource comprises at least one of an application, a thread, and a process; the resource to be optimized corresponding to the current system resource is acquired and adjusted according to a preset strategy so as to raise the priority of the resource to be optimized on which the current system resource depends, thereby enabling the resource to be optimized to run faster, preventing the terminal from becoming stuck, and improving the fluency of application running.
Optionally, in embodiment 2, as shown in fig. 3, on the basis of embodiment 1 above, the step S20 includes:
Step S21, when communication exists between the current system resource and the resource to be optimized, the priority of the current system resource and a corresponding communication request are obtained;
Step S22, the priority of the resource to be optimized responding to the communication request is obtained;
Step S23, adjusting the priority of the current system resource and/or the resource to be optimized, so that the priority of the resource to be optimized is higher than or equal to the priority of the current system resource.
In this embodiment, the adjusting process may be that, when the priority of the currently running thread is higher than that of the thread to be optimized, the priority of the thread to be optimized is adjusted to be equal to or higher than the priority of the currently running thread; the adjusting process may also lower the priority of the currently running thread to be equal to or lower than that of the thread to be optimized. Either way, the effect that the priority of the thread to be optimized is equal to or higher than the priority of the currently running thread is achieved.
In this embodiment, taking the resource as a thread as an example, the currently running thread and the thread to be optimized may be two threads in the same process, or two threads belonging to different processes. When they belong to different processes, inter-process communication (IPC) exists between them; common inter-process communication mechanisms include Binder communication, pipes (Pipe), signals (Signal), traces (Trace), sockets (Socket), message queues (Message Queue), shared memory (Shared Memory), semaphores (Semaphore), and the like.
The following takes the Binder communication mechanism as an example. The currently running thread is a thread created by the Binder client (also called the client, requesting end, or sending end) that issues a communication request asking the Binder server (also called the server) to return data; the thread to be optimized is the thread on the Binder server that responds to the communication request, obtains the corresponding data, and returns it to the client's thread. When a communication request from the Binder client is received, the communication request is stored and the priority of the thread corresponding to the communication request, i.e. the currently running thread (the depending thread), is obtained. When the priority of the currently running thread is higher than that of the thread to be optimized, the priority of the thread to be optimized is adjusted so that its adjusted priority is equal to or higher than the priority of the currently running thread; when the priority of the currently running thread is equal to or lower than that of the thread to be optimized, the priority of the thread to be optimized is kept without adjustment. In this way, the thread executed by the Binder server can quickly obtain the corresponding data and return it to the thread in the Binder client waiting to receive the data, i.e. the currently running thread (the depending thread), so that the currently running thread can be guaranteed to run quickly.
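The Binder example above amounts to carrying the caller's priority along with the request and applying it on the thread that serves the request. A simplified, non-Binder sketch of that idea follows; the Request and ServerWorker types are illustrative assumptions, not Android Binder APIs:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch: an IPC-style request that carries the client's priority.
final class Request {
    final Runnable work;
    final int clientPriority;   // priority of the client thread waiting for the reply
    Request(Runnable work, int clientPriority) {
        this.work = work;
        this.clientPriority = clientPriority;
    }
}

// The server-side worker boosts itself to the client's priority only when it is lower.
final class ServerWorker implements Runnable {
    private final BlockingQueue<Request> queue = new LinkedBlockingQueue<>();

    void submit(Request r) { queue.add(r); }

    @Override public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                Request r = queue.take();
                int original = Thread.currentThread().getPriority();
                if (original < r.clientPriority) {
                    Thread.currentThread().setPriority(r.clientPriority);
                }
                try {
                    r.work.run();                                   // produce and return the data
                } finally {
                    Thread.currentThread().setPriority(original);   // restore after responding
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```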
Alternatively, in embodiment 3, as shown in fig. 4, on the basis of the above-described embodiments 1-2, the following steps are performed while step S20 is performed:
step S30, storing the resource information of the resource to be optimized into a stack of the current system resource;
And step S40, after the resource to be optimized finishes running, when the resource information of the resource to be optimized matches the top-of-stack information of the stack, the top-of-stack information is taken out to restore the initial priority of the resource to be optimized.
In this embodiment, taking the resource as a thread as an example, the system creates a stack for the currently running thread, used to store the thread information of the thread to be optimized (thread information is one kind of the above-mentioned resource information; it can be understood that when the resource is an application the resource information is application information, and when the resource is a process the resource information is process information). The thread information at least includes identification information of the thread, such as the thread number, the initial priority of the thread, the thread state, and the like. The thread information of the thread to be optimized is stored in the stack while step S20 is performed, i.e. while the priority of the thread to be optimized is adjusted; of course, it may also be stored before the priority is adjusted. After the thread to be optimized finishes executing, the thread information stored in the stack is taken out so as to restore the initial priority of the thread to be optimized. The reason for restoring the initial priority is that the adjusted priority of the thread to be optimized (the depended-on thread) was aimed at the currently running thread (the depending thread), so as to ensure that the currently running thread executes quickly; after the depended-on thread finishes executing and returns the corresponding data to the depending thread, the adjusted priority no longer serves that purpose. Restoring the initial priority therefore avoids the adjusted priority affecting the priorities of other threads in the system and ensures that the other threads in the system execute according to their own priorities.
In this embodiment, the depending thread and the depended-on thread are relative notions. For example, the currently running thread A is a depending thread and the thread to be optimized B is a depended-on thread, while the thread B may itself depend on a thread C; relative to the thread C, the thread B is a depending thread. That is, the threads A, B, and C depend on one another in turn, and the priorities of the threads B and C are adjusted in turn. When the thread B is to be executed first, its thread information needs to be stored in the stack; and because the thread B depends on the thread C, the thread C needs to execute before the thread B, so the thread information of the thread C is also stored in the stack. At this moment the stack holds the thread information of both B and C, and because the thread information of C is pushed after that of B, the thread information of C is located at the top of the stack. After the thread C finishes executing and returns its data to the thread B, it is matched against the top-of-stack information, and when the match succeeds the top-of-stack information, i.e. the thread information of C, is taken out to restore the initial priority of C. The thread B continues to execute after receiving the data returned by C; after B finishes executing and returns its data to A, it is matched against the top-of-stack information, and when the match succeeds the top-of-stack information, i.e. the thread information of B, is taken out to restore the initial priority of B. Therefore, when multiple threads depend on one another in sequence, their thread information needs to be pushed in sequence and popped in sequence after execution completes, so as to restore the initial priorities of the multiple threads.
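A minimal sketch of the per-current-thread stack described in this embodiment; ThreadInfo, the chosen fields, and the matching rule (by thread id) are illustrative assumptions:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of the thread-information stack used to restore priorities.
final class ThreadInfo {
    final long threadId;
    final int initialPriority;
    ThreadInfo(long threadId, int initialPriority) {
        this.threadId = threadId;
        this.initialPriority = initialPriority;
    }
}

final class PriorityStack {
    private final Deque<ThreadInfo> stack = new ArrayDeque<>();

    /** Call before raising a depended-on thread's priority, so the initial
     *  priority is captured (e.g. A -> B -> C pushes B first, then C). */
    void push(Thread toBoost) {
        stack.push(new ThreadInfo(toBoost.getId(), toBoost.getPriority()));
    }

    /** Call when a depended-on thread finishes and has returned its data. */
    void popAndRestore(Thread finished) {
        ThreadInfo top = stack.peek();
        if (top != null && top.threadId == finished.getId()) {   // match against the stack top
            stack.pop();
            finished.setPriority(top.initialPriority);           // restore the initial priority
        }
    }
}
```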
Optionally, in embodiment 4, on the basis of embodiments 1 to 3 above, the step S20 includes:
and step S24, when the resource to be optimized is a preset resource, adjusting the priority of the preset resource to be higher than or equal to the priority of the current system resource.
In this embodiment, the preset resource is taken to be a lock-holding thread as an example; when the resource to be optimized is such a preset resource, the priority of the lock-holding thread is adjusted to be equal to or higher than the priority of the currently running thread. The thread to be optimized (the depended-on thread) holds a lock, and the currently running thread (the depending thread) needs to obtain that lock before it can continue to execute, so the lock-holding thread needs to release the lock, and to release the lock it must execute first. Therefore, in this embodiment, the priority of the lock-holding thread is adjusted so that it is equal to or higher than the priority of the currently running thread, allowing the lock-holding thread to execute quickly; after it finishes executing, it releases the lock and returns the data to the currently running thread, which continues to execute after obtaining the lock and the returned data. This avoids the foreground application corresponding to the currently running thread becoming stuck and improves the running fluency of the foreground application.
Preferably, the priority of the lock-holding thread is adjusted to be equal to the priority of the currently running thread; that is, when the lock-holding thread (the depended-on thread) runs, it owns resources equal to the original resources of the depending thread and finishes executing quickly, while avoiding the influence on the priorities of other threads in the system that would arise if the lock-holding thread owned more resources than the depending thread.
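This behaviour is essentially the classical priority-inheritance idea. A hedged sketch under that interpretation (not the patent's own code; a real implementation would sit inside the system's lock primitives):

```java
// Illustrative priority-inheritance sketch: a waiter boosts the lock holder
// before blocking, and the holder's initial priority is restored on release.
public final class InheritingLock {
    private final Object monitor = new Object();
    private Thread holder;
    private int holderInitialPriority;

    public void lock() throws InterruptedException {
        Thread me = Thread.currentThread();
        synchronized (monitor) {
            while (holder != null) {
                // Boost the holder so it is not lower than the waiter (equal is preferred).
                if (holder.getPriority() < me.getPriority()) {
                    holder.setPriority(me.getPriority());
                }
                monitor.wait();
            }
            holder = me;
            holderInitialPriority = me.getPriority();
        }
    }

    public void unlock() {
        synchronized (monitor) {
            if (holder == Thread.currentThread()) {
                holder.setPriority(holderInitialPriority);   // restore the initial priority
                holder = null;
                monitor.notifyAll();                         // release the lock for waiters
            }
        }
    }
}
```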
Alternatively, in embodiment 5, as shown in fig. 5, on the basis of embodiment 4 described above, the following steps are performed while step S20 is performed:
Step S25, storing the resource information of the preset resource into the stack of the current system resource;
And step S26, after the preset resource finishes running, controlling the preset resource to release the corresponding resource, and when the resource information of the preset resource matches the top-of-stack information of the stack, taking out the top-of-stack information to restore the initial priority of the preset resource.
In this embodiment, taking the preset resource as a lock-holding thread as an example, the system creates a stack for the currently running thread to store the thread information of the lock-holding thread. The thread information of the lock-holding thread is stored while step S20 is performed, i.e. while the priority of the lock-holding thread is adjusted; of course, it may also be stored before the priority is adjusted. After the lock-holding thread finishes executing, its thread information is taken out of the stack to restore its initial priority, so as to avoid the adjusted priority of the lock-holding thread (the depended-on thread) affecting the priorities of other threads in the system and to ensure that other threads execute according to their own priorities. In this embodiment, controlling the preset resource to release the corresponding resource may be controlling the lock-holding thread to release the lock, that is, to release the resources occupied by the lock-holding thread.
In this embodiment, when there are multiple interdependent threads, multiple thread locks may exist among them. Therefore, after a lock-holding thread finishes executing, it is controlled to release the lock and its thread information is matched against the top-of-stack information of the stack; when the match succeeds, the top-of-stack information is taken out to restore the initial priority of that lock-holding thread. When the lock-holding thread does not match the top-of-stack information because multiple thread locks exist, no restoration is performed, which prevents the priority of one lock-holding thread D from being erroneously restored using the stored priority of another lock-holding thread E.
Optionally, in embodiment 6, on the basis of embodiment 3 or 5 above, the adjusting method further includes:
And step S50, when at least one of the following occurs, namely a switching instruction corresponding to the current system resource is received, or an error occurs in the current system resource, the resource information stored in the stack is taken out. In this embodiment, taking the current system resource as the currently running thread of a foreground application as an example, when a switching instruction of the foreground application is received, it indicates that the user wants to switch the foreground application; the thread priorities adjusted before the switch are not suitable for the foreground application after the switch, so the thread information of all threads stored in the stack is taken out to restore the initial priorities of the adjusted threads, and the priorities of the processes on which the display process of the switched-to foreground application depends are then adjusted, so as to guarantee its running fluency. Similarly, when an error occurs in the process corresponding to the currently running thread due to an unexpected situation, such as a thread crash, the thread information of all threads stored in the stack is also taken out to restore the initial priorities of the adjusted threads. Here all threads include lock-holding threads and non-lock-holding threads, and the threads are not matched against the top-of-stack information one by one after execution; instead, the entries are taken out in sequence, which improves the efficiency of popping the thread information.
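A sketch of the flush behaviour in step S50: when the foreground application is switched or the currently running thread errors out, every recorded thread is restored in sequence without matching. The BoostRecorder type is an illustrative assumption:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch: drain the whole stack on an app switch or a thread error.
final class BoostRecorder {

    private static final class Entry {
        final Thread thread;
        final int initialPriority;
        Entry(Thread thread, int initialPriority) {
            this.thread = thread;
            this.initialPriority = initialPriority;
        }
    }

    private final Deque<Entry> stack = new ArrayDeque<>();

    /** Call before raising a thread's priority so its initial priority is captured. */
    void recordBeforeBoost(Thread t) {
        stack.push(new Entry(t, t.getPriority()));
    }

    /** Take out everything in sequence, e.g. on a foreground-application switch
     *  or when the currently running thread crashes. */
    void restoreAll() {
        while (!stack.isEmpty()) {
            Entry e = stack.pop();
            e.thread.setPriority(e.initialPriority);
        }
    }
}
```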
Optionally, in embodiment 7, as shown in fig. 6, on the basis of embodiments 1 to 6 above, before the step S10 the method further includes:
Step S60, within a preset time period and/or a preset geofence, taking the system resources whose usage duration is greater than or equal to a preset duration and/or whose launch frequency is greater than or equal to a preset frequency as preset resources;
And executing the step of acquiring the resource to be optimized corresponding to the current system resource when the resource corresponding to the current system resource is the preset resource.
In this embodiment, taking the current system resource as an application as an example: the preset time period may be one day, and within one day, applications whose usage duration exceeds a preset duration, for example 5 hours, and/or whose launch frequency exceeds a preset frequency, for example 10 times, are taken as preset applications. As another example, the preset time period may be one hour, and within one hour, applications whose usage duration exceeds a preset duration, for example 20 minutes, and/or whose launch frequency exceeds a preset frequency, for example 2 times, are taken as preset applications. The priorities of the threads to be optimized corresponding to threads of preset applications are adjusted, while those corresponding to threads not belonging to preset applications are not, which avoids adjusting the priorities of the threads to be optimized corresponding to all threads and reduces system power consumption. Adjusting only the threads to be optimized of the preset applications, from the user's point of view, satisfies the user's daily needs and improves the running fluency of the foreground application. Further, in this embodiment, in order to reduce the number of priority adjustments and lower system power consumption, only the application with the longest usage duration and/or the highest launch frequency within the preset time period may be taken as the preset application.
In this embodiment, the preset geofence delimits a preset virtual geographic area, for example the user's home, the user's office, or a shopping mall. Within the preset geofence, an application whose usage duration is greater than or equal to a preset duration and/or whose launch frequency is greater than or equal to a preset frequency is taken as a preset application, where the usage duration and the launch frequency may be measured relative to the total time the terminal spends within the preset geofence, or relative to a preset period of the terminal within the preset geofence, and the preset duration and preset frequency may be adjusted according to the actual situation. For example, if the total time of the terminal within the preset geofence is 10000 hours, the preset duration may be set to 2000 hours; if the preset period of the terminal within the preset geofence is one month, the preset duration may be 200 hours; the preset frequency is adjusted in the same way. In this embodiment, the priorities of the threads to be optimized corresponding to threads of preset applications are adjusted, while those corresponding to threads not belonging to preset applications are not, which avoids adjusting the priorities of the threads to be optimized corresponding to all threads and reduces system power consumption; adjusting only the threads to be optimized of the preset applications satisfies the user's daily needs and improves the running fluency of the foreground application. Further, in this embodiment, in order to reduce the number of priority adjustments and lower system power consumption, only the application with the longest usage duration and/or the highest launch frequency within the preset geofence may be taken as the preset application.
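A hedged sketch of how the preset-application selection described in this embodiment could be expressed; the UsageRecord type, the thresholds, and the treatment of "and/or" as a simple OR are illustrative assumptions (geofence membership is assumed to be resolved before the records are built):

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: pick "preset" applications whose usage time and/or launch
// frequency within a time window (or geofence) reach the configured thresholds.
public final class PresetSelector {

    public static final class UsageRecord {
        final String packageName;
        final Duration usage;     // usage time inside the window / geofence
        final int launches;       // launch count inside the window / geofence
        public UsageRecord(String packageName, Duration usage, int launches) {
            this.packageName = packageName;
            this.usage = usage;
            this.launches = launches;
        }
    }

    /** e.g. minUsage = 20 minutes and minLaunches = 2 for a one-hour window. */
    public static List<String> selectPreset(List<UsageRecord> records,
                                            Duration minUsage, int minLaunches) {
        List<String> preset = new ArrayList<>();
        for (UsageRecord r : records) {
            if (r.usage.compareTo(minUsage) >= 0 || r.launches >= minLaunches) {
                preset.add(r.packageName);
            }
        }
        return preset;
    }
}
```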
In order to achieve the above object, the present invention further provides a system resource adjusting device, where the system resource adjusting device includes a memory, a processor, and a system resource adjusting program stored in the memory and capable of running on the processor, and the system resource adjusting program when executed by the processor implements the steps of the system resource adjusting method described above.
In order to achieve the above object, the present invention also provides a readable storage medium having stored thereon an adjustment program of a system resource, which when executed by a processor, implements the steps of the adjustment method of a system resource as described above.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising instructions for causing a terminal device (which may be a television, a mobile phone, a computer, a server, an apparatus, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.
Claims (8)
1. A method for adjusting system resources, wherein the adjusting of system resources comprises the steps of:
S1: acquiring a resource to be optimized corresponding to a current system resource;
S2: adjusting the resource to be optimized according to a preset strategy, which comprises the following steps: when the resource to be optimized is a preset resource, adjusting the priority of the preset resource to be higher than or equal to the priority of the current system resource, wherein the preset resource comprises a system resource whose usage duration within a preset time period and/or a preset geofence is greater than or equal to a preset duration and/or whose launch frequency is greater than or equal to a preset frequency; and/or storing the resource information of the resource to be optimized and/or the preset resource through the stack of the current system resource;
The step S2 further includes:
When communication exists between the current system resource and the resource to be optimized, acquiring the priority of the current system resource and a corresponding communication request;
acquiring the priority of the resource to be optimized responding to the communication request;
And adjusting the priority of the current system resource and/or the resource to be optimized so that the priority of the resource to be optimized is higher than or equal to the priority of the current system resource.
2. The method for adjusting system resources according to claim 1, wherein the step S2 further comprises:
storing the resource information of the resource to be optimized into a stack of the current system resource;
and, after the operation of the resource to be optimized is finished, when the resource information of the resource to be optimized matches the stack top information of the stack, taking out the stack top information to restore the initial priority of the resource to be optimized.
3. The method for adjusting system resources according to claim 1, wherein the step S2 further comprises:
storing the resource information of the preset resource into a stack of the current system resource;
and, after the operation of the preset resource is finished, controlling the preset resource to release the corresponding resource, and, when the resource information of the preset resource matches the stack top information of the stack, taking out the stack top information to restore the initial priority of the preset resource.
4. The method for adjusting system resources according to claim 2 or 3, wherein the adjusting method further comprises:
when at least one of the following conditions occurs: a switching instruction corresponding to the current system resource is received, or an error occurs in the current system resource, taking out the resource information stored in the stack.
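Claims 2 to 4 describe a stack-based save-and-restore of priorities. The sketch below, with hypothetical names (`PriorityStack`, `restore_if_top`, `unwind`) and a plain dictionary standing in for the system's priority table, illustrates one way such a mechanism could behave; it is not the patented implementation.

```python
# Illustrative sketch only; each stack entry is assumed to record a
# resource identifier plus its initial priority.
class PriorityStack:
    def __init__(self):
        self._stack = []                 # list of (resource_id, initial_priority)

    def push(self, resource_id: str, initial_priority: int) -> None:
        """Claims 2/3: store the resource information before its priority is adjusted."""
        self._stack.append((resource_id, initial_priority))

    def restore_if_top(self, resource_id: str, priorities: dict) -> bool:
        """Claims 2/3: after the resource finishes running, pop the entry only if it
        matches the stack top, and restore the initial priority."""
        if self._stack and self._stack[-1][0] == resource_id:
            rid, initial = self._stack.pop()
            priorities[rid] = initial
            return True
        return False

    def unwind(self, priorities: dict) -> None:
        """Claim 4: on a switching instruction or an error in the current system
        resource, take out everything stored in the stack."""
        while self._stack:
            rid, initial = self._stack.pop()
            priorities[rid] = initial

if __name__ == "__main__":
    priorities = {"video_decoder": 3}
    stack = PriorityStack()
    stack.push("video_decoder", priorities["video_decoder"])   # save initial priority
    priorities["video_decoder"] = 10                           # temporary boost
    stack.restore_if_top("video_decoder", priorities)          # back to 3 after it finishes
```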
5. The method for adjusting system resources according to claim 1, further comprising, before the step S1:
within a preset time period and/or a preset geofence, taking a system resource whose usage duration is greater than or equal to a preset duration and/or whose start-up frequency is greater than or equal to a preset frequency as the preset resource;
and executing the step of acquiring the resource to be optimized corresponding to the current system resource when the resource corresponding to the current system resource is the preset resource.
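The selection rule of claim 5 can be pictured as a simple filter over usage statistics gathered within the preset time period and/or geofence. The record fields, thresholds, and function name below are assumptions made for this sketch only.

```python
# Illustrative sketch only; fields and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class UsageRecord:
    resource_id: str
    in_window: bool        # used inside the preset time period and/or geofence
    usage_seconds: float   # cumulative usage duration in that window
    launch_count: int      # number of start-ups in that window

def select_preset_resources(records, min_seconds=600.0, min_launches=5):
    """Claim 5: mark a resource as preset when, within the preset time period
    and/or geofence, its usage duration or start-up frequency reaches the threshold."""
    preset = set()
    for rec in records:
        if rec.in_window and (rec.usage_seconds >= min_seconds
                              or rec.launch_count >= min_launches):
            preset.add(rec.resource_id)
    return preset

if __name__ == "__main__":
    history = [
        UsageRecord("navigation_app", True, 1800.0, 2),
        UsageRecord("rarely_used_tool", True, 30.0, 1),
    ]
    print(select_preset_resources(history))   # {'navigation_app'}
```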
6. The method for adjusting system resources according to claim 4, wherein the current system resource and/or the resource to be optimized comprises at least one of an application, a process, and a thread.
7. A system resource adjustment device, characterized in that the system resource adjustment device comprises a memory, a processor, and a system resource adjustment program stored on the memory and executable on the processor, wherein the system resource adjustment program, when executed by the processor, implements the steps of the system resource adjustment method according to any one of claims 1 to 6.
8. A computer-readable storage medium, wherein a system resource adjustment program is stored on the computer-readable storage medium, and the program, when executed by a processor, implements the steps of the system resource adjustment method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910916214.6A CN110704187B (en) | 2019-09-25 | 2019-09-25 | System resource adjusting method and device and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110704187A CN110704187A (en) | 2020-01-17 |
CN110704187B true CN110704187B (en) | 2024-10-01 |
Family
ID=69196431
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910916214.6A Active CN110704187B (en) | 2019-09-25 | 2019-09-25 | System resource adjusting method and device and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110704187B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111666153B (en) * | 2020-05-25 | 2024-07-05 | 深圳Tcl新技术有限公司 | Cache task management method, terminal device and storage medium |
CN115391054B (en) * | 2022-10-27 | 2023-03-17 | 宁波均联智行科技股份有限公司 | Resource allocation method of vehicle-mounted machine system and vehicle-mounted machine system |
CN116382876A (en) * | 2023-04-28 | 2023-07-04 | 维沃移动通信有限公司 | Task management method, device, electronic equipment and medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109992400A (en) * | 2017-12-29 | 2019-07-09 | 广东欧珀移动通信有限公司 | Resource allocation methods, device, mobile terminal and computer readable storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9632569B2 (en) * | 2014-08-05 | 2017-04-25 | Qualcomm Incorporated | Directed event signaling for multiprocessor systems |
CN106250167B (en) * | 2016-02-04 | 2019-08-02 | 北京智谷睿拓技术服务有限公司 | Method of controlling operation thereof, device and deformation controllable device |
CN107463436B (en) * | 2017-07-31 | 2019-12-10 | Oppo广东移动通信有限公司 | process control method, device, storage medium and electronic equipment |
CN109992370A (en) * | 2017-12-29 | 2019-07-09 | 广东欧珀移动通信有限公司 | Applied program processing method and device, electronic equipment, computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8713571B2 (en) | Asynchronous task execution | |
US9201693B2 (en) | Quota-based resource management | |
US9244816B2 (en) | Application testing using sandboxes | |
CN110704187B (en) | System resource adjusting method and device and readable storage medium | |
US20060010446A1 (en) | Method and system for concurrent execution of multiple kernels | |
JP7100154B2 (en) | Processor core scheduling method, equipment, terminals and storage media | |
US9218201B2 (en) | Multicore system and activating method | |
WO2014180295A1 (en) | Method, server and terminal for acquiring performance optimization strategy and terminal performance optimization | |
US10037225B2 (en) | Method and system for scheduling computing | |
CN113656157A (en) | Distributed task scheduling method and device, storage medium and electronic equipment | |
CN110955499A (en) | Processor core configuration method, device, terminal and storage medium | |
US20230275976A1 (en) | Data processing method and apparatus, and computer-readable storage medium | |
CN109002364B (en) | Method for optimizing inter-process communication, electronic device and readable storage medium | |
US20180270306A1 (en) | Coexistence of a synchronous architecture and an asynchronous architecture in a server | |
CN109388501B (en) | Communication matching method, device, equipment and medium based on face recognition request | |
CN111045789A (en) | Virtual machine starting method and device, electronic equipment and storage medium | |
JP4862056B2 (en) | Virtual machine management mechanism and CPU time allocation control method in virtual machine system | |
US20060277547A1 (en) | Task management system | |
JP2016528648A (en) | Network application parallel scheduling to reduce power consumption | |
WO2015184902A1 (en) | Concurrent processing method for intelligent split-screen and corresponding intelligent terminal | |
US8869171B2 (en) | Low-latency communications | |
JP2009541852A (en) | Computer micro job | |
CN109062706B (en) | Electronic device, method for limiting inter-process communication thereof and storage medium | |
US20060048150A1 (en) | Task management methods and related devices | |
US20120054767A1 (en) | Recording medium for resource management program, resource management device, and resource management method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||