Detailed Description
To make the purpose, technical solution, and effects of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments.
The present application provides a method, a device, and a storage device for monitoring thread usage. The method is applicable at least to the following scenario: when a background server processes task requests, the proportion of the total thread pool capacity occupied by each task request is monitored and calculated, which facilitates analyzing which task requests consume thread pool resources excessively, causing thread blocking, leaving thread resources insufficient, and producing the fault that new requests cannot be processed.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for monitoring thread usage according to a first embodiment of the present application. In this embodiment, the method for monitoring thread usage comprises the following steps:
S101: obtain the start time of the thread processing request.
A thread pool is a multi-thread processing mode: it can effectively handle concurrency among multiple threads, avoids the blocking caused by a large number of threads forcibly contending for system resources, and reduces the performance cost of frequently creating and destroying threads. By default, after a thread pool is created, the number of threads in the pool is 0; when a task arrives, a thread is created to execute it; when the number of threads in the pool reaches the maximum thread number, newly arrived tasks are placed in a cache queue. The maximum thread number indicates how many threads can be created in the thread pool at most within a predetermined period. Timing for the predetermined period works as follows: timing starts when the statistical period begins and stops when the statistical period ends; the timer is then zeroed and a new period of statistics begins. Taking the start of the statistical period as the reference point (0 point, or 0 seconds), when a task arrives and a new thread is created to execute it, that moment is recorded as the start time of the thread processing request. For example, if the statistical period starts at 0 seconds and a new thread starts executing a task at the 200th millisecond, the start time of the thread processing request is 200 ms.
S102: acquire the end time of the thread processing request and calculate its time delay data.
After the thread processing request starts, timing continues and the end time of the thread processing request is recorded; for example, if the thread processing request ends at 500 ms, its end time is 500 ms. The duration of the whole thread processing request is the time delay data; specifically, the time delay data is the time difference between the end time and the start time. It may be obtained by subtracting the start time from the end time, or by taking the absolute value of the difference. For example, if the start time of a thread processing request is 200 ms and its end time is 500 ms, the time delay data is 300 ms (500 ms - 200 ms).
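Steps S101 and S102 can be sketched as follows, assuming millisecond timestamps measured from the start of the statistical period (the helper name is illustrative, not from the application itself):

```python
def latency_ms(start_ms: int, end_ms: int) -> int:
    """Time delay data = end time - start time, taken as a positive difference."""
    return abs(end_ms - start_ms)

# Example from the text: the request starts at 200 ms and ends at 500 ms.
print(latency_ms(200, 500))  # 300
```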
S103: calculate the total capacity time of the thread pool in the predetermined period.
The total capacity time of the thread pool is the total execution time available to all threads of the thread pool in one statistical period, i.e. the total number of threads in the thread pool multiplied by the statistical time of the predetermined period.
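Under the definition above, step S103 reduces to a single multiplication (a minimal sketch; the function name is illustrative):

```python
def capacity_time_ms(thread_count: int, period_ms: int) -> int:
    """Total capacity time = total number of threads x statistical period."""
    return thread_count * period_ms

# Example: 3 threads over a 60 s statistical period give 180 s of capacity.
print(capacity_time_ms(3, 60_000))  # 180000
```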
S104: calculate the occupancy rate of the thread pool capacity occupied by the thread processing request in the predetermined period.
The occupancy rate is the ratio of the time delay data to the total capacity time. In a statistical period, the total capacity time of the thread pool is no greater than a preset value and is specifically related to the number of threads in the pool; each run of a thread processing request occupies part of the thread pool's resources, and blocking tends to occur when the thread pool is full. In this embodiment, the occupancy rate of a thread processing request is calculated to determine how much of the thread pool's resources that request occupies, so that the usage of the thread pool can be monitored, blocking of the thread pool can be effectively controlled and prevented, and the thread processing requests causing the blocking can be monitored and located, so that corresponding remedies can be taken.
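Step S104 can be sketched as a single ratio (a minimal, hypothetical helper, not the application's own implementation):

```python
def occupancy(latency_ms: float, capacity_ms: float) -> float:
    """Occupancy rate = time delay data / total capacity time."""
    return latency_ms / capacity_ms

# Example: a 300 ms request against 180 s of capacity occupies about 0.17%.
print(occupancy(300, 180_000))
```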
Optionally, within one statistical period the same thread processing request may run multiple times. On one hand, the occupancy rate of a single run can be calculated; on the other hand, the total occupancy rate of the same thread processing request can also be calculated: the total time delay data of that request within the statistical period is computed, where the total time delay data is the number of times the request runs multiplied by its single-run time delay data, and the ratio of the total time delay data to the total capacity time of the thread pool then gives the total occupancy rate of the request. Calculating the total occupancy rate gives a more comprehensive evaluation of how much of the thread pool's resources a thread processing request occupies.
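The total occupancy rate described above can be sketched as follows (helper names are illustrative):

```python
def total_occupancy(runs: int, single_latency_ms: float, capacity_ms: float) -> float:
    """Total delay = number of runs x single-run delay; divide by capacity."""
    total_delay_ms = runs * single_latency_ms
    return total_delay_ms / capacity_ms

# Example: 1000 runs of 100 ms each against 180 s of capacity.
print(total_occupancy(1000, 100, 180_000))
```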
Optionally, besides monitoring the occupancy rate of each thread processing request, the thread usage rate of the thread pool may be calculated to monitor the pool's usage status. The thread usage rate is the sum of the occupancy rates of all thread processing requests. It can be obtained either by calculating the occupancy rate of each thread processing request separately and summing them, or by calculating the total time delay data of all thread processing requests and then taking the ratio of that total to the total capacity time of the thread pool.
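The pool-level thread usage rate is then a simple sum of per-request occupancy rates (the occupancy values below are illustrative, not from the application):

```python
def thread_usage(occupancies) -> float:
    """Thread usage rate = sum of the occupancy rates of all requests."""
    return sum(occupancies)

# Example: two requests occupying ~55.56% and ~27.78% of the pool's capacity.
print(thread_usage([0.5556, 0.2778]))
```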
Optionally, while monitoring the occupancy rates of the thread processing requests, the requests may be sorted by occupancy rate, for example in descending order from the highest occupancy rate to the lowest. Sorting the thread processing requests by occupancy rate makes it intuitive and quick to see which requests occupy more of the thread pool's resources.
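Such a descending sort can be sketched as follows (the URIs come from the second embodiment below; the occupancy values are illustrative):

```python
occupancy_by_uri = {"/getGiftList": 0.5556, "/sendGift": 0.2778}

# Sort the requests in descending order of occupancy rate.
ranked = sorted(occupancy_by_uri.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # highest-occupancy request first
```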
Optionally, to make reasonable use of thread pool resources, thread processing requests with a high occupancy rate may be limited, or the timing and number of occurrences of such requests may be planned reasonably. Specifically, a threshold for the occupancy rate is preset, and a thread processing request whose occupancy rate exceeds the preset threshold is regarded as a request occupying too many thread pool resources.
Specifically, the occupancy rate of each thread processing request is compared with the preset threshold, and the thread processing requests whose occupancy rate exceeds the preset threshold are marked. The occupancy rates may be compared with the preset threshold one by one; alternatively, the thread processing requests may first be divided into several levels by occupancy rate, for example a first level for occupancy rates of 40-60%, a second level for 20-40%, and a third level for 5-20%; the comparison and marking may then be performed only on the thread processing requests within a certain level, or all the thread processing requests within a certain level may be marked directly. Marking the thread processing requests makes it possible to quickly locate those occupying more of the thread pool's resources.
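The comparison-and-marking step can be sketched as follows; the level boundaries (40-60%, 20-40%, 5-20%) follow the text, while the threshold value and function names are illustrative assumptions:

```python
def level(occ: float):
    """Map an occupancy rate to the levels described in the text."""
    if 0.40 <= occ < 0.60:
        return 1  # first level
    if 0.20 <= occ < 0.40:
        return 2  # second level
    if 0.05 <= occ < 0.20:
        return 3  # third level
    return None

def mark(occupancy_by_uri: dict, threshold: float):
    """Mark the requests whose occupancy rate exceeds the preset threshold."""
    return [uri for uri, occ in occupancy_by_uri.items() if occ > threshold]

print(mark({"/getGiftList": 0.5556, "/sendGift": 0.2778}, 0.30))
```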
Optionally, the marked thread processing requests are limited, that is, the number or frequency of requests whose occupancy rate exceeds the preset threshold is restricted within the predetermined period; when the number of such requests reaches the limit, further requests are added to a waiting queue. Limiting these thread processing requests makes reasonable use of the thread pool's resources, prevents blocking, and further improves the usage rate of the thread pool.
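A minimal sketch of this limiting behavior, with a per-period counter and a waiting queue (the class and its names are illustrative, not from the application):

```python
from collections import deque

class RequestLimiter:
    def __init__(self, limit_per_period: int):
        self.limit = limit_per_period
        self.count = 0
        self.waiting = deque()

    def submit(self, request):
        """Run the request if under the limit; otherwise queue it."""
        if self.count < self.limit:
            self.count += 1
            return "run"
        self.waiting.append(request)
        return "queued"

    def reset_period(self):
        """Zero the counter when a new statistical period starts."""
        self.count = 0

limiter = RequestLimiter(limit_per_period=2)
print([limiter.submit(i) for i in range(3)])  # ['run', 'run', 'queued']
```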
In the above scheme, the occupancy rate of a thread processing request is calculated to determine how much of the thread pool's resources that request occupies, so that the usage of the thread pool can be monitored, blocking of the thread pool can be effectively controlled and prevented, and the thread processing requests causing the blocking can be monitored and located, so that corresponding remedies can be taken. The above method is described below in an application scenario, which is only an example and does not limit the technical solution.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for monitoring thread usage according to a second embodiment of the present application. In this embodiment, the thread pool includes 3 threads, and the method for monitoring thread usage comprises the following steps:
A thread processing request is obtained within the thread pool and its start time is recorded. For example, for request uri 1 (/getGiftList) and request uri 2 (/sendGift), the start time is recorded as 300 ms.
The end time is recorded when the thread processing request finishes executing, and the time delay data is calculated as the time difference between the end time and the start time. The time delay data of request 1 is 100 ms = 400 ms (end time) - 300 ms (start time); the time delay data of request 2 is 200 ms = 500 ms (end time) - 300 ms (start time).
The monitored data is then sent to an acquisition and computation unit for aggregation operations.
First, the total time that all threads in the thread pool can execute within one statistical period is calculated: 180 s (total capacity time) = 60 s (statistical period) x 3 (number of threads).
The actual total time delay of the requests processed within the statistical period is then calculated: 150 s (actual total time) = 100 s (total uri 1 delay) + 50 s (total uri 2 delay). In this statistical period, request 1 is executed 1000 times, so the total time delay data of request 1 is 100 s = 1000 (number of executions) x 100 ms (single-run time delay data); request 2 is executed 250 times, so the total time delay data of request 2 is 50 s = 250 (number of executions) x 200 ms (single-run time delay data).
Finally, the resource occupancy ratio of the thread pool is calculated: 83.33% (thread usage rate) = 150 s (actual total time) / 180 s (total capacity time).
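The arithmetic of this scenario can be reproduced as a short sketch (the figures are taken directly from the text above):

```python
period_ms, thread_count = 60_000, 3
capacity_ms = period_ms * thread_count   # 180 s of total capacity time
uri1_total_ms = 1000 * 100               # request 1: 1000 runs x 100 ms = 100 s
uri2_total_ms = 250 * 200                # request 2: 250 runs x 200 ms = 50 s

# Thread usage rate = actual total time / total capacity time, as a percentage.
usage_pct = round((uri1_total_ms + uri2_total_ms) / capacity_ms * 100, 2)
print(usage_pct)  # 83.33
```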
While the thread usage rate is calculated, the occupancy rate of each individual thread processing request may also be calculated; the request URIs (Uniform Resource Identifiers) consuming thread resources are then listed in descending order of occupancy rate, and the thread processing requests causing the high thread usage rate are located. These more resource-intensive thread processing requests can then be restricted to improve the utilization of the thread pool.
Based on the above method for monitoring thread usage, the present application also provides a device for monitoring thread usage. Referring to fig. 3, fig. 3 is a schematic structural diagram of a first embodiment of the device for monitoring thread usage according to the present application. In this embodiment, the device 30 for monitoring thread usage comprises a processor 301. The processor 301 is configured to obtain the start time and end time of a thread processing request and to calculate the time delay data of the request, where the time delay data is the time difference between the end time and the start time; the processor 301 is further configured to calculate the total capacity time of the thread pool in a predetermined period, and to calculate the occupancy rate of the thread pool capacity occupied by the thread processing request in the predetermined period, where the occupancy rate is the ratio of the time delay data to the total capacity time.
In an embodiment, the processor 301 is further configured to calculate the total occupancy rate of the thread pool capacity occupied by a thread processing request in the predetermined period. Specifically, the processor 301 calculates the total time delay data of the thread processing request, where the total time delay data is the number of times the request runs multiplied by its single-run time delay data, and the total occupancy rate is the ratio of the total time delay data to the total capacity time.
In one embodiment, the processor 301 is further configured to calculate a thread usage rate of the thread pool, where the thread usage rate is a sum of occupancy rates of all thread processing requests.
In one embodiment, the processor 301 is further configured to sort the thread processing requests according to occupancy.
In one embodiment, the processor 301 is further configured to compare the occupancy rates with a preset threshold, and to mark the thread processing requests whose occupancy rate exceeds the preset threshold.
In one embodiment, the processor 301 is further configured to limit, within the predetermined period, the number or frequency of the thread processing requests whose occupancy rate exceeds the preset threshold.
The device for monitoring thread usage can execute the above method and perform aggregation operations on the corresponding data, with the corresponding beneficial effects; for details, refer to the description of the above embodiments, which is not repeated here. The device for monitoring thread usage may be an independent device separate from the background server, or a module or processing unit within the server.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a device for monitoring thread usage according to a second embodiment of the present application. In this embodiment, the apparatus 40 for monitoring thread usage rate includes a first obtaining module 401, a second obtaining module 402, a first calculating module 403, and a second calculating module 404, where the first obtaining module 401 is configured to obtain a start time of a thread processing request; the second obtaining module 402 is configured to obtain an ending time of the thread processing request, and calculate time delay data of the thread processing request, where the time delay data is a time difference between the ending time and the starting time. The first calculating module 403 is configured to calculate a total time of the capacity of the thread pool in a predetermined period; the second calculating module 404 is configured to calculate an occupancy rate of the thread pool capacity occupied by the thread processing request in the predetermined period, where the occupancy rate is a ratio of the delay data to the total time of the capacity.
In an embodiment, the first calculating module 403 is specifically configured to multiply the total number of threads of the thread pool by the statistical time of the predetermined period to calculate the total capacity time of the thread pool.
In an embodiment, the second calculating module 404 is further configured to calculate the total occupancy rate of the thread pool capacity occupied by a thread processing request in the predetermined period. Specifically, the second calculating module 404 calculates the total time delay data of the thread processing request, where the total time delay data is the number of times the request runs multiplied by its single-run time delay data, and the total occupancy rate is the ratio of the total time delay data to the total capacity time. The second calculating module 404 is further configured to calculate the thread usage rate of the thread pool, where the thread usage rate is the sum of the occupancy rates of all thread processing requests.
In an embodiment, the apparatus for monitoring thread usage further includes a sorting module (not shown), and the sorting module is configured to sort the thread processing requests according to the occupancy rate.
In an embodiment, the apparatus for monitoring thread usage further includes a comparing module (not shown), where the comparing module is configured to compare the occupancy rates with a preset threshold, and to mark the thread processing requests whose occupancy rate exceeds the preset threshold.
In one embodiment, the apparatus for monitoring thread usage further includes a limiting module (not shown) configured to limit, within a predetermined period, the number or frequency of the thread processing requests whose occupancy rate exceeds the preset threshold.
The apparatus for monitoring thread usage can execute the above method and perform aggregation operations on the corresponding data, with the corresponding beneficial effects; for details, refer to the description of the above embodiments, which is not repeated here. The apparatus for monitoring thread usage may be an independent device separate from the background server, or a module or processing unit within the server.
Based on the above method for monitoring thread usage, the present application further provides a device with a storage function. Referring to fig. 5, fig. 5 is a schematic structural diagram of a first embodiment of the device with a storage function according to the present application. In this embodiment, the storage device 50 stores a program 501, and the above method for monitoring thread usage is implemented when the program 501 is executed. The specific working process is the same as in the above method embodiments and is therefore not repeated here; please refer to the description of the corresponding method steps above. The device with a storage function may be a portable storage medium such as a USB flash disk, an optical disc, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or any other medium capable of storing program code, and may also be a terminal, a server, or the like.
In the above scheme, the method for monitoring thread usage calculates the occupancy rate of a thread processing request to determine how much of the thread pool's resources that request occupies, so that the usage of the thread pool can be monitored, blocking of the thread pool can be effectively controlled and prevented, and the thread processing requests causing the blocking can be monitored and located, so that corresponding measures can be taken.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.