
CN109271290B - Method and device for monitoring thread utilization rate and storage device - Google Patents

Method and device for monitoring thread utilization rate and storage device

Info

Publication number
CN109271290B
CN109271290B CN201810847872.XA CN201810847872A
Authority
CN
China
Prior art keywords
thread
time
total
processing request
capacity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810847872.XA
Other languages
Chinese (zh)
Other versions
CN109271290A (en)
Inventor
匡凌轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd filed Critical Guangzhou Cubesili Information Technology Co Ltd
Priority to CN201810847872.XA priority Critical patent/CN109271290B/en
Publication of CN109271290A publication Critical patent/CN109271290A/en
Application granted granted Critical
Publication of CN109271290B publication Critical patent/CN109271290B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3003Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/302Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a software system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3051Monitoring arrangements for monitoring the configuration of the computing system or of the computing system component, e.g. monitoring the presence of processing resources, peripherals, I/O links, software programs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3409Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G06F11/3419Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment by assessing time

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Computer Hardware Design (AREA)
  • Mathematical Physics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application discloses a method, an apparatus and a storage device for monitoring thread usage. The method comprises the following steps: acquiring the start time of a thread processing request; acquiring the end time of the thread processing request and calculating the time delay data of the thread processing request, where the time delay data is the time difference between the end time and the start time; calculating the total capacity time of a thread pool within a predetermined period; and calculating the occupancy rate of the thread pool capacity occupied by the thread processing request within the predetermined period, where the occupancy rate is the ratio of the time delay data to the total capacity time. In this manner, the thread usage rate can be monitored and analyzed.

Description

Method and device for monitoring thread utilization rate and storage device
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method, an apparatus, and a storage apparatus for monitoring thread usage rate.
Background
With the development of internet technology, the number of requests that internet back-end services must process keeps growing. Such back-end services mainly handle request tasks in a multithreaded, concurrent manner. At run time, some threads become blocked or take a long time to execute, so thread resources run short and new requests cannot be processed, causing faults. In that situation a developer usually has to inspect the logs to find out which request is being processed; this approach is labor-intensive and time-consuming, and locating the problem is especially difficult when the number of requests being processed is large.
Disclosure of Invention
The present application provides a method, an apparatus and a storage device for monitoring thread usage, which are mainly used to monitor and analyze the thread usage rate.
In order to solve the above technical problem, one technical solution adopted by the present application is to provide a method of monitoring thread usage, the method comprising: acquiring the start time of a thread processing request; acquiring the end time of the thread processing request and calculating the time delay data of the thread processing request, where the time delay data is the time difference between the end time and the start time; calculating the total capacity time of a thread pool within a predetermined period; and calculating the occupancy rate of the thread pool capacity occupied by the thread processing request within the predetermined period, where the occupancy rate is the ratio of the time delay data to the total capacity time.
The total occupancy rate of the thread pool capacity occupied by a thread processing request within the predetermined period is also calculated, wherein the total time delay data of the thread processing request is calculated, the total time delay data being the number of times the thread processing request was made multiplied by the time delay data of a single request, and the total occupancy rate being the ratio of the total time delay data to the total capacity time.
And calculating the thread utilization rate of the thread pool, wherein the thread utilization rate is the sum of the occupancy rates of all thread processing requests.
Wherein the total capacity time is the total number of threads of the thread pool multiplied by the statistical time of the predetermined period.
And sorting the thread processing requests according to the occupancy rates.
Wherein the occupancy rate is compared with a preset threshold, and the thread processing requests whose occupancy rate is greater than the preset threshold are marked.
And limiting the number of requests or the request frequency, within the predetermined period, of the thread processing requests whose occupancy rate is greater than the preset threshold.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide an apparatus for monitoring thread usage, the apparatus comprising: a first acquisition module, configured to acquire the start time of a thread processing request; a second acquisition module, configured to acquire the end time of the thread processing request and calculate the time delay data of the thread processing request, the time delay data being the time difference between the end time and the start time; a first calculation module, configured to calculate the total capacity time of a thread pool within a predetermined period; and a second calculation module, configured to calculate the occupancy rate of the thread pool capacity occupied by the thread processing request within the predetermined period, the occupancy rate being the ratio of the time delay data to the total capacity time.
In order to solve the above technical problem, a further technical solution adopted by the present application is to provide an apparatus for monitoring thread usage, comprising a processor, wherein the processor is configured to acquire the start time and the end time of a thread processing request and to calculate the time delay data of the thread processing request, the time delay data being the time difference between the end time and the start time; the processor is further configured to calculate the total capacity time of a thread pool within a predetermined period, and to calculate the occupancy rate of the thread pool capacity occupied by the thread processing request within the predetermined period, the occupancy rate being the ratio of the time delay data to the total capacity time.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a device having a memory function, the device storing a program which, when executed, implements the method of monitoring thread usage described above.
The beneficial effect of the present application is that, unlike the prior art, the amount of thread pool resources occupied by a thread processing request can be obtained by calculating the occupancy rate of that request. The usage rate of the thread pool can thus be monitored, blocking of the thread pool can be effectively controlled and prevented, and the thread processing requests that cause the thread pool to block can be monitored and located so that corresponding measures can be taken.
Drawings
FIG. 1 is a flowchart illustrating a first embodiment of a method for monitoring thread usage according to the present application;
FIG. 2 is a flowchart illustrating a second embodiment of a method for monitoring thread usage according to the present application;
FIG. 3 is a schematic structural diagram of a first embodiment of an apparatus for monitoring thread usage according to the present application;
FIG. 4 is a schematic structural diagram illustrating a second embodiment of an apparatus for monitoring thread usage according to the present application;
FIG. 5 is a schematic structural diagram of a first embodiment of the apparatus with a storage function according to the present application.
Detailed Description
In order to make the purpose, technical solution and effect of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments.
The present application provides a method, an apparatus and a storage device for monitoring thread usage, which apply at least to the following scenario: when a background server processes task requests, the proportion of the thread pool's total capacity occupied by each task request is monitored and calculated. This makes it easier to analyze which task requests consume excessive thread pool resources, cause thread blocking, exhaust thread resources, and leave new requests unprocessed.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for monitoring thread usage according to a first embodiment of the present disclosure. In this embodiment, the method of monitoring thread usage comprises the steps of:
s101: the start time of the thread processing request is obtained.
A thread pool is a multithreaded processing scheme. It handles concurrency among multiple threads effectively, avoids the blocking caused by a large number of threads contending for system resources, and reduces the performance cost of frequently creating and destroying threads. By default, the number of threads in a thread pool is 0 after the pool is created; when a task arrives, a thread is created to execute it; once the number of threads in the pool reaches the maximum thread count, newly arriving tasks are placed in a cache queue. The maximum thread count indicates how many threads can be created in the thread pool at most within a predetermined period. The predetermined period is used for statistics: timing starts when the statistical period begins, stops when the period ends, and is then reset to zero before a new period of statistics begins. Taking the start of the statistical period as the reference point (denoted 0 point or 0 seconds), the moment a new thread is created to execute an arriving task is recorded as the start time of the thread processing request. For example, if the statistical period starts at 0 seconds and a new thread is created to execute a task at the 200th millisecond, the start time of the thread processing request is 200 ms.
S102: and acquiring the end time of the thread processing request, and calculating the time delay data of the thread processing request.
After the thread processing request starts, timing continues and the end time of the request is recorded; for example, if the thread processing request ends at 500 ms, its end time is 500 ms. The duration of the whole thread processing request is the time delay data; specifically, the time delay data is the time difference between the end time and the start time. It may be obtained by subtracting the start time from the end time, or by subtracting the end time from the start time and taking the absolute value. For example, if the start time of a thread processing request is 200 ms and the end time is 500 ms, the time delay data is 300 ms (500 ms − 200 ms).
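For illustration only — the following minimal Java sketch is not part of the original disclosure, and the class and field names are assumptions — the start time, end time and time delay data of a request can be recorded as follows:

```java
// Illustrative sketch (not from the patent): timing one thread processing request.
public class RequestTiming {
    final String uri;        // identifier of the processed request, e.g. a request URI
    final long startMillis;  // start time relative to the start of the statistical period, in ms
    long endMillis;          // end time relative to the start of the statistical period, in ms

    RequestTiming(String uri, long startMillis) {
        this.uri = uri;
        this.startMillis = startMillis;
    }

    // Time delay data: the time difference between the end time and the start time.
    long delayMillis() {
        return endMillis - startMillis;
    }

    public static void main(String[] args) {
        RequestTiming t = new RequestTiming("/getGiftList", 200);
        t.endMillis = 500;
        System.out.println(t.delayMillis() + " ms"); // prints "300 ms", as in the example above
    }
}
```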
S103: and calculating the total capacity time of the thread pool in the preset period.
The total capacity time of the thread pool is the total execution time available to all threads of the thread pool within one statistical period, i.e., the total number of threads in the thread pool multiplied by the statistical time of the predetermined period.
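A minimal sketch of the total capacity time calculation, assuming the period length and the number of pool threads are known (the class and method names are illustrative):

```java
// Illustrative: total capacity time = number of pool threads x statistical period length.
public class PoolCapacity {
    static long totalCapacityMillis(int threadCount, long periodMillis) {
        return (long) threadCount * periodMillis;
    }

    public static void main(String[] args) {
        // 3 threads over a 60 s statistical period -> 180 s of total capacity time,
        // matching the worked example in the second embodiment below.
        System.out.println(totalCapacityMillis(3, 60_000) + " ms");
    }
}
```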
S104: and calculating the occupancy rate of the thread processing request occupying the thread pool capacity in the preset period.
The occupancy rate is the ratio of the time delay data to the total capacity time. Within a statistical period, the total capacity time of the thread pool is less than or equal to a preset duration and depends specifically on the number of threads in the pool; each running thread processing request occupies part of the thread pool resources, and blocking tends to occur when the thread pool is full. In this embodiment, the occupancy rate of a thread processing request is calculated to determine how much of the thread pool's resources that request occupies. The usage rate of the thread pool can thus be monitored, blocking of the thread pool can be effectively controlled and prevented, and the thread processing requests that cause the pool to block can be monitored and located so that corresponding measures can be taken.
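The occupancy calculation of step S104 can be sketched as follows; this is an illustrative example under the definitions above, not the patented implementation itself:

```java
// Illustrative: occupancy = time delay data / total capacity time of the pool.
public class Occupancy {
    static double occupancy(long delayMillis, long totalCapacityMillis) {
        return (double) delayMillis / totalCapacityMillis;
    }

    public static void main(String[] args) {
        // A single 300 ms request against 180 s of total capacity occupies about 0.17% of the pool.
        System.out.printf("%.4f%%%n", occupancy(300, 180_000) * 100);
    }
}
```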
Optionally, within one statistical period the same thread processing request may have been run multiple times. On the one hand, the occupancy rate of a single thread processing request can be calculated; on the other hand, the total occupancy rate of the same thread processing request can also be calculated. That is, the total time delay data of the same thread processing request within a statistical period is calculated, where the total time delay data is the number of times the request was executed multiplied by the time delay data of a single request, and the ratio of the total time delay data to the total capacity time of the thread pool then gives the total occupancy rate of the thread processing request. Calculating the total occupancy rate gives a more complete picture of how much of the thread pool's resources that thread processing request occupies.
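An illustrative sketch of the total occupancy calculation, assuming the single-request time delay data and the number of executions have already been collected:

```java
// Illustrative: total occupancy = (executions x single-request delay) / total capacity time.
public class TotalOccupancy {
    static double totalOccupancy(long singleDelayMillis, long executions, long totalCapacityMillis) {
        return (double) (singleDelayMillis * executions) / totalCapacityMillis;
    }

    public static void main(String[] args) {
        // 1000 executions of a 100 ms request against 180 s of capacity -> about 55.6%.
        System.out.printf("%.2f%%%n", totalOccupancy(100, 1000, 180_000) * 100);
    }
}
```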
Optionally, in addition to monitoring the occupancy rate of each thread processing request, the thread usage rate of the thread pool may be calculated to monitor the usage status of the pool. The thread usage rate is the sum of the occupancy rates of all thread processing requests. It can be obtained either by calculating the occupancy rate of each thread processing request separately and summing them, or by calculating the total time delay data of all thread processing requests and then taking its ratio to the total capacity time of the thread pool.
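A sketch of the thread usage rate as the sum of per-request occupancies; the request URIs and values are taken from the worked example of the second embodiment below and are otherwise illustrative:

```java
import java.util.Map;

// Illustrative: thread usage rate of the pool = sum of the occupancy rates of all requests.
public class PoolUsage {
    static double usageRate(Map<String, Double> occupancyByRequest) {
        return occupancyByRequest.values().stream().mapToDouble(Double::doubleValue).sum();
    }

    public static void main(String[] args) {
        // uri1 occupies 100 s / 180 s and uri2 occupies 50 s / 180 s -> usage rate of about 83.33%.
        Map<String, Double> occupancy = Map.of("/getGiftList", 100.0 / 180, "/sendGift", 50.0 / 180);
        System.out.printf("%.2f%%%n", usageRate(occupancy) * 100);
    }
}
```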
Optionally, while the occupancy rates of the thread processing requests are being monitored, the thread processing requests may be sorted by occupancy rate, for example in descending order. Sorting the requests by occupancy rate makes it immediately clear which thread processing requests occupy the most thread pool resources.
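A sketch of sorting the thread processing requests in descending order of occupancy (the request names and values are illustrative assumptions):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative: rank request URIs so the ones consuming most of the pool appear first.
public class OccupancyRanking {
    public static void main(String[] args) {
        Map<String, Double> occupancy = Map.of(
                "/getGiftList", 0.5556,
                "/sendGift", 0.2778,
                "/heartbeat", 0.0010);

        List<Map.Entry<String, Double>> ranked = new ArrayList<>(occupancy.entrySet());
        ranked.sort((a, b) -> Double.compare(b.getValue(), a.getValue())); // descending by occupancy

        ranked.forEach(e -> System.out.printf("%-14s %.2f%%%n", e.getKey(), e.getValue() * 100));
    }
}
```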
Optionally, to make reasonable use of thread pool resources, thread processing requests with a high occupancy rate may be limited, or the timing and number of their occurrences may be planned appropriately. A threshold for the occupancy rate is preset, and any thread processing request whose occupancy rate is greater than the preset threshold is regarded as a thread processing request that occupies a large share of thread pool resources.
Specifically, the occupancy rate of each thread processing request is compared with the preset threshold, and the thread processing requests whose occupancy rate is greater than the preset threshold are marked. The occupancy rates may be compared with the preset threshold one by one; alternatively, the thread processing requests may first be divided into several levels by occupancy rate, for example a first level for 40-60%, a second level for 20-40%, and a third level for 5-20%, after which the comparison and marking are performed only on the requests within a certain level, or all requests within a certain level are marked directly. Marking the thread processing requests makes it possible to quickly locate the requests that occupy a large share of thread pool resources.
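An illustrative sketch of comparing occupancy rates against a preset threshold and marking the requests that exceed it; the 40% threshold and the request names are assumptions, not values given in the disclosure:

```java
import java.util.Map;

// Illustrative: mark every thread processing request whose occupancy exceeds the preset threshold.
public class OccupancyMarker {
    static final double PRESET_THRESHOLD = 0.40; // assumed threshold: 40% of the pool capacity

    public static void main(String[] args) {
        Map<String, Double> occupancy = Map.of("/getGiftList", 0.5556, "/sendGift", 0.2778);
        occupancy.forEach((uri, rate) -> {
            if (rate > PRESET_THRESHOLD) {
                System.out.printf("marked: %s (%.2f%%)%n", uri, rate * 100);
            }
        });
    }
}
```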
Optionally, the marked thread processing requests are limited; that is, the number of requests or the request frequency, within the predetermined period, of the thread processing requests whose occupancy rate is greater than the preset threshold is limited. If the number of requests reaches the limit, further requests are added to a waiting queue. Limiting these thread processing requests allows thread pool resources to be used reasonably, prevents blocking, and further improves the utilization of the thread pool.
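One possible way to limit a marked thread processing request to a fixed number of executions per statistical period, queueing the excess, is sketched below; this is an assumption about an implementation, not the mechanism claimed by the patent:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative: admit at most `limit` executions of a marked request per statistical period;
// excess requests wait in a queue until the next period begins.
public class RequestLimiter {
    private final int limit;
    private int countInPeriod;
    private final Queue<Runnable> waiting = new ArrayDeque<>();

    RequestLimiter(int limit) {
        this.limit = limit;
    }

    synchronized void submit(Runnable task) {
        if (countInPeriod < limit) {
            countInPeriod++;
            task.run(); // in practice the task would be handed to the thread pool here
        } else {
            waiting.add(task); // deferred until the next statistical period
        }
    }

    synchronized void onPeriodReset() {
        countInPeriod = 0;
        while (countInPeriod < limit && !waiting.isEmpty()) {
            countInPeriod++;
            waiting.poll().run();
        }
    }

    public static void main(String[] args) {
        RequestLimiter limiter = new RequestLimiter(2);
        for (int i = 1; i <= 3; i++) {
            int n = i;
            limiter.submit(() -> System.out.println("handled request " + n));
        }
        limiter.onPeriodReset(); // the queued third request runs in the next period
    }
}
```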
According to the above scheme, the amount of thread pool resources occupied by a thread processing request can be obtained by calculating its occupancy rate, so that the usage rate of the thread pool can be monitored, blocking of the thread pool can be effectively controlled and prevented, and the thread processing requests that cause the pool to block can be monitored and located so that corresponding measures can be taken. The method is described below in an application scenario, which is only an example and does not limit the technical solution.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for monitoring thread usage according to a second embodiment of the present application. In this embodiment, the thread pool includes 3 threads, and the method for monitoring thread usage comprises the following steps:
a thread processing request is obtained within the thread pool and a start time is recorded. As get request uri 1: /getGiftList and request uri 2: (sendGift) records the start time, e.g., 300 ms.
When a thread processing request finishes executing, its end time is recorded and the time delay data is calculated, the time delay data being the time difference between the end time and the start time. The time delay data of request 1 is 100 ms (delay data) = 400 ms (end time) − 300 ms (start time); the time delay data of request 2 is 200 ms (delay data) = 500 ms (end time) − 300 ms (start time).
The monitored data is then sent to a collection and computation unit for aggregation operations.
The total time that all threads in the thread pool can execute within one statistical period is calculated, namely: 180 s (total capacity time) = 60 s (period time) × 3 (number of threads).
The actual total time delay spent processing each request within the statistical period is calculated, namely: 150 s (actual total time) = 100 s (sum of uri1 delays) + 50 s (sum of uri2 delays). In this statistical period, request 1 was executed 1000 times, so the total time delay data of request 1 is 100 s (uri1 total delay) = 1000 (number of executions) × 100 ms (single delay data); request 2 was executed 250 times, so the total time delay data of request 2 is 50 s (uri2 total delay) = 250 (number of executions) × 200 ms (single delay data).
The resource capacity ratio of the thread pool is calculated, namely: 83.33% (thread usage rate) = 150 s (actual total time) / 180 s (total capacity time).
While the thread usage rate is being calculated, the occupancy rate of each individual thread processing request can also be calculated. The request URIs (Uniform Resource Identifiers) are then listed in descending order of the thread resources they consume, according to occupancy rate, and the thread processing requests that cause the high thread usage rate are located. These resource-intensive thread processing requests can then be limited to improve the utilization of the thread pool.
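For illustration, the figures of this embodiment can be recomputed with a short sketch (the variable names are assumptions):

```java
// Illustrative recomputation of the second embodiment: 3 threads over a 60 s period,
// uri1 executed 1000 times at 100 ms each and uri2 executed 250 times at 200 ms each.
public class EmbodimentExample {
    public static void main(String[] args) {
        long totalCapacityMs = 3L * 60_000;      // 180 s total capacity time
        long uri1TotalMs = 1000L * 100;          // 100 s total delay for uri1
        long uri2TotalMs = 250L * 200;           //  50 s total delay for uri2
        double usageRate = (double) (uri1TotalMs + uri2TotalMs) / totalCapacityMs;
        System.out.printf("thread usage rate: %.2f%%%n", usageRate * 100); // 83.33%
    }
}
```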
Based on the above method of monitoring thread usage, the present application further provides an apparatus for monitoring thread usage; please refer to fig. 3, which is a schematic structural diagram of a first embodiment of the apparatus for monitoring thread usage of the present application. In this embodiment, the apparatus 30 for monitoring thread usage comprises a processor 301, configured to acquire the start time and end time of a thread processing request and to calculate the time delay data of the thread processing request, the time delay data being the time difference between the end time and the start time. The processor 301 is further configured to calculate the total capacity time of the thread pool within a predetermined period, and to calculate the occupancy rate of the thread pool capacity occupied by the thread processing request within the predetermined period, the occupancy rate being the ratio of the time delay data to the total capacity time.
In an embodiment, the processor 301 is further configured to calculate the total occupancy rate of the thread pool capacity occupied by a thread processing request within the predetermined period. Specifically, the processor 301 calculates the total time delay data of the thread processing request, the total time delay data being the number of requests made by the thread processing request multiplied by the single-request time delay data, and the total occupancy rate is the ratio of the total time delay data to the total capacity time.
In one embodiment, the processor 301 is further configured to calculate a thread usage rate of the thread pool, where the thread usage rate is a sum of occupancy rates of all thread processing requests.
In one embodiment, the processor 301 is further configured to sort the thread processing requests according to occupancy.
In one embodiment, the processor 301 is further configured to compare the occupancy rate with a preset threshold and to mark the thread processing requests whose occupancy rate is greater than the preset threshold.
In one embodiment, the processor 301 is further configured to limit the number of requests or the request frequency, within the predetermined period, of the thread processing requests whose occupancy rate is greater than the preset threshold.
The above apparatus for monitoring thread usage can be used to perform the above method and the aggregation operations on the corresponding data, and has the corresponding beneficial effects; for details, refer to the description of the above embodiments, which is not repeated here. The apparatus for monitoring thread usage may be an independent device separate from the background server, or it may be a module or processing unit within the server.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a device for monitoring thread usage according to a second embodiment of the present application. In this embodiment, the apparatus 40 for monitoring thread usage rate includes a first obtaining module 401, a second obtaining module 402, a first calculating module 403, and a second calculating module 404, where the first obtaining module 401 is configured to obtain a start time of a thread processing request; the second obtaining module 402 is configured to obtain an ending time of the thread processing request, and calculate time delay data of the thread processing request, where the time delay data is a time difference between the ending time and the starting time. The first calculating module 403 is configured to calculate a total time of the capacity of the thread pool in a predetermined period; the second calculating module 404 is configured to calculate an occupancy rate of the thread pool capacity occupied by the thread processing request in the predetermined period, where the occupancy rate is a ratio of the delay data to the total time of the capacity.
In an embodiment, the first calculating module 403 is specifically configured to multiply the total number of threads of the thread pool by the statistical time of the predetermined period to calculate the total capacity time of the thread pool.
In an embodiment, the second calculating module 404 is further configured to calculate the total occupancy rate of the thread pool capacity occupied by a thread processing request within the predetermined period. Specifically, the second calculating module 404 calculates the total time delay data of the thread processing request, the total time delay data being the number of requests made by the thread processing request multiplied by the single-request time delay data, and the total occupancy rate is the ratio of the total time delay data to the total capacity time. The second calculating module 404 is further configured to calculate the thread usage rate of the thread pool, the thread usage rate being the sum of the occupancy rates of all thread processing requests.
In an embodiment, the apparatus for monitoring thread usage further includes a sorting module (not shown), and the sorting module is configured to sort the thread processing requests according to the occupancy rate.
In an embodiment, the apparatus for monitoring thread usage further includes a comparing module (not shown), where the comparing module is configured to compare the occupancy rate with a preset threshold; and marking the thread with the occupancy rate larger than a preset threshold value to process the request.
In one embodiment, the apparatus for monitoring thread usage further includes a limiting module (not shown) configured to limit a number of times or a frequency of processing requests of threads having an occupancy rate greater than a preset threshold within a predetermined period.
The above apparatus for monitoring thread usage can likewise be used to perform the above method and the aggregation operations on the corresponding data, and has the corresponding beneficial effects; for details, refer to the description of the above embodiments, which is not repeated here. The apparatus for monitoring thread usage may be an independent device separate from the background server, or it may be a module or processing unit within the server.
Based on the above method of monitoring thread usage, the present application further provides a device with a storage function; please refer to fig. 5, which is a schematic structural diagram of a first embodiment of the device with a storage function of the present application. In this embodiment, the storage device 50 stores a program 501, and the above method of monitoring thread usage is implemented when the program 501 is executed. The specific working process is the same as in the above method embodiments and is therefore not repeated here; for details, refer to the description of the corresponding method steps above. The device with a storage function may be a portable storage medium such as a USB flash drive, an optical disc, a portable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or any other medium capable of storing program code, and may also be a terminal, a server, or the like.
According to the above scheme, the method of monitoring thread usage of the present application can obtain the amount of resources occupied by a thread processing request by calculating its occupancy rate, thereby monitoring the usage rate of the thread pool, effectively controlling and preventing blocking of the thread pool, and at the same time monitoring and locating the thread processing requests that cause the thread pool to block so that corresponding measures can be taken.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (9)

1. A method of monitoring thread usage, the method comprising:
acquiring the starting time of a thread processing request;
acquiring the ending time of a thread processing request, and calculating the time delay data of the thread processing request, wherein the time delay data is the time difference between the ending time and the starting time;
calculating the total capacity time of a thread pool in a preset period, wherein the total capacity time is the total execution time of all threads of the thread pool in the preset period, and the total capacity time is the statistical time of the preset period multiplied by the total thread number of the thread pool;
and calculating the occupancy rate of the thread processing request occupying the capacity of the thread pool in a preset period, wherein the occupancy rate is the ratio of the time delay data to the total time of the capacity.
2. The method of claim 1, wherein the calculating the occupancy of the thread pool capacity occupied by thread processing requests in a predetermined period further comprises:
and calculating the total occupancy rate of the thread processing request occupying the capacity of the thread pool in a preset period, wherein the total time delay data of the thread processing request is calculated, the total time delay data is the time of the request of the thread processing request multiplied by the time delay data of a single request, and the total occupancy rate is the ratio of the total time delay data to the total time of the capacity.
3. The method of monitoring thread usage according to claim 1, the method further comprising:
and calculating the thread utilization rate of the thread pool, wherein the thread utilization rate is the sum of the occupancy rates of all the thread processing requests.
4. The method of claim 1, wherein the calculating the occupancy rate of the thread pool capacity occupied by the thread processing requests in the predetermined period further comprises:
and sorting the thread processing requests according to the occupancy rates.
5. The method of claim 1, wherein the calculating the occupancy rate of the thread pool capacity occupied by the thread processing requests in the predetermined period further comprises:
comparing the occupancy rate with a preset threshold value;
marking the thread processing requests with the occupancy rates larger than the preset threshold value.
6. The method of monitoring thread usage according to claim 5, wherein the marking of the thread processing requests whose occupancy rate is greater than the preset threshold further comprises:
and limiting the number of times or frequency of the thread processing requests with the occupancy rate larger than the preset threshold value in a preset period.
7. An apparatus for monitoring thread usage, the apparatus comprising:
the first acquisition module is used for acquiring the starting time of the thread processing request;
a second obtaining module, configured to obtain an end time of a thread processing request, and calculate time delay data of the thread processing request, where the time delay data is a time difference between the end time and the start time;
the system comprises a first calculation module, a second calculation module and a third calculation module, wherein the first calculation module is used for calculating the total capacity time of a thread pool in a preset period, the total capacity time is the total execution time of all threads of the thread pool in the preset period, and the total capacity time is the statistical time of the preset period multiplied by the number of bus threads of the thread pool;
and the second calculation module is used for calculating the occupancy rate of the thread pool capacity occupied by the thread processing request in a preset period, wherein the occupancy rate is the ratio of the time delay data to the total time of the capacity.
8. A device for monitoring thread usage, characterized by comprising a processor, wherein the processor is configured to acquire the start time and the end time of a thread processing request and to calculate the time delay data of the thread processing request, the time delay data being the time difference between the end time and the start time; the processor is further configured to calculate the total capacity time of a thread pool in a predetermined period, wherein the total capacity time is the total execution time of all threads of the thread pool in the predetermined period, and the total capacity time is the statistical time of the predetermined period multiplied by the total number of threads of the thread pool; and to calculate the occupancy rate of the thread pool capacity occupied by the thread processing request in the predetermined period, wherein the occupancy rate is the ratio of the time delay data to the total capacity time.
9. An apparatus having a storage function, wherein the apparatus stores a program which, when executed, implements the method of monitoring thread usage of any of claims 1 to 6.
CN201810847872.XA 2018-07-27 2018-07-27 Method and device for monitoring thread utilization rate and storage device Active CN109271290B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810847872.XA CN109271290B (en) 2018-07-27 2018-07-27 Method and device for monitoring thread utilization rate and storage device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810847872.XA CN109271290B (en) 2018-07-27 2018-07-27 Method and device for monitoring thread utilization rate and storage device

Publications (2)

Publication Number Publication Date
CN109271290A CN109271290A (en) 2019-01-25
CN109271290B true CN109271290B (en) 2022-06-07

Family

ID=65152909

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810847872.XA Active CN109271290B (en) 2018-07-27 2018-07-27 Method and device for monitoring thread utilization rate and storage device

Country Status (1)

Country Link
CN (1) CN109271290B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109960637A (en) * 2019-03-20 2019-07-02 苏州浪潮智能科技有限公司 A kind of method and system for analyzing network interface card CPU usage
CN111831519A (en) * 2019-04-16 2020-10-27 阿里巴巴集团控股有限公司 Data acquisition method, device and equipment
CN113760632B (en) * 2020-08-10 2024-06-18 北京沃东天骏信息技术有限公司 Thread pool performance monitoring method, device, equipment and storage medium
CN112749013B (en) * 2021-01-19 2024-04-19 广州虎牙科技有限公司 Thread load detection method and device, electronic equipment and storage medium
CN114138499B (en) * 2022-01-29 2022-05-06 苏州浪潮智能科技有限公司 GPU resource utilization rate monitoring method and device, computer equipment and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106506389A (en) * 2016-10-19 2017-03-15 广州华多网络科技有限公司 Network request asynchronous processing method and device
CN107046510A (en) * 2017-01-13 2017-08-15 广西电网有限责任公司电力科学研究院 A kind of node and its system of composition suitable for distributed computing system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7984452B2 (en) * 2006-11-10 2011-07-19 Cptn Holdings Llc Event source management using a metadata-driven framework
CN101505243B (en) * 2009-03-10 2011-01-05 中国科学院软件研究所 Performance exception detecting method for Web application
CN101876933A (en) * 2009-04-28 2010-11-03 深圳富泰宏精密工业有限公司 Analysis system and method for CPU utilization rate
CN103870348A (en) * 2012-12-14 2014-06-18 中国电信股份有限公司 Test method and system for concurrent user access
CN106302594B (en) * 2015-05-29 2019-11-05 广州华多网络科技有限公司 A kind of method and apparatus of determining process loading condition
CN105630606A (en) * 2015-12-22 2016-06-01 山东中创软件工程股份有限公司 Method and device for adjusting capacity of thread pools
CN107179975A (en) * 2016-03-09 2017-09-19 北京京东尚科信息技术有限公司 monitoring method and device
CN106294168B (en) * 2016-08-16 2018-10-23 广州华多网络科技有限公司 A kind of method and system carrying out Application testing
CN107870800A (en) * 2016-09-23 2018-04-03 超威半导体(上海)有限公司 Virtual machine activity detects
CN107678861B (en) * 2017-10-16 2020-11-24 广州酷狗计算机科技有限公司 Method and device for processing function execution request

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106506389A (en) * 2016-10-19 2017-03-15 广州华多网络科技有限公司 Network request asynchronous processing method and device
CN107046510A (en) * 2017-01-13 2017-08-15 广西电网有限责任公司电力科学研究院 A kind of node and its system of composition suitable for distributed computing system

Also Published As

Publication number Publication date
CN109271290A (en) 2019-01-25

Similar Documents

Publication Publication Date Title
CN109271290B (en) Method and device for monitoring thread utilization rate and storage device
Lu et al. Log-based abnormal task detection and root cause analysis for spark
EP2503733B1 (en) Data collecting method, data collecting apparatus and network management device
CN106452818B (en) Resource scheduling method and system
US8881164B2 (en) Computer process with utilization reduction
US9870269B1 (en) Job allocation in a clustered environment
CN111563014B (en) Interface service performance test method, device, equipment and storage medium
US20120137295A1 (en) Method for displaying cpu utilization in a multi-processing system
KR20120026046A (en) Application efficiency engine
JP2012531642A (en) Time-based context sampling of trace data with support for multiple virtual machines
CN109144862A (en) Statistical method, device, computer equipment and the storage medium of test data
WO2014208139A1 (en) Fault detection device, control method, and program
US9442817B2 (en) Diagnosis of application server performance problems via thread level pattern analysis
CN106528318B (en) Thread dead loop detection method and device
CN112749013B (en) Thread load detection method and device, electronic equipment and storage medium
WO2017107456A1 (en) Method and apparatus for determining resources consumed by task
CN106209412B (en) Resource monitoring system and method thereof
DE112011101759B4 (en) Sampling of idle transitions
CN112379935A (en) Spark performance optimization control method, device, equipment and storage medium
US8001341B2 (en) Managing dynamically allocated memory in a computer system
Çavdar et al. Quantifying the brown side of priority schedulers: Lessons from big clusters
CN107220166B (en) A kind of statistical method and device of CPU usage
CN110928750B (en) Data processing method, device and equipment
CN109032814B (en) Mobile terminal, method for monitoring interprocess communication of mobile terminal and storage medium
CN110928663A (en) Cross-platform multithreading monitoring method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20210118

Address after: 511442 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Applicant after: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 511449 28th floor, block B1, Wanda Plaza, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Applicant before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190125

Assignee: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

Assignor: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Contract record no.: X2021440000054

Denomination of invention: A method, device and storage device for monitoring thread utilization rate

License type: Common License

Record date: 20210208

GR01 Patent grant
GR01 Patent grant