CN114237895B - Thread resource scheduling method and device, storage medium and electronic equipment - Google Patents
- Publication number
- CN114237895B (application CN202111559737.3A)
- Authority
- CN
- China
- Prior art keywords: thread, node, allocated, determining, time complexity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
Abstract
The disclosure provides a thread resource scheduling method and device, a storage medium, and electronic equipment, and relates to the field of computer technology. The method comprises the following steps: obtaining the maximum number of threads available in the thread resource and the initial time complexity of each thread node to be allocated; determining the thread weight of each thread node to be allocated according to its initial time complexity; and determining the number of threads for each thread node to be allocated according to its thread weight and the maximum thread number. The method solves the problem in the related art of unbalanced allocation of thread resources among nodes, greatly improves the efficiency with which nodes process data, exploits the performance of the server hardware to the maximum extent, and ensures the stability of the system.
Description
Technical Field
The disclosure relates to the technical field of computers, and in particular relates to a method and a device for scheduling thread resources, a storage medium and electronic equipment.
Background
With the rapid development of computer and communication technologies, massive data must undergo various complex processing steps, and these steps are often performed by different nodes. Because each node differs in both its data processing mode and its processing efficiency, an obvious bucket effect appears in overall performance; under this condition it is difficult to guarantee the stability of the system and its data processing efficiency. Most conventional solutions to this problem directly invoke the long-running nodes in a multithreaded manner, but when this approach is applied to multi-node scheduling, it easily causes excessive system resource occupation and uneven resource allocation among the nodes of the system, leaving serious hidden dangers: once a condition such as memory overflow occurs, the consequences are irreparable. The field of scientific and efficient scheduling algorithms therefore still leaves a very large space for exploration.
In summary, how to ensure the balance of the thread resource allocation of each node is a problem to be solved at present.
Disclosure of Invention
The disclosure aims to provide a method, a device, a storage medium and electronic equipment for scheduling thread resources, so as to solve the problem of unbalanced thread resource allocation of each node in the related art.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to one aspect of the present disclosure, there is provided a method of thread resource scheduling, comprising: obtaining the maximum thread number available in the thread resource and obtaining the initial time complexity of the thread node to be allocated; determining the thread weight of the thread node to be allocated according to the initial time complexity; and determining the thread number of the thread node to be allocated according to the thread weight of the thread node to be allocated and the maximum thread number.
In one embodiment of the present disclosure, the determining the thread weight of the thread node to be allocated according to the initial time complexity includes: determining the proportion of the initial time complexity of each thread node to be allocated to the sum of the initial time complexity of all the thread nodes to be allocated, and determining the proportion as the thread weight of each thread node to be allocated.
In one embodiment of the present disclosure, after determining the number of threads of the thread node to be allocated, the method further comprises: acquiring, based on a timing task, the operating efficiency of the thread node to be allocated; judging, according to the operating efficiency, whether the thread node to be allocated satisfies the thread weight update condition; and updating the thread weight of the thread node to be allocated in response to the thread weight update condition being satisfied.
In one embodiment of the present disclosure, the obtaining, based on the timing task, the operation efficiency of the thread node to be allocated includes: acquiring the execution time complexity of the thread node to be allocated based on a timing task; and determining the running efficiency of the thread node to be distributed according to the execution time complexity.
In one embodiment of the present disclosure, the operating efficiency of the thread node to be allocated is a variance of the execution time complexity.
In one embodiment of the present disclosure, the thread weight update condition is that the variance of the execution time complexity is greater than a preset value.
In one embodiment of the present disclosure, after determining the number of threads of the thread node to be allocated, the method further comprises: and distributing the thread resources according to the thread number of the thread nodes to be distributed, and constructing a private thread pool of the thread nodes to be distributed.
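The private-thread-pool construction described above can be sketched with Python's standard `concurrent.futures` module. This is only an illustrative sketch, not the patent's implementation; the node names and thread counts are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def build_private_pools(thread_counts):
    """Construct one private thread pool per node, sized by that node's allocated thread number."""
    return {node: ThreadPoolExecutor(max_workers=n, thread_name_prefix=node)
            for node, n in thread_counts.items() if n > 0}

# Hypothetical allocation: node C gets 5 threads, node D gets 3.
pools = build_private_pools({"nodeC": 5, "nodeD": 3})
result = pools["nodeC"].submit(lambda: "processed").result()
print(result)  # processed
```

Each node then submits its own work only to its own pool, so one slow node cannot starve the others of threads.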
According to another aspect of the present disclosure, there is provided an apparatus for scheduling thread resources, including: the acquisition module is used for acquiring the maximum thread number available in the thread resources and the initial time complexity of the thread node to be allocated; the weight determining module is used for determining the thread weight of the thread node to be distributed according to the initial time complexity; and the thread number determining module is used for determining the thread number of the thread node to be allocated according to the thread weight of the thread node to be allocated and the maximum thread number.
In one embodiment of the present disclosure, the weight determination module is further configured to: determining the proportion of the initial time complexity of each thread node to be allocated to the sum of the initial time complexity of all the thread nodes to be allocated, and determining the proportion as the thread weight of each thread node to be allocated.
In one embodiment of the disclosure, the apparatus further includes a weight update module configured to: obtain, based on a timing task, the operating efficiency of the thread node to be allocated; judge, according to the operating efficiency, whether the thread node to be allocated satisfies the thread weight update condition; and update the thread weight of the thread node to be allocated in response to the thread weight update condition being satisfied.
In one embodiment of the present disclosure, the weight updating module is further configured to: acquiring the execution time complexity of the thread node to be allocated based on a timing task; and determining the running efficiency of the thread node to be distributed according to the execution time complexity.
In one embodiment of the present disclosure, the operating efficiency of the thread node to be allocated is a variance of the execution time complexity.
In one embodiment of the present disclosure, the thread weight update condition is that the variance of the execution time complexity is greater than a preset value.
In one embodiment of the disclosure, the apparatus further includes a thread pool construction module configured to allocate the thread resources according to the number of threads of the thread node to be allocated, and construct a private thread pool of the thread node to be allocated.
According to yet another aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of thread resource scheduling described above.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method of thread resource scheduling described above via execution of the executable instructions.
According to the thread resource scheduling method provided by the embodiments of the present disclosure, the running condition and operating efficiency of each node can be determined from the initial time complexity of each thread node to be allocated; the data processing mode of each node is then automatically analyzed to obtain that node's thread weight, and a reasonable number of threads is allocated to each node according to these thread weights for concurrent processing. This greatly improves the efficiency with which nodes process data, exploits the performance of the server hardware to the maximum extent, and ensures the stability of the system.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
FIG. 1 illustrates a schematic diagram of an exemplary system architecture to which the method of thread resource scheduling of the exemplary embodiments of the present disclosure may be applied;
FIG. 2 illustrates a flow diagram of a method of thread resource scheduling in accordance with one embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of prior art data stream processing;
FIG. 4 illustrates a flow diagram of a method of thread resource scheduling of one embodiment of the present disclosure;
FIG. 5 illustrates a schematic diagram of data flow processing of one embodiment of the present disclosure;
FIG. 6 illustrates a flow diagram of thread weight update of one embodiment of the present disclosure;
FIG. 7 illustrates a block diagram of an apparatus for thread resource scheduling of one embodiment of the present disclosure; and
Fig. 8 shows a block diagram of an electronic device for thread resource scheduling in an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present disclosure, the meaning of "a plurality" is at least two, such as two, three, etc., unless explicitly specified otherwise.
In view of the technical problems in the related art, embodiments of the present disclosure provide a method for scheduling thread resources, which is used to at least solve one or all of the technical problems.
FIG. 1 illustrates a schematic diagram of an exemplary system architecture to which the method of thread resource scheduling of the exemplary embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture may include a server 101, a network 102, and a client 103. Network 102 may provide a medium for communication links between clients 103 and server 101. Network 102 may include various connection types such as wired, wireless communication links, or fiber optic cables, among others.
The server 101 may be a server providing various services, such as a background management server providing support for devices operated by users with clients. The background management server can analyze and process the received data such as the request and the like, and feed back the processing result to the client.
The client 103 may be a mobile terminal such as a mobile phone, a game console, a tablet computer, an electronic book reader, smart glasses, a smart home device, an AR (Augmented Reality) device, a VR (Virtual Reality) device, or the like, or the client 103 may be a personal computer such as a laptop portable computer and a desktop computer, or the like.
In exemplary embodiments of the present disclosure, a server may, for example, obtain a maximum number of threads available in a thread resource, and obtain an initial time complexity of a thread node to be allocated; determining the thread weight of the thread node to be allocated according to the initial time complexity; and determining the thread number of the thread node to be allocated according to the thread weight and the maximum thread number of the thread node to be allocated.
It should be understood that the numbers of clients, networks, and servers in fig. 1 are merely illustrative. The server 101 may be a single physical server, a server cluster formed by a plurality of servers, or a cloud server, and there may be any number of terminal devices, networks, and servers according to actual needs.
Hereinafter, each step of the method of thread resource scheduling in the exemplary embodiments of the present disclosure will be described in more detail with reference to the accompanying drawings and embodiments.
FIG. 2 illustrates a flow diagram of a method of thread resource scheduling in accordance with one embodiment of the present disclosure. The method provided by the embodiments of the present disclosure may be performed in a server or a client as shown in fig. 1, but the present disclosure is not limited thereto.
In the following illustration, the server 101 is taken as the execution subject.
As shown in fig. 2, the method of thread resource scheduling provided by the embodiment of the present disclosure may include the following steps.
Step S210, obtaining the maximum number of threads available in the thread resource and obtaining the initial time complexity of the thread nodes to be allocated. The maximum number of threads available in the thread resource can be obtained in advance by stress-testing the server hardware on which the system runs, yielding the maximum number of threads the whole system can bear while the current service still runs efficiently. The thread nodes to be allocated implement calls to various services that occupy considerable resources, such as downstream HTTP interfaces, MQ queue enqueuing, database queries, and IO stream reading. The initial time complexity refers to the time difference between the end of a node's call and its start; in this embodiment there are two ways to obtain it: one is to preset it, and the other is to derive it from the average time complexity of each node recorded in real time.
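The "time difference between end call and start call" can be recorded with a simple timing wrapper. The function below is a hypothetical sketch for illustration, not part of the patent's implementation:

```python
import time

def measure_time_complexity(node_call, *args):
    """Return a node's time complexity: the elapsed time between call start and call end."""
    start = time.perf_counter()
    node_call(*args)
    return time.perf_counter() - start

# Stand-in for a real node call (e.g. an HTTP request or database query).
elapsed = measure_time_complexity(time.sleep, 0.01)
print(elapsed > 0)  # True
```

In practice each node's measured value would be averaged over many calls before being used as its initial time complexity.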
Step S220, determining the thread weight of the thread node to be allocated according to the initial time complexity. In the embodiment of the disclosure, the initial operation efficiency of each node is determined according to the initial time complexity of each thread node to be allocated, and then the thread weight of each thread node to be allocated is determined according to the initial time complexity, namely the thread weight of each node is determined according to the initial operation efficiency of each node, so that the allocation of thread resources is more reasonable, and the operation efficiency of each node is higher.
Step S230, determining the thread number of the thread node to be allocated according to the thread weight and the maximum thread number of the thread node to be allocated. Optionally, after determining the thread weight of each node, the thread resource may be allocated according to the thread number of each thread node to be allocated, and a private thread pool of each thread node to be allocated may be constructed, so as to facilitate thread call of each node.
Fig. 3 shows a data stream processing chain in the prior art, in which unbalanced allocation of thread resources across nodes causes processing-chain blocking. Specifically, if node C takes too long to process, node B blocks, and node B's blocking in turn blocks node A. Such processing-chain congestion can seriously destabilize the data stream processing system and lower the real-time operating efficiency of each node.
According to the embodiment of the disclosure, the running condition of the node and the running efficiency of each node can be determined according to the initial time complexity of each thread node to be allocated, the data processing mode of each node is further automatically analyzed to obtain the thread weight of the current node, and then reasonable thread numbers are allocated to each node according to the thread weight for concurrent processing.
FIG. 4 illustrates a flow diagram of a method of thread resource scheduling in accordance with one embodiment of the present disclosure. As shown in fig. 4, the method for scheduling thread resources may include:
step S410, obtaining the maximum thread number available in the thread resource and obtaining the initial time complexity of all the thread nodes to be allocated.
Step S420, determining the proportion of the initial time complexity of each thread node to be allocated to the sum of the initial time complexities of all the thread nodes to be allocated, and determining that proportion as the thread weight of each thread node to be allocated. For example, assume the thread nodes to be allocated are node_1, node_2, …, node_n, where n is the number of thread nodes to be allocated, and the initial time complexities obtained through step S410 are T_1, T_2, …, T_n respectively. The thread weight of each node is then:

W_i = T_i / (T_1 + T_2 + … + T_n)

wherein i is equal to 1, 2, 3, …, n.
Step S430, determining the number of threads of each thread node to be allocated according to its thread weight and the maximum thread number. Continuing the above example, assume the maximum number of threads available in the thread resource, obtained in step S410, is N. After the thread weight W_i of a node is determined in step S420, the number of threads of that node is determined to be:

N_i = N × W_i

(rounded to an integer in practice).
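Steps S410 through S430 can be sketched as a pair of small functions. This is a sketch under the assumption that thread counts are rounded down to integers; the patent does not specify a rounding rule:

```python
def thread_weights(initial_times):
    """W_i = T_i / sum(T): each node's share of the total initial time complexity."""
    total = sum(initial_times)
    return [t / total for t in initial_times]

def thread_counts(initial_times, max_threads):
    """Threads per node: that node's weight times the maximum available thread number N."""
    return [int(max_threads * w) for w in thread_weights(initial_times)]

# Two nodes with initial time complexities 50 and 30, 8 threads available in total.
print(thread_counts([50, 30], 8))  # [5, 3]
```

Note that the weights always sum to 1, so the allocated counts never exceed the maximum thread number.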
Step S440, obtaining, based on a timing task, the operating efficiency of the thread nodes to be allocated. The operating efficiency is the real-time operating efficiency of each node.
Step S450, judging, according to the operating efficiency, whether a thread node to be allocated satisfies the thread weight update condition.
Step S460, updating the thread weight of the thread node to be allocated in response to the thread weight update condition being satisfied.
According to the embodiment of the present disclosure, if the operating efficiency of the nodes changes greatly within a period of time, the thread weight of each node can be recalculated. Dynamically modifying each node's weight in this way adjusts the thread count of each thread pool, greatly improves the system's data processing efficiency under the same hardware environment, and balances in real time the thread resources occupied by each node, resolving the "bottleneck blocking" and "bucket effect" situations in the data processing process.
Fig. 5 shows a schematic diagram of data flow processing according to an embodiment of the present disclosure. As shown in fig. 5, the process of data flow processing includes:
Step 1: read the data processing time complexities Ta, Tb, Tc, Td of nodes A, B, C, D, and read the maximum thread number, 8. Node A is an upstream node and acts only as the call initiator in the logic of the figure; node B is used to call nodes C and D. The main body of the weighting algorithm is node B; the programs of nodes A, C, and D do not contain the weighting algorithm.
Step 2: the data flow first passes through node A. Before node A's processing, its thread weight is 100%, so the number of threads allocated in node A's weighted thread pool is 8.
Step 3: the weighting algorithm is executed in node B. Specifically, the thread weights of nodes C and D are determined from Tc and Td to be Tc/(Tc+Td) and Td/(Tc+Td). Assuming the weights are 62.5% and 37.5% respectively, the numbers of threads allocated in the weighted thread pools of nodes C and D are 8 × 62.5% = 5 and 8 × 37.5% = 3 respectively.
Step 4: while the weighting algorithm executes in node B, the time currently taken by nodes C and D to process data is recorded according to the timing task, and the thread weights of nodes C and D are recalculated to adjust their thread allocation.
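Step 4's periodic re-weighting could be sketched as follows. The node names and recorded times are hypothetical, and a real system would run this inside the scheduled timing task rather than inline:

```python
def reweight(recorded_times, max_threads):
    """Recompute each node's thread allocation from freshly recorded processing times."""
    total = sum(recorded_times.values())
    return {node: int(max_threads * t / total) for node, t in recorded_times.items()}

# Initially Tc:Td = 50:30 gives the 62.5%/37.5% split from step 3 (5 and 3 threads);
# if node D later slows down relative to node C, the allocation shifts toward D.
print(reweight({"C": 50, "D": 30}, 8))  # {'C': 5, 'D': 3}
print(reweight({"C": 30, "D": 50}, 8))  # {'C': 3, 'D': 5}
```

This is what lets the scheduler track the nodes' real operating conditions instead of relying on the initial estimates forever.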
FIG. 6 illustrates a flow diagram for thread weight update of one embodiment of the present disclosure. As shown in fig. 6, the flow of thread weight update may include:
Step S610, obtaining, based on the timing task, the execution time complexity of the thread nodes to be allocated. The execution time complexity of each node is its real-time complexity, i.e., the time its data processing occupies in real time. Specifically, the execution time complexity can be determined by having the timing task compute the difference between each node's call end time and call start time. For example, assume the recorded execution time complexities of the n thread nodes to be allocated are T_1', T_2', …, T_n'.
Step S620, determining the operating efficiency of the thread nodes to be allocated according to the execution time complexity. The operating efficiency of a thread node to be allocated is the variance of its execution time complexity. Specifically, each node's call start and call end are recorded against that node, where the record can be a database record or a log. The time difference between each call's end and start is calculated by the timing task of step S610, and the variance of all the call times within that period is then calculated, i.e., the variance of the execution time complexities is calculated from T_1', T_2', …, T_n' using the following variance formula:

S² = (1/n) × Σ_{i=1}^{n} (T_i' − T̄')²

where T̄' is the mean of T_1', T_2', …, T_n'.
In step S630, if the variance of the execution time complexities is greater than a preset value, it is determined that the thread nodes to be allocated satisfy the thread weight update condition. For example, if the variance of the execution time complexity is greater than 50% of the unit time, the node weight formula is triggered again, the allocable thread number of each node is recalculated, and the previous thread pool allocation strategy is revised. Here, unit time refers to the time complexity that the node used in the initial, or most recent, weighted thread pool allocation calculation.
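The update condition can be sketched as follows, assuming the population-variance form of the formula in step S620 and treating the 50%-of-unit-time threshold as a configurable parameter; the sample values are hypothetical:

```python
def execution_variance(samples):
    """Population variance of the recorded execution time complexities T_1', ..., T_n'."""
    mean = sum(samples) / len(samples)
    return sum((t - mean) ** 2 for t in samples) / len(samples)

def needs_weight_update(samples, unit_time, threshold=0.5):
    """True when the variance exceeds the preset fraction (default 50%) of the unit time."""
    return execution_variance(samples) > threshold * unit_time

# A stable node: call times cluster tightly, so no reallocation is triggered.
print(needs_weight_update([10.0, 10.1, 9.9], unit_time=10.0))   # False
# A node whose efficiency shifted sharply: variance is large, triggering reallocation.
print(needs_weight_update([10.0, 30.0, 50.0], unit_time=10.0))  # True
```

Tuning the threshold trades responsiveness against churn: a lower value re-weights sooner but rebuilds the thread pools more often.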
Step S640, adjusting the number of threads allocated to each thread node to be allocated according to its updated thread weight.
According to the embodiment of the present disclosure, the real-time operating efficiency of each node can be calculated by monitoring the running state of the current system in real time, and the variance formula is used to judge whether the current weighted thread pool strategy should be changed. The weighted thread pool can thus be dynamically allocated according to the system's current running condition, ensuring that the weighted thread pool strategy matches the running condition of each node and guaranteeing the soundness and stability of the system.
It is noted that the above-described figures are only schematic illustrations of processes involved in a method according to an exemplary embodiment of the invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
FIG. 7 illustrates a block diagram of an apparatus for thread resource scheduling of one embodiment of the present disclosure. As shown in fig. 7, an apparatus 700 for thread resource scheduling may include: an acquisition module 701, a weight determination module 702, and a thread number determination module 703.
Wherein, the acquisition module 701 may be configured to: obtaining the maximum thread number available in the thread resource and the initial time complexity of the thread node to be allocated; the weight determination module 702 may be configured to: determining the thread weight of the thread node to be allocated according to the initial time complexity; the thread number determination module 703 may be configured to: and determining the thread number of the thread node to be allocated according to the thread weight and the maximum thread number of the thread node to be allocated.
In an exemplary embodiment, the weight determination module 702 may be further operable to: determining the proportion of the initial time complexity of each thread node to be allocated to the sum of the initial time complexity of all the thread nodes to be allocated, and determining the proportion as the thread weight of each thread node to be allocated.
In an exemplary embodiment, the apparatus further comprises a weight update module operable to: acquiring the operation efficiency of the thread node to be allocated based on the timing task; judging whether the thread node to be allocated meets the condition of updating the thread weight according to the operation efficiency; and updating the thread weight of the thread node to be allocated in response to the condition that the thread weight update is satisfied.
In an exemplary embodiment, the weight update module may be further configured to: acquiring the execution time complexity of the thread node to be allocated based on the timing task; and determining the running efficiency of the thread node to be allocated according to the execution time complexity.
In an exemplary embodiment, the operating efficiency of the thread node to be allocated is the variance of the execution time complexity.
In an exemplary embodiment, the condition for the thread weight update is that the variance of the execution time complexity is greater than a preset value.
In an exemplary embodiment, the apparatus further comprises a thread pool construction module configured to allocate thread resources according to the number of threads of each thread node to be allocated and to construct a private thread pool for that thread node.
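Constructing one private pool per node, sized by its allocated thread count, might look like the following. `ThreadPoolExecutor` is a stand-in; the patent does not name any particular thread-pool implementation, and the node names are hypothetical.

```python
# Minimal sketch of per-node private thread pools sized by the
# allocation computed earlier.
from concurrent.futures import ThreadPoolExecutor

def build_private_pools(thread_counts):
    """Map each node name to its own pool of the allocated size."""
    return {
        node: ThreadPoolExecutor(max_workers=n, thread_name_prefix=node)
        for node, n in thread_counts.items()
    }

pools = build_private_pools({"parse": 5, "load": 2})
result = pools["parse"].submit(lambda: 1 + 1).result()  # runs on parse's pool
for pool in pools.values():
    pool.shutdown()
```

Because each node owns its pool, a slow node can exhaust only its own threads rather than starving the others, which is the isolation the private-pool design aims at.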
It should be noted that the block diagrams shown in the above figures depict functional entities, which do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Fig. 8 is a block diagram of an electronic device for thread resource scheduling in an embodiment of the present disclosure. An electronic device 800 according to such an embodiment of the invention is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in fig. 8, the electronic device 800 takes the form of a general-purpose computing device. Components of the electronic device 800 may include, but are not limited to: at least one processing unit 810, at least one storage unit 820, and a bus 830 connecting the various system components (including the storage unit 820 and the processing unit 810).
The storage unit stores program code executable by the processing unit 810, such that the processing unit 810 performs the steps according to various exemplary embodiments of the present invention described in the "exemplary method" section of this specification. For example, the processing unit 810 may perform step S210 shown in fig. 2: obtaining the maximum number of threads available in the thread resource and the initial time complexity of the thread nodes to be allocated; step S220: determining the thread weight of each thread node to be allocated according to its initial time complexity; and step S230: determining the number of threads of each thread node to be allocated according to its thread weight and the maximum number of threads.
The storage unit 820 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 8201 and/or cache memory 8202, and may further include Read Only Memory (ROM) 8203.
Storage unit 820 may also include a program/utility 8204 having a set (at least one) of program modules 8205, such program modules 8205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 830 may be one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 800 may also communicate with one or more external devices 900 (e.g., a keyboard, a pointing device, a Bluetooth device), with one or more devices that enable a user to interact with the electronic device 800, and/or with any device (e.g., a router or a modem) that enables the electronic device 800 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 850. The electronic device 800 can also communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet, through a network adapter 860. As shown, the network adapter 860 communicates with the other modules of the electronic device 800 over the bus 830. It should be appreciated that, although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 800, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
From the above description of the embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (e.g., a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and which includes several instructions to cause a computing device (e.g., a personal computer, a server, a terminal device, or a network device) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the invention as described in the "exemplary methods" section of this specification, when said program product is run on the terminal device.
A program product for implementing the above-described method according to an embodiment of the present invention may employ a portable compact disc read-only memory (CD-ROM) and include program code, and may be run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, although the steps of the methods in the present disclosure are depicted in a particular order in the drawings, this does not require or imply that the steps must be performed in that particular order, or that all illustrated steps be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
Claims (7)
1. A method for scheduling thread resources, comprising:
obtaining the maximum number of threads available in the thread resource and obtaining the initial time complexity of a thread node to be allocated, wherein the initial time complexity is preset or is the average time complexity of each node recorded in real time, and the time complexity of a node is the time difference between the end of a call to the node and the start of that call;

determining the thread weight of the thread node to be allocated according to the initial time complexity, which comprises: determining the proportion of the initial time complexity of each thread node to be allocated to the sum of the initial time complexities of all thread nodes to be allocated, and determining the proportion as the thread weight of each thread node to be allocated; and

determining the number of threads of the thread node to be allocated according to the thread weight of the thread node to be allocated and the maximum number of threads, which comprises: determining the product of the thread weight of the thread node to be allocated and the maximum number of threads as the number of threads of the thread node to be allocated.
2. The method of claim 1, wherein after determining the number of threads of the thread node to be allocated, the method further comprises:
acquiring the running efficiency of the thread node to be allocated based on a timed task;

judging, according to the running efficiency, whether the thread node to be allocated satisfies the condition for updating the thread weight, wherein the condition for updating the thread weight is that the running efficiency of the thread node to be allocated is greater than a preset value; and

updating the thread weight of the thread node to be allocated in response to the condition for updating the thread weight being satisfied.
3. The method according to claim 2, wherein the acquiring, based on the timed task, the running efficiency of the thread node to be allocated comprises:

acquiring the execution time complexity of the thread node to be allocated based on the timed task, wherein the execution time complexity is the time actually taken by the thread node to be allocated to process data; and

determining the running efficiency of the thread node to be allocated according to the execution time complexity, wherein the running efficiency of the thread node to be allocated is the variance of the execution time complexity.
4. A method according to any one of claims 1 to 3, wherein after determining the number of threads of a thread node to be allocated, the method further comprises:
allocating the thread resources according to the number of threads of the thread node to be allocated, and constructing a private thread pool of the thread node to be allocated.
5. An apparatus for scheduling thread resources, comprising:
an acquisition module, configured to obtain the maximum number of threads available in the thread resource and the initial time complexity of a thread node to be allocated, wherein the initial time complexity is preset or is the average time complexity of each node recorded in real time, and the time complexity of a node is the time difference between the end of a call to the node and the start of that call;

a weight determination module, configured to determine the thread weight of the thread node to be allocated according to the initial time complexity, which comprises: determining the proportion of the initial time complexity of each thread node to be allocated to the sum of the initial time complexities of all thread nodes to be allocated, and determining the proportion as the thread weight of each thread node to be allocated; and

a thread number determination module, configured to determine the number of threads of the thread node to be allocated according to the thread weight of the thread node to be allocated and the maximum number of threads, which comprises: determining the product of the thread weight of the thread node to be allocated and the maximum number of threads as the number of threads of the thread node to be allocated.
6. A computer readable storage medium having stored thereon a computer program which when executed by a processor implements a method of thread resource scheduling according to any one of claims 1 to 4.
7. An electronic device, comprising:
one or more processors;
Storage means for storing one or more programs which when executed by the one or more processors cause the one or more processors to implement the method of thread resource scheduling of any of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111559737.3A CN114237895B (en) | 2021-12-20 | 2021-12-20 | Thread resource scheduling method and device, storage medium and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111559737.3A CN114237895B (en) | 2021-12-20 | 2021-12-20 | Thread resource scheduling method and device, storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114237895A CN114237895A (en) | 2022-03-25 |
CN114237895B true CN114237895B (en) | 2024-11-05 |
Family
ID=80758924
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111559737.3A Active CN114237895B (en) | 2021-12-20 | 2021-12-20 | Thread resource scheduling method and device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114237895B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107818407A (en) * | 2017-10-20 | 2018-03-20 | 平安科技(深圳)有限公司 | Method for allocating tasks, device, storage medium and computer equipment |
CN111225050A (en) * | 2020-01-02 | 2020-06-02 | 中国神华能源股份有限公司神朔铁路分公司 | Cloud computing resource allocation method and device |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9244732B2 (en) * | 2009-08-28 | 2016-01-26 | Vmware, Inc. | Compensating threads for microarchitectural resource contentions by prioritizing scheduling and execution |
CN108390913B (en) * | 2018-01-19 | 2019-03-12 | 北京白山耘科技有限公司 | A kind of control user uses the method and device of resource |
US10817341B1 (en) * | 2019-04-10 | 2020-10-27 | EMC IP Holding Company LLC | Adaptive tuning of thread weight based on prior activity of a thread |
CN112612605B (en) * | 2020-12-16 | 2024-07-09 | 平安消费金融有限公司 | Thread allocation method, thread allocation device, computer equipment and readable storage medium |
- 2021-12-20: application CN202111559737.3A filed in China; granted as CN114237895B (legal status: Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107818407A (en) * | 2017-10-20 | 2018-03-20 | 平安科技(深圳)有限公司 | Method for allocating tasks, device, storage medium and computer equipment |
CN111225050A (en) * | 2020-01-02 | 2020-06-02 | 中国神华能源股份有限公司神朔铁路分公司 | Cloud computing resource allocation method and device |
Also Published As
Publication number | Publication date |
---|---|
CN114237895A (en) | 2022-03-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9363154B2 (en) | Prediction-based provisioning planning for cloud environments | |
CN109614227B (en) | Task resource allocation method and device, electronic equipment and computer readable medium | |
US10102033B2 (en) | Method and system for performance ticket reduction | |
CN112016794B (en) | Resource quota management method and device and electronic equipment | |
WO2024016596A1 (en) | Container cluster scheduling method and apparatus, device, and storage medium | |
US7793297B2 (en) | Intelligent resource provisioning based on on-demand weight calculation | |
CN111010453B (en) | Service request processing method, system, electronic device and computer readable medium | |
CN115794262A (en) | Task processing method, device, equipment, storage medium and program product | |
CN114020469B (en) | Edge node-based multi-task learning method, device, medium and equipment | |
CN111858040A (en) | Resource scheduling method and device | |
US8548881B1 (en) | Credit optimization to minimize latency | |
CN112799851B (en) | Data processing method and related device in multiparty security calculation | |
US20220413906A1 (en) | Method, device, and program product for managing multiple computing tasks based on batch | |
US20220292392A1 (en) | Scheduled federated learning for enhanced search | |
CN114416357A (en) | Method and device for creating container group, electronic equipment and medium | |
CN114237895B (en) | Thread resource scheduling method and device, storage medium and electronic equipment | |
CN112328391A (en) | Resource allocation method and device and electronic equipment | |
CN109308243B (en) | Data processing method, data processing device, computer equipment and medium | |
CN110825920B (en) | Data processing method and device | |
CN114064403A (en) | Task delay analysis processing method and device | |
CN114265692A (en) | Service scheduling method, device, equipment and storage medium | |
CN114090247A (en) | Method, device, equipment and storage medium for processing data | |
CN113157397A (en) | Virtual resource allocation and service function chain construction method and device | |
Matsuura et al. | A highly efficient consolidated platform for stream computing and hadoop | |
CN114356513B (en) | Task processing method and device for cluster mode |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |