CN111124651B - Method for concurrently scheduling multiple threads in a distributed environment
- Publication number: CN111124651B
- Application number: CN201911373184.5A
- Authority: CN (China)
- Prior art keywords: thread, service, pool, sleep, work order
- Prior art date: 2019-12-27
- Legal status: Active
Classifications
- G06F9/485: Task life-cycle, e.g. stopping, restarting, resuming execution (under G06F9/4843, task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system; G06F: electric digital data processing)
- G06F9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues (under G06F9/4843)
- Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management (Y02D: climate change mitigation technologies in information and communication technologies)
Abstract
The invention discloses a method for concurrent scheduling of multiple threads in a distributed environment, relating to the technical field of enterprise informatization. The technical scheme is that work order information to be processed is placed uniformly in an in-transit pool; the states of the work orders in the pool are traversed, and the state information of the work orders is stored uniformly in a database, where whether a thread service processes a given work order depends on that work order's state and state transitions follow a unified scheduling rule. The thread services of each application server are managed through a thread pool framework, and the thread service comprises thread registration, thread instantiation, thread execution, thread sleep, and thread termination. The beneficial effects of the invention are as follows: through multi-thread concurrent scheduling in a distributed environment, the method overcomes the shortcomings of message queue processing in a distributed workflow engine, improves the stability of workflow services under high concurrency, and achieves efficiency, intelligence, and balance.
Description
Technical Field
The invention relates to the technical field of enterprise informatization, and in particular to a method for concurrent scheduling of multiple threads in a distributed environment.
Background
Most systems in current enterprise informatization construction are distributed systems deployed in cluster mode: session caching and routing between application nodes are handled through load balancing, and service communication is handled through message queues. Message queues offer asynchrony, decoupling, and traffic smoothing, which makes them well suited to interface communication. The workflow engines of some systems also adopt message queues, but a message queue applied to a workflow engine has the following problems:
1. A message queue follows a strict first-in first-out principle, so priority control cannot be applied to the service information in the queue.
2. System availability is reduced: message queues increase the dependencies between modules, and an exception in one module often paralyzes the whole workflow engine.
3. System complexity is increased: communication between modules goes through message sending, receiving, and processing steps, and these additional steps make the system more complex.
4. System controllability is reduced: message queues are implemented by dedicated middleware, the internal processing logic of some middleware message queues is invisible from the outside, and the application therefore cannot control the message processing process.
5. System reliability is reduced: the internal message queues of a distributed system are usually balanced by the middleware over the t3 protocol, and the reliability of this balancing is not high.
Disclosure of Invention
In view of the above technical problems, the invention provides a method for concurrent scheduling of multiple threads in a distributed environment.
The technical solution is that the workflow engine of the application system is decomposed into services, each service is responsible for completing a certain function of the workflow engine, and the services are realized through distributed multi-thread concurrent execution, comprising:
S1, work order information to be constructed and processed is placed uniformly in an in-transit pool. Only work orders under construction and their related information are stored in the pool, and completed work orders are removed from the pool in time, so that the data volume of each table of the work order pool is kept to a minimum and a high level of data-processing efficiency is ensured.
S2, the states of the work orders in the in-transit pool are traversed, and the state information of the work orders is stored uniformly in a database. Whether a thread service processes a given work order depends on that work order's state, and state transitions follow a unified scheduling rule.
S3, the thread services of each application server are managed through a thread pool framework, where the thread service comprises S310, thread registration; S320, thread instantiation; S330, thread execution; S340, thread sleep; and S350, thread termination.
Preferably, the method of S310 thread registration is:
S311, determining the application domain;
S312, determining the application service;
S313, determining the thread service;
S314, configuring the thread running information.
Preferably, configuring the thread running information in S314 comprises setting the thread concurrency count, the thread service peak value, the thread log mode, and the thread running mode;
setting the thread running mode comprises setting information such as the time zone, the grab amount, and the sleep interval.
Preferably, the method of S320 thread instantiation is:
S321, creating the application thread pool;
S322, loading the registered thread information of the application;
S323, creating a thread instance;
S324, initializing the thread instance;
S325, the thread instance is ready.
Preferably, the method of S330 thread execution is:
S331, scanning the waiting service volume of the category;
S332, calculating the running priority of the current thread;
S333, issuing a token to the thread according to the thread priority rule; when a thread execution token is issued, priority control is performed according to the relative service throughput of the thread, ensuring balanced service processing among thread services of the same type and balancing device resource utilization;
S334, grabbing work orders in token order;
S335, executing service processing on the successfully grabbed work orders.
Preferably, when work orders are grabbed in token order in S334, a data rolling-lock policy is adopted: the SELECT FOR UPDATE SKIP LOCKED function implicitly supported by Oracle is used to query and lock the data, and data that is already locked is skipped so that querying and locking continue.
Preferably, the method of S340 thread sleep is:
S341, calculating whether activity processing has reached the upper limit;
S342, if the result of S341 is that the upper limit has been reached, the thread enters sleep;
S343, if the result of S341 is that the upper limit has not been reached, calculating whether the thread needs to sleep;
S344, if the result of S343 is that the thread needs to sleep, the thread enters sleep;
S345, if the result of S343 is that the thread does not need to sleep, the thread continues to run.
Preferably, in step S1, the work order information to be processed is placed uniformly in the in-transit pool; the number of work orders in the pool is controlled through a work order caching mechanism: when the work order backlog reaches a certain threshold, work orders below the priority threshold are cached to preserve priority-scheduling efficiency, and when the backlog falls back below a certain threshold, the cached work orders are un-cached (anti-caching).
The beneficial effects of the technical solution provided by the embodiments of the invention are as follows: through multi-thread concurrent scheduling in a distributed environment, the method overcomes the shortcomings of message queue processing in a distributed workflow engine, improves the stability of workflow services under high concurrency, and achieves efficiency, intelligence, and balance.
Drawings
FIG. 1 is an overall framework diagram of an embodiment of the present invention.
FIG. 2 is a flow chart of thread registration according to an embodiment of the present invention.
FIG. 3 is a thread instantiation flowchart of an embodiment of the present invention.
FIG. 4 is a flow chart of thread execution according to an embodiment of the present invention.
FIG. 5 is a flow chart of thread sleep according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the drawings and embodiments, in order to make the objects, technical solutions, and advantages of the present invention clearer. The specific embodiments described here serve only to illustrate the invention and are not intended to limit it.
It should be noted that, provided there is no conflict, the embodiments of the present invention and the features of the embodiments may be combined with each other.
In the description of the invention, it should be understood that terms such as "center," "longitudinal," "transverse," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," and "outer" indicate orientations or positional relationships based on those shown in the drawings, merely to facilitate and simplify the description of the invention; they do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore should not be construed as limiting the invention. Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first," "second," etc. may explicitly or implicitly include one or more such features. In the description of the invention, unless otherwise indicated, "a plurality" means two or more.
In the description of the invention, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly: for example, fixedly connected, detachably connected, or integrally connected; mechanically or electrically connected; directly connected, or indirectly connected through an intermediate medium, or communication between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art on a case-by-case basis.
Example 1
Referring to FIGS. 1 to 5, the present invention provides a method for multi-thread concurrent scheduling in a distributed environment, in which the workflow engine of the application is decomposed into services, each service is responsible for completing one function of the workflow engine, and the services are realized through distributed multi-thread concurrent execution, comprising:
S1, work order information to be constructed and processed is placed uniformly in an in-transit pool. Only work orders under construction and their related information are stored in the pool, and completed work orders are removed from the pool in time, so that the data volume of each table of the work order pool is kept to a minimum and a high level of data-processing efficiency is ensured.
S2, the states of the work orders in the in-transit pool are traversed, and the state information of the work orders is stored uniformly in a database. Whether a thread service processes a given work order depends on that work order's state, and state transitions follow a unified scheduling rule, as illustrated by the sketch following this list.
S3, the thread services of each application server are managed through a thread pool framework, where the thread service comprises S310, thread registration; S320, thread instantiation; S330, thread execution; S340, thread sleep; and S350, thread termination.
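The patent does not enumerate the concrete work order states, but the unified scheduling rule of S2 can be pictured with a minimal state model. The state names and transitions below are illustrative assumptions written in Java, not the patent's own definitions.

```java
// Hypothetical state model for S2; the state names and legal transitions are
// illustrative assumptions, since the patent does not enumerate them.
public enum WorkOrderState {
    WAITING,    // in the in-transit pool, eligible for grabbing
    LOCKED,     // grabbed by a thread service and being processed
    COMPLETED,  // construction finished, to be removed from the pool
    FAILED;     // processing failed, eligible for rescheduling

    /** Unified scheduling rule: a work order may only move along legal edges. */
    public boolean canTransitionTo(WorkOrderState next) {
        switch (this) {
            case WAITING: return next == LOCKED;
            case LOCKED:  return next == COMPLETED || next == FAILED;
            case FAILED:  return next == WAITING;  // retry path
            default:      return false;            // COMPLETED is terminal
        }
    }
}
```

Because every thread service consults the same database-held state before touching a work order, such a rule would be enforced uniformly across all application servers.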
The method of S310 thread registration is:
S311, determining the application domain;
S312, determining the application service;
S313, determining the thread service;
S314, configuring the thread running information.
Configuring the thread running information in S314 comprises setting the thread concurrency count, the thread service peak value, the thread log mode, and the thread running mode;
setting the thread running mode comprises setting information such as the time zone, the grab amount, and the sleep interval, as the sketch below shows.
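As a minimal sketch of what such a registration record might carry, the class below bundles the S311-S314 settings into one immutable value. Every class and field name here is an illustrative assumption, not taken from the patent.

```java
import java.time.Duration;

// A minimal sketch of a registration record for S311-S314; all names assumed.
public final class ThreadServiceRegistration {
    final String applicationDomain;   // S311: owning application domain
    final String applicationService;  // S312: service within that domain
    final String threadService;       // S313: thread service being registered
    final int concurrency;            // S314: number of concurrent instances
    final int servicePeak;            // S314: processing ceiling per cycle
    final String logMode;             // S314: e.g. "SUMMARY" or "VERBOSE"
    final String runTimeZone;         // run mode: when the service may run
    final int grabAmount;             // run mode: work orders fetched per pass
    final Duration sleepInterval;     // run mode: pause between passes

    public ThreadServiceRegistration(String applicationDomain, String applicationService,
                                     String threadService, int concurrency, int servicePeak,
                                     String logMode, String runTimeZone, int grabAmount,
                                     Duration sleepInterval) {
        this.applicationDomain = applicationDomain;
        this.applicationService = applicationService;
        this.threadService = threadService;
        this.concurrency = concurrency;
        this.servicePeak = servicePeak;
        this.logMode = logMode;
        this.runTimeZone = runTimeZone;
        this.grabAmount = grabAmount;
        this.sleepInterval = sleepInterval;
    }
}
```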
The method of S320 thread instantiation is (a sketch follows the list):
S321, creating the application thread pool;
S322, loading the registered thread information of the application;
S323, creating a thread instance;
S324, initializing the thread instance;
S325, the thread instance is ready.
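A sketch of the S321-S325 flow under the same assumptions as the registration record above: one pool is created for the application, the registered thread information is loaded, and one worker per configured concurrency slot is created, initialized, and submitted. `WorkOrderWorker` is a hypothetical stand-in for the S330 execution loop.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical bootstrap for S320; names and pool type are assumptions.
public final class ThreadPoolBootstrap {
    /** Placeholder worker; a real instance would run the S330 execution loop. */
    static final class WorkOrderWorker implements Runnable {
        private final ThreadServiceRegistration reg;
        WorkOrderWorker(ThreadServiceRegistration reg) { this.reg = reg; }
        void init() { /* S324: load state, open connections, etc. */ }
        @Override public void run() { /* S330: scan, grab, process */ }
    }

    public static ExecutorService instantiate(List<ThreadServiceRegistration> regs) {
        ExecutorService pool = Executors.newCachedThreadPool();   // S321: application pool
        for (ThreadServiceRegistration reg : regs) {              // S322: load registrations
            for (int i = 0; i < reg.concurrency; i++) {
                WorkOrderWorker w = new WorkOrderWorker(reg);     // S323: create instance
                w.init();                                         // S324: initialize
                pool.submit(w);                                   // S325: instance is ready
            }
        }
        return pool;
    }
}
```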
The method of S330 thread execution is:
S331, scanning the waiting service volume of the category;
S332, calculating the running priority of the current thread;
S333, issuing a token to the thread according to the thread priority rule; when a thread execution token is issued, priority control is performed according to the relative service throughput of the thread, ensuring balanced service processing among thread services of the same type and balancing device resource utilization;
S334, grabbing work orders in token order;
S335, executing service processing on the successfully grabbed work orders.
The thread instances are controlled by the thread-execution-token strategy when grabbing work orders for construction processing, which keeps the number of threads running concurrently for the same service at the same moment in the distributed environment to a minimum and reduces the risk of resource contention, as illustrated by the sketch below.
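The patent does not give the exact priority formula, so the sketch below assumes a simple one: priority grows with the backlog scanned in S331 and shrinks with the work a service has already processed, approximating "relative service throughput". The formula and all names are illustrative assumptions.

```java
import java.util.Comparator;
import java.util.List;

// Illustrative priority rule for S332-S333; the formula is an assumption.
public final class TokenIssuer {
    public static final class ServiceStats {
        final String threadService;
        final long waitingOrders;    // S331: backlog scanned for this category
        final long processedOrders;  // work already completed in this cycle
        public ServiceStats(String s, long waiting, long processed) {
            threadService = s; waitingOrders = waiting; processedOrders = processed;
        }
        double priority() {          // big backlog, little throughput => runs first
            return (double) waitingOrders / (processedOrders + 1);
        }
    }

    /** Issues one execution token: the highest-priority service runs next. */
    public static String issueToken(List<ServiceStats> stats) {
        return stats.stream()
                .max(Comparator.comparingDouble(ServiceStats::priority))
                .map(s -> s.threadService)
                .orElseThrow(() -> new IllegalStateException("no registered services"));
    }
}
```

Dividing the backlog by completed work favors a service that has processed little relative to its queue, which is one plausible reading of balancing by relative service throughput.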
When work orders are grabbed in token order in S334, a data rolling-lock policy is adopted: the SELECT FOR UPDATE SKIP LOCKED function implicitly supported by Oracle is used to query and lock the data, and data that is already locked is skipped so that querying and locking continue, preventing multiple thread services from locking the same work order while grabbing work orders, as the sketch below shows.
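A sketch of the rolling-lock grab using plain JDBC. The `work_order_pool` table and its columns are hypothetical, but SELECT ... FOR UPDATE SKIP LOCKED is the Oracle construct the patent names; because locked rows are skipped rather than waited on, concurrent thread services neither block on nor double-grab the same work order.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Sketch of the S334 rolling-lock grab; table and column names are assumed.
public final class WorkOrderGrabber {
    public static List<Long> grab(Connection conn, int grabAmount) throws SQLException {
        String sql = "SELECT order_id FROM work_order_pool "
                   + "WHERE status = 'WAITING' AND ROWNUM <= ? "
                   + "FOR UPDATE SKIP LOCKED";
        List<Long> grabbed = new ArrayList<>();
        conn.setAutoCommit(false);                 // locks must live inside a transaction
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, grabAmount);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    grabbed.add(rs.getLong(1));    // these rows are now locked by us
                }
            }
        }
        return grabbed;                            // caller processes (S335), then commits
    }
}
```

The caller processes the grabbed orders and commits, releasing the row locks; in the meantime the locked rows were invisible to every other session's grab.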
The method of S340 thread sleep is:
S341, calculating whether activity processing has reached the upper limit;
S342, if the result of S341 is that the upper limit has been reached, the thread enters sleep;
S343, if the result of S341 is that the upper limit has not been reached, calculating whether the thread needs to sleep;
S344, if the result of S343 is that the thread needs to sleep, the thread enters sleep;
S345, if the result of S343 is that the thread does not need to sleep, the thread continues to run.
This thread double-sleep mechanism ensures that most thread services are in the sleep state, releasing device resources, when the traffic volume is low, while the thread services can quickly return to peak processing capacity during traffic peaks; a sketch of the decision follows.
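The double check of S341-S345 can be condensed into one decision function. The sketch below is an assumption about how the two conditions might be evaluated: the activity ceiling test is S341, and a zero-backlog test stands in for "does the thread need to sleep". Field names and thresholds are illustrative.

```java
// Hypothetical condensation of S341-S345; thresholds and names are assumed.
public final class SleepPolicy {
    private final int activityUpperLimit;   // S341: per-cycle processing ceiling
    private final long sleepMillis;         // pause applied when a sleep branch fires

    public SleepPolicy(int activityUpperLimit, long sleepMillis) {
        this.activityUpperLimit = activityUpperLimit;
        this.sleepMillis = sleepMillis;
    }

    /** Returns how long the thread should sleep; 0 means keep running. */
    public long sleepFor(int processedThisCycle, long waitingOrders) {
        if (processedThisCycle >= activityUpperLimit) {
            return sleepMillis;             // S342: ceiling reached, back off
        }
        if (waitingOrders == 0) {
            return sleepMillis;             // S344: nothing to do, sleep
        }
        return 0;                           // S345: work remains, keep executing
    }
}
```

In low-traffic periods both branches push workers into sleep and release device resources; as soon as backlog appears, the zero return keeps them running at full capacity.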
In step S1, the work order information to be processed is placed uniformly in the in-transit pool; the number of work orders in the pool is controlled through a work order caching mechanism: when the work order backlog reaches a certain threshold, work orders below the priority threshold are cached to preserve priority-scheduling efficiency, and when the backlog falls back below a certain threshold, the cached work orders are un-cached (anti-caching), as the sketch below shows.
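A sketch of that caching mechanism under two hypothetical thresholds: above `highWater`, low-priority orders are moved to a side cache so urgent orders schedule quickly; below `lowWater`, cached orders are returned to the pool (the "anti-caching" step). All names and the two-threshold shape are assumptions.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical two-threshold cache for the in-transit pool of S1.
public final class PoolCache {
    private final int highWater;   // backlog level that triggers caching
    private final int lowWater;    // backlog level that triggers un-caching
    private final Deque<Long> sideCache = new ArrayDeque<>();

    public PoolCache(int highWater, int lowWater) {
        this.highWater = highWater;
        this.lowWater = lowWater;
    }

    /** Called after each scan with the current backlog and a low-priority order id. */
    public void balance(int backlog, Long lowPriorityOrder, Deque<Long> pool) {
        if (backlog >= highWater && lowPriorityOrder != null) {
            sideCache.push(lowPriorityOrder);    // shelve it; urgent orders schedule fast
        } else if (backlog <= lowWater && !sideCache.isEmpty()) {
            pool.addLast(sideCache.pop());       // anti-caching: return shelved orders
        }
    }
}
```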
The foregoing describes only preferred embodiments of the invention and is not intended to limit the invention to the precise form disclosed; any modification, equivalent replacement, or improvement made within the spirit and scope of the invention is intended to be included within the scope of the invention.
Claims (7)
1. A method for concurrent scheduling of multiple threads in a distributed environment, characterized in that the workflow engine of an application system is decomposed into services, each service is responsible for completing a certain function of the workflow engine, and the services are realized through distributed multi-thread concurrent execution, the method comprising:
S1, work order information to be processed is placed uniformly in an in-transit pool; only work orders under construction and their related information are stored in the in-transit pool, and completed work orders are removed from the pool in time, so that the data volume of each table of the work order pool is kept to a minimum and a high level of data-processing efficiency is ensured;
S2, the states of the work orders in the in-transit pool are traversed, and the state information of the work orders is stored uniformly in a database; whether a thread service processes a given work order depends on that work order's state, and state transitions follow a unified scheduling rule;
S3, the thread services of each application server are managed through a thread pool framework, where the thread service comprises S310, thread registration; S320, thread instantiation; S330, thread execution; S340, thread sleep; and S350, thread termination;
in step S1, the work order information to be processed is placed uniformly in the in-transit pool; the number of work orders in the in-transit pool is controlled through a work order caching mechanism: when the work order backlog reaches a certain threshold, work orders below the priority threshold are cached to ensure priority-scheduling efficiency, and when the backlog falls back below a certain threshold, the cached work orders are un-cached (anti-caching).
2. The method for concurrent scheduling of multiple threads in a distributed environment according to claim 1, characterized in that the method of S310 thread registration is:
S311, determining the application domain;
S312, determining the application service;
S313, determining the thread service;
S314, configuring the thread running information.
3. The method for concurrent scheduling of multiple threads in a distributed environment according to claim 2, characterized in that configuring the thread running information in S314 comprises setting the thread concurrency count, the thread service peak value, the thread log mode, and the thread running mode;
setting the thread running mode comprises setting the time zone, the grab amount, and the sleep interval.
4. The method for concurrent scheduling of multiple threads in a distributed environment according to claim 1, characterized in that the method of S320 thread instantiation is:
S321, creating the application thread pool;
S322, loading the registered thread information of the application;
S323, creating a thread instance;
S324, initializing the thread instance;
S325, the thread instance is ready.
5. The method for concurrent scheduling of multiple threads in a distributed environment according to claim 1, characterized in that the method of S330 thread execution is:
S331, scanning the waiting service volume of the category;
S332, calculating the running priority of the current thread;
S333, issuing a token to the thread according to the thread priority rule; when a thread execution token is issued, priority control is performed according to the relative service throughput of the thread, ensuring balanced service processing among thread services of the same type and balancing device resource utilization;
S334, grabbing work orders in token order;
S335, executing service processing on the successfully grabbed work orders.
6. The method for concurrent scheduling of multiple threads in a distributed environment according to claim 5, characterized in that when work orders are grabbed in token order in S334, a data rolling-lock policy is adopted to query and lock the data, and data that is already locked is skipped so that querying and locking continue.
7. The method for concurrent scheduling of multiple threads in a distributed environment according to claim 1, characterized in that the method of S340 thread sleep is:
S341, calculating whether activity processing has reached the upper limit;
S342, if the result of S341 is that the upper limit has been reached, the thread enters sleep;
S343, if the result of S341 is that the upper limit has not been reached, calculating whether the thread needs to sleep;
S344, if the result of S343 is that the thread needs to sleep, the thread enters sleep;
S345, if the result of S343 is that the thread does not need to sleep, the thread continues to run.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911373184.5A | 2019-12-27 | 2019-12-27 | Method for concurrently scheduling multiple threads in distributed environment |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN111124651A | 2020-05-08 |
| CN111124651B | 2023-05-23 |
Family

ID=70503706

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911373184.5A | Method for concurrently scheduling multiple threads in distributed environment | 2019-12-27 | 2019-12-27 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN111124651B (en) |
Families Citing this family (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112328388B | 2020-09-17 | 2022-03-08 | 北京中数科技术有限公司 | Parallel computing method and system fusing multithreading and distributed technology |
| CN116501475B | 2023-06-21 | 2023-10-20 | 杭州炬华科技股份有限公司 | Thread scheduling method, system and medium |
Patent Citations (6)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102360310A | 2011-09-28 | 2012-02-22 | 中国电子科技集团公司第二十八研究所 | Multitask process monitoring method and system in distributed system environment |
| CN102681889A | 2012-04-27 | 2012-09-19 | 电子科技大学 | Scheduling method of cloud computing open platform |
| CN108132837A | 2018-01-02 | 2018-06-08 | 中国工商银行股份有限公司 | Distributed cluster scheduling system and method |
| CN109753354A | 2018-11-26 | 2019-05-14 | 平安科技(深圳)有限公司 | Processing method, device and computer equipment for streaming media tasks based on multithreading |
| CN110597606A | 2019-08-13 | 2019-12-20 | 中国电子科技集团公司第二十八研究所 | Cache-friendly user-level thread scheduling method |
| CN112000445A | 2020-07-08 | 2020-11-27 | 苏宁云计算有限公司 | Distributed task scheduling method and system |
Family Cites Families (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8576862B2 | 2010-05-18 | 2013-11-05 | LSI Corporation | Root scheduling algorithm in a network processor |
| CN106126336B | 2016-06-17 | 2019-06-04 | 上海兆芯集成电路有限公司 | Processor and dispatching method |
Also Published As

| Publication number | Publication date |
|---|---|
| CN111124651A | 2020-05-08 |
Legal Events

- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant