WO2019148734A1 - Unified thread pool processing method, application server, and computer readable storage medium
- Publication number
- WO2019148734A1 (PCT/CN2018/090909)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- thread pool
- thread
- queue
- parameters
- pool
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/485—Task life-cycle, e.g. stopping, restarting, resuming execution
Definitions
- the present application relates to the field of data analysis technologies, and in particular, to a unified thread pool processing method, an application server, and a computer readable storage medium.
- the thread pool has been widely used in Internet technology. By introducing a thread pool, threads can be managed effectively, the total number of threads can be capped, and the overhead of creating and destroying threads can be reduced.
- multiple services in the system also use thread pools, such as sensitive-log bulk storage, error re-entry tables, MQCP log receiving queues, and public cleanup services.
- the role of the thread pool is to limit the number of execution threads in the system. According to the system environment, the number of threads can be set automatically or manually to achieve the best operating effect, wasting fewer system resources and avoiding excessive system congestion.
- the thread pool technology faces a new challenge: if thread pools are not managed uniformly, thread confusion or excessive resource consumption is likely under high concurrency, which seriously affects system stability.
- the present application provides a unified thread pool processing method, an application server, and a computer readable storage medium to solve the problem that thread confusion or resource consumption seriously affects system stability under high concurrency conditions.
- the present application provides a unified thread pool processing method, the method comprising the steps of:
- Each of the thread pools is managed by the thread pool queue.
- the present application further provides an application server, including a memory and a processor, where the memory stores a unified thread pool processing system executable on the processor; when executed by the processor, the unified thread pool processing system implements the steps of the unified thread pool processing method described above.
- the present application further provides a computer readable storage medium storing a unified thread pool processing system, the unified thread pool processing system being executable by at least one processor to cause the at least one processor to perform the steps of the unified thread pool processing method described above.
- the unified thread pool processing method, the application server, and the computer readable storage medium proposed by the present application can obtain the parameters of each thread pool by querying a persistent data table and create each thread pool object according to the parameters of each thread pool, realizing unified creation of the thread pools; by creating a thread pool queue that saves the mapping relationship between each thread pool name and each thread pool object, unified management of each thread pool object is implemented according to the thread pool queue, which reduces system resource consumption and improves the stability of the system.
- FIG. 1 is a schematic diagram of an optional hardware architecture of an application server of the present application
- FIG. 2 is a schematic diagram of a program module of a first embodiment of a unified thread pool processing system of the present application
- FIG. 3 is a schematic diagram of a program module of a second embodiment of a unified thread pool processing system of the present application
- FIG. 4 is a schematic flowchart of a first embodiment of a unified thread pool processing method of the present application
- FIG. 5 is a schematic flowchart of a second embodiment of a unified thread pool processing method of the present application.
- FIG. 6 is a schematic flowchart of a third embodiment of a unified thread pool processing method according to the present application.
- FIG. 1 is a schematic diagram of an optional hardware architecture of the application server 2 of the present application.
- the application server 2 may include, but is not limited to, a memory 11, a processor 12, and a network interface 13 that are communicably connected to each other through a system bus. It should be pointed out that FIG. 1 shows only the application server 2 with the components 11-13, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
- the application server 2 may be a computing device such as a rack server, a blade server, or a tower server.
- the application server 2 may be an independent server or a server cluster composed of multiple servers.
- the memory 11 includes at least one type of readable storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like.
- the memory 11 may be an internal storage unit of the application server 2, such as a hard disk or memory of the application server 2.
- the memory 11 may also be an external storage device of the application server 2, such as a plug-in hard disk equipped on the application server 2, a smart memory card (SMC), a Secure Digital (SD) card, a flash card, etc.
- the memory 11 can also include both the internal storage unit of the application server 2 and its external storage device.
- the memory 11 is generally used to store an operating system installed on the application server 2 and various types of application software, such as program code of the unified thread pool processing system 200. Further, the memory 11 can also be used to temporarily store various types of data that have been output or are to be output.
- the processor 12 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments.
- the processor 12 is typically used to control the overall operation of the application server 2.
- the processor 12 is configured to run program code or process data stored in the memory 11, such as running the unified thread pool processing system 200 and the like.
- the network interface 13 may comprise a wireless network interface or a wired network interface, which is typically used to establish a communication connection between the application server 2 and other electronic devices.
- the present application proposes a unified thread pool processing system 200.
- FIG. 2 it is a program module diagram of the first embodiment of the unified thread pool processing system 200 of the present application.
- the unified thread pool processing system 200 includes a series of computer program instructions stored in the memory 11; when the computer program instructions are executed by the processor 12, the unified thread pool processing operations of the embodiments of the present application can be implemented.
- the unified thread pool processing system 200 can be divided into one or more modules based on the particular operations implemented by the various portions of the computer program instructions. For example, in FIG. 2, the unified thread pool processing system 200 can be divided into a query module 201, an acquisition module 202, a thread pool object creation module 203, and a thread pool queue creation module 204, wherein:
- the query module 201 is configured to query a persistent data table.
- the persistent data table needs to be read from a key-value database.
- the key-value database may be a Redis database.
- the persistent data table is used to store parameter information of each thread pool object in the execution queue, and the parameter information includes a thread pool name, a core thread number, a maximum thread number, a maximum queue length, and a queue type.
- the persistence method periodically takes a snapshot of the Redis data in memory according to a configured save rule and synchronizes the snapshot data to the hard disk.
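As an illustrative sketch only (not the patent's actual storage code), the persistent data table can be modeled in memory as a map from thread pool name to a parameter record; the pool name sensitiveLogPool and the field names corePoolSize, maxPoolSize, maxQueueLength, and queueType are assumed for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class PersistentDataTable {
    public static void main(String[] args) {
        // Hypothetical in-memory stand-in for the Redis-backed persistent data table:
        // each entry maps a thread pool name to its parameter record.
        Map<String, Map<String, String>> table = new HashMap<>();

        Map<String, String> params = new HashMap<>();
        params.put("corePoolSize", "2");
        params.put("maxPoolSize", "4");
        params.put("maxQueueLength", "100");
        params.put("queueType", "bounded");
        table.put("sensitiveLogPool", params);

        // Querying the table by thread pool name yields the parameters
        // needed to construct the corresponding thread pool object.
        Map<String, String> p = table.get("sensitiveLogPool");
        System.out.println(p.get("queueType")); // bounded
    }
}
```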
- the obtaining module 202 is configured to obtain, according to the persistent data table, parameters of each thread pool in the execution queue.
- the parameters of each thread pool include a thread pool name, a core thread number, a maximum thread number, a maximum queue length, and a queue type.
- the core threads counted by the core thread number (corePoolSize) will always survive, even if no tasks need to be executed.
- when a task needs to be added to the first thread pool, if the number of threads running in the first thread pool is less than the number of core threads, the first thread pool immediately creates a thread to run the task; if the number of threads running in the first thread pool is greater than or equal to the number of core threads, the task is placed in the task queue of the first thread pool.
- the maximum number of threads (maximumPoolSize) is used to indicate how many threads can be created in the thread pool.
- when a task needs to be added to the first thread pool, if the task queue of the first thread pool is full (that is, the maximum queue length has been reached) and the number of running threads is less than the maximum number of threads, a new thread is created to run the task;
- when a task needs to be added to the first thread pool, if the queue is full (i.e., the maximum queue length has been reached) and the number of running threads is greater than or equal to the maximum number of threads, the first thread pool will throw an exception.
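The creation and queuing rules above can be observed directly with the standard java.util.concurrent.ThreadPoolExecutor; the following minimal sketch (parameters chosen for illustration: one core thread, two maximum threads, a bounded queue of length one) blocks the workers so the growth is deterministic:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolGrowthDemo {
    public static void main(String[] args) {
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocking = () -> {
            try { release.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };
        // core = 1, max = 2, bounded task queue of length 1
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1));

        pool.execute(blocking); // running < core  -> a core thread is created
        pool.execute(blocking); // running >= core -> task goes into the queue
        pool.execute(blocking); // queue full, running < max -> a new thread is created

        int sizeAtPeak = pool.getPoolSize();        // 2
        int queuedAtPeak = pool.getQueue().size();  // 1
        System.out.println(sizeAtPeak + " threads, " + queuedAtPeak + " queued");

        release.countDown();
        pool.shutdown();
    }
}
```

A fourth execute() at this point would be rejected, since the queue is full and the maximum thread number has been reached.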
- the queue types include: direct submission queues, unbounded queues, and bounded queues.
- the direct submission queue may be a SynchronousQueue.
- the SynchronousQueue can be set as the default option for the work queue; it submits tasks directly to threads without queuing them. If no thread is immediately available to run a task, the attempt to queue the task fails, so a new thread is constructed. This strategy avoids locks when processing sets of requests that may have internal dependencies.
- the direct submission queue typically requires an unbounded maximum number of threads to avoid rejecting newly submitted tasks. This strategy permits unbounded thread growth when commands continue to arrive faster, on average, than the queue can process them.
- the bounded queue can be an ArrayBlockingQueue.
- the ArrayBlockingQueue is a bounded blocking queue based on an array structure. This queue sorts elements in FIFO (first-in, first-out) order.
- the unbounded queue can be a LinkedBlockingQueue.
- the LinkedBlockingQueue is a blocking queue based on a linked-list structure. This queue sorts elements in FIFO (first-in, first-out) order, and its throughput is usually higher than that of the ArrayBlockingQueue.
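The three queue types correspond to standard classes in java.util.concurrent; a minimal sketch comparing their capacities (the capacity 100 is an arbitrary illustrative choice):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class QueueTypesDemo {
    public static void main(String[] args) {
        // Direct submission queue: hands tasks straight to threads, holds none.
        BlockingQueue<Runnable> direct = new SynchronousQueue<>();
        // Bounded queue: array-based, fixed capacity, FIFO order.
        BlockingQueue<Runnable> bounded = new ArrayBlockingQueue<>(100);
        // Unbounded queue: linked-list based, FIFO order, no fixed capacity.
        BlockingQueue<Runnable> unbounded = new LinkedBlockingQueue<>();

        // A SynchronousQueue has no internal capacity at all.
        System.out.println(direct.remainingCapacity());    // 0
        System.out.println(bounded.remainingCapacity());   // 100
        System.out.println(unbounded.remainingCapacity()); // Integer.MAX_VALUE
    }
}
```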
- the thread pool object creation module 203 is configured to separately create each thread pool object according to the parameters of each thread pool.
- the thread pool is created through the java.util.concurrent.ThreadPoolExecutor class, and the specific creation method is:
- keepAliveTime indicates how long a thread will survive without executing tasks before it terminates.
- unit is the time unit of the parameter keepAliveTime; threadFactory is the thread factory, which is used to create threads; handler is the policy applied when a task is rejected.
- when the value of the handler is ThreadPoolExecutor.DiscardPolicy, the task is discarded, but no RejectedExecutionException is thrown;
- when the value of the handler is ThreadPoolExecutor.DiscardOldestPolicy, the task at the head of the queue is discarded, and then the rejected task is retried (repeating this process);
- when the value of the handler is ThreadPoolExecutor.CallerRunsPolicy, the task is processed by the calling thread.
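The rejection policies above can be demonstrated with a deliberately saturated ThreadPoolExecutor; this is a minimal sketch using the standard AbortPolicy (the default) and DiscardPolicy, with illustrative pool parameters:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionPolicyDemo {
    public static void main(String[] args) {
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocking = () -> {
            try { release.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };

        // Saturated pool: one thread, one queue slot, AbortPolicy throws on rejection.
        ThreadPoolExecutor abortPool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy());
        abortPool.execute(blocking);
        abortPool.execute(blocking);
        boolean rejected = false;
        try {
            abortPool.execute(blocking); // pool and queue both full -> rejected
        } catch (RejectedExecutionException e) {
            rejected = true;
        }

        // Same saturation with DiscardPolicy: the task is silently dropped.
        ThreadPoolExecutor discardPool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.DiscardPolicy());
        discardPool.execute(blocking);
        discardPool.execute(blocking);
        discardPool.execute(blocking); // discarded, no exception thrown
        int discardQueued = discardPool.getQueue().size(); // still 1

        System.out.println(rejected + " " + discardQueued); // true 1

        release.countDown();
        abortPool.shutdown();
        discardPool.shutdown();
    }
}
```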
- the thread pool queue creation module 204 is configured to create a thread pool queue according to the thread pool name and the thread pool object.
- the mapping relationship between each thread pool name and the corresponding thread pool object is saved as the thread pool queue.
- the mapping relationship between a thread pool name and the corresponding thread pool object may be represented as Map<ThreadPoolName, ThreadPool>, wherein the key of the Map is the thread pool name and the value is the thread pool object.
- the mapping relationship between a second thread pool name and the corresponding second thread pool object may be added to the thread pool queue by map.put(ThreadPoolName2, ThreadPool2).
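A minimal sketch of the thread pool queue as a plain Java Map, with the names ThreadPoolName1/ThreadPoolName2 taken from the text and all pool parameters chosen for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThreadPoolQueueDemo {
    public static void main(String[] args) {
        // The "thread pool queue": a map from thread pool name to pool object.
        Map<String, ThreadPoolExecutor> threadPoolQueue = new HashMap<>();

        ThreadPoolExecutor pool1 = new ThreadPoolExecutor(
                1, 2, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        ThreadPoolExecutor pool2 = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(100));

        threadPoolQueue.put("ThreadPoolName1", pool1);
        threadPoolQueue.put("ThreadPoolName2", pool2); // map.put(ThreadPoolName2, ThreadPool2)

        // Looking up by name returns the same pool object, which is what
        // enables unified management of every pool through one structure.
        ThreadPoolExecutor found = threadPoolQueue.get("ThreadPoolName2");
        System.out.println(found == pool2); // true

        pool1.shutdown();
        pool2.shutdown();
    }
}
```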
- in the second embodiment, the unified thread pool processing system 200 includes, in addition to the query module 201, the obtaining module 202, the thread pool object creation module 203, and the thread pool queue creation module 204 of the first embodiment, a management module 205.
- the management module 205 is configured to manage each of the thread pools through the thread pool queue.
- the management module 205 is specifically configured to:
- the management operation may include: acquiring the thread number of the first thread pool object, submitting a task to the first thread pool object, and closing the first thread pool object.
- the number of threads currently active in the thread pool object corresponding to a thread pool name may be obtained by using the thread pool name.
- the number of currently active threads may be obtained through a private volatile int poolSize field.
- the length of the current execution queue of the thread pool object corresponding to the thread pool name may be obtained by the thread pool name.
- the task object to be started may be submitted to the specified thread pool object by using a thread pool name, parameter information, and a task object to be started.
- the task object to be started may be submitted to the specified thread pool object by execute() or submit().
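The difference between execute() and submit() in the standard ExecutorService API can be sketched as follows (the computed value 21 * 2 is purely illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);

        // execute() takes a Runnable and returns nothing.
        pool.execute(() -> System.out.println("started via execute()"));

        // submit() additionally returns a Future for the task's result.
        Future<Integer> future = pool.submit(() -> 21 * 2);
        int result = future.get(); // blocks until the task completes

        System.out.println(result); // 42
        pool.shutdown();
    }
}
```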
- after receiving the task object to be started, the thread pool object either:
- creates a thread to run the task;
- puts the task into a queue; or
- throws an exception.
- the thread pool object corresponding to the thread pool name may be closed by the thread pool name.
- the thread pool object corresponding to the thread pool name may be closed by using shutdown() or shutdownNow().
- shutdown() does not terminate the thread pool immediately; it waits until all tasks in the task cache queue have finished executing, but it no longer accepts new tasks;
- shutdownNow() terminates the thread pool immediately, attempts to interrupt the tasks being executed, clears the task cache queue, and returns the tasks that have not yet been executed.
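A minimal sketch of shutdownNow() using the standard ThreadPoolExecutor API; the first task is blocked deliberately so that the second task remains in the cache queue and is returned unexecuted (pool parameters are illustrative):

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws Exception {
        CountDownLatch started = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        pool.execute(() -> {
            started.countDown();
            try { Thread.sleep(60_000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        pool.execute(() -> { }); // waits in the task cache queue
        started.await(); // ensure the first task is actually running

        // shutdownNow() interrupts the running task, clears the queue,
        // and returns the tasks that were never started.
        List<Runnable> neverRan = pool.shutdownNow();

        System.out.println(neverRan.size());   // 1
        System.out.println(pool.isShutdown()); // true
    }
}
```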
- the unified thread pool processing system of the present application obtains the parameters of each thread pool by querying the persistent data table and creates each thread pool object according to the parameters of each thread pool, thereby realizing unified creation of the thread pools;
- the thread pool queue saves the mapping relationship between each thread pool name and each thread pool object, and unified management of each thread pool object is implemented according to the thread pool queue, thereby reducing system resource consumption and improving the stability of the system.
- the present application also proposes a unified thread pool processing method.
- FIG. 4 is a schematic flowchart of the first embodiment of the unified thread pool processing method of the present application.
- the order of execution of the steps in the flowchart shown in FIG. 4 may be changed according to different requirements, and some steps may be omitted.
- Step S402: query the persistent data table.
- the persistent data table needs to be read from a key-value database.
- the key-value database may be a Redis database.
- the persistent data table is used to store parameter information of each thread pool object in the execution queue, and the parameter information includes a thread pool name, a core thread number, a maximum thread number, a maximum queue length, and a queue type.
- the persistence method periodically takes a snapshot of the Redis data in memory according to a configured save rule and synchronizes the snapshot data to the hard disk.
- Step S404: acquire, according to the persistent data table, the parameters of each thread pool in the execution queue, where the parameters of each thread pool include each thread pool name.
- the parameters of each thread pool include a thread pool name, a core thread number, a maximum thread number, a maximum queue length, and a queue type.
- the core threads counted by the core thread number (corePoolSize) will always survive, even if no tasks need to be executed.
- when a task needs to be added to the first thread pool, if the number of threads running in the first thread pool is less than the number of core threads, the first thread pool immediately creates a thread to run the task; if the number of threads running in the first thread pool is greater than or equal to the number of core threads, the task is placed in the task queue of the first thread pool.
- when the number of running threads is less than the number of core threads, the thread pool preferentially creates a new thread for processing.
- the maximum number of threads (maximumPoolSize) is used to indicate how many threads can be created in the thread pool:
- when a task needs to be added to the first thread pool, if the task queue of the first thread pool is full (that is, the maximum queue length has been reached) and the number of running threads is less than the maximum number of threads, a new thread is created to run the task;
- when a task needs to be added to the first thread pool, if the queue is full (i.e., the maximum queue length has been reached) and the number of running threads is greater than or equal to the maximum number of threads, the first thread pool will throw an exception.
- the queue types include: direct submission queues, unbounded queues, and bounded queues.
- the direct submission queue may be a SynchronousQueue.
- the SynchronousQueue can be set as the default option for the work queue; it submits tasks directly to threads without queuing them. If no thread is immediately available to run a task, the attempt to queue the task fails, so a new thread is constructed. This strategy avoids locks when processing sets of requests that may have internal dependencies.
- the direct submission queue typically requires an unbounded maximum number of threads to avoid rejecting newly submitted tasks. This strategy permits unbounded thread growth when commands continue to arrive faster, on average, than the queue can process them.
- the bounded queue can be an ArrayBlockingQueue.
- the ArrayBlockingQueue is a bounded blocking queue based on an array structure. This queue sorts elements in FIFO (first-in, first-out) order.
- the unbounded queue can be a LinkedBlockingQueue.
- the LinkedBlockingQueue is a blocking queue based on a linked-list structure. This queue sorts elements in FIFO (first-in, first-out) order, and its throughput is usually higher than that of the ArrayBlockingQueue.
- Step S406: respectively create each thread pool object according to the parameters of each thread pool.
- the thread pool is created through the java.util.concurrent.ThreadPoolExecutor class, and the specific creation method is:
- corePoolSize represents the core pool size.
- after the thread pool is created, by default there are no threads in the thread pool; the pool waits for a task to arrive before creating a thread to execute it, unless the prestartAllCoreThreads() or prestartCoreThread() method is called.
- after the thread pool is created, the number of threads in the thread pool is 0;
- when a task arrives, a thread is created to execute it;
- when the number of threads in the thread pool reaches the number of core threads, arriving tasks are placed in the cache queue.
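The default lazy creation and the prestart methods can be verified with the standard ThreadPoolExecutor API; a minimal sketch (a core thread number of 3 is chosen for illustration):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PrestartDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                3, 6, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        // By default no threads exist until the first task arrives.
        int before = pool.getPoolSize(); // 0

        // prestartAllCoreThreads() creates all core threads up front
        // and returns how many it started.
        int started = pool.prestartAllCoreThreads();
        int after = pool.getPoolSize(); // 3

        System.out.println(before + " " + started + " " + after); // 0 3 3
        pool.shutdown();
    }
}
```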
- maximumPoolSize represents the maximum number of threads in the thread pool, indicating how many threads can be created in the thread pool;
- keepAliveTime indicates how long a thread will survive without executing tasks before it terminates.
- the unit is the time unit of the parameter keepAliveTime.
- the threadFactory is the thread factory used to create threads.
- the handler is the policy applied when a task is rejected.
- when the value of the handler is ThreadPoolExecutor.DiscardPolicy, the task is discarded, but no RejectedExecutionException is thrown;
- when the value of the handler is ThreadPoolExecutor.DiscardOldestPolicy, the task at the head of the queue is discarded, and then the rejected task is retried (repeating this process);
- when the value of the handler is ThreadPoolExecutor.CallerRunsPolicy, the task is processed by the calling thread.
- Step S408: create a thread pool queue according to each thread pool name and each thread pool object.
- the mapping relationship between each thread pool name and the corresponding thread pool object is saved as the thread pool queue.
- the mapping relationship between a thread pool name and the corresponding thread pool object may be represented as Map<ThreadPoolName, ThreadPool>, wherein the key of the Map is the thread pool name and the value is the thread pool object.
- the mapping relationship between a second thread pool name and the corresponding second thread pool object may be added to the thread pool queue by map.put(ThreadPoolName2, ThreadPool2).
- FIG. 5 is a schematic flowchart of a second embodiment of a unified thread pool processing method of the present application.
- in the second embodiment, steps S502-S508 of the unified thread pool processing method are similar to steps S402-S408 of the first embodiment, except that the method further includes step S510.
- the method includes the following steps:
- Step S510: manage each thread pool through the thread pool queue.
- FIG. 6 is a schematic flowchart of a third embodiment of a unified thread pool processing method of the present application.
- the step of managing the thread pool by using the thread pool queue includes:
- Step 602: obtain a first thread pool name of a thread pool to be operated.
- Step 604: obtain the corresponding first thread pool object from the thread pool queue according to the first thread pool name.
- the obtained first thread pool name is taken as the variable ThreadPoolName and passed to the thread pool queue.
- Step 606: perform a management operation on the first thread pool object.
- the management operation may include: acquiring the thread number of the first thread pool object, submitting a task to the first thread pool object, and closing the first thread pool object.
- the number of threads currently active in the thread pool object corresponding to a thread pool name may be obtained by using the thread pool name.
- the number of currently active threads may be obtained through a private volatile int poolSize field.
- the length of the current execution queue of the thread pool object corresponding to the thread pool name may be obtained by the thread pool name.
- the task object to be started may be submitted to the specified thread pool object by using a thread pool name, parameter information, and a task object to be started.
- the task object to be started may be submitted to the specified thread pool object by execute() or submit().
- after receiving the task object to be started, the thread pool object either:
- creates a thread to run the task;
- puts the task into a queue; or
- throws an exception.
- the thread pool object corresponding to the thread pool name may be closed by the thread pool name.
- the thread pool object corresponding to the thread pool name may be closed by using shutdown() or shutdownNow().
- shutdown() does not terminate the thread pool immediately; it waits until all tasks in the task cache queue have finished executing, but it no longer accepts new tasks;
- shutdownNow() terminates the thread pool immediately, attempts to interrupt the tasks being executed, clears the task cache queue, and returns the tasks that have not yet been executed.
- the unified thread pool processing method of the present application obtains the parameters of each thread pool by querying the persistent data table and creates each thread pool object according to the parameters of each thread pool, thereby realizing unified creation of the thread pools;
- the thread pool queue saves the mapping relationship between each thread pool name and each thread pool object, and unified management of each thread pool object is implemented according to the thread pool queue, thereby reducing system resource consumption and improving the stability of the system.
- the foregoing embodiment method can be implemented by means of software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present application, in essence or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the various embodiments of the present application.
Abstract
The present invention relates to a unified thread pool processing method, comprising: querying a persistent data table; obtaining the parameters of each thread pool in an execution queue according to the persistent data table, the parameters of each thread pool including each thread pool name; respectively creating each thread pool object according to the parameters of each thread pool; creating a thread pool queue according to each thread pool name and each thread pool object; and managing each thread pool by means of the thread pool queue. The invention also relates to an application server and a computer readable storage medium. The unified thread pool processing method, application server, and computer readable storage medium provided by the present invention can implement unified management of each thread pool object, reduce system resource consumption, and improve system stability.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810102252.3 | 2018-02-01 | ||
CN201810102252.3A CN108345499B (zh) | 2018-02-01 | 2018-02-01 | 统一线程池处理方法、应用服务器及计算机可读存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019148734A1 true WO2019148734A1 (fr) | 2019-08-08 |
Family
ID=62958407
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/090909 WO2019148734A1 (fr) | 2018-02-01 | 2018-06-12 | Procédé de traitement de groupe de fils uniformes, serveur d'application et support d'informations lisible par ordinateur |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108345499B (fr) |
WO (1) | WO2019148734A1 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112965805A (zh) * | 2021-03-25 | 2021-06-15 | 兴业数字金融服务(上海)股份有限公司 | 基于内存映射文件的跨进程异步任务处理方法及系统 |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110865798B (zh) * | 2018-08-28 | 2023-07-21 | 中国移动通信集团浙江有限公司 | 一种线程池优化方法及系统 |
CN109582472B (zh) * | 2018-10-19 | 2021-05-18 | 华为技术有限公司 | 一种微服务处理方法及设备 |
CN109739583B (zh) * | 2018-12-13 | 2023-09-08 | 平安科技(深圳)有限公司 | 多线程并行运行的方法、装置、计算机设备以及存储介质 |
CN110109739A (zh) * | 2019-04-25 | 2019-08-09 | 北京奇艺世纪科技有限公司 | 一种多线程应用程序的关闭方法及装置 |
CN112114862B (zh) * | 2019-06-20 | 2023-12-22 | 普天信息技术有限公司 | spring boot实例并发处理方法及装置 |
CN110287013A (zh) * | 2019-06-26 | 2019-09-27 | 四川长虹电器股份有限公司 | 基于java多线程技术解决物联云端服务雪崩效应的方法 |
CN111078377B (zh) * | 2019-11-29 | 2023-04-07 | 易方信息科技股份有限公司 | 一种线程工作方法 |
CN111625332A (zh) * | 2020-05-21 | 2020-09-04 | 杭州安恒信息技术股份有限公司 | Java线程池拒绝策略执行方法、装置和计算机设备 |
CN111897643B (zh) * | 2020-08-05 | 2024-07-02 | 深圳鼎盛电脑科技有限公司 | 线程池配置系统、方法、装置和存储介质 |
CN112667385A (zh) * | 2021-01-15 | 2021-04-16 | 北京金和网络股份有限公司 | 一种云服务系统及其任务执行方法和装置及服务器 |
CN112835704A (zh) * | 2021-03-26 | 2021-05-25 | 中国工商银行股份有限公司 | 任务处理方法、线程池管理方法、装置和计算设备 |
CN114490112B (zh) * | 2021-12-20 | 2024-09-20 | 阿里巴巴(中国)有限公司 | 消息处理方法、设备及系统 |
CN114924849B (zh) * | 2022-04-27 | 2024-06-04 | 上海交通大学 | 一种工业控制系统高并发执行和资源调度方法及装置 |
CN116974730B (zh) * | 2023-09-22 | 2024-01-30 | 深圳联友科技有限公司 | 一种大批量任务处理方法 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101599027A (zh) * | 2009-06-30 | 2009-12-09 | 中兴通讯股份有限公司 | 一种线程池管理方法及其系统 |
CN103218264A (zh) * | 2013-03-26 | 2013-07-24 | 广东威创视讯科技股份有限公司 | 基于线程池的多线程有限状态机切换方法及装置 |
CN105760234A (zh) * | 2016-03-17 | 2016-07-13 | 联动优势科技有限公司 | 一种线程池管理方法及装置 |
US20170168843A1 (en) * | 2012-04-03 | 2017-06-15 | Microsoft Technology Licensing, Llc | Thread-agile execution of dynamic programming language programs |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6604125B1 (en) * | 1999-09-24 | 2003-08-05 | Sun Microsystems, Inc. | Mechanism for enabling a thread unaware or non thread safe application to be executed safely in a multi-threaded environment |
US20070229520A1 (en) * | 2006-03-31 | 2007-10-04 | Microsoft Corporation | Buffered Paint Systems |
US9397976B2 (en) * | 2009-10-30 | 2016-07-19 | International Business Machines Corporation | Tuning LDAP server and directory database |
CN101777008A (zh) * | 2009-12-31 | 2010-07-14 | 中兴通讯股份有限公司 | Thread pool implementation method and apparatus for a mobile terminal system |
CN103455377B (zh) * | 2013-08-06 | 2019-01-22 | 北京京东尚科信息技术有限公司 | System and method for managing service thread pools |
CN105159768A (zh) * | 2015-09-09 | 2015-12-16 | 浪潮集团有限公司 | Task management method and cloud data center management platform |
CN107450978A (zh) * | 2016-05-31 | 2017-12-08 | 北京京东尚科信息技术有限公司 | Thread management method and apparatus for distributed systems |
CN107463439A (zh) * | 2017-08-21 | 2017-12-12 | 山东浪潮通软信息科技有限公司 | Thread pool implementation method and apparatus |
-
2018
- 2018-02-01 CN CN201810102252.3A patent/CN108345499B/zh active Active
- 2018-06-12 WO PCT/CN2018/090909 patent/WO2019148734A1/fr active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101599027A (zh) * | 2009-06-30 | 2009-12-09 | 中兴通讯股份有限公司 | Thread pool management method and system |
US20170168843A1 (en) * | 2012-04-03 | 2017-06-15 | Microsoft Technology Licensing, Llc | Thread-agile execution of dynamic programming language programs |
CN103218264A (zh) * | 2013-03-26 | 2013-07-24 | 广东威创视讯科技股份有限公司 | Thread-pool-based multi-thread finite state machine switching method and apparatus |
CN105760234A (zh) * | 2016-03-17 | 2016-07-13 | 联动优势科技有限公司 | Thread pool management method and apparatus |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112965805A (zh) * | 2021-03-25 | 2021-06-15 | 兴业数字金融服务(上海)股份有限公司 | Cross-process asynchronous task processing method and system based on memory-mapped files |
CN112965805B (zh) * | 2021-03-25 | 2023-12-05 | 兴业数字金融服务(上海)股份有限公司 | Cross-process asynchronous task processing method and system based on memory-mapped files |
Also Published As
Publication number | Publication date |
---|---|
CN108345499A (zh) | 2018-07-31 |
CN108345499B (zh) | 2019-05-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019148734A1 (fr) | Uniform thread pool processing method, application server and computer-readable storage medium | |
US9940346B2 (en) | Two-level management of locks on shared resources | |
WO2020238737A1 (fr) | Database task processing method and apparatus, electronic device, and readable medium |
US9396010B2 (en) | Optimization of packet processing by delaying a processor from entering an idle state | |
WO2019179026A1 (fr) | Electronic device, method for automatically generating a cluster access domain name, and storage medium |
US8589537B2 (en) | Methods and computer program products for aggregating network application performance metrics by process pool | |
WO2019179027A1 (fr) | Electronic device, firewall provisioning verification method, system and storage medium |
CN108595282A (zh) | Implementation method for a high-concurrency message queue |
US9038093B1 (en) | Retrieving service request messages from a message queue maintained by a messaging middleware tool based on the origination time of the service request message | |
US9378246B2 (en) | Systems and methods of accessing distributed data | |
WO2021169275A1 (fr) | SDN network device access method and apparatus, computer device, and storage medium |
CN110445828B (zh) | Redis-based distributed data processing method and related devices |
CN109842621A (zh) | Method and terminal for reducing the number of stored tokens |
CN110851276A (zh) | Service request processing method and apparatus, server, and storage medium |
WO2022142008A1 (fr) | Data processing method and apparatus, electronic device, and storage medium |
CN112217849B (zh) | Task scheduling method and system in an SD-WAN system, and computer device |
US10310857B2 (en) | Systems and methods facilitating multi-word atomic operation support for system on chip environments | |
CN111597056A (zh) | Distributed scheduling method, system, storage medium and device |
JP5884566B2 (ja) | Batch processing system, progress confirmation device, progress confirmation method, and program |
US20070180115A1 (en) | System and method for self-configuring multi-type and multi-location result aggregation for large cross-platform information sets | |
CN107632893B (zh) | Message queue processing method and apparatus |
CN112100186B (zh) | Data processing method and apparatus based on a distributed system, and computer device |
CN115774724A (zh) | Concurrent request processing method and apparatus, electronic device, and storage medium |
CN113778674A (zh) | Lock-free implementation method for load balancing device configuration management on multi-core systems |
US8566467B2 (en) | Data processing system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18903511 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12.11.2020) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18903511 Country of ref document: EP Kind code of ref document: A1 |