WO2019148734A1 - Uniform thread pool processing method, application server, and computer readable storage medium - Google Patents
- Publication number
- WO2019148734A1 WO2019148734A1 PCT/CN2018/090909 CN2018090909W WO2019148734A1 WO 2019148734 A1 WO2019148734 A1 WO 2019148734A1 CN 2018090909 W CN2018090909 W CN 2018090909W WO 2019148734 A1 WO2019148734 A1 WO 2019148734A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- thread pool
- thread
- queue
- parameters
- pool
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/485—Task life-cycle, e.g. stopping, restarting, resuming execution
Definitions
- The present application relates to the field of data analysis technologies, and in particular, to a unified thread pool processing method, an application server, and a computer readable storage medium.
- At present, thread pools are widely used in Internet technology. By introducing a thread pool, threads can be managed effectively, the total number of threads can be capped, and the overhead of creating and destroying threads can be reduced.
- In practical applications, as services develop, many services in a system use thread pools, such as re-warehousing of sensitive logs after bulk storage errors, MQCP log receiving queues, and public cleanup services.
- The role of a thread pool is to limit the number of execution threads in the system. Depending on the system environment, the number of threads can be set automatically or manually to achieve the best operating effect: too few threads waste system resources, while too many cause congestion and low efficiency.
- As a system makes more and more calls to other systems, thread pool technology faces a new challenge: if thread pools are not managed uniformly, thread disorder or excessive resource consumption is likely to occur under high concurrency, seriously affecting system stability.
- In view of this, the present application provides a unified thread pool processing method, an application server, and a computer readable storage medium, to solve the problem that thread disorder or resource consumption under high concurrency seriously affects system stability.
- First, to achieve the above object, the present application provides a unified thread pool processing method, the method comprising the steps of:
- querying a persistent data table; obtaining, according to the persistent data table, parameters of each thread pool in an execution queue, where the parameters of each thread pool include each thread pool name; creating each thread pool object according to the parameters of each thread pool; creating a thread pool queue according to each thread pool name and each thread pool object; and managing each thread pool through the thread pool queue.
- In addition, the present application further provides an application server, including a memory and a processor, where the memory stores a unified thread pool processing system executable on the processor, and the unified thread pool processing system, when executed by the processor, implements the steps of the unified thread pool processing method described above.
- Further, the present application provides a computer readable storage medium storing a unified thread pool processing system, the unified thread pool processing system being executable by at least one processor to cause the at least one processor to perform the steps of the unified thread pool processing method described above.
- Compared with the prior art, the unified thread pool processing method, application server, and computer readable storage medium proposed by the present application obtain the parameters of each thread pool by querying a persistent data table and create each thread pool object according to the parameters of each thread pool, thereby implementing unified creation of thread pools; by creating a thread pool queue that saves the mapping relationship between each thread pool name and each thread pool object, unified management of each thread pool object is implemented according to the thread pool queue, which reduces system resource consumption and improves system stability.
- FIG. 1 is a schematic diagram of an optional hardware architecture of an application server of the present application;
- FIG. 2 is a schematic diagram of the program modules of a first embodiment of a unified thread pool processing system of the present application;
- FIG. 3 is a schematic diagram of the program modules of a second embodiment of a unified thread pool processing system of the present application;
- FIG. 4 is a schematic flowchart of a first embodiment of a unified thread pool processing method of the present application;
- FIG. 5 is a schematic flowchart of a second embodiment of a unified thread pool processing method of the present application;
- FIG. 6 is a schematic flowchart of a third embodiment of a unified thread pool processing method of the present application.
- FIG. 1 is a schematic diagram of an optional hardware architecture of the application server 2 of the present application.
- The application server 2 may include, but is not limited to, a memory 11, a processor 12, and a network interface 13 that are communicably connected to each other through a system bus. It should be noted that FIG. 1 shows only the application server 2 with components 11-13, but it should be understood that not all illustrated components are required; more or fewer components may be implemented instead.
- The application server 2 may be a computing device such as a rack server, a blade server, a tower server, or a cabinet server.
- The application server 2 may be an independent server or a server cluster composed of multiple servers.
- The memory 11 includes at least one type of readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like.
- The memory 11 may be an internal storage unit of the application server 2, such as a hard disk or memory of the application server 2.
- The memory 11 may also be an external storage device of the application server 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the application server 2.
- The memory 11 may also include both the internal storage unit of the application server 2 and its external storage device.
- The memory 11 is generally used to store an operating system installed on the application server 2 and various types of application software, such as the program code of the unified thread pool processing system 200. Further, the memory 11 may also be used to temporarily store various types of data that have been output or are to be output.
- The processor 12 may, in some embodiments, be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip.
- The processor 12 is typically used to control the overall operation of the application server 2.
- In this embodiment, the processor 12 is configured to run program code or process data stored in the memory 11, for example, to run the unified thread pool processing system 200.
- The network interface 13 may comprise a wireless network interface or a wired network interface and is typically used to establish a communication connection between the application server 2 and other electronic devices.
- Thus far, the present application proposes a unified thread pool processing system 200.
- FIG. 2 is a program module diagram of the first embodiment of the unified thread pool processing system 200 of the present application.
- The unified thread pool processing system 200 includes a series of computer program instructions stored in the memory 11; when these computer program instructions are executed by the processor 12, the unified thread pool processing operations of the embodiments of the present application can be implemented.
- The unified thread pool processing system 200 can be divided into one or more modules based on the particular operations implemented by the various portions of the computer program instructions. For example, in FIG. 2, the unified thread pool processing system 200 can be divided into a query module 201, an obtaining module 202, a thread pool object creation module 203, and a thread pool queue creation module 204, wherein:
- The query module 201 is configured to query a persistent data table.
- The persistent data table needs to be read from a key-value database.
- The key-value database may be a Redis database.
- The persistent data table is used to store the parameter information of each thread pool object in the execution queue, where the parameter information includes a thread pool name, a number of core threads, a maximum number of threads, a maximum queue length, and a queue type.
- The persistence method periodically takes a snapshot of the Redis data in memory according to a certain saving rule and synchronizes the snapshot data to the hard disk.
- The obtaining module 202 is configured to obtain, according to the persistent data table, the parameters of each thread pool in the execution queue.
- The parameters of each thread pool include a thread pool name, a number of core threads, a maximum number of threads, a maximum queue length, and a queue type.
- The core threads specified by the core thread number (corePoolSize) always remain alive, even when no tasks need to be executed.
- When a task needs to be added to a first thread pool, if the number of threads running in the first thread pool is less than the number of core threads, the first thread pool immediately creates a thread to run the task; if the number of running threads is greater than or equal to the number of core threads, the task is placed in the task queue of the first thread pool.
- The maximum number of threads (maximumPoolSize) indicates the maximum number of threads that can be created in the thread pool.
- When a task needs to be added to the first thread pool, if the task queue of the first thread pool is full (that is, the maximum queue length has been reached) and the number of running threads is less than the maximum number of threads, a new thread is created to run the task;
- When a task needs to be added to the first thread pool, if the queue is full (that is, the maximum queue length has been reached) and the number of running threads is greater than or equal to the maximum number of threads, the first thread pool throws an exception.
- The queue types include direct submission queues, unbounded queues, and bounded queues.
- The direct submission queue may be a SynchronousQueue.
- A SynchronousQueue can be set as the default option for the work queue; tasks are submitted directly to a thread without being queued. If no thread is immediately available to run a task, the attempt to queue the task fails, so a new thread is constructed. This strategy avoids locks when handling sets of requests that may have internal dependencies.
- A direct submission queue typically requires an unbounded maximum number of threads to avoid rejecting newly submitted tasks. This strategy allows the number of threads to grow without bound when commands arrive, on average, faster than they can be processed.
- The bounded queue may be an ArrayBlockingQueue.
- An ArrayBlockingQueue is a bounded blocking queue based on an array structure. This queue sorts elements according to the FIFO (first in, first out) principle.
- The unbounded queue may be a LinkedBlockingQueue.
- A LinkedBlockingQueue is a blocking queue based on a linked list structure. This queue sorts elements in FIFO (first in, first out) order, and its throughput is usually higher than that of an ArrayBlockingQueue.
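- The three queue types above correspond to standard classes in java.util.concurrent. The following sketch illustrates their different behavior when offering tasks; the capacities and loop counts are illustrative, not taken from the application:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class QueueTypes {
    public static void main(String[] args) {
        // Direct submission queue: holds no elements, so offer() fails unless
        // a consumer is already waiting; a pool would then create a new thread.
        BlockingQueue<Runnable> direct = new SynchronousQueue<>();
        System.out.println("SynchronousQueue accepted task: " + direct.offer(() -> {})); // false

        // Bounded queue: array-based, FIFO, fixed capacity (here 2).
        BlockingQueue<Runnable> bounded = new ArrayBlockingQueue<>(2);
        bounded.offer(() -> {});
        bounded.offer(() -> {});
        System.out.println("ArrayBlockingQueue rejected third task: " + !bounded.offer(() -> {})); // true

        // Unbounded queue: linked-list-based, FIFO, grows without limit,
        // so maximumPoolSize never takes effect with this queue.
        BlockingQueue<Runnable> unbounded = new LinkedBlockingQueue<>();
        for (int i = 0; i < 10_000; i++) unbounded.offer(() -> {});
        System.out.println("LinkedBlockingQueue size: " + unbounded.size()); // 10000
    }
}
```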
- the thread pool object creation module 203 is configured to separately create each thread pool object according to the parameters of each thread pool.
- The thread pool is created through the java.util.concurrent.ThreadPoolExecutor class, whose constructor parameters are as follows:
- keepAliveTime indicates how long a thread may remain idle, with no task to execute, before it terminates.
- unit is the time unit of the parameter keepAliveTime; threadFactory is the thread factory, which is used to create threads; handler is the policy applied when a task is rejected.
- When the value of handler is ThreadPoolExecutor.DiscardPolicy, the rejected task is discarded, but no RejectedExecutionException is thrown;
- When the value of handler is ThreadPoolExecutor.DiscardOldestPolicy, the task at the head of the queue is discarded, and submission of the rejected task is then retried (repeating this process);
- When the value of handler is ThreadPoolExecutor.CallerRunsPolicy, the rejected task is processed by the calling thread.
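- The parameters described above map onto the standard ThreadPoolExecutor constructor. A minimal sketch follows; the concrete values (2 core threads, 4 maximum, 60-second keep-alive, queue capacity 8, the thread name) are illustrative assumptions, not taken from the application:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolCreation {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                  // corePoolSize: core threads stay alive even when idle
                4,                                  // maximumPoolSize: upper bound on created threads
                60L, TimeUnit.SECONDS,              // keepAliveTime + unit: idle time before a non-core thread terminates
                new ArrayBlockingQueue<>(8),        // work queue: bounded queue with maximum length 8
                r -> new Thread(r, "unified-pool"), // threadFactory: creates the pool's threads
                new ThreadPoolExecutor.CallerRunsPolicy()); // handler: rejected tasks run on the caller

        System.out.println(pool.getCorePoolSize());    // 2
        System.out.println(pool.getMaximumPoolSize()); // 4
        pool.shutdown();
    }
}
```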
- The thread pool queue creation module 204 is configured to create a thread pool queue according to each thread pool name and each thread pool object.
- The mapping relationship between each thread pool name and the corresponding thread pool object is saved as the thread pool queue.
- The mapping relationship between a thread pool name and the corresponding thread pool object may be represented as Map<ThreadPoolName, ThreadPool>, where the key of the Map is the thread pool name and the value is the thread pool object.
- The mapping relationship between a second thread pool name and the corresponding second thread pool object may be added to the thread pool queue by map.put(ThreadPoolName2, ThreadPool2).
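- The thread pool queue described above can be sketched as a plain Map keyed by pool name. The pool names and parameter values below are hypothetical examples, not identifiers from the application:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolQueue {
    public static void main(String[] args) {
        // Thread pool queue: maps each thread pool name to its pool object.
        Map<String, ThreadPoolExecutor> poolQueue = new HashMap<>();

        ThreadPoolExecutor logPool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        ThreadPoolExecutor cleanupPool = new ThreadPoolExecutor(
                1, 2, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        // Save the name -> object mappings, as with map.put(ThreadPoolName2, ThreadPool2).
        poolQueue.put("mqcpLogPool", logPool);
        poolQueue.put("publicCleanupPool", cleanupPool);

        // Look up a pool object by its name for later management operations.
        ThreadPoolExecutor found = poolQueue.get("mqcpLogPool");
        System.out.println(found == logPool); // true

        poolQueue.values().forEach(ThreadPoolExecutor::shutdown);
    }
}
```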
- In the second embodiment, the unified thread pool processing system 200 includes the query module 201, the obtaining module 202, the thread pool object creation module 203, and the thread pool queue creation module 204 of the first embodiment, and further includes a management module 205.
- The management module 205 is configured to manage each thread pool through the thread pool queue.
- The management module 205 is specifically configured to:
- The management operation may include: acquiring the thread count of the first thread pool object, submitting a task to the first thread pool object, and closing the first thread pool object.
- The number of currently active threads of the thread pool object corresponding to a thread pool name may be obtained by the thread pool name.
- The number of currently active threads of the thread pool object corresponding to the thread pool name may be read from the private volatile int poolSize field.
- The length of the current execution queue of the thread pool object corresponding to a thread pool name may also be obtained by the thread pool name.
- A task object to be started may be submitted to a specified thread pool object by providing a thread pool name, parameter information, and the task object to be started.
- The task object to be started may be submitted to the specified thread pool object through execute() or submit().
- After receiving the task object to be started, the thread pool object either:
- creates a thread to run the task;
- puts the task into a queue; or
- throws an exception.
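- The three outcomes above follow directly from corePoolSize, the work queue, and maximumPoolSize. A sketch with illustrative sizes (1 core thread, 1 maximum thread, queue capacity 1) that forces each outcome in turn; the default AbortPolicy handler is assumed:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SubmitOutcomes {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1));
        CountDownLatch release = new CountDownLatch(1);

        pool.execute(() -> {        // outcome 1: a thread is created to run the task
            try { release.await(); } catch (InterruptedException ignored) {}
        });
        pool.execute(() -> {});     // outcome 2: the only thread is busy, so the task is queued
        try {
            pool.execute(() -> {}); // outcome 3: queue full and maximum threads reached
        } catch (RejectedExecutionException e) {
            System.out.println("task rejected"); // default AbortPolicy throws
        }

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```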
- The thread pool object corresponding to a thread pool name may be closed by the thread pool name.
- The thread pool object corresponding to the thread pool name may be closed through shutdown() or shutdownNow().
- shutdown() does not terminate the thread pool immediately; it waits until all tasks in the task cache queue have finished executing, but it no longer accepts new tasks;
- shutdownNow() terminates the thread pool immediately, attempts to interrupt the tasks being executed, clears the task cache queue, and returns the tasks that have not yet been executed.
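- The difference between the two shutdown calls can be sketched as follows; the task bodies and counts are illustrative:

```java
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        for (int i = 0; i < 5; i++) {
            pool.execute(() -> {
                try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            });
        }

        // shutdown(): stop accepting new tasks, but let queued tasks finish.
        pool.shutdown();
        System.out.println("still accepting tasks: " + !pool.isShutdown()); // false

        // shutdownNow(): interrupt the running task, clear the cache queue,
        // and return the tasks that were never started.
        List<Runnable> neverStarted = pool.shutdownNow();
        System.out.println("unexecuted tasks returned: " + neverStarted.size());
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```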
- The unified thread pool processing system of the present application obtains the parameters of each thread pool by querying the persistent data table and creates each thread pool object according to the parameters of each thread pool, thereby implementing unified creation of thread pools;
- the thread pool queue saves the mapping relationship between each thread pool name and each thread pool object, and unified management of each thread pool object is implemented according to the thread pool queue, thereby reducing system resource consumption and improving system stability.
- The present application also proposes a unified thread pool processing method.
- FIG. 4 is a schematic flowchart of the first embodiment of the unified thread pool processing method of the present application.
- In this embodiment, the order of execution of the steps in the flowchart shown in FIG. 4 may be changed according to different requirements, and some steps may be omitted.
- Step S402: query the persistent data table.
- The persistent data table needs to be read from a key-value database.
- The key-value database may be a Redis database.
- The persistent data table is used to store the parameter information of each thread pool object in the execution queue, where the parameter information includes a thread pool name, a number of core threads, a maximum number of threads, a maximum queue length, and a queue type.
- The persistence method periodically takes a snapshot of the Redis data in memory according to a certain saving rule and synchronizes the snapshot data to the hard disk.
- Step S404: obtain, according to the persistent data table, the parameters of each thread pool in the execution queue, where the parameters of each thread pool include each thread pool name.
- The parameters of each thread pool include a thread pool name, a number of core threads, a maximum number of threads, a maximum queue length, and a queue type.
- The core threads specified by the core thread number (corePoolSize) always remain alive, even when no tasks need to be executed.
- When a task needs to be added to a first thread pool, if the number of threads running in the first thread pool is less than the number of core threads, the first thread pool immediately creates a thread to run the task; if the number of running threads is greater than or equal to the number of core threads, the task is placed in the task queue of the first thread pool.
- As long as the number of threads is below the core thread count, the thread pool preferentially creates a new thread to handle an arriving task.
- The maximum number of threads (maximumPoolSize) indicates the maximum number of threads that can be created in the thread pool:
- When a task needs to be added to the first thread pool, if the task queue of the first thread pool is full (that is, the maximum queue length has been reached) and the number of running threads is less than the maximum number of threads, a new thread is created to run the task;
- When a task needs to be added to the first thread pool, if the queue is full (that is, the maximum queue length has been reached) and the number of running threads is greater than or equal to the maximum number of threads, the first thread pool throws an exception.
- The queue types include direct submission queues, unbounded queues, and bounded queues.
- The direct submission queue may be a SynchronousQueue.
- A SynchronousQueue can be set as the default option for the work queue; tasks are submitted directly to a thread without being queued. If no thread is immediately available to run a task, the attempt to queue the task fails, so a new thread is constructed. This strategy avoids locks when handling sets of requests that may have internal dependencies.
- A direct submission queue typically requires an unbounded maximum number of threads to avoid rejecting newly submitted tasks. This strategy allows the number of threads to grow without bound when commands arrive, on average, faster than they can be processed.
- The bounded queue may be an ArrayBlockingQueue.
- An ArrayBlockingQueue is a bounded blocking queue based on an array structure. This queue sorts elements according to the FIFO (first in, first out) principle.
- The unbounded queue may be a LinkedBlockingQueue.
- A LinkedBlockingQueue is a blocking queue based on a linked list structure. This queue sorts elements in FIFO (first in, first out) order, and its throughput is usually higher than that of an ArrayBlockingQueue.
- Step S406: create each thread pool object according to the parameters of each thread pool.
- Specifically, the thread pool is created through the java.util.concurrent.ThreadPoolExecutor class, whose constructor parameters are as follows:
- corePoolSize represents the core pool size.
- After a thread pool is created, by default there are no threads in it; threads are created to execute tasks only as tasks arrive, unless the prestartAllCoreThreads() or prestartCoreThread() method is called.
- That is, after the thread pool is created, the number of threads in the pool is 0.
- When a task arrives, a thread is created to execute it.
- Once the number of threads in the thread pool reaches the number of core threads, arriving tasks are placed in the cache queue;
- maximumPoolSize represents the maximum number of threads in the thread pool, indicating the maximum number of threads that can be created;
- keepAliveTime indicates how long a thread may remain idle, with no task to execute, before it terminates.
- unit is the time unit of the parameter keepAliveTime.
- threadFactory is the thread factory, which is used to create threads.
- handler is the policy applied when a task is rejected.
- When the value of handler is ThreadPoolExecutor.DiscardPolicy, the rejected task is discarded, but no RejectedExecutionException is thrown;
- When the value of handler is ThreadPoolExecutor.DiscardOldestPolicy, the task at the head of the queue is discarded, and submission of the rejected task is then retried (repeating this process);
- When the value of handler is ThreadPoolExecutor.CallerRunsPolicy, the rejected task is processed by the calling thread.
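- The handler policies above can be compared in a short sketch. The pool sizes are illustrative; a latch keeps the single worker busy so that a third submission is always rejected:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionPolicies {
    public static void main(String[] args) {
        CountDownLatch release = new CountDownLatch(1);

        // DiscardPolicy: the rejected task simply vanishes, no exception is thrown.
        ThreadPoolExecutor discarding = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.DiscardPolicy());
        discarding.execute(() -> { try { release.await(); } catch (InterruptedException ignored) {} });
        discarding.execute(() -> {});                          // queued
        discarding.execute(() -> System.out.println("never runs")); // silently discarded

        // CallerRunsPolicy: the rejected task runs on the submitting thread instead.
        ThreadPoolExecutor callerRuns = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.CallerRunsPolicy());
        callerRuns.execute(() -> { try { release.await(); } catch (InterruptedException ignored) {} });
        callerRuns.execute(() -> {});                          // queued
        callerRuns.execute(() ->                               // runs on the caller's thread
                System.out.println("ran on: " + Thread.currentThread().getName()));

        release.countDown();
        discarding.shutdown();
        callerRuns.shutdown();
    }
}
```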
- Step S408: create a thread pool queue according to each thread pool name and each thread pool object.
- The mapping relationship between each thread pool name and the corresponding thread pool object is saved as the thread pool queue.
- The mapping relationship between a thread pool name and the corresponding thread pool object may be represented as Map<ThreadPoolName, ThreadPool>, where the key of the Map is the thread pool name and the value is the thread pool object.
- The mapping relationship between a second thread pool name and the corresponding second thread pool object may be added to the thread pool queue by map.put(ThreadPoolName2, ThreadPool2).
- FIG. 5 is a schematic flowchart of the second embodiment of the unified thread pool processing method of the present application.
- In the second embodiment, steps S502-S508 of the unified thread pool processing method are similar to steps S402-S408 of the first embodiment; the difference is that the method further includes step S510.
- The method includes the following steps:
- Step S510: manage each thread pool through the thread pool queue.
- FIG. 6 is a schematic flowchart of the third embodiment of the unified thread pool processing method of the present application.
- In the third embodiment, the step of managing each thread pool through the thread pool queue includes:
- Step S602: obtain the first thread pool name of a thread pool to be operated on.
- Step S604: obtain the corresponding first thread pool object from the thread pool queue according to the first thread pool name.
- Specifically, the obtained first thread pool name is taken as the variable ThreadPoolName and passed to the thread pool queue.
- Step S606: perform a management operation on the first thread pool object.
- The management operation may include: acquiring the thread count of the first thread pool object, submitting a task to the first thread pool object, and closing the first thread pool object.
- Specifically, the number of currently active threads of the thread pool object corresponding to a thread pool name may be obtained by the thread pool name.
- The number of currently active threads of the thread pool object corresponding to the thread pool name may be read from the private volatile int poolSize field.
- The length of the current execution queue of the thread pool object corresponding to a thread pool name may also be obtained by the thread pool name.
- A task object to be started may be submitted to a specified thread pool object by providing a thread pool name, parameter information, and the task object to be started.
- The task object to be started may be submitted to the specified thread pool object through execute() or submit().
- After receiving the task object to be started, the thread pool object either:
- creates a thread to run the task;
- puts the task into a queue; or
- throws an exception.
- The thread pool object corresponding to a thread pool name may be closed by the thread pool name.
- The thread pool object corresponding to the thread pool name may be closed through shutdown() or shutdownNow().
- shutdown() does not terminate the thread pool immediately; it waits until all tasks in the task cache queue have finished executing, but it no longer accepts new tasks;
- shutdownNow() terminates the thread pool immediately, attempts to interrupt the tasks being executed, clears the task cache queue, and returns the tasks that have not yet been executed.
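- The management queries described above (active thread count, execution queue length) correspond to public ThreadPoolExecutor getters; the private volatile int poolSize field itself is internal to the class. A sketch with illustrative sizes, using a latch to make the counts deterministic:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolMonitoring {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        CountDownLatch started = new CountDownLatch(2);
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 3; i++) {   // 2 tasks run, 1 waits in the queue
            pool.execute(() -> {
                started.countDown();
                try { release.await(); } catch (InterruptedException ignored) {}
            });
        }
        started.await();                 // wait until both workers are busy

        System.out.println("active threads: " + pool.getActiveCount()); // 2
        System.out.println("pool size: " + pool.getPoolSize());         // 2
        System.out.println("queue length: " + pool.getQueue().size());  // 1

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```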
- The unified thread pool processing method of the present application obtains the parameters of each thread pool by querying the persistent data table and creates each thread pool object according to the parameters of each thread pool, thereby implementing unified creation of thread pools;
- the thread pool queue saves the mapping relationship between each thread pool name and each thread pool object, and unified management of each thread pool object is implemented according to the thread pool queue, thereby reducing system resource consumption and improving system stability.
- The method of the foregoing embodiments can be implemented by software plus a necessary general-purpose hardware platform, or by hardware alone, but in many cases the former is preferable.
- Based on this understanding, the part of the technical solution of the present application that is essential or that contributes to the prior art may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), which includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present application.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Disclosed in the present application is a uniform thread pool processing method, comprising: querying a persistent data table; obtaining a parameter of each thread pool in an execution queue according to the persistent data table, wherein the parameter of each thread pool comprises each thread pool name; respectively creating each thread pool object according to the parameter of each thread pool; creating a thread pool queue according to each thread pool name and each thread pool object; and managing each thread pool by means of the thread pool queue. The present application also provides an application server and a computer readable storage medium. The uniform thread pool processing method, application server, and computer readable storage medium provided by the present application can implement uniform management of each thread pool object, reduce system resource consumption, and improve system stability.
Description
This application claims priority to Chinese Patent Application No. 201810102252.3, filed with the Chinese Patent Office on February 1, 2018 and entitled "Unified Thread Pool Processing Method, Application Server, and Computer Readable Storage Medium", the entire contents of which are incorporated herein by reference.
The present application relates to the field of data analysis technologies, and in particular, to a unified thread pool processing method, an application server, and a computer readable storage medium.
At present, thread pools are widely used in Internet technology. By introducing a thread pool, threads can be managed effectively, the total number of threads can be capped, and the overhead of creating and destroying threads can be reduced. In practical applications, as services develop, many services in a system use thread pools, such as re-warehousing of sensitive logs after bulk storage errors, MQCP log receiving queues, and public cleanup services. The role of a thread pool is to limit the number of execution threads in the system. Depending on the system environment, the number of threads can be set automatically or manually to achieve the best operating effect: too few threads waste system resources, while too many cause congestion and low efficiency.
As a system makes more and more calls to other systems, thread pool technology faces a new challenge: if thread pools are not managed uniformly, thread disorder or excessive resource consumption is likely to occur under high concurrency, seriously affecting system stability.
Summary of the Invention
In view of this, the present application provides a unified thread pool processing method, an application server, and a computer readable storage medium, to solve the problem that thread disorder or resource consumption under high concurrency seriously affects system stability.
First, to achieve the above object, the present application provides a unified thread pool processing method, the method comprising the steps of:
querying a persistent data table;
obtaining, according to the persistent data table, parameters of each thread pool in an execution queue, where the parameters of each thread pool include each thread pool name;
creating each thread pool object according to the parameters of each thread pool;
creating a thread pool queue according to each thread pool name and each thread pool object; and
managing each thread pool through the thread pool queue.
In addition, to achieve the above object, the present application further provides an application server, comprising a memory and a processor, the memory storing a unified thread pool processing system executable on the processor, the unified thread pool processing system implementing the steps of the unified thread pool processing method described above when executed by the processor.
Further, to achieve the above object, the present application further provides a computer readable storage medium storing a unified thread pool processing system, the unified thread pool processing system being executable by at least one processor to cause the at least one processor to perform the steps of the unified thread pool processing method described above.
Compared with the prior art, the unified thread pool processing method, application server, and computer readable storage medium proposed by the present application obtain the parameters of each thread pool by querying a persistent data table and create each thread pool object according to those parameters, achieving unified creation of the thread pools; by creating a thread pool queue that stores the mapping between each thread pool name and each thread pool object, unified management of each thread pool object is achieved according to the thread pool queue, reducing system resource consumption and improving system stability.
FIG. 1 is a schematic diagram of an optional hardware architecture of the application server of the present application;
FIG. 2 is a schematic diagram of the program modules of a first embodiment of the unified thread pool processing system of the present application;
FIG. 3 is a schematic diagram of the program modules of a second embodiment of the unified thread pool processing system of the present application;
FIG. 4 is a schematic flowchart of a first embodiment of the unified thread pool processing method of the present application;
FIG. 5 is a schematic flowchart of a second embodiment of the unified thread pool processing method of the present application;
FIG. 6 is a schematic flowchart of a third embodiment of the unified thread pool processing method of the present application.
Reference numerals:
Application server | 2 |
Memory | 11 |
Processor | 12 |
Network interface | 13 |
Unified thread pool processing system | 200 |
Query module | 201 |
Acquisition module | 202 |
Thread pool object creation module | 203 |
Thread pool queue creation module | 204 |
Management module | 205 |
The realization of the objects, functional features, and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
In order to make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the application and are not intended to limit it. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
It should be noted that descriptions involving "first", "second", and the like in the present application are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments may be combined with each other, provided that the combination can be realized by a person of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be realized, such a combination should be considered not to exist and is not within the scope of protection claimed by the present application.
Referring to FIG. 1, which is a schematic diagram of an optional hardware architecture of the application server 2 of the present application.
In this embodiment, the application server 2 may include, but is not limited to, a memory 11, a processor 12, and a network interface 13 that are communicably connected to each other through a system bus. It should be noted that FIG. 1 shows only the application server 2 with components 11-13, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
The application server 2 may be a computing device such as a rack server, a blade server, a tower server, or a cabinet server; it may be an independent server or a server cluster composed of multiple servers.
The memory 11 includes at least one type of readable storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 11 may be an internal storage unit of the application server 2, such as a hard disk or memory of the application server 2. In other embodiments, the memory 11 may also be an external storage device of the application server 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the application server 2. Of course, the memory 11 may also include both the internal storage unit of the application server 2 and its external storage device. In this embodiment, the memory 11 is generally used to store the operating system and various application software installed on the application server 2, such as the program code of the unified thread pool processing system 200. In addition, the memory 11 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 12 may, in some embodiments, be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 12 is typically used to control the overall operation of the application server 2. In this embodiment, the processor 12 is configured to run the program code or process the data stored in the memory 11, such as running the unified thread pool processing system 200.
The network interface 13 may include a wireless network interface or a wired network interface, and is typically used to establish a communication connection between the application server 2 and other electronic devices.
The hardware structure and functions of the devices related to the present application have now been described in detail. Hereinafter, various embodiments of the present application will be presented based on the above description.
First, the present application proposes a unified thread pool processing system 200.
Referring to FIG. 2, which is a program module diagram of the first embodiment of the unified thread pool processing system 200 of the present application.
In this embodiment, the unified thread pool processing system 200 includes a series of computer program instructions stored in the memory 11; when these computer program instructions are executed by the processor 12, the unified thread pool processing operations of the embodiments of the present application can be implemented. In some embodiments, the unified thread pool processing system 200 may be divided into one or more modules based on the particular operations implemented by the various portions of the computer program instructions. For example, in FIG. 2, the unified thread pool processing system 200 may be divided into a query module 201, an acquisition module 202, a thread pool object creation module 203, and a thread pool queue creation module 204, wherein:
The query module 201 is configured to query a persistent data table.
Specifically, the persistent data table needs to be read from a key-value database; in an embodiment, the key-value database may be a Redis database.
The persistent data table is used to store parameter information of each thread pool object in the execution queue; the parameter information includes the thread pool name, the number of core threads, the maximum number of threads, the maximum queue length, and the queue type.
In this embodiment, the persistence method is to periodically take a snapshot of the Redis data in memory according to a certain saving rule and synchronize the snapshot data to the hard disk; each snapshot file is a binary file storing the Redis data.
The acquisition module 202 is configured to obtain, according to the persistent data table, the parameters of each thread pool in the execution queue.
Specifically, the parameters of each thread pool include the thread pool name, the number of core threads, the maximum number of threads, the maximum queue length, and the queue type.
The core threads within the number of core threads (corePoolSize) remain alive at all times, even if no tasks need to be executed. When the number of running threads is less than the number of core threads, the thread pool preferentially creates a new thread to handle an arriving task, even if idle threads exist.
In an embodiment, when a task needs to be added to a first thread pool: if the number of threads running in the first thread pool is less than the number of core threads, the first thread pool immediately creates a thread to run the task; if the number of threads running in the first thread pool is greater than or equal to the number of core threads, the task is placed in the task queue of the first thread pool.
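This core-thread admission rule can be observed directly with the standard java.util.concurrent.ThreadPoolExecutor; the pool sizes below (core 2, maximum 4) are illustrative values only:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CorePoolDemo {
    public static void main(String[] args) {
        // corePoolSize = 2, maximumPoolSize = 4, unbounded work queue.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };
        pool.execute(blocker);   // running < core: first worker thread is created
        pool.execute(blocker);   // still <= core: second worker thread is created
        pool.execute(blocker);   // running >= core: task goes to the task queue
        System.out.println(pool.getPoolSize());     // 2: only core threads exist
        System.out.println(pool.getQueue().size()); // 1: the third task is queued
        release.countDown();
        pool.shutdown();
    }
}
```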
The maximum number of threads (maximumPoolSize) indicates the maximum number of threads that can be created in the thread pool.
In an embodiment, when a task needs to be added to the first thread pool, if the task queue of the first thread pool is full (i.e., the maximum queue length has been reached) and the number of running threads is less than the maximum number of threads, a new thread is still created to run the task.
In a further embodiment, when a task needs to be added to the first thread pool, if the queue is full (i.e., the maximum queue length has been reached) and the number of running threads is greater than or equal to the maximum number of threads, the first thread pool throws an exception.
The queue types include: a direct submission queue, an unbounded queue, and a bounded queue.
The direct submission queue may be a SynchronousQueue. In an embodiment, SynchronousQueue may be set as the default option for the work queue; the direct submission queue submits tasks directly to threads without queuing them. If no thread is available to run a task immediately, the attempt to enqueue the task fails, so a new thread is constructed. This policy avoids locks when handling sets of requests that may have internal dependencies. A direct submission queue usually requires an unbounded maximum number of threads to avoid rejecting newly submitted tasks, and it therefore allows the number of threads to grow without bound when commands arrive continuously at a rate exceeding, on average, what the queue can handle.
The bounded queue may be an ArrayBlockingQueue, a bounded blocking queue based on an array structure that orders elements on a FIFO (first-in, first-out) basis.
The unbounded queue may be a LinkedBlockingQueue, a blocking queue based on a linked-list structure that orders elements FIFO (first-in, first-out); its throughput is usually higher than that of ArrayBlockingQueue.
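The three queue types correspond to standard java.util.concurrent classes; the following minimal sketch shows their differing capacity behavior (the capacity 100 is an arbitrary example):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class QueueTypeDemo {
    public static void main(String[] args) {
        // Direct submission: holds no elements; offer() succeeds only if a
        // consumer thread is already waiting to take the element.
        BlockingQueue<Runnable> direct = new SynchronousQueue<Runnable>();
        // Bounded: array-backed, FIFO, fixed capacity.
        BlockingQueue<Runnable> bounded = new ArrayBlockingQueue<Runnable>(100);
        // Unbounded: linked-list-backed, FIFO, capacity Integer.MAX_VALUE.
        BlockingQueue<Runnable> unbounded = new LinkedBlockingQueue<Runnable>();

        System.out.println(direct.offer(() -> { }));        // false: no waiting taker
        System.out.println(bounded.remainingCapacity());    // 100
        System.out.println(unbounded.remainingCapacity());  // 2147483647
    }
}
```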
The thread pool object creation module 203 is configured to create each thread pool object according to the parameters of each thread pool.
Specifically, the thread pool is created through the java.util.concurrent.ThreadPoolExecutor class. Among the constructor parameters, keepAliveTime indicates the maximum time a thread is kept alive without tasks to execute before it terminates; unit is the time unit of the parameter keepAliveTime; threadFactory is the thread factory used to create threads; and handler is the policy applied when processing of a task is rejected.
In an embodiment, when handler takes the value ThreadPoolExecutor.AbortPolicy, the task is discarded and a RejectedExecutionException is thrown.
In an embodiment, when handler takes the value ThreadPoolExecutor.DiscardPolicy, the task is discarded without throwing a RejectedExecutionException.
In an embodiment, when handler takes the value ThreadPoolExecutor.DiscardOldestPolicy, the task at the head of the queue is discarded and execution of the new task is attempted again (repeating this process).
In an embodiment, when handler takes the value ThreadPoolExecutor.CallerRunsPolicy, the task is handled by the calling thread.
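A minimal sketch of the rejection path, using a single-thread pool with a direct submission queue so that a second concurrent task must be rejected; the AbortPolicy shown here can be swapped for any of the other three handlers:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    public static void main(String[] args) {
        // One core thread, one maximum thread, direct submission queue:
        // a second task submitted while the first is running must be rejected.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>(),
                new ThreadPoolExecutor.AbortPolicy()); // or DiscardPolicy, DiscardOldestPolicy, CallerRunsPolicy
        CountDownLatch release = new CountDownLatch(1);
        pool.execute(() -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        });
        try {
            pool.execute(() -> { });
        } catch (RejectedExecutionException e) {
            System.out.println("second task rejected by AbortPolicy");
        }
        release.countDown();
        pool.shutdown();
    }
}
```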
The thread pool queue creation module 204 is configured to create a thread pool queue according to the thread pool names and the thread pool objects.
Specifically, the mapping between each thread pool name and the corresponding thread pool object is saved as the thread pool queue.
In an embodiment, the mapping between a thread pool name and the corresponding thread pool object may be represented as Map<ThreadPoolName, ThreadPool>, where the key of the Map is the thread pool name and the value is the thread pool object.
In a further embodiment, the mapping between a second thread pool name and the corresponding second thread pool object may be added to the thread pool queue by map.put(ThreadPoolName2, ThreadPool2).
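A minimal sketch of such a thread pool queue; the ConcurrentHashMap and the helper names register/lookup are illustrative choices, not mandated by the application:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolRegistry {
    // Thread pool queue: name -> pool object. A ConcurrentHashMap is chosen
    // here so the registry itself stays safe under high concurrency.
    private static final Map<String, ExecutorService> POOLS = new ConcurrentHashMap<>();

    public static void register(String name, ExecutorService pool) {
        POOLS.put(name, pool);      // corresponds to map.put(ThreadPoolName2, ThreadPool2)
    }

    public static ExecutorService lookup(String name) {
        return POOLS.get(name);     // corresponds to map.get("ThreadPoolName")
    }

    public static void main(String[] args) {
        register("ThreadPoolName2", Executors.newFixedThreadPool(2));
        System.out.println(lookup("ThreadPoolName2") != null);  // true
        lookup("ThreadPoolName2").shutdown();
    }
}
```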
Referring to FIG. 3, which is a program module diagram of the second embodiment of the unified thread pool processing system 200 of the present application. In this embodiment, in addition to the query module 201, the acquisition module 202, the thread pool object creation module 203, and the thread pool queue creation module 204 of the first embodiment, the unified thread pool processing system 200 further includes a management module 205.
The management module 205 is configured to manage each thread pool through the thread pool queue.
Specifically, the thread pool object corresponding to a thread pool name may be obtained by Object get = map.get("ThreadPoolName"), and the thread pool object corresponding to that name is then managed.
In a preferred embodiment, the management module 205 is specifically configured to:
obtain the first thread pool name of the thread pool to be operated on;
obtain the corresponding first thread pool object from the thread pool queue according to the first thread pool name; and
perform a management operation on the first thread pool object, where the management operation may include: obtaining the number of threads of the first thread pool object, submitting a task to the first thread pool object, and shutting down the first thread pool object.
In an embodiment, the number of currently active threads of the thread pool object corresponding to a thread pool name may be obtained through the thread pool name.
Specifically, the number of currently active threads of the thread pool object corresponding to the thread pool name may be obtained through private volatile int poolSize.
In an embodiment, the length of the current execution queue of the thread pool object corresponding to a thread pool name may be obtained through the thread pool name.
In an embodiment, a task object to be started may be submitted to a specified thread pool object through the thread pool name, the parameter information, and the task object to be started.
Specifically, the task object to be started may be submitted to the specified thread pool object through execute() or submit().
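A minimal sketch of the two submission methods; unlike execute(), submit() returns a Future from which the task's result can be retrieved (the fixed-size pool below stands in for a pool looked up by name):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        // execute(): fire-and-forget, no handle on the result.
        pool.execute(() -> System.out.println("execute(): fire-and-forget"));
        // submit(): returns a Future holding the task's result.
        Future<Integer> result = pool.submit(() -> 21 * 2);
        System.out.println(result.get());   // 42
        pool.shutdown();
    }
}
```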
In a further embodiment, after the thread pool object receives the task object to be started:
when the number of currently active threads of the thread pool object is less than the number of core threads, a thread is created to run the task;
when the number of currently active threads of the thread pool object is greater than or equal to the number of core threads, the task is placed in the queue;
when the number of currently active threads of the thread pool object is less than the maximum number of threads, a new thread is created to run the task; and
when the number of currently active threads of the thread pool object is greater than or equal to the maximum number of threads, the thread pool throws an exception.
In an embodiment, the thread pool object corresponding to a thread pool name may be shut down through the thread pool name. Specifically, shutdown() or shutdownNow() may be used to shut down the thread pool object corresponding to the thread pool name. shutdown() does not terminate the thread pool immediately; it waits until all tasks in the task cache queue have finished executing before terminating, but no longer accepts new tasks. shutdownNow() terminates the thread pool immediately, attempts to interrupt the tasks being executed, clears the task cache queue, and returns the tasks that have not yet been executed.
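The difference between the two shutdown calls can be sketched as follows: the single queued task returned by shutdownNow() is the one that never started, while the interrupted blocker was already running:

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ShutdownDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        CountDownLatch release = new CountDownLatch(1);
        // First task occupies the single worker until interrupted.
        pool.execute(() -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        });
        pool.execute(() -> { });             // waits in the task cache queue
        // shutdownNow(): interrupts the running task, drains the queue, and
        // returns the tasks that never started.
        List<Runnable> neverRan = pool.shutdownNow();
        System.out.println(neverRan.size()); // 1
    }
}
```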
By querying the persistent data table to obtain the parameters of each thread pool and creating each thread pool object according to those parameters, the unified thread pool processing system of the present application achieves unified creation of the thread pools; by creating a thread pool queue that stores the mapping between each thread pool name and each thread pool object, it achieves unified management of each thread pool object according to the thread pool queue, reducing system resource consumption and improving system stability.
In addition, the present application also proposes a unified thread pool processing method.
Referring to FIG. 4, which is a schematic flowchart of the first embodiment of the unified thread pool processing method of the present application. In this embodiment, the order of execution of the steps in the flowchart shown in FIG. 4 may be changed according to different requirements, and some steps may be omitted.
Step S402: query the persistent data table.
Specifically, the persistent data table needs to be read from a key-value database; in an embodiment, the key-value database may be a Redis database.
The persistent data table is used to store parameter information of each thread pool object in the execution queue; the parameter information includes the thread pool name, the number of core threads, the maximum number of threads, the maximum queue length, and the queue type.
In this embodiment, the persistence method is to periodically take a snapshot of the Redis data in memory according to a certain saving rule and synchronize the snapshot data to the hard disk; each snapshot file is a binary file storing the Redis data.
Step S404: obtain, according to the persistent data table, the parameters of each thread pool in the execution queue, the parameters of each thread pool including each thread pool name.
Specifically, the parameters of each thread pool include the thread pool name, the number of core threads, the maximum number of threads, the maximum queue length, and the queue type.
The core threads within the number of core threads (corePoolSize) remain alive at all times, even if no tasks need to be executed. When the number of running threads is less than the number of core threads, the thread pool preferentially creates a new thread to handle an arriving task, even if idle threads exist.
In an embodiment, when a task needs to be added to a first thread pool: if the number of threads running in the first thread pool is less than the number of core threads, the first thread pool immediately creates a thread to run the task; if the number of threads running in the first thread pool is greater than or equal to the number of core threads, the task is placed in the task queue of the first thread pool.
The maximum number of threads (maximumPoolSize) indicates the maximum number of threads that can be created in the thread pool.
In an embodiment, when a task needs to be added to the first thread pool, if the task queue of the first thread pool is full (i.e., the maximum queue length has been reached) and the number of running threads is less than the maximum number of threads, a new thread is still created to run the task.
In a further embodiment, when a task needs to be added to the first thread pool, if the queue is full (i.e., the maximum queue length has been reached) and the number of running threads is greater than or equal to the maximum number of threads, the first thread pool throws an exception.
The queue types include: a direct submission queue, an unbounded queue, and a bounded queue.
The direct submission queue may be a SynchronousQueue. In an embodiment, SynchronousQueue may be set as the default option for the work queue; the direct submission queue submits tasks directly to threads without queuing them. If no thread is available to run a task immediately, the attempt to enqueue the task fails, so a new thread is constructed. This policy avoids locks when handling sets of requests that may have internal dependencies. A direct submission queue usually requires an unbounded maximum number of threads to avoid rejecting newly submitted tasks, and it therefore allows the number of threads to grow without bound when commands arrive continuously at a rate exceeding, on average, what the queue can handle.
The bounded queue may be an ArrayBlockingQueue, a bounded blocking queue based on an array structure that orders elements on a FIFO (first-in, first-out) basis.
The unbounded queue may be a LinkedBlockingQueue, a blocking queue based on a linked-list structure that orders elements FIFO (first-in, first-out); its throughput is usually higher than that of ArrayBlockingQueue.
Step S406: create each thread pool object according to the parameters of each thread pool.
Specifically, the thread pool is created through the java.util.concurrent.ThreadPoolExecutor class. Among the constructor parameters:
corePoolSize indicates the core pool size. After the thread pool is created, by default it contains no threads; threads are created to execute tasks only as tasks arrive, unless the prestartAllCoreThreads() or prestartCoreThread() method is called. By default, after the thread pool is created the number of threads in it is 0; when tasks arrive, a thread is created to execute each task, and once the number of threads in the pool reaches the number of core threads, arriving tasks are placed in the cache queue;
maximumPoolSize indicates the maximum number of threads in the thread pool, i.e., the maximum number of threads that can be created in the pool;
keepAliveTime indicates the maximum time a thread is kept alive without tasks to execute before it terminates; unit is the time unit of the parameter keepAliveTime; threadFactory is the thread factory used to create threads; and handler is the policy applied when processing of a task is rejected.
In an embodiment, when handler takes the value ThreadPoolExecutor.AbortPolicy, the task is discarded and a RejectedExecutionException is thrown.
In an embodiment, when handler takes the value ThreadPoolExecutor.DiscardPolicy, the task is discarded without throwing a RejectedExecutionException.
In an embodiment, when handler takes the value ThreadPoolExecutor.DiscardOldestPolicy, the task at the head of the queue is discarded and execution of the new task is attempted again (repeating this process).
In an embodiment, when handler takes the value ThreadPoolExecutor.CallerRunsPolicy, the task is handled by the calling thread.
Step S408: creating a thread pool queue according to each thread pool name and each thread pool object.
Specifically, the mapping between each thread pool name and the corresponding thread pool object is saved as the thread pool queue.
In an embodiment, the mapping between the thread pool name and the corresponding thread pool object may be represented as Map<ThreadPoolName, ThreadPool>, where the key of the Map is the thread pool name and the value is the thread pool object.
In a further embodiment, the mapping between a second thread pool name and the corresponding second thread pool object may be added to the thread pool queue via map.put(ThreadPoolName2, ThreadPool2).
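A minimal sketch of the Map<ThreadPoolName, ThreadPool> queue described above, using a ConcurrentHashMap; the pool names ("orderPool", "reportPool") and sizes are invented examples:

```java
import java.util.Map;
import java.util.concurrent.*;

public class ThreadPoolRegistry {
    public static void main(String[] args) {
        // The thread pool queue: name -> thread pool object.
        Map<String, ThreadPoolExecutor> poolQueue = new ConcurrentHashMap<>();
        poolQueue.put("orderPool",
                new ThreadPoolExecutor(2, 4, 30L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(100)));
        // Adding a second pool, analogous to map.put(ThreadPoolName2, ThreadPool2):
        poolQueue.put("reportPool",
                new ThreadPoolExecutor(1, 2, 30L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(10)));

        // Looking a pool up by name, analogous to map.get("ThreadPoolName"):
        ThreadPoolExecutor pool = poolQueue.get("orderPool");
        System.out.println(pool.getCorePoolSize()); // 2

        poolQueue.values().forEach(ThreadPoolExecutor::shutdown);
    }
}
```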
FIG. 5 is a schematic flowchart of a second embodiment of the unified thread pool processing method of the present application. In this embodiment, steps S502-S508 of the method are similar to steps S402-S408 of the first embodiment, the difference being that the method further includes step S510.
The method includes the following steps:
Step S510: managing each thread pool through the thread pool queue.
Specifically, the thread pool object corresponding to a thread pool name may be obtained via Object get = map.get("ThreadPoolName") and then managed. The specific steps are detailed in the third embodiment of the unified thread pool processing method of the present application (see FIG. 6).
FIG. 6 is a schematic flowchart of a third embodiment of the unified thread pool processing method of the present application. In this embodiment, the step of managing each thread pool through the thread pool queue specifically includes:
Step S602: obtaining the first thread pool name of the thread pool to be operated on.
Specifically, after an operation instruction is received, the first thread pool name of the thread pool to be operated on is extracted from the operation instruction.
Step S604: obtaining the corresponding first thread pool object from the thread pool queue according to the first thread pool name.
Specifically, the obtained first thread pool name is used as the variable ThreadPoolName and passed to the thread pool queue.
In an embodiment, the thread pool object corresponding to the thread pool name is obtained via Object get = map.get("ThreadPoolName").
Step S606: performing a management operation on the first thread pool object.
Specifically, the management operation may include: obtaining the thread count of the first thread pool object, submitting a task to the first thread pool object, and shutting down the first thread pool object.
In an embodiment, the number of currently active threads of the thread pool object corresponding to a thread pool name may be obtained through that thread pool name.
Specifically, ThreadPoolExecutor tracks the current number of threads in its internal field private volatile int poolSize, which is exposed through the getPoolSize() accessor.
In an embodiment, the length of the current execution queue of the thread pool object corresponding to a thread pool name may be obtained through that thread pool name.
In an embodiment, a task object to be started may be submitted to a specified thread pool object by providing the thread pool name, parameter information, and the task object to be started.
Specifically, the task object to be started may be submitted to the specified thread pool object via execute() or submit().
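The management operations above (submitting a task, then reading the pool's thread count and queue length) can be sketched as follows; the pool name "workPool" and the sizes are illustrative:

```java
import java.util.Map;
import java.util.concurrent.*;

public class ManageByName {
    public static void main(String[] args) throws Exception {
        Map<String, ThreadPoolExecutor> poolQueue = new ConcurrentHashMap<>();
        poolQueue.put("workPool",
                new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>()));

        // Obtain the pool by name from the thread pool queue.
        ThreadPoolExecutor pool = poolQueue.get("workPool");

        // submit() returns a Future; execute() is fire-and-forget.
        Future<Integer> result = pool.submit(() -> 21 * 2);
        System.out.println(result.get());        // 42

        // The two metrics described above: current pool size and queue length.
        System.out.println(pool.getPoolSize());       // 1 (one core worker was created)
        System.out.println(pool.getQueue().size());   // 0 (nothing waiting)
        pool.shutdown();
    }
}
```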
In a further embodiment, after the thread pool object receives the task object to be started:
when the number of currently active threads of the thread pool object is less than the core thread count, a thread is created to run the task;
when the number of currently active threads is greater than or equal to the core thread count, the task is placed in the queue;
when the queue is full and the number of currently active threads is less than the maximum thread count, a new thread is created to run the task;
when the queue is full and the number of currently active threads has reached the maximum thread count, the thread pool rejects the task (by default, throwing an exception).
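Each branch of the admission logic above can be forced with a deliberately small pool; the values (core 1, maximum 2, queue capacity 1) are chosen only for the demonstration:

```java
import java.util.concurrent.*;

public class SaturationDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1));  // core=1, max=2, queue holds 1

        CountDownLatch gate = new CountDownLatch(1);
        Runnable blocker = () -> { try { gate.await(); } catch (InterruptedException ignored) {} };

        pool.execute(blocker);   // 1st task: below core size -> a worker is created
        pool.execute(blocker);   // 2nd task: core full -> queued
        pool.execute(blocker);   // 3rd task: queue full, below max -> second worker
        System.out.println(pool.getPoolSize()); // 2
        try {
            pool.execute(blocker); // queue and pool both full -> rejected
        } catch (RejectedExecutionException e) {
            System.out.println("rejected");
        }
        gate.countDown();
        pool.shutdown();
    }
}
```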
In an embodiment, the thread pool object corresponding to a thread pool name may be shut down through that thread pool name. Specifically, shutdown() or shutdownNow() may be used to close the thread pool object. shutdown() does not terminate the thread pool immediately: it waits until all tasks in the task queue have finished executing, but accepts no new tasks. shutdownNow() terminates the thread pool immediately, attempts to interrupt the tasks being executed, clears the task queue, and returns the tasks that have not yet been executed.
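The contrast between the two shutdown methods can be sketched as follows; a blocked first task keeps a second task waiting in the queue so that shutdownNow() has something to return:

```java
import java.util.List;
import java.util.concurrent.*;

public class ShutdownDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());

        CountDownLatch gate = new CountDownLatch(1);
        pool.execute(() -> { try { gate.await(); } catch (InterruptedException ignored) {} });
        pool.execute(() -> {});  // waits in the queue behind the blocked task

        // shutdownNow() interrupts the running task, clears the queue,
        // and returns the tasks that never started.
        List<Runnable> pending = pool.shutdownNow();
        System.out.println(pending.size());  // 1
        // shutdown(), by contrast, would have let both tasks finish
        // while refusing any new submissions.
    }
}
```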
The unified thread pool processing method of the present application obtains the parameters of each thread pool by querying a persistent data table and creates each thread pool object according to those parameters, thereby achieving unified creation of thread pools. By creating a thread pool queue that saves the mapping between each thread pool name and each thread pool object, and managing each thread pool object through that queue, the method achieves unified management of the thread pool objects, reduces system resource consumption, and improves system stability.
The serial numbers of the above embodiments of the present application are for description only and do not indicate the relative merits of the embodiments.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, though in many cases the former is the better implementation. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the various embodiments of the present application.
The above are only preferred embodiments of the present application and do not limit the patent scope of the present application. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, or any direct or indirect application thereof in other related technical fields, is likewise included within the scope of patent protection of the present application.
Claims (20)
- A unified thread pool processing method applied to an application server, wherein the method comprises the steps of: querying a persistent data table; obtaining, according to the persistent data table, parameters of each thread pool in an execution queue, the parameters of each thread pool including each thread pool name; creating each thread pool object according to the parameters of each thread pool; creating a thread pool queue according to each thread pool name and each thread pool object; and managing each thread pool through the thread pool queue.
- The unified thread pool processing method according to claim 1, wherein the parameters of each thread pool further include a core thread count, a maximum thread count, a maximum queue length, and a queue type.
- The unified thread pool processing method according to claim 2, wherein the step of creating a thread pool queue according to each thread pool name and each thread pool object specifically comprises: obtaining a mapping between each thread pool name and each thread pool object; and saving the mapping between each thread pool name and each thread pool object in a Map class and creating the thread pool queue.
- The unified thread pool processing method according to claim 2, wherein the step of managing each thread pool through the thread pool queue specifically comprises: obtaining a first thread pool name of a thread pool to be operated on; obtaining a corresponding first thread pool object from the thread pool queue according to the first thread pool name; and performing a management operation on the first thread pool object, wherein the management operation may include: obtaining a thread count of the first thread pool object, submitting a task to the first thread pool object, and shutting down the first thread pool object.
- The unified thread pool processing method according to claim 1, wherein the step of creating each thread pool object according to the parameters of each thread pool specifically comprises: collecting the parameters of each thread pool; and creating the thread pool through the java.util.concurrent.ThreadPoolExecutor class.
- The unified thread pool processing method according to claim 2, wherein the step of creating each thread pool object according to the parameters of each thread pool specifically comprises: collecting the parameters of each thread pool; and creating the thread pool through the java.util.concurrent.ThreadPoolExecutor class.
- The unified thread pool processing method according to claim 3 or 4, wherein the step of creating each thread pool object according to the parameters of each thread pool specifically comprises: collecting the parameters of each thread pool; and creating the thread pool through the java.util.concurrent.ThreadPoolExecutor class.
- An application server, wherein the application server comprises a memory and a processor, the memory storing a unified thread pool processing system executable on the processor, and the unified thread pool processing system, when executed by the processor, implements the following steps: querying a persistent data table; obtaining, according to the persistent data table, parameters of each thread pool in an execution queue, the parameters of each thread pool including each thread pool name; creating each thread pool object according to the parameters of each thread pool; creating a thread pool queue according to each thread pool name and each thread pool object; and managing each thread pool through the thread pool queue.
- The application server according to claim 8, wherein the parameters of each thread pool further include a core thread count, a maximum thread count, a maximum queue length, and a queue type.
- The application server according to claim 9, wherein the step of creating a thread pool queue according to each thread pool name and each thread pool object specifically comprises: obtaining a mapping between each thread pool name and each thread pool object; and saving the mapping between each thread pool name and each thread pool object in a Map class and creating the thread pool queue.
- The application server according to claim 9, wherein the step of managing each thread pool through the thread pool queue specifically comprises: obtaining a first thread pool name of a thread pool to be operated on; obtaining a corresponding first thread pool object from the thread pool queue according to the first thread pool name; and performing a management operation on the first thread pool object, wherein the management operation may include: obtaining a thread count of the first thread pool object, submitting a task to the first thread pool object, and shutting down the first thread pool object.
- The application server according to claim 8, wherein the step of creating each thread pool object according to the parameters of each thread pool specifically comprises: collecting the parameters of each thread pool; and creating the thread pool through the java.util.concurrent.ThreadPoolExecutor class.
- The application server according to claim 9, wherein the step of creating each thread pool object according to the parameters of each thread pool specifically comprises: collecting the parameters of each thread pool; and creating the thread pool through the java.util.concurrent.ThreadPoolExecutor class.
- The application server according to claim 10 or 11, wherein the step of creating each thread pool object according to the parameters of each thread pool specifically comprises: collecting the parameters of each thread pool; and creating the thread pool through the java.util.concurrent.ThreadPoolExecutor class.
- A computer readable storage medium storing a unified thread pool processing system, the unified thread pool processing system being executable by at least one processor to cause the at least one processor to perform the following steps: querying a persistent data table; obtaining, according to the persistent data table, parameters of each thread pool in an execution queue, the parameters of each thread pool including each thread pool name; creating each thread pool object according to the parameters of each thread pool; creating a thread pool queue according to each thread pool name and each thread pool object; and managing each thread pool through the thread pool queue.
- The computer readable storage medium according to claim 15, wherein the parameters of each thread pool further include a core thread count, a maximum thread count, a maximum queue length, and a queue type.
- The computer readable storage medium according to claim 16, wherein the step of creating a thread pool queue according to each thread pool name and each thread pool object specifically comprises: obtaining a mapping between each thread pool name and each thread pool object; and saving the mapping between each thread pool name and each thread pool object in a Map class and creating the thread pool queue.
- The computer readable storage medium according to claim 16, wherein the step of managing each thread pool through the thread pool queue specifically comprises: obtaining a first thread pool name of a thread pool to be operated on; obtaining a corresponding first thread pool object from the thread pool queue according to the first thread pool name; and performing a management operation on the first thread pool object, wherein the management operation may include: obtaining a thread count of the first thread pool object, submitting a task to the first thread pool object, and shutting down the first thread pool object.
- The computer readable storage medium according to claim 15, wherein the step of creating each thread pool object according to the parameters of each thread pool specifically comprises: collecting the parameters of each thread pool; and creating the thread pool through the java.util.concurrent.ThreadPoolExecutor class.
- The computer readable storage medium according to any one of claims 16-18, wherein the step of creating each thread pool object according to the parameters of each thread pool specifically comprises: collecting the parameters of each thread pool; and creating the thread pool through the java.util.concurrent.ThreadPoolExecutor class.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810102252.3 | 2018-02-01 | ||
CN201810102252.3A CN108345499B (en) | 2018-02-01 | 2018-02-01 | Unified thread pool processing method, application server and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019148734A1 true WO2019148734A1 (en) | 2019-08-08 |
Family
ID=62958407
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/090909 WO2019148734A1 (en) | 2018-02-01 | 2018-06-12 | Uniform thread pool processing method, application server, and computer readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108345499B (en) |
WO (1) | WO2019148734A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112965805A (en) * | 2021-03-25 | 2021-06-15 | 兴业数字金融服务(上海)股份有限公司 | Cross-process asynchronous task processing method and system based on memory mapping file |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110865798B (en) * | 2018-08-28 | 2023-07-21 | 中国移动通信集团浙江有限公司 | Thread pool optimization method and system |
CN109582472B (en) * | 2018-10-19 | 2021-05-18 | 华为技术有限公司 | Micro-service processing method and device |
CN109739583B (en) * | 2018-12-13 | 2023-09-08 | 平安科技(深圳)有限公司 | Method, device, computer equipment and storage medium for parallel running of multiple threads |
CN110109739A (en) * | 2019-04-25 | 2019-08-09 | 北京奇艺世纪科技有限公司 | A kind of method for closing and device of multithread application |
CN112114862B (en) * | 2019-06-20 | 2023-12-22 | 普天信息技术有限公司 | Method and device for concurrency processing of spring boot instances |
CN110287013A (en) * | 2019-06-26 | 2019-09-27 | 四川长虹电器股份有限公司 | The method for solving Internet of Things cloud service avalanche effect based on JAVA multithreading |
CN111078377B (en) * | 2019-11-29 | 2023-04-07 | 易方信息科技股份有限公司 | Thread working method |
CN111625332A (en) * | 2020-05-21 | 2020-09-04 | 杭州安恒信息技术股份有限公司 | Java thread pool rejection policy execution method and device and computer equipment |
CN111897643B (en) * | 2020-08-05 | 2024-07-02 | 深圳鼎盛电脑科技有限公司 | Thread pool configuration system, method, device and storage medium |
CN112667385A (en) * | 2021-01-15 | 2021-04-16 | 北京金和网络股份有限公司 | Cloud service system, task execution method and device thereof, and server |
CN112835704A (en) * | 2021-03-26 | 2021-05-25 | 中国工商银行股份有限公司 | Task processing method, thread pool management method, device and computing equipment |
CN114490112B (en) * | 2021-12-20 | 2024-09-20 | 阿里巴巴(中国)有限公司 | Message processing method, device and system |
CN114924849B (en) * | 2022-04-27 | 2024-06-04 | 上海交通大学 | High concurrency execution and resource scheduling method and device for industrial control system |
CN116974730B (en) * | 2023-09-22 | 2024-01-30 | 深圳联友科技有限公司 | Large-batch task processing method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101599027A (en) * | 2009-06-30 | 2009-12-09 | 中兴通讯股份有限公司 | A kind of thread pool management method and system thereof |
CN103218264A (en) * | 2013-03-26 | 2013-07-24 | 广东威创视讯科技股份有限公司 | Multi-thread finite state machine switching method and multi-thread finite state machine switching device based on thread pool |
CN105760234A (en) * | 2016-03-17 | 2016-07-13 | 联动优势科技有限公司 | Thread pool management method and device |
US20170168843A1 (en) * | 2012-04-03 | 2017-06-15 | Microsoft Technology Licensing, Llc | Thread-agile execution of dynamic programming language programs |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6604125B1 (en) * | 1999-09-24 | 2003-08-05 | Sun Microsystems, Inc. | Mechanism for enabling a thread unaware or non thread safe application to be executed safely in a multi-threaded environment |
US20070229520A1 (en) * | 2006-03-31 | 2007-10-04 | Microsoft Corporation | Buffered Paint Systems |
US9397976B2 (en) * | 2009-10-30 | 2016-07-19 | International Business Machines Corporation | Tuning LDAP server and directory database |
CN101777008A (en) * | 2009-12-31 | 2010-07-14 | 中兴通讯股份有限公司 | Method and device for realizing mobile terminal system thread pool |
CN103455377B (en) * | 2013-08-06 | 2019-01-22 | 北京京东尚科信息技术有限公司 | System and method for management business thread pool |
CN105159768A (en) * | 2015-09-09 | 2015-12-16 | 浪潮集团有限公司 | Task management method and cloud data center management platform |
CN107450978A (en) * | 2016-05-31 | 2017-12-08 | 北京京东尚科信息技术有限公司 | The thread management method and device of distributed system |
CN107463439A (en) * | 2017-08-21 | 2017-12-12 | 山东浪潮通软信息科技有限公司 | A kind of thread pool implementation method and device |
- 2018-02-01 CN CN201810102252.3A patent/CN108345499B/en active Active
- 2018-06-12 WO PCT/CN2018/090909 patent/WO2019148734A1/en active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101599027A (en) * | 2009-06-30 | 2009-12-09 | 中兴通讯股份有限公司 | A kind of thread pool management method and system thereof |
US20170168843A1 (en) * | 2012-04-03 | 2017-06-15 | Microsoft Technology Licensing, Llc | Thread-agile execution of dynamic programming language programs |
CN103218264A (en) * | 2013-03-26 | 2013-07-24 | 广东威创视讯科技股份有限公司 | Multi-thread finite state machine switching method and multi-thread finite state machine switching device based on thread pool |
CN105760234A (en) * | 2016-03-17 | 2016-07-13 | 联动优势科技有限公司 | Thread pool management method and device |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112965805A (en) * | 2021-03-25 | 2021-06-15 | 兴业数字金融服务(上海)股份有限公司 | Cross-process asynchronous task processing method and system based on memory mapping file |
CN112965805B (en) * | 2021-03-25 | 2023-12-05 | 兴业数字金融服务(上海)股份有限公司 | Cross-process asynchronous task processing method and system based on memory mapping file |
Also Published As
Publication number | Publication date |
---|---|
CN108345499A (en) | 2018-07-31 |
CN108345499B (en) | 2019-05-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019148734A1 (en) | Uniform thread pool processing method, application server, and computer readable storage medium | |
US9940346B2 (en) | Two-level management of locks on shared resources | |
WO2020238737A1 (en) | Database task processing method and apparatus, electronic device, and readable medium | |
US9396010B2 (en) | Optimization of packet processing by delaying a processor from entering an idle state | |
WO2019179026A1 (en) | Electronic device, method for automatically generating cluster access domain name, and storage medium | |
US8589537B2 (en) | Methods and computer program products for aggregating network application performance metrics by process pool | |
WO2019179027A1 (en) | Electronic device, firewall provisioning verification method, system and storage medium | |
CN108595282A (en) | A kind of implementation method of high concurrent message queue | |
US9038093B1 (en) | Retrieving service request messages from a message queue maintained by a messaging middleware tool based on the origination time of the service request message | |
US9378246B2 (en) | Systems and methods of accessing distributed data | |
WO2021169275A1 (en) | Sdn network device access method and apparatus, computer device, and storage medium | |
CN110445828B (en) | Data distributed processing method based on Redis and related equipment thereof | |
CN109842621A (en) | A kind of method and terminal reducing token storage quantity | |
CN110851276A (en) | Service request processing method, device, server and storage medium | |
WO2022142008A1 (en) | Data processing method and apparatus, electronic device, and storage medium | |
CN112217849B (en) | Task scheduling method, system and computer equipment in SD-WAN system | |
US10310857B2 (en) | Systems and methods facilitating multi-word atomic operation support for system on chip environments | |
CN111597056A (en) | Distributed scheduling method, system, storage medium and device | |
JP5884566B2 (en) | Batch processing system, progress confirmation device, progress confirmation method, and program | |
US20070180115A1 (en) | System and method for self-configuring multi-type and multi-location result aggregation for large cross-platform information sets | |
CN107632893B (en) | Message queue processing method and device | |
CN112100186B (en) | Data processing method and device based on distributed system and computer equipment | |
CN115774724A (en) | Concurrent request processing method and device, electronic equipment and storage medium | |
CN113778674A (en) | Lock-free implementation method of load balancing equipment configuration management under multi-core | |
US8566467B2 (en) | Data processing system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18903511 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12.11.2020) |
122 | Ep: pct application non-entry in european phase |
Ref document number: 18903511 Country of ref document: EP Kind code of ref document: A1 |