CN114237857A - Task distribution system for big data task capture - Google Patents
- Publication number
- CN114237857A (application CN202111589330.5A)
- Authority
- CN
- China
- Prior art keywords
- task
- type
- tasks
- task execution
- processed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Debugging And Monitoring (AREA)
Abstract
The invention discloses a task distribution system for big data task capture, relating to the field of information technology. The system comprises: a task aggregator for aggregating the tasks to be processed uploaded from each task request terminal and attaching task type identifiers; a task type counting module for extracting the task type identifiers of the tasks to be processed, counting the quantity and proportion of each task type, and delivering the tasks to be processed to a message queue; the message queue, which provides a buffer for temporarily storing the tasks to be processed; task execution devices for executing the tasks to be processed; a performance testing module for testing the processing capability of each task execution device for different task types and classifying the task execution devices into corresponding device types in proportion; and a load balancer for distributing the tasks to be processed in the message queue to task execution devices of the corresponding types. By distinguishing the processing capability of the task processing devices for different types of tasks, the invention optimizes the performance of the system as a whole.
Description
Cross Reference to Related Applications
This application is a divisional application of Chinese patent application No. 2021105075964, filed on May 10, 2021 and entitled "A task distribution system based on big data analysis".
Technical Field
The invention relates to the field of information technology, and in particular to a task distribution system for big data task capture.
Background
With the rapid development of mobile internet technology and the continuous upgrading of networks, processing a task with a large data volume serially prolongs the processing time, so such tasks are generally processed in parallel; improving the effect of distributed task processing therefore places higher demands on parallel processing capability.
For example, the invention with publication No. CN103186418A discloses a task distribution method and system in which task processing devices are set to receive tasks only when idle. When the number of tasks exceeds the number of task processing devices, every task processing device is busy and none is idle, so the distribution is considered to satisfy load balancing across the devices. Although this theoretically satisfies load balancing, each task processor holds only one task to be processed at a time, so between any two adjacent tasks the same task processor must interact with the message directory again; this inevitably interrupts task processing and reduces its overall efficiency. In addition, the prior art considers only the overall performance of a task processing device. Those skilled in the art know, however, that tasks fall into different types (in the field of smart parks, for example, face recognition, charging, navigation and statistics), and that different task processing devices have different processing capabilities for different task types. In a distributed system, therefore, a task distribution system that takes into account each task processing device's capability for each task type is urgently needed in order to optimize the overall performance of task distribution.
Disclosure of Invention
The invention aims to provide a task distribution system for big data task capture that distinguishes the processing capability of each task processing device for different types of tasks, so that the performance of the system as a whole is optimized.
In order to achieve the purpose, the invention provides the following technical scheme:
a task distribution system for big data task capture comprises
The task aggregator is used for aggregating the tasks to be processed uploaded from the task request terminals and attaching task type identifiers to the tasks to be processed according to the task request terminals;
the task type counting module is used for extracting task type identifications of the tasks to be processed, counting the quantity and the proportion of each task type and delivering the tasks to be processed to the message queue;
the message queue provides a buffer area for temporarily storing the tasks to be processed;
the task execution devices, of which there are a plurality, are used for executing the tasks to be processed;
the performance testing module is used for testing the processing capability of each task execution device for different task types, and for classifying the task execution devices into corresponding device types in proportion, combining the priority of the task types with the counted proportion of each task type;
and the load balancer distributes the tasks to be processed in the message queue to the task execution devices of the corresponding types.
Further, in the performance testing module, the method for testing the processing capability for different task types is as follows:
for each task type, a test task set containing a certain number of test tasks is generated and distributed to each task execution device; the time each task execution device takes to complete the test task set is recorded, and the amount of tasks of that type the device completes per unit time is calculated as its processing capability for that task type.
Further, in the performance testing module, the task execution devices are classified by performing the following processing in order of task type priority, from high to low:
the product of the proportion of the current-priority task type and the total number of task execution devices is taken as the number k of devices of the corresponding device type; among the task execution devices that do not yet carry a device type identifier, the first k devices with the strongest processing capability for the task type of that priority are attached with the corresponding device type identifier.
Further, the performance testing module allocates at least one standby task processing device when classifying the device types.
Further, the method for generating the standby task processing device is as follows:
KX1: round when solving for the number k;
KX2: after all device types have been allocated, several task execution devices remain unallocated;
KX3: compare the number of unallocated task execution devices with the preset number of standby task processing devices; if the number of unallocated task execution devices is greater than the preset number of standby task processing devices, go to KX4; if it is smaller, go to KX5;
KX4: in order of task type priority from high to low, poll the unallocated task execution devices in turn and attach to the one with the strongest processing capability for the current task type the corresponding device type identifier, until the number of unallocated task execution devices equals the preset number of standby task processing devices;
KX5: among the task execution devices that already carry device type identifiers, poll in turn the one with the weakest processing capability for its corresponding task type and remove its identifier, until the number of unallocated task execution devices equals the preset number of standby task processing devices;
KX6: attach standby identifiers to the unallocated task execution devices.
Further, the load balancer distributes tasks through the following steps:
FP1: for each device type, calculate the ratio of each task execution device's processing capability to the total processing capability of all task execution devices of that device type;
FP2: for the task type corresponding to the device type, take the product of the number of tasks of that type among the tasks to be processed and the ratio obtained in FP1 as the upper limit of the amount of tasks from this batch that the task execution device will process;
FP3: after extracting the tasks to be processed from the message queue, the load balancer polls and distributes them by task type to the task execution devices of the corresponding device type that have not yet reached their task amount upper limit, and removes a task execution device from the polling queue once it reaches its upper limit.
Further, after a task execution device finishes processing a task, feedback is sent back to the message queue through the load balancer, and after the message queue receives the feedback, the corresponding task to be processed is deleted.
Further, the load balancer monitors the completion status of each task execution device, and when a task execution device has completed all of its tasks to be processed, it grabs tasks to be processed from the task execution devices of the same device type.
Further, the method for grabbing tasks to be processed from task execution devices of the same device type is as follows: calculate the predicted remaining completion time of each task execution device of the same device type that still has tasks to be processed, and compare it with a set first time threshold; if the predicted remaining completion time of one or more task execution devices exceeds the first time threshold, transfer tasks to be processed from those devices, in descending order of predicted remaining completion time, to the task execution device that has completed all of its tasks, until the predicted remaining completion time of that device reaches the first time threshold.
Compared with the prior art, the invention has the following beneficial effects: when balancing the task load, the processing capabilities of different task execution devices for different types of tasks are fully considered, and an optimal balancing scheme is selected according to the set task priorities from high to low, so that the performance of the task execution devices is not wasted and the overall performance of the system is further improved.
Drawings
FIG. 1 is a system framework diagram of the present invention.
Fig. 2 is a flowchart of a method for generating a standby task processing device according to an embodiment of the present invention.
Fig. 3 is a flowchart of a method for allocating pending tasks in a message queue according to an embodiment of the present invention.
FIG. 4 is a flowchart of a method for fetching pending tasks according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the present invention provides a task distribution system for big data task grabbing, which includes
The task aggregator aggregates the tasks to be processed uploaded from the task request terminals and attaches task type identifiers to them according to the task request terminal they come from. When the system is applied to a smart park, various tasks are generated by, for example, a punch-in machine for face recognition or fingerprint recognition at the park entrance, a license plate recognition camera at the park exit, and the corresponding vehicle parking charging system. The tasks generated by a given task request terminal are generally of the same type, so the types of the tasks to be processed are distinguished according to the task request terminal, and the corresponding task type identifiers are attached.
The task type counting module extracts the task type identifiers of the tasks to be processed, counts the task quantity and proportion of each task type, and delivers the tasks to be processed to the message queue. In one embodiment, the same batch of tasks to be processed contains 1000 tasks {R1, R2, ..., RN}, where N is the total number of tasks to be processed; in this embodiment N = 1000. There are 5 task types, identified as A, B, C, D and E, with task quantities of 150, 200, 250, 100 and 300 respectively; the corresponding proportions ZBj are 0.15, 0.2, 0.25, 0.1 and 0.3, where j denotes the task type, j ∈ {A, B, C, D, E}.
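Purely by way of illustration (the counting module's implementation is not given in the patent, and the function and variable names below are assumptions), the statistics of this embodiment could be reproduced as follows:

```python
from collections import Counter

def count_task_types(tasks):
    """Count the number and proportion (ZBj) of each task type.

    `tasks` is a list of (task_id, type_id) pairs, e.g. ("R1", "A").
    """
    counts = Counter(type_id for _, type_id in tasks)
    total = sum(counts.values())
    ratios = {t: n / total for t, n in counts.items()}
    return counts, ratios

# The figures of this embodiment: N = 1000 tasks of 5 types
tasks = ([("R", "A")] * 150 + [("R", "B")] * 200 + [("R", "C")] * 250
         + [("R", "D")] * 100 + [("R", "E")] * 300)
counts, ratios = count_task_types(tasks)
print(dict(counts))  # {'A': 150, 'B': 200, 'C': 250, 'D': 100, 'E': 300}
print(ratios)        # {'A': 0.15, 'B': 0.2, 'C': 0.25, 'D': 0.1, 'E': 0.3}
```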
The message queue provides a buffer for temporarily storing the tasks to be processed {R1, R2, ..., RN}.
The task execution devices {Z1, Z2, ..., ZM}, where M is the total number of task execution devices, are used for executing the tasks to be processed.
The performance testing module is used for testing the processing capability of each task execution device for different task types, and for classifying the task execution devices into corresponding device types in proportion, combining the priority of the task types with the counted proportion of each task type.
The processing capability for different task types is tested as follows:
for each task type j ∈ {A, B, C, D, E}, a test task set containing a certain number of test tasks is generated and distributed to each task execution device. The time Tij that each task execution device takes to complete the test task set is recorded, where i is the serial number of the task execution device, i ∈ [1, M]. The amount of tasks of type j (generally, the number of tasks is used as the task amount) that device i completes per unit time is then calculated as its processing capability for task type j, denoted NLij.
The task execution devices are classified by performing the following processing in order of task type priority, from high to low; it is worth mentioning that the priorities of the task types are preset manually.
The product of the proportion ZBj of the current-priority task type j and the total number M of task execution devices is taken as the number k of devices of the corresponding device type, which achieves a preliminary balance based on the proportions of the task types. To ensure that tasks with higher real-time requirements can be processed and completed as soon as possible, this embodiment takes, from among the task execution devices that do not yet carry a device type identifier, the first k devices with the strongest processing capability for the task type of the current priority and attaches the corresponding device type identifier; the device type identifiers correspond one-to-one to the task type identifiers.
It is well known in the art that the number k must be an integer; in practice, k can therefore be rounded down or rounded to the nearest integer to achieve this.
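Purely as an illustration of the two paragraphs above, and not as the patent's own implementation, the proportional classification could be sketched as follows; NL and ZB follow the notation of this embodiment, rounding down is chosen as one of the two options just mentioned, and every other name is an assumption:

```python
import math

def classify_devices(NL, ZB, priority_order, M):
    """Attach a device type to the k strongest devices for each task type.

    NL[i][j]       - processing capability of device i for task type j
    ZB[j]          - proportion of task type j in the batch
    priority_order - task types sorted from highest to lowest priority
    M              - total number of task execution devices
    """
    unassigned = set(NL.keys())
    assignment = {}
    for j in priority_order:
        k = math.floor(ZB[j] * M)                 # number of devices for this device type
        # the k unassigned devices with the strongest capability for task type j
        best = sorted(unassigned, key=lambda i: NL[i][j], reverse=True)[:k]
        for i in best:
            assignment[i] = j                     # device type identifier == task type identifier
            unassigned.remove(i)
    return assignment, unassigned                 # leftover devices may become standby devices
```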
Further, a task execution device may inevitably fail during task execution, so that its remaining tasks cannot be carried out. It is therefore necessary to provide a small number of standby task processing devices, which do not affect the overall progress and are used only when one or several task processing devices occasionally fail. For this reason, the performance testing module reserves at least one standby task processing device when classifying the device types. Referring to fig. 2, the standby task processing devices are generated as follows:
KX1: round when solving for the number k; in actual operation, at least one task execution device then remains.
KX2: after all device types have been allocated, several task execution devices remain unallocated.
KX3: compare the number of unallocated task execution devices with the preset number of standby task processing devices (determined by the staff according to the actual situation); if the number of unallocated task execution devices is greater than the preset number of standby task processing devices, go to KX4; if it is smaller, go to KX5.
KX4: here the redundant unallocated task execution devices need to be attached to the device type of a corresponding task, which further improves the overall performance of the system. In order of task type priority from high to low, the unallocated task execution device with the strongest processing capability for the current task type is polled in turn and attached with the corresponding device type identifier, until the number of unallocated task execution devices equals the preset number of standby task processing devices. Because the device type attached is the one for which the device's processing capability is strongest, the devices left over as standby are the correspondingly weaker ones, which improves the overall processing performance.
KX5: in order of task type priority from high to low, the task execution devices that already carry device type identifiers are polled in turn, and the one with the weakest processing capability for its corresponding task type has its identifier removed, until the number of unallocated task execution devices equals the preset number of standby task processing devices. When some of the task execution devices that already carry device type identifiers must be released to serve as standby, those with weaker processing capability are preferred; this is the converse of KX4 and likewise protects the overall processing performance.
KX6: attach standby identifiers to the unallocated task execution devices. When one or more task processing devices have not finished their tasks for a long time, they are considered to have suffered a downtime failure, and their pending tasks are immediately transferred to the standby task execution devices, which ensures that the tasks proceed normally.
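Continuing the classification sketch above, one plausible reading of KX1 to KX6 is the following; the real system would also have to write the identifiers onto the devices themselves, and all names remain assumptions:

```python
from itertools import cycle

def reserve_standby(assignment, unassigned, NL, priority_order, spare_target):
    """Leave exactly `spare_target` devices without a device type and mark them standby (KX2-KX6)."""
    spare = set(unassigned)
    if len(spare) > spare_target:                        # KX3 -> KX4
        for j in cycle(priority_order):                  # poll task types from high to low priority
            if len(spare) == spare_target:
                break
            i = max(spare, key=lambda d: NL[d][j])       # strongest spare device for type j
            assignment[i] = j
            spare.remove(i)
    elif len(spare) < spare_target:                      # KX3 -> KX5
        for j in cycle(priority_order):
            if len(spare) == spare_target or not assignment:
                break
            of_type_j = [d for d, t in assignment.items() if t == j]
            if not of_type_j:
                continue
            i = min(of_type_j, key=lambda d: NL[d][j])   # weakest device of type j loses its identifier
            del assignment[i]
            spare.add(i)
    standby = {i: "STANDBY" for i in spare}              # KX6: attach standby identifiers
    return assignment, standby
```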
The load balancer distributes the tasks to be processed in the message queue to the task execution devices of the corresponding types. Referring to fig. 3, the specific allocation steps are as follows:
FP1: for each device type, calculate the ratio of each task execution device's processing capability to the total processing capability of all task execution devices of that device type.
FP2: for the task type corresponding to the device type, take the product of the number of tasks of that type among the tasks to be processed and the ratio obtained in FP1 as the upper limit of the amount of tasks from this batch that the task execution device will process.
FP3: after extracting the tasks to be processed from the message queue, the load balancer polls and distributes them by task type to the task execution devices of the corresponding device type that have not yet reached their task amount upper limit, and removes a task execution device from the polling queue once it reaches its upper limit.
This allocation method ensures that, after the initial allocation, the time each task execution device needs to complete its allocated tasks to be processed is approximately equal.
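Under the same assumed data structures, the initial allocation FP1 to FP3 might look like the sketch below; the ceiling in FP2 is an assumption (the patent does not say how fractional limits are rounded), and the message queue is simplified to a list of (task_id, type) pairs:

```python
import math
from collections import defaultdict, deque

def compute_upper_limits(assignment, NL, type_counts):
    """FP1/FP2: per-device task amount upper limits for the current batch."""
    limits = {}
    by_type = defaultdict(list)
    for i, j in assignment.items():
        by_type[j].append(i)
    for j, devices in by_type.items():
        total = sum(NL[i][j] for i in devices)            # total capability of this device type
        for i in devices:
            share = NL[i][j] / total                      # FP1: capability ratio
            limits[i] = math.ceil(type_counts[j] * share) # FP2: ratio * number of type-j tasks
    return limits

def distribute(pending, assignment, limits):
    """FP3: round-robin distribution, dropping devices that reach their upper limit."""
    queues = defaultdict(deque)                           # per-type polling queues of devices
    for i, j in assignment.items():
        queues[j].append(i)
    placed = defaultdict(list)
    for task_id, j in pending:                            # tasks pulled from the message queue
        while queues[j]:                                  # (a type with no devices is left unplaced here)
            i = queues[j][0]
            if len(placed[i]) < limits[i]:
                placed[i].append(task_id)
                queues[j].rotate(-1)                      # move on to the next device of this type
                break
            queues[j].popleft()                           # device reached its limit: remove it
    return placed
```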
It should be noted that after a task execution device has processed a task, it sends feedback back to the message queue through the load balancer, and after receiving the feedback the message queue deletes the corresponding task to be processed.
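A minimal sketch of that acknowledgement path, assuming the message queue keeps pending tasks keyed by task id (the class and method names are illustrative, not the patent's):

```python
class MessageQueue:
    def __init__(self):
        self.pending = {}                 # task_id -> task payload

    def deliver(self, task_id, payload):
        self.pending[task_id] = payload   # buffered until completion is confirmed

    def ack(self, task_id):
        # called by the load balancer when a device reports a finished task
        self.pending.pop(task_id, None)   # delete the corresponding pending task
```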
As is known in the art, the execution environment of a task processing device (temperature, humidity, voltage and so on) may change while it executes tasks, so after the initial allocation the actual task completion times do not exactly match the expected completion times, and one or more task execution devices may run faster while others run slower.
For this reason, the present application provides a new grabbing mechanism: the load balancer monitors the completion status of each task execution device, and when a task execution device has completed all of its tasks to be processed, it grabs tasks to be processed from the task execution devices of the same device type. The grabbing method is as follows: calculate the predicted remaining completion time of each task execution device of the same device type that still has tasks to be processed, and compare it with a set first time threshold. The first time threshold is the delay the staff is willing to accept; within it, the overall task processing progress is not affected. If the predicted remaining completion time of one or more task execution devices exceeds the first time threshold, tasks to be processed are transferred from those devices, in descending order of predicted remaining completion time, to the task execution device that has completed all of its tasks, until the predicted remaining completion time of that device reaches the first time threshold.
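Sticking with the assumed structures above, the trigger for this grabbing mechanism could be sketched as follows; the amount actually grabbed is computed by the ZQ steps described below:

```python
def predicted_remaining_time(backlog, NL, dev_type, i):
    """Remaining pending tasks on device i divided by its capability for its own task type."""
    return len(backlog[i]) / NL[i][dev_type[i]]

def overloaded_peers(idle_dev, devices, backlog, NL, dev_type, T1):
    """Same-type devices whose predicted remaining completion time exceeds the
    first time threshold T1, sorted from largest to smallest remaining time."""
    j = dev_type[idle_dev]
    same_type = [i for i in devices
                 if i != idle_dev and dev_type[i] == j and backlog[i]]
    over = [i for i in same_type
            if predicted_remaining_time(backlog, NL, dev_type, i) > T1]
    return sorted(over,
                  key=lambda i: predicted_remaining_time(backlog, NL, dev_type, i),
                  reverse=True)
```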
For example, the first time threshold is set to 10 s:
among the task execution devices of the same device type, one completes all of its tasks to be processed first;
the predicted remaining completion time of the other task execution devices of the same device type is calculated as the amount of remaining tasks to be processed divided by the device's processing capability for the corresponding task type;
a certain amount of tasks to be processed is grabbed from the task execution device with the largest predicted remaining completion time and transferred to the task execution device that has completed all of its tasks. Referring to fig. 4, the amount to grab is calculated as follows:
ZQ1: calculate the task amount pi that each task execution device can be expected to complete within the next first-time-threshold period; take the difference between its actual uncompleted task amount qi and pi as the overflow value si of that task execution device, where i is the serial number of the task execution device.
ZQ2: sort the overflow values si of the task execution devices.
ZQ3: calculate the task amount p0 that the task execution device which has completed all of its tasks can be expected to complete within the next first-time-threshold period; compare p0 with the maximum overflow value smax. If p0 is smaller than smax, grab tasks to be processed only from the task execution device corresponding to smax and transfer them to the device that has completed all of its tasks, the amount grabbed being p0.
ZQ4: if p0 is larger than smax, grab the overflowing tasks to be processed (the amount corresponding to smax) from that task execution device and transfer them to the device that has completed all of its tasks.
ZQ5: repeat ZQ2 to ZQ3 until the total amount of grabbed tasks reaches the task amount p0 that the device which has completed all of its tasks can be expected to complete within the next first-time-threshold period.
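One plausible reading of ZQ1 to ZQ5, expressed with the same assumed structures; task amounts are kept as real numbers here and would be rounded to whole tasks in practice:

```python
def grab_amounts(idle_dev, peers, backlog, NL, dev_type, T1):
    """How much pending work to pull from which same-type peer (ZQ1-ZQ5).

    Returns {peer_device: task_amount_to_transfer}.
    """
    j = dev_type[idle_dev]
    p = {i: NL[i][j] * T1 for i in peers}            # ZQ1: work each peer finishes within T1
    s = {i: len(backlog[i]) - p[i] for i in peers}   # ZQ1: overflow value si
    p0 = NL[idle_dev][j] * T1                        # ZQ3: what the freed device can absorb within T1
    grabbed, total = {}, 0.0
    while total < p0 and any(v > 0 for v in s.values()):
        i_max = max(s, key=s.get)                    # ZQ2: peer with the largest overflow smax
        take = min(s[i_max], p0 - total)             # ZQ3: grab up to p0; ZQ4: else grab smax
        if take <= 0:
            break
        grabbed[i_max] = grabbed.get(i_max, 0) + take
        total += take
        s[i_max] -= take                             # ZQ5: re-sort and repeat until p0 is reached
    return grabbed
```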
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111589330.5A CN114237857B (en) | 2021-05-10 | 2021-05-10 | A task distribution system for big data task crawling |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111589330.5A CN114237857B (en) | 2021-05-10 | 2021-05-10 | A task distribution system for big data task crawling |
CN202110507596.4A CN112988360B (en) | 2021-05-10 | 2021-05-10 | A task distribution system based on big data analysis |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110507596.4A Division CN112988360B (en) | 2021-05-10 | 2021-05-10 | A task distribution system based on big data analysis |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114237857A true CN114237857A (en) | 2022-03-25 |
CN114237857B CN114237857B (en) | 2024-12-27 |
Family
ID=76337452
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110507596.4A Active CN112988360B (en) | 2021-05-10 | 2021-05-10 | A task distribution system based on big data analysis |
CN202111589330.5A Active CN114237857B (en) | 2021-05-10 | 2021-05-10 | A task distribution system for big data task crawling |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110507596.4A Active CN112988360B (en) | 2021-05-10 | 2021-05-10 | A task distribution system based on big data analysis |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN112988360B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116225725A (en) * | 2023-05-10 | 2023-06-06 | 西安敦讯信息技术有限公司 | Flow configuration method and system based on RPA robot |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112988360B (en) * | 2021-05-10 | 2022-04-01 | 杭州绿城信息技术有限公司 | A task distribution system based on big data analysis |
CN113901262B (en) * | 2021-09-24 | 2024-12-24 | 北京达佳互联信息技术有限公司 | Method, device, server and storage medium for obtaining data to be processed |
CN116302404B (en) * | 2023-02-16 | 2023-10-03 | 北京大学 | Resource decoupling data center-oriented server non-perception calculation scheduling method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107741882A (en) * | 2017-11-22 | 2018-02-27 | 阿里巴巴集团控股有限公司 | The method and device and electronic equipment of distribution task |
CN112380024A (en) * | 2021-01-18 | 2021-02-19 | 天道金科股份有限公司 | Thread scheduling method based on distributed counting |
CN112988360A (en) * | 2021-05-10 | 2021-06-18 | 杭州绿城信息技术有限公司 | Task distribution system based on big data analysis |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101799809B (en) * | 2009-02-10 | 2011-12-14 | 中国移动通信集团公司 | Data mining method and system |
KR101553649B1 (en) * | 2013-05-13 | 2015-09-16 | 삼성전자 주식회사 | Multicore apparatus and job scheduling method thereof |
CN108965364B (en) * | 2017-05-22 | 2021-06-11 | 杭州海康威视数字技术股份有限公司 | Resource allocation method, device and system |
CN109309726A (en) * | 2018-10-25 | 2019-02-05 | 平安科技(深圳)有限公司 | Document generating method and system based on mass data |
CN111709613B (en) * | 2020-05-26 | 2025-01-14 | 中国平安财产保险股份有限公司 | Method, device and computer equipment for automatic task allocation based on data statistics |
CN112162839A (en) * | 2020-09-25 | 2021-01-01 | 太平金融科技服务(上海)有限公司 | Task scheduling method and device, computer equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114237857B (en) | 2024-12-27 |
CN112988360B (en) | 2022-04-01 |
CN112988360A (en) | 2021-06-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112988360B (en) | A task distribution system based on big data analysis | |
CN112162865B (en) | Scheduling method and device of server and server | |
CN109726983B (en) | Method, device, computer equipment and storage medium for assigning approval tasks | |
CN102193832B (en) | Cloud Computing Resource Scheduling Method and Application System | |
CN112272203B (en) | Cluster service node selection method, system, terminal and storage medium | |
CN107918864B (en) | Electronic insurance policy generation method and device, computer equipment and storage medium | |
CN112134802A (en) | Edge computing power resource scheduling method and system based on terminal triggering | |
CN105487930A (en) | Task optimization scheduling method based on Hadoop | |
CN113010576A (en) | Method, device, equipment and storage medium for capacity evaluation of cloud computing system | |
CN107968802A (en) | The method, apparatus and filtering type scheduler of a kind of scheduling of resource | |
CN109240820A (en) | Processing method and processing device, electronic equipment and the storage medium of image processing tasks | |
CN110955516A (en) | Batch task processing method and device, computer equipment and storage medium | |
CN109117280A (en) | The method that is communicated between electronic device and its limiting process, storage medium | |
CN116755891B (en) | Event queue processing method and system based on multithreading | |
CN117707763A (en) | Hierarchical calculation scheduling method, system, equipment and storage medium | |
CN112612610B (en) | SOC service quality guarantee system and method based on Actor-Critic deep reinforcement learning | |
CN110648060A (en) | Method for automatically allocating tasks to customer service | |
CN107770038B (en) | Message sending method and device | |
CN112817732B (en) | A stream data processing method and system adapting to cloud-edge collaborative multi-data center scenarios | |
CN116089030A (en) | Data processing method, system, computer device and storage medium | |
CN111628943B (en) | Intelligent Internet of things method based on communication and perception technology | |
CN114915628A (en) | Data resource calling system and method for cloud computing database | |
CN111124688A (en) | Server resource control method and system | |
CN118656216B (en) | A data center resource management system and method based on cloud computing | |
CN117407143B (en) | Data center management system based on cloud computing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |