
CN112448977A - System, method, apparatus and computer readable medium for assigning tasks - Google Patents

System, method, apparatus and computer readable medium for assigning tasks

Info

Publication number
CN112448977A
Authority
CN
China
Prior art keywords
task
client
processing
scheduler
tasks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910816194.5A
Other languages
Chinese (zh)
Inventor
韩立村
于林坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201910816194.5A
Publication of CN112448977A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/01 Protocols
    • H04L67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L67/10 Protocols in which an application is distributed across nodes in the network

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a system, method, device, and computer readable medium for distributing tasks, relating to the field of computer technology. In one embodiment, the system comprises a client, one or more task schedulers, and one or more working nodes. The client uses the hypertext preprocessor (PHP) to request that the task scheduler process the task of a hypertext transfer protocol (HTTP) request; the task scheduler receives the task, distributes it to a working node, and monitors the working node's task processing result; and the working node processes the task. This embodiment reduces the time spent processing an HTTP request, so HTTP requests sent through PHP are less likely to time out.

Description

System, method, apparatus and computer readable medium for assigning tasks
Technical Field
The present invention relates to the field of computer technology, and in particular, to a system, method, device, and computer readable medium for distributing tasks.
Background
The LNMP website architecture is currently a popular network framework internationally. It comprises the Linux operating system, the Nginx web server, the MySQL database, and the hypertext preprocessor (PHP); combined, they form a free and efficient website service system.
PHP's speed, flexibility, and practicality make it particularly suitable for website development. According to surveys, more than 78.9% of public websites worldwide use PHP on the server side, driving over 2 billion websites worldwide.
FPM (FastCGI Process Manager) is the process manager for PHP's FastCGI operating mode, and its core function is process management. A multi-process mode is adopted between the web server (Nginx) and FPM. Because the working node is a process-blocking model, and the working node here is an FPM child process, an FPM child process can respond to only one hypertext transfer protocol (HTTP) request at a time; only after that HTTP request has been processed does it receive the next HTTP request.
In the process of implementing the invention, the inventors found that the prior art has at least the following problem: because some HTTP requests consume a long processing time, HTTP requests sent through PHP are very likely to time out.
Disclosure of Invention
In view of this, embodiments of the present invention provide a system, method, device, and computer-readable medium for distributing tasks that can reduce the time spent processing an HTTP request, so that HTTP requests sent by PHP are less likely to time out.
To achieve the above object, according to one aspect of the embodiments of the present invention, there is provided a system for distributing tasks, the system comprising: a client, one or more task schedulers, and one or more working nodes;
the client is coupled to the task scheduler, and the task scheduler is coupled to the one or more working nodes;
the client is configured to use a hypertext preprocessor (PHP) to request that the task scheduler process a task of a hypertext transfer protocol (HTTP) request;
the task scheduler is configured to receive the task, distribute the task to a working node, and monitor the working node's task processing result;
and the working node is configured to process the task.
The task scheduler is further configured to store the task of the HTTP request in a database;
and to load the task of the HTTP request from the database when the task scheduler is restarted.
The client uses PHP to request that the task scheduler process the task in an asynchronous mode, and disconnects from the task scheduler after sending the task to it;
or,
the client uses PHP to request that the task scheduler process the task in a synchronous mode, and maintains the connection with the task scheduler after sending the task, so as to wait for the task's processing result.
In the case where the client uses PHP to request that the task scheduler process the task in a synchronous mode, the client queries the task scheduler for the processing state of the task through the connection.
The system further comprises a monitoring node for monitoring the state of the task scheduler and the state of the working node, wherein
the state of the working node comprises the IP address of the working node, the IP address of the task scheduler connected to the working node, the name of the task being executed, and the number of sub-threads executing the task;
and the state of the task scheduler comprises the IP address of the task scheduler, the number of outstanding tasks, the number of working nodes currently executing tasks, and the number of available working nodes.
According to a second aspect of the embodiments of the present invention, there is provided a method of distributing tasks, comprising:
receiving a task, sent by a client, that uses a hypertext preprocessor (PHP) request to process a hypertext transfer protocol (HTTP) request;
and distributing the task of the HTTP request to a working node, and monitoring the working node's task processing result.
The method further comprises:
storing the task of the HTTP request in a database;
and, before processing the task of the HTTP request,
loading the task of the HTTP request from the database upon restart.
Receiving the task of using a PHP request to process the HTTP request comprises:
disconnecting from the client after receiving the task of requesting, in an asynchronous mode through PHP, that the HTTP request be processed.
Receiving the task, sent by the client, of using a PHP request to process the HTTP request comprises:
receiving a request sent by the client in an asynchronous mode using PHP, and disconnecting from the client after the task has been processed;
or,
receiving a request sent by the client in a synchronous mode using PHP, and maintaining the connection with the client after the task has been processed, so as to wait for the task's processing result.
According to a third aspect of embodiments of the present invention, there is provided an electronic device for distributing tasks, comprising:
one or more processors;
a storage device for storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the method described above.
According to a fourth aspect of embodiments of the present invention, there is provided a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the method as described above.
One embodiment of the invention described above has the following advantage or benefit: the client uses PHP to request that the task scheduler process the task of an HTTP request; the task scheduler receives the task, distributes it to a working node, and monitors the working node's task processing result; and the working node processes the task. Because working nodes can process tasks quickly, task processing is accelerated and the time spent processing the HTTP request is reduced, so HTTP requests sent by PHP are less likely to time out.
Further effects of the optional implementations mentioned above will be described below in connection with specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of a process for processing an HTTP request according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the main structure of a system for assigning tasks according to an embodiment of the invention;
FIG. 3 is a timing diagram for allocating asynchronous processing tasks according to an embodiment of the invention;
FIG. 4 is a timing diagram of the allocation of synchronous processing tasks according to an embodiment of the invention;
FIG. 5 is a schematic illustration of monitoring according to an embodiment of the invention;
FIG. 6 is a schematic diagram of a main flow of a method of assigning tasks according to an embodiment of the invention;
FIG. 7 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 8 is a schematic structural diagram of a computer system suitable for implementing a terminal device or a server according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
A multi-process mode is adopted between the web server and FPM to process HTTP requests sent by the client. Referring to fig. 1, fig. 1 is a schematic diagram of the processing of an HTTP request according to an embodiment of the present invention, involving the client, the web server, and the child processes.
The client sends the HTTP request to the web server, and the web server passes the HTTP request through FastCGI to an FPM child process for processing. Specifically, FPM creates a master process that listens for requests and then forks a number of child processes, each of which receives HTTP requests. A child process can receive and process the next HTTP request only after it has finished processing the one it received; if the previous HTTP request has not been processed, the child process cannot handle the next one. Because some HTTP requests consume a long processing time, HTTP requests sent through PHP are very likely to time out.
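The blocking behavior described above can be illustrated with a minimal prefork sketch in PHP. This is an illustration of the model only, not FPM's actual implementation; the port, the number of children, and the simulated slow request are arbitrary assumptions.

```php
<?php
// Minimal prefork sketch of the process-blocking model described above.
// Illustration only, not FPM's implementation: the port, the number of
// children, and the simulated slow request are arbitrary assumptions.
// Requires the pcntl extension (PHP CLI mode).
$server = stream_socket_server('tcp://127.0.0.1:9001', $errno, $errstr);
if ($server === false) {
    exit("listen failed: $errstr\n");
}

$children = 4;                                  // pre-forked child processes
for ($i = 0; $i < $children; $i++) {
    if (pcntl_fork() === 0) {                   // in a child process
        while (true) {
            $conn = stream_socket_accept($server, -1); // take one request
            fread($conn, 8192);                        // read the request
            sleep(5);                                  // simulate a slow request
            fwrite($conn, "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok");
            fclose($conn);
            // only now can this child accept the next request
        }
    }
}
while (pcntl_waitpid(-1, $status) > 0);         // master only supervises
```

With all children busy on slow requests, a new request must wait, which is exactly the timeout risk the embodiments below address.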
In order to solve the technical problem that HTTP requests sent by PHP are likely to time out, the following technical solution of the embodiments of the present invention may be adopted.
Referring to fig. 2, fig. 2 is a schematic diagram of a main structure of a system for allocating tasks according to an embodiment of the present invention, specifically including a client, a task scheduler, and a work node. The client is coupled with the task scheduler; the task scheduler is coupled to the work node.
Specifically, client A1 and client A2 are each coupled to task scheduler A. The task scheduler a is coupled to the working node 1, the working node 2, the working node 3, the working node 4, the working node 5, and the working node 6, respectively. The task scheduler a is coupled to the database a.
Client B1 and client B2 are each coupled to task scheduler B. The task scheduler B is coupled to the working node 1, the working node 2, the working node 3, the working node 4, the working node 5, and the working node 6, respectively. The task scheduler B is coupled to the database B.
It is understood that a task scheduler may be connected to multiple clients, a task scheduler may be connected to one or more databases, and a task scheduler may be connected to multiple working nodes. The same working node can register with several task schedulers at the same time, and one working node can be connected to one or more task schedulers. The working node and the task scheduler stay directly connected through a heartbeat mechanism: at a fixed interval, the working node sends a self-defined structure, the heartbeat packet, to let the task scheduler know that it is still alive, which ensures the validity of the connection. In this way, when a working node is idle, a task can be scheduled to it by one of the task schedulers.
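For illustration only, a working node's heartbeat loop might look like the sketch below; the scheduler address, the port, the 5-second interval, and the JSON layout of the heartbeat packet are assumptions, since the text does not specify them.

```php
<?php
// Sketch of a working node's heartbeat loop (illustration only).
// The scheduler address, port, interval, and JSON packet layout are assumed.
$scheduler = fsockopen('192.168.0.10', 4000, $errno, $errstr, 3.0);
if ($scheduler === false) {
    exit("cannot reach the task scheduler: $errstr\n");
}

while (true) {
    $heartbeat = json_encode([
        'type'      => 'HEARTBEAT',               // self-defined structure
        'worker_ip' => gethostbyname(gethostname()),
        'time'      => time(),
    ]);
    fwrite($scheduler, $heartbeat . "\n");        // tell the scheduler we are alive
    sleep(5);                                     // fixed interval
}
```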
In an embodiment of the invention, multiple task schedulers are deployed across multiple machine rooms in the production environment so that every client HTTP request can be handled in time. Illustratively, the client randomly obtains the IP address of one task scheduler from a task scheduler list, where the list contains the IP addresses of the multiple task schedulers. The client determines through a ping function whether the task scheduler whose IP address was randomly obtained is available; if that task scheduler is unavailable, the client randomly obtains the IP address of another task scheduler from the list. Then, according to the technical solution of the embodiment of the invention, the task of the HTTP request is sent through the task scheduler to a working node for processing.
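The selection logic described above might be sketched as follows; the scheduler IP list and port are hypothetical, and a short TCP connection attempt stands in for the ping function mentioned in the text.

```php
<?php
// Pick a reachable task scheduler at random from a list (illustration only).
// The addresses and port are hypothetical; a short TCP connection attempt
// stands in for the ping function mentioned in the text.
function pickScheduler(array $schedulerIps, int $port = 4000): ?string
{
    shuffle($schedulerIps);                        // randomize the order
    foreach ($schedulerIps as $ip) {
        $conn = @fsockopen($ip, $port, $errno, $errstr, 1.0);
        if ($conn !== false) {                     // this scheduler is reachable
            fclose($conn);
            return $ip;
        }
        // unreachable: try the next randomly ordered scheduler
    }
    return null;                                   // no scheduler available
}

$schedulerIp = pickScheduler(['192.168.0.10', '192.168.0.11', '192.168.0.12']);
```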
The client uses PHP to request that the task scheduler process the task of the HTTP request. Tasks of HTTP requests fall into two types: asynchronous processing tasks and synchronous processing tasks.
Asynchronous processing tasks are tasks for which the client does not need to wait for the processing result. As an example, asynchronous processing tasks include one or more of the following: order processing, batch mail, notification messages, bulk text messaging, and log aggregation.
Synchronous processing tasks are tasks for which the client needs to wait for the processing result. As an example, synchronous processing tasks include generating thumbnails and/or cropping pictures.
The task scheduler receives the task of the HTTP request sent by the client, distributes the task to a working node, and monitors the working node's task processing result.
Specifically, when the task of the HTTP request is an asynchronous processing task, it is distributed to a working node for asynchronous processing; when it is a synchronous processing task, it is distributed to a working node for synchronous processing.
The working node is responsible for processing the task of the HTTP request. An asynchronous processing task is processed asynchronously: its result does not need to be returned to the client, and after the asynchronous processing finishes, the working node directly sends a completion message to the task scheduler to inform it that the asynchronous processing task has been completed.
A synchronous processing task is processed synchronously by the working node. Considering that the client is waiting for the synchronous result, the task scheduler may assign synchronous processing tasks to working nodes with stronger computing power. After the synchronous processing task is completed, the working node feeds the synchronous result back to the client, and the result is forwarded to the client through the task scheduler.
In one embodiment of the invention, to avoid losing tasks of HTTP requests that have not yet been processed, the tasks of HTTP requests may also be stored in a database, which may be a Redis database. In an emergency, for example when the task scheduler is restarted or crashes unexpectedly, the task scheduler can load the tasks of HTTP requests from the database upon restart.
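For illustration, such persistence could be sketched with the phpredis extension as below; the Redis key name and the JSON layout of a task are assumptions, not part of the description.

```php
<?php
// Persist pending HTTP-request tasks in Redis so they survive a scheduler
// restart (illustration only; requires the phpredis extension, and the key
// name and task layout are assumptions).
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// When the scheduler accepts a task, append it to a pending list.
function persistTask(Redis $redis, array $task): void
{
    $redis->rPush('pending_http_tasks', json_encode($task));
}

// When the scheduler restarts, reload everything that was still pending.
function reloadTasks(Redis $redis): array
{
    $raw = $redis->lRange('pending_http_tasks', 0, -1);
    return array_map(fn ($t) => json_decode($t, true), $raw);
}
```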
In the above embodiment, the task scheduler distributes the task of the HTTP request to a working node. Working nodes are responsible for processing asynchronous and/or synchronous processing tasks: for an asynchronous processing task, a completion message is sent directly to the task scheduler; for a synchronous processing task, the processing result is sent to the client. Multiple working nodes can process asynchronous and synchronous tasks at the same time, and the progress of one task does not affect the processing of the others. This speeds up the processing of asynchronous and/or synchronous tasks and reduces the time spent processing HTTP requests, so HTTP requests sent through PHP are less likely to time out; client HTTP requests are answered quickly, which greatly improves the user experience.
The allocation of asynchronous processing tasks and the allocation of synchronous processing tasks are illustrated separately in the following with reference to the accompanying drawings.
Referring to fig. 3, fig. 3 is a timing diagram of allocating asynchronous processing tasks according to an embodiment of the invention.
In fig. 3, the client uses PHP to initiate an asynchronous processing task of an HTTP request; the task is assigned through the task scheduler and a task daemon to a working node, which processes it. The task daemon is a PHP daemon started by PHP in CLI mode (PHP's command line interface); it is responsible for communicating with the working node and for creating child processes. The task scheduler monitors the working node's task processing result through the task daemon.
S301, submitting an asynchronous task.
The client initiates a new task (SUBMIT_JOB_BG) through an HTTP request, submitting the asynchronous processing task to the task scheduler.
S302, idle.
The task daemon registers with the task scheduler through an idle (CAN_DO) message.
S303, requesting a task.
The task daemon sends a request task (GRAB_JOB) message, asking the scheduler to dispatch a task.
S304, task creation succeeds.
The task scheduler asynchronously replies to the client with a task creation success (JOB_CREATED) message, prompting the client that the task was created successfully.
S305, no task.
If the task scheduler has no task to dispatch, it returns a no-task (NO_JOB) message to inform the working node that there is currently no task.
S306, disconnecting.
Because the task initiated by the client through the HTTP request is an asynchronous processing task, the client does not need to wait for its processing result. The connection between the client and the task scheduler can therefore be closed after S304, to avoid the resource consumption of keeping the client and the task scheduler connected: the client disconnects (DISCONNECT) from the task scheduler.
S307, sleeping.
After receiving NO_JOB, the task daemon enters a sleep state. At the same time, it sends a PRE_SLEEP message to the task scheduler, asking the scheduler to wake it up with a NOOP message when a task arrives.
S308, NOOP.
The task scheduler sends a NOOP message to the task daemon.
S309, requesting a task.
After receiving the NOOP message, the task daemon sends GRAB_JOB to request the task from the task scheduler.
S310, distributing the task.
If the current asynchronous processing task has not yet been processed, the task scheduler dispatches it to the task daemon with a JOB_ASSIGN message.
S311, creating a task.
After receiving the JOB_ASSIGN message, the task daemon forks a child process in PHP's CLI mode and continuously monitors the state of the child process.
S312, processing the task.
Having received the parent process's context environment variables, the child process of the working node performs the corresponding business logic to process the asynchronous processing task.
S313, completing the task.
After the task daemon detects that the working node's child process has finished processing the task, it sends a WORK_COMPLETE message to the task scheduler to announce that the asynchronous processing task is complete. At the same time, a new child process is forked to handle subsequently assigned tasks.
In the embodiment of fig. 3 above, the client sends the asynchronous processing task through the task scheduler and the task daemon to the working node, which is responsible for processing it. Because the task is asynchronous, the client does not need to wait for the asynchronous result; after the asynchronous processing task is created successfully, the connection between the client and the task scheduler is closed, which avoids wasting resources.
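For illustration only, the daemon side of fig. 3 (register, sleep until NOOP, grab a job, fork a CLI child, report WORK_COMPLETE) might be sketched as follows. The wire format (one JSON object per line), the scheduler address, and the process_task() handler are assumptions; only the message names are taken from the figure.

```php
<?php
// Sketch of the task daemon's main loop from fig. 3 (illustration only).
// One JSON object per line, the scheduler address, and process_task() are
// assumptions; the message names follow the figure. Requires the pcntl
// extension (PHP CLI mode).
$scheduler = fsockopen('192.168.0.10', 4000, $errno, $errstr, 3.0);
if ($scheduler === false) {
    exit("cannot reach the task scheduler: $errstr\n");
}

function send($conn, string $type, array $data = []): void
{
    fwrite($conn, json_encode(['type' => $type] + $data) . "\n");
}

function recv($conn): array
{
    return json_decode((string) fgets($conn), true) ?: [];
}

// Hypothetical stand-in for the working node's business logic.
function process_task(string $workload): void
{
    // ... perform the work described by $workload ...
}

send($scheduler, 'CAN_DO', ['function' => 'generate_thumbnail']);  // S302: register

while (true) {
    send($scheduler, 'GRAB_JOB');                    // S303/S309: ask for work
    $msg = recv($scheduler);

    if (($msg['type'] ?? '') === 'NO_JOB') {         // S305: nothing to do
        send($scheduler, 'PRE_SLEEP');               // S307: sleep until woken
        do {
            $msg = recv($scheduler);
        } while (($msg['type'] ?? '') !== 'NOOP');   // S308: scheduler wakes us
        continue;                                    // S309: grab again
    }

    if (($msg['type'] ?? '') === 'JOB_ASSIGN') {     // S310: a task was assigned
        $pid = pcntl_fork();                         // S311: fork a CLI child
        if ($pid === 0) {
            // S312: the child inherits the parent's context and runs the
            // business logic for the assigned task.
            process_task($msg['workload'] ?? '');
            exit(0);
        }
        pcntl_waitpid($pid, $status);                // parent monitors the child
        send($scheduler, 'WORK_COMPLETE',            // S313: report completion
             ['handle' => $msg['handle'] ?? '']);
    }
}
```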
Referring to fig. 4, fig. 4 is a timing diagram of allocating synchronous processing tasks according to an embodiment of the present invention.
In fig. 4, the client uses PHP to initiate the processing task of an HTTP request; the synchronous processing task is assigned through the task scheduler and the task daemon to a working node, which processes it.
S401, idle.
The task daemon registers with the task scheduler through an idle (CAN_DO) message.
S402, requesting a task.
The task daemon sends a request task (GRAB_JOB) message, asking the scheduler to dispatch a task.
S403, no task.
If the task scheduler has no task to dispatch, it returns a no-task (NO_JOB) message to inform the working node that there is currently no task.
S404, sleeping.
After receiving NO_JOB, the task daemon enters a sleep state. At the same time, it sends a PRE_SLEEP message to the task scheduler, asking the scheduler to wake it up with a NOOP message when a task arrives.
S405, submitting the task.
The client initiates a new task (SUBMIT_JOB_BG) through an HTTP request, submitting the synchronous processing task to the task scheduler.
S406, task creation succeeds.
The task scheduler synchronously replies to the client with a task creation success (JOB_CREATED) message, prompting the client that the task was created successfully.
S407, NOOP.
The task scheduler sends a NOOP message to the task daemon.
S408, requesting a task.
After receiving the NOOP message, the task daemon sends GRAB_JOB to request the task from the task scheduler.
S409, acquiring the state.
The client obtains the state of the task scheduler; the task scheduler is in a connected state, which indicates that it can deliver the processing result of the synchronous processing task. Because the client remains connected to the task scheduler, it can query the scheduler through this connection for the processing state of the synchronous processing task and so learn its progress in time.
S410, distributing the task.
If the current synchronous processing task has not yet been processed, the task scheduler dispatches it to the task daemon with a JOB_ASSIGN message.
S411, creating a task.
After receiving the JOB_ASSIGN message, the task daemon forks a child process in PHP's CLI mode and continuously monitors the state of the child process.
S412, feeding back the state result.
The task scheduler feeds the state result back to the client, to confirm that the task scheduler is properly connected to the client.
S413, processing the task.
Having received the parent process's context environment variables, the child process of the working node performs the corresponding business logic to process the synchronous processing task.
S414, completing the task.
After the task daemon detects that the working node's child process has finished processing the task, it sends a WORK_COMPLETE message to the task scheduler to announce that the synchronous processing task is complete. At the same time, a new child process is forked to handle subsequently assigned tasks.
S415, completing the task.
The task scheduler informs the client that the synchronous processing task has been completed and feeds the processing result back to the client.
S416, disconnecting.
After receiving the processing result, the client closes the connection between itself and the task scheduler.
In the embodiment of fig. 4, the client sends the synchronous processing task through the task scheduler and the task daemon to the working node, which is responsible for processing it. Because it is a synchronous processing task, the client needs to wait for the synchronous result, so after S406 the connection between the client and the task scheduler is maintained in order to wait for the processing result of the synchronous processing task. After receiving the synchronous result, the client disconnects from the task scheduler.
The embodiment of fig. 3 handles asynchronous processing tasks and the embodiment of fig. 4 handles synchronous processing tasks. The client does not need to wait for an asynchronous result, so the connection between the client and the task scheduler can be closed as soon as the task has been created successfully; because the client must wait for the processing result of a synchronous processing task, it disconnects from the task scheduler only after receiving the synchronous result.
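On the client side, the difference between the two flows is only whether the connection is kept open after the task creation reply. A sketch for illustration only, with the same kind of assumed JSON-line framing and a hypothetical scheduler address:

```php
<?php
// Client-side sketch of the two flows (illustration only; the JSON-line
// framing and the scheduler address are assumptions, not from the text).

// Asynchronous task (fig. 3): submit, wait for JOB_CREATED, then disconnect.
function submitAsync(string $function, string $workload): void
{
    $conn = fsockopen('192.168.0.10', 4000);
    fwrite($conn, json_encode(['type' => 'SUBMIT_JOB_BG',
                               'function' => $function,
                               'workload' => $workload]) . "\n");
    fgets($conn);                          // S304: JOB_CREATED acknowledgement
    fclose($conn);                         // S306: do not wait for the result
}

// Synchronous task (fig. 4): submit, keep the connection, block for the result.
function submitSync(string $function, string $workload): array
{
    $conn = fsockopen('192.168.0.10', 4000);
    fwrite($conn, json_encode(['type' => 'SUBMIT_JOB_BG',
                               'function' => $function,
                               'workload' => $workload]) . "\n");
    fgets($conn);                          // S406: JOB_CREATED acknowledgement
    $result = json_decode((string) fgets($conn), true);  // S415: processing result
    fclose($conn);                         // S416: disconnect afterwards
    return $result ?: [];
}

submitAsync('send_notification', json_encode(['order_id' => 123]));
$thumb = submitSync('generate_thumbnail', json_encode(['image' => 'a.jpg']));
```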
Referring to fig. 5, fig. 5 is a schematic illustration of monitoring according to an embodiment of the invention. The working state of the task scheduler and the working state of the working node can be obtained in real time.
In one embodiment of the invention, the system may further comprise a monitoring node for monitoring the status of the task scheduler and the status of the working node.
Illustratively, in fig. 5, different task schedulers are distinguished by their IP addresses. The value in the Queue column is the number of uncompleted tasks of the corresponding task scheduler; the value in Running is the number of working nodes currently executing its tasks; the value in the Available column is the number of available working nodes. The working states of 3 task schedulers are shown in fig. 5.
In fig. 5, different working nodes are likewise distinguished by their IP addresses, and the working states of 3 working nodes are shown. The working state of each working node includes the IP address of the task scheduler to which the working node is connected, the name of the processing task being executed, and the number of sub-threads executing the processing task.
The working state of the task scheduler in fig. 5 is based on the following information sent by the task scheduler: the number of tasks that are not completed, the number of working nodes that are performing the tasks, and the number of available working nodes. That is, the state of the task scheduler includes the IP address of the task scheduler, the number of outstanding tasks, the number of working nodes that are executing the tasks, and the number of available working nodes.
The working state of the working node in fig. 5 is formed based on the following information sent by the working node: the IP address of the working node, the IP address of the task scheduler connected with the working node, the name of the executed task and the number of sub-threads of the executed task. That is, the state of the working node includes: the IP address of the working node, the IP address of the task scheduler connected with the working node, the name of the executed task and the number of sub-threads of the executed task.
The working state of each task scheduler and each working node can be seen clearly in fig. 5, so the state of every task scheduler and working node can be known in time.
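Serialized, the two kinds of status record described above might look like the following; the field names and example values are assumptions, and only the listed contents come from the description.

```php
<?php
// Illustrative shape of the status records collected by the monitoring node.
// Field names and example values are assumptions; only the listed contents
// come from the description above.
$schedulerStatus = [
    'ip'        => '192.168.0.10',
    'queue'     => 12,                 // uncompleted tasks
    'running'   => 3,                  // working nodes currently executing tasks
    'available' => 6,                  // available working nodes
];

$workerStatus = [
    'ip'           => '192.168.0.21',
    'scheduler_ip' => '192.168.0.10',  // task scheduler this node is connected to
    'task_name'    => 'generate_thumbnail',
    'sub_threads'  => 4,               // sub-threads executing the task
];
```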
After the technical solution of the embodiment of the invention was adopted, statistics were collected on a certain interface: its TP99 can be kept at around 6 milliseconds (ms), whereas when HTTP requests are handled with the prior art, TP99 typically reaches several seconds. The technical solution of the embodiment of the invention therefore provides higher stability and processing efficiency.
Here, TP99 is computed by recording the time consumed by every call of the technical solution within a time window (for example, 5 minutes), sorting the times from smallest to largest, and taking the value at the 99th percentile as the TP99 value. After an alarm threshold corresponding to the TP99 value is configured, at least 99% of all calls within the window must consume less time than the threshold; otherwise, the system raises an alarm.
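The TP99 computation described above reduces to sorting the call durations of one window and taking the value at the 99% position, as in this sketch:

```php
<?php
// Compute TP99 from the call durations (in ms) of one statistics window,
// exactly as described above: sort ascending and take the value at the
// 99% position. The alarm threshold below is an arbitrary example.
function tp99(array $durationsMs): float
{
    if ($durationsMs === []) {
        return 0.0;
    }
    sort($durationsMs);                                   // smallest to largest
    $idx = (int) ceil(0.99 * count($durationsMs)) - 1;    // 99th-percentile index
    return (float) $durationsMs[max($idx, 0)];
}

$durations = [3.2, 4.1, 5.0, 5.8, 6.1, 7.4, 120.0];       // one 5-minute window
$alarmThresholdMs = 10.0;
if (tp99($durations) >= $alarmThresholdMs) {
    error_log('TP99 exceeds the configured alarm threshold');  // raise an alarm
}
```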
In addition, a stress test was performed on the technical solution of the embodiment of the invention. With a concurrency of 100 sustained for 5 minutes on a single container with a 4-core CPU and 8 GB of memory, TP99 was 159 ms and TPS reached 2172; with the prior art, it is often difficult for TPS to exceed 600. The concurrency is the number of connections accessing the server site at the same time. Stress testing is a test method for establishing system stability; it is usually carried out beyond the system's normal operating range to examine its functional limits and hidden risks.
Referring to fig. 6, fig. 6 is a schematic diagram of a main flow of a method of allocating tasks according to an embodiment of the present invention. The task scheduler may execute a method for allocating tasks, as shown in fig. 6, where the method for allocating tasks specifically includes:
S601, receiving a task, sent by a client, that uses a PHP request to process an HTTP request.
S602, distributing the task of the HTTP request to a working node, and monitoring the working node's task processing result.
In one embodiment of the invention, the method of assigning tasks further comprises: the task of the HTTP request is stored in a database.
Before the task of processing the HTTP request, the method further comprises the following steps:
at restart, the task of the HTTP request is loaded from the database.
In one embodiment of the present invention, receiving the task, sent by the client, of using a PHP request to process the HTTP request includes:
and receiving a request sent by the client in an asynchronous mode by adopting the PHP, and disconnecting the client after processing the task of the HTTP request.
Or the like, or, alternatively,
and receiving a request sent by the client in a synchronous mode by adopting the PHP, and maintaining the connection with the client after processing the task of the HTTP request so as to wait for the processing result of the task of the HTTP request.
In one embodiment of the present invention, the method further comprises: receiving, through the above connection, a query about the processing state of the task of the HTTP request.
FIG. 7 illustrates an exemplary system architecture 700 for a method of assigning tasks or a system of assigning tasks to which embodiments of the present invention may be applied.
As shown in fig. 7, the system architecture 700 may include terminal devices 701, 702, 703, a network 704, and a server 705. The network 704 serves to provide a medium for communication links between the terminal devices 701, 702, 703 and the server 705. Network 704 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use the terminal devices 701, 702, 703 to interact with a server 705 over a network 704, to receive or send messages or the like. The terminal devices 701, 702, 703 may have installed thereon various communication client applications, such as a shopping-like application, a web browser application, a search-like application, an instant messaging tool, a mailbox client, social platform software, etc. (by way of example only).
The terminal devices 701, 702, 703 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 705 may be a server providing various services, such as a background management server (for example only) providing support for shopping websites browsed by users using the terminal devices 701, 702, 703. The backend management server may analyze and perform other processing on the received data such as the product information query request, and feed back a processing result (for example, target push information, product information — just an example) to the terminal device.
It should be noted that the method for distributing tasks provided by the embodiment of the present invention is generally executed by the server 705, and accordingly, the system for distributing tasks is generally disposed in the server 705.
It should be understood that the number of terminal devices, networks, and servers in fig. 7 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 8, shown is a block diagram of a computer system 800 suitable for use with a terminal device implementing an embodiment of the present invention. The terminal device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 8, the computer system 800 includes a Central Processing Unit (CPU)801 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the system 800 are also stored. The CPU 801, ROM 802, and RAM 803 are connected to each other via a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card, a modem, or the like. The communication section 809 performs communication processing via a network such as the internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive 810 as necessary, so that a computer program read out therefrom is installed into the storage section 808 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. The computer program executes the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 801.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes a transmitting unit, an obtaining unit, a determining unit, and a first processing unit. The names of these units do not in some cases constitute a limitation to the unit itself, and for example, the sending unit may also be described as a "unit sending a picture acquisition request to a connected server".
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to comprise:
receiving a task which is sent by a client and processes a hypertext transfer protocol (HTTP) request by adopting a hypertext preprocessor (PHP) request;
and distributing the task of the HTTP request to a working node, and monitoring a task processing result of the working node.
According to the technical solution of the embodiments of the present invention, the client uses PHP to request that the task scheduler process the task of the HTTP request; the task scheduler distributes the task to a working node and monitors the working node's task processing result; and the working node processes the task. Because working nodes can process tasks quickly, task processing is accelerated and the time spent processing the HTTP request is reduced, so HTTP requests sent by PHP are less likely to time out.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A system for distributing tasks, the system comprising: a client, one or more task schedulers, and one or more working nodes;
the client is coupled to the task scheduler, and the task scheduler is coupled to the one or more working nodes;
the client is configured to use a hypertext preprocessor (PHP) to request that the task scheduler process a task of a hypertext transfer protocol (HTTP) request;
the task scheduler is configured to receive the task, distribute the task to a working node, and monitor the working node's task processing result;
and the working node is configured to process the task.
2. The system for distributing tasks according to claim 1, wherein the task scheduler is further configured to store the task of the HTTP request in a database;
and to load the task of the HTTP request from the database when the task scheduler is restarted.
3. The system for distributing tasks according to claim 1, wherein the client uses PHP to request that the task scheduler process the task in an asynchronous mode, and disconnects from the task scheduler after sending the task to it;
or,
the client uses PHP to request that the task scheduler process the task in a synchronous mode, and maintains the connection with the task scheduler after sending the task, so as to wait for the task's processing result.
4. The system for distributing tasks according to claim 3, wherein, in the case where the client uses PHP to request that the task scheduler process the task in a synchronous mode,
the client queries the task scheduler for the processing state of the task through the connection.
5. The system for distributing tasks according to claim 1, further comprising a monitoring node for monitoring the state of the task scheduler and the state of the working node, wherein
the state of the working node comprises the IP address of the working node, the IP address of the task scheduler connected to the working node, the name of the task being executed, and the number of sub-threads executing the task;
and the state of the task scheduler comprises the IP address of the task scheduler, the number of outstanding tasks, the number of working nodes currently executing tasks, and the number of available working nodes.
6. A method of distributing tasks, comprising:
receiving a task, sent by a client, that uses a hypertext preprocessor (PHP) request to process a hypertext transfer protocol (HTTP) request;
and distributing the task of the HTTP request to a working node, and monitoring the working node's task processing result.
7. The method of distributing tasks according to claim 6, further comprising:
storing the task of the HTTP request in a database;
and, before processing the task of the HTTP request,
loading the task of the HTTP request from the database upon restart.
8. The method of distributing tasks according to claim 6, wherein receiving the task, sent by the client, of using a PHP request to process the HTTP request comprises:
receiving a request sent by the client in an asynchronous mode using PHP, and disconnecting from the client after the task has been processed;
or,
receiving a request sent by the client in a synchronous mode using PHP, and maintaining the connection with the client after the task has been processed, so as to wait for the task's processing result.
9. An electronic device for distributing tasks, comprising:
one or more processors;
a storage device for storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 6-8.
10. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 6-8.
CN201910816194.5A 2019-08-30 2019-08-30 System, method, apparatus and computer readable medium for assigning tasks Pending CN112448977A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910816194.5A CN112448977A (en) 2019-08-30 2019-08-30 System, method, apparatus and computer readable medium for assigning tasks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910816194.5A CN112448977A (en) 2019-08-30 2019-08-30 System, method, apparatus and computer readable medium for assigning tasks

Publications (1)

Publication Number Publication Date
CN112448977A true CN112448977A (en) 2021-03-05

Family

ID=74734663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910816194.5A Pending CN112448977A (en) 2019-08-30 2019-08-30 System, method, apparatus and computer readable medium for assigning tasks

Country Status (1)

Country Link
CN (1) CN112448977A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102685173A (en) * 2011-04-14 2012-09-19 天脉聚源(北京)传媒科技有限公司 Asynchronous task distribution system and scheduling distribution computing unit
US20130198358A1 (en) * 2012-01-30 2013-08-01 DoDat Process Technology, LLC Distributive on-demand administrative tasking apparatuses, methods and systems
CN104253850A (en) * 2014-01-07 2014-12-31 深圳市华傲数据技术有限公司 Distributed task scheduling method and system
CN104539645A (en) * 2014-11-28 2015-04-22 百度在线网络技术(北京)有限公司 Method and equipment for processing http request
CN108733461A (en) * 2017-04-18 2018-11-02 北京京东尚科信息技术有限公司 Distributed task dispatching method and apparatus
CN109656706A (en) * 2018-12-25 2019-04-19 江苏满运软件科技有限公司 Distributed task dispatching method, system, equipment and medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113191607A (en) * 2021-04-20 2021-07-30 北京异乡旅行网络科技有限公司 Task supervision method, device and system
CN114696888A (en) * 2022-04-25 2022-07-01 北京航天驭星科技有限公司 Port task processing method, device, equipment and medium of satellite measurement, operation and control system

Similar Documents

Publication Publication Date Title
CN106919445B (en) Method and device for scheduling containers in cluster in parallel
CN108737270B (en) Resource management method and device for server cluster
CN108733461B (en) Distributed task scheduling method and device
CN109656690A (en) Scheduling system, method and storage medium
CN115004673B (en) Message pushing method, device, electronic equipment and computer readable medium
CN110413384B (en) Delay task processing method and device, storage medium and electronic equipment
CN109766172B (en) Asynchronous task scheduling method and device
US9323591B2 (en) Listening for externally initiated requests
WO2021159831A1 (en) Programming platform user code running method, platform, node, device and medium
CN110806928A (en) Job submitting method and system
CN111597033A (en) Task scheduling method and device
CN107066339A (en) Distributed job manager and distributed job management method
CN112052133A (en) Service system monitoring method and device based on Kubernetes
CN111181765A (en) Task processing method and device
CN112448977A (en) System, method, apparatus and computer readable medium for assigning tasks
CN105373563B (en) Database switching method and device
CN113821506A (en) Task execution method, device, system, server and medium for task system
CN112104679A (en) Method, apparatus, device and medium for processing hypertext transfer protocol request
CN111290842A (en) Task execution method and device
CN111835809B (en) Work order message distribution method, work order message distribution device, server and storage medium
CN112711522B (en) Cloud testing method and system based on docker and electronic equipment
CN108833147B (en) Configuration information updating method and device
CN113535371A (en) Method and device for multithreading asynchronous loading of resources
CN113743879A (en) Automatic rule processing method, system and related equipment
CN115361382B (en) Data processing method, device, equipment and storage medium based on data group

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210305