Disclosure of Invention
To overcome the problems in the related art, the present specification provides a current limiting method, apparatus and device for a distributed system.
According to a first aspect of embodiments of the present specification, there is provided a current limiting method for a distributed system, where the distributed system includes a plurality of nodes, the plurality of nodes including at least one master node and at least one child node, the method including:
after determining that current limiting is triggered, each node intercepting a pending request sent by a load balancer into a message processing queue;
the master node selecting a set number of pending requests from the message processing queue, and distributing the selected pending requests to at least one of the nodes;
and the child node, after receiving a pending request distributed by the master node, executing a request processing flow.
Optionally, the master node selects the request to be processed from the message processing queue according to a set period.
Optionally, the master node is obtained by election of all nodes through an election mechanism.
Optionally, the determining that current limiting is triggered includes: the number of pending requests reaching a set current limiting threshold, or a set current limiting instruction being received.
Optionally, the method further includes: and each node also backs up the request to be processed sent by the load balancer in a database.
Optionally, the method further includes: and updating the processing state of the request in the database aiming at the request to be processed after the request is processed.
Optionally, the method further includes: if the message processing queue runs abnormally, the master node selecting a pending request from the database.
Optionally, the master node selects a set number of pending requests from the message processing queue according to the priority.
According to a second aspect of the embodiments of the present specification, there is provided a current limiting method for a distributed system, the method being applied to any node in the distributed system, the method including:
after the current limit is determined to be triggered, intercepting a request to be processed distributed by a load balancer into a message processing queue;
determining whether the node is the master node or a child node;
if the node is the master node, selecting a set number of pending requests from the message processing queue, and distributing the selected pending requests to at least one node;
and if the node is a child node, executing a request processing flow after receiving a request distributed by the master node.
Optionally, the master node selects the request to be processed from the message processing queue according to a set period.
Optionally, the master node is obtained by election of all nodes through an election mechanism.
Optionally, the determining that current limiting is triggered includes: the number of pending requests reaching a set current limiting threshold, or a set current limiting instruction being received.
Optionally, the method further includes: and backing up the pending requests sent by the load balancer in a database.
Optionally, the method further includes: and updating the processing state of the request in the database aiming at the request to be processed after the request is processed.
Optionally, the method further includes: if the message processing queue runs abnormally, the master node selecting a pending request from the database.
Optionally, the master node selects a set number of pending requests from the message processing queue according to the priority.
According to a third aspect of the embodiments of the present specification, there is provided a current limiting apparatus for a distributed system, the apparatus being applied to any node in the distributed system, the apparatus including:
an interception module configured to: after determining that current limiting is triggered, intercept a pending request distributed by a load balancer into a message processing queue;
a determination module configured to: determine whether the node is the master node or a child node;
a distribution module configured to: if the node is the master node, select a set number of pending requests from the message processing queue and distribute the selected pending requests to at least one node;
a processing module configured to: if the node is a child node, execute a request processing flow after receiving a request distributed by the master node.
Optionally, the master node selects the request to be processed from the message processing queue according to a set period.
Optionally, the master node is obtained by election of all nodes through an election mechanism.
Optionally, the interception module is further configured to determine that current limiting is triggered when the number of pending requests reaches a set current limiting threshold, or when a set current limiting instruction is received.
Optionally, the apparatus further includes a backup module, configured to: and backing up the pending requests sent by the load balancer in a database.
Optionally, the apparatus further includes an updating module configured to: for a pending request that has been processed, update the processing state of the request in the database.
Optionally, the distribution module is further configured to: if the message processing queue runs abnormally, select a pending request from the database.
Optionally, the master node selects a set number of pending requests from the message processing queue according to the priority.
According to a fourth aspect of embodiments of the present specification, there is provided a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the aforementioned current limiting method when executing the program.
The technical scheme provided by the embodiment of the specification can have the following beneficial effects:
in the current limiting scheme of the distributed system in this embodiment of the present specification, when current limiting is required, each node intercepts the pending requests distributed by the load balancer into a message processing queue; after the master node selects a set number of pending requests from the message processing queue, it distributes the selected pending requests to at least one node for processing. This embodiment can control the traffic of the whole system from a global perspective and, while saving system resources, can ensure that the system continues to provide services normally under large-traffic impact, thereby achieving the purpose of protecting the system.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the specification.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the specification, as detailed in the appended claims.
The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present specification. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
With the rapid development of the Internet, the access volume of an Internet application can grow rapidly, and a business system may come under large-traffic impact at any time. If the business system cannot provide services normally under such impact, the whole system is likely to break down. Although the business system can be scaled out through distributed expansion, its traffic is not always large; large traffic may occur only during a promotion activity. If the system is expanded specifically to cope with such short-term traffic peaks, then over a long period of time a large part of the resources of the distributed system will actually be idle, which wastes system resources.
The embodiments of the present specification provide a current limiting scheme for this distributed system scenario, which can control the traffic of the entire system from a global perspective and, while saving system resources, can ensure that the system continues to provide services normally under large-traffic impact, thereby achieving the purpose of protecting the system.
Fig. 1 is a schematic view of a service scenario shown in this specification according to an exemplary embodiment, where fig. 1 includes: a plurality of request initiators, a distributed system comprising a plurality of nodes, and a load balancer; the load balancer can bear the to-be-processed requests initiated by the request initiators and distribute the to-be-processed requests to the nodes in the distributed system.
When large traffic is encountered, the number of requests distributed to each node by the load balancer is large and may exceed what a node can tolerate. The scheme of this embodiment proposes a solution for limiting a distributed system, as shown in fig. 2A, which is a flow chart of a current limiting method of a distributed system according to an exemplary embodiment of this specification, where the distributed system includes a plurality of nodes, the plurality of nodes including at least one master node and at least one child node, and the method includes:
in step 202, after each of the nodes determines that current limiting is triggered, the nodes intercept a pending request sent by the load balancer into a message processing queue.
In step 204, after selecting a set number of to-be-processed requests from the message processing queue, the master node distributes the selected to-be-processed requests to at least one of the nodes.
In step 206, after receiving the to-be-processed request distributed by the master node, the child node executes a request processing procedure.
Optionally, in practical applications, the trigger time or condition for current limiting may be flexibly configured as needed. For example, the start time of a promotion activity may be used as the trigger time for current limiting, a current limiting instruction may be initiated manually, current limiting may be triggered automatically according to traffic size, current limiting may be performed when an abnormality occurs in the distributed system, and so on. Specifically, one way of determining that current limiting is triggered is that the number of pending requests reaches a set current limiting threshold, where the threshold may be set flexibly as needed. Based on this, this embodiment may monitor the current total traffic of the distributed system and, if it exceeds the set current limiting threshold, dynamically trigger current limiting according to the traffic within a time period configured in the background. Optionally, another way of determining that current limiting is triggered is receiving a set current limiting instruction; the switch for dynamic current limiting may also be triggered manually through background configuration, so as to achieve the purpose of protecting the distributed system.
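As an illustrative sketch only (the function and parameter names are hypothetical, not part of the claimed embodiments), the two trigger conditions described above can be expressed as:

```python
def current_limiting_triggered(pending_count, threshold, manual_instruction=False):
    """Return True when current limiting should start.

    Triggered either automatically (the number of pending requests
    reaches the set current limiting threshold) or manually (a set
    current limiting instruction was received, e.g. via background
    configuration).
    """
    return manual_instruction or pending_count >= threshold

# Automatic trigger: traffic reaches the configured threshold.
assert current_limiting_triggered(pending_count=1000, threshold=1000)
# Manual trigger: an operator flips the switch in the background.
assert current_limiting_triggered(pending_count=10, threshold=1000, manual_instruction=True)
# Normal traffic, no instruction: no limiting.
assert not current_limiting_triggered(pending_count=10, threshold=1000)
```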
In this embodiment, the traffic size in the distributed system may be measured by counting the number of pending requests. As shown in fig. 2B, the distributed system can collect the traffic in the current system, for example, by globally counting once per request, which can be implemented by, for example, the Incr command of Redis. Redis is an open-source, log-based key-value database written in ANSI C that supports networking, can be memory-based or persistent, and provides APIs in multiple languages; its Incr command increments the numeric value stored at a key by one. Specifically, each node in the system connects to the same Redis (which may be a standalone instance or a cluster) and counts once for each pending request (by calling Incr), so that Redis records the overall traffic of the distributed system. It can be understood that, in practical applications, other counting modes may be flexibly selected as needed, and this embodiment does not limit this.
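A minimal sketch of this global counting follows, using an in-memory class as a stand-in for the shared Redis counter (in a real deployment, every node would issue INCR against the same Redis instance or cluster; the class and key names here are illustrative):

```python
class GlobalCounter:
    """In-memory stand-in for a shared Redis counter."""

    def __init__(self):
        self._counts = {}

    def incr(self, key):
        # Mirrors Redis INCR: increment the value stored at key by one
        # (a missing key is treated as 0) and return the new value.
        self._counts[key] = self._counts.get(key, 0) + 1
        return self._counts[key]

counter = GlobalCounter()          # shared by all nodes in this sketch
for _ in range(3):                 # three pending requests arrive
    total = counter.incr("system:traffic")
assert total == 3                  # the counter records the overall traffic
```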
In addition, a current limiting threshold (optionally, a maximum number of requests per unit time) can be configured in the background of the distributed system, and when the traffic of the system reaches this threshold, the current limiting switch can be turned on automatically to determine that current limiting processing should be executed; in other examples, the current limiting switch may be triggered manually in the background, after which the distributed system may begin current limiting processing.
In this embodiment, the nodes in the distributed system may receive pending requests sent by the load balancer. If the traffic of the distributed system is normal and current limiting is not started, each node may directly process a pending request after receiving it from the load balancer. When current limiting is required, each node in this embodiment does not directly process a pending request sent by the load balancer, but intercepts it into a message processing queue (optionally implemented with the Redis priority queue zset); after the master node fetches requests from the message processing queue, it allocates a set number of requests to each node for processing. By using the master node to limit the number of pending requests, this embodiment can prevent large traffic from impacting each node.
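The interception into a priority queue can be sketched as follows, with Python's heapq standing in for the Redis sorted set zset (request names and priority values are illustrative):

```python
import heapq

# Stand-in for the Redis zset: (priority, request) pairs, lowest score first,
# mirroring a sorted set ordered by score.
message_queue = []

def intercept(request, priority):
    """Instead of processing the request directly, push it into the
    message processing queue with its configured priority."""
    heapq.heappush(message_queue, (priority, request))

intercept("order-2", priority=5)
intercept("order-1", priority=1)
assert message_queue[0][1] == "order-1"  # highest-priority request is first
```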
Optionally, to prevent problems caused by an abnormal message processing queue, the pending request may also be backed up to the database at the same time as fallback data, so that in the case of an abnormal message processing queue, the pending request can still be acquired from the database.
In this embodiment, a Master node may be determined from the nodes of the distributed system and used for subsequent message distribution. Optionally, there are many ways to determine the master node. In some examples, the nodes in the distributed system may select one of them as the master node according to a set rule; optionally, the selection may be random, may be based on the software and hardware resources of the nodes, and so on, or the master node may be configured by a technician. This embodiment also provides an implementation based on an election mechanism, in which the master node is obtained by all nodes electing through the election mechanism, each node participating in electing the Master node.
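One common election mechanism can be sketched as "smallest registered id wins", which mirrors the ZooKeeper-style ephemeral-sequential election recipe mentioned later in this description (the node ids below are illustrative, and a real cluster would coordinate through a service such as ZooKeeper rather than a local set):

```python
def elect_master(node_ids):
    """All registered nodes participate; the node with the smallest id
    becomes the Master. If the Master disappears, re-running the
    election promotes the next-smallest surviving node."""
    return min(node_ids)

nodes = {"node-0003", "node-0001", "node-0002"}
assert elect_master(nodes) == "node-0001"

nodes.discard("node-0001")                  # the Master node fails
assert elect_master(nodes) == "node-0002"   # a new Master is elected
```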
In this embodiment, nodes other than the master node are referred to as child nodes; based on this, each node in the distributed system can determine whether it is the master node or a child node. The master node may select a set number of pending requests from the message processing queue and distribute them to at least one node; optionally, the pending requests may be distributed based on load balancing, and this embodiment does not limit whether the distribution includes the master node itself, or whether the pending requests are distributed to all or only part of the child nodes. Optionally, the set number may be determined based on the traffic that the distributed system can bear, which is not limited in this embodiment. The priority of each pending request is preconfigured in the message processing queue; for example, requests may be sorted according to the priority of certain products, or services may be sorted according to other dimensions as required, and so on.
Optionally, because a node needs a certain amount of time to process the distributed pending requests, the master node may select pending requests from the message processing queue according to a set period, that is, once every certain interval. As an example, with a set period of 10 seconds, the master node selects pending requests once every 10 seconds and distributes them, and those 10 seconds are used by each node to process its requests. The set period may be flexibly configured as needed, for example, determined based on the processing capability of the nodes, which is not limited in this embodiment. As one implementation, a timing task may be configured on the Master node: after the trigger time point of the timing task is reached, pending requests are selected from the message processing queue according to priority, and the specific number selected may be determined by the current limiting number configured in the background. Finally, the master node distributes the selected pending requests to the nodes in the distributed system, and after a node receives the request information, it can process the request according to the normal flow.
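The periodic selection step can be sketched as follows (in-memory heap; a production version would pop from the Redis zset each time the timing task fires, and all names are illustrative):

```python
import heapq

# Pending requests waiting in the queue: (priority, request), lowest first.
queue = [(1, "req-a"), (3, "req-c"), (2, "req-b"), (4, "req-d")]
heapq.heapify(queue)

def select_batch(queue, set_number):
    """Pop up to set_number pending requests in priority order, as the
    Master node does once per set period (e.g. every 10 seconds)."""
    batch = []
    while queue and len(batch) < set_number:
        _, request = heapq.heappop(queue)
        batch.append(request)
    return batch

assert select_batch(queue, 2) == ["req-a", "req-b"]  # first period
assert select_batch(queue, 2) == ["req-c", "req-d"]  # next period
```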
Optionally, each node also backs up the pending requests sent by the load balancer in the database, and if the message processing queue runs abnormally, the master node may instead select pending requests from the database. After a node finishes processing a pending request, the processing state of the request in the database can be updated; the processing state may be processed or unprocessed, so that when the master node selects pending requests from the database, it can select them according to their processing state.
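The database backup and processing-state tracking can be sketched with sqlite3 (the table and column names are purely illustrative; the embodiments do not prescribe a particular database):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE requests (order_no TEXT PRIMARY KEY, state TEXT)")

def backup(order_no):
    # Each node backs up the intercepted request as fallback data.
    db.execute("INSERT INTO requests VALUES (?, 'unprocessed')", (order_no,))

def mark_processed(order_no):
    # After a node finishes a request, its processing state is updated.
    db.execute("UPDATE requests SET state='processed' WHERE order_no=?", (order_no,))

backup("ord-1"); backup("ord-2")
mark_processed("ord-1")

# If the message queue fails, the master selects by processing state:
rows = db.execute("SELECT order_no FROM requests WHERE state='unprocessed'").fetchall()
assert rows == [("ord-2",)]
```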
Next, referring to fig. 2C, each node in the distributed system may be configured with a global interceptor, which is responsible for intercepting pending requests. The interceptor first judges whether the current limiting switch is turned on; the switch may be turned on under various conditions, for example, the traffic of the system reaching a background-configured threshold that triggers current limiting by default, or a trigger current limiting policy being configured manually in the background. If the switch is not turned on, the request is processed according to normal processing logic. If the current limiting switch is turned on, the pending request is stored in the database, the request information is then written into the Redis priority queue Zset, and the request's priority information is read from the priority configuration set in the background. Optionally, information about the processing of the pending request may also be returned to the request initiator. Optionally, a globally unique distributed order number may be generated for each pending request, so that the request initiator can query the processing progress, and after the pending request has been processed, a message can be sent notifying the request initiator that the request with that order number has been processed. For a pending request whose processing is complete, the processing state of the request in the database is updated.
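Putting the interceptor's decision flow together (every name here is hypothetical, and the storage calls are passed in as callables so the sketch stays self-contained):

```python
def global_interceptor(request, limiting_on, process, backup, enqueue, priority_of):
    """Sketch of the per-node global interceptor: when the current
    limiting switch is off, process the request normally; when it is on,
    persist the request as fallback data and push it into the priority
    queue for the Master node to pick up later."""
    if not limiting_on:
        return process(request)                  # normal processing logic
    backup(request)                              # store to the database
    enqueue(request, priority_of(request))       # write into the priority queue
    return "accepted"                            # caller can poll by order number

queue, store = [], []
result = global_interceptor(
    "req-1", limiting_on=True,
    process=lambda r: "processed",
    backup=store.append,
    enqueue=lambda r, p: queue.append((p, r)),
    priority_of=lambda r: 1,
)
assert result == "accepted" and store == ["req-1"] and queue == [(1, "req-1")]
```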
As shown in fig. 2D, each node in the distributed system is configured with a timing task triggered at a certain time period (e.g., every so many minutes or seconds, configurable according to service requirements). According to the timing task, each node judges at the set period whether it is the Master node. If it is not the Master node, no processing is performed; if it is the Master node, request information is fetched from the Redis priority queue, the number selected being the current limiting number configured in the background (e.g., the number of requests allowed per minute or per second). If fetching the request information from Redis fails, pending requests are selected from the database instead.
The master node can distribute the selected pending requests to the nodes in the cluster for processing. Each node can be configured with a request processing flow: when a node receives a distributed pending request, the request processing flow is triggered to start the service logic processing; after processing is finished, the processing state of the data associated with the request's order number can be updated, and a message can be sent to the request initiator indicating that processing of the request is complete.
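The distribution step can be sketched as simple round-robin assignment over the cluster nodes (one possible load balancing strategy; the embodiments do not mandate a specific one, and the names below are illustrative):

```python
import itertools

def distribute(requests, nodes):
    """Assign the selected pending requests to cluster nodes in
    round-robin fashion, approximating load-balanced distribution
    by the Master node."""
    assignment = {node: [] for node in nodes}
    for request, node in zip(requests, itertools.cycle(nodes)):
        assignment[node].append(request)
    return assignment

out = distribute(["r1", "r2", "r3"], ["node-a", "node-b"])
assert out == {"node-a": ["r1", "r3"], "node-b": ["r2"]}
```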
Different from the traditional single-machine, single-instance current limiting scheme, this embodiment starts from the perspective of current limiting for an entire distributed system: current limiting detection is performed in real time, both automatic and manual triggering are supported, and the specific current limiting number can be configured dynamically through the background. In addition to persisting the request information, selection from the priority queue is supported, with the database serving as fallback data, so that the stability of the request data is guaranteed.
For the request initiator, the current limiting policy is transparent and imperceptible: both before and after current limiting, the request initiator can actively query the processing state of a request through its order number, and the client is informed of the result once processing is complete. Regarding the load condition of the distributed system, the Master node supports election (e.g., the Master election mechanism of Zookeeper), so even if the Master node fails, another node can become the Master node through the election mechanism; the distribution of tasks is also load balanced, with tasks distributed to the other nodes in the cluster through load balancing.
In this embodiment, starting from the perspective of current limiting for an entire distributed system, current limiting detection is performed in real time, the current limiting policy is turned on dynamically, and pending requests can be consumed and processed in a load-balanced manner by the nodes in the system.
The current limiting scheme of the embodiment is completely transparent for the request initiator, the client is unaware before and after the current limiting strategy is started, no burden is imposed on the request initiator, and no additional processing is required.
In this embodiment, on one hand, the current limit number of the system can be dynamically expanded through configuration of the background system, and on the other hand, the number of nodes for processing requests in the system can also be dynamically increased, and only distributed expansion of the application is needed.
In this embodiment, on one hand, after the Master node in the system fails, a new Master node can be obtained by electing among the remaining nodes; on the other hand, if some child nodes fail, the impact on the whole system is not particularly large, and the traffic of the whole system can still be distributed to the other child nodes in a load-balanced manner.
Fig. 3 is a flow chart illustrating another current limiting method for a distributed system according to an exemplary embodiment, which may be applied to any node in the distributed system, and the method includes:
in step 302, after it is determined that current limiting is triggered, intercepting a to-be-processed request distributed by a load balancer into a message processing queue;
in step 304, determining whether the node is the master node or a child node;
in step 306, if the node is the master node, selecting a set number of pending requests from the message processing queue, and distributing the selected pending requests to at least one node;
in step 308, if the node is a child node, executing the request processing flow after receiving a request distributed by the master node.
Optionally, the master node selects the request to be processed from the message processing queue according to a set period.
Optionally, the master node is obtained by election of all nodes through an election mechanism.
Optionally, the determining that current limiting is triggered includes: the number of pending requests reaching a set current limiting threshold, or a set current limiting instruction being received.
Optionally, the method further includes: and backing up the pending requests sent by the load balancer in a database.
Optionally, the method further includes: and updating the processing state of the request in the database aiming at the request to be processed after the request is processed.
Optionally, the method further includes: if the message processing queue runs abnormally, the master node selecting a pending request from the database.
Optionally, the master node selects a set number of pending requests from the message processing queue according to the priority.
The current limiting scheme of this embodiment is described from the perspective of each node in the distributed system, and the specific processing flow may refer to the description of the embodiments shown in fig. 2A to fig. 2D, which is not described herein again.
Corresponding to the embodiment of the current limiting method of the distributed system, the specification also provides an embodiment of a current limiting device of the distributed system and a terminal applied by the current limiting device.
The current limiting apparatus of a distributed system in this specification can be applied to computer equipment, such as a server or a terminal device. The apparatus embodiments may be implemented by software, by hardware, or by a combination of hardware and software. Taking a software implementation as an example, as a logical apparatus, it is formed by the processor of the computer device in which it is located reading the corresponding computer program instructions from nonvolatile memory into memory and running them. In terms of hardware, fig. 4 is a hardware structure diagram of the computer device in which a current limiting apparatus of a distributed system of this specification is located; in addition to the processor 410, memory 430, network interface 420, and nonvolatile memory 440 shown in fig. 4, the server or electronic device in which the apparatus 431 is located may also include other hardware according to the actual functions of the computer device, which is not described again.
As shown in fig. 5, fig. 5 is a block diagram of an apparatus shown in this specification according to an exemplary embodiment, the apparatus comprising:
an interception module 51 configured to: after determining that current limiting is triggered, intercept a pending request distributed by a load balancer into a message processing queue;
a determination module 52 configured to: determine whether the node is the master node or a child node;
a distribution module 53 configured to: if the node is the master node, select a set number of pending requests from the message processing queue and distribute the selected pending requests to at least one node;
a processing module 54 configured to: if the node is a child node, execute a request processing flow after receiving a request distributed by the master node.
Optionally, the master node selects the request to be processed from the message processing queue according to a set period.
Optionally, the master node is obtained by election of all nodes through an election mechanism.
Optionally, the interception module is further configured to determine that current limiting is triggered when the number of pending requests reaches a set current limiting threshold, or when a set current limiting instruction is received.
Optionally, the apparatus further includes a backup module, configured to: and backing up the pending requests sent by the load balancer in a database.
Optionally, the apparatus further includes an updating module configured to: for a pending request that has been processed, update the processing state of the request in the database.
Optionally, the distribution module is further configured to: if the message processing queue runs abnormally, select a pending request from the database.
Optionally, the master node selects a set number of pending requests from the message processing queue according to the priority.
Accordingly, the present specification also provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the current limiting method of the aforementioned distributed system when executing the program.
The implementation process of the function and the action of each module in the current limiting device of the distributed system is specifically described in detail in the implementation process of the corresponding step in the current limiting method of the distributed system, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in the specification. One of ordinary skill in the art can understand and implement it without inventive effort.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Other embodiments of the present description will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It will be understood that the present description is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.