
CN113596149B - Flow control method, device, equipment and storage medium - Google Patents

Flow control method, device, equipment and storage medium

Info

Publication number
CN113596149B
CN113596149B (application CN202110858404.4A)
Authority
CN
China
Prior art keywords
service
flow
service server
service request
load balancing
Prior art date
Legal status
Active
Application number
CN202110858404.4A
Other languages
Chinese (zh)
Other versions
CN113596149A (en)
Inventor
蒋小波
蒋宁
曾琳铖曦
吴海英
黄浩
Current Assignee
Mashang Consumer Finance Co Ltd
Original Assignee
Mashang Consumer Finance Co Ltd
Priority date
Filing date
Publication date
Application filed by Mashang Consumer Finance Co Ltd
Priority to CN202110858404.4A
Publication of CN113596149A
Application granted
Publication of CN113596149B
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 Server selection for load balancing
    • H04L 67/1008 Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)

Abstract

The application provides a flow control method, apparatus, device, and storage medium. The flow control method includes the following steps: after receiving a service request, the load balancing device determines a target flow corresponding to the service request, determines, according to the sent service request flows respectively corresponding to a plurality of service servers, the service server with the smallest corresponding sent service request flow as the target service server, and sends the service request to the target service server. The application can accurately control the flow of the service servers, thereby achieving load balancing of the service servers more accurately.

Description

Flow control method, device, equipment and storage medium
Technical Field
The present application relates to the field of cloud platforms, and in particular, to a flow control method, apparatus, device, and storage medium.
Background
The container orchestration engine Kubernetes (k8s) is an open-source system for managing containerized applications across multiple hosts in a cloud platform. The goal of k8s is to make deploying containerized applications simple and efficient, and it provides mechanisms for application deployment, planning, updating, and maintenance.
Currently, in a k8s environment, load balancing is generally implemented by allocating the forwarding frequency in proportion to the weight value of each service server. This manner cannot accurately achieve load balancing of the service servers.
Disclosure of Invention
The application provides a flow control method, a flow control apparatus, a flow control device, and a storage medium, which are used to accurately control the flow of service servers and thereby achieve load balancing of the service servers more accurately.
In a first aspect, the present application provides a flow control method applied to a service system, where the service system includes a load balancing device and a plurality of service servers, the flow control method includes:
After receiving the service request, the load balancing equipment determines a target flow corresponding to the service request;
the load balancing equipment determines that the service server with the minimum corresponding transmitted service request flow is a target service server according to the transmitted service request flow respectively corresponding to the plurality of service servers, and the transmitted service request flow is the total flow corresponding to the service request transmitted to the service server in a preset period;
the load balancing device sends a service request to the target service server.
Optionally, the load balancing device determines, according to the sent service request flows respectively corresponding to the plurality of service servers, the service server with the smallest corresponding sent service request flow as the target service server, including: if a preset state change event of the service system is monitored, the load balancing equipment acquires the transmitted service request flow corresponding to a plurality of service servers respectively, and the load balancing equipment determines the service server with the minimum corresponding transmitted service request flow as a target service server according to the transmitted service request flow corresponding to the plurality of service servers respectively, wherein the preset state change event at least comprises the creation of the service server or the restarting of the service server; or if the transmitted service request flow of one service server in the plurality of service servers changes, the load balancing device acquires the transmitted service request flow corresponding to the plurality of service servers respectively, and the load balancing device determines the service server with the minimum corresponding transmitted service request flow as the target service server according to the transmitted service request flow corresponding to the plurality of service servers respectively.
Optionally, if it is monitored that the service system has a preset state change event, the load balancing device obtains the sent service request flows corresponding to the plurality of service servers respectively, including: if the service system is monitored to generate a preset state change event, the load balancing equipment compares the service server information obtained according to the preset interface with the service server information operated in the memory; if the service server is the newly added service server, the load balancing equipment acquires the transmitted service request flow corresponding to the newly added service server by initializing preset parameters; if the service server exists, the load balancing device imports the operation parameters in the corresponding memory to the existing service server to acquire the sent service request flow corresponding to the existing service server.
Optionally, the preset parameters include a flow ratio, and if the service server is a newly added service server, the load balancing device obtains the sent service request flow corresponding to the newly added service server by initializing the preset parameters, including: if the value of the flow ratio corresponding to the newly added service server is a first preset value, the sent service request flow corresponding to the newly added service server is the maximum value of the sent service request flows of all the service servers; or if the value of the flow ratio corresponding to the newly added service server is a second preset value, the sent service request flow corresponding to the newly added service server is the sum of the sent service request flows corresponding to all the service servers.
Optionally, the load balancing device determines, according to the sent service request flows respectively corresponding to the plurality of service servers, the service server with the smallest corresponding sent service request flow as the target service server, including: if the sent service request flows respectively corresponding to the plurality of service servers are all the third preset values, the load balancing equipment determines that one service server randomly selected from the plurality of service servers is a target service server; or if the sent service request flows respectively corresponding to the plurality of service servers are not all the third preset value, the load balancing equipment determines that the service server with the minimum corresponding sent service request flow is the target service server.
Optionally, after determining, in the plurality of service servers, that the service server with the smallest corresponding transmitted service request flow is the target service server, the load balancing device further includes: the load balancing device stores the identification of the target service server into a forwarding queue; the load balancing device sends a service request to a target service server, including: the load balancing device obtains the identification of the target service server from the forwarding queue, and sends a service request to the target service server according to the identification of the target service server.
Optionally, the flow control method further includes: if a service server is configured with a flow control rule, the load balancing device determines the sent service request flow of the corresponding service server according to the flow control rule.
Optionally, the flow control rule is a slow-start service server flow control rule, and if the service server configures the flow control rule, the load balancing device determines a sent service request flow of the corresponding service server according to the flow control rule, including: if the service server is configured with a slow-start service server flow control rule, the load balancing device controls the service server to realize slow start according to the slow-start service server flow control rule and determines the sent service request flow of the corresponding service server.
Optionally, the flow control method further includes: if the service server is configured with a slow-start service server flow control rule and the current flow of the service server represents a speed-limiting flow, determining that the sent service request flow of the service server is the current flow; or if the service server is configured with a slow-start service server flow control rule and the value of the flow proportion represents the speed-limiting flow, determining that the sent service request flow of the service server is the sum of the sent service request flows corresponding to the service servers which do not carry out the speed-limiting flow, or determining that the sent service request flow of the service server is the sum of the sent service request flows corresponding to all the service servers.
In a second aspect, the present application provides a flow control apparatus applied to a service system, the service system including a load balancing device and a plurality of service servers, the flow control apparatus comprising:
The first determining module is used for determining the target flow corresponding to the service request after receiving the service request;
The second determining module is used for determining that the service server with the minimum corresponding transmitted service request flow is a target service server according to the transmitted service request flow respectively corresponding to the plurality of service servers, and the transmitted service request flow is the total flow corresponding to the service request transmitted to the service server in a preset period;
And the processing module is used for sending the service request to the target service server.
Optionally, the second determining module is specifically configured to: if a preset state change event of the service system is monitored, obtain the sent service request flows respectively corresponding to the plurality of service servers, and determine, according to the sent service request flows respectively corresponding to the plurality of service servers, the service server with the minimum corresponding sent service request flow as the target service server, wherein the preset state change event at least comprises creating a service server or restarting a service server; or if the sent service request flow of one service server in the plurality of service servers changes, obtain the sent service request flows respectively corresponding to the plurality of service servers, and determine, according to the sent service request flows respectively corresponding to the plurality of service servers, the service server with the minimum corresponding sent service request flow as the target service server.
Optionally, the second determining module is configured to, when monitoring that a preset state change event occurs in the service system, obtain sent service request flows corresponding to the plurality of service servers respectively, specifically configured to: if the service system is monitored to generate a preset state change event, comparing the service server information obtained according to a preset interface with the service server information operated in the memory; if the service request flow is the newly added service server, the sent service request flow corresponding to the newly added service server is obtained by initializing preset parameters; if the service server is the existing service server, the operation parameters in the corresponding memory are imported to the existing service server, and the sent service request flow corresponding to the existing service server is obtained.
Optionally, the preset parameters include a flow ratio, and when the second determining module is configured to, if the service server is a newly added service server, obtain the sent service request flow corresponding to the newly added service server by initializing the preset parameters, it is specifically configured to: if the value of the flow ratio corresponding to the newly added service server is a first preset value, the sent service request flow corresponding to the newly added service server is the maximum value of the sent service request flows of all the service servers; or if the value of the flow ratio corresponding to the newly added service server is a second preset value, the sent service request flow corresponding to the newly added service server is the sum of the sent service request flows corresponding to all the service servers.
Optionally, the second determining module is configured to, when determining, according to the sent service request flows corresponding to the plurality of service servers, that the service server with the smallest corresponding sent service request flow is the target service server, specifically: if the sent service request flows respectively corresponding to the plurality of service servers are all the third preset values, determining that one service server selected randomly from the plurality of service servers is a target service server; or if the traffic of the sent service requests respectively corresponding to the plurality of service servers is not all the third preset value, determining the service server with the minimum traffic of the corresponding sent service requests as the target service server.
Optionally, after determining, in the plurality of service servers, that the service server with the smallest corresponding transmitted service request flow is the target service server, the second determining module is further configured to: storing the identification of the target service server to a forwarding queue; the processing module is specifically used for: and acquiring the identification of the target service server from the forwarding queue, and sending a service request to the target service server according to the identification of the target service server.
Optionally, the second determining module is further configured to: if a service server is configured with a flow control rule, determine the sent service request flow of the corresponding service server according to the flow control rule.
Optionally, the flow control rule is a slow-start service server flow control rule, and the second determining module is configured to, when determining, according to the flow control rule, a sent service request flow of the corresponding service server if the service server configures the flow control rule, specifically to: if the service server is configured with the slow-start service server flow control rule, the service server is controlled to realize slow start according to the slow-start service server flow control rule, and the sent service request flow of the corresponding service server is determined.
Optionally, the second determining module is further configured to: if the service server is configured with a slow-start service server flow control rule and the current flow of the service server represents a speed-limiting flow, determining that the sent service request flow of the service server is the current flow; or if the service server is configured with a slow-start service server flow control rule and the value of the flow proportion represents the speed-limiting flow, determining that the sent service request flow of the service server is the sum of the sent service request flows corresponding to the service servers which do not carry out the speed-limiting flow, or determining that the sent service request flow of the service server is the sum of the sent service request flows corresponding to all the service servers.
In a third aspect, the present application provides an electronic device comprising: a memory and a processor;
the memory is used for storing program instructions;
the processor is configured to invoke program instructions in the memory to perform the flow control method according to the first aspect of the application.
In a fourth aspect, the present application provides a computer readable storage medium having stored therein computer program instructions which, when executed, implement a flow control method according to the first aspect of the present application.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements a flow control method according to the first aspect of the present application.
According to the flow control method, the flow control device, the flow control equipment and the storage medium, after the load balancing equipment receives the service requests, the target flow corresponding to the service requests is determined, the load balancing equipment determines the service server with the minimum corresponding transmitted service request flow as the target service server according to the transmitted service request flows respectively corresponding to the service servers, and the load balancing equipment transmits the service requests to the target service server. The load balancing device determines the service server with the minimum corresponding transmitted service request flow as the target service server according to the transmitted service request flow corresponding to each service server, and transmits the service request to the target service server, so that the flow control can be accurately performed on the service server, and further the load balancing of the service server can be more accurately realized.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings required for the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show some embodiments of the present application, and that a person of ordinary skill in the art may obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
FIG. 2 is a flow chart of a flow control method according to an embodiment of the present application;
FIG. 3 is a flow chart of a flow control method according to another embodiment of the present application;
FIG. 4 is a schematic diagram of a flow control device according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a flow control device according to another embodiment of the present application;
FIG. 6 is a schematic diagram of a flow control system according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
First, some technical terms related to the present application will be explained:
k8s: a container cluster management system and open-source platform that provides functions such as automatic deployment, automatic scaling up or down, and maintenance of container clusters.
Application (pod) instance, simply referred to as a pod: the smallest and simplest basic unit that k8s creates or deploys. A pod represents a process running on a k8s cluster, and each pod carries Internet Protocol (IP) address information.
Load balancer (HAProxy): free and open-source software written in the C language that provides high availability, load balancing, and application proxying based on the Transmission Control Protocol (TCP) and the Hypertext Transfer Protocol (HTTP).
Flow size (flow_size): represents the total flow size corresponding to a pod.
Flow ratio (flow_ratio): indicates the proportion of flow allowed into a pod. The default value is 100%. If 0 < flow_ratio < 100%, it indicates rate-limited flow; if flow_ratio is 0, it indicates a closed-flow state, i.e., the pod receives no flow.
Slow start time (slow_start_time): indicates the slow-start countdown, in seconds.
Flow recovery policy (flow_recovery_policy): indicates whether, when the slow-start countdown reaches 0 (meaning the slow-start time has ended), the flow of the controlled pod is restored to 100%, i.e., to a normal flow node; in other words, it is the adjustment mode of flow_ratio. A flow_recovery_policy of 0 indicates that flow_ratio is adjusted manually, and a value of 1 indicates that flow_ratio is adjusted automatically (i.e., flow_ratio is automatically restored to 100%).
Slow start status (slow_start_status): indicates whether slow start is on; 0 indicates on and 1 indicates off (off by default). When slow start is off, the parameters slow_start_time and flow_recovery_policy do not take effect.
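Taken together, these per-pod parameters form a small flow-control configuration. The following sketch shows one possible in-memory representation of such a configuration (the dataclass, field types, and defaults are illustrative assumptions, not part of the patent):

```python
from dataclasses import dataclass

@dataclass
class PodFlowConfig:
    """Per-pod flow-control parameters as described above (illustrative)."""
    flow_ratio: float = 1.0        # 100% by default; 0 < ratio < 1 means rate-limited, 0 means flow closed
    slow_start_time: int = 0       # slow-start countdown, in seconds
    flow_recovery_policy: int = 0  # 0: flow_ratio adjusted manually, 1: restored to 100% automatically
    slow_start_status: int = 1     # 0: slow start on, 1: slow start off (off by default)
    flow_size: int = 0             # sent service request flow recorded for this pod, in bytes

# Example: a pod warming up with 20% of normal flow for 30 seconds, then restored automatically
warming_pod = PodFlowConfig(flow_ratio=0.2, slow_start_time=30,
                            flow_recovery_policy=1, slow_start_status=0)
```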
In multi-instance, multi-application business service environments, a load balancer serves as the unified entrance to business services and is an indispensable component in both physical and container architecture environments. Widely used load balancers include HAProxy and Nginx.
Currently, in a k8s environment, load balancing is generally implemented by allocating the forwarding frequency in proportion to the weight values of the service servers. However, in this way of implementing load balancing, the weight values of the service servers are preconfigured, that is, the proportion of the number of requests allocated to each service server is fixed and cannot be adjusted automatically according to the flow size of each service request, so flow-based load balancing cannot be achieved. For example, if the weight values of service server 1 and service server 2 are both configured as 5 in Nginx, Nginx forwards service requests to service server 1 and service server 2 in a polling manner; but because the flow of each service request differs, the flow of the service requests processed by service server 1 and the flow of the service requests processed by service server 2 differ, whereas the core purpose of load balancing is to balance the flow. Therefore, this approach cannot accurately achieve load balancing of the service servers. Moreover, in the k8s environment, when slow-start flow preheating is applied, different flow proportions cannot be directed to a designated service server in different time periods. Illustratively, the Nginx polling algorithm provides a slow-start (slow_start) parameter for slow-start flow preheating of an application; the Nginx module implementing the slow_start parameter is ngx_http_upstream_module, and the specific Nginx configuration corresponding to the slow_start parameter is as follows:
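A minimal sketch of the kind of upstream block being described, reconstructed from the explanation below (the server addresses are placeholders, and the slow_start parameter of ngx_http_upstream_module is available only in some Nginx builds):

```nginx
upstream backend {
    server 192.0.2.11:8080 weight=5 slow_start=30s;  # weight restored from 0 to 5 over 30 s
    server 192.0.2.12:8080 weight=5;
}
```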
where slow_start=30s indicates that the weight is restored from 0 to 5 within 30 seconds.
This Nginx configuration for the slow_start parameter performs slow-start flow preheating of an application based on weight, and therefore cannot direct different flow proportions to a designated service server in different time periods.
Based on the above problems, the present application provides a flow control method, apparatus, device and storage medium, which implement load balancing of service servers according to the sent service request flows corresponding to a plurality of service servers, so that load balancing of service servers can be implemented more accurately.
In the following, first, an application scenario of the solution provided by the present application is illustrated.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application. As shown in fig. 1, in the present application scenario, the client 110 sends a service request to the load balancing device 120, and the load balancing device 120 determines a target service server from the plurality of service servers 130, and sends the service request to the target service server. The specific implementation process of the load balancing device 120 in determining the target service server from the plurality of service servers 130 may be referred to in the following schemes of embodiments.
It should be noted that fig. 1 is only a schematic diagram of an application scenario provided by an embodiment of the present application, and the embodiment of the present application does not limit the devices included in fig. 1 or limit the positional relationship between the devices in fig. 1.
Next, a flow control method is described by way of specific examples. In the following embodiments of the application, a service server is taken as a pod as an example for explanation.
Fig. 2 is a flowchart of a flow control method according to an embodiment of the present application, which is applied to a service system, where the service system includes a load balancing device and a plurality of service servers. As shown in fig. 2, the method of the embodiment of the present application includes:
S201, after receiving the service request, the load balancing device determines a target flow corresponding to the service request.
In an embodiment of the present application, the load balancing device may be a load balancer, which may operate in a separate server, for example. The service request may be input by the user to the load balancing device performing the method embodiment, or sent by another device to the load balancing device performing the method embodiment. For example, if the service request is an HTTP request, the load balancing device may determine, after receiving the HTTP request, a target traffic corresponding to the service request according to a body (body) data size corresponding to an HTTP protocol, where the target traffic is, for example, 1M.
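As a rough illustration of this step, the target flow of an HTTP request could be derived from its body size as follows (the function and its fallback behavior are assumptions for illustration; the patent only states that the body data size determines the target flow):

```python
def target_flow_bytes(headers: dict, body: bytes) -> int:
    """Estimate the target flow of one HTTP service request from its body size."""
    content_length = headers.get("Content-Length")
    if content_length is not None:       # prefer the declared body size when present
        return int(content_length)
    return len(body)                     # otherwise fall back to the actual body length

# A request carrying a 1M body yields a target flow of 1,048,576 bytes.
print(target_flow_bytes({"Content-Length": str(1024 * 1024)}, b""))
```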
S202, the load balancing device determines the service server with the minimum corresponding sent service request flow as a target service server according to the sent service request flow respectively corresponding to the plurality of service servers.
The sent service request flow is the total flow corresponding to the service requests sent to a service server in a preset period. The plurality of service servers are all the service servers in the service system; each service server has a corresponding sent service request flow.
Illustratively, the service server is a pod, and the preset period is, for example, the period from when the pod started running to the current time. The load balancing device may determine, according to the sent service request flows corresponding to the plurality of pods, the pod with the smallest corresponding sent service request flow as the target pod. For how the load balancing device determines the pod with the smallest corresponding sent service request flow as the target pod, reference may be made to the following embodiments. For example, if there are three pods, namely pod1, pod2, and pod3, and the load balancing device determines that the sent service request flow corresponding to pod1 is 10M, the sent service request flow corresponding to pod2 is 8M, and the sent service request flow corresponding to pod3 is 9M, then the load balancing device may determine that pod2 is the target pod.
S203, the load balancing device sends a service request to the target service server.
In the step, the load balancing device sends a service request to the target service server after determining the target service server. Optionally, the load balancing device may update the sent service request traffic corresponding to the target service server according to the target traffic. For example, if the target service server is pod2 and the target traffic corresponding to the service request is 1M, the load balancing device sends the service request to pod2, and updates the traffic of the sent service request corresponding to the target service server according to the target traffic 1M.
After the load balancing device completes the step S203, the steps S201 to S203 are repeatedly executed for the received new service request, thereby implementing traffic-based load balancing.
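A compact sketch of S201 to S203 taken together (the dictionary of per-pod sent flows and the byte values are illustrative assumptions; the random tie-break follows the rule described later for equally loaded pods):

```python
import random

# sent service request flow per pod, in bytes (illustrative running state)
sent_flow = {"pod1": 10 * 2**20, "pod2": 8 * 2**20, "pod3": 9 * 2**20}

def dispatch(request_flow: int) -> str:
    """S202/S203: pick the pod with the smallest sent flow, forward, then update its counter."""
    smallest = min(sent_flow.values())
    candidates = [pod for pod, flow in sent_flow.items() if flow == smallest]
    target = random.choice(candidates)   # random choice among equally loaded pods
    sent_flow[target] += request_flow    # account for the request just sent (S203)
    return target

print(dispatch(1 * 2**20))  # the 1M request goes to pod2, whose sent flow becomes 9M
```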
According to the flow control method provided by the embodiment of the application, after the load balancing equipment receives the service request, the target flow corresponding to the service request is determined, the load balancing equipment determines the service server with the minimum flow corresponding to the transmitted service request as the target service server from a plurality of service servers, and the load balancing equipment transmits the service request to the target service server. According to the load balancing device, according to the transmitted service request flow corresponding to each service server, the service server with the minimum corresponding transmitted service request flow is determined to be the target service server, and the service request is transmitted to the target service server, so that the flow control can be accurately performed on the service server, and further the load balancing of the service server can be more accurately realized.
On the basis of the above embodiment, optionally, if the traffic server configures a flow control rule, the load balancing device determines, according to the flow control rule, a sent traffic request flow of the corresponding traffic server.
The service server is illustratively a pod, each pod corresponds to a configuration file, and parameters configured in the configuration file may include: flow_ratio, slow_start_time, flow_recovery_policy, slow_start_status. The parameter settings in the configuration files corresponding to the pod respectively can be the same or different, i.e. the parameters can be configured as required. It will be appreciated that the parameter settings in the configuration file may correspond to different flow control rules. If the flow control rule is configured in the pod, the load balancing device determines the sent service request flow of the corresponding pod according to the flow control rule.
Further, the flow control rule is a slow-start service server flow control rule, and if the service server configures the flow control rule, the load balancing device determines, according to the flow control rule, a sent service request flow of the corresponding service server, and may include: if the service server is configured with a slow-start service server flow control rule, the load balancing device controls the service server to realize slow start according to the slow-start service server flow control rule and determines the sent service request flow of the corresponding service server.
Optionally, if the service server configures a slow-start service server flow control rule and the current flow of the service server indicates a speed-limiting flow, determining that the sent service request flow of the service server is the current flow; or if the service server is configured with a slow-start service server flow control rule and the value of the flow proportion represents the speed-limiting flow, determining that the sent service request flow of the service server is the sum of the sent service request flows corresponding to the service servers which do not carry out the speed-limiting flow, or determining that the sent service request flow of the service server is the sum of the sent service request flows corresponding to all the service servers. Specifically, under the condition that the value of the flow ratio is one hundred percent, the sent service request flow of the service server is the sum of the sent service request flows corresponding to all other service servers which do not perform speed limiting flow; when the value of the traffic proportion is greater than zero and less than one hundred percent, the sent service request traffic of the service servers is the sum of the sent service request traffic corresponding to all the service servers.
For example, if slow_start_status is configured to be 0 in the configuration file corresponding to a pod, this indicates that slow-start flow preheating control is enabled for the application (i.e., the flow control rule is a slow-start service server flow control rule, indicating rate-limited flow); the pod is controlled to implement slow start according to the parameter settings in its configuration file, and the sent service request flow (i.e., flow_size) of the pod is determined. Specifically, flow_size_{0<flow_ratio<100%} denotes the flow_size corresponding to a pod under the rate-limiting condition; Sum_{flow_ratio=100%} denotes, under the rate-limiting condition, the sum of the sent service request flows corresponding to the other pods that are not rate-limited; and tmp_Sum_{0<flow_ratio<100%} denotes the sum of the sent service request flows corresponding to all pods under the rate-limiting condition. If the parameters in the configuration file corresponding to the pod are set as follows: slow_start_status=0, slow_start_time>0, flow_recovery_policy=1, 0<flow_ratio<100%, then the sent service request flow flow_size_{0<flow_ratio<100%} of the pod can be determined in the following three ways:
(1) If tmp_Sum_{0<flow_ratio<100%} is equal to flow_size_{0<flow_ratio<100%}, the value of flow_size_{0<flow_ratio<100%} is determined to be the current sent service request flow of the pod, i.e., the flow is kept unchanged;
(2) If tmp_Sum_{0<flow_ratio<100%} is greater than flow_size_{0<flow_ratio<100%}, the values of flow_size_{0<flow_ratio<100%} and tmp_Sum_{0<flow_ratio<100%} are both set to the minimum value among the sent service request flows of all current pods;
(3) If tmp_Sum_{0<flow_ratio<100%} is smaller than flow_size_{0<flow_ratio<100%}, both flow_size_{0<flow_ratio<100%} and tmp_Sum_{0<flow_ratio<100%} are set to Sum_{flow_ratio=100%}.
When the slow_start_time countdown reaches 0, flow_ratio is automatically set to 100%; at this time, the flow_size corresponding to the pod takes the maximum value among the sent service request flows of all current pods, and slow_start_status is set to 1, indicating that the slow-start flow preheating control has finished.
If the parameters in the configuration file corresponding to the pod are set as follows: slow_start_status=0, slow_start_time=0, flow_recovery_policy=0, 0<flow_ratio<100%, then the sent service request flow flow_size_{0<flow_ratio<100%} of the pod can be determined in the following three ways:
(1) If tmp_Sum_{0<flow_ratio<100%} is equal to flow_size_{0<flow_ratio<100%}, the value of flow_size_{0<flow_ratio<100%} is determined to be the current sent service request flow of the pod, i.e., the flow is kept unchanged;
(2) If tmp_Sum_{0<flow_ratio<100%} is greater than flow_size_{0<flow_ratio<100%}, the values of flow_size_{0<flow_ratio<100%} and tmp_Sum_{0<flow_ratio<100%} are both set to the minimum value among the sent service request flows of all current pods;
(3) If tmp_Sum_{0<flow_ratio<100%} is smaller than flow_size_{0<flow_ratio<100%}, both flow_size_{0<flow_ratio<100%} and tmp_Sum_{0<flow_ratio<100%} are set to Sum_{flow_ratio=100%}.
If the parameters in the configuration file corresponding to the pod are set as follows: slow_start_status=1 (i.e., slow start is disabled for the application, in which case neither slow_start_time nor flow_recovery_policy takes effect), 0<flow_ratio<100%, then the sent service request flow flow_size_{0<flow_ratio<100%} of the pod can be determined in the following three ways:
(1) If tmp_Sum_{0<flow_ratio<100%} is equal to flow_size_{0<flow_ratio<100%}, the value of flow_size_{0<flow_ratio<100%} is determined to be the current sent service request flow of the pod, i.e., the flow is kept unchanged;
(2) If tmp_Sum_{0<flow_ratio<100%} is greater than flow_size_{0<flow_ratio<100%}, the values of flow_size_{0<flow_ratio<100%} and tmp_Sum_{0<flow_ratio<100%} are both set to the minimum value among the sent service request flows of all current pods;
(3) If tmp_Sum_{0<flow_ratio<100%} is smaller than flow_size_{0<flow_ratio<100%}, both flow_size_{0<flow_ratio<100%} and tmp_Sum_{0<flow_ratio<100%} are set to Sum_{flow_ratio=100%}.
Through parameter setting (i.e., flow control rule) in the configuration file corresponding to the service server, the load balancing device can determine the sent service request flow of the corresponding service server, so that the load balancing device can determine the service server with the minimum corresponding sent service request flow as the target service server among a plurality of service servers. Therefore, different flow ratios can be controlled to the designated service server in different time periods, and the flow preheating control of slow start application is realized.
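Restating the three-way rule above as code may make it easier to follow. In this sketch the variable names mirror the notation above; the greater-than condition in the second branch is inferred from the other two cases rather than stated explicitly in the description:

```python
def update_rate_limited_flow(flow_size: int, tmp_sum: int,
                             sum_non_limited: int, all_flows: list) -> tuple:
    """Return the updated (flow_size, tmp_Sum) pair for a rate-limited pod."""
    if tmp_sum == flow_size:          # case (1): keep the flow unchanged
        return flow_size, tmp_sum
    if tmp_sum > flow_size:           # case (2): condition inferred by elimination
        smallest = min(all_flows)     # minimum sent flow among all current pods
        return smallest, smallest
    return sum_non_limited, sum_non_limited   # case (3): tmp_Sum < flow_size
```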
Fig. 3 is a flowchart of a flow control method according to another embodiment of the present application. Based on the above embodiments, the embodiments of the present application further describe how load balancing is implemented. As shown in fig. 3, the method of the embodiment of the present application may include:
S301, after receiving a service request, the load balancing device determines a target flow corresponding to the service request.
A detailed description of this step may be referred to the related description of S201 in the embodiment shown in fig. 2, and will not be repeated here.
In the embodiment of the present application, the step S202 in fig. 2 may be further refined into two steps S302 and S303 as follows:
S302, if a preset state change event of the service system is monitored, the load balancing equipment acquires the sent service request flows corresponding to the service servers respectively, and the load balancing equipment determines the service server with the minimum corresponding sent service request flow as a target service server according to the sent service request flows corresponding to the service servers respectively.
The preset state change event at least comprises creating a service server or restarting a service server. Illustratively, the service server is a pod. In the k8s environment, the load balancing device monitors whether the value of the default parameter resourceVersion of the k8s orchestration file changes; if the value of resourceVersion changes, which means that a new pod may have been created or a pod may have been restarted, the load balancing device acquires the sent service request flows respectively corresponding to the plurality of pods. After the load balancing device obtains the sent service request flows respectively corresponding to the plurality of service servers, it can determine, according to those flows, the service server with the smallest corresponding sent service request flow as the target service server.
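One way such monitoring could be done is with the official Kubernetes Python client, sketched below; the patent does not prescribe this mechanism, and the namespace is a placeholder:

```python
from kubernetes import client, config, watch

def watch_pod_changes(namespace: str = "default") -> None:
    """Watch pod events; created or restarted pods show up with new resourceVersion values."""
    config.load_kube_config()            # or config.load_incluster_config() inside the cluster
    v1 = client.CoreV1Api()
    for event in watch.Watch().stream(v1.list_namespaced_pod, namespace=namespace):
        pod = event["object"]
        print(event["type"], pod.metadata.name, pod.metadata.resource_version)
        # On ADDED or MODIFIED events, refresh the per-pod sent service request flows here.
```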
Further, if it is monitored that the service system generates a preset state change event, the load balancing device obtains the sent service request flows corresponding to the plurality of service servers respectively, which may include: if the service system is monitored to generate a preset state change event, the load balancing equipment compares the service server information obtained according to the preset interface with the service server information operated in the memory; if the service server is the newly added service server, the load balancing equipment acquires the transmitted service request flow corresponding to the newly added service server by initializing preset parameters; if the service server exists, the load balancing device imports the operation parameters in the corresponding memory to the existing service server to acquire the sent service request flow corresponding to the existing service server.
Optionally, if the preset parameter includes a traffic proportion, the load balancing device obtains the traffic of the sent service request corresponding to the newly added service server by initializing the preset parameter, and may further include: if the value of the flow ratio corresponding to the newly added service server is a first preset value, the sent service request flow corresponding to the newly added service server is the maximum value of the sent service request flows of all the service servers; or if the value of the flow ratio corresponding to the newly added service server is a second preset value, the sent service request flow corresponding to the newly added service server is the sum of the sent service request flows corresponding to all the service servers. Specifically, the first preset value is, for example, 100%; the second preset value is, for example, a value between 0 and 100%, excluding 0 and 100%, and only the service server corresponding to the traffic proportion in the current service system is in the start state.
The service server is a pod, and the preset parameters are parameters configured in the configuration file corresponding to the pod; specifically, the preset parameters may include: flow_ratio, slow_start_time, flow_recovery_policy, slow_start_status. A preset interface is, for example, an Application Programming Interface (API) provided by k8s. When the load balancing device monitors that a preset state change event occurs in the service system, it compares the IP information respectively corresponding to the plurality of pods obtained through the preset interface with the IP information respectively corresponding to the plurality of pods running in memory; if it determines that a pod is a newly added pod, the load balancing device obtains the sent service request flow corresponding to the newly added pod by initializing the preset parameters. Specifically, the sent service request flow corresponding to the newly added pod can be determined in the following two ways:
(1) If the parameters in the configuration file corresponding to the newly added pod are set such that flow_ratio is 100%, then its flow_size takes the maximum value among the sent service request flows of all current pods;
(2) If the parameters in the configuration file corresponding to the newly added pod are set such that 0 < flow_ratio < 100%, then the initial value of flow_size_{0<flow_ratio<100%} is Sum_{flow_ratio=100%}, and the initial values of tmp_Sum_{0<flow_ratio<100%} and flow_size_{0<flow_ratio<100%} are recorded into memory. Sum_All denotes the sum of the sent service request flows corresponding to all pods; if Sum_All is 0, the value of flow_size_{0<flow_ratio<100%} is 0, and if Sum_All is greater than 0, the value of flow_size_{0<flow_ratio<100%} is Sum_{flow_ratio=100%}.
If the current pod is determined to be the existing pod, the load balancing device copies the operation parameter value (i.e. flow_size) in the corresponding memory to the existing pod, and obtains the sent service request flow corresponding to the existing pod, so as to maintain the operation state of the existing pod and ensure the accuracy of flow calculation.
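Read together with the two cases above and the handling of existing pods, the refresh of per-pod flows after a state change event might look like the following sketch (the dictionary shapes, ratio encoding, and variable names are assumptions):

```python
def refresh_pod_flows(api_pods: dict, state: dict) -> None:
    """Refresh per-pod sent service request flows after a preset state change event.

    api_pods: {pod_ip: flow_ratio} as obtained through the preset interface (illustrative shape)
    state:    {pod_ip: flow_size} already running in the load balancer's memory
    """
    sum_all = sum(state.values())                      # Sum_All over all pods
    sum_non_limited = sum(size for ip, size in state.items()
                          if api_pods.get(ip) == 1.0)  # Sum_{flow_ratio=100%}
    for ip, ratio in api_pods.items():
        if ip in state:
            continue                                   # existing pod: reuse its in-memory flow_size
        if ratio == 1.0:                               # newly added pod with flow_ratio = 100%
            state[ip] = max(state.values(), default=0)
        elif 0 < ratio < 1.0:                          # newly added pod with rate-limited flow
            state[ip] = sum_non_limited if sum_all > 0 else 0
```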
Optionally, if it is determined that a service server obtained through the preset interface is not already running in memory, the load balancing device deletes the data in memory corresponding to that service server.
Illustratively, if it is determined that a pod obtained through the preset interface is not already running in memory, the data in memory corresponding to that pod is deleted according to the IP of that pod.
Optionally, the load balancing device determines, according to the sent service request flows respectively corresponding to the plurality of service servers, the service server with the smallest corresponding sent service request flow as the target service server, and may include: if the sent service request flows respectively corresponding to the plurality of service servers are all the third preset values, the load balancing equipment determines that one service server randomly selected from the plurality of service servers is a target service server; or if the sent service request flows respectively corresponding to the plurality of service servers are not all the third preset value, the load balancing equipment determines that the service server with the minimum corresponding sent service request flow is the target service server.
The third preset value is, for example, 0. If the sent service request flows respectively corresponding to all pods acquired by the load balancing device are all 0, one pod randomly selected from all the pods is determined to be the target pod. If the sent service request flows of the acquired pods are not all 0, the load balancing device determines the pod with the smallest corresponding sent service request flow to be the target pod. Optionally, if multiple pods share the same minimum sent service request flow, one of them is randomly selected as the target pod.
S303, if the transmitted service request flow of one service server in the plurality of service servers changes, the load balancing device acquires the transmitted service request flows respectively corresponding to the plurality of service servers, and the load balancing device determines the service server with the minimum corresponding transmitted service request flow as the target service server according to the transmitted service request flows respectively corresponding to the plurality of service servers.
In an exemplary embodiment, after sending a service request to a pod, the load balancing device updates the sent service request traffic corresponding to the pod, that is, the sent service request traffic corresponding to the pod changes, and the load balancing device obtains the sent service request traffic corresponding to each pod from the memory according to the change of the sent service request traffic corresponding to the pod. After the load balancing device obtains the sent service request flows corresponding to the multiple pod respectively, the load balancing device can determine, according to the sent service request flows corresponding to the multiple pod respectively, the pod with the smallest corresponding sent service request flow as the target pod.
It should be noted that the present application does not limit the execution sequence of S302 and S303.
S304, the load balancing device stores the identification of the target service server into a forwarding queue.
In this step, a queue is a data structure, namely a special linear table that only allows deletion operations at the front of the table and insertion operations at the rear; the queue is, for example, a sequential queue or a linked-list queue. The forwarding queue is a queue for storing the identifier of the target service server. Illustratively, upon determining the target pod, the load balancing device stores the IP of the target pod in the forwarding queue.
Optionally, if the flow_ratio value corresponding to a service server is the second preset value, for example 0, the identifier of the service server is stored in a return queue (the return queue is a queue for storing the identifiers of service servers whose flow_ratio value is the second preset value); accordingly, the sent service request flow of that service server will not change.
S305, the load balancing device obtains the identification of the target service server from the forwarding queue, and sends the service request to the target service server according to the identification of the target service server.
Illustratively, the load balancing device obtains the IP of the target pod from the forwarding queue, and sends a service request to the target pod according to the IP of the target pod.
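For instance, the forwarding queue could be a simple FIFO of pod IPs (a sketch; beyond being a sequential or linked-list queue, the data structure is not prescribed, and the IP below is a placeholder):

```python
from collections import deque

forwarding_queue = deque()

forwarding_queue.append("10.244.1.17")   # S304: enqueue the target pod's IP at the rear
target_ip = forwarding_queue.popleft()   # S305: dequeue from the front and forward the request
print(target_ip)
```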
After step S305, the load balancing device may update the sent service request traffic corresponding to the target service server according to the target traffic. Further, optionally, updating the sent service request traffic corresponding to the target service server according to the target traffic may include: the load balancing device updates the sent service request flow corresponding to the target service server to the sum of the target flow and the sent service request flow of the target service server before forwarding the service request.
For example, if the target traffic is 1M and the traffic of the sent service request of the target pod before forwarding the service request is 9M, the load balancing device updates the traffic of the sent service request corresponding to the target pod to 10M.
Further, optionally, if the parameters in the configuration file corresponding to the target service server are set as follows: slow_start_status=1 (in which case slow_start_time and flow_recovery_policy do not take effect) and flow_ratio=100%, the load balancing device updates the sent service request flow corresponding to the target service server to the sum of the target flow and the sent service request flow of the target service server before the service request was forwarded.
After the load balancing device completes the step S305, the steps S301 to S305 are repeatedly executed for the received new service request, so as to implement load balancing based on traffic.
According to the flow control method provided by this embodiment of the application, after the load balancing device receives a service request, the target flow corresponding to the service request is determined. If a preset state change event occurs in the service system, or if the sent service request flow of one of the plurality of service servers changes, the load balancing device acquires the sent service request flows respectively corresponding to the plurality of service servers, which ensures that these flows are obtained in a timely and accurate manner. According to the sent service request flows respectively corresponding to the plurality of service servers, the service server with the smallest corresponding sent service request flow is determined to be the target service server; the load balancing device stores the identifier of the target service server in the forwarding queue, obtains the identifier of the target service server from the forwarding queue, and sends the service request to the target service server according to that identifier, which ensures that the service request is accurately sent to the target service server. Because the load balancing device of this embodiment determines the service server with the smallest corresponding sent service request flow as the target service server according to the sent service request flow corresponding to each service server and sends the service request to that server, it can accurately control the flow of the service servers and thereby achieve load balancing of the service servers more accurately.
The following are examples of the apparatus of the present application that may be used to perform the method embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method of the present application.
Fig. 4 is a schematic structural diagram of a flow control device according to an embodiment of the present application, which is applied to a service system, where the service system includes a load balancing device and a plurality of service servers. As shown in fig. 4, a flow control device 400 according to an embodiment of the present application includes: a first determination module 401, a second determination module 402, and a processing module 403. Wherein:
The first determining module 401 is configured to determine, after receiving the service request, a target flow corresponding to the service request.
The second determining module 402 is configured to determine, according to the sent service request flows respectively corresponding to the plurality of service servers, that the service server with the smallest corresponding sent service request flow is the target service server, and that the sent service request flow is the total flow corresponding to the service request sent to the service server in the preset period.
A processing module 403, configured to send a service request to a target service server.
Optionally, the second determining module 402 may be specifically configured to: if a preset state change event of the service system is monitored, obtain the sent service request flows respectively corresponding to the plurality of service servers, and determine, according to the sent service request flows respectively corresponding to the plurality of service servers, the service server with the smallest corresponding sent service request flow as the target service server, where the preset state change event at least comprises creating a service server or restarting a service server; or, if the sent service request flow of one of the plurality of service servers changes, obtain the sent service request flows respectively corresponding to the plurality of service servers, and determine, according to the sent service request flows respectively corresponding to the plurality of service servers, the service server with the smallest corresponding sent service request flow as the target service server.
Optionally, when the second determining module 402 is configured to obtain the sent service request flows respectively corresponding to the plurality of service servers if a preset state change event of the service system is monitored, it may be specifically configured to: if a preset state change event of the service system is monitored, compare the service server information obtained through a preset interface with the service server information running in the memory; if a service server is a newly added service server, obtain the sent service request flow corresponding to the newly added service server by initializing preset parameters; if a service server is an existing service server, import the corresponding operation parameters in the memory for the existing service server and obtain the sent service request flow corresponding to the existing service server.
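For illustration only, the reconciliation performed on a preset state change event could take roughly the following form in Python; the data structures are assumptions, and initializing a new server's counter to zero is a placeholder for the preset-parameter initialization described next.

    # Illustrative sketch only: compare the server list reported by the preset
    # interface with the servers running in memory, then initialize or import.
    def refresh_servers(interface_servers, memory_servers, sent_flow):
        for sid in interface_servers:
            if sid in memory_servers:
                # existing server: import the operation parameters kept in memory
                sent_flow[sid] = memory_servers[sid].get("sent_flow", 0)
            else:
                # newly added server: initialize from preset parameters (placeholder)
                sent_flow[sid] = 0
        for sid in [s for s in sent_flow if s not in interface_servers]:
            del sent_flow[sid]   # drop servers that no longer exist
        return sent_flow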
Optionally, the preset parameters include a flow ratio, and when the second determining module 402 is configured to obtain, if a service server is a newly added service server, the sent service request flow corresponding to the newly added service server by initializing the preset parameters, it may be specifically configured to: if the value of the flow ratio corresponding to the newly added service server is a first preset value, the sent service request flow corresponding to the newly added service server is the maximum value of the sent service request flows of all the service servers; or, if the value of the flow ratio corresponding to the newly added service server is a second preset value, the sent service request flow corresponding to the newly added service server is the sum of the sent service request flows corresponding to all the service servers.
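For illustration only, the two initialization cases above could be sketched as follows in Python; the concrete numbers chosen for the first and second preset values are assumptions, since the text only states which aggregate each of them maps to.

    # Illustrative sketch only: initialize a newly added server's sent flow
    # from its configured flow ratio.
    def init_new_server_flow(flow_ratio, sent_flow, first_preset=1.0, second_preset=0.0):
        values = list(sent_flow.values()) or [0]
        if flow_ratio == first_preset:
            return max(values)   # start level with the busiest existing server
        if flow_ratio == second_preset:
            return sum(values)   # counter so large the new server is rarely chosen
        return 0                 # fallback for other ratios (assumption)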
In some embodiments, when the second determining module 402 is configured to determine, according to the sent service request flows respectively corresponding to the plurality of service servers, the service server with the smallest corresponding sent service request flow as the target service server, it may be specifically configured to: if the sent service request flows respectively corresponding to the plurality of service servers are all equal to a third preset value, determine a service server randomly selected from the plurality of service servers as the target service server; or, if the sent service request flows respectively corresponding to the plurality of service servers are not all equal to the third preset value, determine the service server with the smallest corresponding sent service request flow as the target service server.
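For illustration only, this selection rule can be sketched as follows in Python; treating the third preset value as 0 is an assumption made for the example.

    # Illustrative sketch only: random choice when all counters equal the third
    # preset value, otherwise the server with the smallest counter.
    import random

    def pick_target(sent_flow, third_preset=0):
        if all(v == third_preset for v in sent_flow.values()):
            return random.choice(list(sent_flow))
        return min(sent_flow, key=sent_flow.get)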
Optionally, after determining, from the plurality of service servers, the service server with the smallest corresponding sent service request flow as the target service server, the second determining module 402 may be further configured to: store the identification of the target service server into a forwarding queue; the processing module 403 may be specifically configured to: obtain the identification of the target service server from the forwarding queue, and send the service request to the target service server according to the identification of the target service server.
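For illustration only, the decoupling of selection and forwarding through the forwarding queue might look like this in Python; the send callable is a stand-in for the actual transmission to the service server and is an assumption.

    # Illustrative sketch only: the scheduler pushes the target identification,
    # and the forwarder pops it before sending the service request.
    from queue import Queue

    forwarding_queue = Queue()

    def enqueue_target(server_id):
        forwarding_queue.put(server_id)      # done right after target selection

    def forward_request(request, send):
        server_id = forwarding_queue.get()   # identification of the target server
        send(server_id, request)             # hand the request to that server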
Optionally, the second determining module 402 may be further configured to: if a service server is configured with a flow control rule, determine the sent service request flow of the corresponding service server according to the flow control rule.
Optionally, the flow control rule is a slow-start service server flow control rule, and when the second determining module 402 is configured to determine, if a service server is configured with the flow control rule, the sent service request flow of the corresponding service server according to the flow control rule, it may be specifically configured to: if a service server is configured with the slow-start service server flow control rule, control the service server to realize slow start according to the slow-start service server flow control rule, and determine the sent service request flow of the corresponding service server.
Optionally, the second determining module 402 is further configured to: if a service server is configured with a slow-start service server flow control rule and the current flow of the service server represents a speed-limiting flow, determine the sent service request flow of the service server as that current flow; or, if a service server is configured with a slow-start service server flow control rule and the value of the flow ratio represents the speed-limiting flow, determine the sent service request flow of the service server as the sum of the sent service request flows corresponding to the service servers that are not speed-limited, or as the sum of the sent service request flows corresponding to all the service servers.
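For illustration only, the two slow-start cases described above could be sketched as follows in Python; the parameter names and the flag distinguishing the two cases are assumptions made for the example.

    # Illustrative sketch only: report the sent flow of a server that is under a
    # slow-start flow control rule.
    def sent_flow_under_slow_start(server, sent_flow, rate_limited,
                                   limit_by_current_flow, current_flow=0):
        if limit_by_current_flow:
            # the rate limit is expressed as an absolute current flow
            return current_flow
        # the rate limit is expressed through the flow ratio: report a large value
        # (sum over the servers that are not speed-limited) so the scheduler
        # seldom selects this server while it warms up
        return sum(v for s, v in sent_flow.items() if s not in rate_limited)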
The device of the present embodiment may be used to execute the technical solution of any of the above-described method embodiments, and its implementation principle and technical effects are similar, and are not described herein again.
Fig. 5 is a schematic structural diagram of a flow control device according to another embodiment of the present application. As shown in fig. 5, a flow control device 500 according to an embodiment of the present application may include: a computation module 501, a scheduling module 502 and a forwarding module 503. Wherein:
A calculation module 501, configured to obtain the latest pods running in the memory and determine the sent service request flow corresponding to each pod; if a pod is a newly added pod, the parameters of the new pod are initialized; if a pod is an existing pod, the operation parameters are imported into the existing pod; if a pod no longer exists, the information corresponding to the non-existing pod in the calculation module 501 is deleted and the data corresponding to the non-existing pod in the forwarding queue and the return queue is deleted; the determined latest pods and the sent service request flow data corresponding to each pod are then imported into the scheduling module 502.
A scheduling module 502, configured to determine a target pod stored in the forwarding queue according to the sent service request traffic corresponding to each pod; if the flow_ratio value corresponding to the pod is 0, the pod is stored in the return queue, and the sent service request flow corresponding to the pod is directly returned to the calculation module 501.
A forwarding module 503, configured to obtain the target pod from the forwarding queue and send the service request to the target pod; according to the service request, the target flow corresponding to the service request is determined and sent to the calculation module 501, so as to update the sent service request flow corresponding to the target pod.
It can be appreciated that the functions of the calculation module and the scheduling module in the embodiment of the present application are similar to those of the second determination module in the above embodiment; the forwarding module in the embodiment of the present application has functions similar to those of the first determining module and the processing module in the above-described embodiment.
Based on the flow control device shown in fig. 5, fig. 6 is a schematic diagram of a flow control system according to an embodiment of the present application. As shown in fig. 6, in the flow control system 600, the computing module 501 obtains the latest pods 601 running in the memory and determines the sent service request flow corresponding to each pod 601; if a pod 601 is a newly added pod, its parameters are initialized; if a pod 601 is an existing pod, the running parameters are imported into the existing pod 601; if a pod 601 no longer exists, the information corresponding to the non-existing pod 601 in the calculation module 501 is deleted and the data corresponding to the non-existing pod 601 in the forwarding queue 602 and the return queue 603 are deleted; the determined latest pods 601 and the sent service request flow data corresponding to each pod 601 are then imported into the scheduling module 502. The scheduling module 502 determines the target pod 601 to be stored in the forwarding queue 602 according to the sent service request flow corresponding to each pod 601; if the flow_ratio value corresponding to a pod 601 is 0, the pod 601 is stored in the return queue 603, and the sent service request flow corresponding to that pod 601 is directly returned to the calculation module 501. The forwarding module 503 obtains the target pod 601 from the forwarding queue 602 and sends the service request to the target pod 601; according to the service request, the target flow corresponding to the service request is determined and sent to the calculation module 501, so as to update the sent service request flow corresponding to the target pod 601.
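For illustration only, the cooperation of the calculation, scheduling and forwarding modules around the forwarding queue and the return queue could be sketched end to end as follows in Python; the class layout and all names other than flow_ratio are assumptions made for the example.

    # Illustrative sketch only: a pod-based pipeline in the spirit of Fig. 6.
    from collections import deque

    class FlowControlSystem:
        def __init__(self):
            self.sent_flow = {}             # pod id -> accumulated request flow
            self.flow_ratio = {}            # pod id -> configured flow_ratio
            self.forwarding_queue = deque() # pods chosen to receive the next request
            self.return_queue = deque()     # pods skipped because flow_ratio == 0

        def compute(self, running_pods):
            # calculation module: reconcile counters with the pods actually running
            for pod in running_pods:
                self.sent_flow.setdefault(pod, 0)
                self.flow_ratio.setdefault(pod, 1.0)
            for pod in [p for p in self.sent_flow if p not in running_pods]:
                del self.sent_flow[pod]         # the pod no longer exists
                self.flow_ratio.pop(pod, None)

        def schedule(self):
            # scheduling module: pick the pod with the smallest counter,
            # bypassing pods whose flow_ratio is 0 (they go to the return queue)
            eligible = {p: f for p, f in self.sent_flow.items()
                        if self.flow_ratio.get(p, 1.0) > 0}
            self.return_queue.extend(p for p in self.sent_flow if p not in eligible)
            if eligible:
                self.forwarding_queue.append(min(eligible, key=eligible.get))

        def forward(self, request_flow):
            # forwarding module: pop the target pod and charge it the request's flow
            if not self.forwarding_queue:
                return None
            target = self.forwarding_queue.popleft()
            self.sent_flow[target] += request_flow
            return target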
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application. By way of example, an electronic device such as the load balancing device disclosed in the present application may be provided as a server or a computer. Referring to fig. 7, the electronic device 700 includes a processing component 701, which further includes one or more processors, and memory resources represented by a memory 702 for storing instructions, such as application programs, executable by the processing component 701. The application program stored in the memory 702 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 701 is configured to execute the instructions to perform any of the method embodiments described above.
The electronic device 700 may also include a power supply component 703 configured to perform power management of the electronic device 700, a wired or wireless network interface 704 configured to connect the electronic device 700 to a network, and an input/output (I/O) interface 705. The electronic device 700 may operate based on an operating system stored in the memory 702, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The application also provides a computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions, and when a processor executes the computer-executable instructions, the above flow control method is implemented.
The application also provides a computer program product comprising a computer program which, when executed by a processor, implements aspects of the flow control method as described above.
The computer readable storage medium described above may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. A readable storage medium can be any available medium that can be accessed by a general purpose or special purpose computer.
An exemplary readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. In the alternative, the readable storage medium may be integral to the processor. The processor and the readable storage medium may reside in an application-specific integrated circuit (ASIC). Alternatively, the processor and the readable storage medium may reside as discrete components in the flow control device.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be performed by hardware associated with program instructions. The foregoing program may be stored in a computer-readable storage medium. When executed, the program performs the steps of the above method embodiments; and the aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.

Claims (11)

1. A flow control method, applied to a service system, where the service system includes a load balancing device and a plurality of service servers, the flow control method comprising:
after receiving a service request, the load balancing equipment determines a target flow corresponding to the service request;
The load balancing device determines a service server with the minimum corresponding transmitted service request flow as a target service server according to the transmitted service request flow respectively corresponding to the plurality of service servers, wherein the transmitted service request flow is the total flow corresponding to the service request transmitted to the service server in a preset period; if the service server is configured with a slow-start service server flow control rule, the load balancing device controls the service server to realize slow start according to the slow-start service server flow control rule and determines the sent service request flow of the corresponding service server;
and the load balancing equipment sends the service request to the target service server.
2. The flow control method according to claim 1, wherein the load balancing device determines, according to the transmitted service request flows respectively corresponding to the plurality of service servers, a service server with a smallest corresponding transmitted service request flow as a target service server, including:
if the service system is monitored to generate a preset state change event, the load balancing device acquires the sent service request flows respectively corresponding to the plurality of service servers, and the load balancing device determines the service server with the minimum corresponding sent service request flow as a target service server according to the sent service request flows respectively corresponding to the plurality of service servers, wherein the preset state change event at least comprises the creation of the service server or the restarting of the service server; or alternatively
If the transmitted service request flow of one service server of the plurality of service servers changes, the load balancing device obtains the transmitted service request flow corresponding to the plurality of service servers respectively, and the load balancing device determines the service server with the minimum corresponding transmitted service request flow as the target service server according to the transmitted service request flow corresponding to the plurality of service servers respectively.
3. The flow control method according to claim 2, wherein the load balancing device obtains the sent service request flows respectively corresponding to the plurality of service servers if the service system is monitored to have a preset state change event, including:
If the service system is monitored to generate a preset state change event, the load balancing equipment compares the service server information obtained according to a preset interface with the service server information running in the memory;
if a service server is a newly added service server, the load balancing device obtains the sent service request flow corresponding to the newly added service server by initializing preset parameters;
If the service server is the existing service server, the load balancing device imports the operation parameters in the corresponding memory to the existing service server to acquire the sent service request flow corresponding to the existing service server.
4. The flow control method according to claim 3, wherein the preset parameters include a flow ratio, and the load balancing device obtains the sent service request flow corresponding to the newly added service server by initializing the preset parameters if the service server is a newly added service server, including:
if the value of the flow ratio corresponding to the newly added service server is a first preset value, the sent service request flow corresponding to the newly added service server is the maximum value of the sent service request flows of all the service servers; or alternatively
And if the value of the flow ratio corresponding to the newly added service server is a second preset value, the sent service request flow corresponding to the newly added service server is the sum of the sent service request flows corresponding to all the service servers.
5. The flow control method according to claim 2, wherein the load balancing device determines, according to the transmitted service request flows respectively corresponding to the plurality of service servers, a service server with a smallest corresponding transmitted service request flow as a target service server, including:
If the sent service request flows respectively corresponding to the plurality of service servers are all equal to a third preset value, the load balancing device determines a service server randomly selected from the plurality of service servers as the target service server; or alternatively

If the sent service request flows respectively corresponding to the plurality of service servers are not all equal to the third preset value, the load balancing device determines the service server with the smallest corresponding sent service request flow as the target service server.
6. The flow control method according to any one of claims 1 to 5, wherein after the load balancing device determines, from the plurality of service servers, that the service server with the smallest corresponding sent service request flow is the target service server, the method further comprises:
The load balancing device stores the identification of the target service server into a forwarding queue;
the load balancing device sending the service request to the target service server, including:
the load balancing device obtains the identification of the target service server from the forwarding queue, and sends the service request to the target service server according to the identification of the target service server.
7. The flow control method according to claim 1, characterized by further comprising:
If the service server is configured with a slow-start service server flow control rule and the current flow of the service server represents a speed-limiting flow, determining that the sent service request flow of the service server is the current flow; or alternatively
If the service server is configured with a slow-start service server flow control rule and the value of the flow ratio represents a speed-limiting flow, determining that the sent service request flow of the service server is the sum of the sent service request flows corresponding to the service servers which do not carry out the speed-limiting flow, or determining that the sent service request flow of the service server is the sum of the sent service request flows corresponding to all the service servers.
8. A flow control apparatus, applied to a service system, wherein the service system comprises a load balancing device and a plurality of service servers, the flow control apparatus comprising:
The first determining module is used for determining the target flow corresponding to the service request after receiving the service request;
The second determining module is used for determining a service server with the minimum corresponding transmitted service request flow as a target service server according to the transmitted service request flow respectively corresponding to the plurality of service servers, wherein the transmitted service request flow is the total flow corresponding to the service request transmitted to the service server in a preset period; the second determining module is further configured to, if the service server is configured with a slow-start service server flow control rule, control the service server to implement slow start according to the slow-start service server flow control rule by the load balancing device, and determine the sent service request flow of the corresponding service server;
and the processing module is used for sending the service request to the target service server.
9. An electronic device, comprising: a memory and a processor;
the memory is used for storing program instructions;
The processor is configured to invoke program instructions in the memory to perform the flow control method of any of claims 1 to 7.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein computer program instructions, which when executed, implement the flow control method according to any of claims 1 to 7.
11. A computer program product comprising a computer program which, when executed by a processor, implements a flow control method as claimed in any one of claims 1 to 7.
CN202110858404.4A 2021-07-28 2021-07-28 Flow control method, device, equipment and storage medium Active CN113596149B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110858404.4A CN113596149B (en) 2021-07-28 2021-07-28 Flow control method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110858404.4A CN113596149B (en) 2021-07-28 2021-07-28 Flow control method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113596149A CN113596149A (en) 2021-11-02
CN113596149B true CN113596149B (en) 2024-07-12

Family

ID=78251205

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110858404.4A Active CN113596149B (en) 2021-07-28 2021-07-28 Flow control method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113596149B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117478610B (en) * 2023-12-27 2024-03-12 成都新希望金融信息有限公司 Global flow control method and device, electronic equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110233866A (en) * 2018-03-06 2019-09-13 中国移动通信集团广东有限公司 A kind of load-balancing method and load balancer

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9667711B2 (en) * 2014-03-26 2017-05-30 International Business Machines Corporation Load balancing of distributed services
CN104980361B (en) * 2014-04-01 2018-09-21 华为技术有限公司 A kind of load-balancing method, apparatus and system
CN104902001B (en) * 2015-04-07 2018-04-06 杭州电子科技大学 Web request load-balancing method based on operating system virtualization
US10616318B1 (en) * 2017-11-28 2020-04-07 Amazon Technologies, Inc. Load balancer employing slow start, weighted round robin target selection
CN111831450B (en) * 2020-07-20 2023-07-28 北京百度网讯科技有限公司 Method, device, electronic equipment and storage medium for distributing server resources
CN111857974A (en) * 2020-07-30 2020-10-30 江苏方天电力技术有限公司 Service access method and device based on load balancer
CN113179323B (en) * 2021-04-29 2023-07-04 杭州迪普科技股份有限公司 HTTPS request processing method, device and system for load balancing equipment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110233866A (en) * 2018-03-06 2019-09-13 中国移动通信集团广东有限公司 A kind of load-balancing method and load balancer

Also Published As

Publication number Publication date
CN113596149A (en) 2021-11-02

Similar Documents

Publication Publication Date Title
US11972107B2 (en) Quality of service management in a distributed storage system
US10735345B2 (en) Orchestrating computing resources between different computing environments
US11157324B2 (en) Partitioning for delayed queues in a distributed network
US8984058B2 (en) Pre-fetching remote resources
US9729488B2 (en) On-demand mailbox synchronization and migration system
US20190306225A1 (en) Adaptive communication control device
US8375383B2 (en) Rolling upgrades in distributed applications
US11899987B2 (en) Quality of service management in a distributed storage system
CN112333096A (en) Micro-service traffic scheduling method and related components
US9602950B1 (en) Context-based data storage management between devices and cloud platforms
CN112346926B (en) Resource state monitoring method and device and electronic equipment
US11831495B2 (en) Hierarchical cloud computing resource configuration techniques
US9342291B1 (en) Distributed update service
US10642585B1 (en) Enhancing API service schemes
CN113596149B (en) Flow control method, device, equipment and storage medium
CN112702195A (en) Gateway configuration method, electronic device and computer readable storage medium
CN113079098B (en) Method, device, equipment and computer readable medium for updating route
US20180095440A1 (en) Non-transitory computer-readable storage medium, activation control method, and activation control device
CN113986423A (en) Bullet frame display method and system, storage medium and terminal equipment
US11526499B2 (en) Adaptively updating databases of publish and subscribe systems using optimistic updates
US11768704B2 (en) Increase assignment effectiveness of kubernetes pods by reducing repetitive pod mis-scheduling
CN113722084B (en) Data processing method, device, electronic equipment and computer storage medium
US11271866B2 (en) Data sharing events
CN111538926A (en) Automatic offline package publishing method and device, electronic equipment and storage medium
CN113703798A (en) Distributed service updating method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant