
CN115134304B - Self-adaptive load balancing method for avoiding data packet disorder of cloud computing data center - Google Patents


Info

Publication number
CN115134304B
CN115134304B (Application CN202210740907.6A)
Authority
CN
China
Prior art keywords
path
data packet
delay
time
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210740907.6A
Other languages
Chinese (zh)
Other versions
CN115134304A (en)
Inventor
胡晋彬
贺蔓
饶淑莹
王进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha University of Science and Technology
Original Assignee
Changsha University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha University of Science and Technology
Priority to CN202210740907.6A
Publication of CN115134304A
Application granted
Publication of CN115134304B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00: Traffic control in data switching networks
    • H04L47/10: Flow control; Congestion control
    • H04L47/12: Avoiding congestion; Recovering from congestion
    • H04L47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00: Traffic control in data switching networks
    • H04L47/10: Flow control; Congestion control
    • H04L47/28: Flow control; Congestion control in relation to timing considerations
    • H04L47/283: Flow control; Congestion control in relation to timing considerations in response to processing delays, e.g. caused by jitter or round trip time [RTT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides an adaptive load balancing method for avoiding data packet disorder in a cloud computing data center, in the technical field of data processing. The method effectively avoids rerouting-induced disorder in RDMA networks, preserves the flexibility of packet path switching, promotes network load balance, reduces transmission delay, and improves data transmission efficiency.

Description

Self-adaptive load balancing method for avoiding data packet disorder of cloud computing data center
Technical Field
The application relates to the technical field of data processing, in particular to a self-adaptive load balancing method for avoiding data packet disorder of a cloud computing data center.
Background
To meet the low-latency, high-throughput reliable transmission requirements of online data-intensive services, distributed machine learning, and storage applications in cloud computing, and to reduce CPU utilization, Ethernet-based Remote Direct Memory Access (RDMA) has been widely deployed in modern Converged Enhanced Ethernet data centers. Data transmission bypasses the core processing system of the end host, significantly reducing the processing latency incurred on the end host.
However, existing load balancing solutions do not work effectively in RDMA-deployed data centers. Per-packet load balancing switches paths freely but easily causes disorder: a packet with a larger sequence number arrives at the receiver before a packet with a smaller one, so the receiver's network card must buffer out-of-order packets, potentially overflowing the buffer and dropping packets. Per-flow load balancing avoids disorder, but packets cannot flexibly switch paths, leaving the network load unbalanced and link utilization low. Flowlet-based load balancing avoids disorder while allowing a flowlet to switch paths whenever the inter-packet gap exceeds a preset threshold; however, current RDMA transports smooth the sending rate, so inter-packet gaps rarely reach the threshold, flowlets rarely appear, and flows cannot be flexibly rerouted, again causing load imbalance and low link utilization. Setting the flowlet gap threshold directly to a small value instead causes frequent rerouting and disorder between flowlets.
Therefore, how to switch packet paths flexibly while avoiding disorder, balance the load, improve link utilization, and reduce flow completion time is an urgent problem in data centers where RDMA is deployed.
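The flowlet dilemma described above can be made concrete with a small sketch. This is illustrative only (the function name, gap values, and threshold are assumptions, not from the patent): with rate-smoothed RDMA pacing, the inter-packet gap never crosses a fixed threshold, so a flowlet-based balancer finds no safe switching point.

```python
def count_flowlets(gaps_us, threshold_us):
    """Count flowlet boundaries (rerouting opportunities) in one flow,
    given its inter-packet gaps in microseconds and a fixed gap threshold."""
    return sum(1 for gap in gaps_us if gap > threshold_us)

# Rate-smoothed RDMA pacing: near-uniform 2 us gaps, 100 us threshold.
smooth = [2.0] * 1000
print(count_flowlets(smooth, 100.0))        # 0 -> the flow is never rerouted

# Bursty traffic with occasional long pauses does create flowlets.
bursty = [2.0, 150.0, 2.0, 2.0, 300.0, 2.0]
print(count_flowlets(bursty, 100.0))        # 2
```

Lowering the threshold to, say, 1 μs would create many flowlets in the smooth trace, but, as the background notes, the resulting frequent rerouting reintroduces disorder.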
Disclosure of Invention
The adaptive load balancing method for avoiding data packet disorder in a cloud computing data center provided by the application effectively solves the problem that existing load balancing mechanisms cannot guarantee both flexible packet rerouting and the absence of disorder in a data center network deployed with RDMA.
To solve the above technical problems, the application adopts the following technical scheme: an adaptive load balancing method for avoiding data packet disorder in a cloud computing data center, comprising the following steps:
Step 1: initialize the base path round-trip time RTT, the path-delay update period T_th, and the start time t of the path-delay update period;
Step 2: the switch monitors whether a new data packet arrives; if so, go to step 3; otherwise, continue monitoring for new packets;
Step 3: judge whether the current packet is the first packet of a new flow; if so, select the path with the minimum delay among all paths as the forwarding path; otherwise, go to step 4;
Step 4: compute the difference t_f = t_c - t_e between the arrival time t_c of the current packet and the forwarding time t_e of the previous packet of its flow, and compute the delay difference t_p between the current path and each other path whose delay is smaller than the current path's; go to step 5;
Step 5: judge whether there is another path for which t_f is greater than t_p; if so, reroute the packet, selecting any path satisfying the condition as the forwarding path, and go to step 6; otherwise, do not switch paths, keep the current path as the forwarding path, and go to step 6;
Step 6: set t_e, the forwarding time of the previous packet of the current packet's flow, to the forwarding time of the current packet, and return to step 2.
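The six steps above can be sketched as a per-packet decision routine. This is a minimal illustration under the assumption that the switch already holds per-path delay estimates; the class name, path ids, and flow-table layout are illustrative, not from the patent, and step 5 here uses the minimum-delay refinement described in the later "Further" clause.

```python
class RdmaLbSwitch:
    """Sketch of the per-packet forwarding decision (steps 2-6).
    path_delay maps path id -> estimated one-way delay (microseconds)."""

    def __init__(self, path_delay):
        self.path_delay = dict(path_delay)   # path id -> estimated delay
        self.flows = {}                      # flow id -> (path, t_e)

    def forward(self, flow, t_c):
        """Choose a forwarding path for a packet of `flow` arriving at t_c."""
        if flow not in self.flows:
            # Step 3: first packet of a new flow -> minimum-delay path.
            path = min(self.path_delay, key=self.path_delay.get)
        else:
            # Step 4: gap t_f since the flow's previous packet was forwarded.
            path, t_e = self.flows[flow]
            t_f = t_c - t_e
            cur = self.path_delay[path]
            # Step 5: lower-delay paths whose delay gap t_p = cur - d
            # is smaller than t_f are safe rerouting targets.
            safe = [p for p, d in self.path_delay.items()
                    if d < cur and t_f > cur - d]
            if safe:
                path = min(safe, key=self.path_delay.get)
        # Step 6: record this packet's forwarding time as the new t_e.
        self.flows[flow] = (path, t_c)
        return path
```

With two paths of delay 50 μs and 10 μs, a 100 μs gap exceeds the 40 μs delay difference and permits a switch to the faster path, while a 20 μs gap does not.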
Further, in step 1, the base path round-trip time RTT is initialized to 50 μs, the path-delay update period T_th is set to 2·RTT, and the start time t of the path-delay update period is set to 0.
Further, at any time from the arrival of a new packet in step 2 until step 6 completes, it is judged whether the difference between the current time and the start time t of the path-delay update period is greater than or equal to T_th; if so, the round-trip delay of each path is updated, and the start time t of the update period is set to the current time.
Further, when the round-trip delay of each path is updated, it is updated according to the one-way path delay carried by the packets used to probe path delay.
Further, in step 5, if there are multiple other paths for which t_f is greater than t_p, the packet is rerouted to the path with the minimum delay among the qualifying paths, and the method goes to step 6; otherwise, the packet does not switch paths, the current path remains the forwarding path, and the method goes to step 6.
According to this adaptive load balancing method, without setting any time-interval threshold for path switching, a switch in an RDMA network can adaptively decide whether to reroute based on the packet gaps within a flow and the delay differences among parallel equivalent paths, while guaranteeing no disorder: when the interval between a packet's arrival at the switch and the forwarding time of the flow's previous packet exceeds the delay difference between the current path and a path with smaller delay, the lowest-delay path among the qualifying paths is selected as the rerouting path of the current packet. The method thus avoids rerouting-induced disorder in RDMA networks, keeps path switching flexible, promotes network load balance, reduces transmission delay, and improves data transmission efficiency.
Drawings
FIG. 1 is a flow chart of the adaptive load balancing method for avoiding data packet disorder in a cloud computing data center according to the present application;
FIG. 2 is a topology diagram of the test scenario on the NS-3 network simulation platform in an embodiment of the application;
FIG. 3 plots the out-of-order packet ratio of MP-RDMA and RDMALB in symmetric and asymmetric topologies in an embodiment of the application;
FIG. 4 compares the average flow completion time and the 99th-percentile flow completion time of seven load balancing mechanisms under different workloads and load intensities in an embodiment of the application.
Detailed Description
The application is further described below with reference to the examples and drawings, which are not intended to limit its scope.
Before describing the application in detail, its design idea is outlined here. The aim is an adaptive load balancing scheme that needs no preset time-interval threshold for path switching and guarantees that flexible packet rerouting causes no disorder. Specifically, when the interval between the arrival of a packet at the switch and the forwarding time of the previous packet of its flow is greater than the delay difference between the current path and another path with smaller delay, the packet can be rerouted from the current path to that lower-delay path. To further reduce transmission delay, the path with the minimum delay among all qualifying rerouting paths is selected as the forwarding path. For example, suppose the previously forwarded packet of a flow was sent on path P_1, and the interval between the current packet's arrival at the switch and that packet's forwarding time is t_1. If another path P_x has a smaller delay than the current path, and t_1 is greater than the delay difference between the current path and P_x, then the current packet can be rerouted to P_x without disorder: it is guaranteed to reach the receiver after the flow's previous packet. Among all paths P_x satisfying this condition, the application selects the one with the smallest path delay as the rerouting path of the current packet.
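The no-disorder argument above reduces to simple arithmetic on arrival times. A sketch under the assumption of stable one-way path delays (the function name and the numbers are illustrative, not from the patent):

```python
def in_order_after_switch(t_e, t_c, d_cur, d_new):
    """True if a packet sent at t_c on a path with one-way delay d_new
    arrives after the previous packet, sent at t_e on delay d_cur.
    Equivalent to the rerouting condition t_f > t_p, since
    t_c + d_new > t_e + d_cur  <=>  (t_c - t_e) > (d_cur - d_new)."""
    arrive_prev = t_e + d_cur
    arrive_cur = t_c + d_new
    return arrive_cur > arrive_prev

# Gap t_f = 30 us, delay difference t_p = 50 - 10 = 40 us: unsafe.
print(in_order_after_switch(0, 30, 50, 10))   # False -> would reorder

# Gap t_f = 45 us > 40 us: safe to move to the faster path.
print(in_order_after_switch(0, 45, 50, 10))   # True -> stays in order
```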
The adaptive load balancing method for avoiding data packet disorder in a cloud computing data center is now described in detail. As shown in fig. 1, it comprises:
step one, initializing: the base path round trip delay RTT is set to be 50 mu s, and the path delay update period T th Set to 2RTT and the start time t of the path delay update period is set to 0.
Step two, exchangerMonitoring whether a new data packet arrives, if so, judging whether the difference value between the current time and the starting time T of the path delay updating period is greater than or equal to the path delay updating period T th If it is greater than or equal to the path delay update period T th Updating the round trip delay of each path according to the unidirectional path delay carried by the data packet for detecting the path delay, setting the starting time T of the path delay updating period as the current time, turning to the next step, and if the starting time T is smaller than the path delay updating period T th If no new data packet arrives, the switch continues to monitor whether a new data packet arrives.
Judging whether the current data packet is the first data packet of a new flow, if so, selecting a path with minimum delay from all paths as a forwarding path; otherwise, turning to the fourth step.
Step four, calculating the arrival time t of the current data packet c Time t for forwarding data packet before the flow to which the data packet belongs e Difference t of f ,t f =t c -t e And calculates the delay difference t between the current path and other paths with smaller delay than the current path p Turning to the fifth step.
Step five, judging whether other paths exist to enable t f Greater than t p If yes, rerouting the data packet, and selecting the path with the minimum delay in other paths meeting the condition as a forwarding path, and turning to the step six; otherwise, the data packet does not switch paths, the current path is selected as a forwarding path, and the step six is performed.
Step six, forwarding the previous data packet of the current data packet belongs to the flow by the time t e Setting the forwarding time of the current data packet, and switching to the second step.
It is noted that the check "judge whether the difference between the current time and the start time t of the path-delay update period is greater than or equal to T_th; if so, update the round-trip delay of each path according to the one-way path delay carried by the delay-probing packets and set the start time t of the update period to the current time, then go to the next step; if it is less than T_th, go directly to the next step" may be performed at any point from the moment step 2 detects a new packet until step 6 completes, not only within step 2.
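This floating update check can be sketched as a small helper invoked wherever convenient between steps 2 and 6. The state layout, names, and probe callback are illustrative assumptions, not from the patent:

```python
def maybe_update_delays(now, state, probe, T_th):
    """Refresh per-path delay estimates once per update period T_th.
    `state` holds {'t': period start time, 'delay': {path: delay}};
    `probe` returns fresh one-way delays carried by probe packets.
    Returns True when an update was performed."""
    if now - state['t'] >= T_th:
        state['delay'] = probe()   # adopt the probed per-path delays
        state['t'] = now           # restart the update period
        return True
    return False

state = {'t': 0, 'delay': {}}
print(maybe_update_delays(90, state, lambda: {0: 48}, 100))   # False: 90 < 100
print(maybe_update_delays(120, state, lambda: {0: 48}, 100))  # True: period elapsed
```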
To verify the effectiveness of the application, the performance of the method was tested on the NS-3 network simulation platform with the following settings. As shown in fig. 2, the simulation uses a leaf-spine topology with 10 leaf switches and 10 spine switches; each leaf switch connects to 30 servers and to the 10 spine switches through 40 Gbps links, giving an oversubscription ratio of 3:1 at the leaf layer. The link delay is 5 μs, the switch buffer is 9 MB, the PFC mechanism is enabled to ensure lossless transmission, and the PFC threshold of each ingress port is 256 KB. The experiments generate three typical workloads (web server, cache follower, and data mining) whose average flow sizes range from 64 KB to 7.41 MB, with flow arrivals following a Poisson process. In the web-server workload all flows are smaller than 1 MB, while in the data-mining workload about 9% of flows exceed 1 MB. Each flow is generated between a random source-destination host pair, the ratio of intra-leaf to inter-leaf traffic is 1:3, and the average network load is varied from 0.5 to 0.8.
First, the number of parallel paths is varied in a symmetric topology. Fig. 3(a) shows that as the number of parallel paths increases, the RDMALB scheme provided by the application effectively avoids disorder: the out-of-order packet ratio is 0. This is because a packet is rerouted only when the delay difference between the new path and the current path is smaller than the interval between two consecutive packets of the same flow, which ensures that packets with larger sequence numbers reach the receiver later than packets with smaller sequence numbers, i.e., packets arrive in order. MP-RDMA, in contrast, can only bound the degree of disorder within a preset range and cannot avoid it. Next, the experiment reduces the default 40 Gbps bandwidth of some parallel paths to 25 Gbps to create a bandwidth-asymmetric topology. Fig. 3(b) shows that even at the highest degree of asymmetry (four asymmetric paths), RDMALB still delivers packets in order, with an out-of-order packet ratio of 0.
Finally, the experiment measured the average flow completion time and the 99th-percentile flow completion time of the different schemes under different workloads and load intensities; the results are shown in fig. 4.
FIG. 4(a) compares the average flow completion time of seven load balancing mechanisms (ECMP, DCQCN+ECMP, MP-RDMA, LetFlow, DCQCN+LetFlow, PCN+LetFlow, and RDMALB) under the web-server workload at four load intensities (0.5, 0.6, 0.7, 0.8);
FIG. 4(b) compares their 99th-percentile flow completion time under the web-server workload at the same load intensities;
FIG. 4(c) compares their average flow completion time under the cache-follower workload;
FIG. 4(d) compares their 99th-percentile flow completion time under the cache-follower workload;
FIG. 4(e) compares their average flow completion time under the data-mining workload;
FIG. 4(f) compares their 99th-percentile flow completion time under the data-mining workload.
the RDMALB provided by the application obtains the best performance among three workloads, taking the data-mining workload in fig. 4 (e) as an example, under the condition that the load intensity is 0.8, compared with ECMP, letFlow, DC +ecmp (i.e., dcqcn+ecmp), dc+letflow (i.e., dcqcn+letflow), pcn+letflow and MP-RDMA, the RDMALB reduces the average flow completion time by 65%, 58%, 76%, 70%, 18% and 38%, respectively. This is because RDMALB adaptively flexibly reroutes packets out of order according to the packet interval and path delay differences to ensure high link utilization and load balancing. Since ECMP is a process of transmitting packets at the stream level, all packets of a stream are transmitted on one path, and the link utilization is low, resulting in a large stream completion time. The LetFlow can only reroute packets if a flowlet is present, but the flowlet is rarely present in RDMA network environment, so that the packets cannot flexibly switch paths, resulting in an increase in flow completion time. Even if these load balancing mechanisms are coordinated with existing congestion control mechanisms, performance is worse than RDMALB. The specific reason is that the DC rate convergence process is slow, and the flow completion times of dcn+ecmp and dc+letflow are respectively greater than those of ECMP and LetFlow after the transmission rate is restored to the line speed. MP-RDMA equalizes traffic in a congestion-aware manner, with significantly better performance than ECMP and LetFlow. While in pcn+letflow, PCN can identify and limit the rate of congestion flow, significantly reduce PFC triggers over MP-RDMA, letFlow is very difficult to work, load imbalance, and still has poorer performance than RDMALB.
In summary, the adaptive load balancing method for avoiding data packet disorder in a cloud computing data center can, without any time-interval threshold for path switching, adaptively decide whether to reroute in an RDMA network based on the packet gaps of a flow and the delay differences among parallel equivalent paths, avoiding disorder while keeping path switching flexible, promoting network load balance, reducing transmission delay, and improving data transmission efficiency.
The foregoing embodiments are preferred embodiments of the present application. The application may also be implemented in other ways; any obvious substitution that does not depart from the concept of the application falls within its scope of protection.
In order to facilitate understanding of the improvements of the present application over the prior art, some of the figures and descriptions of the present application have been simplified and some other elements have been omitted for clarity, as will be appreciated by those of ordinary skill in the art.

Claims (4)

1. An adaptive load balancing method for avoiding data packet disorder in a cloud computing data center, characterized by comprising the following steps:
Step 1: initialize a base path round-trip time RTT, a path-delay update period T_th, and a start time t of the path-delay update period;
Step 2: a switch monitors whether a new data packet arrives; if so, go to step 3; otherwise, continue monitoring for new packets;
Step 3: judge whether the current packet is the first packet of a new flow; if so, select the path with the minimum delay among all paths as the forwarding path; otherwise, go to step 4;
Step 4: compute the difference t_f = t_c - t_e between the arrival time t_c of the current packet and the forwarding time t_e of the previous packet of its flow, and compute the delay difference t_p between the current path and each other path whose delay is smaller than the current path's; go to step 5;
Step 5: judge whether there is another path for which t_f is greater than t_p; if so, reroute the packet to the path with the minimum delay among the qualifying paths and go to step 6; otherwise, keep the current path as the forwarding path and go to step 6;
Step 6: set t_e, the forwarding time of the previous packet of the current packet's flow, to the forwarding time of the current packet, and return to step 2.
2. The adaptive load balancing method for avoiding data packet disorder in a cloud computing data center according to claim 1, wherein: in step 1, the base path round-trip time RTT is initialized to 50 μs, the path-delay update period T_th is set to 2·RTT, and the start time t of the path-delay update period is set to 0.
3. The adaptive load balancing method for avoiding data packet disorder in a cloud computing data center according to claim 2, wherein: at any time from step 2 until step 6 completes, it is judged whether the difference between the current time and the start time t of the path-delay update period is greater than or equal to T_th; if so, the round-trip delay of each path is updated, and the start time t of the update period is set to the current time.
4. The adaptive load balancing method for avoiding data packet disorder in a cloud computing data center according to claim 3, wherein: the round-trip delay of each path is updated according to the one-way path delay carried by the packets used to probe path delay.
CN202210740907.6A 2022-06-27 2022-06-27 Self-adaptive load balancing method for avoiding data packet disorder of cloud computing data center Active CN115134304B (en)

Priority Applications (1)

Application Number: CN202210740907.6A · Priority Date: 2022-06-27 · Filing Date: 2022-06-27 · Title: Self-adaptive load balancing method for avoiding data packet disorder of cloud computing data center


Publications (2)

Publication Number · Publication Date
CN115134304A · 2022-09-30
CN115134304B · 2023-10-03

Family

ID=83379145

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210740907.6A Active CN115134304B (en) 2022-06-27 2022-06-27 Self-adaptive load balancing method for avoiding data packet disorder of cloud computing data center

Country Status (1)

Country Link
CN (1) CN115134304B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117478615B (en) * 2023-12-28 2024-02-27 贵州大学 Reliable transmission method in deterministic network

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101499957A (en) * 2008-01-29 2009-08-05 中国电信股份有限公司 Multipath load balance implementing method and data forwarding apparatus
CN105873096A (en) * 2016-03-24 2016-08-17 重庆邮电大学 Optimization method of efficient throughput capacity of multipath parallel transmission system
CN107426102A (en) * 2017-07-26 2017-12-01 桂林电子科技大学 Multipath parallel transmission dynamic decision method based on path quality
CN110351196A (en) * 2018-04-02 2019-10-18 华中科技大学 Load-balancing method and system based on accurate congestion feedback in cloud data center
CN110351187A (en) * 2019-08-02 2019-10-18 中南大学 Data center network Road diameter switches the adaptive load-balancing method of granularity
CN110932814A (en) * 2019-12-05 2020-03-27 北京邮电大学 Software-defined network time service safety protection method, device and system
CN111416777A (en) * 2020-03-26 2020-07-14 中南大学 Load balancing method and system based on path delay detection
CN113098789A (en) * 2021-03-26 2021-07-09 南京邮电大学 SDN-based data center network multipath dynamic load balancing method
CN113810405A (en) * 2021-09-15 2021-12-17 佳缘科技股份有限公司 SDN network-based path jump dynamic defense system and method
CN114666278A (en) * 2022-05-25 2022-06-24 湖南工商大学 Data center load balancing method and system based on global dynamic flow segmentation

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9503378B2 (en) * 2013-06-07 2016-11-22 The Florida International University Board Of Trustees Load-balancing algorithms for data center networks
CN106713158B (en) * 2015-07-16 2019-11-29 华为技术有限公司 The method and device of load balancing in Clos network
US10142248B2 (en) * 2015-09-29 2018-11-27 Huawei Technologies Co., Ltd. Packet mis-ordering prevention in source routing hitless reroute using inter-packet delay and precompensation
US10715446B2 (en) * 2016-09-12 2020-07-14 Huawei Technologies Co., Ltd. Methods and systems for data center load balancing

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101499957A (en) * 2008-01-29 2009-08-05 中国电信股份有限公司 Multipath load balance implementing method and data forwarding apparatus
CN105873096A (en) * 2016-03-24 2016-08-17 重庆邮电大学 Optimization method of efficient throughput capacity of multipath parallel transmission system
CN107426102A (en) * 2017-07-26 2017-12-01 桂林电子科技大学 Multipath parallel transmission dynamic decision method based on path quality
CN110351196A (en) * 2018-04-02 2019-10-18 华中科技大学 Load-balancing method and system based on accurate congestion feedback in cloud data center
CN110351187A (en) * 2019-08-02 2019-10-18 中南大学 Data center network Road diameter switches the adaptive load-balancing method of granularity
CN110932814A (en) * 2019-12-05 2020-03-27 北京邮电大学 Software-defined network time service safety protection method, device and system
CN111416777A (en) * 2020-03-26 2020-07-14 中南大学 Load balancing method and system based on path delay detection
CN113098789A (en) * 2021-03-26 2021-07-09 南京邮电大学 SDN-based data center network multipath dynamic load balancing method
CN113810405A (en) * 2021-09-15 2021-12-17 佳缘科技股份有限公司 SDN network-based path jump dynamic defense system and method
CN114666278A (en) * 2022-05-25 2022-06-24 湖南工商大学 Data center load balancing method and system based on global dynamic flow segmentation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Flex: A flowlet-level load balancing based on load-adaptive timeout in DCN; Xinglong Diao; Future Generation Computer Systems; full text *
Multipath ACK path selection algorithm based on minimum feedback delay (in Chinese); Yang Wang, Li Hewu, Wu Qian, Wu Jianping; Journal of Tsinghua University (Science and Technology), no. 07; full text *
Research on load balancing in data center networks (in Chinese); Shen Gengbiao, Li Qing, Jiang Yong, Wang Yi, Xu Mingwei; Journal of Software, no. 07; full text *

Also Published As

Publication number Publication date
CN115134304A (en) 2022-09-30

Similar Documents

Publication Publication Date Title
EP3949293B1 (en) Slice-based routing
Zhang et al. Load balancing in data center networks: A survey
Perry et al. Fastpass: A centralized" zero-queue" datacenter network
He et al. Presto: Edge-based load balancing for fast datacenter networks
US10218629B1 (en) Moving packet flows between network paths
US10454830B2 (en) System and method for load balancing in a data network
Kabbani et al. Flowbender: Flow-level adaptive routing for improved latency and throughput in datacenter networks
CA2882535C (en) Control device discovery in networks having separate control and forwarding devices
Hong et al. Finishing flows quickly with preemptive scheduling
Li et al. MPTCP incast in data center networks
WO2021050481A1 (en) Packet order recovery in a programmable edge switch in a data center network
Hu et al. TLB: Traffic-aware load balancing with adaptive granularity in data center networks
CN111224888A (en) Method for sending message and message forwarding equipment
CN115134304B (en) Self-adaptive load balancing method for avoiding data packet disorder of cloud computing data center
CN113746751A (en) Communication method and device
Dong et al. Low-cost datacenter load balancing with multipath transport and top-of-rack switches
Hussain et al. A dynamic multipath scheduling protocol (DMSP) for full performance isolation of links in software defined networking (SDN)
Nithin et al. Efficient load balancing for multicast traffic in data center networks using SDN
CN115022227B (en) Data transmission method and system based on circulation or rerouting in data center network
US20240015563A1 (en) Quasi-stateful load balancing
Majidi et al. ECN+: A marking-aware optimization for ECN threshold via per-port in data center networks
CN108881010A (en) Congestion path method of adjustment based on benefit and loss evaluation
Li et al. VMS: Traffic balancing based on virtual switches in datacenter networks
Wen et al. OmniFlow: Coupling load balancing with flow control in datacenter networks
Mon et al. Flow path computing in software defined networking

Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant