
EP3975497A1 - Message load balancing method - Google Patents

Message load balancing method

Info

Publication number
EP3975497A1
EP3975497A1 (application EP20843488.6A)
Authority
EP
European Patent Office
Prior art keywords
maximum number
concurrent streams
load balancing
setting frame
streams
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP20843488.6A
Other languages
German (de)
French (fr)
Other versions
EP3975497A4 (en)
Inventor
Ting Hu
Peng Ren
Shirong ZHAO
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Publication of EP3975497A1
Publication of EP3975497A4


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/14 Session management
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/2866 Architectures; Arrangements
    • H04L 67/30 Profiles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/51 Discovery or management thereof, e.g. service location protocol [SLP] or web services
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L 69/163 In-band adaptation of TCP data exchange; In-band control procedures



Abstract

Disclosed is a method for message load balancing, including: requesting by a first NF a second NF to establish N TCP links for sending HTTP messages; sending by the second NF a first setting frame carrying a first maximum number of concurrent streams to the first NF; sending by the second NF a second setting frame carrying a second maximum number of concurrent streams to the first NF, the second maximum number of concurrent streams being less than the first maximum number of concurrent streams; and requesting by the first NF the second NF to establish M new TCP links based on a total number of current service requests and the second maximum number of concurrent streams, where the second maximum number of concurrent streams multiplied by a sum of M and N is equal to the total number of current service requests.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • The disclosure claims the priority of the Chinese patent application CN201910665215.8 entitled "METHOD FOR MESSAGE LOAD BALANCING" filed on July 22nd, 2019, the entire contents of which are incorporated herein by reference.
  • TECHNICAL FIELD
  • The disclosure relates to the field of communications, and in particular, to a method for message load balancing.
  • BACKGROUND
  • Explanation of terms:
    • 3GPP: 3rd Generation Partnership Project, which is an international organization responsible for formulating wireless communication standards.
    • 5GC: 5G Core Network, which is a core network connected to a 5G access network.
    • TCP: Transmission Control Protocol, which is a transmission layer communication protocol based on byte streams.
    • NF: Network Function, which is a processing function in a network, adopted or defined by 3GPP, with functional behaviors and interfaces defined by 3GPP. A network function can be implemented as a network element on dedicated hardware, as a software instance running on dedicated hardware, or as a virtualized function instantiated on a suitable platform.
    • SBI: Service Based Interface, which represents how a set of services is provided or exposed by a given NF.
    • HTTP: Hyper Text Transfer Protocol.
    • HTTP/2: HTTP version 2 (Hypertext Transfer Protocol Version 2).
  • In a 5GC system defined by 3GPP, SBI messages between various NFs use HTTP/2 based on TCP as the protocol type. Due to the large-capacity requirements of the 5GC system, NF generally uses multiple processing instances to process HTTP messages (the instances here may be virtual machines or containers), and there is a load balance relationship among multiple instances.
  • In the browser-server networking mode of the traditional Internet, each of browsers distributed at different locations will establish at least one independent TCP link to the server, and the massive number of TCP links are processed by the load balancing algorithm, which enables the number of TCP links on each instance basically the same, so that the number of HTTP messages on each instance is also basically the same.
  • In the 5GC system, due to the multi-stream mechanism of the HTTP/2 protocol, a large number of HTTP messages can be processed simultaneously on each TCP link, so the link reuse rate is very high. Therefore, only a small number of TCP links are required between two NFs to meet the large-capacity requirement. This brings a problem: the numbers of TCP links carried by the multiple processing instances of an NF serving as the HTTP server are unequal, so the numbers of messages processed by the instances are unequal, resulting in load imbalance.
  • SUMMARY
  • The disclosure provides a method and a device for message load balancing, to at least alleviate the above-mentioned problem of load imbalance among multiple processing instances caused by the small number of TCP links between NFs in the 5GC system.
  • The disclosure provides a method for message load balancing. The method includes: requesting, by a first NF, a second NF to establish N TCP links, the N TCP links being configured to send HTTP messages; sending, by the second NF, a first setting frame to the first NF, the first setting frame carrying a first maximum number of concurrent streams; sending, by the second NF, a second setting frame to the first NF, the second setting frame carrying a second maximum number of concurrent streams, the second maximum number of concurrent streams being less than the first maximum number of concurrent streams; and requesting, by the first NF, the second NF to establish M TCP links based on a total number of current service requests and the second maximum number of concurrent streams, where the second maximum number of concurrent streams multiplied by a sum of M and N is equal to the total number of current service requests.
  • The disclosure provides a network device. The network device includes a processor, a memory and a communication bus. The communication bus is configured to implement communication between the processor and the memory.
  • The processor is configured to execute a service processing program stored in the memory to perform the above-mentioned service processing method.
  • The disclosure further provides a storage medium. The storage medium stores at least one of a first service processing program and a second service processing program. The first service processing program and the second service processing program, when executed by one or more processors, cause the one or more processors to perform the above-mentioned service processing method.
  • BRIEF DESCRIPTION OF DRAWINGS
    • FIG. 1 is a flow chart of a method for message load balancing according to the disclosure;
    • FIG. 2 is a schematic diagram of a message load balancing system according to an implementation of the disclosure;
    • FIG. 3 is a timing diagram of another message load balancing system according to an implementation of the disclosure; and
    • FIG. 4 is a schematic diagram of a network device according to the disclosure.
  • DETAILED DESCRIPTION
  • Objects, technical solutions and advantages of the present disclosure will become clearer from the following detailed description of embodiments of the disclosure in conjunction with the drawings. It should be understood that the embodiments described herein are only intended to explain, not to limit, the disclosure.
  • As shown in FIGS. 1 and 3, at step S101, a first NF serving as a client requests a second NF serving as a server to establish N TCP links, the N TCP links being configured to send HTTP messages.
  • In an implementation, the first NF serving as the client sends a TCP SYN (synchronize sequence numbers) packet to the second NF serving as the server, enters the SYN_SENT state, and waits for the server's acknowledgement. The second NF serving as the server acknowledges the client's SYN and sends its own SYN, i.e., a SYN+ACK (acknowledgement) packet, and enters the SYN_RECV state. The first NF receives the SYN+ACK packet from the second NF and sends an ACK packet back to the server. After this packet is sent, the first NF and the second NF enter the ESTABLISHED state, completing the three-way handshake. Once the three-way handshake of TCP completes, the link is successfully established; in this example, two initial TCP links are established between the first NF and the second NF.
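The handshake above can be sketched in a few lines; the operating system performs the SYN / SYN+ACK / ACK exchange inside connect(), so the client only opens N sockets (the host and port here are placeholders, not values from the patent):

```python
import socket

def establish_links(host: str, port: int, n: int) -> list[socket.socket]:
    """Open N TCP links to the server; the OS carries out the
    three-way handshake (SYN -> SYN+ACK -> ACK) inside connect()."""
    links = []
    for _ in range(n):
        s = socket.create_connection((host, port))  # blocks until ESTABLISHED
        links.append(s)
    return links
```

For the patent's example, the first NF would call establish_links(server_addr, server_port, 2) to create the two initial links.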
  • At step S102, the second NF sends a first setting frame to the first NF, the first setting frame carrying a first maximum number of concurrent streams.
  • In an implementation, after the initial TCP links are successfully established, the second NF serving as the server sends an initial SETTINGS frame to the first NF serving as the client, where the parameter SETTINGS_MAX_CONCURRENT_STREAMS (the current maximum number of concurrent streams) indicates the maximum number of HTTP requests that can be processed simultaneously on a link; in this example it has an initial value of 100.
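For reference, the SETTINGS frame layout comes from RFC 7540 (not from the patent itself): a 9-byte frame header (24-bit length, type 0x04 for SETTINGS, 8-bit flags, 31-bit stream identifier, which must be 0) followed by 16-bit identifier / 32-bit value pairs, where SETTINGS_MAX_CONCURRENT_STREAMS has identifier 0x3. A minimal serializer sketch:

```python
import struct

SETTINGS_MAX_CONCURRENT_STREAMS = 0x3  # setting identifier per RFC 7540

def settings_frame(max_concurrent_streams: int) -> bytes:
    """Serialize an HTTP/2 SETTINGS frame carrying a single
    SETTINGS_MAX_CONCURRENT_STREAMS entry."""
    # Payload: one setting = 16-bit identifier + 32-bit value
    payload = struct.pack(">HI", SETTINGS_MAX_CONCURRENT_STREAMS,
                          max_concurrent_streams)
    length = len(payload)
    # Header: 24-bit length, type=0x04 (SETTINGS), flags=0, stream id=0
    header = struct.pack(">BHBBI", (length >> 16) & 0xFF, length & 0xFFFF,
                         0x04, 0x00, 0)
    return header + payload
```

With the example's initial value, settings_frame(100) yields a 15-byte frame (9-byte header plus one 6-byte setting).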
  • In an implementation, as shown in FIG. 2, the first NF serving as the client sends 200 HTTP requests per second (that is, the total number of current service requests) to the second NF. Assume the average delay between the first NF sending an HTTP request and receiving the HTTP response from the second NF is 1 second, i.e., it takes 1 second for a stream to be created and released, so a complete group of streams can go from creation to release every second. With the maximum number of concurrent streams on a TCP link at the initial value of 100 specified in the initial SETTINGS frame sent by the second NF, the maximum number of requests that can be sent on one TCP link per second is 100 (streams) / 1 (second) = 100. If the message volume between the first NF and the second NF is at most 200 pairs per second, then by this formula the first NF only needs to establish two TCP links with the second NF to meet the requirement. At this time, two processing instances of the second NF carry one link each, and the third instance is idle.
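The capacity arithmetic in this example can be captured in a small helper (an illustrative sketch; the function name and signature are not from the patent):

```python
import math

def links_needed(total_requests_per_s: float, max_concurrent: int,
                 avg_delay_s: float = 1.0) -> int:
    """A link can carry max_concurrent streams at a time; with an
    average stream lifetime of avg_delay_s, one link completes
    max_concurrent / avg_delay_s requests per second. Round up to
    find how many links cover the offered load."""
    per_link_per_s = max_concurrent / avg_delay_s
    return math.ceil(total_requests_per_s / per_link_per_s)
```

With 200 requests per second and 100 concurrent streams per link, links_needed returns 2, matching the two initial links in the example.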
  • When the first NF sends a large number of HTTP messages to the second NF, the numbers of TCP links carried by the processing instances on the second NF serving as the server are unequal. Since the number of processing instances on the second NF is greater than 2, the numbers of messages processed by the instances are unequal, resulting in load imbalance. If multiple NFs establish links to the second NF in this way, the load imbalance among the instances of the second NF may become more serious.
  • In an implementation, the processing instance on the second NF is a virtual machine or a processing container.
  • At step S103, the second NF sends a second setting frame to the first NF, the second setting frame carrying a second maximum number of concurrent streams, the second maximum number of concurrent streams being less than the first maximum number of concurrent streams.
  • In an implementation, the second NF reduces its maximum number of concurrent streams to 80. This setting is applied to the first NF, and the parameter SETTINGS_MAX_CONCURRENT_STREAMS in the first NF is reduced to 80. At this time, the maximum number of messages that can be sent on the two TCP links per second is 2 × 80 = 160, which is less than 200 (the total number of current service requests) and thus cannot meet the service requirement.
  • In an implementation, when the number of processing instances on the second NF is greater than N, the second NF reduces the current maximum number of concurrent streams. At this time, the number of processing instances is 3, the number of TCP links is 2, and the second NF reduces the current maximum number of concurrent streams.
  • In an implementation, the second NF reads a load balancing configuration file from local configuration files, and reduces the maximum number of concurrent streams in the setting frame.
  • In an implementation, the second NF may receive a configuration instruction input by a user, and reduces the maximum number of concurrent streams in the setting frame.
  • At step S104, the first NF requests the second NF to establish M TCP links based on the total number of current service requests and the reduced maximum number of concurrent streams, where the second maximum number of concurrent streams multiplied by a sum of M and N is equal to the total number of current service requests.
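The equation in step S104 solves for M directly. A hedged sketch (using ceiling division, so that the example values of 200 requests, 80 streams, and N = 2 yield M = 1; the claim states the exact-equality case, but an integer solution need not exist for arbitrary inputs):

```python
import math

def additional_links(total_requests: int, second_max: int, n: int) -> int:
    """Smallest M such that second_max * (M + N) covers total_requests,
    i.e. the M satisfying the step-S104 relation when an exact integer
    solution exists."""
    m = math.ceil(total_requests / second_max) - n
    return max(m, 0)  # never request a negative number of links
```

In the running example, additional_links(200, 80, 2) returns 1, giving three links in total and a per-second capacity of 240 ≥ 200.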
  • In an implementation, the second NF distributes the M TCP links evenly across its processing instances. The second NF performs load balancing on the TCP SYN messages and, taking the distribution of the original N TCP links into account, distributes the newly established link and the original two links evenly across its processing instances.
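The even distribution can be sketched as a simple round-robin split (illustrative only; the patent does not specify the balancing algorithm):

```python
def distribute(num_links: int, num_instances: int) -> list[int]:
    """Round-robin split: per-instance link counts differ by at most one."""
    base, extra = divmod(num_links, num_instances)
    return [base + (1 if i < extra else 0) for i in range(num_instances)]
```

For the example's three links over three instances, distribute(3, 3) gives one link per instance, removing the idle instance from the initial configuration.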
  • In an implementation, the first NF uses the new links to send HTTP messages. If the new links still cannot carry the total number of current service requests, steps S102-S103 are repeated until the requirement is met.
  • In addition, this embodiment provides a network device. As shown in FIG. 4, the network device includes a processor 401, a memory 402, and a communication bus 403 for connecting the processor 401 and the memory 402. The memory 402 may be a storage medium storing at least one of a first service processing program and a second service processing program.
  • In an implementation, when the processor reads the first service processing program, the network device in this embodiment serves as the first NF serving as the client.
  • In an implementation, when the processor reads the second service processing program, the network device in this embodiment serves as the second NF serving as the server.
  • This embodiment provides a storage medium. The storage medium may store one or more computer programs that may be read, compiled and executed by one or more processors. In this embodiment, the storage medium stores at least one of the first service processing program and the second service processing program, which when executed by one or more processors, cause the one or more processors to carry out the method for message load balancing described in the disclosure.
  • The disclosure solves the problem of load imbalance among multiple processing instances caused by the small number of TCP links between NFs in the 5GC system.
  • It can be understood by those having ordinary skill in the art that all or some of the steps in the methods, and the functional modules/units in the systems and devices disclosed above, can be implemented as software, firmware, hardware and appropriate combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components. For example, a physical component may have multiple functions, or a function or a step may be cooperatively executed by several physical components. Some or all of the components may be implemented as software executed by a processor (such as a digital signal processor or a microprocessor), or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on a computer-readable medium, and the computer-readable medium may include a computer storage medium (or a non-transitory medium) and a communication medium (or a transitory medium). As is well known to those having ordinary skill in the art, the term computer storage medium includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storing information such as computer readable instructions, data structures, program modules or other data. The computer storage medium includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile disks (DVD) or other optical disk storages, magnetic cassettes, magnetic tapes, magnetic disk storages or other magnetic storage devices, or any other media that may be used to store desired information and may be accessed by a computer.
Furthermore, it is well-known to those having ordinary skill in the art that communication media typically contain computer-readable instructions, data structures, program modules or other data in a modulated data signal such as carriers or other transmission mechanisms, and may include any information delivery media.
  • In addition, it should be understood by those having ordinary skill in the art that the service processing method, system, network device and storage medium provided in each embodiment of the disclosure may be applied not only to the existing communication system, but also to any future communication system.

Claims (8)

  1. A method for message load balancing, comprising:
    requesting, by a first NF, a second NF to establish N TCP links, the N TCP links being configured to send HTTP messages;
    sending, by the second NF, a first setting frame to the first NF, the first setting frame carrying a first maximum number of concurrent streams;
    sending, by the second NF, a second setting frame to the first NF, the second setting frame carrying a second maximum number of concurrent streams, the second maximum number of concurrent streams being less than the first maximum number of concurrent streams; and
    requesting, by the first NF, the second NF to establish M new TCP links based on a total number of current service requests and the second maximum number of concurrent streams, wherein the second maximum number of concurrent streams multiplied by a sum of M and N is equal to the total number of current service requests, and M and N are both integers greater than 0.
  2. The method for message load balancing of claim 1, further comprising: distributing, by the second NF, the M TCP links evenly on processing instances of the second NF.
  3. The method for message load balancing of claim 1, wherein:
    the total number of current service requests is the number of HTTP messages sent by the first NF to the second NF.
  4. The method for message load balancing of claim 1, wherein the second NF sends the second setting frame to the first NF in response to a number of processing instances on the second NF being greater than N, the second setting frame carrying the second maximum number of concurrent streams, the second maximum number of concurrent streams being less than the first maximum number of concurrent streams.
  5. The method for message load balancing of claim 1, wherein the second NF receives a configuration instruction input by a user, reduces a maximum number of concurrent streams according to the configuration instruction, and sends the second setting frame to the first NF, the second setting frame carrying the second maximum number of concurrent streams, the second maximum number of concurrent streams being less than the first maximum number of concurrent streams.
  6. The method for message load balancing of claim 1, wherein the second NF reads a load balancing configuration file from local configuration files, and sends the second setting frame to the first NF, the second setting frame carrying the second maximum number of concurrent streams, the second maximum number of concurrent streams being less than the first maximum number of concurrent streams.
  7. A network device, comprising: a processor, a memory and a communication bus, wherein:
    the communication bus is configured to implement communication between the processor and the memory; and
    the processor is configured to execute a service processing program stored in the memory to perform the method for message load balancing of claim 1.
  8. A storage medium, storing at least one of a first service processing program and a second service processing program which, when executed by one or more processors, cause the one or more processors to perform the method for message load balancing of claim 1.
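The arithmetic in claim 1 (the second maximum number of concurrent streams multiplied by the sum of M and N equals the total number of current service requests) can be sketched as follows. The function name, the ceiling for non-divisible totals, and the clamping to zero are illustrative assumptions, not part of the claims:

```python
import math

def additional_links_needed(total_requests: int,
                            second_max_streams: int,
                            n_existing_links: int) -> int:
    """Compute M, the number of new TCP links the first NF requests.

    Per claim 1: second_max_streams * (M + N) == total_requests,
    so M = total_requests / second_max_streams - N. A ceiling is
    applied here (an illustrative assumption) so that totals not
    evenly divisible by the stream limit are still fully covered.
    """
    links_required = math.ceil(total_requests / second_max_streams)
    m = links_required - n_existing_links
    return max(m, 0)  # never request a negative number of new links

# Example: the second setting frame lowers the per-link limit to 20
# concurrent streams; with N = 2 existing links and 100 pending
# service requests, 5 links are required in total, so M = 3.
print(additional_links_needed(100, 20, 2))  # -> 3
```

If the existing N links already cover the total (e.g. 100 requests, limit 20, N = 5), the sketch returns 0 and no new links are requested.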
EP20843488.6A 2019-07-22 2020-05-29 Message load balancing method Withdrawn EP3975497A4 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910665215.8A CN112291180A (en) 2019-07-22 2019-07-22 Message load balancing method
PCT/CN2020/093225 WO2021012787A1 (en) 2019-07-22 2020-05-29 Message load balancing method

Publications (2)

Publication Number Publication Date
EP3975497A1 true EP3975497A1 (en) 2022-03-30
EP3975497A4 EP3975497A4 (en) 2022-04-27

Family

ID=74192523

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20843488.6A Withdrawn EP3975497A4 (en) 2019-07-22 2020-05-29 Message load balancing method

Country Status (3)

Country Link
EP (1) EP3975497A4 (en)
CN (1) CN112291180A (en)
WO (1) WO2021012787A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11330027B1 (en) * 2021-03-16 2022-05-10 Oracle International Corporation Methods, systems, and computer readable media for hypertext transfer protocol (HTTP) stream tuning for load and overload
US11888957B2 (en) 2021-12-07 2024-01-30 Oracle International Corporation Methods, systems, and computer readable media for locality and serving scope set based network function (NF) profile prioritization and message routing

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US10212041B1 (en) * 2016-03-04 2019-02-19 Avi Networks Traffic pattern detection and presentation in container-based cloud computing architecture
US10587668B2 (en) * 2017-10-18 2020-03-10 Citrix Systems, Inc. Method to determine optimal number of HTTP2.0 streams and connections for better QoE
CN109412960B (en) * 2018-10-17 2022-04-29 国网四川省电力公司经济技术研究院 High-concurrency TCP application congestion control method based on dynamic adjustment of TCP connection number

Also Published As

Publication number Publication date
WO2021012787A1 (en) 2021-01-28
EP3975497A4 (en) 2022-04-27
CN112291180A (en) 2021-01-29

Similar Documents

Publication Publication Date Title
USRE50184E1 (en) Intelligent work load manager
Venkatasubramanian et al. Load management in distributed video servers
CN108270732B (en) A kind of Streaming Media processing method and system
US20190312938A1 (en) Data Transmission Method And Apparatus
US11206706B2 (en) Method and apparatus for web browsing on multihomed mobile devices
CN103051551A (en) Distributed system and automatic maintaining method for same
WO2019000866A1 (en) Data processing method and internet of things (iot) gateway
EP3975497A1 (en) Message load balancing method
CN109246833B (en) Bearer configuration determination method, bearer configuration information sending method, bearer configuration determination device, bearer configuration information sending device, main base station and auxiliary base station
US10402280B2 (en) File transfer system and method, policy server, terminal and storage medium
CN112631788A (en) Data transmission method and data transmission server
WO2021057666A1 (en) Transmission control method, network management server, base station and storage medium
CN109361762A (en) A kind of document transmission method, apparatus and system
JP2020534760A (en) RBG division method and user terminal
CN101516131A (en) Method, system and device for data synchronization
CN107995233B (en) Method for establishing connection and corresponding equipment
CN115955437B (en) Data transmission method, device, equipment and medium
CN111131470B (en) Terminal device, data processing method thereof and data processing system
CN102111436B (en) Storage device and method for accessing storage device through internet small computer system interface (iSCSI)
US8780708B2 (en) Transmission control system
CN105847370A (en) Video file scheduling distribution or request method and system
US20020083211A1 (en) Method and apparatus for synchronizing calls in a server and client system
US11576181B2 (en) Logical channel management in a communication system
US20060069778A1 (en) Content distribution system
CN112491903B (en) Account checking method, device and system among multiple systems

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20211222

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

A4 Supplementary search report drawn up and despatched

Effective date: 20220325

RIC1 Information provided on ipc code assigned before grant

Ipc: H04L 65/40 20220101ALI20220321BHEP

Ipc: H04L 9/40 20220101ALI20220321BHEP

Ipc: H04L 67/1001 20220101ALI20220321BHEP

Ipc: H04L 67/02 20220101AFI20220321BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20220817