
US20230231825A1 - Routing for large server deployments - Google Patents

Routing for large server deployments

Info

Publication number
US20230231825A1
US20230231825A1
Authority
US
United States
Prior art keywords
request
node
nodes
url
computing system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US18/067,935
Inventor
Jeremy Goodsitt
Austin Walters
Fardin Abdi Taghi Abad
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Capital One Services LLC
Original Assignee
Capital One Services LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Capital One Services LLC
Priority to US18/067,935
Assigned to CAPITAL ONE SERVICES, LLC (Assignors: ABDI TAGHI ABAD, FARDIN; GOODSITT, JEREMY; WALTERS, AUSTIN)
Publication of US20230231825A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00Network arrangements, protocols or services for addressing or naming
    • H04L61/09Mapping addresses
    • H04L61/10Mapping addresses of different types
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004Server selection for load balancing
    • H04L67/1006Server selection for load balancing with static server selection, e.g. the same server being selected for a specific client
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/25Integrating or interfacing systems involving database management systems
    • G06F16/252Integrating or interfacing systems involving database management systems between a Database Management System and a front-end application
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L2101/00Indexing scheme associated with group H04L61/00
    • H04L2101/30Types of network names
    • H04L2101/33Types of network names containing protocol addresses or telephone numbers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/02Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]

Definitions

  • FIG. 1 is a diagram of a cluster computing system, according to some embodiments of the present disclosure.
  • FIG. 2 is a flow diagram showing processing that may occur within the system of FIG. 1 , according to some embodiments of the present disclosure.
  • FIG. 3 is another flow diagram showing processing that may occur within the system of FIG. 1 , according to some embodiments of the present disclosure.
  • a load balancer may route network traffic to particular backend servers using network address information encoded within incoming requests.
  • An incoming request may include the network address of a backend server encoded within a Uniform Resource Locator (URL), for example within the subdomain or path portion of the URL.
  • the load balancer may decode the network address from the subdomain/path portion of the URL, and then forward the request to the server at that address.
  • the load balancer may be able to route traffic to a large number of backend servers without having to maintain state information about those servers.
  • FIG. 1 illustrates a cluster computing system 100 , according to an embodiment of the present disclosure.
  • the system 100 includes a cluster node manager 102 , a database 104 , a cluster load balancer 106 , and a computer cluster 108 .
  • Users can interact with the system 100 using one or more client devices 112 , such as a desktop, laptop, tablet, or mobile computing device.
  • users can interact with the system 100 using a web browser executing on the client device 112 .
  • the various components of the system 100 and client devices 112 may be connected as illustrated in FIG. 1 or in any other suitable manner.
  • the illustrated components may be connected by one or more wireless or wireline computer networks.
  • the computer cluster (or “cluster”) 108 includes one or more nodes 110a, 110b, . . . , 110n (110 generally).
  • Each node 110 may correspond to a physical or virtual computer.
  • all of the nodes 110 have substantially the same hardware and/or software configuration.
  • each node 110 may comprise a Hypertext Transfer Protocol (HTTP) server and a copy of application software responsive to HTTP requests received by the HTTP server.
  • each node 110 may comprise an HTTP server, visualization tools, and an ML processor (none of which are shown in FIG. 1 ).
  • the ML processor may comprise TENSORFLOW™, an open-source software library for machine learning (ML).
  • the visualization tools may comprise TENSORBOARD™, an open-source tool to visualize neural network models (or other ML models) defined as “graphs” within a TENSORFLOW™ application.
  • the visualization tools and/or ML processor may utilize other open-source or proprietary ML software libraries known to one of ordinary skill in the art.
  • the visualization tools may include a web-based graphical user interface, and a client device 112 can connect to the HTTP server of a particular node 110 in order to access the node's visualization tools.
  • the cluster node manager (or “node manager”) 102 may be configured to manage nodes within the cluster 108 .
  • the node manager 102 may instantiate and de-instantiate nodes 110 based on system load or other criteria.
  • the node manager 102 may allocate nodes 110 based on load information obtained from individual nodes 110 and use this load information to dynamically adjust the number of nodes 110 .
  • the load information may include one or more metrics associated with the training of an ML model, such as a deep neural network model.
  • the cluster computing system 100 may be hosted within a cloud computing environment.
  • the node manager 102 may instantiate and de-instantiate nodes 110 using an Application Programming Interface (API) provided by the cloud computing environment.
  • the node manager 102 may maintain information about the state of the cluster 108 .
  • the node manager 102 maintains a lookup table including a network address for each allocated node 110 .
  • the network addresses are Internet Protocol (IP) addresses.
  • the IP addresses are internal to the system 100 , meaning that client devices 112 cannot directly connect to the nodes 110 using those IP addresses. Instead, the client devices 112 may connect to the nodes indirectly through the cluster load balancer 106 .
  • the node manager 102 stores state information within the database 104 .
  • the node manager 102 may include an HTTP server or other type of server that can process requests from the client device 112 .
  • the client device 112 can request, from the node manager 102 , information about nodes 110 currently allocated within the cluster 108 .
  • the node manager 102 may generate a Uniform Resource Locator (URL) for each node 110 in the cluster and return a list of node URLs to the client device 112 .
  • the node manager 102 may return the list of URLs as a web page which is rendered in a web browser of the client device 112 .
  • the URL generated for a particular node 110 may include an encoded representation of that node's network address within the system 100 .
  • for example, the URL generated for node 110a may include an encoded representation of this address within the subdomain portion of the URL.
  • alternatively, the node's address may be encoded within the path portion of the URL.
  • the node's network address may be encoded using hexadecimal encoding.
  • the network address may be encrypted within the URL to avoid exposing network address information outside of the system 100 and/or to prevent URL tampering.
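As a concrete sketch of the hexadecimal encoding described above, the snippet below packs an IPv4 address into the subdomain portion of a URL and recovers it again. The domain `cluster.example.com` and the function names are illustrative assumptions, not part of the disclosure (which leaves the exact URL format unspecified).

```python
import ipaddress

# Illustrative domain only; the disclosure does not specify one.
CLUSTER_DOMAIN = "cluster.example.com"

def encode_node_url(ip: str, domain: str = CLUSTER_DOMAIN) -> str:
    """Hex-encode a node's IPv4 address into the subdomain portion of a URL."""
    packed = ipaddress.IPv4Address(ip).packed  # 4 raw bytes, e.g. 0a 00 01 01
    return f"https://{packed.hex()}.{domain}/"

def decode_node_url(url: str) -> str:
    """Recover the IPv4 address from the hex subdomain of an encoded URL."""
    host = url.split("//", 1)[1].split("/", 1)[0]  # e.g. "0a000101.cluster.example.com"
    subdomain = host.split(".", 1)[0]              # e.g. "0a000101"
    return str(ipaddress.IPv4Address(bytes.fromhex(subdomain)))
```

Under this scheme, node address 10.0.1.1 becomes `https://0a000101.cluster.example.com/`, and the mapping can be inverted from the URL alone, without any per-node state.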
  • the cluster load balancer 106 manages network traffic between the client devices 112 and the cluster nodes 110 .
  • the cluster load balancer 106 may receive a request from a client device 112 , determine which cluster node 110 should handle the request, and then forward the client request to that node 110 .
  • the node 110 may process the client request, return a response to the cluster load balancer 106 , and the cluster load balancer 106 may forward the response to the client device.
  • cluster load balancer 106 may be configured to route client requests to particular nodes 110 specified by the request.
  • a client request may include a URL having an encoded representation of a node's network address.
  • the URL may correspond to a URL generated by the cluster node manager 102 and provided to the client device 112 , as described above.
  • the cluster load balancer 106 may be configured to decode the URL to determine the node's network address (e.g., IP address) and then forward the client request to the node using the decoded address.
  • for example, given a client request whose URL encodes the node address 10.0.1.1, the cluster load balancer 106 may decode the URL to recover that address and then forward the client request to it.
  • the cluster load balancer 106 can route client requests to particular nodes 110 without having to maintain any information about the nodes. Moreover, unlike existing load balancers, the techniques described herein do not require iterating through rules lists or using lookup tables and, thus, scale to an arbitrary number of nodes 110 without affecting the processing or storage requirements within the load balancer 106 .
  • the cluster load balancer 106 may utilize NGINX™, an open-source web server and load balancer, along with OpenResty® to route network traffic to the cluster nodes 110 .
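A stateless routing step of this kind might be sketched as follows. It handles both encodings discussed above (a hex label in the subdomain or in the first path segment); the URL shapes are assumptions for illustration, and a real deployment would implement this inside the load balancer itself (e.g., as an OpenResty handler) rather than in standalone Python.

```python
import ipaddress
from urllib.parse import urlsplit

def resolve_backend(request_url: str) -> str:
    """Derive the backend node's IP purely from the request URL; no lookup table."""
    parts = urlsplit(request_url)
    label = parts.hostname.split(".", 1)[0]
    try:
        # Subdomain variant, e.g. https://0a000101.cluster.example.com/...
        return str(ipaddress.IPv4Address(bytes.fromhex(label)))
    except ValueError:
        pass  # subdomain is not a hex-encoded address; try the path instead
    # Path variant, e.g. https://cluster.example.com/0a000101/...
    segment = parts.path.strip("/").split("/", 1)[0]
    return str(ipaddress.IPv4Address(bytes.fromhex(segment)))
```

Because the backend address is computed from the request itself, adding or removing nodes never requires updating the load balancer's configuration.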
  • a method 200 can be used for stateless routing in a cluster computing system, according to an embodiment of the present disclosure.
  • a URL may be generated for each node in a computer cluster (e.g., each node 110 in cluster 108 of FIG. 1 ).
  • Each URL may include an encoded representation of the network address for the corresponding node.
  • the URLs have a format similar to one of the URL formats described above in conjunction with FIG. 1 .
  • the URLs may be sent to a client device, such as client device 112 of FIG. 1 .
  • blocks 202 and 204 may be performed by a cluster node manager (e.g., cluster node manager 102 of FIG. 1 ).
  • the URLs are sent to the client device as a web page.
  • a request is received from a client device, the request including a first URL.
  • the first URL includes an encoded representation of a network address of a first node of the cluster (e.g., node 110 a in FIG. 1 ).
  • the first URL may be one of the URLs generated and sent to the client at blocks 202 and 204 .
  • the first URL is decoded to obtain the network address of the first node.
  • the request is forwarded to the first node using the decoded network address.
  • a response is received from the first node, e.g., after the first node has completed processing of the request.
  • the response is forwarded to the client device.
  • blocks 206 - 214 may be performed by a cluster load balancer (e.g., cluster load balancer 106 of FIG. 1 ).
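Blocks 202 and 204 of method 200 (URL generation at the cluster node manager) might look like the following sketch; the domain name and the minimal HTML layout are illustrative assumptions.

```python
import ipaddress

def node_urls(node_ips, domain="cluster.example.com"):
    """One URL per node, with the node's IPv4 address hex-encoded in the subdomain."""
    return [
        f"https://{ipaddress.IPv4Address(ip).packed.hex()}.{domain}/"
        for ip in node_ips
    ]

def node_list_page(node_ips):
    """Render the node URLs as a minimal web page for the client's browser."""
    items = "\n".join(f'<li><a href="{u}">{u}</a></li>' for u in node_urls(node_ips))
    return f"<html><body><ul>\n{items}\n</ul></body></html>"
```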
  • a method 300 may be performed by a client device (e.g., client device 112 of FIG. 1 ) to send a request to a particular node within a computer cluster (e.g., node 110a in cluster 108 of FIG. 1 ), according to an embodiment of the present disclosure.
  • a request is sent to a cluster node manager (e.g., cluster node manager 102 of FIG. 1 ).
  • the cluster node manager responds with a list of URLs, where each URL is associated with a node of the cluster and includes an encoded representation of that node's network address.
  • the URLs have a format similar to one of the URL formats described above in conjunction with FIG. 1 .
  • the cluster node manager responds with a web page including the list of URLs.
  • a request is sent to a cluster load balancer (e.g., cluster load balancer 106 of FIG. 1 ), the request including a first URL associated with the first node.
  • the cluster load balancer is configured to decode the first URL to obtain the first node's network address and then forward the request to the first node using the decoded network address.
  • a response is received from the cluster load balancer, the response being generated by the first node.
  • the cluster load balancer is configured to forward the response from the first node to the client device.
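The client-side flow of method 300 can be summarized in code; the two callables below stand in for real HTTP requests to the (hypothetical) node-manager and load-balancer endpoints.

```python
def client_flow(fetch_node_list, send_via_load_balancer, index=0):
    """Method 300 from the client's perspective.

    fetch_node_list: requests the node-URL list from the cluster node manager.
    send_via_load_balancer: sends a request containing one node URL to the
    cluster load balancer, which decodes it and forwards it to that node.
    """
    urls = fetch_node_list()               # first request -> list of node URLs
    chosen = urls[index]                   # pick the node to talk to
    return send_via_load_balancer(chosen)  # response generated by that node
```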
  • the subject matter described herein can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structural means disclosed in this specification and structural equivalents thereof, or in combinations of them.
  • the subject matter described herein can be implemented as one or more computer program products, such as one or more computer programs tangibly embodied in an information carrier (e.g., in a machine readable storage device), or embodied in a propagated signal, for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers).
  • a computer program (also known as a program, software, software application, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program does not necessarily correspond to a file.
  • a program can be stored in a portion of a file that holds other programs or data, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).
  • a computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read only memory or a random access memory or both.
  • the essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • Information carriers suitable for embodying computer program instructions and data include all forms of nonvolatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices, or magnetic disks.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)
  • Computer And Data Communications (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

In one aspect, the present disclosure relates to a method comprising: receiving, at a client device, information from a node manager about a plurality of nodes in a computer cluster, the information comprising a network address associated with each of the plurality of nodes; and sending, by the client device, a request to a load balancer to access a first node from the plurality of nodes, the request comprising a first URL including an encoded representation of the network address associated with the first node. The load balancer is configured to determine the request should be routed to a first network address based on decoding the URL, the first network address associated with a first node from the plurality of nodes, and forward the request to the first node in response to the determining.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This is a continuation of U.S. application Ser. No. 16/729,647, filed Dec. 30, 2019, which is a divisional of U.S. application Ser. No. 16/196,204, filed Nov. 20, 2018, now U.S. Pat. No. 10,523,628, issued Dec. 31, 2019, which is a continuation of U.S. application Ser. No. 15/892,795, filed Feb. 9, 2018, now U.S. Pat. No. 10,230,683, issued Mar. 12, 2019, which are incorporated by reference in their entireties.
  • BACKGROUND
  • As is known in the art, a computer cluster is a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system. Clusters with hundreds of computers (or “nodes”) may be used to perform complex distributed processing tasks such as deep neural network (DNN) machine learning. Clusters may be deployed to improve performance and availability while typically being much more cost-effective than single computers of comparable speed or availability. Cloud-based computing environments make it possible to allocate large clusters programmatically using Application Programming Interfaces (APIs) through which an administrator can instantiate and configure virtual machines (or “instances”) as desired or necessary.
  • As is also known in the art, cloud-based clusters and other large server deployments may utilize load balancers to distribute network traffic across physical and/or virtual servers. A load balancer may be provided as a software program that listens on a network port where external clients connect. The load balancer may forward client requests to one of the “backend” servers, which processes the request and sends a response back to the load balancer. Some load balancers may include routing capabilities. For example, existing load balancers may be configured to route certain types of requests to specific backend servers.
  • Traditionally, load balancers have had to maintain state information about the backend servers. For example, some existing load balancers maintain a lookup-table or prioritized list of backend servers. Processing a single request may involve iterating through long lists of rules in order to determine where to route the request. Moreover, before client requests can be routed to a particular backend server, the server must be registered with the load balancer. In cloud-based systems where the allocation of backend servers can change frequently, the load balancer must be updated often and can require complex rules to ensure proper routing of traffic. These problems are compounded when multiple load balancers are employed for redundancy or scalability.
  • SUMMARY
  • According to one aspect, the present disclosure relates to a method including: receiving a first request from a client device; generating a plurality of uniform resource locators (URLs), each of the plurality of URLs including an encoded representation of a network address associated with a respective node from a plurality of nodes in a computer cluster; sending a first response to the client device, the first response including the plurality of URLs; receiving a second request from the client device, the second request including a first URL from the plurality of URLs; determining the second request should be routed to a first network address based on decoding the first URL, the first network address associated with a first node from the plurality of nodes; and forwarding the second request to the first node in response to the determining.
  • In some embodiments, each of the plurality of URLs includes a hash of an Internet Protocol (IP) address associated with a respective node from the plurality of nodes. In certain embodiments, generating the plurality of URLs includes generating a web page including the plurality of URLs in response to receiving the request from the client device. In other embodiments, the method includes receiving a second response from the first node and forwarding the second response to the client device. In some embodiments, receiving the first request from the client device includes receiving the first request at a cluster node manager. In certain embodiments, receiving the second request from the client device includes receiving the second request at a cluster load balancer. In other embodiments, each of the plurality of URLs includes an encoded representation of a network address within the subdomain portion of the URL. In some embodiments, each of the plurality of URLs includes an encoded representation of a network address within the path portion of the URL.
  • According to another aspect, the present disclosure relates to a method including: sending a first request to a cluster node manager; receiving a first response from the cluster node manager, the first response including a plurality of uniform resource locators (URLs), each of the plurality of URLs including an encoded representation of a network address associated with a respective node from a plurality of nodes in a computer cluster; and sending a second request to a cluster load balancer, the second request including a first URL from the plurality of URLs. The cluster load balancer may be configured to: determine the second request should be routed to a first network address based on decoding the first URL, the first network address associated with a first node from the plurality of nodes; and forward the second request to the first node in response to the determining.
  • In some embodiments, the method includes receiving a second response from the cluster load balancer, the second response generated by the first node in response to receiving the second request forwarded by the cluster load balancer. In certain embodiments, each of the plurality of URLs includes a hash of an Internet Protocol (IP) address associated with a respective node from the plurality of nodes. In some embodiments, sending the first request to the cluster node manager includes sending a request to a Hypertext Transfer Protocol (HTTP) server. In some embodiments, each of the plurality of URLs includes an encoded representation of a network address within the subdomain portion of the URL. In certain embodiments, each of the plurality of URLs includes an encoded representation of a network address within the path portion of the URL.
  • According to yet another aspect, the present disclosure relates to a system including: a processor; a volatile memory; and a non-volatile memory storing computer program code. When executed on the processor, the computer program code causes the processor to execute a process operable to: receive a first request from a client device; generate a plurality of uniform resource locators (URLs), each of the plurality of URLs including an encoded representation of a network address associated with a respective node from a plurality of nodes in a computer cluster; send a first response to the client device, the first response including the plurality of URLs; receive a second request from the client device, the second request including a first URL from the plurality of URLs; determine the second request should be routed to a first network address based on decoding the first URL, the first network address associated with a first node from the plurality of nodes; and forward the second request to the first node in response to the determining.
  • In some embodiments, each of the plurality of URLs includes a hash of an Internet Protocol (IP) address associated with a respective node from the plurality of nodes. In certain embodiments, generating the plurality of URLs includes generating a web page including the plurality of URLs in response to receiving the request from the client device. In particular embodiments, the process is operable to receive a second response from the first node and forward the second response to the client device. In some embodiments, receiving the first request from the client device includes receiving the first request at a cluster node manager. In certain embodiments, receiving the second request from the client device includes receiving the second request at a cluster load balancer.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Various objectives, features, and advantages of the disclosed subject matter can be more fully appreciated with reference to the following detailed description of the disclosed subject matter when considered in connection with the following drawings.
  • FIG. 1 is a diagram of a cluster computing system, according to some embodiments of the present disclosure.
  • FIG. 2 is a flow diagram showing processing that may occur within the system of FIG. 1 , according to some embodiments of the present disclosure.
  • FIG. 3 is another flow diagram showing processing that may occur within the system of FIG. 1 , according to some embodiments of the present disclosure.
  • The drawings are not necessarily to scale, or inclusive of all elements of a system, emphasis instead generally being placed upon illustrating the concepts, structures, and techniques sought to be protected herein.
  • DETAILED DESCRIPTION
  • According to various embodiments of the present disclosure, a load balancer may route network traffic to particular backend servers using network address information encoded within incoming requests. An incoming request may include the network address of a backend server encoded within a Uniform Resource Locator (URL), for example within the subdomain or path portion of the URL. The load balancer may decode the network address from the subdomain/path portion of the URL, and then forward the request to the server at that address. Using this approach, the load balancer may be able to route traffic to a large number of backend servers without having to maintain state information about those servers.
  • FIG. 1 illustrates a cluster computing system 100, according to an embodiment of the present disclosure. The system 100 includes a cluster node manager 102, a database 104, a cluster load balancer 106, and a computer cluster 108. Users can interact with the system 100 using one or more client devices 112, such as a desktop, laptop, tablet, or mobile computing device. In some embodiments, users can interact with the system 100 using a web browser executing on the client device 112. The various components of the system 100 and client devices 112 may be connected as illustrated in FIG. 1 or in any other suitable manner. The illustrated components may be connected by one or more wireless or wireline computer networks.
  • In the embodiment of FIG. 1 , the computer cluster (or “cluster”) 108 includes one or more nodes 110 a, 110 b, . . . , 110 n (110 generally). Each node 110 may correspond to a physical or virtual computer. In some embodiments, all of the nodes 110 have substantially the same hardware and/or software configuration. For example, each node 110 may comprise a Hypertext Transfer Protocol (HTTP) server and a copy of application software responsive to HTTP requests received by the HTTP server.
  • In particular embodiments, the system 100 may be used for machine intelligence (MI) tasks. In such embodiments, each node 110 may comprise an HTTP server, visualization tools, and an MI processor (none of which are shown in FIG. 1 ). The MI processor may comprise TENSORFLOW™, an open-source software library for MI, and the visualization tools may comprise TENSORBOARD™, an open-source tool to visualize neural network models (or other MI models) defined as “graphs” within a TENSORFLOW™ application. In other embodiments, the visualization tools and/or MI processor may utilize other open-source or proprietary MI software libraries known to one of ordinary skill in the art. In certain embodiments, the visualization tools may include a web-based graphical user interface, and a client device 112 can connect to the HTTP server of a particular node 110 in order to access the node's visualization tools.
  • Referring again to the embodiment of FIG. 1 , the cluster node manager (or “node manager”) 102 may be configured to manage nodes within the cluster 108. For example, the node manager 102 may instantiate and de-instantiate nodes 110 based on system load or other criteria. In particular embodiments, the node manager 102 may allocate nodes 110 based on load information obtained from individual nodes 110 and use this load information to dynamically adjust the number of nodes 110. In particular embodiments, the load information may include one or more metrics associated with the training of an MI model, such as a deep neural network model.
  • In some embodiments, the cluster computing system 100 may be hosted within a cloud computing environment. In such embodiments, the node manager 102 may instantiate and de-instantiate nodes 110 using an Application Programming Interface (API) provided by the cloud computing environment.
  • Referring to the embodiment of FIG. 1 , the node manager 102 may maintain information about the state of the cluster 108. In some embodiments, the node manager 102 maintains a lookup table including a network address for each allocated node 110. In certain embodiments, the network addresses are Internet Protocol (IP) addresses. In some embodiments, the IP addresses are internal to the system 100, meaning that client devices 112 cannot directly connect to the nodes 110 using those IP addresses. Instead, the client devices 112 may connect to the nodes indirectly through the cluster load balancer 106. In certain embodiments, the node manager 102 stores state information within the database 104.
  • Referring again to the embodiment of FIG. 1 , the node manager 102 may include an HTTP server or other type of server that can process requests from the client device 112. In some embodiments, the client device 112 can request, from the node manager 102, information about nodes 110 currently allocated within the cluster 108. In certain embodiments, the node manager 102 may generate a Uniform Resource Locator (URL) for each node 110 in the cluster and return a list of node URLs to the client device 112. In some embodiments, the node manager 102 may return the list of URLs as a web page which is rendered in a web browser of the client device 112. In some embodiments, the URL generated for a particular node 110 may include an encoded representation of that node's network address within the system 100. For example, assuming a first node 110 a has IP address 10.0.1.1, the URL generated for node 110 a may include an encoded representation of this address within the subdomain portion of the URL as shown here:
  • http://inst-0a000101.example-url.com
  • As another example, the node's address may be encoded within the path portion of the URL, as shown here:
  • http://example-url.com/inst-0a000101/
  • In particular embodiments, the node's network address may be encoded using hexadecimal encoding. In some embodiments, the network address may be encrypted within the URL to avoid exposing network address information outside of the system 100 and/or to prevent URL tampering.
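  • The hexadecimal encoding described above can be sketched as follows. This is a minimal illustration, not part of the disclosure: the helper names (`encode_ip`, `subdomain_url`, etc.) are hypothetical, and `example-url.com` is the placeholder domain from the example URLs.

```python
import ipaddress

DOMAIN = "example-url.com"  # placeholder domain from the example URLs above

def encode_ip(ip: str) -> str:
    """Hex-encode a dotted-quad IPv4 address: '10.0.1.1' -> '0a000101'."""
    return format(int(ipaddress.IPv4Address(ip)), "08x")

def decode_ip(token: str) -> str:
    """Invert encode_ip: '0a000101' -> '10.0.1.1'."""
    return str(ipaddress.IPv4Address(int(token, 16)))

def subdomain_url(ip: str) -> str:
    """Encode the node's address in the subdomain portion of the URL."""
    return f"http://inst-{encode_ip(ip)}.{DOMAIN}"

def path_url(ip: str) -> str:
    """Encode the node's address in the path portion of the URL."""
    return f"http://{DOMAIN}/inst-{encode_ip(ip)}/"
```

  • Here `subdomain_url("10.0.1.1")` reproduces the first example URL above. The encrypted variant contemplated for tamper resistance would replace the bare hex token with ciphertext, with the decoding side holding the key.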
  • In some embodiments, the cluster load balancer 106 manages network traffic between the client devices 112 and the cluster nodes 110. In particular, the cluster load balancer 106 may receive a request from a client device 112, determine which cluster node 110 should handle the request, and then forward the client request to that node 110. In turn, the node 110 may process the client request, return a response to the cluster load balancer 106, and the cluster load balancer 106 may forward the response to the client device.
  • In some embodiments, cluster load balancer 106 may be configured to route client requests to particular nodes 110 specified by the request. For example, a client request may include a URL having an encoded representation of a node's network address. The URL may correspond to a URL generated by the cluster node manager 102 and provided to the client device 112, as described above. In such embodiments, the cluster load balancer 106 may be configured to decode the URL to determine the node's network address (e.g., IP address) and then forward the client request to the node using the decoded address. For example, if the client request includes the following URL:
  • http://inst-0a000101.example-url.com
  • the cluster load balancer 106 may decode this URL to determine node address 10.0.1.1 and then forward the client request to that address.
  • Whereas traditional load balancers may need to keep track of each backend server, the cluster load balancer 106 can route client requests to particular nodes 110 without having to maintain any information about the nodes. Moreover, unlike existing load balancers, the techniques described herein do not require iterating through rules lists or using lookup tables and, thus, scale to an arbitrary number of nodes 110 without affecting the processing or storage requirements within the load balancer 106.
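  • The stateless routing step can be sketched as follows, assuming the hex token format shown above; the function name and regular expression are illustrative assumptions, not part of the disclosure.

```python
import ipaddress
import re

# Matches an 'inst-' token of eight hex digits in a host name or a path.
TOKEN = re.compile(r"inst-([0-9a-f]{8})")

def upstream_for(host: str, path: str):
    """Recover the backend node's IP address from the request itself.

    No per-node state is consulted: the address is decoded from the
    URL's subdomain or path, so the balancer can reach any number of
    nodes without rule lists or lookup tables.
    """
    for source in (host, path):
        match = TOKEN.search(source)
        if match:
            return str(ipaddress.IPv4Address(int(match.group(1), 16)))
    return None  # no encoded address present; fall back to default handling
```

  • For example, a request with host inst-0a000101.example-url.com would be forwarded to 10.0.1.1, and a request for path /inst-0a000102/ to 10.0.1.2.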
  • In one embodiment, the cluster load balancer 106 utilizes NGINX™, an open-source web server and load balancer, along with OpenResty® to route network traffic to the cluster nodes 110.
  • Referring to FIG. 2 , a method 200 can be used for stateless routing in a cluster computing system, according to an embodiment of the present disclosure. At block 202, a URL may be generated for each node in a computer cluster (e.g., each node 110 in cluster 108 of FIG. 1 ). Each URL may include an encoded representation of the network address for the corresponding node. In some embodiments, the URLs have a format similar to one of the URL formats described above in conjunction with FIG. 1 . At block 204, the URLs may be sent to a client device, such as client device 112 of FIG. 1 . In certain embodiments, blocks 202 and 204 may be performed by a cluster node manager (e.g., cluster node manager 102 of FIG. 1 ). In some embodiments, the URLs are sent to the client device as a web page.
  • At block 206, a request is received from a client device, the request including a first URL. The first URL includes an encoded representation of a network address of a first node of the cluster (e.g., node 110 a in FIG. 1 ). The first URL may be one of the URLs generated and sent to the client at blocks 202 and 204. At block 208, the first URL is decoded to obtain the network address of the first node. At block 210, the request is forwarded to the first node using the decoded network address. At block 212, a response is received from the first node, e.g., after the first node has completed processing of the request. At block 214, the response is forwarded to the client device. In some embodiments, blocks 206-214 may be performed by a cluster load balancer (e.g., cluster load balancer 106 of FIG. 1 ).
  • Referring to FIG. 3 , a method 300 may be performed by a client device (e.g., client device 112 of FIG. 1 ) to send a request to a particular node within a computer cluster (e.g., node 110 a in cluster 108 of FIG. 1 ), according to an embodiment of the present disclosure. At block 302, a request is sent to a cluster node manager (e.g., cluster node manager 102 of FIG. 1 ). At block 304, the cluster node manager responds with a list of URLs, each URL being associated with a node of the cluster and including an encoded representation of that node's network address. In particular embodiments, the URLs have a format similar to one of the URL formats described above in conjunction with FIG. 1 . In some embodiments, the cluster node manager responds with a web page including the list of URLs.
  • At block 306, a request is sent to a cluster load balancer (e.g., cluster load balancer 106 of FIG. 1 ), the request including a first URL associated with the first node. In some embodiments, the cluster load balancer is configured to decode the first URL to obtain the first node's network address and then forward the request to the first node using the decoded network address. At block 308, a response is received from the cluster load balancer, the response being generated by the first node. In some embodiments, the cluster load balancer is configured to forward the response from the first node to the client device.
  • Although embodiments of the present disclosure have been described for use with cluster computing systems, the concepts sought to be protected herein can be utilized in any multi-server computing system.
  • The subject matter described herein can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structural means disclosed in this specification and structural equivalents thereof, or in combinations of them. The subject matter described herein can be implemented as one or more computer program products, such as one or more computer programs tangibly embodied in an information carrier (e.g., in a machine readable storage device), or embodied in a propagated signal, for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers). A computer program (also known as a program, software, software application, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file. A program can be stored in a portion of a file that holds other programs or data, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
  • The processes and logic flows described in this specification, including the method steps of the subject matter described herein, can be performed by one or more programmable processors executing one or more computer programs to perform functions of the subject matter described herein by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus of the subject matter described herein can be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
  • Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processor of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. Information carriers suitable for embodying computer program instructions and data include all forms of nonvolatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, flash memory device, or magnetic disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • It is to be understood that the disclosed subject matter is not limited in its application to the details of construction and to the arrangements of the components set forth in the following description or illustrated in the drawings. The disclosed subject matter is capable of other embodiments and of being practiced and carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting. As such, those skilled in the art will appreciate that the conception, upon which this disclosure is based, may readily be utilized as a basis for the designing of other structures, methods, and systems for carrying out the several purposes of the disclosed subject matter. It is important, therefore, that the claims be regarded as including such equivalent constructions insofar as they do not depart from the spirit and scope of the disclosed subject matter.
  • Although the disclosed subject matter has been described and illustrated in the foregoing exemplary embodiments, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of implementation of the disclosed subject matter may be made without departing from the spirit and scope of the disclosed subject matter.

Claims (21)

1-20. (canceled)
21. A method, comprising:
receiving, by a computing system, a request from a client device for access to a first node of a plurality of nodes of a computer cluster, the request comprising a first uniform resource locator (URL), the first URL comprising an encoded representation of a network address associated with the first node;
decoding, by the computing system, the first URL to determine a location to which the request should be routed; and
routing, by the computing system, the request to the determined location based on the decoding and without having prior knowledge about state information associated with the plurality of nodes.
22. The method of claim 21, further comprising:
receiving, by the computing system, a second request to access another node of the computer cluster from a second client device;
determining, by the computing system, that the second request does not include a URL of a desired node; and
routing, by the computing system, the second request to a second node of the plurality of nodes without having prior knowledge about the state information associated with the plurality of nodes.
23. The method of claim 21, further comprising:
generating, by the computing system, a plurality of URLs, each of the plurality of URLs comprising an encoded representation of a network address associated with a respective node from the plurality of nodes in the computer cluster.
24. The method of claim 21, further comprising:
identifying, by the computing system, state information associated with the plurality of nodes;
determining, by the computing system, that a load on the computer cluster exceeds a threshold amount; and
based on the determining, allocating, by the computing system, new nodes to the computer cluster.
25. The method of claim 21, wherein a path portion of the first URL comprises the encoded representation.
26. The method of claim 21, wherein a subdomain portion of the first URL comprises the encoded representation.
27. The method of claim 21, wherein the first URL comprises a hash of an Internet Protocol (IP) address associated with the first node.
28. A non-transitory computer readable medium comprising one or more sequences of instructions, which, when executed by a processor, causes a computing system to perform operations comprising:
receiving, by the computing system, a request from a client device for access to a first node of a plurality of nodes of a computer cluster, the request comprising a first uniform resource locator (URL), the first URL comprising an encoded representation of a network address associated with the first node;
decoding, by the computing system, the first URL to determine a location to which the request should be routed; and
routing, by the computing system, the request to the determined location based on the decoding and without having prior knowledge about state information associated with the plurality of nodes.
29. The non-transitory computer readable medium of claim 28, further comprising:
receiving, by the computing system, a second request to access another node of the computer cluster from a second client device;
determining, by the computing system, that the second request does not include a URL of a desired node; and
routing, by the computing system, the second request to a second node of the plurality of nodes without having prior knowledge about the state information associated with the plurality of nodes.
30. The non-transitory computer readable medium of claim 28, further comprising:
generating, by the computing system, a plurality of URLs, each of the plurality of URLs comprising an encoded representation of a network address associated with a respective node from the plurality of nodes in the computer cluster.
31. The non-transitory computer readable medium of claim 28, further comprising:
identifying, by the computing system, state information associated with the plurality of nodes;
determining, by the computing system, that a load on the computer cluster exceeds a threshold amount; and
based on the determining, allocating, by the computing system, new nodes to the computer cluster.
32. The non-transitory computer readable medium of claim 28, wherein a path portion of the first URL comprises the encoded representation.
33. The non-transitory computer readable medium of claim 28, wherein a subdomain portion of the first URL comprises the encoded representation.
34. The non-transitory computer readable medium of claim 28, wherein the first URL comprises a hash of an Internet Protocol (IP) address associated with the first node.
35. A system comprising:
a processor; and
a memory comprising one or more sequences of instructions, which, when executed by the processor, causes the system to perform operations comprising:
receiving a request from a client device for access to a first node of a plurality of nodes of a computer cluster, the request comprising a first uniform resource locator (URL), the first URL comprising an encoded representation of a network address associated with the first node;
decoding the first URL to determine a location to which the request should be routed; and
routing the request to the determined location based on the decoding and without having prior knowledge about state information associated with the plurality of nodes.
36. The system of claim 35, wherein the operations further comprise:
receiving a second request to access another node of the computer cluster from a second client device;
determining that the second request does not include a URL of a desired node; and
routing the second request to a second node of the plurality of nodes without having prior knowledge about the state information associated with the plurality of nodes.
37. The system of claim 35, wherein the operations further comprise:
generating a plurality of URLs, each of the plurality of URLs comprising an encoded representation of a network address associated with a respective node from the plurality of nodes in the computer cluster.
38. The system of claim 35, wherein the operations further comprise:
identifying state information associated with the plurality of nodes;
determining that a load on the computer cluster exceeds a threshold amount; and
based on the determining, allocating new nodes to the computer cluster.
39. The system of claim 35, wherein a path portion of the first URL comprises the encoded representation.
40. The system of claim 35, wherein a subdomain portion of the first URL comprises the encoded representation.
US18/067,935 2018-02-09 2022-12-19 Routing for large server deployments Pending US20230231825A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/067,935 US20230231825A1 (en) 2018-02-09 2022-12-19 Routing for large server deployments

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US15/892,795 US10230683B1 (en) 2018-02-09 2018-02-09 Routing for large server deployments
US16/196,204 US10523628B2 (en) 2018-02-09 2018-11-20 Routing for large server deployments
US16/729,647 US11570135B2 (en) 2018-02-09 2019-12-30 Routing for large server deployments
US18/067,935 US20230231825A1 (en) 2018-02-09 2022-12-19 Routing for large server deployments

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/729,647 Continuation US11570135B2 (en) 2018-02-09 2019-12-30 Routing for large server deployments

Publications (1)

Publication Number Publication Date
US20230231825A1 true US20230231825A1 (en) 2023-07-20

Family

ID=65529241

Family Applications (4)

Application Number Title Priority Date Filing Date
US15/892,795 Active US10230683B1 (en) 2018-02-09 2018-02-09 Routing for large server deployments
US16/196,204 Active US10523628B2 (en) 2018-02-09 2018-11-20 Routing for large server deployments
US16/729,647 Active US11570135B2 (en) 2018-02-09 2019-12-30 Routing for large server deployments
US18/067,935 Pending US20230231825A1 (en) 2018-02-09 2022-12-19 Routing for large server deployments

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US15/892,795 Active US10230683B1 (en) 2018-02-09 2018-02-09 Routing for large server deployments
US16/196,204 Active US10523628B2 (en) 2018-02-09 2018-11-20 Routing for large server deployments
US16/729,647 Active US11570135B2 (en) 2018-02-09 2019-12-30 Routing for large server deployments

Country Status (3)

Country Link
US (4) US10230683B1 (en)
EP (1) EP3525422B1 (en)
CA (1) CA3032673C (en)


Family Cites Families (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6952737B1 (en) * 2000-03-03 2005-10-04 Intel Corporation Method and apparatus for accessing remote storage in a distributed storage cluster architecture
US6779039B1 (en) * 2000-03-31 2004-08-17 Avaya Technology Corp. System and method for routing message traffic using a cluster of routers sharing a single logical IP address distinct from unique IP addresses of the routers
US8239445B1 (en) * 2000-04-25 2012-08-07 International Business Machines Corporation URL-based sticky routing tokens using a server-side cookie jar
US7480737B2 (en) * 2002-10-25 2009-01-20 International Business Machines Corporation Technique for addressing a cluster of network servers
EP1634423B1 (en) * 2003-06-06 2013-01-02 Computer Associates Think, Inc. System and method for compressing url request parameters
US20040260745A1 (en) * 2003-06-18 2004-12-23 Gage Christopher A. S. Load balancer performance using affinity modification
US7454489B2 (en) * 2003-07-01 2008-11-18 International Business Machines Corporation System and method for accessing clusters of servers from the internet network
US7631100B2 (en) * 2003-10-07 2009-12-08 Microsoft Corporation Supporting point-to-point intracluster communications between replicated cluster nodes
US7676599B2 (en) * 2004-01-28 2010-03-09 I2 Telecom Ip Holdings, Inc. System and method of binding a client to a server
US7779463B2 (en) * 2004-05-11 2010-08-17 The Trustees Of Columbia University In The City Of New York Systems and methods for correlating and distributing intrusion alert information among collaborating computer systems
US7864710B2 (en) * 2005-12-20 2011-01-04 Level 3 Communications, Llc System and method for routing signaling messages in a communication network
US7724746B2 (en) * 2006-08-02 2010-05-25 Cisco Technology, Inc. Method and system for providing load balanced traffic in redundant infiniband ethernet gateways network
EP1931114B1 (en) * 2006-12-08 2010-07-28 Ubs Ag Method and apparatus for detecting the IP address of a computer and location information associated therewith
US8254349B2 (en) * 2007-02-23 2012-08-28 Qualcomm Incorporated Routing data in a cluster
US8533453B2 (en) * 2008-03-12 2013-09-10 Go Daddy Operating Company, LLC Method and system for configuring a server and dynamically loading SSL information
US7962597B2 (en) * 2008-03-31 2011-06-14 Amazon Technologies, Inc. Request routing based on class
US7904345B2 (en) * 2008-06-10 2011-03-08 The Go Daddy Group, Inc. Providing website hosting overage protection by transference to an overflow server
US9268779B2 (en) * 2009-01-28 2016-02-23 Mckesson Financial Holdings Methods, computer program products, and apparatuses for dispersing content items
US8165122B2 (en) * 2009-05-26 2012-04-24 Alcatel Lucent System and method for converting unicast client requests into multicast client requests
US8825859B2 (en) * 2009-12-23 2014-09-02 Citrix Systems, Inc. System and methods for mixed mode of IPv6 and IPv4 DNS of global server load balancing
US9054943B2 (en) * 2009-12-23 2015-06-09 Citrix Systems, Inc. Systems and methods for mixed mode handling of IPv6 and IPv4 traffic by a virtual server
JP5087118B2 (en) 2010-09-28 2012-11-28 株式会社東芝 Communication equipment
US20150019353A1 (en) * 2012-02-06 2015-01-15 Adstruc, Inc. System for managing the utilization of a plurality of outdoor advertising units
ES2425627B1 (en) * 2011-05-12 2014-05-05 Telefónica, S.A. Method and tracker for content distribution through a content distribution network
US9813491B2 (en) * 2011-10-20 2017-11-07 Oracle International Corporation Highly available network filer with automatic load balancing and performance adjustment
US9866475B2 (en) * 2012-06-15 2018-01-09 Citrix Systems, Inc. Systems and methods for forwarding traffic in a cluster network
US10394611B2 (en) * 2012-11-26 2019-08-27 Amazon Technologies, Inc. Scaling computing clusters in a distributed computing system
US9686158B1 (en) * 2013-03-13 2017-06-20 United Services Automobile Association (Usaa) Point to node in a multi-tiered middleware environment
US9888055B2 (en) * 2013-03-15 2018-02-06 Profitbricks Gmbh Firewall for a virtual network and related techniques
US9515929B2 (en) * 2013-06-28 2016-12-06 Netronome Systems, Inc. Traffic data pre-filtering
EP3132589A4 (en) * 2014-04-15 2017-11-29 Level 3 Communications, LLC Geolocation via internet protocol
US9357076B2 (en) * 2014-06-06 2016-05-31 Cisco Technology, Inc. Load balancing of distributed media agents in a conference system
US9614687B2 (en) * 2014-06-06 2017-04-04 Cisco Technology, Inc. Dynamic configuration of a conference system with distributed media agents
US9509742B2 (en) * 2014-10-29 2016-11-29 DLVR, Inc. Configuring manifest files referencing infrastructure service providers for adaptive streaming video
US10142386B2 (en) * 2014-10-29 2018-11-27 DLVR, Inc. Determining manifest file data used in adaptive streaming video delivery
US10216770B1 (en) * 2014-10-31 2019-02-26 Amazon Technologies, Inc. Scaling stateful clusters while maintaining access
CN104320487B (en) * 2014-11-11 2018-03-20 网宿科技股份有限公司 HTTP scheduling system and method for a content delivery network
US11303604B2 (en) * 2015-03-31 2022-04-12 Conviva Inc. Advanced resource selection
WO2016186530A1 (en) * 2015-05-15 2016-11-24 Ringcentral, Inc. Systems and methods for determining routing information for a network request
US10073899B2 (en) * 2015-05-18 2018-09-11 Oracle International Corporation Efficient storage using automatic data translation
US10034201B2 (en) * 2015-07-09 2018-07-24 Cisco Technology, Inc. Stateless load-balancing across multiple tunnels
US20170359344A1 (en) * 2016-06-10 2017-12-14 Microsoft Technology Licensing, Llc Network-visitability detection control
US10645057B2 (en) * 2016-06-22 2020-05-05 Cisco Technology, Inc. Domain name system identification and attribution
US11038986B1 (en) * 2016-09-29 2021-06-15 Amazon Technologies, Inc. Software-specific auto scaling
US10560726B2 (en) * 2017-07-26 2020-02-11 CodeShop BV System and method for delivery and caching of personalized media streaming content
US10574444B2 (en) * 2018-01-22 2020-02-25 Citrix Systems, Inc. Systems and methods for secured web application data traffic
US10230683B1 (en) * 2018-02-09 2019-03-12 Capital One Services, Llc Routing for large server deployments

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060064478A1 (en) * 2004-05-03 2006-03-23 Level 3 Communications, Inc. Geo-locating load balancing
US20110185065A1 (en) * 2010-01-28 2011-07-28 Vladica Stanisic Stateless forwarding of load balanced packets
US20110283013A1 (en) * 2010-05-14 2011-11-17 Grosser Donald B Methods, systems, and computer readable media for stateless load balancing of network traffic flows
US20120036180A1 (en) * 2010-08-06 2012-02-09 Palo Alto Research Center Incorporated Service virtualization over content-centric networks
US9294408B1 (en) * 2012-07-02 2016-03-22 Amazon Technologies, Inc. One-to-many stateless load balancing
US20140040452A1 (en) * 2012-07-31 2014-02-06 Microsoft Corporation Processing requests
US20140297863A1 (en) * 2013-04-02 2014-10-02 Alibaba Group Holding Limited Managing redirected website login using a short address
US20170026321A1 (en) * 2013-09-05 2017-01-26 Aldo Ciavatta Method and system for establishing a communication between mobile computing devices
US20160248886A1 (en) * 2015-02-24 2016-08-25 Cisco Technology, Inc. Split-client constrained application execution in an industrial network
US20180041570A1 (en) * 2015-04-01 2018-02-08 Telefonaktiebolaget Lm Ericsson (Publ) System, Apparatus and Method for Load Balancing
US20180152509A1 (en) * 2015-06-16 2018-05-31 Amazon Technologies, Inc. Supporting heterogeneous environments during code deployment
US9392075B1 (en) * 2015-07-23 2016-07-12 Haproxy Holdings, Inc. URLs with IP-generated codes for link security in content networks
US20210409319A1 (en) * 2015-09-07 2021-12-30 Citrix Systems, Inc. Systems and methods for dynamic routing on a shared ip address
US20170331789A1 (en) * 2016-05-13 2017-11-16 Citrix Systems, Inc. Systems and methods for a unique mechanism of providing 'clientless sslvpn' access to a variety of web-applications through a sslvpn gateway
US20180295134A1 (en) * 2017-04-07 2018-10-11 Citrix Systems, Inc. Systems and methods for securely and transparently proxying saas applications through a cloud-hosted or on-premise network gateway for enhanced security and visibility
US20190238504A1 (en) * 2018-01-26 2019-08-01 Citrix Systems, Inc. Split-tunneling for clientless ssl-vpn sessions with zero-configuration
US20190306231A1 (en) * 2018-03-29 2019-10-03 Hewlett Packard Enterprise Development Lp Container cluster management

Also Published As

Publication number Publication date
EP3525422A1 (en) 2019-08-14
CA3032673C (en) 2024-06-11
US10230683B1 (en) 2019-03-12
US20200137020A1 (en) 2020-04-30
EP3525422B1 (en) 2022-04-20
US20190253379A1 (en) 2019-08-15
US10523628B2 (en) 2019-12-31
US11570135B2 (en) 2023-01-31
CA3032673A1 (en) 2019-08-09

Similar Documents

Publication Publication Date Title
US20230231825A1 (en) Routing for large server deployments
KR102425996B1 (en) Multi-cluster Ingress
US11095711B2 (en) DNS Resolution of internal tenant-specific domain addresses in a multi-tenant computing environment
US9973573B2 (en) Concurrency reduction service
US11349803B2 (en) Intelligent redirector based on resolver transparency
US11818209B2 (en) State management and object storage in a distributed cloud computing network
US12074918B2 (en) Network-based Media Processing (NBMP) workflow management through 5G Framework for Live Uplink Streaming (FLUS) control
CN113315706A (en) Private cloud flow control method, device and system
CN111712799B (en) Automatic distribution of models for execution on non-edge devices and edge devices
Lee et al. The impact of container virtualization on network performance of IoT devices
CN113994645A (en) Automatically replicating API calls to split data centers
CN108141704B (en) Location identification of previous network message processors
EP3481099B1 (en) Load balancing method and associated device
US20240007537A1 (en) System and method for a web scraping tool
US11579915B2 (en) Computing node identifier-based request allocation
US10958580B2 (en) System and method of performing load balancing over an overlay network
US10277472B2 (en) System and method for improved advertisement of dynamic storage endpoints and storage control endpoints
US20230421538A1 (en) Hostname based reverse split tunnel with wildcard support

Legal Events

Date Code Title Description
AS Assignment

Owner name: CAPITAL ONE SERVICES, LLC, VIRGINIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GOODSITT, JEREMY;WALTERS, AUSTIN;ABDI TAGHI ABAD, FARDIN;REEL/FRAME:062146/0445

Effective date: 20180209

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED