
CN112673596A - Service insertion at a logical gateway - Google Patents

Service insertion at a logical gateway

Info

Publication number
CN112673596A
Authority
CN
China
Prior art keywords
logical
service
data
network
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201980057472.1A
Other languages
Chinese (zh)
Other versions
CN112673596B (en)
Inventor
A·纳温
K·蒙达拉吉
R·米施拉
F·卡瓦迪亚
R·科甘蒂
P·罗兰多
冯勇
J·贾殷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
VMware LLC
Original Assignee
VMware, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/120,283 (US11595250B2)
Priority claimed from US16/120,281 (US10944673B2)
Application filed by VMware, Inc.
Priority to CN202310339981.1A (CN116319541A)
Publication of CN112673596A
Application granted
Publication of CN112673596B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/42 Centralised routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/64 Routing or path finding of packets in data switching networks using an overlay routing layer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0803 Configuration setting
    • H04L41/0806 Configuration setting for initial configuration or provisioning, e.g. plug-and-play
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/38 Flow based routing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Computer And Data Communications (AREA)

Abstract

Many companies and other entities use software-defined data centers (e.g., local data centers and/or public cloud data centers) to host their networks. Providers of software-defined data centers typically offer various network security options, but some entities wish to incorporate existing third-party security services (or other services) into their hosted networks. Therefore, techniques for more easily incorporating these services into virtual networks would be useful.

Description

Service insertion at a logical gateway
Background
Many companies and other entities use software-defined data centers (e.g., local data centers and/or public cloud data centers) to host their networks. Providers of software-defined data centers typically offer various network security options, but some entities want to incorporate existing third-party security services (or other services) into their hosted networks. Therefore, techniques for more easily incorporating these services into virtual networks would be useful.
Disclosure of Invention
Some embodiments provide a network management and control system that enables integration of third party services machines for handling data traffic entering and/or exiting a logical network. These third party services may include various types of non-packet forwarding services, such as firewalls, Virtual Private Network (VPN) services, Network Address Translation (NAT), load balancing, and the like. In some embodiments, the network management and control system manages the integration of these service machines, but does not manage the lifecycle of the machines themselves.
In some embodiments, a logical network includes at least one logical switch to which logical network endpoints (e.g., data compute nodes such as virtual machines, containers, etc.) are connected and logical routers for handling data traffic entering and/or exiting the logical network. Further, the logical network may include a plurality of logical switches logically connected to each other through the aforementioned logical router or another logical router. In some embodiments, the logical network includes multiple tiers of logical routers. The logical routers in the first tier connect groups of logical switches (e.g., the logical switches of a particular tenant). These first-tier logical routers are connected to logical routers in a second tier for data traffic sent to and from the logical network (e.g., data traffic from external clients connected to web servers hosted in the logical network, etc.). The second tier of logical routers is implemented at least in part in a centralized manner for handling connections to external networks, and in some embodiments third party service machines are attached to the centralized components of these logical routers. The logical network of other embodiments includes only a single tier of logical routers to which the third party service is attached.
In some embodiments, a network management and control system (hereinafter referred to as a network control system) receives both (i) configuration data defining a logical network (i.e., a logical switch, attachment of a data compute node to a logical switch, a logical router, etc.) and (ii) configuration data attaching a third party service to a logical router (i.e., a logical router handling connections to an external network). Based on this configuration data, the network control system configures various managed forwarding elements to implement the logical forwarding elements (logical switches, distributed aspects of logical routers, etc.) as well as other packet processing operations of the logical network (e.g., distributed firewall rules). Further, some embodiments configure certain managed forwarding elements operating on the gateway machine to implement a centralized logical routing component that handles the connection of the logical network to one or more external networks. Such managed forwarding elements on the gateway machine are also configured to redirect (e.g., using policy-based routing) at least a subset of the ingress and/or egress data traffic between the logical network and the external network to an attached third party service via a separate interface of the gateway.
In some embodiments, receiving configuration data for attaching a third party service includes several separate configuration inputs (e.g., from an administrator). After configuring the logical router, some embodiments receive (i) configuration data defining a service attachment interface of the logical router, (ii) configuration data defining a logical switch to which the service attachment interface connects, (iii) configuration data defining the service interface (e.g., the interface of the service machine to which data traffic is redirected), and (iv) configuration data connecting the service attachment interface of the logical router and the service interface to the logical switch. Further, in some embodiments, the administrator defines a rule or set of rules that specify which incoming and/or outgoing traffic is redirected to the service interface.
Some embodiments use a variety of different topologies to enable multiple services to connect to a logical router. For example, multiple services may be connected to the same logical switch, in which case the services all have interfaces in the same subnet, and may send data traffic directly between each other (if configured to do so). In this arrangement, the logical router may have a single interface (for traffic to all services) connected to the logical switch, or separate interfaces connected to the logical switch for each attached service. In other cases, a separate logical switch may be defined for each service (with a separate logical router interface connected to each logical switch). Further, multiple interfaces may be defined for each service machine for handling different sets of traffic (e.g., traffic to/from different external networks or different logical network subnets).
Further, in some embodiments, the service machine may be connected to the logical router via different types of connections. In particular, some embodiments allow the service machine to be connected in either of the following ways: (i) L2 bump-in-the-wire mode or (ii) L3 single-arm mode. In the L2 mode, two interfaces of the logical router are connected to two separate interfaces of the service machine via two separate logical switches, and data traffic is sent to the service machine via one of the interfaces and received back from the service machine via the other interface. Data traffic may be sent to the service machine via one interface for traffic entering the logical network and via the other interface for traffic exiting the logical network. In the L3 mode, a single interface is used on the logical router for each connection with the service machine.
Once configured, the gateway redirects some or all of the data traffic between the logical network and the external network to the service machine. As described above, some embodiments use a set of policy-based routing (PBR) rules to determine whether to redirect each data message. In some embodiments, the gateway applies these PBR rules to outgoing data messages after performing logical routing on the data messages, and applies the PBR rules to incoming data messages before performing logical routing and/or switching on the incoming data messages.
That is, for outgoing data messages, the gateway performs logical switching (if needed), then performs logical routing for the routing component connected to the external network to determine that the data message is in fact directed outside of the logical network, and then applies the PBR rules to determine whether to redirect the data message to a service. If the data message is redirected, the gateway forwards the data message to the external network when it returns from the service (if the data message is not dropped/blocked by the service).
For incoming data messages, the gateway applies PBR rules to determine whether to redirect the data message to a service before processing the data message through any logical forwarding elements. If the data message is redirected, the gateway then performs logical routing and switching, etc., on the data message as it returns from the service (if the data message is not dropped/blocked by the service) to determine how to forward the data message to the logical network.
In some embodiments, the PBR rules use a two-stage lookup to determine whether to redirect the data message (and to which interface to redirect the data message). In particular, rather than the PBR rule (i.e., a routing rule based on header fields other than the destination network address) directly providing the redirection details, each rule specifies a unique identifier. Each identifier corresponds to a service machine, and the gateway stores a dynamically updated data structure for each identifier. In some embodiments, these data structures indicate the type of connection to the service (e.g., L2 bump-in-the-wire or L3 single-arm), the network address of the interface of the service to which the data message is redirected (for the L2 mode, some embodiments use a virtual network address that resolves to the data link layer address of the gateway's return service attachment interface), dynamically updated state data, and a failover policy. The state data is dynamically updated based on the health/reachability of the service, which may be tested using a heartbeat protocol such as Bidirectional Forwarding Detection (BFD). In some embodiments, the failover policy specifies how to process data messages in the event that the service is unavailable. These failover policy options may include, for example, dropping data messages, forwarding data messages to their destination without redirection to the service, redirection to a backup service machine, and so forth.
The foregoing summary is intended to serve as a brief description of some embodiments of the invention. It is not intended to be an introduction or overview of all subject matter disclosed in this document. The embodiments described in this summary, as well as other embodiments, are further described in the following detailed description and the accompanying drawings referred to in the detailed description. Therefore, a thorough review of the summary, detailed description, and drawings is required in order to understand all embodiments described in this document. Furthermore, the claimed subject matter is not limited by the illustrative details in the summary, detailed description, and drawings, but rather is defined by the appended claims, as claimed subject matter may be embodied in other specific forms without departing from the spirit of the subject matter.
Drawings
The novel features believed characteristic of the invention are set forth in the appended claims. However, for the purpose of explanation, several embodiments of the invention are set forth in the following drawings.
Figure 1 conceptually illustrates an example logical network to which third party services of some embodiments may connect.
Figure 2 conceptually illustrates an example of connecting a third party services machine to a centralized router.
Figure 3 conceptually illustrates a process of some embodiments for configuring a gateway machine of a logical network to redirect incoming and/or outgoing data traffic to a third party service machine.
Figure 4 conceptually illustrates a centralized routing component with two service attachment interfaces connected to two separate service endpoint interfaces of a third party service machine via two separate logical switches.
Figure 5 conceptually illustrates a centralized routing component with one service attachment interface connected to two separate interfaces of a third party service machine via one logical switch.
Figure 6 conceptually illustrates a centralized routing component with one service attachment interface connected to interfaces of two different third party service machines via one logical switch.
Figure 7 conceptually illustrates a centralized routing component with two service attachment interfaces, each connected to a different one of the two service machines via a separate logical switch.
Fig. 8 illustrates the path of an incoming data message through the multiple logical processing stages implemented by a gateway managed forwarding element and a third party service machine connected in L3 single-arm mode.
Fig. 9 shows the path of an outgoing data message through the multiple logical processing stages implemented by the gateway MFE and third party service machine of Fig. 8.
Fig. 10 shows the path of an incoming data message through the multiple logical processing stages implemented by a gateway MFE and a third party service machine connected in L2 bump-in-the-wire mode.
Fig. 11 shows the path of an outgoing data message through the multiple logical processing stages implemented by the gateway MFE and third party service machine of Fig. 10.
Figure 12 conceptually illustrates a process of some embodiments for applying policy-based route redirection rules to data messages.
Fig. 13 shows a table of policy-based routing rules.
Figure 14 conceptually illustrates a data structure that is dynamically updated based on changes in the connection status of the service machine to which the data structure redirects data messages.
Figure 15 conceptually illustrates an electronic system with which some embodiments of the invention are implemented.
Detailed Description
In the following detailed description of the present invention, numerous details, examples, and embodiments of the invention are set forth and described. It is apparent, however, to one skilled in the art that the present invention is not limited to the embodiments set forth, and that the present invention may be practiced without some of the specific details and examples discussed.
Some embodiments provide a network management and control system that enables integration of third party services machines for handling data traffic entering and/or exiting a logical network. These third party services may include various types of non-packet forwarding services, such as firewalls, Virtual Private Network (VPN) services, Network Address Translation (NAT), load balancing, and the like. In some embodiments, the network management and control system manages the integration of these service machines, but does not manage the lifecycle of the machines themselves (hence these service machines are referred to as third party services).
In some embodiments, a logical network includes at least one logical switch to which logical network endpoints (e.g., data compute nodes such as virtual machines, containers, etc.) are connected and logical routers for handling data traffic entering and/or exiting the logical network. Further, the logical network may include a plurality of logical switches logically connected to each other through the aforementioned logical router or another logical router.
Figure 1 conceptually illustrates an example logical network 100 to which third party services of some embodiments may connect. As shown, the logical network 100 includes a level 0 logical router 105 (also referred to as a provider logical router), a level 1 logical router 110 (also referred to as a tenant logical router), and two logical switches 115 and 120. Data Compute Nodes (DCNs) 125-140 (e.g., virtual machines, containers, etc.) are attached to each of the logical switches 115 and 120. The DCNs 125-140 exchange data messages with each other and with one or more external networks 145 through a physical network that implements the logical network (e.g., within a data center).
Logical network 100 represents an abstraction of a network configured by a user of the network management and control system of some embodiments. That is, in some embodiments, a network administrator configures logical network 100 as a conceptual collection of logical switches, routers, etc., with policies applied to these logical forwarding elements. The network management and control system generates configuration data for physical managed forwarding elements (e.g., software virtual switches operating in the virtualization software of hosts, virtual machines and/or bare metal machines operating as logical network gateways, etc.) to implement these logical forwarding elements. For example, when one of the DCNs 125-140 hosted on a physical host sends a data message, in some embodiments a managed forwarding element executing in the virtualization software of that host processes the data message to implement the logical network. The managed forwarding element applies the logical switch configuration for the logical switch to which the DCN is attached, then applies the level 1 logical router configuration, etc., in order to determine the destination of the data message.
In some embodiments, as in this example, the logical network comprises a plurality of layers of logical routers. The logical routers in the first tier (e.g., tier 1 logical router 110) connect groups of logical switches (e.g., logical switches of a particular tenant). These first-level logical routers are connected to logical routers in a second hierarchy (e.g., level 0 logical router 105) for data traffic sent to and from the logical networks (e.g., data traffic from external clients connected to web servers hosted in the logical networks, etc.).
The network management and control system of some embodiments (hereinafter referred to as a network control system) defines a plurality of routing components for at least some of the logical routers. Specifically, the level 0 logical router 105 in this example has a distributed routing component 150 ("distributed router") and a centralized routing component 155 connected by an internal logical switch 160 called transit logical switch. In some cases, multiple centralized routers are defined for the level 0 logical router, each connected to the transit logical switch 160. For example, some embodiments define two centralized routers, one active and one standby.
In some embodiments, the distributed router 150 and transit logical switch 160 are implemented in a distributed manner (as are the logical switches 115 and 120 and the level 1 logical router 110), meaning that the first-hop managed forwarding element for a data message applies the policies of those logical forwarding elements to the data message. However, the centralized routers 155 are implemented in a centralized fashion (i.e., a single host implements each such centralized router). These centralized routers handle the connection of the logical network to external networks (e.g., to other logical networks implemented in the same or other data centers, to external web clients, etc.). The centralized router may perform various stateful services (e.g., network address translation, load balancing, etc.) and exchange routes with one or more external routers (using, for example, BGP or OSPF). Different embodiments may implement a centralized router using bare metal machines, virtual switches executing in the virtualization software of a host, or other contexts.
As mentioned, some embodiments allow an administrator to attach third party services to logical routers using the network control system. In some such embodiments, these third party services are attached to a centralized router that handles data traffic between logical network endpoints and external networks (e.g., centralized router 155 of a level 0 router). While the discussion that follows primarily refers to the connection of third party services to a level 0 logical router, in some embodiments third party services may also be connected to a level 1 logical router.
Figure 2 conceptually illustrates an example of connecting a third party services machine 200 to a centralized router 205. Specifically, in some embodiments, a network administrator defines a service attachment interface 210 on a logical router, a service endpoint 215 of a third party service machine, a particular logical switch 220 for service attachment, and attaches both the service attachment interface 210 and the service endpoint 215 to the logical switch 220. In some embodiments, the administrator provides this information through an Application Programming Interface (API) of the management plane of the network control system (e.g., using a network management application user interface that translates user interactions into API calls to the management plane).
In some embodiments, the management plane receives (i) configuration data defining a logical network (i.e., logical switches, attachment of data compute nodes to logical switches, logical routers, etc.), and (ii) configuration data attaching one or more third party services to logical routers that handle connection of the logical network to external networks. Based on this configuration data, the network control system configures the various managed forwarding elements to implement the logical forwarding elements (logical switches, distributed aspects of logical routers, etc.) as well as other packet processing operations for the logical network (e.g., distributed firewall rules). In some embodiments, the management plane generates configuration data based on the input and provides the configuration data to a central control plane (e.g., a set of centralized controllers). The central control plane identifies managed forwarding elements that require each atomic configuration data and distributes the configuration data to the local controllers for each identified managed forwarding element. These local controllers are then responsible for configuring managed forwarding elements (including gateway machines that implement centralized routers) to implement logical forwarding elements of the logical network, including redirecting appropriate data messages to third party services (e.g., according to policy-based routing rules provided by an administrator).
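To make this distribution flow concrete, the following Python sketch outlines how a central control plane might compute the span of each piece of configuration data and push it to the local controllers of the gateway machines that need it. This is an illustrative sketch only; the class and method names are assumptions and do not correspond to an actual network control system API.

    from dataclasses import dataclass, field

    @dataclass
    class ConfigEntity:
        key: str                                  # e.g. "redirect-rule:uplink-1"
        data: dict                                # one atomic piece of configuration
        span: set = field(default_factory=set)    # gateway/host IDs that need it

    class CentralControlPlane:
        def __init__(self, local_controllers):
            # local_controllers: {host_id: callable that programs that gateway MFE}
            self.local_controllers = local_controllers

        def compute_span(self, entity, placement):
            # Service attachment and redirection config is only needed on the
            # gateway machines implementing the relevant centralized component.
            entity.span = set(placement.get(entity.key, []))

        def distribute(self, entities, placement):
            for entity in entities:
                self.compute_span(entity, placement)
                for host_id in entity.span:
                    # The local controller converts this into the gateway
                    # datapath's own format before applying it.
                    self.local_controllers[host_id](entity.data)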
In some embodiments, receiving configuration data for attaching a third party service includes several separate configuration inputs (e.g., from an administrator). Figure 3 conceptually illustrates a process 300 of some embodiments for configuring a gateway machine of a logical network to redirect incoming and/or outgoing data traffic to a third party service machine. In some embodiments, process 300 is performed by a management plane of a network control system that receives input through API calls.
In the description of the process, it is assumed that a logical network has been configured and includes a logical router having at least one centralized component configured to handle data traffic entering and exiting the logical network. Some embodiments configure certain managed forwarding elements operating on the gateway machine to implement these centralized logical routing components that handle the connection of the logical network to one or more external networks.
As shown, the process 300 begins by receiving (at 305) an input to define a service attachment interface for a logical router. In some embodiments, the service attachment interface is a dedicated type of interface for the logical router. In different embodiments, the service attachment interface is defined by an administrator, either on a particular centralized component or, typically, on a logical router. In the latter case, the management plane either applies the interface to a particular one of the components (e.g., if the administrator defines that the service attachment interface will only handle traffic sent to or from a particular uplink interface of the logical router assigned to the particular centralized component), or creates a separate interface for each centralized component of the logical router. For example, in some embodiments, active and standby centralized routing components are defined and an interface is created on each of these components.
Next, the process 300 receives (at 310) an input to define a logical switch for connecting the logical router to a third party service. Further, the process receives (at 315) an input to attach a service attachment interface to the logical switch. In some embodiments, the creation of the logical switch is similar to the logical switch of the logical network to which the data compute node (e.g., VM, etc.) is attached. In other embodiments, the logical switch is defined by an administrator as a particular service attachment logical switch. The logical switch has a privately assigned subnet that (i) includes network addresses of service attachment interfaces attached to the logical switch, and (ii) only needs to include enough network addresses for any interfaces of third party services and any service attachment interfaces connected to the logical switch. For example, as shown below, a logical switch that connects a single logical router interface to a single third party service interface may be a "/31" subnet using a classless inter-domain routing (CIDR) notation. Even if the logical router performs route advertisement to external physical routers for logical network subnets (e.g., using BGP or OSPF), in some embodiments the subnet for the service attachment logical switch is not advertised (or entered into routing tables for the various logical router layers).
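To illustrate the "/31" sizing mentioned above, the following short Python listing uses the standard ipaddress module; the 169.254.10.0/31 value is an example chosen here for illustration, not a subnet from the patent.

    import ipaddress

    # A /31 contains exactly two addresses: one for the logical router's service
    # attachment interface and one for the third party service endpoint interface.
    service_link = ipaddress.ip_network("169.254.10.0/31")
    attach_ip, endpoint_ip = list(service_link)

    print(attach_ip)     # 169.254.10.0 -> service attachment interface
    print(endpoint_ip)   # 169.254.10.1 -> service endpoint interface

    # This subnet exists only to join the two interfaces, so it is not advertised
    # to external routers or installed in the other logical routing tables.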
In some embodiments, if the logical router includes multiple centralized components (e.g., active and standby components), and the service attachment interface corresponds to an interface on each of these components, the attachment service attachment interface actually attaches each of these interfaces to the logical switch. In this case, each centralized component interface has a separate network address in the subnet of the logical switch.
Next, process 300 receives (at 320) an input to define a service endpoint interface and receives (at 325) an input to attach the service endpoint interface to the logical switch (to which the service attachment interface of the logical router is attached). In some embodiments, the service endpoint interface represents an interface on a third party service machine. In some embodiments, when the administrator defines the endpoint interfaces to which the centralized routing component will connect, these interfaces may be service endpoint interfaces (also referred to as logical endpoint interfaces, which correspond to service machines and connect to service attachment interfaces through logical switches) or external interfaces (also referred to as virtual endpoint interfaces, which correspond to network addresses reachable from the centralized component). An external router interface is an example of this latter interface.
In addition, some embodiments require an administrator to define a third party services machine (either through the network control system or through a separate data center computing manager). For example, in some embodiments, a network administrator defines both a service type and a service instance (e.g., an instance of the service type). As mentioned above, the service endpoint interface should also have a network address within the subnet of the logical switch to which the interface is attached.
It should be understood that operations 305-325 need not occur in the particular order shown in fig. 3. For example, a network administrator may initially create two interfaces (a service attachment interface on a logical router and a service endpoint interface representing a third party service) and then subsequently create a logical switch and attach the interfaces to the logical switch.
In addition, process 300 receives (at 330) one or more rules for redirecting data messages to the service endpoint interface. In some embodiments, these are policy-based routing rules that (i) specify which incoming and/or outgoing traffic is to be redirected to a service interface, and (ii) are applied by the gateway machine independently of its usual routing operations. In some embodiments, the administrator defines the redirection rules according to one or more data message header fields, such as source and/or destination network addresses, source and/or destination transport layer ports, transport protocols, interfaces to receive data messages, and the like. For each service interface, the administrator may create a redirection rule or rules. For example, the redirected data messages may include all incoming and/or outgoing data messages for a particular uplink, data messages sent only from or to a particular logical switched subnet, and so on.
Finally, after receiving the above-mentioned configuration data, the process 300 configures (at 335) the gateway machine to implement the centralized logical router and redirection to the service endpoint interface. The process 300 then ends. If multiple centralized routing components have logical switch interfaces attached to the service endpoints, a gateway machine for each of these components is configured. In some embodiments, the management plane generates configuration data for the service attachment interface and redirection rules and provides this information to the central control plane. The central control plane identifies each gateway machine that needs this information and provides the appropriate configuration data to the local controller of that gateway machine. The local controller of some embodiments converts this configuration data into a format that is readable by the gateway machine (if it is not already in such a format) and directly configures the gateway machine to implement policy-based routing rules.
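The sequence of inputs received by process 300 can be pictured as the following Python sketch. The mgmt object and all of its method names are hypothetical stand-ins for management plane API calls, not an actual product interface; a mock object keeps the sketch runnable.

    from unittest.mock import MagicMock

    # 'mgmt' stands in for the management plane API; the call names are assumptions.
    mgmt = MagicMock()

    # 305: define a service attachment interface on the logical router.
    svc_attach = mgmt.create_service_attachment_interface(
        logical_router="tier0-router", name="fw-attach")

    # 310/315: define a service attachment logical switch and connect the
    # router interface to it.
    ls = mgmt.create_logical_switch(name="fw-switch", subnet="169.254.10.0/31")
    mgmt.attach_interface(ls, svc_attach, ip="169.254.10.0")

    # 320/325: define the service endpoint interface (representing the third
    # party service machine) and attach it to the same logical switch.
    svc_endpoint = mgmt.create_service_endpoint(
        service_instance="partner-firewall-1", name="fw-endpoint")
    mgmt.attach_interface(ls, svc_endpoint, ip="169.254.10.1")

    # 330: a policy-based routing rule redirecting selected traffic to the service.
    mgmt.create_redirect_rule(
        logical_router="tier0-router",
        match={"source_ip": "70.70.70.0/24", "destination_ip": "60.60.60.0/24"},
        redirect_to=svc_endpoint)

    # 335: the management plane then generates and pushes configuration to the
    # gateway machine(s) implementing the centralized routing component.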
Some embodiments use a variety of different topologies to enable multiple services to connect to a logical router. For example, multiple services may be connected to the same logical switch, in which case the services all have interfaces in the same subnet, and may send data traffic directly between each other (if configured to do so). In this arrangement, the logical router may have a single interface (for traffic to all services) connected to the logical switch, or a separate interface connected to the logical switch for each attached service. In other cases, a separate logical switch may be defined for each service (with a separate logical router interface connected to each logical switch). Further, multiple interfaces may be defined for each service machine for handling different sets of traffic (e.g., traffic to/from different external networks or different logical network subnets).
Fig. 4-7 conceptually illustrate several different such topologies for connecting a centralized routing component of a logical router to one or more service machines. Each of these figures shows a centralized router connected to one or more logical switches to which one or more service machines are also connected. It should be understood that these figures represent a logical view of the connections, and in some embodiments, the gateway machine implementing the centralized router will also implement the logical switch (es).
Figure 4 conceptually illustrates a centralized routing component 400 having two service attachment interfaces connected to two separate service endpoint interfaces of a third party service machine 405 via two separate logical switches 410 and 415. This topology essentially uses a separate service attachment interface and a separate logical switch for each connection to a third party service. In this example, each of the logical switches 410 and 415 is assigned a "/31" subnet that includes two network addresses. Since each logical switch is created specifically for connecting one service attachment interface of the centralized routing component 400 to the service machine 405, only two addresses are required for each switch. In some embodiments, the redirection rules for the router redirect data messages sent to and from each uplink to a different interface of the third party service machine (thereby using a different one of the service attachment interfaces).
Figure 5 conceptually illustrates a centralized routing component 500 having one service attachment interface that connects to two separate interfaces of a third party service machine 505 via one logical switch 510. In some embodiments, the administrator creates one logical switch with one service attachment interface on the centralized router component for each third party service machine, but defines multiple service endpoint interfaces for the third party service machine. In this case, the logical switch subnet accommodates a larger number of network addresses (in this example, a "/24" subnet is used). In some embodiments, the redirection rules are set to redirect data messages sent to and from each uplink to a different interface of the third party service machine via the same service attachment interface and logical switch. In some embodiments, using a setup with multiple service endpoint interfaces attached to the same logical switch on a service machine requires a third party service machine to use separate routing tables (e.g., virtual routing and forwarding instances) for each interface.
Figure 6 conceptually illustrates a centralized routing component 600 having one service attachment interface that connects to interfaces of two different third party service machines 605 and 610 via one logical switch 615. The service machines 605 and 610 in this scenario may provide two separate services (e.g., firewall and cloud extension services) or a primary machine and a standby machine for a single high availability service. In some embodiments, because the interfaces of service machines 605 and 610 are on the same logical switch, data messages may also be sent from one service to another. In this example, centralized routing component 600 has a single uplink; some embodiments using this configuration will include two service attachments and two logical switches, each connected to (different) interfaces of two service machines, to process data messages received for or destined for two different uplinks.
Figure 7 conceptually illustrates a centralized routing component 700 having two service attachment interfaces, each connected to a different one of two service machines 705 and 710 via separate logical switches 715 and 720. As with the previous example, the two service machines may provide two separate services, or may act as a host and backup for a single high availability service. In this example, the centralized routing component has a single uplink; some embodiments using this configuration will include two additional service attachments corresponding to each additional uplink, connected via separate logical switches to separate interfaces on each service machine. In these examples, using a separate interface on the serving machine corresponding to each different uplink allows the serving machine to apply a particular processing configuration to data messages sent to or received from each different uplink.
In addition to these various topologies, in some embodiments, third party service machines may also be connected to the centralized routing component via different types of connections. In particular, some embodiments allow service machines to be connected in either (i) L2 bump-in-the-wire mode or (ii) L3 single-arm mode. In the L2 mode shown in figs. 10 and 11, two interfaces of the logical router are connected to two separate interfaces of the service machine via two separate logical switches, and data traffic is sent to the service machine via one of the interfaces and received back from the service machine via the other interface. For traffic entering the logical network, data traffic may be sent to the service machine via one interface, and for traffic exiting the logical network, data traffic may be sent to the service machine via the other interface.
In the L3 mode shown in figs. 8 and 9, a single interface is used on the logical router for each connection with the service machine. Once configured, the gateway redirects some or all of the data traffic between the logical network and the external network to the service machine. As described above, some embodiments use a set of policy-based routing (PBR) rules to determine whether to redirect each data message. In some embodiments, the gateway applies these PBR rules to outgoing data messages after performing logical routing on the data messages, and applies the PBR rules to incoming data messages before performing logical routing and/or switching on the incoming data messages.
Fig. 8 illustrates the path (represented by dashed lines) of an incoming data message through the various logical processing stages implemented by the gateway managed forwarding element 800 and the third party service machine 805. As described above, in this example the third party service machine is connected in L3 single-arm mode. In this mode, the data message is sent to the network address of the third party service machine, which sends the data message back to the network address of the logical router's service attachment interface.
The gateway MFE 800 implements several stages of logical network processing, including policy-based routing (PBR) redirection rules 810, centralized routing component processing 815, service attachment logical switch processing 820, and additional logical processing 825 (e.g., transit logical switch processing, distributed routing component processing, processing for logical routers and/or logical switches of other tiers to which network endpoints are connected, etc.). In some embodiments, the gateway MFE 800 is a data path (e.g., a data path based on the Data Plane Development Kit (DPDK)) executing in a bare metal computer or a virtual machine.
For incoming data messages in fig. 8, the gateway MFE 800 applies the PBR rules 810 to determine whether to redirect the data message before processing the data message through any logical forwarding elements. In some embodiments, the gateway MFE also performs additional operations, such as IPSec and/or other locally applied services, before applying the PBR rules. The PBR rules, described in more detail below, identify whether a given data message is to be redirected (e.g., based on various data message header fields, such as source and/or destination IP addresses), how to redirect data messages that match a particular set of header field values, and so forth. In this case, the PBR rule 810 specifies an interface to redirect data messages to the third party services machine 805.
Based on this determination, centralized routing component process 815 identifies that the redirection interface corresponds to a service attachment logical switch, and thus gateway MFE 800 then performs this logical switch process 820. Based on this logical switch processing, the gateway MFE sends a data message (e.g., with encapsulation) to the third party service machine 805. The service machine 805 performs its service processing (e.g., firewall, NAT, cloud extension, etc.) and returns data messages to the gateway MFE (unless the service drops/blocks the data messages). Upon return of the data message from the service, the gateway MFE then performs centralized routing component processing 815 (e.g., routing based on the destination network address), and in turn performs additional logical processing operations 825. In some embodiments, data messages returned from the third party service machine are marked with a flag to indicate that the PBR rule need not be applied again. Based on these operations, the gateway MFE 800 sends the data message to its destination in the logical network (e.g., by encapsulating the data message and sending the data message to a host in the data center).
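The ordering just described (PBR applied first, logical routing applied once the message returns from the service) can be summarized in a short Python sketch of the incoming path. The stage functions below are schematic placeholders for stages 810-825, not actual datapath code; only the ordering and the "skip PBR on return" flag reflect the behavior described above.

    # Schematic sketch of the L3 single-arm incoming path of fig. 8.
    SKIP_PBR = "returned_from_service"    # flag set on messages returning from the service

    def pbr_lookup(msg):
        return "redirect"                 # placeholder for PBR rules 810

    def centralized_routing(msg):
        return msg                        # placeholder for routing component 815

    def additional_logical_processing(msg):
        return msg                        # placeholder for stages 825

    def process_incoming(msg, service):
        if not msg.get(SKIP_PBR):
            if pbr_lookup(msg) == "redirect":
                msg[SKIP_PBR] = True      # PBR is not applied again on return
                msg = service(msg)        # send to third party service machine 805
                if msg is None:
                    return None           # service dropped/blocked the message
        msg = centralized_routing(msg)    # centralized routing component 815
        return additional_logical_processing(msg)

    # Example: a permissive service that returns the message unchanged.
    print(process_incoming({"dst_ip": "60.60.60.10"}, service=lambda m: m))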
Fig. 9 shows the path (represented by dashed lines) of an outgoing data message through the various logical processing stages implemented by the gateway MFE 800 and the third party service machine 805. Upon receipt of the data message, the gateway MFE 800 first applies any logical network processing 825 that is needed before the centralized routing component, such as the transit logical switch (between the distributed routing component and the centralized routing component). In some cases, the layer 1 logical router will also have a centralized routing component implemented on the gateway MFE, in which case the additional logical processing may include that centralized routing component, the distributed routing component of the layer 0 logical router, the transit logical switches between them, and so forth.
The centralized routing component processing 815 identifies the uplink interface as its output interface, which results in application of the PBR rules 810. In this case, these rules also redirect outgoing data messages to the service machine 805, so the gateway MFE 800 again applies the centralized routing component processing 815 and then the service attachment logical switch processing 820, and sends the data message to the third party service machine 805. Assuming that the data message is not dropped by the service machine 805, the gateway MFE 800 receives the data message via its interface corresponding to the service attachment logical switch. At this point, the centralized routing component processing 815 again identifies the uplink as the output interface for that component, and the gateway MFE sends the data message to the external physical network router associated with the uplink. As described above, upon being received back from the service machine 805, the data message is marked with a flag so that, in some embodiments, the gateway MFE does not apply the PBR rules 810 again.
If the service machine is logically connected to the level 1 logical router, then in some embodiments the PBR rules (for outgoing data messages) are applied after the level 1 logical router processing and before the level 0 logical router processing. When the data message returns from the service machine, the gateway MFE then applies the level 0 distributed routing component, transit logical switch, and level 0 centralized routing component. Incoming traffic is similarly processed by applying the PBR rules after the level 0 distributed routing component and before the level 1 centralized routing component.
As described above, figs. 10 and 11 illustrate the use of L2 bump-in-the-wire mode to connect a service machine to a centralized routing component. Fig. 10 shows the path (represented by dashed lines) of an incoming data message through a number of logical processing stages implemented by gateway MFE 1000 and third party service machine 1005. In the L2 bump-in-the-wire mode, two interfaces of the logical router are associated with each connection to the service machine 1005. Data messages are sent to the service machine via one of the interfaces and returned via the other interface.
As shown in the examples of fig. 8 and 9, the gateway MFE 1000 implements PBR redirection rules 1010, centralized routing component processing 1015, and additional logical processing 1030. Gateway MFE 1000 also implements two separate service attachment logical switches 1020 and 1025 because there are two separate interfaces for connecting to service machine 1005. In some embodiments, the interface associated with the first logical switch 1020 is an "untrusted" interface, while the interface associated with the second logical switch 1025 is a "trusted" interface. In this figure, each centralized routing component service attachment interface is associated with a separate interface of gateway MFE 1000. However, in other embodiments, these service attachment interfaces share one gateway MFE interface.
For incoming data messages in fig. 10, the gateway MFE 1000 applies the PBR rules 1010 to determine whether to redirect the data message before processing the data message through any logical forwarding elements. In some embodiments, the gateway MFE also performs additional operations, such as IPSec and/or other locally applied services, before applying the PBR rules. The PBR rules, described in more detail below, identify whether a given data message is to be redirected (e.g., based on various data message header fields, such as source and/or destination IP addresses), how to redirect data messages that match a particular set of header field values, and so forth. In this case, the PBR rules 1010 specify redirection of the data message to the interface of the third party service machine 1005 that is associated with the first logical switch 1020.
Based on this determination, the centralized routing component processing 1015 identifies that the redirection interface corresponds to the first service attachment logical switch 1020. Because the service machine 1005 is connected in L2 bump-in-the-wire mode, the centralized routing component uses the MAC address of that interface as the source address of the redirected data message and uses the MAC address of the other service attachment interface (connected to the second logical switch 1025) as the destination address. This causes the data message to be returned by service machine 1005 to the second (trusted) interface.
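A minimal sketch of this MAC handling for the incoming direction follows; the addresses and the dictionary representation of the frame are illustrative assumptions rather than values from the patent.

    # Illustrative sketch of L2 bump-in-the-wire redirection for an incoming
    # data message (fig. 10). The MAC addresses are made-up example values.

    UNTRUSTED_ATTACH_MAC = "02:00:00:00:00:aa"   # interface on logical switch 1020
    TRUSTED_ATTACH_MAC = "02:00:00:00:00:bb"     # interface on logical switch 1025

    def redirect_incoming_l2(frame):
        # The IP header is left untouched; only the Ethernet header is rewritten
        # so that the service machine, acting as a bump in the wire, delivers the
        # frame back to the trusted service attachment interface.
        frame["src_mac"] = UNTRUSTED_ATTACH_MAC
        frame["dst_mac"] = TRUSTED_ATTACH_MAC
        return frame

    frame = {"src_ip": "70.70.70.5", "dst_ip": "60.60.60.10",
             "src_mac": None, "dst_mac": None}
    print(redirect_incoming_l2(frame))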
Gateway MFE 1000 then performs the logical switch processing 1020 and, based on this processing, sends the data message to the third party service machine 1005. The service machine 1005 performs its service processing (e.g., firewall, NAT, cloud extension, etc.) and returns the data message to the gateway MFE (unless the service drops/blocks the data message). Upon return of the data message from the service, the gateway MFE identifies the second logical switch 1025 for processing based on the destination address of the data message and/or the gateway MFE interface on which the message was received, then performs the processing of the centralized routing component 1015 (e.g., routing based on the destination network address), and in turn performs the additional logical processing operations 1030. In some embodiments, data messages returned from the third party service machine are marked with a flag to indicate that the PBR rules need not be applied again. Based on these operations, the gateway MFE 1000 sends the data message to its destination in the logical network (e.g., by encapsulating the data message and sending it to a host in the data center).
Fig. 11 shows the path (represented by dashed lines) of an outgoing data message through a number of logical processing stages implemented by gateway MFE 1000 and third party service machine 1005 connected in L2 bump-in-the-wire mode. Upon receipt of the data message, gateway MFE 1000 first applies any logical network processing 1030 that is required before the centralized routing component, such as the transit logical switch (between the distributed routing component and the centralized routing component). In some cases, the tier-1 logical router will also have a centralized routing component implemented on the gateway MFE, in which case the additional logical processing 1030 may include that centralized routing component, the distributed routing component of the tier-0 logical router, the transit logical switches between them, and so forth.
The centralized routing component processing 1015 then identifies the uplink interface as its output interface, which results in application of the PBR rules 1010. In this case, the rules redirect the outgoing data message to the service machine 1005 via the trusted interface attached to the second logical switch 1025. Thus, gateway MFE 1000 again applies the centralized routing component processing 1015, then applies the processing of the second service attachment logical switch 1025, and sends the data message to the third party service machine 1005. In this direction, the data message has the trusted interface MAC address as its source address and the untrusted interface MAC address as its destination address, traversing the path between the centralized routing component 1015 and the service machine 1005 in the reverse direction from the incoming data message.
Assuming that the data message is not dropped by service machine 1005, gateway MFE 1000 receives the data message via the interface corresponding to the first service attachment logical switch 1020. At this point, the centralized routing component processing 1015 again identifies the uplink as the output interface, and the gateway MFE sends the data message to the external physical network router associated with the uplink. As described above, in some embodiments the data message is marked with a flag when received back from the service machine 1005 so that the gateway MFE does not apply the PBR rules 1010 again.
In some embodiments, the PBR rule uses a two-stage lookup to determine whether to redirect the data message (and to which interface to redirect the data message). In particular, each rule specifies a unique identifier, rather than the PBR rule providing redirection detail information directly. Each identifier corresponds to a service machine, and the gateway stores a dynamically updated data structure for each identifier that provides detailed information on how to redirect data messages.
Figure 12 conceptually illustrates a process 1200 of some embodiments for applying policy-based route redirection rules to data messages. In some embodiments, the process 1200 is performed by a gateway MFE, such as those shown in figs. 8-11, when applying PBR rules to incoming (from an external network) or outgoing (from the logical network) data messages. This process 1200 will be described in part with reference to FIG. 13, which shows a set of PBR rules and the data structures for some of these rules.
As shown, process 1200 begins by receiving (at 1205) a data message for PBR processing. This may be a data message received from an external network via a logical router uplink or a data message sent by a logical network endpoint for which the gateway MFE has identified the uplink as an egress port of the centralized routing component. In some embodiments, process 1200 is not applied to data messages that have set a flag indicating that the data message is received from a third party service machine.
The process 1200 then performs (at 1210) a lookup for a set of PBR rules. In some embodiments, the rules are organized as a set of flow entries with matching conditions and actions for data messages that match each set of matching conditions. Depending on the context of the gateway data path, the PBR rule of some embodiments uses a hash table (or set of hash tables) that uses one or more hashes of a set of data message header fields. Other embodiments use other techniques to identify matching PBR rules.
Fig. 13 shows a table of PBR rules 1300. In this case, the rules match on both the source and destination IP addresses, but the PBR rules of some embodiments may also match on other header fields (and other combinations of header fields with source and/or destination IP addresses). For example, the first two matching conditions are the inverse of each other, one for handling incoming data messages (from 70.70.70.0/24 in the external network to the 60.60.60.0/24 subnet in the logical network) and the other for handling the corresponding outgoing data messages. The third matching condition matches any data message sent from the source subnet 20.20.20.0/24 (i.e., irrespective of the destination address). As described further below, the actions specify unique policy identifiers rather than specific redirect actions.
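The first stage of this lookup can be sketched in Python as follows. Only the subnets are taken from the example above; the rule-to-identifier mapping and the code itself are illustrative assumptions.

    import ipaddress

    # First lookup stage (illustrative): match on source/destination prefixes and
    # return a unique policy identifier rather than a concrete redirect action.
    # The identifier assigned to each rule here is an assumption for the example.
    PBR_RULES = [
        ("70.70.70.0/24", "60.60.60.0/24", "ABCDE"),   # incoming traffic
        ("60.60.60.0/24", "70.70.70.0/24", "FGHIJ"),   # corresponding outgoing traffic
        ("20.20.20.0/24", "0.0.0.0/0", "ZYXWV"),       # any destination
    ]

    def match_pbr(src_ip, dst_ip):
        src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
        for src_net, dst_net, policy_id in PBR_RULES:  # a real datapath would use
            if (src in ipaddress.ip_network(src_net)   # hash tables, not a scan
                    and dst in ipaddress.ip_network(dst_net)):
                return policy_id
        return None                                    # no match: forward normally

    print(match_pbr("70.70.70.8", "60.60.60.12"))      # -> ABCDE
    print(match_pbr("10.0.0.1", "8.8.8.8"))            # -> None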
Returning to FIG. 12, the process 1200 determines (at 1215) whether the data message matches any PBR rules based on a PBR lookup. In some embodiments, the PBR rule table includes a default (lowest priority) rule (or set of rules) for data messages that do not match any other rule. If the data message does not match any PBR rules (or only the default rules), the process forwards (at 1220) the data message to its destination without any redirection. Thus, outgoing data messages are sent to the appropriate physical router (after performing any additional IPSec or other local service processing), while incoming data messages begin logical processing at the centralized logical router.
On the other hand, if the data message matches one of the PBR rules, the process looks up (at 1225) the data structure for the unique identifier specified by the matched PBR rule. As shown in fig. 13, the action of each PBR rule does not directly specify that the matching data message is to be redirected to a particular next hop address. Instead, these actions specify a unique policy identifier that in turn maps to a corresponding dynamically updated data structure. That is, the gateway MFE is configured to store a data structure for each unique identifier specified in the PBR actions. These data structures may be database table entries or any other type of modifiable data structure. In some embodiments, the gateway MFE updates some or all of the fields of the data structure based on, for example, current network conditions.
In some embodiments, these data structures indicate the type of connection to the service (e.g., L2 bump-in-the-wire or L3 single-arm), the network address of the interface of the service to which the data message is redirected, dynamically updated state data, and a failover policy. The state data is dynamically updated based on the health/reachability of the service, which may be tested using a heartbeat protocol such as Bidirectional Forwarding Detection (BFD). In some embodiments, the failover policy specifies how to process data messages in the event that the service is unavailable.
Fig. 13 shows the contents of two of these data structures. The data structure 1305 for the unique identifier ABCDE indicates that the service machine to which the policy redirects data messages is connected in L2 bump-in-the-wire mode (so that data messages in the opposite direction, which match the second PBR rule, will be redirected to the same service machine in the opposite direction). The data structure 1305 also indicates a pseudo IP address for the redirection. This pseudo IP is not actually an address of the service machine, but instead resolves to the MAC address of a service attachment interface of the centralized routing component (e.g., the trusted interface of the centralized routing component, for incoming data messages) via which the data message will be returned. In some embodiments, this address resolution is performed using statically configured ARP entries.
In addition, the data structure 1305 specifies the current BFD status of the connection to the service machine (the connection is currently up) and a failover policy indicating how to handle data messages if the BFD status is down. It should be noted that while these examples use BFD, other mechanisms for monitoring the reachability of a service machine may be used as well (e.g., other heartbeat protocols, other measures of connection status, etc.). In this case, the failover policy indicates that data messages should be dropped if the service machine is unavailable. Other failover policy options may include, for example, forwarding the data message to its destination without redirection to a service, redirection to a backup service machine, etc.
The data structure 1310 for the unique identifier ZYXWV indicates that the service machine to which the policy redirects data messages is connected in L3 single-arm mode, so the redirect IP address gives the address of the service machine interface (rather than a pseudo IP). The BFD status for this connection is also up, but in this case the failover policy provides for redirection to a backup service machine located at a different IP address on a different subnet (i.e., connected to a different logical switch).
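Using the sketch above, the two example structures might be populated as follows. The pseudo IP value shown for ABCDE is an assumption (the example does not give it); the 169.254.10.1 and 169.254.11.1 addresses come from the example described with Fig. 14.

    POLICIES = {
        "ABCDE": RedirectPolicy(
            connection_type=ConnectionType.L2_BUMP_IN_THE_WIRE,
            redirect_ip="169.254.10.254",   # assumed pseudo IP; resolves to the return interface MAC
            bfd_up=True,
            failover=FailoverPolicy.DROP,
        ),
        "ZYXWV": RedirectPolicy(
            connection_type=ConnectionType.L3_SINGLE_ARM,
            redirect_ip="169.254.10.1",     # address of the service machine interface
            bfd_up=True,
            failover=FailoverPolicy.REDIRECT_TO_BACKUP,
            backup_ip="169.254.11.1",       # backup service machine on a different subnet
        ),
    }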
Returning to FIG. 12, the process 1200 processes (at 1230) the data message according to the instructions in the data structure for the unique identifier. This may involve redirecting the data message to the next-hop IP address specified by the data structure, dropping the data message if the connection is down and the failover policy specifies dropping, or forwarding the data message according to the logical network processing if the connection is down and the failover policy specifies ignoring the redirection.
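Continuing the same sketch, the handling at operation 1230 could look like the following. The helper functions are placeholders for the actual datapath actions and are not part of the described implementation.

    def redirect(dm, next_hop_ip):        # placeholder: forward the data message to the given next hop
        return ("redirect", next_hop_ip)

    def drop(dm):                         # placeholder: discard the data message
        return ("drop", None)

    def forward_normally(dm):             # placeholder: regular logical network forwarding
        return ("forward", None)

    def process_with_policy(policy: RedirectPolicy, dm):
        """Act on a data message that matched a PBR rule, per its redirect policy."""
        if policy.bfd_up:
            return redirect(dm, policy.redirect_ip)       # normal case: send to the service
        if policy.failover is FailoverPolicy.DROP:
            return drop(dm)                               # service down, policy says discard
        if policy.failover is FailoverPolicy.REDIRECT_TO_BACKUP and policy.backup_ip:
            return redirect(dm, policy.backup_ip)         # service down, use the backup service
        return forward_normally(dm)                       # ignore redirection entirely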
As described above, the data structure for each redirection policy is dynamically updated by the gateway MFE. In some embodiments, a BFD thread executes on the gateway machine to (i) send BFD messages to the service machines and (ii) receive BFD messages from the service machines. For a service machine connected in L3 single-arm mode, the service machine also executes a BFD thread that sends BFD messages to the gateway. In L2 bump-in-the-wire mode, on the other hand, the BFD thread sends BFD messages to the service machine from one of the interfaces that connect the centralized routing component to the service machine and receives these messages back on the other interface. Some such embodiments send BFD messages out over both interfaces (such that BFD messages sent from the trusted interface are received at the untrusted interface, and vice versa). This process is described in greater detail in U.S. patent application 15/937,615, which is incorporated herein by reference. In some embodiments, one BFD thread executes on each gateway MFE and exchanges messages with all of the connected service machines, while in other embodiments separate BFD threads execute on the gateway MFE to exchange messages with each connected service machine. When the BFD thread detects that BFD messages are no longer being received from a particular service machine, the gateway MFE modifies the data structure for that service machine.
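A very simplified monitor loop in the same vein might look like the following. The timeout value, the last_rx bookkeeping, and the threading model are assumptions made for this sketch, not the described implementation.

    import threading
    import time

    BFD_TIMEOUT = 1.0                              # assumed detection interval, in seconds
    last_rx = {"ZYXWV": time.monotonic()}          # updated by the receive path on each BFD message

    def bfd_monitor(policies: dict, stop: threading.Event):
        """Mark a policy down when BFD messages stop arriving from its service machine."""
        while not stop.is_set():
            now = time.monotonic()
            for policy_id, policy in policies.items():
                policy.bfd_up = (now - last_rx.get(policy_id, 0.0)) < BFD_TIMEOUT
            time.sleep(0.1)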
Figure 14 conceptually illustrates a data structure 1310 being dynamically updated based on a change in the connection status of the service machine to which the data structure redirects data messages. The figure shows the data structure 1310 and the connections between the gateway machine 1400 and two service machines 1415 and 1420 over two stages 1405 and 1410.
In the first stage 1405, the data structure 1310 is in the same state as in fig. 13, indicating (per the BFD status) that the connection to the service machine endpoint interface 169.254.10.1 is currently up. The gateway machine 1400 executes a BFD thread 1425 in addition to operating the gateway MFE with its logical network processing, PBR rules, etc. The BFD thread 1425 regularly sends BFD messages to both the first service machine 1415 (at its interface with IP address 169.254.10.1) and the second service machine 1420 (at its interface with IP address 169.254.11.1). In addition, each of these service machines 1415 and 1420 executes its own BFD thread 1430 and 1435, respectively, which regularly sends BFD messages to the gateway machine. As indicated by the large X, at this stage 1405 the connection between the gateway machine 1400 and the first service machine 1415 goes down. This may occur due to a physical connection problem, a crash of the service machine 1415, etc. As a result, the BFD thread 1425 will no longer receive BFD messages from the service machine 1415.
In the second stage 1410, the connection between the gateway machine 1400 and the service machine 1415 is down. In addition, the data structure 1310 has been dynamically updated by the gateway MFE to indicate that the BFD status is down. As a result of the failover policy specified by this data structure 1310, data messages with a source IP in the subnet 20.20.20.0/24 will be redirected to the 169.254.11.1 interface of the second service machine 1420 until the connection to the first service machine 1415 comes back up.
In some embodiments, multiple threads may write to the data structures 1305 and 1310. For example, some embodiments allow both the BFD thread and a configuration receiver thread to write to these data structures (e.g., to modify the BFD status and to make any configuration changes received from the network control system). In addition, one or more packet-processing threads read these data structures in order to perform packet lookups. Some embodiments enable these packet-processing threads to read from the data structures even while one of the writer threads is accessing the structures, so that packet processing is not interrupted by the writer threads.
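One way to realize this read-mostly pattern, sketched below under the assumption that writers build a replacement entry and swap it in with a single reference update, is for packet-processing reads to proceed without taking any lock while the writer threads serialize only among themselves. This is an illustrative pattern, not the described implementation.

    import dataclasses
    import threading

    _policies = {}                       # policy_id -> RedirectPolicy (current version)
    _write_lock = threading.Lock()       # serializes the BFD and configuration writer threads

    def read_policy(policy_id):
        return _policies.get(policy_id)  # lock-free read on the packet-processing path

    def update_policy(policy_id, **changes):
        with _write_lock:                # writers coordinate only among themselves
            current = _policies.get(policy_id)
            if current is not None:
                # build a new copy and swap the reference in a single step visible to readers
                _policies[policy_id] = dataclasses.replace(current, **changes)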
Figure 15 conceptually illustrates an electronic system 1500 for implementing some embodiments of the invention. Electronic system 1500 may be a computer (e.g., desktop computer, personal computer, tablet computer, server computer, mainframe, blade computer, etc.), telephone, PDA, or any other type of electronic device. Such an electronic system includes various types of computer-readable media and interfaces for various other types of computer-readable media. Electronic system 1500 includes a bus 1505, processing unit(s) 1510, a system memory 1525, a read-only memory 1530, a permanent storage device 1535, input devices 1540, and output devices 1545.
Bus 1505 generally represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of electronic system 1500. For example, bus 1505 communicatively connects processing unit(s) 1510 with read only memory 1530, system memory 1525, and persistent storage 1535.
Processing unit(s) 1510 retrieve the instructions to be executed and the data to be processed from the various memory units in order to perform the processes of the present invention. In different embodiments, the processing unit(s) may be a single processor or a multi-core processor.
Read-only memory (ROM) 1530 stores static data and instructions needed by processing unit(s) 1510 and other modules of the electronic system. The permanent storage device 1535, on the other hand, is a read-and-write memory device. It is a non-volatile memory unit that stores instructions and data even when the electronic system 1500 is turned off. Some embodiments of the invention use a mass storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 1535.
Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 1535, the system memory 1525 is a read-and-write memory device. Unlike the storage device 1535, however, the system memory is a volatile read-and-write memory, such as random access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the processes of the invention are stored in the system memory 1525, the permanent storage device 1535, and/or the read-only memory 1530. Processing unit(s) 1510 retrieve instructions to execute and data to process from these various memory units in order to perform the processes of some embodiments.
The bus 1505 also connects to the input and output devices 1540 and 1545. The input devices enable a user to communicate information to the electronic system and select commands. The input devices 1540 include alphanumeric keyboards and pointing devices (also referred to as "cursor control devices"). The output devices 1545 display images generated by the electronic system. The output devices include printers and display devices, such as cathode ray tubes (CRTs) or liquid crystal displays (LCDs). Some embodiments include devices such as a touchscreen that function as both input and output devices.
Finally, as shown in FIG. 15, bus 1505 also couples the electronic system 1500 to a network 1565 through a network adapter (not shown). In this manner, the computer can be part of a network of computers, such as a local area network ("LAN"), a wide area network ("WAN"), or an intranet, or a network of networks, such as the Internet. Any or all of the components of electronic system 1500 may be used in conjunction with the present invention.
Some embodiments include electronic components, such as microprocessors, storage devices, and memories, that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
Although the above discussion refers primarily to a microprocessor or multi-core processor executing software, some embodiments are performed by one or more integrated circuits, such as an Application Specific Integrated Circuit (ASIC) or a Field Programmable Gate Array (FPGA). In some embodiments, such integrated circuits execute instructions stored on the circuitry itself.
As used in this specification, the terms "computer," "server," "processor," and "memory" all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of this specification, the terms "display" or "displaying" mean displaying on an electronic device. As used in this specification, the terms "computer-readable medium," "computer-readable media," and "machine-readable medium" are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
Throughout this specification, reference is made to computing and network environments that include Virtual Machines (VMs). However, a virtual machine is only one example of a Data Compute Node (DCN) or a data compute end node (also referred to as an addressable node). The DCN may include a non-virtualized physical host, a virtual machine, a container that runs on top of a host operating system without the need for a hypervisor or a separate operating system, and a hypervisor kernel network interface module.
In some embodiments, VMs operate on a host with their own guest operating systems, using resources of the host virtualized by virtualization software (e.g., a hypervisor, virtual machine monitor, etc.). The tenant (i.e., the owner of the VM) can choose which applications to run on top of the guest operating system. Some containers, on the other hand, are constructs that run on top of a host operating system without the need for a hypervisor or separate guest operating system. In some embodiments, the host operating system uses namespaces to isolate containers from each other, and thus provides operating-system-level isolation of the different groups of applications that operate within different containers. This isolation is akin to the VM isolation offered in hypervisor-virtualized environments that virtualize system hardware, and thus can be viewed as a form of virtualization that isolates different groups of applications running in different containers. Such containers are more lightweight than VMs.
In some embodiments, the hypervisor kernel network interface module is a non-VM DCN that includes a network stack with a hypervisor kernel network interface and receive/send threads. One example of a hypervisor kernel network interface module is the vmknic module that is part of the ESXi™ hypervisor of VMware, Inc.
It should be appreciated that while this description refers to VMs, the examples given may be any type of DCN, including physical hosts, VMs, non-VM containers, and hypervisor kernel network interface modules. Indeed, in some embodiments, an example network may include a combination of different types of DCNs.
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. In addition, a number of the figures (including Figures 10 and 12) conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described. The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, a process could be implemented using several sub-processes, or as part of a larger macro process. Accordingly, it will be understood by one of ordinary skill in the art that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.

Claims (32)

1. A method, comprising:
receiving a definition of a logical network for implementation in a data center, the logical network comprising at least one logical switch to which a logical network endpoint is attached and a logical router for handling data traffic between the logical network endpoint in the data center and an external network;
receiving, via an additional logical switch designated for service attachment, configuration data attaching a third party service to at least one interface of the logical router, the third party service to perform non-forwarding processing on data traffic between a logical network endpoint and an external network; and
configuring a gateway machine in the data center to implement the logical router and redirect at least a subset of data traffic between the logical network endpoint and the external network to the attached third party service.
2. The method of claim 1, wherein receiving configuration data comprises:
receiving a definition of a logical router interface as a service attachment interface;
receiving a definition of an additional logical switch and a connection of a service attachment interface to the additional logical switch; and
receiving an attachment of the third party service to the additional logical switch.
3. The method of claim 2, wherein the definition of the additional logical switch designates the additional logical switch as an attached logical switch for a third party service.
4. The method of claim 2, wherein receiving configuration data further comprises receiving a definition of a third party service.
5. The method of claim 1, further comprising defining a distributed routing component and a centralized routing component for the logical router, wherein the distributed routing component is implemented by a plurality of machines including the gateway machine and the centralized routing component is implemented only by the gateway machine.
6. The method of claim 1, wherein the gateway machine is configured to redirect data traffic received from an external network prior to applying a logical router configuration to the data traffic.
7. The method of claim 6, wherein the gateway machine applies a logical router configuration to data traffic received from an external network after redirecting the data traffic to and receiving the data traffic back from a third party service.
8. The method of claim 1, wherein the gateway machine is configured to redirect data traffic directed to an external network after applying a logical router configuration to the data traffic.
9. The method of claim 1, wherein the third party service is a first third party service and the subset of data traffic between a logical network endpoint and an external network is a first subset of the data traffic, the method further comprising:
receiving, via the additional logical switch, configuration data that attaches a second third-party service to an interface of the logical router, the second third-party service also being used to perform non-forwarding processing on data traffic between a logical network endpoint and an external network; and
configuring the gateway machine to redirect a second subset of the data traffic to a second third-party service.
10. The method of claim 9, wherein the first third party service and the second third party service have interfaces with network addresses in a same subnet as the interface of the logical router.
11. The method of claim 1, wherein the interface is a first interface of the logical router, the additional logical switch is a first logical switch designated for service attachment, and the subset of data traffic between the logical network endpoint and an external network is a first subset of the data traffic, the method further comprising:
receiving, via a second logical switch designated for service attachment, configuration data that attaches the third party service to a second interface of the logical router; and
configuring the gateway machine to redirect a second subset of the data traffic to the third party service via the second logical switch.
12. The method of claim 11, wherein the third party service has separate interfaces with separate network addresses attached to the first and second logical switches.
13. The method of claim 1, wherein the configuration data attaches the third party service to two interfaces of the logical router, wherein the gateway machine is configured to direct incoming data traffic from an external network to the third party service via a first one of the interfaces and receive the incoming data traffic back from the third party service via a second one of the interfaces.
14. The method of claim 13, wherein the gateway machine is configured to direct outgoing data traffic from the logical network endpoint to the third party service via the second interface and receive the outgoing data traffic back from the third party service via the first interface.
15. A method for forwarding a data message, the method comprising:
performing a lookup to map a set of header fields of the data message to an identifier corresponding to a service performing non-forwarding processing on the data message;
retrieving, from a dynamically updated data structure for the identifier, instructions for forwarding the data message to the service; and
forwarding the data message according to the instructions retrieved from the data structure for the identifier.
16. The method of claim 15, wherein the method is performed by a gateway of a logical network implemented in a data center, the gateway for processing data messages between logical network endpoints operating in the data center and a physical network external to the data center.
17. The method of claim 16, wherein:
the logical network comprises at least one logical switch, to which the logical network endpoints are connected, and a logical router;
the logical router comprises a distributed routing component and one or more centralized routing components;
the gateway implements one of the centralized routing components to process data messages between the logical network endpoints and the physical network external to the data center.
18. The method of claim 15, wherein the lookup comprises a policy-based routing decision.
19. The method of claim 15, wherein the set of header fields includes at least a source network address of the data message.
20. The method of claim 15, wherein the service is a third party service virtual machine.
21. The method of claim 15, wherein the dynamically updated data structure specifies (i) an IP address for reaching the service, (ii) a reachability status of the service, and (iii) a failover policy for when the service is unreachable.
22. The method of claim 21, wherein the reachability status is dynamically updated based on a reachability protocol.
23. The method of claim 21, wherein the service is connected using a layer 2 (L2) bump-in-the-wire mode, wherein the IP address is a pseudo address corresponding to an interface of the gateway forwarding the data message.
24. The method of claim 23, wherein the interface is a first interface, wherein the gateway executes a Bidirectional Forwarding Detection (BFD) thread that transmits BFD messages to the service over a second interface and receives the BFD messages from the service over the first interface.
25. The method of claim 21, wherein the IP address is an address of a machine implementing the service.
26. The method of claim 21, wherein the failover policy specifies discarding data messages when the service is not reachable.
27. The method of claim 21, wherein the failover policy specifies routing data messages based on the destination network address when the service is not reachable.
28. The method of claim 21, wherein the failover policy specifies a backup service to which the data message is to be redirected when the service is not reachable.
29. The method of claim 21, wherein when the service is not reachable, the failover policy specifies one of: (i) discarding the data message, (ii) routing the data message based on the destination network address, and (iii) redirecting the data message to a backup service.
30. A machine readable medium storing a program which when executed by at least one processing unit implements the method of any of claims 1-29.
31. A computing device, comprising:
a set of processing units; and
a machine readable medium storing a program which when executed by at least one of the processing units implements the method of any of claims 1-29.
32. A system comprising means for implementing the method of any one of claims 1-29.