US20060029072A1 - Switching system for virtual LANs - Google Patents

Switching system for virtual LANs Download PDF

Info

Publication number
US20060029072A1
US20060029072A1 (application US11/248,708)
Authority
US
United States
Prior art keywords
elements
ingress
egress
switching
header
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/248,708
Inventor
Ananda Perera
Edwin Hoffman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Raptor Networks Technology Inc
Original Assignee
Raptor Networks Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Raptor Networks Technology Inc filed Critical Raptor Networks Technology Inc
Priority to US11/248,708 priority Critical patent/US20060029072A1/en
Assigned to RAPTOR NETWORK TECHNOLOGY, INC. reassignment RAPTOR NETWORK TECHNOLOGY, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HOFFMAN, EDWIN, PERERA, ANANDA
Publication of US20060029072A1 publication Critical patent/US20060029072A1/en
Assigned to AGILITY CAPITAL, LLC, BRIDGE BANK N.A. reassignment AGILITY CAPITAL, LLC SECURITY AGREEMENT Assignors: RAPTOR NETWORKS TECHNOLOGY, INC.
Assigned to AGILITY CAPITAL, LLC, BRIDGE BANK N.A. reassignment AGILITY CAPITAL, LLC SECURITY AGREEMENT Assignors: RAPTOR NETWORKS TECHNOLOGY INC.
Assigned to RAPTOR NETWORKS TECHNOLOGY, INC. reassignment RAPTOR NETWORKS TECHNOLOGY, INC. SECURITY AGREEMENT Assignors: AGILITY CAPITAL, LLC, BRIDGE BANK N.A.
Assigned to RAPTOR NETWORKS TECHNOLOGY, INC., RAPTOR NETWORKS TECHNOLOGY INC. reassignment RAPTOR NETWORKS TECHNOLOGY, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: AGILITY CAPITAL, LLC, BRIDGE BANK N.A.
Assigned to CASTLERIGG MASTER INVESTMENTS LTD., AS COLLATERAL AGENT reassignment CASTLERIGG MASTER INVESTMENTS LTD., AS COLLATERAL AGENT GRANT OF SECURITY INTEREST Assignors: RAPTOR NETWORKS TECHNOLOGY, INC.
Abandoned legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/35 Switches specially adapted for specific applications
    • H04L 49/351 Switches specially adapted for specific applications for local area network [LAN], e.g. Ethernet switches
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/30 Peripheral units, e.g. input or output ports
    • H04L 49/3009 Header conversion, routing tables or routing tags
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/55 Prevention, detection or correction of errors
    • H04L 49/552 Prevention, detection or correction of errors by ensuring the integrity of packets received through redundant connections
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/10 Packet switching elements characterised by the switching fabric construction
    • H04L 49/102 Packet switching elements characterised by the switching fabric construction using shared medium, e.g. bus or ring

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Small-Scale Networks (AREA)

Abstract

A switch encapsulates incoming information using a header, and removes the header upon egress. The header is used both by distributed ingress nodes and within a distributed core to facilitate switching. The ingress and egress elements preferably support Ethernet or other protocol providing connectionless media with a stateful connection. Preferred switches include management protocols for discovering which elements are connected, for constructing appropriate connection tables, for designating a master element, and for resolving failures and off-line conditions among the switches. Secure data protocol (SDP), port to port (PTP) protocol, and active/active protection service (AAPS) are all preferably implemented. Systems and methods contemplated herein can advantageously use Strict Ring Topology (SRT), and configure the topology automatically. Components of a distributed switching fabric can be geographically separated by at least one kilometer, and in some cases by over 150 kilometers.

Description

  • This application claims priority to provisional application number 60/511,145 filed Oct. 14, 2003; provisional application number 60/511,144 filed Oct. 14, 2003; provisional application number 60/511,143 filed Oct. 14, 2003; provisional application number 60/511,142 filed Oct. 14, 2003; provisional application number 60/511,141 filed Oct. 14, 2003; provisional application number 60/511,140 filed Oct. 14, 2003; provisional application number 60/511,139 filed Oct. 14, 2003; provisional application number 60/511,138 filed Oct. 14, 2003; provisional application number 60/511,021 filed Oct. 14, 2003; and provisional application number 60/563,262 filed Apr. 16, 2004, all of which are incorporated herein by reference in their entirety.
  • FIELD OF THE INVENTION
  • The field of the invention is network switches.
  • BACKGROUND
  • Modern computer networks typically communicate using discrete packets or frames of data according to predefined protocols. There are multiple such standards, including the ubiquitous TCP and IP standards. For all but the simplest local topologies, networks employ intermediate nodes between the end-devices. Bridges, switches, and routers are all examples of intermediate nodes.
  • As used herein, a network switch is any intermediate device that forwards packets between end-devices and/or other intermediate devices. Switches operate at the data link layer (layer 2) and sometimes the network layer (layer 3) of the OSI Reference Model, and therefore typically support any packet protocol. A switch has a plurality of input and output ports. Although a typical switch has only 8, 16, or some other relatively small number of ports, it is known to connect switches together to provide large numbers of inputs and outputs. Prior art FIG. 1 shows a typical arrangement of switch modules into a large switch that provides 128 inputs and 128 outputs.
  • One problem with simple embodiments of the prior art design of FIG. 1 is that failure of any given switch destroys integrity of the entire switching system. One solution is to provide entire redundant backup systems (external redundancy), so that a spare system can quickly replace functionality of a defective system. That solution, however, is overly expensive because an entire backup must be deployed for each working system. The solution is also problematic in that the redundant system must be engaged upon failure of substantially any component within the working system. Another solution is to provide redundant modules within the system, and to deploy those modules intelligently (internal redundancy). But that solution is problematic because all the components are situated locally to one another. A fire, earthquake or other catastrophe will still terminally disrupt the functionality of the entire system.
  • U.S. Pat. No. 6,256,546 to Beshai (March 2002) describes a protocol that uses an adaptive packet header to simplify packet routing and increase transfer speed among switch modules. Beshai's system is advantageous because it is not limited to a fixed cell length, such as the 53-byte length of an Asynchronous Transfer Mode (ATM) system, and because it reportedly has better quality of service and higher throughput than an Internetworking Protocol (IP) switched network. The Beshai patent is incorporated herein by reference, along with all other extrinsic material discussed herein.
  • Prior art FIG. 1A depicts a system according to Beshai's '546 patent. There, pluralities of edge modules (ingress modules 110A-D and egress modules 130A-D) are interconnected by a passive core 120. Each of the ingress modules 110A-D accepts data packets in multiple formats, adds a standardized header that indicates a destination for the packet, and switches the packets to the appropriate egress modules 130A-D through the passive core 120. At the egress modules 130A-D the header is removed from the packet, and the packet is transferred to a sink in its native format. The solid lines of 112A-112D depict unencapsulated information arriving at circuit ports, ATM ports, frame-relay ports, IP ports, and UTM ports. Similarly, the solid lines of 132A-D depict unencapsulated information exiting to the various ports in the native format of the information. The dotted lines of core 120 and facing portions of the ingress 110A-D and egress 130A-D modules depict information that is contained in UTM-headed packets. The entire system 100 operates as a single distributed switch, in which all switching is done at the edge (ingress and egress modules).
  • Despite numerous potential advantages, Beshai's solution in the '546 patent has significant drawbacks. First, although the system is described as a multi-service switch (with circuit ports, ATM ports, frame relay ports, IP ports, and UTM ports), there is no contemplation of using the switch as an Ethernet switch. Ethernet offers significant advantages over other protocols, including connectionless stateful communication. A second drawback is that the optical core is contemplated to be entirely passive. The routes need to be set up and torn down before packets are switched across the core. As such, Beshai does not propose a distributed switching fabric; he only discloses a distributed edge fabric with optical cross-connected cores. A third, related disadvantage is that Beshai's concept only supports a single channel from one module to another. All of those deficiencies reduce functionality.
  • Beshai publication no. 2001/0006522 (July 2001) resolves one of the deficiencies of the '546 patent, namely the single channel limitation between modules. In the '522 application Beshai teaches a switching system having packet-switching edge modules and channel switching core modules. As shown in prior art FIG. 1B, traffic entering the system through ports 162A is sorted at each edge module 160A-D, and switched to various core elements 180A-C via paths 170. The core elements switch the traffic on to the destination edge modules 160A-D for delivery to final destinations. Beshai contemplates that the core elements can use channel switching to minimize the potential wasted time in a pure TDM (time division multiplexing) system, and that the entire system can use time counter co-ordination to realize harmonious reconfiguration of edge modules and core modules.
  • Leaving aside the switching mechanisms between and within the core elements, the channel switching core of the '522 application provides nothing more than virtual channels between edge devices. It does not switch individual packets of data. Thus, even though the '522 application incorporates by reference Beshai's Ser. No. 09/244824 application regarding High-Capacity Packet Switch (issued as U.S. Pat. No. 6,721,271 in April 2004), the '522 application still fails to teach, suggest, or motivate one of ordinary skill to provide a fully distributed network (edge and core) that acts as a single switch.
  • What is still needed is a switching system in which the switching takes place both at the distributed edge nodes and within a distributed core, and where the entire system acts as a single switch.
  • SUMMARY OF THE INVENTION
  • The present invention provides apparatus, systems, and methods in which the switching takes place both at the distributed edge nodes and within a distributed core, and where the entire system acts as a single switch through encapsulation of information using a special header that is added by the system upon ingress, and removed by the system upon egress.
  • The routing header includes at least a destination element address, and preferably also includes a destination port address and a source element address. Where the system is configured to address clusters of elements, the header also preferably includes a destination cluster address and a source cluster address.
  • The ingress and egress elements preferably support Ethernet or other protocol providing connectionless media with a stateful connection. At least some of the ingress and egress elements preferably have at least 8 input ports and 8 output ports, and communicate at a speed of at least 1 Gbps, and more preferably at least 10 Gbps.
  • Preferred switches include management protocols for discovering which elements are connected, for constructing appropriate connection tables, for designating a master element, and for resolving failures and off-line conditions among the switches. Secure data protocol (SDP), port to port (PTP) protocol, and active/active protection service (AAPS) are all preferably implemented.
  • Systems and methods contemplated herein can advantageously use Strict Ring Topology (SRT), and configure the topology automatically. Other topologies can alternatively or additionally be employed. Components of a distributed switching fabric can be geographically separated by at least one kilometer, and in some cases by over 150 kilometers.
  • Various objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of preferred embodiments of the invention, along with the accompanying drawings in which like numerals represent like components.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1A is a schematic of a prior art arrangement of switch modules that cooperate to act as a single switch.
  • FIG. 1B is a schematic of a prior art arrangement of switch modules connected by an active core, but where the modules operate independently of one another.
  • FIG. 2 is a schematic of a true distributed fabric switching system, in which edge elements add or remove headers, and the core actively switches packets according to the headers.
  • FIG. 3 is a schematic of a routing header.
  • FIG. 4 shows a high level design of a preferred combination Ingress/Egress element.
  • FIG. 5 shows a high level design of a preferred core element.
  • FIG. 6 is a schematic of a Raptor™ 1010 switch.
  • FIG. 7 is a schematic of a Raptor™ 1808 switch.
  • FIG. 8 is a schematic of an exemplary distributed switching system according to preferred aspects of the present invention.
  • FIG. 9 is a schematic of a super fabric implementation of a distributed switching fabric.
  • DETAILED DESCRIPTION
  • In FIG. 2 a switching system 200 generally includes ingress elements 210A-C, egress elements 230A-C, core switching elements 220A-C and connector elements 240A-C. The ingress elements encapsulate incoming packets with a routing header (see FIG. 3), and perform initial switching. The encapsulated packets then enter the core elements for further switching. The connector elements facilitate communication between core elements. The egress elements remove the header, and deliver the packets to a sink or final destination.
  • Those skilled in the art will appreciate that the switching (encapsulation) header must, at a bare minimum, include a destination element address. In preferred embodiments the header also includes a destination port ID and, where elements are clustered, an optional destination cluster ID. Also optional are fields for source cluster, source element, and source port IDs. As used herein an "ID" is something that is the same as, or can be resolved into, an address. In FIG. 3 a preferred switching header 300 generally includes a Destination Cluster ID 310, a Destination Element ID 320, a Destination Port ID 330, a Source Cluster ID 340 and a Source Element ID 350. In this particular example, each of the fields has a length of at least 1 byte and up to 2 bytes. Those skilled in the art should also appreciate that the term "header" is used here in a loose sense to mean any additional routing data that is included in a package that encapsulates other information. The header need not be located at the head end of the frame or packet.
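  • To make the header layout of FIG. 3 concrete, the sketch below packs and unpacks the five fields as fixed-width integers. It is only an illustration: the patent allows each field to be 1 to 2 bytes, and the 2-byte width, big-endian order, and class/field names chosen here are assumptions.

```python
import struct
from dataclasses import dataclass

# Hypothetical fixed layout: five 2-byte fields, big-endian.
# The patent allows 1-2 bytes per field; 2 bytes each is assumed here.
HEADER_FMT = ">HHHHH"
HEADER_LEN = struct.calcsize(HEADER_FMT)  # 10 bytes

@dataclass
class SwitchingHeader:
    dst_cluster: int   # Destination Cluster ID 310
    dst_element: int   # Destination Element ID 320
    dst_port: int      # Destination Port ID 330
    src_cluster: int   # Source Cluster ID 340
    src_element: int   # Source Element ID 350

    def encapsulate(self, frame: bytes) -> bytes:
        """Prepend the routing header to a native frame on ingress."""
        packed = struct.pack(HEADER_FMT, self.dst_cluster, self.dst_element,
                             self.dst_port, self.src_cluster, self.src_element)
        return packed + frame

    @classmethod
    def decapsulate(cls, packet: bytes) -> tuple["SwitchingHeader", bytes]:
        """Split an encapsulated packet back into header and native frame."""
        fields = struct.unpack(HEADER_FMT, packet[:HEADER_LEN])
        return cls(*fields), packet[HEADER_LEN:]
```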
  • Ingress 210A-C and egress 230A-C elements are shown in FIG. 2 as distinct elements. In fact, they are similar in construction, and they may be implemented as a single device. Such elements can have any suitable number of ports, and can operate using any suitable logic. Currently preferred chips to implement the design are Broadcom's™ BCM5690, BCM5670, and BCM5464S chips, according to the detailed schematics included in one or more of the priority provisional applications.
  • FIG. 4 shows a high level design of a preferred combination ingress/egress element 400, which can be utilized for any of the ingress 210A-C and egress 230A-C elements. Ingress/Egress element 400 generally includes a logical switching frame 410, Ethernet ingress/egress ports 420A-L, encapsulated packet I/O port 430, layer 2 table(s) 440, layer 3 table(s) 450, and access control table(s) 460. Ingress/egress elements are the only elements that are typically assigned element IDs. When packets arrive at an ingress/egress port 420, it is assumed that all ISO layer 2 fault parameters are satisfied and the packet is correct. The destination MAC address is searched in the layer 2 MAC table 440, where the destination element ID and destination port ID are already stored. Once matched, the element and port IDs are placed into the switching header, along with the destination cluster ID, and source element ID. The resulting frame is then sent out to the core element.
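  • A minimal sketch of that ingress path follows, reusing the SwitchingHeader sketch above. The shape of the layer 2 table (a dict keyed by destination MAC holding cluster, element, and port IDs) is an assumption for illustration, not a structure taken from the patent.

```python
def ingress_switch(frame: bytes, dst_mac: bytes, l2_table: dict,
                   local_cluster_id: int, local_element_id: int) -> bytes:
    """Encapsulate an incoming Ethernet frame for delivery to the core element.

    l2_table maps a destination MAC address to the (cluster, element, port)
    IDs already learned for it; layer 2 validity of the frame is assumed to
    have been checked, as in the text.
    """
    dst_cluster, dst_element, dst_port = l2_table[dst_mac]   # lookup in table 440
    header = SwitchingHeader(dst_cluster=dst_cluster,
                             dst_element=dst_element,
                             dst_port=dst_port,
                             src_cluster=local_cluster_id,
                             src_element=local_element_id)
    return header.encapsulate(frame)    # resulting frame is sent to the core
```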
  • When an encapsulated frame arrives, the ID is checked to make sure the packet is targeted to the particular element at which it arrived. If there is a discrepancy, the frame is checked to determine whether it is a multicast or broadcast frame. If it is a multicast frame, the internal switching header is stripped and the resulting packet is copied to all interested parties (registered IGMP “Internet Group Management Protocol” joiners). If it is a broadcast frame, the RAST header is stripped, and the resulting packet is copied to all ports except the incoming port over which the frame arrived. If the frame is a unicast frame, the element ID is stripped off, and the packet is cut through to the corresponding physical port.
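  • The decision tree in that paragraph could be modeled roughly as below, again reusing SwitchingHeader. The helper callables `is_multicast`, `is_broadcast`, and `copy_to_ports` stand in for hardware behavior the patent does not spell out, and the silent drop of a mismatched unicast frame is an assumption.

```python
def egress_dispatch(packet: bytes, local_element_id: int, igmp_joiners: list,
                    all_ports: list, arrival_port: int,
                    is_multicast, is_broadcast, copy_to_ports) -> None:
    """Handle an encapsulated frame arriving at an ingress/egress element."""
    header, frame = SwitchingHeader.decapsulate(packet)
    if header.dst_element == local_element_id:
        # Unicast frame addressed to this element: strip the header and cut
        # the packet through to the corresponding physical port.
        copy_to_ports(frame, [header.dst_port])
    elif is_multicast(frame):
        # Element ID mismatch but multicast: copy to registered IGMP joiners.
        copy_to_ports(frame, igmp_joiners)
    elif is_broadcast(frame):
        # Broadcast: copy to every port except the one it arrived on.
        copy_to_ports(frame, [p for p in all_ports if p != arrival_port])
```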
  • Although ingress/egress elements could be single port, in preferred embodiments they would typically have multiple ports, including at least one encapsulated packet port, and at least one standards based port (such as Gigabit Ethernet). Currently preferred ingress/egress elements include 1 Gigabit Ethernet multi-port modules, and 10 Gigabit Ethernet single port modules. In other aspects of preferred embodiments, an ingress/egress element may be included in the same physical device with a core element. In that case the device comprises a hybrid core-ingress/egress device. See FIGS. 6 and 7.
  • FIG. 5 shows a high level design of a preferred core element 500, which can be utilized for any of the core switching elements 220A-C. Core element 500 generally includes a logical switching frame 510, a plurality of ingress and/or egress ports 520A-H, one or more unicast tables 530, and one or more multicast tables 540.
  • When an encapsulated frame arrives at an ingress side of any port in the core element, the header is read for the destination ID. The ID is used to cut the frame through to the specific egress side port for which the ID has been registered. The unicast table contains a list of all registered element IDs that are known to the core element. Elements become registered during the MDP (Management Discovery Protocol) phase of startup. The multicast table contains element IDs that are registered during the "discovery phase" of a multicast protocol's joining sequence, which is where the multicast protocol identifies an interested party; the core element uses these IDs to decide which ports take part in the hardware copy of the frames. If the element ID is not known to this core element, or the frame is designated a broadcast frame, the frame floods all egress ports.
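  • A rough model of that core-element decision is sketched below; the `unicast_table` and `multicast_table` shapes (element ID to egress port, and element ID to port list) and the ordering of the checks are illustrative assumptions.

```python
def core_forward(packet: bytes, unicast_table: dict, multicast_table: dict,
                 all_egress_ports: list, is_broadcast, send) -> None:
    """Forward a still-encapsulated frame through a core element."""
    header, frame = SwitchingHeader.decapsulate(packet)   # header is read only
    if header.dst_element in multicast_table:
        # Hardware copy to every port registered during the multicast join phase.
        ports = multicast_table[header.dst_element]
    elif header.dst_element in unicast_table and not is_broadcast(frame):
        # Registered unicast destination: cut through to its registered port.
        ports = [unicast_table[header.dst_element]]
    else:
        # Unknown element ID or broadcast frame: flood all egress ports.
        ports = all_egress_ports
    for port in ports:
        send(packet, port)   # the switching header stays on within the fabric
```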
  • Connector elements 240A-C (depicted in FIG. 2 as RAST™, for Raptor Adaptive Switch Technology™ Header) are low level devices that allow the core elements to communicate with other core elements over cables or fibers. They assist in enforcing protocols, but have no switching functions. Examples of such elements are XAUI over copper connectors and XAUI/XGMII over fiber connectors using MSA XFP modules.
  • FIG. 6 is a schematic of a preferred commercial embodiment of a hybrid core-ingress device, designated as a Raptor™ 1010 switch. The switch 600 generally includes two 10 GBase ingress elements 610A-B, two non-10 GBase ingress elements 615A-B, a core element 620, and intermediate connector elements 630A-D. The system is capable of providing 12.5 Gbps throughput.
  • FIG. 7 is a schematic of a preferred commercial embodiment of a hybrid core-ingress device, designated as a Raptor™ 1808 switch. The switch 700 could include eight 10 GBase ingress elements 710A-D, a core element 720, or eight intermediate connector elements 730A-D, or any combination of elements up to a total of eight.
  • In FIG. 8 a switching system 800 includes two of the Raptor™ 1010 switches 600A-B and four of the Raptor™ 1808 switches 700A-D, as well as connecting optical or other lines 810. The lines preferably comprise a 10 Gbps or greater backplane. In this embodiment the links between the 1010 switches can be 10-40 km at present, and possibly greater lengths in the future. The links between the core switches can be over 40 km.
  • Ethernet
  • A major advantage of the inventive subject matter is that it implements switching of Ethernet packets using a distributed switching fabric. Contemplated embodiments are not strictly limited to Ethernet, however. It is contemplated, for example, that an ingress element can convert SONET to Ethernet, encapsulate and route the packets as described above, and then convert back from Ethernet to SONET.
  • Topology
  • Switching systems contemplated herein can use any suitable topology. Interestingly, the distributed switch fabric contemplated herein can even support a mixture of ring, mesh, star and bus topologies, with looping controlled via Spanning Tree Avoidance algorithms.
  • The presently preferred topology, however, is a Strict Ring Topology (SRT), in which there is only one physical or logical link between elements. To implement SRT, each source element address is checked upon ingress via any physical or logical link into a core element. If the source element address is the one that is directly connected to the core element, the data stream will be blocked. If the source element address is not the one that is directly connected to this core element, the packet will be forwarded using the normal rules. A break in the ring can be handled in any of several known ways, including reversion to a straight bus topology, which would cause an element table update to all elements.
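  • The SRT ingress rule amounts to one comparison per physical or logical link. The sketch below assumes each core element keeps a map from its links to the source element directly attached on that link; the map name is illustrative.

```python
def srt_admit(src_element_id: int, arrival_link: int,
              directly_attached: dict) -> bool:
    """Return True if a frame may enter the core element over this link.

    Under Strict Ring Topology, a frame whose source element is the one
    directly connected to this core element is blocked, which stops a packet
    from looping around the ring back to where it entered.
    """
    return directly_attached.get(arrival_link) != src_element_id
```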
  • Management of the topology is preferably accomplished using element messages, which can advantageously be created and promulgated by an element manager unit (EMU). An EMU would typically manage multiple types of elements, including ingress/egress elements and core switching elements.
  • Management Discovery Protocol
  • In order for a distributed switch fabric to operate, all individual elements need to discover the elements contributing to the fabric. The process is referred to herein as Management Discovery Protocol (MDP). MDP discovers fabric elements that contain individual management units, and decides which element becomes the master unit and which become the backup units. Usually, MDP needs to be re-started in every element after power stabilizes, the individual management units have booted, and port connectivity is established. The sequence of a preferred MDP operation is as follows:
  • Each element transmits an initial MDP establish message containing its MAC address and a user-assigned priority number (0 is used if no priority is set). Each element also listens for incoming MDP messages containing the same information. As each element receives an MDP message, one of two decisions is made. If the received MAC address is lower than the MAC address assigned to the receiving element, the message is forwarded to all active links with the original MAC address, the link number it was received on, and the MAC address of the system that is forwarding the message. If a priority is set, the lowest priority (greater than 0) is treated as the lowest MAC address and processed as such. If, on the other hand, the received MAC address is higher than the MAC address assigned to the receiving element, then the message is not forwarded. If a priority is set that is higher than the received priority, the same process is carried out.
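  • The forwarding decision in that sequence can be read as a comparison of (priority, MAC address) pairs in which the lowest value wins. The sketch below encodes one reading of the text, treating priority 0 as "unset" and ranking any explicit priority ahead of an unset one; that interpretation is an assumption.

```python
def mdp_rank(priority: int, mac: str) -> tuple:
    """Election rank: explicit priorities (lowest wins) beat unset, then lowest MAC."""
    return (0, priority, mac) if priority > 0 else (1, 0, mac)

def should_forward(recv_priority: int, recv_mac: str,
                   own_priority: int, own_mac: str) -> bool:
    """Forward an MDP establish message only if its sender outranks this element."""
    return mdp_rank(recv_priority, recv_mac) < mdp_rank(own_priority, own_mac)
```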
  • Eventually the system identifies the MAC address of the master unit, and creates a connection matrix based on the MAC addresses of the discovered elements, their active port numbers, and the MAC addresses of each of their ports. This matrix is distributed to all elements, and forms the base of the distributed switch fabric. The matrix can be any reasonable size, including the presently preferred support for a total of 1024 elements.
  • As each new element joins an established cluster, it issues an MDP initialization message, which is answered with a stored copy of the adjacency table. The new element inserts its own information into the table, and issues an update element message to the master, which in turn checks the changes and issues an element update message to all elements.
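  • A sketch of that join handshake follows; the row layout and the callback names are invented here purely for illustration.

```python
def join_cluster(new_row: dict, stored_adjacency: list,
                 notify_master, broadcast) -> list:
    """Sketch of a new element joining an established cluster.

    The new element's MDP initialization message is answered with a stored
    copy of the adjacency table; the element inserts its own row and sends an
    update element message to the master, which checks the change and issues
    an element update message to all elements.
    """
    table = list(stored_adjacency)
    table.append(new_row)                    # e.g. {'mac': ..., 'ports': [...]}
    checked = notify_master("update_element", table)   # master validates changes
    broadcast("element_update", checked)                # redistributed to all
    return checked
```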
  • Heart Beat Protocol
  • The Heart Beat Protocol enables the detection of a failed element. If an element fails or is removed from the matrix, the Heart Beat Protocol (HBP) can be used to signal that a particular link to an element is not in service. The system running the HBP sends an element update message to the master, which then reformats the table and issues an element update message to all elements.
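  • A minimal heartbeat-timeout sketch is shown below; the timeout interval, the map of last-seen timestamps, and the message names are assumptions, since the patent leaves those details open.

```python
import time

def check_heartbeats(last_seen: dict, timeout_s: float, notify_master) -> list:
    """Report every link whose heartbeat has gone quiet for longer than timeout_s.

    last_seen maps a link ID to the timestamp of its most recent heartbeat.
    For each link reported, the master is expected to reformat the element
    table and issue an element update message to all elements.
    """
    now = time.monotonic()
    dead = [link for link, seen in last_seen.items() if now - seen > timeout_s]
    for link in dead:
        notify_master("element_update", {"link": link, "status": "out_of_service"})
    return dead
```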
  • It is also possible that various pieces of hardware will send an interrupt or trap to the manager, which will trigger an element update message before HBP can discover the failure. Failures likely to be detected early by hardware include loss of signal on optical interfaces, loss of connectivity on copper interfaces, and hardware failure of interface chips. A user-selected interface disable command or shutdown command can also be used to trigger an element update message.
  • Traffic Load
  • Traffic Load factors can be calculated in any suitable manner. In currently preferred systems and methods, traffic load is calculated by local management units and periodically communicated in element load messages to the master. It is contemplated that such information can be used to load balance multiple physical or logical links between elements.
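  • One plausible use of those element load messages is picking the least-loaded of several parallel links, as in the sketch below; the 0-to-1 load metric and the tie-breaking rule are assumptions, since the patent leaves the calculation open.

```python
def pick_link(link_loads: dict) -> int:
    """Choose the least-loaded physical/logical link to a destination element.

    link_loads maps a link ID to the most recently reported load (0.0-1.0)
    from element load messages; ties break on the lowest link ID.
    """
    return min(sorted(link_loads), key=lambda link: link_loads[link])
```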
  • Security
  • Element messages are preferably sent using a secure data protocol (SDP), which performs an ACK/NAK function on all messages to ensure their delivery. SDP is preferably operated as a layer 2 secure data protocol that also includes the ability to encrypt element messages between elements.
  • As discussed elsewhere herein, element messages and SDP can also be used to communicate other data between elements, and thereby support desired management features. Among other things, element messages can be used to support Port To Port Protocol (PTPP), which allows a soft permanent virtual connection to exist between element/port pairs. As currently contemplated, PTPP is simply an element-to-element message that sets default encapsulation to a specific element address/port address for source and destination. PTPP is thus similar to Multiprotocol Label Switching (MPLS) in that it creates a substitute virtual circuit. But unlike MPLS, if a failure occurs, it is the "local" element that automatically re-routes data around the problem. Implemented in this manner, PTPP allows for extremely convenient routing around failures, provided that another link is available at both the originating (ingress) side and the terminating (egress) side, and there is no other blockage in the intervening links (security/Access Control List (ACL)/Quality of Service (QoS), etc.).
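  • As a sketch, PTPP can be reduced to a default-encapsulation entry per ingress port with a locally chosen fallback, reusing the SwitchingHeader sketch above; the table shape and the primary/backup pairing are illustrative guesses rather than the patent's format.

```python
def ptpp_encapsulate(frame: bytes, ingress_port: int, ptpp_table: dict,
                     link_up, local_cluster: int, local_element: int) -> bytes:
    """Apply a soft permanent virtual connection configured for an ingress port.

    ptpp_table maps an ingress port to a (primary, backup) pair of destination
    (cluster, element, port) triples; if the primary link is down, the local
    element re-routes to the backup, loosely mirroring the MPLS comparison.
    """
    primary, backup = ptpp_table[ingress_port]
    dst_cluster, dst_element, dst_port = primary if link_up(primary) else backup
    header = SwitchingHeader(dst_cluster=dst_cluster, dst_element=dst_element,
                             dst_port=dst_port, src_cluster=local_cluster,
                             src_element=local_element)
    return header.encapsulate(frame)
```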
  • It is also possible to provide a lossless failover system that will not lose a single packet of data in case of a link failure. Such a system can be implemented using Active/Active Protection Service (AAPS), in which the same data is sent in a parallel fashion. The method is analogous to multicasting in that the hardware copies data from the master link to the secondary link. Ideally, the receiving end of the AAPS will only forward the first copy of any data received (correctly) to the end node.
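  • The receive side of AAPS can be sketched as duplicate suppression; the patent does not say how copies are recognized, so the per-frame sequence number used below is an assumption.

```python
def aaps_receive(seq: int, frame: bytes, delivered: set,
                 crc_ok: bool, forward) -> None:
    """Forward only the first correctly received copy of each frame.

    The same data arrives over both the master and the secondary link;
    `delivered` records sequence numbers already handed to the end node.
    """
    if crc_ok and seq not in delivered:
        delivered.add(seq)
        forward(frame)
```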
  • Super Fabric
  • Large numbers of elements can advantageously be mapped together in logical clusters, and addressed by including destination and source cluster IDs in the switching headers. In one sense, cluster enabled elements are simply normal elements, but with one or more links that are capable of adding/subtracting cluster address numbers. A system that utilizes clusters in this manner is referred to herein as a super fabric. Super fabrics can be designed to any reasonable size, including especially a current version of super fabric that allows up to 255 clusters of 1024 elements to be connected in a “single” switch system.
  • As currently contemplated, the management unit operating in super fabric mode retains details about all clusters, but does not retain MAC address data. Inter-cluster communication is via dynamic Virtual LAN (VLAN) tunnels which are created when a cluster level ACL detects a matched sequence that has been predefined. Currently contemplated matches include any of: (a) a MAC address or MAC address pairs; (b) VLAN ID pairs; (c) an IP subnet or subnet pair; (d) TCP/UDP port numbers or pairs, ranges, etc.; (e) protocol number(s); and (f) a layer 2-7 match of specific data. The management unit can also keep a list of recent broadcasts, and perform a matching operation on broadcasts received. Forwarding of previously sent broadcasts can thereby be prevented, so that after a learning period only new broadcasts will be forwarded to other links.
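  • The cluster-level ACL match could be modeled as predicates over the categories (a) through (f); the rule encoding below, where a rule constrains only the fields it names, is invented for illustration.

```python
def acl_matches(rule: dict, frame_info: dict) -> bool:
    """Return True when a predefined cluster-level ACL rule matches a frame.

    frame_info is a per-frame summary such as {'src_mac': ..., 'dst_mac': ...,
    'vlan_id': ..., 'ip_subnet': ..., 'l4_port': ..., 'protocol': ...};
    each rule entry maps a field name to the set of values it accepts.
    """
    return all(frame_info.get(field) in allowed for field, allowed in rule.items())

# Example: treat traffic between a predefined MAC address pair as a match
# that would trigger creation of a dynamic inter-cluster VLAN tunnel.
mac_pair_rule = {"src_mac": {"00:11:22:33:44:55"}, "dst_mac": {"66:77:88:99:aa:bb"}}
```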
  • Although clusters are managed by a management unit, they can continue to operate upon failure of the master. If the master management unit fails, a new master is selected and the cluster continues to operate. In preferred embodiments, any switch unit can be the master unit. In cases where only the previous management unit has failed, the ingress/egress elements and core element are manageable by the new master over an inband connection.
  • Inter-cluster communication is preferably via a strict PTPP based matrix of link addresses. When a link exists between elements that receive encapsulated packets, MDP discovers this link, HBP checks the link for health, and SDP allows communication between management elements to keep the cluster informed of any changes. If all of the above is properly implemented, a cluster of switch elements can act as a single logical Gigabit Ethernet or 10 Gigabit Ethernet LAN switch, with all standards based switch functions available over the entire logical switch.
  • The above-described clustering is advantageous in several ways.
  • Link Aggregation IEEE 802.3ad can operate across the entire cluster. This allows other vendors' systems that use IEEE 802.3ad to aggregate traffic over multiple hardware platforms, and provides greater levels of redundancy than heretofore possible.
  • Virtual LANs (VLANs) 802.1Q can operate over the entire cluster without the need for VLAN trunks or VLAN tagging on inter-switch links. Still further, port mirroring (a de facto standard) is readily implemented, providing mirroring of any port in a cluster to any other port in the cluster.
  • Pause frames received on any ingress/egress port can be reflected over the cluster to all ports contributing to the traffic flow on that port, and pause frames can be issued on those contributing ports to avoid bottlenecks.
  • ISO Layer 3 (IP routing) operates over the entire cluster as though it were a single routed hop, even though the cluster may be geographically separated by 160 km or more.
  • ISO Layer 4 ACLs can be assigned to any switch element in the cluster just as they would be in any standard layer 2/3/4 switch, and a single ACL may be applied to the entire cluster in a single command.
  • IEEE 802.1X operates over the entire cluster, which would not be the case if a standard set of switching systems were connected.
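  • The pause-frame reflection mentioned in the list above can be sketched as follows; the per-port contribution table is an illustrative assumption, not the patent's data structure.

      /* Sketch of cluster-wide pause reflection: when a pause frame is
       * received on a port, issue pause frames on every port currently
       * contributing traffic toward that port. */
      #include <stdbool.h>
      #include <stdio.h>

      #define MAX_PORTS 16

      /* contributes[s][d] is true if port s is sending traffic toward port d. */
      static bool contributes[MAX_PORTS][MAX_PORTS];

      static void send_pause(int port, int quanta)
      {
          printf("issue pause (%d quanta) on port %d\n", quanta, port);
      }

      /* Called when a pause frame arrives on 'paused_port'. */
      static void reflect_pause(int paused_port, int quanta)
      {
          for (int src = 0; src < MAX_PORTS; src++)
              if (contributes[src][paused_port])
                  send_pause(src, quanta);    /* throttle each contributor */
      }

      int main(void)
      {
          contributes[2][7] = true;           /* ports 2 and 5 feed port 7 */
          contributes[5][7] = true;
          reflect_pause(7, 512);              /* pause received on port 7 */
          return 0;
      }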
  • In FIG. 9, a super fabric implementation 900 of a distributed switching fabric generally includes four 20 Gbps pipes 910A-D, each of which is connected to a corresponding cluster 920A-D that includes a control element 922A-D that understands the cluster messaging structure. Within each cluster, numerous ingress/egress elements 400 are coupled together. In this particular embodiment, each of the control elements 922A-D has two 10 Gbps pipes that connect the ingress/egress elements 400 for intra-cluster communication. There are also inter-cluster pipes 930A-D, which in this instance also communicate at 10 Gbps.
  • Thus, specific embodiments and applications of distributed switching fabric switches have been disclosed. It should be apparent, however, to those skilled in the art that many more modifications besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms “comprises” and “comprising” should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced.

Claims (6)

1-34. (canceled)
35. A switching system that provides for virtual LANs to exist in multiple switch nodes as native VLANs without the use of VLAN trunks or VLAN tagging, by employing a distributed switch fabric.
36. The system of claim 36 wherein the distributed switch fabric comprises a plurality of ingress switching elements, each with a plurality of input and output ports; and hardware that adds a routing header to the packets entering the input ports of the plurality of ingress elements.
37. The system of claim 36 wherein the distributed switch fabric further comprises a plurality of egress switching elements, each with a plurality of input and output ports.
38. The system of claim 37 wherein the distributed switch fabric comprises a backbone that provides active switching among the plurality of ingress and egress elements.
39. The system of claim 38 wherein each of the switching elements is adapted to use the routing header to pass the packets through the backbone from one of the ingress switching elements to at least one of the egress switching elements to which the one ingress element is not otherwise directly connected.
US11/248,708 2003-10-14 2005-10-11 Switching system for virtual LANs Abandoned US20060029072A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/248,708 US20060029072A1 (en) 2003-10-14 2005-10-11 Switching system for virtual LANs

Applications Claiming Priority (12)

Application Number Priority Date Filing Date Title
US51113803P 2003-10-14 2003-10-14
US51114303P 2003-10-14 2003-10-14
US51114203P 2003-10-14 2003-10-14
US51102103P 2003-10-14 2003-10-14
US51114003P 2003-10-14 2003-10-14
US51113903P 2003-10-14 2003-10-14
US51114503P 2003-10-14 2003-10-14
US51114403P 2003-10-14 2003-10-14
US51114103P 2003-10-14 2003-10-14
US56326204P 2004-04-16 2004-04-16
US10/965,444 US20050105538A1 (en) 2003-10-14 2004-10-12 Switching system with distributed switching fabric
US11/248,708 US20060029072A1 (en) 2003-10-14 2005-10-11 Switching system for virtual LANs

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/965,444 Division US20050105538A1 (en) 2003-10-14 2004-10-12 Switching system with distributed switching fabric

Publications (1)

Publication Number Publication Date
US20060029072A1 true US20060029072A1 (en) 2006-02-09

Family

ID=34468581

Family Applications (8)

Application Number Title Priority Date Filing Date
US10/965,444 Abandoned US20050105538A1 (en) 2003-10-14 2004-10-12 Switching system with distributed switching fabric
US11/248,707 Abandoned US20060029071A1 (en) 2003-10-14 2005-10-11 Multiplexing system that supports a geographically distributed subnet
US11/248,708 Abandoned US20060029072A1 (en) 2003-10-14 2005-10-11 Switching system for virtual LANs
US11/248,111 Abandoned US20060039369A1 (en) 2003-10-14 2005-10-11 Multiplexing system having an automatically configured topology
US11/248,710 Abandoned US20060029056A1 (en) 2003-10-14 2005-10-11 Virtual machine task management system
US11/248,711 Abandoned US20060029057A1 (en) 2003-10-14 2005-10-11 Time division multiplexing system
US11/248,639 Abandoned US20060029055A1 (en) 2003-10-14 2005-10-11 VLAN fabric network
US11/610,281 Expired - Lifetime US7352745B2 (en) 2003-10-14 2006-12-13 Switching system with distributed switching fabric

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US10/965,444 Abandoned US20050105538A1 (en) 2003-10-14 2004-10-12 Switching system with distributed switching fabric
US11/248,707 Abandoned US20060029071A1 (en) 2003-10-14 2005-10-11 Multiplexing system that supports a geographically distributed subnet

Family Applications After (5)

Application Number Title Priority Date Filing Date
US11/248,111 Abandoned US20060039369A1 (en) 2003-10-14 2005-10-11 Multiplexing system having an automatically configured topology
US11/248,710 Abandoned US20060029056A1 (en) 2003-10-14 2005-10-11 Virtual machine task management system
US11/248,711 Abandoned US20060029057A1 (en) 2003-10-14 2005-10-11 Time division multiplexing system
US11/248,639 Abandoned US20060029055A1 (en) 2003-10-14 2005-10-11 VLAN fabric network
US11/610,281 Expired - Lifetime US7352745B2 (en) 2003-10-14 2006-12-13 Switching system with distributed switching fabric

Country Status (4)

Country Link
US (8) US20050105538A1 (en)
EP (1) EP1673683A4 (en)
JP (1) JP2007507990A (en)
WO (1) WO2005038599A2 (en)

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090268737A1 (en) * 2008-04-24 2009-10-29 James Ryan Giles Method and Apparatus for VLAN-Based Selective Path Routing
US20100061367A1 (en) * 2008-09-11 2010-03-11 Pradeep Sindhu Methods and apparatus related to lossless operation within a data center
US20100061242A1 (en) * 2008-09-11 2010-03-11 Pradeep Sindhu Methods and apparatus related to a flexible data center security architecture
US20100061389A1 (en) * 2008-09-11 2010-03-11 Pradeep Sindhu Methods and apparatus related to virtualization of data center resources
US20100061241A1 (en) * 2008-09-11 2010-03-11 Pradeep Sindhu Methods and apparatus related to flow control within a data center switch fabric
US20100061394A1 (en) * 2008-09-11 2010-03-11 Pradeep Sindhu Methods and apparatus related to any-to-any connectivity within a data center
US20100061240A1 (en) * 2008-09-11 2010-03-11 Pradeep Sindhu Methods and apparatus related to low latency within a data center
US20100061391A1 (en) * 2008-09-11 2010-03-11 Pradeep Sindhu Methods and apparatus related to a low cost data center architecture
US20110238816A1 (en) * 2010-03-23 2011-09-29 Juniper Networks, Inc. Methods and apparatus for automatically provisioning resources within a distributed control plane of a switch
US8588224B2 (en) 2011-05-14 2013-11-19 International Business Machines Corporation Priority based flow control in a distributed fabric protocol (DFP) switching network architecture
US8635614B2 (en) 2011-05-14 2014-01-21 International Business Machines Corporation Method for providing location independent dynamic port mirroring on distributed virtual switches
US8717874B2 (en) 2011-09-12 2014-05-06 International Business Machines Corporation Updating a switch software image in a distributed fabric protocol (DFP) switching network
US8750129B2 (en) 2011-10-06 2014-06-10 International Business Machines Corporation Credit-based network congestion management
US8767722B2 (en) 2011-05-14 2014-07-01 International Business Machines Corporation Data traffic handling in a distributed fabric protocol (DFP) switching network architecture
US8767529B2 (en) 2011-09-12 2014-07-01 International Business Machines Corporation High availability distributed fabric protocol (DFP) switching network architecture
US8798080B2 (en) 2011-05-14 2014-08-05 International Business Machines Corporation Distributed fabric protocol (DFP) switching network architecture
US8824485B2 (en) 2011-05-13 2014-09-02 International Business Machines Corporation Efficient software-based private VLAN solution for distributed virtual switches
US8856801B2 (en) 2011-05-14 2014-10-07 International Business Machines Corporation Techniques for executing normally interruptible threads in a non-preemptive manner
US8948003B2 (en) 2011-06-17 2015-02-03 International Business Machines Corporation Fault tolerant communication in a TRILL network
US9059922B2 (en) 2011-10-06 2015-06-16 International Business Machines Corporation Network traffic distribution
US9276953B2 (en) 2011-05-13 2016-03-01 International Business Machines Corporation Method and apparatus to detect and block unauthorized MAC address by virtual machine aware network switches
US9282060B2 (en) 2010-12-15 2016-03-08 Juniper Networks, Inc. Methods and apparatus for dynamic resource management within a distributed control plane of a switch
US20160134563A1 (en) * 2010-06-08 2016-05-12 Brocade Communications Systems, Inc. Remote port mirroring
US9813252B2 (en) 2010-03-23 2017-11-07 Juniper Networks, Inc. Multicasting within a distributed control plane of a switch
US11271871B2 (en) 2008-09-11 2022-03-08 Juniper Networks, Inc. Methods and apparatus related to a flexible data center security architecture

Families Citing this family (241)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9138644B2 (en) * 2002-12-10 2015-09-22 Sony Computer Entertainment America Llc System and method for accelerated machine switching
US8949922B2 (en) * 2002-12-10 2015-02-03 Ol2, Inc. System for collaborative conferencing using streaming interactive video
US9077991B2 (en) * 2002-12-10 2015-07-07 Sony Computer Entertainment America Llc System and method for utilizing forward error correction with video compression
US8979655B2 (en) 2002-12-10 2015-03-17 Ol2, Inc. System and method for securely hosting applications
US10201760B2 (en) 2002-12-10 2019-02-12 Sony Interactive Entertainment America Llc System and method for compressing video based on detected intraframe motion
US9314691B2 (en) * 2002-12-10 2016-04-19 Sony Computer Entertainment America Llc System and method for compressing video frames or portions thereof based on feedback information from a client device
US20090118019A1 (en) * 2002-12-10 2009-05-07 Onlive, Inc. System for streaming databases serving real-time applications used through streaming interactive video
US9061207B2 (en) * 2002-12-10 2015-06-23 Sony Computer Entertainment America Llc Temporary decoder apparatus and method
US8549574B2 (en) 2002-12-10 2013-10-01 Ol2, Inc. Method of combining linear content and interactive content compressed together as streaming interactive video
US20100166056A1 (en) * 2002-12-10 2010-07-01 Steve Perlman System and method for encoding video using a selected tile and tile rotation pattern
US8526490B2 (en) * 2002-12-10 2013-09-03 Ol2, Inc. System and method for video compression using feedback including data related to the successful receipt of video content
US8366552B2 (en) * 2002-12-10 2013-02-05 Ol2, Inc. System and method for multi-stream video compression
US7450592B2 (en) 2003-11-12 2008-11-11 At&T Intellectual Property I, L.P. Layer 2/layer 3 interworking via internal virtual UNI
US7460537B2 (en) * 2004-01-29 2008-12-02 Brocade Communications Systems, Inc. Supplementary header for multifabric and high port count switch support in a fibre channel network
EP1738258A4 (en) 2004-03-13 2009-10-28 Cluster Resources Inc System and method for providing object triggers
US8782654B2 (en) 2004-03-13 2014-07-15 Adaptive Computing Enterprises, Inc. Co-allocating a reservation spanning different compute resources types
US20070266388A1 (en) 2004-06-18 2007-11-15 Cluster Resources, Inc. System and method for providing advanced reservations in a compute environment
US8176490B1 (en) 2004-08-20 2012-05-08 Adaptive Computing Enterprises, Inc. System and method of interfacing a workload manager and scheduler with an identity manager
WO2006053093A2 (en) 2004-11-08 2006-05-18 Cluster Resources, Inc. System and method of providing system jobs within a compute environment
US8863143B2 (en) 2006-03-16 2014-10-14 Adaptive Computing Enterprises, Inc. System and method for managing a hybrid compute environment
WO2006112981A2 (en) 2005-03-16 2006-10-26 Cluster Resources, Inc. Automatic workload transfer to an on-demand center
US9231886B2 (en) 2005-03-16 2016-01-05 Adaptive Computing Enterprises, Inc. Simple integration of an on-demand compute environment
EP3203374B1 (en) 2005-04-07 2021-11-24 III Holdings 12, LLC On-demand access to compute resources
US8509113B2 (en) * 2006-01-12 2013-08-13 Ciena Corporation Methods and systems for managing digital cross-connect matrices using virtual connection points
US8619771B2 (en) 2009-09-30 2013-12-31 Vmware, Inc. Private allocated networks over shared communications infrastructure
US8892706B1 (en) 2010-06-21 2014-11-18 Vmware, Inc. Private ethernet overlay networks over a shared ethernet in a virtual environment
US8924524B2 (en) 2009-07-27 2014-12-30 Vmware, Inc. Automated network configuration of virtual machines in a virtual lab data environment
DE102006044856B4 (en) * 2006-09-22 2010-08-12 Siemens Ag Method for switching data packets with a route coding in a network
US7684410B2 (en) * 2006-10-31 2010-03-23 Hewlett-Packard Development Company, L.P. VLAN aware trunks
US20080159301A1 (en) * 2006-12-29 2008-07-03 De Heer Arjan Arie Enabling virtual private local area network services
US8041773B2 (en) 2007-09-24 2011-10-18 The Research Foundation Of State University Of New York Automatic clustering for self-organizing grids
US7599314B2 (en) 2007-12-14 2009-10-06 Raptor Networks Technology, Inc. Surface-space managed network fabric
GB2459838B (en) * 2008-05-01 2010-10-06 Gnodal Ltd An ethernet bridge and a method of data delivery across a network
JP5262291B2 (en) * 2008-05-22 2013-08-14 富士通株式会社 Network connection device and aggregation / distribution device
US8195774B2 (en) 2008-05-23 2012-06-05 Vmware, Inc. Distributed virtual switch for virtualized computer systems
US7983194B1 (en) 2008-11-14 2011-07-19 Qlogic, Corporation Method and system for multi level switch configuration
JP5251457B2 (en) 2008-11-27 2013-07-31 富士通株式会社 Data transmission device
US9876735B2 (en) 2009-10-30 2018-01-23 Iii Holdings 2, Llc Performance and power optimized computer system architectures and methods leveraging power optimized tree fabric interconnect
US9054990B2 (en) 2009-10-30 2015-06-09 Iii Holdings 2, Llc System and method for data center security enhancements leveraging server SOCs or server fabrics
US20110103391A1 (en) 2009-10-30 2011-05-05 Smooth-Stone, Inc. C/O Barry Evans System and method for high-performance, low-power data center interconnect fabric
US9077654B2 (en) 2009-10-30 2015-07-07 Iii Holdings 2, Llc System and method for data center security enhancements leveraging managed server SOCs
US20130107444A1 (en) 2011-10-28 2013-05-02 Calxeda, Inc. System and method for flexible storage and networking provisioning in large scalable processor installations
US8599863B2 (en) 2009-10-30 2013-12-03 Calxeda, Inc. System and method for using a multi-protocol fabric module across a distributed server interconnect fabric
US9069929B2 (en) 2011-10-31 2015-06-30 Iii Holdings 2, Llc Arbitrating usage of serial port in node card of scalable and modular servers
US9465771B2 (en) 2009-09-24 2016-10-11 Iii Holdings 2, Llc Server on a chip and node cards comprising one or more of same
US8780911B2 (en) * 2009-10-08 2014-07-15 Force10 Networks, Inc. Link aggregation based on port and protocol combination
US9680770B2 (en) 2009-10-30 2017-06-13 Iii Holdings 2, Llc System and method for using a multi-protocol fabric module across a distributed server interconnect fabric
US11720290B2 (en) 2009-10-30 2023-08-08 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US9648102B1 (en) 2012-12-27 2017-05-09 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US9311269B2 (en) 2009-10-30 2016-04-12 Iii Holdings 2, Llc Network proxy for high-performance, low-power data center interconnect fabric
US10877695B2 (en) 2009-10-30 2020-12-29 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US8687629B1 (en) * 2009-11-18 2014-04-01 Juniper Networks, Inc. Fabric virtualization for packet and circuit switching
US9716672B2 (en) 2010-05-28 2017-07-25 Brocade Communications Systems, Inc. Distributed configuration management for virtual cluster switching
US9270486B2 (en) 2010-06-07 2016-02-23 Brocade Communications Systems, Inc. Name services for virtual cluster switching
US8867552B2 (en) 2010-05-03 2014-10-21 Brocade Communications Systems, Inc. Virtual cluster switching
US9769016B2 (en) 2010-06-07 2017-09-19 Brocade Communications Systems, Inc. Advanced link tracking for virtual cluster switching
CN101854303B (en) * 2010-05-27 2013-07-24 北京星网锐捷网络技术有限公司 Network topology linker
US9628293B2 (en) 2010-06-08 2017-04-18 Brocade Communications Systems, Inc. Network layer multicasting in trill networks
US9608833B2 (en) 2010-06-08 2017-03-28 Brocade Communications Systems, Inc. Supporting multiple multicast trees in trill networks
US9806906B2 (en) 2010-06-08 2017-10-31 Brocade Communications Systems, Inc. Flooding packets on a per-virtual-network basis
US8839238B2 (en) * 2010-06-11 2014-09-16 International Business Machines Corporation Dynamic virtual machine shutdown without service interruptions
US9680750B2 (en) 2010-07-06 2017-06-13 Nicira, Inc. Use of tunnels to hide network addresses
US8750164B2 (en) 2010-07-06 2014-06-10 Nicira, Inc. Hierarchical managed switch architecture
US9807031B2 (en) 2010-07-16 2017-10-31 Brocade Communications Systems, Inc. System and method for network configuration
GB2482118B (en) 2010-07-19 2017-03-01 Cray Uk Ltd Ethernet switch with link aggregation group facility
US8873389B1 (en) * 2010-08-09 2014-10-28 Chelsio Communications, Inc. Method for flow control in a packet switched network
US20130188647A1 (en) * 2010-10-29 2013-07-25 Russ W. Herrell Computer system fabric switch having a blind route
US9154327B1 (en) 2011-05-27 2015-10-06 Cisco Technology, Inc. User-configured on-demand virtual layer-2 network for infrastructure-as-a-service (IaaS) on a hybrid cloud network
US9736085B2 (en) 2011-08-29 2017-08-15 Brocade Communications Systems, Inc. End-to end lossless Ethernet in Ethernet fabric
US8964601B2 (en) 2011-10-07 2015-02-24 International Business Machines Corporation Network switching domains with a virtualized control plane
US11095549B2 (en) 2011-10-21 2021-08-17 Nant Holdings Ip, Llc Non-overlapping secured topologies in a distributed network fabric
US9699117B2 (en) 2011-11-08 2017-07-04 Brocade Communications Systems, Inc. Integrated fibre channel support in an ethernet fabric switch
US9450870B2 (en) 2011-11-10 2016-09-20 Brocade Communications Systems, Inc. System and method for flow management in software-defined networks
US8995272B2 (en) 2012-01-26 2015-03-31 Brocade Communication Systems, Inc. Link aggregation in software-defined networks
US9088477B2 (en) * 2012-02-02 2015-07-21 International Business Machines Corporation Distributed fabric management protocol
US8660129B1 (en) 2012-02-02 2014-02-25 Cisco Technology, Inc. Fully distributed routing over a user-configured on-demand virtual network for infrastructure-as-a-service (IaaS) on hybrid cloud networks
US9306832B2 (en) 2012-02-27 2016-04-05 Ravello Systems Ltd. Virtualized network for virtualized guests as an independent overlay over a physical network
US9742693B2 (en) 2012-02-27 2017-08-22 Brocade Communications Systems, Inc. Dynamic service insertion in a fabric switch
US8665889B2 (en) 2012-03-01 2014-03-04 Ciena Corporation Unidirectional asymmetric traffic pattern systems and methods in switch matrices
US9077651B2 (en) 2012-03-07 2015-07-07 International Business Machines Corporation Management of a distributed fabric system
US9077624B2 (en) 2012-03-07 2015-07-07 International Business Machines Corporation Diagnostics in a distributed fabric system
US9154416B2 (en) 2012-03-22 2015-10-06 Brocade Communications Systems, Inc. Overlay tunnel in a fabric switch
US9374301B2 (en) 2012-05-18 2016-06-21 Brocade Communications Systems, Inc. Network feedback in software-defined networks
US10277464B2 (en) 2012-05-22 2019-04-30 Arris Enterprises Llc Client auto-configuration in a multi-switch link aggregation
CN104272668B (en) 2012-05-23 2018-05-22 博科通讯系统有限公司 Layer 3 covers gateway
US9137141B2 (en) 2012-06-12 2015-09-15 International Business Machines Corporation Synchronization of load-balancing switches
US9602430B2 (en) 2012-08-21 2017-03-21 Brocade Communications Systems, Inc. Global VLANs for fabric switches
US9401872B2 (en) 2012-11-16 2016-07-26 Brocade Communications Systems, Inc. Virtual link aggregations across multiple fabric switches
US9413691B2 (en) 2013-01-11 2016-08-09 Brocade Communications Systems, Inc. MAC address synchronization in a fabric switch
US9548926B2 (en) * 2013-01-11 2017-01-17 Brocade Communications Systems, Inc. Multicast traffic load balancing over virtual link aggregation
US9350680B2 (en) 2013-01-11 2016-05-24 Brocade Communications Systems, Inc. Protection switching over a virtual link aggregation
US9565113B2 (en) 2013-01-15 2017-02-07 Brocade Communications Systems, Inc. Adaptive link aggregation and virtual link aggregation
US9565099B2 (en) 2013-03-01 2017-02-07 Brocade Communications Systems, Inc. Spanning tree in fabric switches
US9577955B2 (en) * 2013-03-12 2017-02-21 Forrest Lawrence Pierson Indefinitely expandable high-capacity data switch
WO2014145750A1 (en) 2013-03-15 2014-09-18 Brocade Communications Systems, Inc. Scalable gateways for a fabric switch
US9699001B2 (en) 2013-06-10 2017-07-04 Brocade Communications Systems, Inc. Scalable and segregated network virtualization
US9565028B2 (en) 2013-06-10 2017-02-07 Brocade Communications Systems, Inc. Ingress switch multicast distribution in a fabric switch
US9571386B2 (en) 2013-07-08 2017-02-14 Nicira, Inc. Hybrid packet processing
US9755963B2 (en) 2013-07-09 2017-09-05 Nicira, Inc. Using headerspace analysis to identify flow entry reachability
US9407580B2 (en) 2013-07-12 2016-08-02 Nicira, Inc. Maintaining data stored with a packet
US9197529B2 (en) 2013-07-12 2015-11-24 Nicira, Inc. Tracing network packets through logical and physical networks
US9282019B2 (en) 2013-07-12 2016-03-08 Nicira, Inc. Tracing logical network packets through physical network
US9887960B2 (en) 2013-08-14 2018-02-06 Nicira, Inc. Providing services for logical networks
US9952885B2 (en) 2013-08-14 2018-04-24 Nicira, Inc. Generation of configuration files for a DHCP module executing within a virtualized container
US9503371B2 (en) 2013-09-04 2016-11-22 Nicira, Inc. High availability L3 gateways for logical networks
US9577845B2 (en) 2013-09-04 2017-02-21 Nicira, Inc. Multiple active L3 gateways for logical networks
US9602398B2 (en) 2013-09-15 2017-03-21 Nicira, Inc. Dynamically generating flows with wildcard fields
US9674087B2 (en) 2013-09-15 2017-06-06 Nicira, Inc. Performing a multi-stage lookup to classify packets
US10063458B2 (en) 2013-10-13 2018-08-28 Nicira, Inc. Asymmetric connection with external networks
US9977685B2 (en) 2013-10-13 2018-05-22 Nicira, Inc. Configuration of logical router
US9264330B2 (en) 2013-10-13 2016-02-16 Nicira, Inc. Tracing host-originated logical network packets
US9912612B2 (en) 2013-10-28 2018-03-06 Brocade Communications Systems LLC Extended ethernet fabric switches
US10193771B2 (en) 2013-12-09 2019-01-29 Nicira, Inc. Detecting and handling elephant flows
US9967199B2 (en) 2013-12-09 2018-05-08 Nicira, Inc. Inspecting operations of a machine to detect elephant flows
US9569368B2 (en) 2013-12-13 2017-02-14 Nicira, Inc. Installing and managing flows in a flow table cache
US9996467B2 (en) 2013-12-13 2018-06-12 Nicira, Inc. Dynamically adjusting the number of flows allowed in a flow table cache
US9548873B2 (en) 2014-02-10 2017-01-17 Brocade Communications Systems, Inc. Virtual extensible LAN tunnel keepalives
US9419889B2 (en) 2014-03-07 2016-08-16 Nicira, Inc. Method and system for discovering a path of network traffic
US9384033B2 (en) 2014-03-11 2016-07-05 Vmware, Inc. Large receive offload for virtual machines
US9742682B2 (en) 2014-03-11 2017-08-22 Vmware, Inc. Large receive offload for virtual machines
US9755981B2 (en) 2014-03-11 2017-09-05 Vmware, Inc. Snooping forwarded packets by a virtual machine
US9590901B2 (en) 2014-03-14 2017-03-07 Nicira, Inc. Route advertisement by managed gateways
US9419855B2 (en) 2014-03-14 2016-08-16 Nicira, Inc. Static routes for logical routers
US9225597B2 (en) 2014-03-14 2015-12-29 Nicira, Inc. Managed gateways peering with external router to attract ingress packets
US9313129B2 (en) 2014-03-14 2016-04-12 Nicira, Inc. Logical router processing by network controller
US10581758B2 (en) 2014-03-19 2020-03-03 Avago Technologies International Sales Pte. Limited Distributed hot standby links for vLAG
US10476698B2 (en) 2014-03-20 2019-11-12 Avago Technologies International Sales Pte. Limited Redundent virtual link aggregation group
US9503321B2 (en) 2014-03-21 2016-11-22 Nicira, Inc. Dynamic routing for logical routers
US9647883B2 (en) 2014-03-21 2017-05-09 Nicria, Inc. Multiple levels of logical routers
US9419874B2 (en) 2014-03-27 2016-08-16 Nicira, Inc. Packet tracing in a software-defined networking environment
US9893988B2 (en) 2014-03-27 2018-02-13 Nicira, Inc. Address resolution using multiple designated instances of a logical router
US9413644B2 (en) 2014-03-27 2016-08-09 Nicira, Inc. Ingress ECMP in virtual distributed routing environment
US10193806B2 (en) 2014-03-31 2019-01-29 Nicira, Inc. Performing a finishing operation to improve the quality of a resulting hash
US9940180B2 (en) 2014-03-31 2018-04-10 Nicira, Inc. Using loopback interfaces of multiple TCP/IP stacks for communication between processes
US9729679B2 (en) 2014-03-31 2017-08-08 Nicira, Inc. Using different TCP/IP stacks for different tenants on a multi-tenant host
US9686200B2 (en) 2014-03-31 2017-06-20 Nicira, Inc. Flow cache hierarchy
US9385954B2 (en) 2014-03-31 2016-07-05 Nicira, Inc. Hashing techniques for use in a network environment
US10091125B2 (en) 2014-03-31 2018-10-02 Nicira, Inc. Using different TCP/IP stacks with separately allocated resources
US9832112B2 (en) 2014-03-31 2017-11-28 Nicira, Inc. Using different TCP/IP stacks for different hypervisor services
US9667528B2 (en) 2014-03-31 2017-05-30 Vmware, Inc. Fast lookup and update of current hop limit
US9680798B2 (en) 2014-04-11 2017-06-13 Nant Holdings Ip, Llc Fabric-based anonymity management, systems and methods
US10063473B2 (en) 2014-04-30 2018-08-28 Brocade Communications Systems LLC Method and system for facilitating switch virtualization in a network of interconnected switches
FR3020613B1 (en) * 2014-05-05 2016-04-29 Peugeot Citroen Automobiles Sa DOOR THRESHOLDING ELEMENT OF A MOTOR VEHICLE
US9444754B1 (en) 2014-05-13 2016-09-13 Chelsio Communications, Inc. Method for congestion control in a network interface card
US9800471B2 (en) 2014-05-13 2017-10-24 Brocade Communications Systems, Inc. Network extension groups of global VLANs in a fabric switch
US10491467B2 (en) 2014-05-23 2019-11-26 Nant Holdings Ip, Llc Fabric-based virtual air gap provisioning, systems and methods
US10110712B2 (en) 2014-06-04 2018-10-23 Nicira, Inc. Efficient packet classification for dynamic containers
US9774707B2 (en) 2014-06-04 2017-09-26 Nicira, Inc. Efficient packet classification for dynamic containers
US9379956B2 (en) 2014-06-30 2016-06-28 Nicira, Inc. Identifying a network topology between two endpoints
US9419897B2 (en) 2014-06-30 2016-08-16 Nicira, Inc. Methods and systems for providing multi-tenancy support for Single Root I/O Virtualization
US9553803B2 (en) 2014-06-30 2017-01-24 Nicira, Inc. Periodical generation of network measurement data
US9577927B2 (en) 2014-06-30 2017-02-21 Nicira, Inc. Encoding control plane information in transport protocol source port field and applications thereof in network virtualization
US9692698B2 (en) 2014-06-30 2017-06-27 Nicira, Inc. Methods and systems to offload overlay network packet encapsulation to hardware
US9742881B2 (en) 2014-06-30 2017-08-22 Nicira, Inc. Network virtualization using just-in-time distributed capability for classification encoding
US10616108B2 (en) 2014-07-29 2020-04-07 Avago Technologies International Sales Pte. Limited Scalable MAC address virtualization
US9544219B2 (en) 2014-07-31 2017-01-10 Brocade Communications Systems, Inc. Global VLAN services
US9807007B2 (en) 2014-08-11 2017-10-31 Brocade Communications Systems, Inc. Progressive MAC address learning
US9768980B2 (en) 2014-09-30 2017-09-19 Nicira, Inc. Virtual distributed bridging
US10250443B2 (en) 2014-09-30 2019-04-02 Nicira, Inc. Using physical location to modify behavior of a distributed virtual network element
US10511458B2 (en) 2014-09-30 2019-12-17 Nicira, Inc. Virtual distributed bridging
US11178051B2 (en) 2014-09-30 2021-11-16 Vmware, Inc. Packet key parser for flow-based forwarding elements
US10020960B2 (en) 2014-09-30 2018-07-10 Nicira, Inc. Virtual distributed bridging
US9524173B2 (en) 2014-10-09 2016-12-20 Brocade Communications Systems, Inc. Fast reboot for a switch
US9699029B2 (en) 2014-10-10 2017-07-04 Brocade Communications Systems, Inc. Distributed configuration management in a switch group
US10469342B2 (en) 2014-10-10 2019-11-05 Nicira, Inc. Logical network traffic analysis
US9626255B2 (en) 2014-12-31 2017-04-18 Brocade Communications Systems, Inc. Online restoration of a switch snapshot
US9628407B2 (en) 2014-12-31 2017-04-18 Brocade Communications Systems, Inc. Multiple software versions in a switch group
US10003552B2 (en) 2015-01-05 2018-06-19 Brocade Communications Systems, Llc. Distributed bidirectional forwarding detection protocol (D-BFD) for cluster of interconnected switches
US9942097B2 (en) 2015-01-05 2018-04-10 Brocade Communications Systems LLC Power management in a network of interconnected switches
US10129180B2 (en) 2015-01-30 2018-11-13 Nicira, Inc. Transit logical switch within logical router
US10038592B2 (en) 2015-03-17 2018-07-31 Brocade Communications Systems LLC Identifier assignment to a new switch in a switch group
US9807005B2 (en) 2015-03-17 2017-10-31 Brocade Communications Systems, Inc. Multi-fabric manager
US10044676B2 (en) 2015-04-03 2018-08-07 Nicira, Inc. Using headerspace analysis to identify unneeded distributed firewall rules
US10038628B2 (en) 2015-04-04 2018-07-31 Nicira, Inc. Route server mode for dynamic routing between logical and physical networks
US10579406B2 (en) 2015-04-08 2020-03-03 Avago Technologies International Sales Pte. Limited Dynamic orchestration of overlay tunnels
US10348625B2 (en) 2015-06-30 2019-07-09 Nicira, Inc. Sharing common L2 segment in a virtual distributed router environment
US10171430B2 (en) 2015-07-27 2019-01-01 Forrest L. Pierson Making a secure connection over insecure lines more secure
US10230629B2 (en) 2015-08-11 2019-03-12 Nicira, Inc. Static route configuration for logical router
US10075363B2 (en) 2015-08-31 2018-09-11 Nicira, Inc. Authorization for advertised routes among logical routers
US10171303B2 (en) 2015-09-16 2019-01-01 Avago Technologies International Sales Pte. Limited IP-based interconnection of switches with a logical chassis
US10095535B2 (en) 2015-10-31 2018-10-09 Nicira, Inc. Static route types for logical routers
US11675587B2 (en) 2015-12-03 2023-06-13 Forrest L. Pierson Enhanced protection of processors from a buffer overflow attack
US10564969B2 (en) 2015-12-03 2020-02-18 Forrest L. Pierson Enhanced protection of processors from a buffer overflow attack
US9912614B2 (en) 2015-12-07 2018-03-06 Brocade Communications Systems LLC Interconnection of switches based on hierarchical overlay tunneling
US10333849B2 (en) 2016-04-28 2019-06-25 Nicira, Inc. Automatic configuration of logical routers on edge nodes
US10841273B2 (en) 2016-04-29 2020-11-17 Nicira, Inc. Implementing logical DHCP servers in logical networks
US10484515B2 (en) 2016-04-29 2019-11-19 Nicira, Inc. Implementing logical metadata proxy servers in logical networks
US10091161B2 (en) 2016-04-30 2018-10-02 Nicira, Inc. Assignment of router ID for logical routers
US10560320B2 (en) 2016-06-29 2020-02-11 Nicira, Inc. Ranking of gateways in cluster
US10153973B2 (en) 2016-06-29 2018-12-11 Nicira, Inc. Installation of routing tables for logical router in route server mode
US10454758B2 (en) 2016-08-31 2019-10-22 Nicira, Inc. Edge node cluster network redundancy and fast convergence using an underlay anycast VTEP IP
CN107809387B (en) * 2016-09-08 2020-11-06 华为技术有限公司 Message transmission method, device and network system
JP7158113B2 (en) 2016-09-26 2022-10-21 ナント ホールディングス アイピー,エルエルシー Virtual circuit in cloud network
US10341236B2 (en) 2016-09-30 2019-07-02 Nicira, Inc. Anycast edge service gateways
US10237090B2 (en) 2016-10-28 2019-03-19 Avago Technologies International Sales Pte. Limited Rule-based network identifier mapping
US10212071B2 (en) 2016-12-21 2019-02-19 Nicira, Inc. Bypassing a load balancer in a return path of network traffic
US10742746B2 (en) 2016-12-21 2020-08-11 Nicira, Inc. Bypassing a load balancer in a return path of network traffic
US10237123B2 (en) 2016-12-21 2019-03-19 Nicira, Inc. Dynamic recovery from a split-brain failure in edge nodes
US10616045B2 (en) 2016-12-22 2020-04-07 Nicira, Inc. Migration of centralized routing components of logical router
US10805239B2 (en) 2017-03-07 2020-10-13 Nicira, Inc. Visualization of path between logical network endpoints
US10587479B2 (en) 2017-04-02 2020-03-10 Nicira, Inc. GUI for analysis of logical network modifications
US10313926B2 (en) 2017-05-31 2019-06-04 Nicira, Inc. Large receive offload (LRO) processing in virtualized computing environments
US10681000B2 (en) 2017-06-30 2020-06-09 Nicira, Inc. Assignment of unique physical network addresses for logical network addresses
US10637800B2 (en) 2017-06-30 2020-04-28 Nicira, Inc Replacement of logical network addresses with physical network addresses
US10608887B2 (en) 2017-10-06 2020-03-31 Nicira, Inc. Using packet tracing tool to automatically execute packet capture operations
US10374827B2 (en) 2017-11-14 2019-08-06 Nicira, Inc. Identifier that maps to different networks at different datacenters
US10511459B2 (en) 2017-11-14 2019-12-17 Nicira, Inc. Selection of managed forwarding element for bridge spanning multiple datacenters
WO2020019315A1 (en) * 2018-07-27 2020-01-30 浙江天猫技术有限公司 Computational operation scheduling method employing graphic data, system, computer readable medium, and apparatus
US10931560B2 (en) 2018-11-23 2021-02-23 Vmware, Inc. Using route type to determine routing protocol behavior
US10797998B2 (en) 2018-12-05 2020-10-06 Vmware, Inc. Route server for distributed routers using hierarchical routing protocol
US10938788B2 (en) 2018-12-12 2021-03-02 Vmware, Inc. Static routes for policy-based VPN
US11198062B2 (en) 2019-07-18 2021-12-14 Nani Holdings IP, LLC Latency management in an event driven gaming network
US11095480B2 (en) 2019-08-30 2021-08-17 Vmware, Inc. Traffic optimization using distributed edge services
US11283699B2 (en) 2020-01-17 2022-03-22 Vmware, Inc. Practical overlay network latency measurement in datacenter
US11962518B2 (en) 2020-06-02 2024-04-16 VMware LLC Hardware acceleration techniques using flow selection
US11606294B2 (en) 2020-07-16 2023-03-14 Vmware, Inc. Host computer configured to facilitate distributed SNAT service
US11616755B2 (en) 2020-07-16 2023-03-28 Vmware, Inc. Facilitating distributed SNAT service
US11611613B2 (en) 2020-07-24 2023-03-21 Vmware, Inc. Policy-based forwarding to a load balancer of a load balancing cluster
US11451413B2 (en) 2020-07-28 2022-09-20 Vmware, Inc. Method for advertising availability of distributed gateway service and machines at host computer
US11902050B2 (en) 2020-07-28 2024-02-13 VMware LLC Method for providing distributed gateway service at host computer
US11196628B1 (en) 2020-07-29 2021-12-07 Vmware, Inc. Monitoring container clusters
US11558426B2 (en) 2020-07-29 2023-01-17 Vmware, Inc. Connection tracking for container cluster
US11570090B2 (en) 2020-07-29 2023-01-31 Vmware, Inc. Flow tracing operation in container cluster
US11792134B2 (en) 2020-09-28 2023-10-17 Vmware, Inc. Configuring PNIC to perform flow processing offload using virtual port identifiers
US11736566B2 (en) 2020-09-28 2023-08-22 Vmware, Inc. Using a NIC as a network accelerator to allow VM access to an external storage via a PF module, bus, and VF module
US11829793B2 (en) 2020-09-28 2023-11-28 Vmware, Inc. Unified management of virtual machines and bare metal computers
US11593278B2 (en) 2020-09-28 2023-02-28 Vmware, Inc. Using machine executing on a NIC to access a third party storage not supported by a NIC or host
US12021759B2 (en) 2020-09-28 2024-06-25 VMware LLC Packet processing with hardware offload units
US11636053B2 (en) 2020-09-28 2023-04-25 Vmware, Inc. Emulating a local storage by accessing an external storage through a shared port of a NIC
US11736436B2 (en) 2020-12-31 2023-08-22 Vmware, Inc. Identifying routes with indirect addressing in a datacenter
US11336533B1 (en) 2021-01-08 2022-05-17 Vmware, Inc. Network visualization of correlations between logical elements and associated physical elements
US11687210B2 (en) 2021-07-05 2023-06-27 Vmware, Inc. Criteria-based expansion of group nodes in a network topology visualization
US11711278B2 (en) 2021-07-24 2023-07-25 Vmware, Inc. Visualization of flow trace operation across multiple sites
US12081395B2 (en) 2021-08-24 2024-09-03 VMware LLC Formal verification of network changes
US11706109B2 (en) 2021-09-17 2023-07-18 Vmware, Inc. Performance of traffic monitoring actions
US11863376B2 (en) 2021-12-22 2024-01-02 Vmware, Inc. Smart NIC leader election
US11995024B2 (en) 2021-12-22 2024-05-28 VMware LLC State sharing between smart NICs
US11928367B2 (en) 2022-06-21 2024-03-12 VMware LLC Logical memory addressing for network devices
US11899594B2 (en) 2022-06-21 2024-02-13 VMware LLC Maintenance of data message classification cache on smart NIC
US11928062B2 (en) 2022-06-21 2024-03-12 VMware LLC Accelerating data message classification with smart NICs
US12126528B2 (en) * 2023-03-24 2024-10-22 Nokia Solutions And Networks Oy Egress rerouting of packets at a communication device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5689644A (en) * 1996-03-25 1997-11-18 I-Cube, Inc. Network switch with arbitration sytem
US20010006522A1 (en) * 1999-12-31 2001-07-05 Beshai Maged E. Global distributed switch
US20010033552A1 (en) * 2000-02-24 2001-10-25 Barrack Craig I. Credit-based pacing scheme for heterogeneous speed frame forwarding
US6356546B1 (en) * 1998-08-11 2002-03-12 Nortel Networks Limited Universal transfer method and network with distributed switch
US20020167950A1 (en) * 2001-01-12 2002-11-14 Zarlink Semiconductor V.N. Inc. Fast data path protocol for network switching
US20030067925A1 (en) * 2001-10-05 2003-04-10 Samsung Electronics Co., Ltd. Routing coordination protocol for a massively parallel router architecture
US6563837B2 (en) * 1998-02-10 2003-05-13 Enterasys Networks, Inc. Method and apparatus for providing work-conserving properties in a non-blocking switch with limited speedup independent of switch size
US20030118058A1 (en) * 2001-12-26 2003-06-26 Chan Kim Variable length packet switching system
US6721271B1 (en) * 1999-02-04 2004-04-13 Nortel Networks Limited Rate-controlled multi-class high-capacity packet switch

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6151324A (en) * 1996-06-03 2000-11-21 Cabletron Systems, Inc. Aggregation of mac data flows through pre-established path between ingress and egress switch to reduce number of number connections
US6195351B1 (en) * 1998-01-28 2001-02-27 3Com Corporation Logical switch set
US6208644B1 (en) * 1998-03-12 2001-03-27 I-Cube, Inc. Network switch providing dynamic load balancing
US6363416B1 (en) * 1998-08-28 2002-03-26 3Com Corporation System and method for automatic election of a representative node within a communications network with built-in redundancy
US6567417B2 (en) * 2000-06-19 2003-05-20 Broadcom Corporation Frame forwarding in a switch fabric
GB0013571D0 (en) * 2000-06-06 2000-07-26 Power X Limited Switching system
GB0019341D0 (en) * 2000-08-08 2000-09-27 Easics Nv System-on-chip solutions
DE10055476A1 (en) * 2000-11-09 2002-05-29 Siemens Ag Optical switching matrix
US6697368B2 (en) * 2000-11-17 2004-02-24 Foundry Networks, Inc. High-performance network switch
US6954463B1 (en) * 2000-12-11 2005-10-11 Cisco Technology, Inc. Distributed packet processing architecture for network access servers
GB0102743D0 (en) * 2001-02-03 2001-03-21 Power X Ltd A data switch and a method for controlling the data switch
US20020176131A1 (en) * 2001-02-28 2002-11-28 Walters David H. Protection switching for an optical network, and methods and apparatus therefor
US7130303B2 (en) * 2001-03-15 2006-10-31 Lucent Technologies Inc. Ethernet packet encapsulation for metropolitan area ethernet networks
US7599620B2 (en) * 2001-06-01 2009-10-06 Nortel Networks Limited Communications network for a metropolitan area
US7167648B2 (en) * 2001-10-24 2007-01-23 Innovative Fiber Optic Solutions, Llc System and method for an ethernet optical area network
US7145904B2 (en) * 2002-01-03 2006-12-05 Integrated Device Technology, Inc. Switch queue predictive protocol (SQPP) based packet switching technique
US7486894B2 (en) * 2002-06-25 2009-02-03 Finisar Corporation Transceiver module and integrated circuit with dual eye openers
US7206366B2 (en) * 2002-08-07 2007-04-17 Broadcom Corporation System and method for programmably adjusting gain and frequency response in a 10-GigaBit ethernet/fibre channel system
KR100460672B1 (en) * 2002-12-10 2004-12-09 한국전자통신연구원 Line interface apparatus for 10 gigabit ethernet and control method thereof
US7925162B2 (en) * 2003-07-03 2011-04-12 Soto Alexander I Communication system and method for an optical local area network
US7839843B2 (en) * 2003-09-18 2010-11-23 Cisco Technology, Inc. Distributed forwarding in virtual network devices

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5689644A (en) * 1996-03-25 1997-11-18 I-Cube, Inc. Network switch with arbitration sytem
US6563837B2 (en) * 1998-02-10 2003-05-13 Enterasys Networks, Inc. Method and apparatus for providing work-conserving properties in a non-blocking switch with limited speedup independent of switch size
US6356546B1 (en) * 1998-08-11 2002-03-12 Nortel Networks Limited Universal transfer method and network with distributed switch
US6721271B1 (en) * 1999-02-04 2004-04-13 Nortel Networks Limited Rate-controlled multi-class high-capacity packet switch
US20010006522A1 (en) * 1999-12-31 2001-07-05 Beshai Maged E. Global distributed switch
US20010033552A1 (en) * 2000-02-24 2001-10-25 Barrack Craig I. Credit-based pacing scheme for heterogeneous speed frame forwarding
US20020167950A1 (en) * 2001-01-12 2002-11-14 Zarlink Semiconductor V.N. Inc. Fast data path protocol for network switching
US20030067925A1 (en) * 2001-10-05 2003-04-10 Samsung Electronics Co., Ltd. Routing coordination protocol for a massively parallel router architecture
US20030118058A1 (en) * 2001-12-26 2003-06-26 Chan Kim Variable length packet switching system

Cited By (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8396053B2 (en) * 2008-04-24 2013-03-12 International Business Machines Corporation Method and apparatus for VLAN-based selective path routing
US20090268737A1 (en) * 2008-04-24 2009-10-29 James Ryan Giles Method and Apparatus for VLAN-Based Selective Path Routing
US20100061240A1 (en) * 2008-09-11 2010-03-11 Pradeep Sindhu Methods and apparatus related to low latency within a data center
US8958432B2 (en) 2008-09-11 2015-02-17 Juniper Networks, Inc. Methods and apparatus related to a flexible data center security architecture
US11451491B2 (en) 2008-09-11 2022-09-20 Juniper Networks, Inc. Methods and apparatus related to virtualization of data center resources
US20100061394A1 (en) * 2008-09-11 2010-03-11 Pradeep Sindhu Methods and apparatus related to any-to-any connectivity within a data center
US11271871B2 (en) 2008-09-11 2022-03-08 Juniper Networks, Inc. Methods and apparatus related to a flexible data center security architecture
US20100061391A1 (en) * 2008-09-11 2010-03-11 Pradeep Sindhu Methods and apparatus related to a low cost data center architecture
US20100061367A1 (en) * 2008-09-11 2010-03-11 Pradeep Sindhu Methods and apparatus related to lossless operation within a data center
US8730954B2 (en) 2008-09-11 2014-05-20 Juniper Networks, Inc. Methods and apparatus related to any-to-any connectivity within a data center
US8335213B2 (en) 2008-09-11 2012-12-18 Juniper Networks, Inc. Methods and apparatus related to low latency within a data center
US8340088B2 (en) 2008-09-11 2012-12-25 Juniper Networks, Inc. Methods and apparatus related to a low cost data center architecture
US20100061242A1 (en) * 2008-09-11 2010-03-11 Pradeep Sindhu Methods and apparatus related to a flexible data center security architecture
US12068978B2 (en) 2008-09-11 2024-08-20 Juniper Networks, Inc. Methods and apparatus related to a flexible data center security architecture
US20100061241A1 (en) * 2008-09-11 2010-03-11 Pradeep Sindhu Methods and apparatus related to flow control within a data center switch fabric
US20100061389A1 (en) * 2008-09-11 2010-03-11 Pradeep Sindhu Methods and apparatus related to virtualization of data center resources
US8265071B2 (en) 2008-09-11 2012-09-11 Juniper Networks, Inc. Methods and apparatus related to a flexible data center security architecture
US9847953B2 (en) 2008-09-11 2017-12-19 Juniper Networks, Inc. Methods and apparatus related to virtualization of data center resources
US8755396B2 (en) * 2008-09-11 2014-06-17 Juniper Networks, Inc. Methods and apparatus related to flow control within a data center switch fabric
US9985911B2 (en) 2008-09-11 2018-05-29 Juniper Networks, Inc. Methods and apparatus related to a flexible data center security architecture
US10536400B2 (en) 2008-09-11 2020-01-14 Juniper Networks, Inc. Methods and apparatus related to virtualization of data center resources
US10454849B2 (en) 2008-09-11 2019-10-22 Juniper Networks, Inc. Methods and apparatus related to a flexible data center security architecture
US10645028B2 (en) 2010-03-23 2020-05-05 Juniper Networks, Inc. Methods and apparatus for automatically provisioning resources within a distributed control plane of a switch
US10887119B2 (en) 2010-03-23 2021-01-05 Juniper Networks, Inc. Multicasting within distributed control plane of a switch
US9813252B2 (en) 2010-03-23 2017-11-07 Juniper Networks, Inc. Multicasting within a distributed control plane of a switch
US20110238816A1 (en) * 2010-03-23 2011-09-29 Juniper Networks, Inc. Methods and apparatus for automatically provisioning resources within a distributed control plane of a switch
US9240923B2 (en) 2010-03-23 2016-01-19 Juniper Networks, Inc. Methods and apparatus for automatically provisioning resources within a distributed control plane of a switch
US9455935B2 (en) * 2010-06-08 2016-09-27 Brocade Communications Systems, Inc. Remote port mirroring
US20160134563A1 (en) * 2010-06-08 2016-05-12 Brocade Communications Systems, Inc. Remote port mirroring
US9674036B2 (en) 2010-12-15 2017-06-06 Juniper Networks, Inc. Methods and apparatus for dynamic resource management within a distributed control plane of a switch
US9282060B2 (en) 2010-12-15 2016-03-08 Juniper Networks, Inc. Methods and apparatus for dynamic resource management within a distributed control plane of a switch
US9276953B2 (en) 2011-05-13 2016-03-01 International Business Machines Corporation Method and apparatus to detect and block unauthorized MAC address by virtual machine aware network switches
US8824485B2 (en) 2011-05-13 2014-09-02 International Business Machines Corporation Efficient software-based private VLAN solution for distributed virtual switches
US8837499B2 (en) 2011-05-14 2014-09-16 International Business Machines Corporation Distributed fabric protocol (DFP) switching network architecture
US8798080B2 (en) 2011-05-14 2014-08-05 International Business Machines Corporation Distributed fabric protocol (DFP) switching network architecture
US8588224B2 (en) 2011-05-14 2013-11-19 International Business Machines Corporation Priority based flow control in a distributed fabric protocol (DFP) switching network architecture
US8856801B2 (en) 2011-05-14 2014-10-07 International Business Machines Corporation Techniques for executing normally interruptible threads in a non-preemptive manner
US8635614B2 (en) 2011-05-14 2014-01-21 International Business Machines Corporation Method for providing location independent dynamic port mirroring on distributed virtual switches
US8767722B2 (en) 2011-05-14 2014-07-01 International Business Machines Corporation Data traffic handling in a distributed fabric protocol (DFP) switching network architecture
US8948004B2 (en) 2011-06-17 2015-02-03 International Business Machines Corporation Fault tolerant communication in a trill network
US8948003B2 (en) 2011-06-17 2015-02-03 International Business Machines Corporation Fault tolerant communication in a TRILL network
US8797843B2 (en) 2011-09-12 2014-08-05 International Business Machines Corporation High availability distributed fabric protocol (DFP) switching network architecture
US8767529B2 (en) 2011-09-12 2014-07-01 International Business Machines Corporation High availability distributed fabric protocol (DFP) switching network architecture
US8717874B2 (en) 2011-09-12 2014-05-06 International Business Machines Corporation Updating a switch software image in a distributed fabric protocol (DFP) switching network
US8942094B2 (en) 2011-10-06 2015-01-27 International Business Machines Corporation Credit-based network congestion management
US8750129B2 (en) 2011-10-06 2014-06-10 International Business Machines Corporation Credit-based network congestion management
US9059922B2 (en) 2011-10-06 2015-06-16 International Business Machines Corporation Network traffic distribution
US9065745B2 (en) 2011-10-06 2015-06-23 International Business Machines Corporation Network traffic distribution

Also Published As

Publication number Publication date
EP1673683A2 (en) 2006-06-28
US20060029056A1 (en) 2006-02-09
US20070071014A1 (en) 2007-03-29
US7352745B2 (en) 2008-04-01
JP2007507990A (en) 2007-03-29
WO2005038599A3 (en) 2006-09-08
EP1673683A4 (en) 2010-06-02
US20060029057A1 (en) 2006-02-09
US20060029071A1 (en) 2006-02-09
WO2005038599A2 (en) 2005-04-28
US20060039369A1 (en) 2006-02-23
US20060029055A1 (en) 2006-02-09
US20050105538A1 (en) 2005-05-19

Similar Documents

Publication Publication Date Title
US7352745B2 (en) Switching system with distributed switching fabric
US9628375B2 (en) N-node link aggregation group (LAG) systems that can support various topologies
US9485194B2 (en) Virtual link aggregation of network traffic in an aggregation switch
US7751329B2 (en) Providing an abstraction layer in a cluster switch that includes plural switches
EP2264949B1 (en) Forwarding frames in a computer network using shortest path bridging
EP1557006B1 (en) Modified spanning tree protocol for metropolitan area network
US8320282B2 (en) Automatic control node selection in ring networks
EP3474498A1 (en) Hash-based multi-homing
KR20130100217A (en) Differential forwarding in address-based carrier networks
JPWO2005048540A1 (en) Communication system and communication method
WO2007129699A1 (en) Communication system, node, terminal, communication method, and program
KR20140127904A (en) System and method for virtual fabric link failure recovery
US8228823B2 (en) Avoiding high-speed network partitions in favor of low-speed links
Cisco Concepts
Cisco Concepts
Cisco Concepts
Cisco Concepts
Cisco Overview of the Catalyst 8500 Campus Switch Router
Cisco Concepts
Cisco Concepts
Cisco Concepts
Cisco Concepts
Cisco Concepts
Cisco Concepts
Cisco Concepts

Legal Events

Date Code Title Description
AS Assignment

Owner name: RAPTOR NETWORK TECHNOLOGY, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PERERA, ANANDA;HOFFMAN, EDWIN;REEL/FRAME:017095/0770

Effective date: 20050225

AS Assignment

Owner name: AGILITY CAPITAL, LLC, CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:RAPTOR NETWORKS TECHNOLOGY INC.;REEL/FRAME:017553/0786

Effective date: 20060427

Owner name: AGILITY CAPITAL, LLC, CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:RAPTOR NETWORKS TECHNOLOGY, INC.;REEL/FRAME:017553/0965

Effective date: 20060427

Owner name: BRIDGE BANK N.A., CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:RAPTOR NETWORKS TECHNOLOGY, INC.;REEL/FRAME:017553/0965

Effective date: 20060427

Owner name: BRIDGE BANK N.A., CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:RAPTOR NETWORKS TECHNOLOGY INC.;REEL/FRAME:017553/0786

Effective date: 20060427

AS Assignment

Owner name: RAPTOR NETWORKS TECHNOLOGY, INC., CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNORS:BRIDGE BANK N.A.;AGILITY CAPITAL, LLC;REEL/FRAME:018044/0115

Effective date: 20060727

AS Assignment

Owner name: RAPTOR NETWORKS TECHNOLOGY, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNORS:AGILITY CAPITAL, LLC;BRIDGE BANK N.A.;REEL/FRAME:018286/0474

Effective date: 20060921

Owner name: RAPTOR NETWORKS TECHNOLOGY INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNORS:AGILITY CAPITAL, LLC;BRIDGE BANK N.A.;REEL/FRAME:018286/0474

Effective date: 20060921

AS Assignment

Owner name: CASTLERIGG MASTER INVESTMENTS LTD., AS COLLATERAL

Free format text: GRANT OF SECURITY INTEREST;ASSIGNOR:RAPTOR NETWORKS TECHNOLOGY, INC.;REEL/FRAME:021325/0788

Effective date: 20080728

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION