US20160100008A1 - Methods and systems for managing network addresses in a clustered storage environment - Google Patents
- Publication number
- US20160100008A1 (application US 14/505,196)
- Authority
- US
- United States
- Prior art keywords
- cluster node
- node
- address
- vnic
- network access
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L61/00—Network arrangements, protocols or services for addressing or naming
- H04L61/50—Address allocation
- H04L61/5007—Internet protocol [IP] addresses
- H04L61/5014—Internet protocol [IP] addresses using dynamic host configuration protocol [DHCP] or bootstrap protocol [BOOTP]
-
- H04L61/6068—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L2101/00—Indexing scheme associated with group H04L61/00
- H04L2101/60—Types of network addresses
- H04L2101/668—Internet protocol [IP] address subnets
Definitions
- the present disclosure relates to communication in networked storage systems.
- Various forms of storage systems are used today, including direct attached storage (DAS), network attached storage (NAS) systems, storage area networks (SANs), and others.
- Network storage systems are commonly used for a variety of purposes, such as providing multiple users with access to shared data, backing up data and others.
- a storage system typically includes at least one computing system executing a storage operating system for storing and retrieving data on behalf of one or more client computing systems (“clients”).
- the storage operating system stores and manages shared data containers in a set of mass storage devices.
- Storage systems may include a plurality of nodes operating within a cluster for processing client requests.
- the nodes may use virtual network interfaces for client communication. Continuous efforts are being made for efficiently managing network addresses that are used by the cluster nodes.
- a machine implemented method includes assigning a network access address to a virtual network interface card (VNIC) at a first cluster node of a clustered storage system, where a physical network interface card assigned to the network access address is managed by a second cluster node of the clustered storage system; and using the VNIC by a virtual storage server at the first cluster node to communicate on behalf of the second cluster node.
- a non-transitory, machine readable storage medium having stored thereon instructions for performing a method.
- the machine executable code which when executed by at least one machine, causes the machine to: assign a network access address to a virtual network interface card (VNIC) at a first cluster node of a clustered storage system, where a physical network interface card assigned to the network access address is managed by a second cluster node of the clustered storage system; and use the VNIC by a virtual storage server at the first cluster node to communicate on behalf of the second cluster node.
- a system having a memory with machine readable medium comprising machine executable code with stored instructions is provided.
- a processor module coupled to the memory is configured to execute the machine executable code to: assign a network access address to a virtual network interface card (VNIC) at a first cluster node of a clustered storage system, where a physical network interface card assigned to the network access address is managed by a second cluster node of the clustered storage system; and use the VNIC by a virtual storage server at the first cluster node to communicate on behalf of the second cluster node.
- FIGS. 1A-1B show examples of an operating environment for the various aspects disclosed herein;
- FIG. 1C shows an example of an address data structure, according to one aspect of the present disclosure
- FIGS. 2A-2F show various process flow diagrams, according to the various aspects of the present disclosure
- FIG. 3 is an example of a storage node used in the cluster of FIG. 2A , according to one aspect of the present disclosure
- FIG. 4 shows an example of a storage operating system, used according to one aspect of the present disclosure.
- FIG. 5 shows an example of a processing system, used according to one aspect of the present disclosure.
- as used herein, the terms “component”, “module”, “system,” and the like refer to a computer-related entity, either a software-executing general purpose processor, hardware, firmware, or a combination thereof. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
- both an application running on a server and the server can be a component.
- One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. Also, these components can execute from various non-transitory computer readable media having various data structures stored thereon.
- the components may communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems via the signal).
- Computer executable components can be stored, for example, at non-transitory, computer readable media including, but not limited to, an ASIC (application specific integrated circuit), CD (compact disc), DVD (digital video disk), ROM (read only memory), floppy disk, hard disk, EEPROM (electrically erasable programmable read only memory), memory stick or any other storage device, in accordance with the claimed subject matter.
- the method assigns a network access address to a virtual network interface card (VNIC) at a first cluster node of a clustered storage system, where a physical network interface card assigned to the network access address is managed by a second cluster node of the clustered storage system; the VNIC is then used by a virtual storage server at the first cluster node to communicate on behalf of the second cluster node.
- FIG. 1A shows a cluster based storage environment 100 having a plurality of nodes for managing storage devices, according to one aspect.
- Storage environment 100 may include a plurality of client systems 104 . 1 - 104 .N, a clustered storage system 102 and at least a network 106 communicably connecting the client systems 104 . 1 - 104 .N and the clustered storage system 102 .
- the clustered storage system 102 may include a plurality of nodes 108 . 1 - 108 . 3 , a cluster switching fabric 110 , and a plurality of mass storage devices 112 . 1 - 112 . 3 (may be also be referred to as 112 ).
- the mass storage devices 112 may include writable storage device media such as magnetic disks, video tape, optical, DVD, magnetic tape, non-volatile memory devices for example, self-encrypting drives, flash memory devices and any other similar media adapted to store information.
- the storage devices 112 may be organized as one or more groups of Redundant Array of Independent (or Inexpensive) Disks (RAID).
- the storage system 102 provides a set of storage volumes for storing information at storage devices 112 .
- a storage operating system executed by the nodes of storage system 102 presents or exports data stored at storage devices 112 as a volume, or one or more qtree sub-volume units.
- Each volume may be configured to store data files (or data containers or data objects), scripts, word processing documents, executable programs, and any other type of structured or unstructured data. From the perspective of client systems, each volume can appear to be a single storage drive. However, each volume can represent the storage space in at least one storage device, an aggregate of some or all of the storage space in multiple storage devices, a RAID group, or any other suitable set of storage space.
- the storage system 102 may be used to store and manage information at storage devices 112 based on a client request.
- the request may be based on file-based access protocols, for example, the Common Internet File System (CIFS) protocol or Network File System (NFS) protocol, over the Transmission Control Protocol/Internet Protocol (TCP/IP).
- the request may use block-based access protocols, for example, the Small Computer Systems Interface (SCSI) protocol encapsulated over TCP (iSCSI) and SCSI encapsulated over Fibre Channel (FCP).
- Each of the plurality of nodes 108 . 1 - 108 . 3 is configured to include an N-module, a D-module, and an M-Module, each of which can be implemented as a processor executable module.
- node 108 . 1 includes N-module 114 . 1 , D-module 116 . 1 , and M-Module 118 . 1
- node 108 . 2 includes N-module 114 . 2 , D-module 116 . 2 , and M-Module 118 . 2
- node 108 . 3 includes N-module 114 . 3 , D-module 116 . 3 , and M-Module 118 . 3 .
- the N-modules 114 . 1 - 114 . 3 include functionality that enable the respective nodes 108 . 1 - 108 . 3 to connect to one or more of the client systems 104 . 1 - 104 .N over network 106 and with other nodes via switching fabric 110 .
- the D-modules 116 . 1 - 116 . 3 connect to one or more of the storage devices 112 . 1 - 112 . 3 .
- the M-Modules 118 . 1 - 118 . 3 provide management functions for the clustered storage system 102 .
- the M-modules 118 . 1 - 118 . 3 may be used to store an address data structure ( 127 , FIG. 1B ) that is described below in detail.
- a switched virtualization layer including a plurality of virtual interfaces (VIFs) 120 is provided to interface between the respective N-modules 114 . 1 - 114 . 3 and the client systems 104 . 1 - 104 .N, allowing storage 112 . 1 - 112 . 3 associated with the nodes 108 . 1 - 108 . 3 to be presented to the client systems 104 . 1 - 104 .N as a single shared storage pool.
- the clustered storage system 102 can be organized into any suitable number of virtual servers (may also be referred to as “Vservers” or virtual storage machines).
- a Vserver is a virtual representation of a physical storage controller/system and is presented to a client system for storing information at storage devices 112 .
- Each Vserver represents a single storage system namespace with independent network access.
- Each Vserver has a user domain and a security domain that are separate from the user and security domains of other Vservers. Moreover, each Vserver is associated with one or more VIFs 120 and can span one or more physical nodes, each of which can hold one or more VIFs 120 and storage associated with one or more Vservers. Client systems can access the data on a Vserver from any node of the clustered system, but only through the VIFs associated with that Vserver.
- Each of the nodes 108 . 1 - 108 . 3 is defined as a computing system to provide application services to one or more of the client systems 104 . 1 - 104 .N.
- the nodes 108 . 1 - 108 . 3 are interconnected by the switching fabric 110 , which, for example, may be embodied as a switch or any other type of connecting device.
- although FIG. 1A depicts an equal number (i.e., 3 ) of the N-modules 114 . 1 - 114 . 3 , the D-modules 116 . 1 - 116 . 3 , and the M-Modules 118 . 1 - 118 . 3 , any other suitable number of N-modules, D-modules, and M-Modules may be provided.
- the clustered storage system 102 may include a plurality of N-modules and a plurality of D-modules interconnected in a configuration that does not reflect a one-to-one correspondence between the N-modules and D-modules.
- Each client system may request the services of one of the respective nodes 108 . 1 , 108 . 2 , 108 . 3 , and that node may return the results of the services requested by the client system by exchanging packets over the computer network 106 , which may be wire-based, optical fiber, wireless, or any other suitable combination thereof.
- the client systems may issue packets according to file-based access protocols, such as the NFS or CIFS protocol, when accessing information in the form of files and directories.
- System 100 also includes a management console 122 executing a management application 121 out of a memory.
- Management console 122 may be used to configure and manage various elements of system 100 .
- the nodes of cluster 102 operate together using a given Internet Protocol (IP) address space.
- a Vserver is typically presented with an IP address and a port identifier (referred to as a logical interface or “LIF”).
- the port identifier identifies a port of a network interface that is used for network communication, i.e., for sending and receiving information from clients or other devices outside the cluster 102 .
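- As an illustration only, a LIF can be thought of as a small record pairing a network address with a port identifier; the sketch below is a hypothetical Python rendering of that pairing (the field names and the example port name are assumptions, not taken from the disclosure):

```python
from typing import NamedTuple

class LogicalInterface(NamedTuple):
    """Hypothetical rendering of a LIF: an IP address plus the identifier of
    the network-interface port used to reach devices outside the cluster."""
    ip_address: str   # e.g. "216.27.61.137"
    port_id: str      # identifies the NIC/VNIC port; "e0a" is an assumed name

lif = LogicalInterface(ip_address="216.27.61.137", port_id="e0a")
print(lif.ip_address, lif.port_id)
```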
- most networks today use the TCP/IP protocol for network communication. In the TCP/IP protocol, an IP address is a network access address that is used to uniquely identify a computing device. As an example, there are two standards for IP addresses: IP Version 4 (IPv4) and IP Version 6 (IPv6).
- IPv4 uses 32 binary bits to create a single unique address on the network.
- An IPv4 address is expressed by four numbers separated by dots. Each number is the decimal (base-10) representation for an eight-digit binary (base-2) number, also called an octet, for example: 216.27.61.137.
- IPv6 uses 128 binary bits to create a single unique address on the network.
- An IPv6 address is expressed by eight groups of hexadecimal (base-16) numbers separated by colons.
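- For concreteness, the following sketch uses Python's standard ipaddress module to show the 32-bit and 128-bit representations described above; the IPv6 address is an arbitrary documentation example, not one taken from the disclosure:

```python
import ipaddress

# IPv4: four dotted octets form one 32-bit value.
v4 = ipaddress.ip_address("216.27.61.137")
print(int(v4))             # 3625663881, i.e. 0xd81b3d89
print(v4.packed.hex())     # "d81b3d89" - the same four octets in hexadecimal

# IPv6: eight colon-separated hexadecimal groups form one 128-bit value.
v6 = ipaddress.ip_address("2001:db8::8a2e:370:7334")
print(v6.exploded)         # "2001:0db8:0000:0000:0000:8a2e:0370:7334"
print(len(v6.packed) * 8)  # 128 bits
```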
- An IP address can be either dynamic or static.
- a static address is one that a user can configure.
- Dynamic addresses are assigned using a Dynamic Host Configuration Protocol (DHCP), a service running on a network.
- DHCP typically runs on network hardware such as routers or dedicated DHCP servers.
- a Vserver at a first cluster node may have to establish a network connection or communication on behalf of a second cluster node.
- the communication may be to send or receive a management command from management console 122 .
- the Vserver may not have any LIFs at all or the first cluster node may not own the LIF for making the connection.
- the various aspects described herein enable the first cluster node to make a network connection on behalf of the second cluster node as described below in detail.
- FIG. 1B shows an example of a clustered system 101 , similar to cluster 102 , except with four nodes 108 . 1 - 108 . 4 for presenting one or more Vservers 140 A- 140 C to client systems 104 . 1 - 104 .N.
- a node in cluster 101 is able to make a network connection using a LIF that is owned and managed by another node.
- Nodes 108 . 1 - 108 . 4 may communicate with each other using cluster adapters 129 via the cluster network 110 .
- Each Vserver is assigned one or more unique LIFs to communicate with any device outside the cluster.
- Each LIF includes an external IP address by which clients connect to a node. The IP address may be static or dynamic and is assigned when the cluster is being configured.
- Each node stores an address data structure (ADS) 127 that enables a Vserver to initiate communication with a device outside the cluster using an IP address that is assigned to and managed by (i.e. owned by) another node.
- ADS 127 is used to store a listing of all IP addresses within the cluster, the Vservers that are associated with the IP addresses and the nodes that own specific IP addresses. ADS 127 is described below in detail with respect to FIG. 1C .
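- A minimal sketch of how ADS 127 might be modeled is shown below; the class and field names are assumptions made for illustration, and only the three pieces of information named above (IP address, associated Vserver, and owning node/NIC) are represented:

```python
from dataclasses import dataclass, field

@dataclass
class AddressEntry:
    """One row of the address data structure: an IP address, the Vserver
    that uses it, and the node/NIC that owns (physically hosts) it."""
    ip_address: str
    vserver: str
    owning_node: str
    owning_nic: str

@dataclass
class AddressDataStructure:
    entries: dict = field(default_factory=dict)   # keyed by IP address

    def add(self, entry: AddressEntry) -> None:
        self.entries[entry.ip_address] = entry

    def owner_of(self, ip_address: str) -> str:
        return self.entries[ip_address].owning_node

# Hypothetical contents mirroring FIG. 1B: IP address 126.2 is owned by
# Node 2 (NIC 2A) but is also configured on VNIC 1A so that Node 1 can
# communicate on behalf of Vserver 2.
ads = AddressDataStructure()
ads.add(AddressEntry("126.2", vserver="Vs2", owning_node="Node2", owning_nic="NIC2A"))
print(ads.owner_of("126.2"))   # "Node2"
```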
- Node 1 108 . 1 includes a physical network interface card (NIC) 1 124 . 1 .
- NIC 124 . 1 may be used by the node to communicate with clients.
- IP address 126 . 1 is associated with NIC 1 124 . 1 and Vserver 1 (Vs 1 ) 140 A may use IP address 126 . 1 to communicate with clients 104 .
- Node 1 108 . 1 may have more than one physical NIC and/or LIFs.
- Node 1 108 . 1 also uses a plurality of virtual NICs (VNICs) to communicate with devices outside the clustered storage system on behalf of other nodes.
- a VNIC is a virtual representation of a physical NIC. It is noteworthy that the Vserver node can use either the physical NIC or a VNIC for communication.
- Node 1 108 . 1 includes or uses VNIC 1 A 128 . 1 for communication on behalf of Vserver 2 140 B and VNIC 1 B 128 . 2 for communication on behalf of Vserver 3 140 C.
- VNIC 1 A 128 . 1 is associated with IP addresses 126 . 2 , 126 . 3 and 126 . 5
- VNIC 1 B 128 . 2 is associated with IP addresses 126 . 4 and 126 . 6 .
- IP addresses 126 . 2 , 126 . 3 , 126 . 4 , 126 . 5 and 126 . 6 are owned by other nodes and not by Node 1 108 . 1
- Node 2 108 . 2 includes NICs 2 A 124 . 2 A and 124 . 2 B associated with IP addresses 126 . 2 and 126 . 6 , respectively.
- Vserver 2 140 B may use NIC 2 A 124 . 2 A and Vserver 3 140 C may use NIC 2 B 124 . 2 B for communication outside the cluster via Node 2 108 . 2 .
- Node 2 108 . 2 uses VNIC 2 A 128 . 3 with IP address 126 . 1 that is owned by Node 1 108 . 1 .
- Node 2 108 . 2 may also use VNIC 2 B for establishing a connection for Vserver 2 140 B on behalf of Node 3 108 . 3 .
- IP addresses 126 . 3 and 126 . 5 that are owned by Node 3 108 . 3 are used to make the connection.
- Node 2 108 . 2 may use VNIC 2 C 128 . 3 A with IP address 126 . 4 to make a connection on behalf of Node 4 108 . 4 for Vserver 3 140 C.
- IP address 126 . 4 is owned by Node 4 108 . 4 .
- Node 3 108 . 3 includes NIC 3 124 . 3 with IP addresses 126 . 3 and 126 . 5 .
- Vserver 2 140 B may use these IP addresses to communicate with clients and other entities/devices.
- Node 3 108 . 3 also uses VNIC 3 A 128 . 5 with IP address 126 . 1 owned by Node 1 108 . 1 to communicate on behalf of Node 1 108 . 1 /Vserver 1 140 A.
- Node 3 108 . 3 may also use VNIC 3 B 128 . 6 with IP address 126 . 2 to communicate on behalf of Node 2 108 . 2 /Vserver 2 140 B.
- Node 3 108 . 3 further uses VNIC 3 C with IP addresses 126 . 4 and 126 . 6 to communicate on behalf of Vserver 3 140 C, even though IP addresses 126 . 4 and 126 . 6 are owned by Node 4 108 . 4 and Node 2 108 . 2 , respectively.
- Node 4 108 . 4 includes NIC 4 124 . 4 with IP address 126 . 4 .
- Vserver 3 140 C may use IP address 126 . 4 to communicate with clients.
- Node 4 108 . 4 also includes a plurality of VNICs to communicate on behalf of other nodes using IP addresses that are owned by the other nodes.
- VNIC 4 A 128 . 8 is used to communicate on behalf of Node 1 108 . 1 for Vserver 1 140 A using the IP address 126 . 1 owned by Node 1 108 . 1 .
- VNIC 4 B 128 . 9 may be used to communicate on behalf of Node 2 or Node 3 for Vserver 2 140 B using the IP address 126 . 2 , owned by Node 2 108 . 2 .
- VNIC 4 C 128 . 9 A may be used to communicate on behalf of Node 2 108 . 2 for Vserver 3 140 C using the IP address 126 . 6 that is owned by Node 2 108 . 2
- a node may use a local LIF or a remote LIF for communicating on behalf of Vservers.
- Node 2 108 . 2 may use the local IP address 126 . 2 for communicating on behalf of Vserver 2 or the non-local IP addresses 126 . 3 and 126 . 5 that are owned by Node 3 108 . 3 .
- ADS 127 :
- FIG. 1C illustrates an example of ADS 127 with a plurality of segments 127 A, 127 B and 127 C.
- ADS 127 enables the configuration that allows one node to communicate on behalf of another node, according to one aspect.
- segment 127 A stores a listing of IP addresses, Vservers, Nodes and NICs
- segment 127 B is a routing data structure for each IP address and the associated Vserver.
- Segment 127 C is maintained to track port reservations made by one node to communicate on behalf of another node. For example, if Node 1 108 . 1 intends to communicate on behalf of a Vserver having a LIF that is owned by another node, then segment 127 C tracks the IP address, a node identifier identifying the node, a port identifier (Port #) that is used by the node and a protocol used for the communication, for example, TCP.
- Segment 127 A includes various IP addresses, for example, 126 . 1 - 126 . 6 .
- Each IP address is associated with a Vserver, for example, 126 . 1 is assigned to Vs 1 (Vserver 1 ), IP addresses 126 . 2 , 126 . 3 , and 126 . 5 are assigned to Vs 2 (Vserver 2 ) and IP addresses 126 . 4 and 126 . 6 are assigned to Vs 3 (Vserver 3 ).
- segment 127 A is used to generate the various VNICs used by the different nodes.
- VNICs are described above with respect to FIG. 1B and can be used by any Vserver node to communicate on behalf of another Vserver node, even when the physical IP address is not present at the particular node of the Vserver.
- Segment 127 B is a routing data structure with a destination (shown as x.x.x.x), a subnet mask (for example, 255.255.255.0) and a gateway (for example, 10.98.10.1) associated with a Vserver (for example, VS 1 ).
- a Subnet is a logical, visible portion of an IP network. All network devices of a subnet are addressed with a common, identical, most-significant bit-group in their IP address. This results in the logical division of an IP address into two fields, a network or routing prefix and a host identifier that identifies a network interface.
- a gateway address is also assigned to the subnet and is used by a computing device within the subnet for routing information.
- a subnet mask is associated with each IP address of a NIC/VNIC. The mask is used to select the NIC/VNIC which can reach the gateway once its IP address has been established.
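- A small sketch of the mask-based selection described above follows; it assumes concrete addresses for illustration and uses Python's ipaddress module to test whether a gateway falls inside the subnet of a given NIC/VNIC address:

```python
import ipaddress

def select_interface(gateway: str, interfaces: dict, mask: str = "255.255.255.0"):
    """Return the name of the NIC/VNIC whose IP address shares a subnet with
    the gateway, i.e. matches it in the most-significant bits kept by the mask."""
    gw = ipaddress.ip_address(gateway)
    for name, ip in interfaces.items():
        subnet = ipaddress.ip_network(f"{ip}/{mask}", strict=False)
        if gw in subnet:
            return name
    return None

# Assumed example: the routing entry's gateway 10.98.10.1 is reachable only
# through the interface configured on the 10.98.10.0/24 subnet.
print(select_interface("10.98.10.1",
                       {"vnic1a": "10.98.10.25", "nic1": "192.168.5.7"}))
# -> "vnic1a"
```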
- Segment 127 C identifies Node 1 as having a port reservation for Port P 1 for communicating on behalf of Node 2 108 . 2 for Vserver 2 using IP address 126 . 2 .
- the port reservation allows Node 1 to communicate without any conflict with other uses of IP address 126 . 2 .
- the process for making the port reservation is now described below with respect to FIG. 2A .
- FIG. 2A shows a process 268 that occurs prior to creating a connection using a VNIC, according to one aspect.
- the connection may be originated by Vserver 2 at Node 1 108 . 1 .
- the process begins in block B 270 .
- VNIC 1 A 128 . 1 looks up, in segment 127 A of ADS 127 , the node that owns the IP address for the connection, for example, 126 . 2 owned by Node 2 108 . 2 .
- Node 1 108 . 1 sends a port reservation request to Node 2 108 . 2 over the cluster network 110 .
- the port reservation request is made so that a port can be reserved for Node 1 108 . 1 to originate packets with the reserved port as the source port and IP address 126 . 2 as the source address, and so that Node 2 108 . 2 can transmit packets via its NIC 2 A, which owns IP address 126 . 2 .
- if the port reservation request is successful, then in block B 276 , a connection is established by Node 1 108 . 1 with an external client using a source address that is owned by Node 2 (for example, 126 . 2 ), and the newly reserved source port enables Node 2 to transmit packets for Node 1 . If the request is unsuccessful, then the connection attempt is aborted in block B 278 .
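- The sketch below walks through process 268 under stated assumptions: the ADS lookup, a remote call asking the owning node to reserve a source port, and the abort path; the helper names and the rpc callable are hypothetical and not part of the disclosure:

```python
def originate_connection(ads, local_node, source_ip, destination, rpc):
    """Hypothetical rendering of process 268 (FIG. 2A)."""
    # Look up the node that owns the IP address chosen as the source address.
    owning_node = ads.owner_of(source_ip)

    # Send a port reservation request to the owning node over the cluster network.
    reserved_port = rpc(owning_node, "reserve_port",
                        ip=source_ip, protocol="tcp", requester=local_node)

    if reserved_port is None:
        # Block B278: the reservation failed, so the connection attempt is aborted.
        raise ConnectionError("port reservation failed")

    # Block B276: connect to the external client using the non-local source
    # address and the newly reserved source port; the owning node can now
    # transmit packets for this connection via its physical NIC.
    return {"src": (source_ip, reserved_port), "dst": destination}
```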
- FIG. 2B shows a process 200 for configuring IP addresses, according to one aspect of the present disclosure.
- ADS 127 may be used to configure the IP addresses and generate the associated VNICs.
- the various process blocks of FIG. 2B may be executed by the management application 121 .
- Process 200 begins in block B 202 , when management console 122 and nodes 108 . 1 - 108 . 4 are initialized and operational.
- a VNIC is generated for a node.
- the VNIC can be used by a Vserver to communicate with devices outside the cluster.
- the VNIC is generated when a LIF is configured for a NIC using the management application 121 .
- a graphical user interface (GUI) and/or a command line interface (CLI) may be presented by the management console 122 to directly configure the NIC, which results in creating a VNIC.
- a VNIC is created when a first LIF is created for a Vserver. Subsequent LIFs are simply added to an existing VNIC.
- a VNIC is created for the LIF on all cluster nodes except for the node with the physical NIC on which the IP address is configured.
- the VNIC can be configured in various ways: for example, an IP address may be configured for the VNIC, a non-local IP address may be un-configured, an IP address may be moved to a new node, or a new route may be configured or removed for a Vserver. It is noteworthy that the VNIC configuration process occurs in parallel as a response to NIC configuration. The local/non-local distinction is with respect to the node that owns the NIC on which an IP address is configured. For a VNIC, the IP address is non-local.
- VNIC configurations are now described in detail with respect to various components/modules of FIGS. 1 B/ 1 C. It is noteworthy that the various process blocks described below may occur in parallel or at different times, depending on whether a first LIF is being created or a LIF is being moved or deleted.
- an IP address is configured for a non-local node.
- IP addresses 126 . 2 , 126 . 3 and 126 . 5 are configured for VNIC 1 A 128 . 1 at Node 1 108 . 1 as a result of being configured for their respective NICs on nodes 108 . 2 and 108 . 3 .
- the VNIC 1 A 128 . 1 can be used by Node 1 108 . 1 to communicate on behalf of Node 2 108 . 2 and Node 3 108 . 3 and Vserver 2 140 B.
- IP addresses 126 . 2 , 126 . 3 and 126 . 5 are not owned by Node 1 108 . 1 .
- IP addresses for the other VNICs may be configured. It is noteworthy that an IP address for a NIC on one node is configured on a VNIC of every other node of a cluster.
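- The rule stated above (an IP address configured on one node's NIC is configured on a VNIC of every other node) might look roughly like the following; the node and VNIC helper methods are assumptions used only to make the flow concrete:

```python
def configure_lif(cluster_nodes, owning_node, vserver, ip_address):
    """Hypothetical sketch of the VNIC configuration described above: mirror a
    newly configured IP address onto a VNIC of every node except the owner."""
    for node in cluster_nodes:
        if node is owning_node:
            continue                          # the owner uses its physical NIC
        vnic = node.vnic_for(vserver)         # the first LIF creates the VNIC;
        if vnic is None:                      # later LIFs are added to it
            vnic = node.create_vnic(vserver)
        vnic.add_address(ip_address)          # the address is non-local here
        node.ads.record(ip_address, vserver, owning_node)  # update segment 127A
```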
- segment 127 A of ADS 127 is updated to reflect the node location of the new LIF for the VNIC.
- any routes from segment 127 B of the Vserver that are newly supported by the new non-local IP address are configured against the VNIC.
- a non-local IP address is un-configured for a Vserver, according to one aspect. Any routes that are supported only by that IP address are disabled. Thereafter, the non-local IP address for the VNIC is deleted in block B 214 . It is noteworthy that if the deleted LIF is the last LIF for the VNIC, then the VNIC is also deleted.
- an existing IP address for a Vserver is moved to a new node.
- an IP address 126 . 1 owned and managed by Node 1 108 . 1 may be moved to Node 2 108 . 2 .
- the new IP address is then configured at the new node.
- the IP address is configured on a VNIC.
- Process block B 216 includes process blocks B 206 , B 208 and B 210 , described above.
- the moved IP address is un-configured from a VNIC in block B 218 , which includes blocks B 212 and B 214 .
- ADS 127 is updated to indicate that the new node manages the IP address.
- a new route for a Vserver is configured and added to ADS 127 B. If its gateway is reachable by one or more of the addresses of the VNIC, the route is configured in a network stack (not shown) against this VNIC.
- when a route is unconfigured from the Vserver and removed from ADS 127 B, then in block B 224 , the route is unconfigured from the network stack for the VNIC, if currently configured.
- FIG. 2C shows a process 226 for transmitting a packet by a node, for example, by Vserver 2 140 B of Node 1 108 . 1 on behalf of Node 2 108 . 2 using IP address 126 . 2 that is owned by Node 2 .
- a packet for Vserver 2 140 B is received at VNIC 1 A 128 . 1 .
- segment 127 A is used to determine the node that owns the source IP address for the packet, for example, Node 2 108 . 2 , if the IP address was 126 . 2 .
- the packet is encapsulated in a message and then in block B 236 , the message is sent to Node 2 108 . 2 for transmitting the packet.
- FIG. 2D shows a process 240 , where the message from process 226 is received at Node 2 108 . 2 .
- the packet is de-capsulated from the message in block B 242 .
- the packet is parsed and a packet header is validated.
- Node 2 108 . 2 uses data structure 127 to determine the NIC that owns the source IP address.
- NIC 2 A 124 . 2 A owns the IP address 126 . 2 .
- the packet is transmitted to its destination by Vserver 2 140 B of Node 2 108 . 2 .
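- Taken together, processes 226 and 240 might be sketched as below; the message format, the cluster_send callable, and the nic objects are assumptions, and the ads object follows the earlier hypothetical AddressDataStructure sketch:

```python
def vnic_transmit(packet, ads, cluster_send):
    """Process 226 (originating node): a packet handed to a VNIC is looked up
    in segment 127A, encapsulated in a message and sent to the owning node."""
    owning_node = ads.owner_of(packet["src_ip"])
    cluster_send(owning_node, {"type": "tx", "payload": packet})

def handle_tx_message(message, ads, nics):
    """Process 240 (owning node): de-capsulate the packet, validate its header,
    find the NIC that owns the source IP address and transmit the packet."""
    packet = message["payload"]
    if "src_ip" not in packet or "dst_ip" not in packet:
        return                                     # header validation (sketch)
    owning_nic = nics[ads.entries[packet["src_ip"]].owning_nic]
    owning_nic.send(packet)                        # transmit to the destination
```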
- FIG. 2E shows a process 248 for receiving a packet from outside the cluster, for example, a client, management console or any other device, according to one aspect.
- Process 248 starts in block B 250 , when a client system is initialized and operational.
- a packet is received at a node, for example, Node 2 108 . 2 .
- the packet may be received by Vserver 2 140 B.
- Vserver 2 140 B determines if the connection for the packet originated from another node. This is determined by using segment 127 C of ADS 127 to look up a port reservation for an IP address, which then provides the originating node.
- if the other node originated the connection, the packet is encapsulated in a message and forwarded to that node; otherwise, the packet is processed by Node 2 108 . 2 , which originated the connection.
- FIG. 2F shows a process 258 , when a message is received by the other node that originated the connection (for example, Vserver 2 140 B of Node 1 108 . 1 ).
- the process begins in block B 260 , when Node 2 108 . 2 forwards the packet to Node 1 108 . 1 in a message.
- Node 1 108 . 1 receives the message via cluster network 110 .
- the packet is de-capsulated in block B 264 and Node 1 determines the VNIC that holds the destination address. Thereafter, the packet is provided as input to the VNIC in block B 266 .
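- The receive side (processes 248 and 258) can be sketched in the same hypothetical style; the port-reservation table stands in for segment 127C, and the callables and message format are assumptions:

```python
def handle_inbound_packet(packet, local_node, port_reservations,
                          process_locally, cluster_send):
    """Process 248: a packet arriving from outside the cluster is processed
    locally unless segment 127C shows that another node originated the
    connection, in which case it is encapsulated and forwarded to that node."""
    key = (packet["dst_ip"], packet["dst_port"], "tcp")
    originating_node = port_reservations.get(key, local_node)
    if originating_node == local_node:
        process_locally(packet)
    else:
        cluster_send(originating_node, {"type": "rx", "payload": packet})

def handle_rx_message(message, vnics):
    """Process 258 (originating node): de-capsulate the forwarded packet and
    provide it as input to the VNIC that holds the destination address."""
    packet = message["payload"]
    vnics[packet["dst_ip"]].receive(packet)
```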
- the various aspects described above enable a node of a Vserver to communicate on behalf of other nodes.
- the ADS 127 is stored and maintained at all nodes and this enables the Vservers to use all the IP addresses within the cluster.
- FIG. 3 is a block diagram of node 108 . 1 that is illustratively embodied as a storage system comprising of a plurality of processors 302 A and 302 B, a memory 304 , a network adapter 310 , a cluster access adapter 312 , a storage adapter 316 and local storage 313 interconnected by a system bus 308 .
- Processors 302 A- 302 B may be used to maintain ADS 127 that has been described above in detail.
- Processors 302 A- 302 B may be, or may include, one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such hardware devices.
- the local storage 313 comprises one or more storage devices utilized by the node to locally store configuration information, for example, in a configuration data structure 314 .
- the cluster access adapter 312 (similar to 129 , FIG. 1B ) comprises a plurality of ports adapted to couple node 108 . 1 to other nodes of cluster 100 .
- Ethernet may be used as the clustering protocol and interconnect media, although it will be apparent to those skilled in the art that other types of protocols and interconnects may be utilized within the cluster architecture described herein.
- the cluster access adapter 312 is utilized by the N/D-module for communicating with other N/D-modules in the cluster 100 / 101 .
- Node 108 . 1 is illustratively embodied as a dual processor storage system executing a storage operating system 306 that preferably implements a high-level module, such as a file system, to logically organize the information as a hierarchical structure of named directories and files on storage 112 .
- the node 108 . 1 may alternatively comprise a single or more than two processor systems.
- one processor 302 A executes the functions of the M-module 118 , N-module 114 on the node, while the other processor 302 B executes the functions of the D-module 116 .
- a dedicated processor may execute the functions of M-module 118 .
- the memory 304 illustratively comprises storage locations that are addressable by the processors and adapters for storing programmable instructions and data structures.
- the processor and adapters may, in turn, comprise processing elements and/or logic circuitry configured to execute the programmable instructions and manipulate the data structures. It will be apparent to those skilled in the art that other processing and memory means, including various computer readable media, may be used for storing and executing program instructions pertaining to the presented disclosure.
- the storage operating system 306 , portions of which are typically resident in memory and executed by the processing elements, functionally organizes the node 108 . 1 by, inter alia, invoking storage operations in support of the storage service implemented by the node.
- the network adapter (or NIC) 310 (similar to NICs 124 . 1 , 124 . 2 A, 124 . 2 B, 124 . 3 and 124 . 4 ) comprises a plurality of ports adapted to couple the node 108 . 1 to one or more clients over point-to-point links, wide area networks, virtual private networks implemented over a public network (Internet) or a shared local area network.
- the network adapter 310 thus may comprise the mechanical, electrical and signaling circuitry needed to connect the node to the network.
- the storage adapter 316 cooperates with the storage operating system 306 executing on the node 108 . 1 to access information requested by the clients.
- the information may be stored on any type of attached array of writable storage device media such as video tape, optical, DVD, magnetic tape, bubble memory, electronic random access memory, micro-electro mechanical and any other similar media adapted to store information, including data and parity information.
- the information is preferably stored on storage device 112 .
- the storage adapter 316 comprises a plurality of ports having input/output (I/O) interface circuitry that couples to the storage devices over an I/O interconnect arrangement, such as a conventional high-performance, FC link topology.
- FIG. 4 illustrates a generic example of storage operating system 306 executed by node 108 . 1 that interfaces with management application 121 , according to one aspect of the present disclosure.
- storage operating system 306 may include several modules, or “layers”, executed by one or both of N-Module 114 and D-Module 116 . These layers include a file system manager 400 that keeps track of a directory structure (hierarchy) of the data stored in storage devices and manages read/write operations, i.e., executes read/write operations on storage in response to client requests.
- Storage operating system 306 may also include a protocol layer 402 and an associated network access layer 406 , to allow node 108 . 1 to communicate over a network with other systems.
- Protocol layer 402 may implement one or more of various higher-level network protocols, such as NFS, CIFS, Hypertext Transfer Protocol (HTTP), TCP/IP and others, as described below.
- Network access layer 406 may include one or more drivers, which implement one or more lower-level protocols to communicate over the network, such as Ethernet. Interactions between clients and mass storage devices 112 are illustrated schematically as a path, which illustrates the flow of data through storage operating system 306 .
- the storage operating system 306 may also include a storage access layer 404 and an associated storage driver layer 408 to allow D-module 116 to communicate with a storage device.
- the storage access layer 404 may implement a higher-level storage protocol, such as RAID (redundant array of inexpensive disks), while the storage driver layer 408 may implement a lower-level storage device access protocol, such as FC or SCSI.
- the storage driver layer 408 may maintain various data structures (not shown) for storing information regarding LUNs, storage volumes, aggregates and various storage devices.
- the term “storage operating system” generally refers to the computer-executable code operable on a computer to perform a storage function that manages data access and may, in the case of a node 108 . 1 , implement data access semantics of a general purpose operating system.
- the storage operating system can also be implemented as a microkernel, an application program operating over a general-purpose operating system, such as UNIX® or Windows XP®, or as a general-purpose operating system with configurable functionality, which is configured for storage applications as described herein.
- the disclosure described herein may apply to any type of special-purpose (e.g., file server, filer or storage serving appliance) or general-purpose computer, including a standalone computer or portion thereof, embodied as or including a storage system.
- the teachings of this disclosure can be adapted to a variety of storage system architectures including, but not limited to, a network-attached storage environment, a storage area network and a storage device directly-attached to a client or host computer.
- the term “storage system” should therefore be taken broadly to include such arrangements in addition to any subsystems configured to perform a storage function and associated with other equipment or systems. It should be noted that while this description is written in terms of a write anywhere file system, the teachings of the present disclosure may be utilized with any suitable file system, including a write in place file system.
- FIG. 5 is a high-level block diagram showing an example of the architecture of a processing system 500 that may be used according to one aspect.
- the processing system 500 can represent the management console 122 or client 104 . Note that certain standard and well-known components which are not germane to the present disclosure are not shown in FIG. 5 .
- the processing system 500 includes one or more processor(s) 502 and memory 504 , coupled to a bus system 505 .
- the bus system 505 shown in FIG. 5 is an abstraction that represents any one or more separate physical buses and/or point-to-point connections, connected by appropriate bridges, adapters and/or controllers.
- the bus system 505 may include, for example, a system bus, a Peripheral Component Interconnect (PCI) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (sometimes referred to as “Firewire”).
- the processor(s) 502 are the central processing units (CPUs) of the processing system 500 and, thus, control its overall operation. In certain aspects, the processors 502 accomplish this by executing software stored in memory 504 .
- a processor 502 may be, or may include, one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices.
- Memory 504 represents any form of random access memory (RAM), read-only memory (ROM), flash memory, or the like, or a combination of such devices.
- Memory 504 includes the main memory of the processing system 500 .
- Instructions 506 , which implement the process steps described above with respect to FIGS. 2A-2F , may reside in memory 504 and be executed by processors 502 .
- Internal mass storage devices 510 may be, or may include any conventional medium for storing large volumes of data in a non-volatile manner, such as one or more magnetic or optical based disks.
- the network adapter 512 provides the processing system 500 with the ability to communicate with remote devices (e.g., storage servers) over a network and may be, for example, an Ethernet adapter, a Fibre Channel adapter, or the like.
- the processing system 500 also includes one or more input/output (I/O) devices 508 coupled to the bus system 505 .
- the I/O devices 508 may include, for example, a display device, a keyboard, a mouse, etc.
- Cloud computing means computing capability that provides an abstraction between the computing resource and its underlying technical architecture (e.g., servers, storage, networks), enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.
- the term “cloud” as used herein refers to a network (for example, the Internet) that enables providing computing as a service.
- Typical cloud computing providers deliver common business applications online which are accessed from another web service or software like a web browser, while the software and data are stored remotely on servers.
- the cloud computing architecture uses a layered approach for providing application services.
- a first layer is an application layer that is executed at client computers.
- the application allows a client to access storage via a cloud.
- after the application layer is a cloud platform and cloud infrastructure, followed by a “server” layer that includes hardware and computer software designed for cloud specific services.
- the storage provider 116 (and associated methods thereof) and storage systems described above can be a part of the server layer for providing storage services. Details regarding these layers are not germane to the inventive aspects.
- references throughout this specification to “one aspect” or “an aspect” mean that a particular feature, structure or characteristic described in connection with the aspect is included in at least one aspect of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an aspect” or “one aspect” or “an alternative aspect” in various portions of this specification are not necessarily all referring to the same aspect. Furthermore, the particular features, structures or characteristics being referred to may be combined as suitable in one or more aspects of the disclosure, as will be recognized by those of ordinary skill in the art.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
Methods and systems are provided for a clustered storage system. The method assigns a network access address to a virtual network interface card (VNIC) at a first cluster node of a clustered storage system, where a physical network interface card assigned to the network access address is managed by a second cluster node of the clustered storage system; the VNIC is then used by a virtual storage server at the first cluster node to communicate on behalf of the second cluster node.
Description
- The present disclosure relates to communication in networked storage systems.
- Various forms of storage systems are used today. These forms include direct attached storage (DAS) network attached storage (NAS) systems, storage area networks (SANs), and others. Network storage systems are commonly used for a variety of purposes, such as providing multiple users with access to shared data, backing up data and others.
- A storage system typically includes at least one computing system executing a storage operating system for storing and retrieving data on behalf of one or more client computing systems (“clients”). The storage operating system stores and manages shared data containers in a set of mass storage devices.
- Storage systems may include a plurality of nodes operating within a cluster for processing client requests. The nodes may use virtual network interfaces for client communication. Continuous efforts are being made for efficiently managing network addresses that are used by the cluster nodes.
- In one aspect, a machine implemented method is provided. The method includes assigning a network access address to a virtual network interface card (VNIC) at a first cluster node of a clustered storage system, where a physical network interface card assigned to the network access address is managed by a second cluster node of the clustered storage system; and using the VNIC by a virtual storage server at the first cluster node to communicate on behalf of the second cluster node.
- In another aspect, a non-transitory, machine readable storage medium having stored thereon instructions for performing a method is provided. The machine executable code which when executed by at least one machine, causes the machine to: assign a network access address to a virtual network interface card (VNIC) at a first cluster node of a clustered storage system, where a physical network interface card assigned to the network access address is managed by a second cluster node of the clustered storage system; and use the VNIC by a virtual storage server at the first cluster node to communicate on behalf of the second cluster node.
- In yet another aspect, a system having a memory with machine readable medium comprising machine executable code with stored instructions is provided. A processor module coupled to the memory is configured to execute the machine executable code to: assign a network access address to a virtual network interface card (VNIC) at a first cluster node of a clustered storage system, where a physical network interface card assigned to the network access address is managed by a second cluster node of the clustered storage system; and use the VNIC by a virtual storage server at the first cluster node to communicate on behalf of the second cluster node.
- This brief summary has been provided so that the nature of this disclosure may be understood quickly. A more complete understanding of the disclosure can be obtained by reference to the following detailed description of the various thereof in connection with the attached drawings.
- The foregoing features and other features will now be described with reference to the drawings of the various aspects. In the drawings, the same components have the same reference numerals. The illustrated aspects are intended to illustrate, but not to limit the present disclosure. The drawings include the following Figures:
-
FIGS. 1A-1B show examples of an operating environment for the various aspects disclosed herein; -
FIG. 1C shows an example of an address data structure, according to one aspect of the present disclosure; -
FIGS. 2A-2F show various process flow diagrams, according the various aspects of the present disclosure; -
FIG. 3 is an example of a storage node used in the cluster ofFIG. 2A , according to one aspect of the present disclosure; -
FIG. 4 shows an example of a storage operating system, used according to one aspect of the present disclosure; and -
FIG. 5 shows an example of a processing system, used according to one aspect of the present disclosure. - As a preliminary note, the terms “component”, “module”, “system,” and the like as used herein are intended to refer to a computer-related entity, either software-executing general purpose processor, hardware, firmware and a combination thereof. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
- By way of illustration, both an application running on a server and the server can be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. Also, these components can execute from various non-transitory computer readable media having various data structures stored thereon. The components may communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems via the signal).
- Computer executable components can be stored, for example, at non-transitory, computer readable media including, but not limited to, an ASIC (application specific integrated circuit), CD (compact disc), DVD (digital video disk), ROM (read only memory), floppy disk, hard disk, EEPROM (electrically erasable programmable read only memory), memory stick or any other storage device, in accordance with the claimed subject matter.
- Methods and systems are provided for a clustered storage system. The method assigns a network access address to a virtual network interface card (VNIC) at a first cluster node of a clustered storage system, where a physical network interface card assigned to the network access address is managed by a second cluster node of the clustered storage system; and use the VNIC by a virtual storage server at the first cluster node to communicate on behalf of the second cluster node.
- Clustered System:
-
FIG. 1A shows a cluster basedstorage environment 100 having a plurality of nodes for managing storage devices, according to one aspect.Storage environment 100 may include a plurality of client systems 104.1-104.N, a clusteredstorage system 102 and at least anetwork 106 communicably connecting the client systems 104.1-104.N and the clusteredstorage system 102. - The clustered
storage system 102 may include a plurality of nodes 108.1-108.3, acluster switching fabric 110, and a plurality of mass storage devices 112.1-112.3 (may be also be referred to as 112). The mass storage devices 112 may include writable storage device media such as magnetic disks, video tape, optical, DVD, magnetic tape, non-volatile memory devices for example, self-encrypting drives, flash memory devices and any other similar media adapted to store information. The storage devices 112 may be organized as one or more groups of Redundant Array of Independent (or Inexpensive) Disks (RAID). The aspects disclosed are not limited to any particular storage device or storage device configuration. - The
storage system 102 provides a set of storage volumes for storing information at storage devices 112. A storage operating system executed by the nodes ofstorage system 102 present or export data stored at storage devices 112 as a volume, or one or more qtree sub-volume units. Each volume may be configured to store data files (or data containers or data objects), scripts, word processing documents, executable programs, and any other type of structured or unstructured data. From the perspective of client systems, each volume can appear to be a single storage drive. However, each volume can represent the storage space in at one storage device, an aggregate of some or all of the storage space in multiple storage devices, a RAID group, or any other suitable set of storage space. - The
storage system 102 may be used to store and manage information at storage devices 112 based on a client request. The request may be based on file-based access protocols, for example, the Common Internet File System (CIFS) protocol or Network File System (NFS) protocol, over the Transmission Control Protocol/Internet Protocol (TCP/IP). Alternatively, the request may use block-based access protocols, for example, the Small Computer Systems Interface (SCSI) protocol encapsulated over TCP (iSCSI) and SCSI encapsulated over Fibre Channel (FCP). - Each of the plurality of nodes 108.1-108.3 is configured to include an N-module, a D-module, and an M-Module, each of which can be implemented as a processor executable module. For example, node 108.1 includes N-module 114.1, D-module 116.1, and M-Module 118.1, node 108.2 includes N-module 114.2, D-module 116.2, and M-Module 118.2, and node 108.3 includes N-module 114.3, D-module 116.3, and M-Module 118.3.
- The N-modules 114.1-114.3 include functionality that enable the respective nodes 108.1-108.3 to connect to one or more of the client systems 104.1-104.N over
network 106 and with other nodes via switchingfabric 110. The D-modules 116.1-116.3 connect to one or more of the storage devices 112.1-112.3. The M-Modules 118.1-118.3 provide management functions for the clusteredstorage system 102. The M-modules 118.1-118.3 may be used to store an address data structure (127,FIG. 1B ) that is described below in detail. - A switched virtualization layer including a plurality of virtual interfaces (VIFs) 120 is provided to interface between the respective N-modules 114.1-114.3 and the client systems 104.1-104.N, allowing storage 112.1-112.3 associated with the nodes 108.1-108.3 to be presented to the client systems 104.1-104.N as a single shared storage pool.
- In one aspect, the clustered
storage system 102 can be organized into any suitable number of virtual servers (may also be referred to as “Vservers” or virtual storage machines). A Vserver is a virtual representation of a physical storage controller/system and is presented to a client system for storing information at storage devices 112. Each Vserver represents a single storage system namespace with independent network access. - Each Vserver has a user domain and a security domain that are separate from the user and security domains of other Vservers. Moreover, each Vserver is associated with one or more VIFs 120 and can span one or more physical nodes, each of which can hold one or more VIFs 120 and storage associated with one or more Vservers. Client systems can access the data on a Vserver from any node of the clustered system, but only through the VIFs associated with that Vserver.
- Each of the nodes 108.1-108.3 is defined as a computing system to provide application services to one or more of the client systems 104.1-104.N. The nodes 108.1-108.3 are interconnected by the switching
fabric 110, which, for example, may be embodied as a switch or any other type of connecting device. - Although
FIG. 1A depicts an equal number (i.e., 3) of the N-modules 114.1-114.3, the D-modules 116.1-116.3, and the M-Modules 118.1-118.3, any other suitable number of N-modules, D-modules, and M-Modules may be provided. There may also be different numbers of N-modules, D-modules, and/or M-Modules within the clusteredstorage system 102. For example, in alternative aspects, the clusteredstorage system 102 may include a plurality of N-modules and a plurality of D-modules interconnected in a configuration that does not reflect a one-to-one correspondence between the N-modules and D-modules. - Each client system may request the services of one of the respective nodes 108.1, 108.2, 108.3, and that node may return the results of the services requested by the client system by exchanging packets over the
computer network 106, which may be wire-based, optical fiber, wireless, or any other suitable combination thereof. The client systems may issue packets according to file-based access protocols, such as the NFS or CIFS protocol, when accessing information in the form of files and directories. -
System 100 also includes amanagement console 122 executing amanagement application 121 out of a memory.Management console 122 may be used to configure and manage various elements ofsystem 100. - In one aspect, the nodes of
cluster 102 operate together using a given Internet Protocol (IP) address space. A Vserver is typically presented with an IP address and a port identifier (referred to as a logical interface or "LIF"). The port identifier identifies a port of a network interface that is used for network communication, i.e., for sending and receiving information from clients or other devices outside the cluster 102. - Most networks today use the TCP/IP protocol for network communication. In the TCP/IP protocol, an IP address is a network access address that is used to uniquely identify a computing device. There are two standards for IP addresses: IP Version 4 (IPv4) and IP Version 6 (IPv6). IPv4 uses 32 binary bits to create a single unique address on the network. An IPv4 address is expressed as four numbers separated by dots. Each number is the decimal (base-10) representation of an eight-digit binary (base-2) number, also called an octet, for example: 216.27.61.137.
- IPv6 uses 128 binary bits to create a single unique address on the network. An IPv6 address is expressed by eight groups of hexadecimal (base-16) numbers separated by colons.
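- For illustration only (this snippet is not part of the patented system), the two address formats can be compared with Python's standard ipaddress module; the sample values below are arbitrary documentation addresses.

```python
# Illustrative only: shows the 32-bit vs. 128-bit structure of IPv4/IPv6
# addresses using Python's standard ipaddress module. The sample values are
# arbitrary and are not addresses used by the clustered system described here.
import ipaddress

v4 = ipaddress.ip_address("216.27.61.137")             # four decimal octets
v6 = ipaddress.ip_address("2001:db8::8a2e:370:7334")   # eight hexadecimal groups

print(v4.version, len(v4.packed) * 8)   # -> 4 32
print(v6.version, len(v6.packed) * 8)   # -> 6 128
```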
- An IP address can be either dynamic or static. A static address is one that a user can configure. Dynamic addresses are assigned using a Dynamic Host Configuration Protocol (DHCP), a service running on a network. DHCP typically runs on network hardware such as routers or dedicated DHCP servers.
- A Vserver at a first cluster node may have to establish a network connection or communication on behalf of a second cluster node. The communication may be to send or receive a management command from
management console 122. The Vserver may not have any LIFs at all or the first cluster node may not own the LIF for making the connection. The various aspects described herein enable the first cluster node to make a network connection on behalf of the second cluster node as described below in detail. -
FIG. 1B shows an example of a clustered system 101, similar to cluster 102, except with four nodes 108.1-108.4 for presenting one or more Vservers 140A-140C to client systems 104.1-104.N. As described below in detail, a node in cluster 101 is able to make a network connection using a LIF that is owned and managed by another node. Nodes 108.1-108.4 may communicate with each other using cluster adapters 129 via the cluster network 110. - Each Vserver is assigned one or more unique LIFs to communicate with any device outside the cluster. Each LIF includes an external IP address by which clients connect to a node. The IP address may be static or dynamic and is assigned when the cluster is being configured.
- Each node stores an address data structure (ADS) 127 that enables a Vserver to initiate communication with a device outside the cluster using an IP address that is assigned to and managed by (i.e. owned by) another node.
ADS 127 is used to store a listing of all IP addresses within the cluster, the Vservers that are associated with the IP addresses, and the nodes that own specific IP addresses. ADS 127 is described below in detail with respect to FIG. 1C. - As an example,
Node 1 108.1 includes a physical network interface card (NIC) 1 124.1. NIC 124.1 may be used by the node to communicate with clients. IP address 126.1 is associated with NIC 1 124.1, and Vserver 1 (Vs1) 140A may use IP address 126.1 to communicate with clients 104. It is noteworthy that Node 1 108.1 may have more than one physical NIC and/or LIF. -
Node 1 108.1 also uses a plurality of virtual NICs (VNICs) to communicate with devices outside the clustered storage system on behalf of other nodes. A VNIC is a virtual representation of a physical NIC. It is noteworthy that the Vserver node can use either the physical NIC or a VNIC for communication. - As an example,
Node 1 108.1 includes or uses VNIC 1A 128.1 for communication on behalf of Vserver 2 140B and VNIC 1B 128.2 for communication on behalf of Vserver 3 140C. VNIC 1A 128.1 is associated with IP addresses 126.2, 126.3 and 126.5, while VNIC 1B 128.2 is associated with IP addresses 126.4 and 126.6. As described below, IP addresses 126.2, 126.3, 126.4, 126.5 and 126.6 are owned by other nodes and not by Node 1 108.1. - As an example,
Node 2 108.2 includes NIC 2A 124.2A and NIC 2B 124.2B, associated with IP addresses 126.2 and 126.6, respectively. Vserver 2 140B may use NIC 2A 124.2A and Vserver 3 140C may use NIC 2B 124.2B for communication outside the cluster via Node 2 108.2. To make a connection on behalf of Vserver 1 140A, Node 2 uses VNIC 2A 128.3 with IP address 126.1 that is owned by Node 1 108.1. Node 2 108.2 may also use VNIC 2B for establishing a connection for Vserver 2 140B on behalf of Node 3 108.3. In such a case, IP addresses 126.3 and 126.5 that are owned by Node 3 108.3 are used to make the connection. - Similarly,
Node 2 108.2 may use VNIC 2C 128.3A with IP address 126.4 to make a connection on behalf of Node 4 108.4 for Vserver 3 140C. IP address 126.4 is owned by Node 4 108.4. -
Node 3 108.3 includes NIC 3 124.3 with IP addresses 126.3 and 126.5. Vserver 2 140B may use these IP addresses to communicate with clients and other entities/devices. Node 3 108.3 also uses VNIC 3A 128.5 with IP address 126.1, owned by Node 1 108.1, to communicate on behalf of Node 1 108.1/Vserver 1 140A. Node 3 108.3 may also use VNIC 3B 128.6 with IP address 126.2 to communicate on behalf of Node 2 108.2/Vserver 2 140B. Node 3 108.3 further uses VNIC 3C with IP addresses 126.4 and 126.6 to communicate on behalf of Vserver 3 140C; IP addresses 126.4 and 126.6 are owned by Node 4 108.4 and Node 2 108.2, respectively. -
Node 4 108.4 includes NIC 4 124.4 with IP address 126.4. Vserver 3 140C may use IP address 126.4 to communicate with clients. Node 4 108.4 also includes a plurality of VNICs to communicate on behalf of other nodes using IP addresses that are owned by the other nodes. For example, VNIC 4A 128.8 is used to communicate on behalf of Node 1 108.1 for Vserver 1 140A using the IP address 126.1 owned by Node 1 108.1. VNIC 4B 128.9 may be used to communicate on behalf of Node 2 or Node 3 for Vserver 2 140B using the IP address 126.2, owned by Node 2 108.2, and IP addresses 126.3 and 126.5 that are owned by Node 3 108.3. VNIC 4C 128.9A may be used to communicate on behalf of Node 2 108.2 for Vserver 3 140C using the IP address 126.6 that is owned by Node 2 108.2. - It is noteworthy that a node may use a local LIF or a remote LIF for communicating on behalf of Vservers. For example,
Node 2 108.2 may use the local IP address 126.2 for communicating on behalf of Vserver 2 or the non-local IP addresses 126.3 and 126.5 that are owned by Node 3 108.3. - ADS 127:
-
FIG. 1C illustrates an example of ADS 127 with a plurality of segments 127A-127C. ADS 127 enables the configuration that allows one node to communicate on behalf of another node, according to one aspect. - As an example,
segment 127A stores a listing of IP addresses, Vservers, nodes and NICs, while segment 127B is a routing data structure for each IP address and the associated Vserver. Segment 127C is maintained to track port reservations made by one node to communicate on behalf of another node. For example, if Node 1 108.1 intends to communicate on behalf of a Vserver having a LIF that is owned by another node, then segment 127C tracks the IP address, a node identifier identifying the node, a port identifier (Port #) that is used by the node, and a protocol used for the communication, for example, TCP. It is noteworthy that, for clarity, separate segments 127A-127C are shown in FIG. 1C; the various aspects disclosed herein are not limited to separate segments and, instead, the data structure 127 may be consolidated into one single structure or divided into any other number of segments. -
Segment 127A includes various IP addresses, for example, 126.1-126.6. Each IP address is associated with a Vserver, for example, 126.1 is assigned to Vs1 (Vserver 1), IP addresses 126.2, 126.3, and 126.5 are assigned to Vs2 (Vserver 2) and IP addresses 126.4 and 126.6 are assigned to Vs3 (Vserver 3). - In one aspect,
segment 127A is used to generate the various VNICs used by the different nodes. Examples of various VNICs are described above with respect to FIG. 1B; a VNIC can be used by any node to communicate on behalf of a Vserver even when the physical IP address is not present at that particular node.
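As a rough sketch only (not the patented implementation), segment 127A can be pictured as a table keyed by IP address, where each entry records the owning Vserver, node and NIC. The entries below simply reuse the reference numerals from FIG. 1B as labels, and the helper functions are hypothetical.

```python
# Minimal sketch of segment 127A of ADS 127. Keys and values reuse the reference
# numerals of FIG. 1B as labels; they are not real IP addresses or host names.
SEGMENT_127A = {
    "126.1": {"vserver": "Vs1", "node": "108.1", "nic": "124.1"},
    "126.2": {"vserver": "Vs2", "node": "108.2", "nic": "124.2A"},
    "126.3": {"vserver": "Vs2", "node": "108.3", "nic": "124.3"},
    "126.4": {"vserver": "Vs3", "node": "108.4", "nic": "124.4"},
    "126.5": {"vserver": "Vs2", "node": "108.3", "nic": "124.3"},
    "126.6": {"vserver": "Vs3", "node": "108.2", "nic": "124.2B"},
}

def owning_node(ip):
    """Return the node that owns (physically hosts) a given cluster IP address."""
    return SEGMENT_127A[ip]["node"]

def vnic_addresses(vserver, local_node):
    """Addresses a node's VNIC presents for a Vserver: every address of that
    Vserver owned by some other node (i.e., non-local to this node)."""
    return [ip for ip, row in SEGMENT_127A.items()
            if row["vserver"] == vserver and row["node"] != local_node]

print(vnic_addresses("Vs2", "108.1"))   # -> ['126.2', '126.3', '126.5'] (VNIC 1A)
print(vnic_addresses("Vs3", "108.1"))   # -> ['126.4', '126.6'] (VNIC 1B)
```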
Segment 127B is a routing data structure with a destination (shown as x.x.x.x), a subnet mask (for example, 255.255.255.0) and a gateway (for example, 10.98.10.1) associated with a Vserver (for example, VS1). A subnet is a logical, visible portion of an IP network. All network devices of a subnet are addressed with a common, identical, most-significant bit-group in their IP address. This results in the logical division of an IP address into two fields: a network or routing prefix, and a host identifier that identifies a network interface. A gateway address is also assigned to the subnet and is used by a computing device within the subnet for routing information. A subnet mask is associated with each IP address of a NIC/VNIC. The mask is used to select the NIC/VNIC that can reach the gateway once its IP address has been established.
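The mask-based selection just described can be illustrated with a small, hedged example using Python's standard ipaddress module; the concrete numbers are made up and only show the test implied by segment 127B: an address can carry a route only if the route's gateway falls inside the subnet formed by that address and its mask.

```python
# Sketch of the gateway-reachability test implied by segment 127B: an address on
# a NIC/VNIC can be used for a route only when the gateway lies inside the subnet
# defined by the address and its mask. Values are illustrative only.
import ipaddress

def reaches_gateway(if_addr, netmask, gateway):
    subnet = ipaddress.ip_network(f"{if_addr}/{netmask}", strict=False)
    return ipaddress.ip_address(gateway) in subnet

print(reaches_gateway("10.98.10.25", "255.255.255.0", "10.98.10.1"))  # True
print(reaches_gateway("10.98.20.25", "255.255.255.0", "10.98.10.1"))  # False
```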
Segment 127C identifies Node 1 as having a port reservation for Port P1 for communicating on behalf of Node 2 108.2 for Vserver 2 using IP address 126.2. The port reservation allows Node 1 to communicate without any conflict with other uses of IP address 126.2. The process for making the port reservation is described below with respect to FIG. 2A. - Process Flows:
-
FIG. 2A shows a process 268 that occurs prior to creating a connection using a VNIC, according to one aspect. As an example, the connection may be originated by Vserver 2 at Node 1 108.1. The process begins in block B270. In block B272, VNIC 1A 128.1 looks up, in segment 127A of ADS 127, the node that owns an IP address for the connection, for example, IP address 126.2 owned by Node 2 108.2. In block B274, Node 1 108.1 sends a port reservation request to Node 2 108.2 over the cluster network 110. The port reservation request is made so that a port can be reserved for Node 1 108.1 to originate packets with this port as the source port and IP address 126.2 as the source address, so that Node 2 108.2 can transmit packets via its NIC 2A, which owns IP address 126.2. - If the port reservation request is successful, then in block B276, a connection is established by
Node 1 108.1 with an external client using a source address that is owned by Node 2 (for example, 126.2), and the newly reserved source port enables Node 2 to transmit packets for Node 1. If the request is unsuccessful, then the connection attempt is aborted in block B278.
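A compact way to picture the handshake of FIG. 2A is the sketch below. It is a sketch only: reserve_port() stands in for the request sent over the cluster network, and all names and data shapes are assumptions rather than the actual module interfaces.

```python
# Hedged sketch of FIG. 2A (blocks B270-B278): before opening a connection whose
# source address belongs to another node, the originating node asks the owner to
# reserve a source port for that address. reserve_port() is an assumed stand-in
# for the cluster-network request; it returns a port number or None on failure.
def prepare_connection(local_node, segment_127a, segment_127c, src_ip, reserve_port):
    owner = segment_127a[src_ip]["node"]      # block B272: which node owns the address?
    port = reserve_port(owner, src_ip)        # block B274: request a port reservation
    if port is None:
        return None                           # block B278: abort the connection attempt
    # block B276: record the reservation so packets arriving at the owner on
    # (src_ip, port) can be handed back to this node (segment 127C).
    segment_127c[(src_ip, port, "TCP")] = local_node
    return port
```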
FIG. 2B shows a process 200 for configuring IP addresses, according to one aspect of the present disclosure. In one aspect, ADS 127 may be used to configure the IP addresses and generate the associated VNICs. The various process blocks of FIG. 2B may be executed by the management application 121. -
Process 200 begins in block B202, when management console 122 and nodes 108.1-108.4 are initialized and operational. In block B204, a VNIC is generated for a node. The VNIC can be used by a Vserver to communicate with devices outside the cluster. The VNIC is generated when a LIF is configured for a NIC using the management application 121. In one aspect, a graphical user interface (GUI) and/or a command line interface (CLI) may be presented by the management console 122 to directly configure the NIC, which results in creating a VNIC. It is noteworthy that a VNIC is created when a first LIF is created for a Vserver; subsequent LIFs are simply added to an existing VNIC. It is also noteworthy that a VNIC is created for the LIF on all cluster nodes except for the node with the physical NIC on which the IP address is configured. - The VNIC can be configured in various ways; for example, an IP address may be configured for the VNIC, a non-local IP address may be un-configured, an IP address may be moved to a new node, or a new route may be configured or removed for a Vserver. It is noteworthy that the VNIC configuration process occurs in parallel, as a response to NIC configuration. The local/non-local distinction is with respect to the node that owns the NIC on which an IP address is configured; for a VNIC, the IP address is non-local.
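Before walking through the individual configuration blocks, the creation rule of block B204 can be sketched as follows. The node and VNIC helper methods are assumptions used only to show the fan-out: one VNIC per Vserver on every node except the owner of the physical NIC.

```python
# Sketch of block B204's fan-out rule, under assumed helper names: the first LIF
# of a Vserver creates a VNIC on every node other than the NIC's owner; later
# LIFs are simply added as extra addresses on those existing VNICs.
def configure_lif(cluster_nodes, vserver, ip, owner_node):
    for node in cluster_nodes:
        if node is owner_node:
            node.assign_to_nic(vserver, ip)      # address lands on the physical NIC
            continue
        vnic = node.vnics.get(vserver)
        if vnic is None:                         # first LIF for this Vserver here
            vnic = node.create_vnic(vserver)
            node.vnics[vserver] = vnic
        vnic.add_address(ip)                     # subsequent LIFs just add addresses
```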
- The various VNIC configurations are now described in detail with respect to various components/modules of FIGS. 1B/1C. It is noteworthy that the various process blocks described below may occur in parallel or at different times, depending on whether a first LIF is being created or a LIF is being moved or deleted.
- In block B206, an IP address is configured for a non-local node. For example, IP addresses 126.2, 126.3 and 126.5 are configured for VNIC 1A 128.1 at Node 1 108.1 as a result of being configured for their respective NICs on nodes 108.2 and 108.3. VNIC 1A 128.1 can be used by Node 1 108.1 to communicate on behalf of Node 2 108.2 and Node 3 108.3 for Vserver 2 140B. As described above, IP addresses 126.2, 126.3 and 126.5 are not owned by Node 1 108.1 and hence are considered non-local to Node 1. Similarly, the IP addresses for the other VNICs, as shown in FIG. 1B, may be configured. It is noteworthy that an IP address for a NIC on one node is configured on a VNIC of every other node of a cluster. - In block B208,
segment 127A of ADS 127 is updated to reflect the node location of the new LIF for the VNIC. In block B210, any routes from segment 127B of the Vserver that are newly supported by the new non-local IP address are configured against the VNIC. - In block B212, a non-local IP address is un-configured for a Vserver, according to one aspect. Any routes that are supported only by that IP address are disabled. Thereafter, the non-local IP address for the VNIC is deleted in block B214. It is noteworthy that if the deleted LIF is the last LIF for the VNIC, then the VNIC is also deleted.
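Blocks B212 and B214 can be pictured with the same assumed helpers; this is a sketch of the described order of operations, not the actual implementation: routes that only the departing address could support are disabled first, then the address is removed, and the VNIC disappears with its last LIF.

```python
# Sketch of blocks B212/B214 (assumed helper names): un-configuring a non-local
# address first disables any route whose gateway only this address could reach,
# then removes the address, and deletes the VNIC if no addresses remain.
def unconfigure_non_local_address(node, vserver, ip, routes_127b):
    vnic = node.vnics[vserver]
    for route in routes_127b.get(vserver, []):
        if vnic.addresses_reaching(route.gateway) == {ip}:
            vnic.disable_route(route)            # route only supported by this address
    vnic.remove_address(ip)
    if not vnic.addresses:                       # last LIF gone -> drop the VNIC
        del node.vnics[vserver]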
- In block B216, an existing IP address for a Vserver is moved to a new node. For example, an IP address 126.1 owned and managed by Node 1 108.1 may be moved to
Node 2 108.2. The new IP address is then configured at the new node. At the node which owned the IP address prior to the move, the IP address is configured on a VNIC. Process block B216 includes process blocks B206, B208 and B210, described above. - At the new node, the moved IP address is un-configured from a VNIC in block B218, which includes blocks B212 and B214. At the other nodes, in block B220,
ADS 127 is updated to indicate that the new node manages the IP address. - In block B222, a new route for a Vserver is configured and added to
ADS 127B. If its gateway is reachable by one or more of the addresses of the VNIC, the route is configured in a network stack (not shown) against this VNIC. - If a route is unconfigured from the Vserver and removed from
ADS 127B, then in block B224, the route is unconfigured from the network stack for the VNIC, if currently configured. -
FIG. 2C shows a process 226 for transmitting a packet by a node, for example, by Vserver 2 140B of Node 1 108.1 on behalf of Node 2 108.2, using IP address 126.2 that is owned by Node 2. A packet for Vserver 2 140B is received at VNIC 1A 128.1. In block B232, segment 127A is used to determine the node that owns the source IP address for the packet, for example, Node 2 108.2 if the IP address is 126.2. In block B234, the packet is encapsulated in a message, and then in block B236, the message is sent to Node 2 108.2 for transmitting the packet.
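A minimal sketch of this transmit path looks like the following; send_over_cluster() is an assumed stand-in for the cluster-network transport and the packet fields are hypothetical.

```python
# Sketch of FIG. 2C (blocks B232-B236): a packet handed to a VNIC is routed to
# the node that owns its source IP address, which transmits it on its physical
# NIC. send_over_cluster() is an assumed stand-in for the cluster transport.
def vnic_transmit(packet, segment_127a, send_over_cluster):
    owner = segment_127a[packet.src_ip]["node"]            # block B232: find owner
    message = {"kind": "transmit", "payload": packet.raw}  # block B234: encapsulate
    send_over_cluster(owner, message)                      # block B236: owner sends it
```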
FIG. 2D shows a process 240, where the message from process 226 is received at Node 2 108.2. The packet is de-capsulated from the message in block B242. During this process, the packet is parsed and the packet header is validated. - In block B244,
Node 2 108.2 uses data structure 127 to determine the NIC that owns the source IP address. In this example, NIC 2A 124.2A owns the IP address 126.2. Thereafter, in block B246, the packet is transmitted to its destination by Vserver 2 140B of Node 2 108.2. -
FIG. 2E shows a process 248 for receiving a packet from outside the cluster, for example, from a client, a management console or any other device, according to one aspect. Process 248 starts in block B250, when a client system is initialized and operational. - In block B252, a packet is received at a node, for example,
Node 2 108.2. As an example, the packet may be received by Vserver 2 140B. In block B254, Vserver 2 140B determines whether the connection for the packet originated from another node. This is determined by using segment 127C of ADS 127 to look up a port reservation for the IP address, which then provides the originating node. In block B256, if another node originated the connection, the packet is encapsulated in a message and forwarded to that node; otherwise, the packet is processed by Node 2 108.2, which originated the connection.
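The receive side can be sketched as below. This is a hedged illustration with assumed helper names; the second function anticipates the hand-back described in the next paragraph (FIG. 2F), where the originating node feeds the packet into the VNIC holding the destination address.

```python
# Sketch of FIG. 2E (blocks B252-B256) plus the hand-back of FIG. 2F: the node
# owning the IP address checks segment 127C for a port reservation and either
# handles the packet itself or forwards it to the node that originated the
# connection. All helper names are assumptions, not the actual interfaces.
def receive_from_network(local_node, packet, segment_127c, forward_to_node):
    key = (packet.dst_ip, packet.dst_port, packet.protocol)
    originator = segment_127c.get(key, local_node)          # block B254: who opened it?
    if originator != local_node:
        forward_to_node(originator, {"kind": "receive", "payload": packet.raw})
    else:
        local_node.deliver(packet)                           # block B256: local case

def receive_forwarded(local_node, message):                  # FIG. 2F, originator side
    packet = local_node.decapsulate(message["payload"])      # block B264
    local_node.vnic_for(packet.dst_ip).input(packet)         # block B266
```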
FIG. 2F shows a process 258, performed when a message is received by the node that originated the connection (for example, by Vserver 2 140B of Node 1 108.1). The process begins in block B260, when Node 2 108.2 forwards the packet to Node 1 108.1 in a message. In block B262, Node 1 108.1 receives the message via cluster network 110. The packet is de-capsulated in block B264, and Node 1 determines the VNIC that holds the destination address. Thereafter, the packet is provided as input to the VNIC in block B266. - The various aspects described above enable a node of a Vserver to communicate on behalf of other nodes. The
ADS 127 is stored and maintained at all nodes and this enables the Vservers to use all the IP addresses within the cluster. - Storage System Node:
-
FIG. 3 is a block diagram of node 108.1 that is illustratively embodied as a storage system comprising a plurality of processors 302A-302B, a memory 304, a network adapter 310, a cluster access adapter 312, a storage adapter 316 and local storage 313 interconnected by a system bus 308. Processors 302A-302B may be used to maintain ADS 127 that has been described above in detail. -
Processors 302A-302B may be, or may include, one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such hardware devices. The local storage 313 comprises one or more storage devices utilized by the node to locally store configuration information, for example, in a configuration data structure 314. - The cluster access adapter 312 (similar to 129,
FIG. 1B) comprises a plurality of ports adapted to couple node 108.1 to other nodes of cluster 100. In the illustrative aspect, Ethernet may be used as the clustering protocol and interconnect media, although it will be apparent to those skilled in the art that other types of protocols and interconnects may be utilized within the cluster architecture described herein. In alternate aspects where the N-modules and D-modules are implemented on separate storage systems or computers, the cluster access adapter 312 is utilized by the N/D-module for communicating with other N/D-modules in the cluster 100/101. - Node 108.1 is illustratively embodied as a dual processor storage system executing a
storage operating system 306 that preferably implements a high-level module, such as a file system, to logically organize the information as a hierarchical structure of named directories and files on storage 112. However, it will be apparent to those of ordinary skill in the art that the node 108.1 may alternatively comprise a single processor system or more than two processor systems. Illustratively, one processor 302A executes the functions of the M-Module 118 and N-module 114 on the node, while the other processor 302B executes the functions of the D-module 116. In another aspect, a dedicated processor may execute the functions of the M-Module 118. - The
memory 304 illustratively comprises storage locations that are addressable by the processors and adapters for storing programmable instructions and data structures. The processors and adapters may, in turn, comprise processing elements and/or logic circuitry configured to execute the programmable instructions and manipulate the data structures. It will be apparent to those skilled in the art that other processing and memory means, including various computer readable media, may be used for storing and executing program instructions pertaining to the present disclosure. - The
storage operating system 306, portions of which are typically resident in memory and executed by the processing elements, functionally organizes the node 108.1 by, inter alia, invoking storage operations in support of the storage service implemented by the node. - The network adapter (or NIC) 310 (similar to NICs 124.1, 124.2A, 124.2B, 124.3 and 124.4) comprises a plurality of ports adapted to couple the node 108.1 to one or more clients over point-to-point links, wide area networks, virtual private networks implemented over a public network (Internet) or a shared local area network. The
network adapter 310 thus may comprise the mechanical, electrical and signaling circuitry needed to connect the node to the network. - The
storage adapter 316 cooperates with the storage operating system 306 executing on the node 108.1 to access information requested by the clients. The information may be stored on any type of attached array of writable storage device media such as video tape, optical, DVD, magnetic tape, bubble memory, electronic random access memory, micro-electro mechanical and any other similar media adapted to store information, including data and parity information. However, as illustratively described herein, the information is preferably stored on storage device 112. The storage adapter 316 comprises a plurality of ports having input/output (I/O) interface circuitry that couples to the storage devices over an I/O interconnect arrangement, such as a conventional high-performance, FC link topology. - Operating System:
-
FIG. 4 illustrates a generic example of storage operating system 306 executed by node 108.1 that interfaces with management application 121, according to one aspect of the present disclosure. In one example, storage operating system 306 may include several modules, or "layers," executed by one or both of N-Module 114 and D-Module 116. These layers include a file system manager 400 that keeps track of a directory structure (hierarchy) of the data stored in storage devices and manages read/write operations, i.e., executes read/write operations on storage in response to client requests. -
Storage operating system 306 may also include a protocol layer 402 and an associated network access layer 406 to allow node 108.1 to communicate over a network with other systems. Protocol layer 402 may implement one or more of various higher-level network protocols, such as NFS, CIFS, Hypertext Transfer Protocol (HTTP), TCP/IP and others, as described below. -
Network access layer 406 may include one or more drivers, which implement one or more lower-level protocols to communicate over the network, such as Ethernet. Interactions between clients and mass storage devices 112 are illustrated schematically as a path, which illustrates the flow of data through storage operating system 306. - The
storage operating system 306 may also include a storage access layer 404 and an associated storage driver layer 408 to allow D-module 116 to communicate with a storage device. The storage access layer 404 may implement a higher-level storage protocol, such as RAID (redundant array of inexpensive disks), while the storage driver layer 408 may implement a lower-level storage device access protocol, such as FC or SCSI. The storage driver layer 408 may maintain various data structures (not shown) for storing information regarding LUNs, storage volumes, aggregates and the various storage devices. -
- In addition, it will be understood to those skilled in the art that the disclosure described herein may apply to any type of special-purpose (e.g., file server, filer or storage serving appliance) or general-purpose computer, including a standalone computer or portion thereof, embodied as or including a storage system. Moreover, the teachings of this disclosure can be adapted to a variety of storage system architectures including, but not limited to, a network-attached storage environment, a storage area network and a storage device directly-attached to a client or host computer. The term “storage system” should therefore be taken broadly to include such arrangements in addition to any subsystems configured to perform a storage function and associated with other equipment or systems. It should be noted that while this description is written in terms of a write any where file system, the teachings of the present disclosure may be utilized with any suitable file system, including a write in place file system.
- Processing System:
-
FIG. 5 is a high-level block diagram showing an example of the architecture of a processing system 500 that may be used according to one aspect. The processing system 500 can represent the management console 122 or client 104. Note that certain standard and well-known components which are not germane to the present disclosure are not shown in FIG. 5. - The
processing system 500 includes one or more processor(s) 502 and memory 504, coupled to a bus system 505. The bus system 505 shown in FIG. 5 is an abstraction that represents any one or more separate physical buses and/or point-to-point connections, connected by appropriate bridges, adapters and/or controllers. The bus system 505, therefore, may include, for example, a system bus, a Peripheral Component Interconnect (PCI) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (sometimes referred to as "Firewire"). - The processor(s) 502 are the central processing units (CPUs) of the
processing system 500 and, thus, control its overall operation. In certain aspects, the processors 502 accomplish this by executing software stored in memory 504. A processor 502 may be, or may include, one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices. -
Memory 504 represents any form of random access memory (RAM), read-only memory (ROM), flash memory, or the like, or a combination of such devices. Memory 504 includes the main memory of the processing system 500. Instructions 506, which implement the process steps described above with respect to FIGS. 2A-2F, may reside in and be executed (by processors 502) from memory 504. - Also connected to the
processors 502 through the bus system 505 are one or more internal mass storage devices 510 and a network adapter 512. Internal mass storage devices 510 may be, or may include, any conventional medium for storing large volumes of data in a non-volatile manner, such as one or more magnetic or optical based disks. The network adapter 512 provides the processing system 500 with the ability to communicate with remote devices (e.g., storage servers) over a network and may be, for example, an Ethernet adapter, a Fibre Channel adapter, or the like. - The
processing system 500 also includes one or more input/output (I/O) devices 508 coupled to the bus system 505. The I/O devices 508 may include, for example, a display device, a keyboard, a mouse, etc. - Cloud Computing:
- The system and techniques described above are applicable and useful in the upcoming cloud computing environment. Cloud computing means computing capability that provides an abstraction between the computing resource and its underlying technical architecture (e.g., servers, storage, networks), enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. The term “cloud” as used herein refers to a network (for example, the Internet) that enables providing computing as a service.
- Typical cloud computing providers deliver common business applications online which are accessed from another web service or software like a web browser, while the software and data are stored remotely on servers. The cloud computing architecture uses a layered approach for providing application services. A first layer is an application layer that is executed at client computers. In this example, the application allows a client to access storage via a cloud.
- After the application layer, is a cloud platform and cloud infrastructure, followed by a “server” layer that includes hardware and computer software designed for cloud specific services. The storage provider 116 (and associated methods thereof) and storage systems described above can be a part of the server layer for providing storage services. Details regarding these layers are not germane to the inventive aspects.
- Thus, a method and apparatus for managing network access addresses have been described. Note that references throughout this specification to “one aspect” or “an aspect” mean that a particular feature, structure or characteristic described in connection with the aspect is included in at least one aspect of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an aspect” or “one aspect” or “an alternative aspect” in various portions of this specification are not necessarily all referring to the same aspect. Furthermore, the particular features, structures or characteristics being referred to may be combined as suitable in one or more aspects of the disclosure, as will be recognized by those of ordinary skill in the art.
- While the present disclosure is described above with respect to what is currently considered its preferred aspects, it is to be understood that the disclosure is not limited to that described above. To the contrary, the disclosure is intended to cover various modifications and equivalent arrangements within the spirit and scope of the appended claims.
Claims (20)
1. A machine implemented method, comprising:
assigning a network access address to a virtual network interface card (VNIC) at a first cluster node of a clustered storage system, where a physical network interface card assigned to the network access address is managed by a second cluster node of the clustered storage system; and
using the VNIC by a virtual storage server at the first cluster node to communicate on behalf of the second cluster node.
2. The method of claim 1, wherein the first cluster node and the second cluster node store an address data structure for storing a plurality of network access addresses used by a plurality of virtual storage servers executed by at least the first cluster node and the second cluster node.
3. The method of claim 2, wherein the address data structure is updated when the network access address is assigned to the VNIC.
4. The method of claim 1, wherein the network access address is an Internet Protocol address that is assigned to the physical network interface card.
5. The method of claim 1, wherein the VNIC sends a packet to the second cluster node for transmitting the packet outside the clustered storage system.
6. The method of claim 1, wherein the first cluster node makes a port reservation with the second cluster node to use the network access address to communicate on behalf of the second cluster node.
7. The method of claim 6, wherein an address data structure stores a port identifier for a port reserved by the first cluster node.
8. A non-transitory, machine readable storage medium having stored thereon instructions for performing a method, comprising machine executable code which when executed by at least one machine, causes the machine to:
assign a network access address to a virtual network interface card (VNIC) at a first cluster node of a clustered storage system, where a physical network interface card assigned to the network access address is managed by a second cluster node of the clustered storage system; and
use the VNIC by a virtual storage server at the first cluster node to communicate on behalf of the second cluster node.
9. The storage medium of claim 8, wherein the first cluster node and the second cluster node store an address data structure for storing a plurality of network access addresses used by a plurality of virtual storage servers executed by at least the first cluster node and the second cluster node.
10. The storage medium of claim 9, wherein the address data structure is updated when the network access address is assigned to the VNIC.
11. The storage medium of claim 8, wherein the network access address is an Internet Protocol address that is assigned to the physical network interface card.
12. The storage medium of claim 8, wherein the VNIC sends a packet to the second cluster node for transmitting the packet outside the clustered storage system.
13. The storage medium of claim 8, wherein the first cluster node makes a port reservation with the second cluster node to use the network access address to communicate on behalf of the second cluster node.
14. The storage medium of claim 13, wherein an address data structure stores a port identifier for a port reserved by the first cluster node.
15. A system comprising:
a memory containing machine readable medium comprising machine executable code having stored thereon instructions; and a processor module coupled to the memory, the processor module configured to execute the machine executable code to:
assign a network access address to a virtual network interface card (VNIC) at a first cluster node of a clustered storage system, where a physical network interface card assigned to the network access address is managed by a second cluster node of the clustered storage system; and
use the VNIC by a virtual storage server at the first cluster node to communicate on behalf of the second cluster node.
16. The system of claim 15, wherein the first cluster node and the second cluster node store an address data structure for storing a plurality of network access addresses used by a plurality of virtual storage servers executed by at least the first cluster node and the second cluster node.
17. The system of claim 16, wherein the address data structure is updated when the network access address is assigned to the VNIC.
18. The system of claim 15, wherein the network access address is an Internet Protocol address that is assigned to the physical network interface card.
19. The system of claim 15, wherein the VNIC sends a packet to the second cluster node for transmitting the packet outside the clustered storage system.
20. The system of claim 15, wherein the first cluster node makes a port reservation with the second cluster node to use the network access address to communicate on behalf of the second cluster node.
Priority Applications (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US14/505,196 (US20160100008A1) | 2014-10-02 | 2014-10-02 | Methods and systems for managing network addresses in a clustered storage environment |
| US16/178,436 (US10785304B2) | 2014-10-02 | 2018-11-01 | Methods and systems for managing network addresses in a clustered storage environment |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| US20160100008A1 | 2016-04-07 |
Legal Events

| Code | Title | Description |
|---|---|---|
| AS | Assignment | Owner name: NETAPP, INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: ACCETTA, MICHAEL JOSEPH; SEMKE, JEFFREY ERIC; PREM, JEFFREY DAVID; REEL/FRAME: 033875/0723. Effective date: 20141001 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |