US8185715B1 - Method and system for managing metadata in storage virtualization environment - Google Patents
Method and system for managing metadata in storage virtualization environment
- Publication number
- US8185715B1 (application US11/694,698 / US69469807A)
- Authority
- US
- United States
- Prior art keywords
- metadata
- memory chunk
- virtualization
- storage
- data processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Links
- 238000000034 method Methods 0.000 title claims abstract description 30
- 238000012545 processing Methods 0.000 claims abstract description 29
- 238000013507 mapping Methods 0.000 claims abstract description 15
- 239000000835 fiber Substances 0.000 description 19
- 230000008569 process Effects 0.000 description 16
- 238000010586 diagram Methods 0.000 description 14
- 239000004744 fabric Substances 0.000 description 10
- 239000003795 chemical substances by application Substances 0.000 description 6
- 230000002085 persistent effect Effects 0.000 description 6
- 238000004891 communication Methods 0.000 description 5
- 230000006870 function Effects 0.000 description 5
- 230000003044 adaptive effect Effects 0.000 description 3
- 238000013459 approach Methods 0.000 description 3
- 239000003999 initiator Substances 0.000 description 3
- 230000009471 action Effects 0.000 description 2
- 238000003491 array Methods 0.000 description 2
- 230000007246 mechanism Effects 0.000 description 2
- 230000010076 replication Effects 0.000 description 2
- 230000009466 transformation Effects 0.000 description 2
- 230000008901 benefit Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 230000000903 blocking effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000005538 encapsulation Methods 0.000 description 1
- 230000005012 migration Effects 0.000 description 1
- 238000013508 migration Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000006855 networking Effects 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0662—Virtualisation aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
Definitions
- Storage virtualization is desirable in SAN communication.
- The term storage virtualization as used herein means the process by which a logical (virtual) storage device/system/array appears to a host system as being a physical device (or a local device).
- Storage virtualization allows data to be stored in different storage arrays and devices, but the data can be presented to a host system in a comprehensive manner, as if the arrays and storage devices were local to the host system.
- Persistent information includes mapping metadata and a copy of the data stored by the host system.
- The term metadata as used throughout this specification includes information that describes data.
- For example, file metadata includes the file name, time of creation and modification, read and write permissions, and lists of block addresses at which the file's data is stored.
- Storage metadata includes the mapping tables that link virtual block addresses to logical block addresses of physical storage devices.
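- As a minimal illustration of such a mapping table (a sketch only; the segment size, type names and field names below are assumed rather than taken from the patent), a virtual block address can be translated into a device and logical block address with a table lookup:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical mapping entry: one fixed-size virtual segment maps to a
     * contiguous region on a backing storage device. */
    typedef struct {
        uint32_t device_id;   /* backing storage device */
        uint64_t base_lba;    /* starting logical block address of the segment */
    } map_entry_t;

    #define SEGMENT_BLOCKS 2048u   /* blocks per segment (illustrative) */

    /* Translate a virtual LBA into a (device, LBA) pair using the table. */
    static int map_virtual_lba(const map_entry_t *table, size_t nsegments,
                               uint64_t vlba, uint32_t *dev, uint64_t *plba)
    {
        uint64_t seg = vlba / SEGMENT_BLOCKS;
        if (seg >= nsegments)
            return -1;            /* address is outside the virtual disk */
        *dev  = table[seg].device_id;
        *plba = table[seg].base_lba + (vlba % SEGMENT_BLOCKS);
        return 0;
    }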
- A method for managing metadata for a plurality of storage platforms that provide virtualization services is provided. The method includes requesting a memory chunk for storing metadata, wherein a data processing agent operating in a storage platform requests the memory chunk and a centralized metadata controller for the plurality of storage platforms receives the request; determining the memory chunk size and allocating the memory chunk from a pool of memory chunks; and assigning the allocated memory chunk to a virtualization mapping object.
- A storage area network is also provided. The storage area network includes a plurality of virtualization modules that are coupled together in a cluster, wherein each virtualization module runs a data processing agent for providing virtualization services and a centralized metadata controller for the cluster controls allocation of memory chunks to store metadata. The metadata controller receives a request for a memory chunk from the data processing agent, determines the memory chunk size, allocates the memory chunk from a pool of memory chunks, and assigns the allocated memory chunk to a virtualization mapping object.
- A virtualization module coupled to other virtualization modules in a cluster is also provided. The virtualization module includes a data processing agent for providing virtualization services and a centralized metadata controller for the cluster that controls allocation of memory chunks to store metadata. The metadata controller receives a request for a memory chunk from the data processing agent, determines the memory chunk size, allocates the memory chunk from a pool of memory chunks, and assigns the allocated memory chunk to a virtualization mapping object.
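- The request/allocate/assign exchange summarized above can be pictured with a sketch of the messages a data processing agent and the centralized metadata controller might exchange; the structures, fields and sizes are illustrative assumptions, not definitions from the patent:

    #include <stdint.h>

    /* Hypothetical DPA-to-MDC request for a metadata chunk. */
    typedef struct {
        uint32_t dpa_id;           /* requesting data processing agent */
        uint32_t mapping_obj_id;   /* virtualization mapping object needing metadata */
        uint32_t hint_entries;     /* expected number of metadata entries */
    } chunk_request_t;

    /* Hypothetical MDC reply: the chunk was sized, allocated from the pool
     * and assigned to the mapping object. */
    typedef struct {
        uint32_t chunk_id;         /* chunk allocated from the pool */
        uint64_t chunk_lba;        /* persistent location of the chunk */
        uint32_t chunk_size;       /* size chosen by the MDC for this request */
        uint32_t mapping_obj_id;   /* mapping object the chunk is assigned to */
    } chunk_grant_t;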
- FIG. 1B shows another example of a system using storage virtualization
- FIG. 1C shows a block diagram of a network node, according to one embodiment
- FIG. 1D shows a block diagram of a cluster, according to one embodiment
- FIG. 1E shows a block diagram of a “chunk” pool controlled by a centralized metadata controller, according to one embodiment
- FIGS. 2A and 2B show process flow diagrams for managing chunks, according to one embodiment
- FIG. 3 shows a process flow diagram for obtaining control of a chunk, according to one embodiment
- FIG. 4 shows a process flow diagram for gaining control of an active chunk, according to one embodiment.
- FIG. 1A shows a top-level block diagram of a system 100 according to one aspect of the present invention.
- System 100 facilitates communication between plural devices/computing systems.
- Any device in network system 100 , for example, a Fibre Channel switch or node, may also perform storage virtualization functions.
- Various standards protocols may be used for designing and operating SANs.
- Network nodes in a SAN communicate using a storage protocol that operates on logical blocks of data, such as the small computer system interface (SCSI) protocol.
- The SCSI protocol is incorporated herein by reference in its entirety.
- The storage protocol is delivered, by mapping or encapsulation, using a reliable transport protocol.
- Fibre Channel is one such standard transport protocol, which is incorporated herein by reference in its entirety.
- Fibre Channel is a set of American National Standards Institute (ANSI) standards, which provide a serial transmission protocol for storage and network protocols such as HIPPI, SCSI, IP, ATM and others.
- The Fibre Channel Protocol (FCP) maps SCSI commands to the Fibre Channel transport protocol.
- Other transport protocols may also support SCSI commands, for example, the SCSI parallel bus, Serial Attached SCSI, TCP/IP, and Infiniband. These standard protocols are incorporated herein by reference in their entirety.
- Fibre channel supports three different topologies: point-to-point, arbitrated loop and Fibre Channel Fabric.
- the point-to-point topology attaches two devices directly.
- the arbitrated loop topology attaches devices in a loop.
- the Fibre Channel Fabric topology attaches devices (i.e., host or storage systems) directly to a Fabric, which may consist of multiple Fabric elements.
- a Fibre Channel switch device is a multi-port device where each port manages routing of network traffic between its attached systems and other systems that may be attached to other switches in the Fabric. Each port can be attached to a server, peripheral, I/O subsystem, bridge, hub, router, or another switch.
- system 100 includes a plurality of host computing systems ( 102 - 104 ) that are coupled to a storage services platform (SSP) (also referred to as a “node” or “network node”) 101 via SAN 105 .
- Host systems ( 102 - 104 ) typically include several functional components. These components may include a central processing unit (CPU), main memory, input/output (“I/O”) devices, and local storage devices (for example, internal disk drives).
- the main memory is coupled to the CPU via a system bus or a local memory bus.
- the main memory is used to provide the CPU access to data and/or program information that is stored in main memory at execution time.
- the main memory is composed of random access memory (RAM) circuits.
- SSP (virtualization module) 101 is coupled to SAN 106 that is operationally coupled to plural storage devices, for example, 107 , 108 and 109 .
- SSP 101 provides virtual storage 110 A to host systems 102 - 104 , while operating as a virtual host 110 B to storage devices 107 - 109 .
- Virtual storage 110 A includes a set of disk blocks presented to a host operating system as a range of consecutively numbered logical blocks with physical disk-like storage and SCSI (or any other protocol based) input/output semantics.
- SSP 101 is a multi-port Fabric element in the SAN (e.g., in Fibre Channel, physical ports function as Fx_Ports). As a Fabric element, SSP 101 can process non-blocking Fibre Channel Class 2 (connectionless, acknowledged) and Class 3 (connectionless, unacknowledged) service between any ports.
- SSP 101 ports are generic to common Fibre Channel port types, for example, F_Port, FL_Port and E_Port.
- each GL port can function as any type of switch port.
- the GL port may function as a special port useful in Fabric element linking, as described below.
- SSP 101 is a multi-port network node element in a SAN (e.g., in a Fibre Channel based network, physical ports function as Nx_Ports).
- SSP 101 may originate or respond to network communication (e.g., in a Fibre Channel based network, originate or respond to an exchange).
- SSP 101 may support both switch ports and node ports simultaneously.
- the node ports may be supported directly at a physical interface (not shown) or indirectly as a virtual entity that may be reached via one or more of the physical interfaces (not shown) operating as switch ports. For the latter, these virtual node ports are visible to other network elements as if they were physically attached to switch ports on SSP 101 .
- SSP 101 supports plural upper level protocols, such as SCSI.
- SSP 101 supports SCSI operation on any of its Nx_Ports.
- Each SCSI port can support either initiator or target mode operation, or both.
- FIG. 1B shows a block diagram of a network system where plural SSPs 101 (SSP 1 . . . SSP N ) are operationally coupled in a cluster 100 A.
- SSP 101 provides virtualization services to different host systems and storage devices.
- SSP 1 101 provides virtual disk A ( 110 A) (referred to as virtual storage earlier) to host 102
- SSP N provides virtual disk N ( 110 A) to Host 104 .
- FIG. 1C shows a top-level block diagram of SSP 101 .
- SSP 101 has plural ports (shown as Port 1 -Port N; 115 A, 115 B and 115 C). The ports allow SSP 101 to connect with host systems and other devices, including storage devices, either directly or via a SAN.
- SSP 101 includes a data plane (module/component) 111 and a control plane (module/component) 117 .
- Data plane 111 and control plane 117 communicate via control path 116 .
- Control path 116 is a logical communication path that may consist of one or more physical interconnects.
- control path 116 includes a high speed PCI/PCI-X/PCI-Express bus.
- control path 116 includes a Fibre Channel connection. It is noteworthy that the adaptive aspects of the present invention are not limited to the type of link 116 .
- Data plane 111 includes memory (not shown), a backplane 113 , plural ports ( 115 A- 115 C) and plural packet processing engines (PPEs) (shown as 114 A- 114 C).
- Data plane 111 receives network packets (e.g., command frames, data frames) from host system 102 via plural ports ( 115 A- 115 C).
- I_T_Ls are used to process SCSI based commands, where I stands for an initiator, T for a target and L for a logical unit number value.
- PPEs may forward packets via any Port 115 A- 115 C by sending them through backplane 113 .
- Commands that are autonomously processed by data plane 111 , without assistance from control plane 117 , are sent directly through backplane 113 .
- PPEs 114 A- 114 C may also forward packets to control plane 117 via control path 116 .
- Commands that require assistance from control plane 117 are sent via control path 116 .
- Control plane 117 includes processor 118 , memory 119 and a data plane interface 118 A.
- Data plane interface 118 A facilitates communication with data plane 111 via control path 116 , for example, for sending/receiving commands.
- data plane interface 118 A may include a network adapter, such as a Fibre Channel host bus adapter (HBA).
- data plane interface 118 A includes a bus interface, such as a PCI bridge.
- Processor 118 may be a generic microprocessor (for example, an Intel® Xeon®) and an associated chip set (e.g., Intel E7500), a reduced instruction set computer (RISC) or a state machine. Processor 118 executes software for processing input/output (I/O) requests and processing virtual commands.
- The following provides an example of processing a virtual command.
- When host 102 sends a command to write to virtual storage 110 A, it is considered a virtual command, since it involves a virtual entity ( 110 A).
- A physical command involves actual physical entities.
- The I/O context for the virtual command may be remapped directly to a single corresponding physical command.
- The “Q” in I_T_L_Q identifies the command type.
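- A minimal sketch of such a remapping is shown below, assuming a simplified command descriptor; the structure and helper names are hypothetical, and only the I/T/L/Q terminology comes from the text above:

    #include <stdint.h>

    /* Illustrative command descriptor for a SCSI command addressed either to
     * virtual storage 110A or to a backend physical device. */
    typedef struct {
        uint32_t initiator;   /* I: initiator port */
        uint32_t target;      /* T: target port */
        uint32_t lun;         /* L: logical unit number */
        uint32_t q;           /* Q: identifies the command */
        uint64_t lba;         /* starting LBA of the I/O */
        uint32_t blocks;      /* transfer length in blocks */
    } scsi_cmd_t;

    /* Remap a virtual command to a single corresponding physical command by
     * substituting the backend target/LUN and offsetting the LBA. */
    static scsi_cmd_t remap_virtual_cmd(const scsi_cmd_t *vcmd,
                                        uint32_t phys_target, uint32_t phys_lun,
                                        uint64_t lba_offset)
    {
        scsi_cmd_t pcmd = *vcmd;        /* keep Q and transfer length */
        pcmd.target = phys_target;
        pcmd.lun    = phys_lun;
        pcmd.lba    = vcmd->lba + lba_offset;
        return pcmd;
    }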
- SSP 101 provides various storage related services, including mirroring, snapshots (including copy on write (COW)), journaling and others.
- mirror as used herein includes creating an exact copy of disk data written in real time to a secondary array or disk.
- snapshot means a “point-in-time” copy of block level data. Snapshots are used to restore data accesses to a known good point in time if data corruption subsequently occurs or to preserve an image for non-disruptive tape backup.
- COW means copying only that data that has been modified after an initial snapshot has been taken.
- journaling as used herein means an operation that maintains a list of storage writes in a log file.
- Metadata for the foregoing operations changes dynamically.
- the adaptive aspects disclosed herein provide an efficient system and method to manage metadata, as described below.
- FIG. 1D shows an example of a system for managing metadata, according to one embodiment.
- Cluster 100 A includes various nodes (SSPs). For example, Nodes 1 , 2 and N.
- Nodes 1 and 2 include a data path agent (DPA) 120 (shown as DPA 1 and DPA 2 ).
- Each DPA includes software instructions that are executed by processor 118 .
- DPA 120 provides virtualization services for a plurality of host systems, for example, volume management, data replication, data protection and others.
- Node N includes a metadata controller (MDC) 121 for cluster 100 A.
- MDC 121 coordinates actions of all DPAs in different nodes and manages metadata.
- MDC 121 controls allocation of chunks that are used for persistent storage of metadata, as described below, according to one aspect.
- The term “chunk” as used herein is persistent storage that is used to store metadata and replicated data.
- Although Node N is shown with MDC 121 , it also runs a DPA (not shown); i.e., at any given time, all nodes execute a DPA, while one of the nodes also executes MDC 121 .
- FIG. 1E shows a chunk pool 125 with plural chunks 122 , 123 and 124 .
- MDC 121 allocates these chunks to DPAs, and once a DPA completes writing metadata in a chunk, MDC 121 regains control of the chunk from the DPA. If the DPA fails while the chunk is being written, MDC 121 regains control even before the entire chunk is written, as described below.
- Chunk pool 125 may change statically or dynamically.
- In one aspect, chunks may not be pre-allocated; instead, MDC 121 is aware of available chunk storage and may use a dynamic allocation process to retrieve a chunk when needed.
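- The following sketch shows one way such a chunk pool could be represented, with each chunk either free or owned by a DPA or by MDC 121 ; all names and the fixed-array layout are assumptions for illustration:

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative chunk pool as the MDC might track it. */
    typedef enum { CHUNK_FREE, CHUNK_OWNED_BY_DPA, CHUNK_OWNED_BY_MDC } chunk_state_t;

    typedef struct {
        uint32_t      chunk_id;
        uint64_t      lba;        /* persistent location of the chunk */
        uint32_t      size;       /* chunk size in bytes */
        chunk_state_t state;
        uint32_t      owner_dpa;  /* valid only when owned by a DPA */
    } chunk_t;

    /* Allocate the first free chunk and hand it to the requesting DPA. */
    static chunk_t *pool_allocate(chunk_t *pool, size_t npool, uint32_t dpa_id)
    {
        for (size_t i = 0; i < npool; i++) {
            if (pool[i].state == CHUNK_FREE) {
                pool[i].state = CHUNK_OWNED_BY_DPA;
                pool[i].owner_dpa = dpa_id;
                return &pool[i];
            }
        }
        return NULL;   /* pool exhausted; a dynamic allocation path could grow it */
    }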
- FIG. 2A shows a process flow diagram for managing metadata chunks, according to one embodiment.
- The process starts in step S 200 , when a DPA (for example, 120 ) requests a chunk from MDC 121 for metadata storage services.
- In step S 202 , MDC 121 examines the request from DPA 120 and determines the chunk size that it needs to allocate.
- In step S 204 , MDC 121 allocates a chunk for the request from the chunk pool, for example, chunk pool 125 ( FIG. 1E ).
- Steps S 202 and S 204 are executed as part of the same transaction, i.e., either both happen or neither step takes place.
- In step S 206 , the chunk is assigned to a virtualization-mapping object in a designated node.
- Virtual disks are composed of a hierarchical layering of mapping objects. Each mapping object represents a particular transformation of an I/O operation and contains metadata that directs the transformation of individual I/Os.
- In step S 208 , DPA 120 for the designated node gets control of the chunk and populates the chunk with metadata.
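- A minimal sketch of steps S 202 -S 206 as a single transaction is shown below; the lock-based approach, the size heuristic and all names are illustrative assumptions, since the text only requires that the steps happen together or not at all:

    #include <stdint.h>
    #include <pthread.h>

    /* Hypothetical MDC state and chunk assignment record. */
    typedef struct {
        pthread_mutex_t lock;
        uint32_t        next_chunk_id;
    } mdc_state_t;

    typedef struct {
        uint32_t chunk_id;
        uint32_t size;
        uint32_t mapping_obj_id;   /* virtualization mapping object owning the chunk */
    } chunk_assignment_t;

    static int mdc_allocate_and_assign(mdc_state_t *mdc, uint32_t mapping_obj_id,
                                       uint32_t hint_entries, chunk_assignment_t *out)
    {
        pthread_mutex_lock(&mdc->lock);          /* keep S202, S204 and S206 atomic */
        uint32_t size = hint_entries * 64u;      /* S202: size derived from the request */
        out->chunk_id = mdc->next_chunk_id++;    /* S204: allocate from the pool */
        out->size = size;
        out->mapping_obj_id = mapping_obj_id;    /* S206: assign to the mapping object */
        pthread_mutex_unlock(&mdc->lock);
        return 0;
    }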
- FIG. 2B shows a process flow diagram for storing metadata in chunks, according to one embodiment.
- The process starts in step S 210 , when a DPA (for example, 120 ) that has control of a chunk stores metadata in the assigned chunk.
- Metadata for this example includes an initiator port on an SSP 101 , a remote target port, a LUN identifier, and a logical block address (LBA) offset value.
- Segment map metadata for this example is a table of fixed size segments, each of which maps a virtual LBA region to an underlying mapping object.
- Point-in-time metadata in this example includes a table of fixed size segments, managed by an application to manage COW operations.
- After the DPA completes writing metadata, control of the chunk is passed to MDC 121 in step S 212 . Thereafter, DPA 120 requests another chunk from MDC 121 in step S 214 .
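- From the DPA side, the loop described above (fill the assigned chunk, return control in step S 212 , request a new chunk in step S 214 ) might look like the following sketch; the mdc_* helpers are placeholders for whatever cluster messaging is actually used:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>

    /* Illustrative view of a chunk as a DPA fills it with metadata entries. */
    typedef struct {
        uint32_t chunk_id;
        size_t   capacity;     /* maximum number of metadata entries */
        size_t   used;         /* entries written so far */
    } dpa_chunk_t;

    extern void mdc_release_chunk(uint32_t chunk_id);      /* assumed interface (S212) */
    extern bool mdc_request_chunk(dpa_chunk_t *out);       /* assumed interface (S214) */
    extern void chunk_write_entry(dpa_chunk_t *c, const void *entry);

    static bool dpa_store_metadata(dpa_chunk_t *chunk, const void *entry)
    {
        if (chunk->used == chunk->capacity) {
            mdc_release_chunk(chunk->chunk_id);   /* S212: pass control back to MDC */
            if (!mdc_request_chunk(chunk))        /* S214: ask MDC for another chunk */
                return false;
        }
        chunk_write_entry(chunk, entry);          /* store metadata in the chunk */
        chunk->used++;
        return true;
    }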
- FIG. 2C shows an example of metadata 217 for a segment in disk 216 .
- Metadata 217 includes a header and version number 218 and is identified as md_bluhdl.
- Metadata 217 includes various metadata entries (MD entries 219 ), for example, md_plba (a logical block address); UD disk_ID (user data (UD) identifier); UD StartLBA (start of the user data LBA); UD size (size of each UD entry); flags; characters; and other metadata entries.
- FIG. 3 shows a process flow diagram for managing chunks when a DPA fails before passing off a chunk to MDC 121 , according to one embodiment.
- The process starts in step S 300 .
- A DPA fails before passing off a chunk to MDC 121 .
- MDC 121 reclaims control of the allocated chunk after MDC 121 receives notification of the DPA failure.
- In step S 306 , when the DPA recovers, it requests another chunk from MDC 121 .
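- A sketch of this reclamation step is shown below, assuming the MDC tracks chunk ownership in a simple table; the types and function names are illustrative only:

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative ownership table; mirrors the pool sketch above. */
    typedef enum { CHUNK_FREE, CHUNK_OWNED_BY_DPA, CHUNK_OWNED_BY_MDC } chunk_state_t;

    typedef struct {
        uint32_t      chunk_id;
        chunk_state_t state;
        uint32_t      owner_dpa;
    } chunk_t;

    /* On notification of a DPA failure, the MDC regains control of every
     * chunk that DPA held, even if the chunk was only partially written. */
    static size_t mdc_reclaim_from_failed_dpa(chunk_t *pool, size_t npool,
                                              uint32_t failed_dpa)
    {
        size_t reclaimed = 0;
        for (size_t i = 0; i < npool; i++) {
            if (pool[i].state == CHUNK_OWNED_BY_DPA &&
                pool[i].owner_dpa == failed_dpa) {
                pool[i].state = CHUNK_OWNED_BY_MDC;   /* MDC regains control */
                reclaimed++;
            }
        }
        return reclaimed;   /* the recovered DPA later requests fresh chunks (S306) */
    }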
- FIG. 4 shows a process flow diagram for obtaining control of a chunk, according to one embodiment.
- The process starts in step S 400 .
- MDC 121 determines if there is a trigger event for gaining control of a chunk from a DPA.
- The trigger event may be generated from a user action, for example, creation of a new Point-In-Time copy. If there is no trigger event, the process simply continues to monitor. If there is, then in step S 404 , MDC 121 sends a request to obtain control of an active chunk.
- An active chunk is a chunk that, at any given time, is under control of a DPA and is being written by the DPA.
- In step S 406 , the DPA completes the pending operation and, in step S 408 , returns control of the chunk to MDC 121 .
- In step S 410 , MDC 121 stores a flag indicating that it “owns” (i.e., controls) the chunk.
- In one aspect, MDC 121 is not aware of any metadata format and simply allocates chunks before a chunk is populated by a DPA. In another aspect, if a DPA fails for any reason, MDC 121 obtains control of the chunk.
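- The FIG. 4 handoff can be sketched as follows; the dpa_* helpers and the ownership flag layout are assumptions used only to illustrate steps S 404 -S 410 :

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative record of an active chunk currently owned by a DPA. */
    typedef struct {
        uint32_t chunk_id;
        uint32_t owner_dpa;
        bool     owned_by_mdc;   /* flag stored in step S410 */
    } active_chunk_t;

    extern void dpa_finish_pending_io(uint32_t dpa_id, uint32_t chunk_id); /* S406 */
    extern void dpa_return_chunk(uint32_t dpa_id, uint32_t chunk_id);      /* S408 */

    /* On a trigger event (S404), the MDC asks the owning DPA for the chunk,
     * the DPA finishes its pending write and returns control, and the MDC
     * records that it now owns the chunk. */
    static void mdc_take_active_chunk(active_chunk_t *chunk)
    {
        dpa_finish_pending_io(chunk->owner_dpa, chunk->chunk_id);  /* S406 */
        dpa_return_chunk(chunk->owner_dpa, chunk->chunk_id);       /* S408 */
        chunk->owned_by_mdc = true;                                /* S410 */
    }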
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
    /* Reconstruction of the metadata structures listed in the description.
     * The field names and types are from the patent; the struct wrappers and
     * the comments are illustrative. */
    typedef struct dpa_chunkmd dpa_chunkmd_t;   /* per-entry metadata, defined below */

    struct dpa_chunkmdhdr {                     /* chunk header (wrapper name assumed) */
        uint32_t       version;                 /* metadata format version */
        uint32_t       num_entries;             /* number of MD entries in the chunk */
        dpa_objhdl_t   md_bluhdl;               /* handle of the object holding the metadata */
        dpa_lba_t      md_plba;                 /* physical LBA of the metadata */
        dpa_objhdl_t   ud_bluhdl;               /* handle of the object holding the user data */
        dpa_lba_t      ud_plba;                 /* physical LBA of the user data */
        uint32_t       ud_size;                 /* size of each user data entry */
        uint32_t       checkdata;               /* integrity check value for the header */
        dpa_chunkmd_t *mdentries;               /* array of per-entry metadata */
    };

    struct dpa_chunkmd {                        /* one metadata entry */
        dpa_lba_t       vlba;                   /* virtual LBA covered by this entry */
        dpa_checkmode_t checkmode;              /* integrity check mode */
        uint32_t        checkdata;              /* integrity check value for the entry */
    };
Claims (12)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/694,698 US8185715B1 (en) | 2007-03-30 | 2007-03-30 | Method and system for managing metadata in storage virtualization environment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/694,698 US8185715B1 (en) | 2007-03-30 | 2007-03-30 | Method and system for managing metadata in storage virtualization environment |
Publications (1)
Publication Number | Publication Date |
---|---|
US8185715B1 true US8185715B1 (en) | 2012-05-22 |
Family
ID=46061370
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/694,698 Active 2029-01-08 US8185715B1 (en) | 2007-03-30 | 2007-03-30 | Method and system for managing metadata in storage virtualization environment |
Country Status (1)
Country | Link |
---|---|
US (1) | US8185715B1 (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020049731A1 (en) * | 2000-05-31 | 2002-04-25 | Takuya Kotani | Information processing method and apparatus |
US20020059309A1 (en) * | 2000-06-26 | 2002-05-16 | International Business Machines Corporation | Implementing data management application programming interface access rights in a parallel file system |
US20020184463A1 (en) * | 2000-07-06 | 2002-12-05 | Hitachi, Ltd. | Computer system |
US20030028514A1 (en) * | 2001-06-05 | 2003-02-06 | Lord Stephen Philip | Extended attribute caching in clustered filesystem |
US20070055702A1 (en) * | 2005-09-07 | 2007-03-08 | Fridella Stephen A | Metadata offload for a file server cluster |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130204849A1 (en) * | 2010-10-01 | 2013-08-08 | Peter Chacko | Distributed virtual storage cloud architecture and a method thereof |
US9128626B2 (en) * | 2010-10-01 | 2015-09-08 | Peter Chacko | Distributed virtual storage cloud architecture and a method thereof |
US20120185618A1 (en) * | 2011-01-13 | 2012-07-19 | Avaya Inc. | Method for providing scalable storage virtualization |
US20130041872A1 (en) * | 2011-08-12 | 2013-02-14 | Alexander AIZMAN | Cloud storage system with distributed metadata |
US8533231B2 (en) * | 2011-08-12 | 2013-09-10 | Nexenta Systems, Inc. | Cloud storage system with distributed metadata |
US8849759B2 (en) | 2012-01-13 | 2014-09-30 | Nexenta Systems, Inc. | Unified local storage supporting file and cloud object access |
CN109542342A (en) * | 2018-11-09 | 2019-03-29 | 锐捷网络股份有限公司 | Metadata management and data reconstruction method, equipment and storage medium |
CN109542342B (en) * | 2018-11-09 | 2022-04-26 | 锐捷网络股份有限公司 | Metadata management and data reconstruction method, equipment and storage medium |
US12001355B1 (en) | 2019-05-24 | 2024-06-04 | Pure Storage, Inc. | Chunked memory efficient storage data transfers |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11249857B2 (en) | Methods for managing clusters of a storage system using a cloud resident orchestrator and devices thereof | |
US9032164B2 (en) | Apparatus for performing storage virtualization | |
US9639277B2 (en) | Storage system with virtual volume having data arranged astride storage devices, and volume management method | |
US7447197B2 (en) | System and method of providing network node services | |
US9311001B1 (en) | System and method for managing provisioning of storage resources in a network with virtualization of resources in such a network | |
RU2302034C2 (en) | Multi-protocol data storage device realizing integrated support of file access and block access protocols | |
EP2247076B1 (en) | Method and apparatus for logical volume management | |
US8452923B2 (en) | Storage system and management method thereof | |
US10769024B2 (en) | Incremental transfer with unused data block reclamation | |
US9098466B2 (en) | Switching between mirrored volumes | |
US7315914B1 (en) | Systems and methods for managing virtualized logical units using vendor specific storage array commands | |
US8782245B1 (en) | System and method for managing provisioning of storage resources in a network with virtualization of resources in such a network | |
US20240045807A1 (en) | Methods for managing input-output operations in zone translation layer architecture and devices thereof | |
JP2007141216A (en) | System, method and apparatus for multiple-protocol-accessible osd storage subsystem | |
US8266191B1 (en) | System and method for flexible space reservations in a file system supporting persistent consistency point image | |
US10855556B2 (en) | Methods for facilitating adaptive quality of service in storage networks and devices thereof | |
JP5352490B2 (en) | Server image capacity optimization | |
US9158637B2 (en) | Storage control grid and method of operating thereof | |
US8185715B1 (en) | Method and system for managing metadata in storage virtualization environment | |
US10620843B2 (en) | Methods for managing distributed snapshot for low latency storage and devices thereof | |
US9875059B2 (en) | Storage system | |
US8127307B1 (en) | Methods and apparatus for storage virtualization system having switch level event processing | |
US10872036B1 (en) | Methods for facilitating efficient storage operations using host-managed solid-state disks and devices thereof | |
US10782889B2 (en) | Fibre channel scale-out with physical path discovery and volume move | |
US7299332B1 (en) | System and method for managing sessions and allocating memory resources used for replication of data in a data storage environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: QLOGIC, CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHOW, WILLIAM W.;LOGAN, JOHN;REEL/FRAME:019095/0768 Effective date: 20070329 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
CC | Certificate of correction | ||
FPAY | Fee payment |
Year of fee payment: 4 |
|
AS | Assignment |
Owner name: JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT, IL Free format text: SECURITY AGREEMENT;ASSIGNOR:QLOGIC CORPORATION;REEL/FRAME:041854/0119 Effective date: 20170228 |
|
AS | Assignment |
Owner name: CAVIUM, INC., CALIFORNIA Free format text: MERGER;ASSIGNOR:QLOGIC CORPORATION;REEL/FRAME:044812/0504 Effective date: 20160615 |
|
AS | Assignment |
Owner name: CAVIUM, INC, CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JP MORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:046496/0001 Effective date: 20180706 Owner name: QLOGIC CORPORATION, CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JP MORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:046496/0001 Effective date: 20180706 Owner name: CAVIUM NETWORKS LLC, CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JP MORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:046496/0001 Effective date: 20180706 |
|
AS | Assignment |
Owner name: CAVIUM, LLC, CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:CAVIUM, INC.;REEL/FRAME:047205/0953 Effective date: 20180921 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |
|
AS | Assignment |
Owner name: CAVIUM INTERNATIONAL, CAYMAN ISLANDS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CAVIUM, LLC;REEL/FRAME:051948/0807 Effective date: 20191231 |
|
AS | Assignment |
Owner name: MARVELL ASIA PTE, LTD., SINGAPORE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CAVIUM INTERNATIONAL;REEL/FRAME:053179/0320 Effective date: 20191231 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |