
US20060179170A1 - Failover and data migration using data replication - Google Patents

Failover and data migration using data replication

Info

Publication number
US20060179170A1
US20060179170A1 US11/393,596 US39359606A
Authority
US
United States
Prior art keywords
volume
storage system
storage
data
host
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/393,596
Inventor
Shoji Kodama
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Priority to US11/393,596 priority Critical patent/US20060179170A1/en
Publication of US20060179170A1 publication Critical patent/US20060179170A1/en
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2053 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2056 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F 11/2058 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using more than 2 mirrored copies
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2053 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2056 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F 11/2069 Management of state, configuration or failover
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2053 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2056 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F 11/2087 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring with a common controller
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2053 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2056 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F 11/2071 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using a plurality of controllers
    • G06F 11/2079 Bidirectional techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 2003/0697 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers; device management, e.g. handlers, drivers, I/O schedulers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems

Definitions

  • the present invention is related to data storage systems and in particular to failover processing and data migration.
  • a traditional multipath system, shown in FIG. 7, illustrates the use of multipath software to increase accessibility to a storage system.
  • a host 0701 provides the hardware and underlying system software to support user applications 070101 .
  • Data communication paths 070301 , 070303 provide an input-output (I/O) path to a storage facility 0705 (storage system).
  • Multipath software 070105 is provided to increase accessibility to the storage system 0705 .
  • the software provides typical features including failover handling for failures on the I/O paths 070301 , 070303 between the host 0701 and the storage system 0705 .
  • the host 0701 has two or more Fibre Channel (FC) host bus adapters 070107 .
  • the storage system 0705, likewise, includes multiple Fibre Channel interfaces 070501, where each interface is associated with a volume. In the example shown in FIG. 7, a single volume 070505 is shown.
  • a disk controller ( 070503 ) handles I/O requests received from the host 0701 via the FC interfaces 070501 .
  • the host has multiple physically independent paths 070301, 070303 to the volume(s) in the storage system 0705.
  • Fibre Channel switches which are not shown in the figure can be used for connecting the host and the storage system. It can be appreciated of course that other suitable communication networks can be used; e.g., Ethernet and InfiniBand.
  • user applications 070101 and system software (e.g., the OS file system, volume manager, etc.) issue I/O requests to the volume(s) 070505 in the storage system 0705 via SCSI (small computer system interface) 070103 .
  • the multipath software 070105 intercepts the requests and determines a path 070301 , 070303 over which the request will be sent. The request is sent to the disk controller 070503 over the selected path.
  • Path selection depends on various criteria including, for example, whether or not all the paths are available. If multiple paths are available, the least loaded path can be selected. If one or more paths are unavailable, the multipath software selects one of the available paths. A path may be unavailable because of a failure of a physical cable that connects a host's HBA (host bus adapter) and a storage system's FC interface, a failure of an HBA, a failure of an FC interface, and so on.
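  • As an illustration of the path-selection behavior just described, the following is a minimal sketch (not taken from the patent) of how multipath software might prefer the least-loaded available path and fail over when paths become unavailable; the Path record and load counter are hypothetical.

      from dataclasses import dataclass

      @dataclass
      class Path:
          name: str             # e.g., "HBA -> FC interface over cable 070301"
          available: bool       # False if the cable, HBA, or FC interface has failed
          outstanding_ios: int  # current load on this path

      def select_path(paths):
          """Pick the least-loaded available path; fail if no path survives."""
          candidates = [p for p in paths if p.available]
          if not candidates:
              raise IOError("no available path to the storage system")
          return min(candidates, key=lambda p: p.outstanding_ios)

      # Example: the path over cable 070301 has failed, so I/O is routed over 070303.
      paths = [Path("070301", available=False, outstanding_ios=3),
               Path("070303", available=True, outstanding_ios=5)]
      print(select_path(paths).name)   # prints: 070303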
  • Typical commercial systems include Hitachi Dynamic Link Manager™ by Hitachi Data Systems; VERITAS Volume Manager™ by VERITAS Software Corporation; and EMC PowerPath by EMC Corporation.
  • FIG. 8 shows a storage system configured for data migration.
  • To switch the host machine 1301 over to storage system B 1307, the data stored in Volume X-P needs to be migrated to Volume X-S (the assumption is that Volume X-S does not have a copy of the data on Volume X-P).
  • a communication channel from the host machine 1301 to storage system B 1307 must be provided.
  • physical cabling 130301 that connects the host machine 1301 to storage system A 1305 needs to be reconnected to storage system B 1307 .
  • the reconnected cable is shown in dashed lines 130303 .
  • Data migration from storage system A 1305 to storage system B 1307 is accomplished by the following steps. It is noted here that some or all of the data in storage system A can be migrated to storage system B. The amount of data that is migrated will depend on the particular situation.
  • the user must stop all I/O activity with the storage system A 1305 . This might involve stopping the user's applications 130101 , or otherwise indicating to (signaling) the applications to suspend I/O operations to storage system A. Depending on the host machine, the host machine itself may have to be shut down.
  • the physical cabling 130301 must be reconfigured to connect the host machine 1301 to storage system B 1307 .
  • On the storage system side, the data in Volume X-P must be migrated to Volume X-S. To do this, the disk controller 130703 of storage system B initiates a copy operation to copy data from Volume X-P to Volume X-S. The data migration is performed over the FC network 130505. Once the data migration is under way, the user applications 130101 can once again resume their I/O activity, now with storage system B, where the migration operation continues as a background process. Depending on the host machine, this may involve restarting (rebooting) the host machine.
  • the disk controller B 130703 accesses the data of the requested data block from storage system A.
  • the migration takes place on a block-by-block basis in sequential order.
  • a read operation will likely access a block that is out of sequence with respect to the sequence of migration of the data blocks.
  • the disk controller B can use a bitmap (or some other suitable mechanism) to keep track of which blocks have been updated by the migration operation and by the write operations.
  • the bitmap can also be used to prevent a newly written block location from being over-written with data from Storage System A during the data migration process.
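  • A minimal sketch of the bookkeeping just described, assuming a simple block-array model of the two volumes; the names (migrated, host_written) are illustrative and not taken from the patent.

      def migration_step(src, dst, migrated, host_written, blk):
          """Background copy of one block from Storage System A to B, skipping
          blocks the host has already rewritten on B so new data is not overwritten."""
          if not migrated[blk] and not host_written[blk]:
              dst[blk] = src[blk]
          migrated[blk] = True

      def host_write(dst, migrated, host_written, blk, data):
          """Host write during migration: write to the new volume and mark the block
          so the background copy will not clobber it."""
          dst[blk] = data
          host_written[blk] = True
          migrated[blk] = True

      def host_read(src, dst, migrated, blk):
          """Out-of-sequence read: fetch from the old volume if the block has not
          been migrated yet (as disk controller B does above)."""
          return dst[blk] if migrated[blk] else src[blk]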
  • Typical commercial systems include Hitachi On-Line Data Migration by Hitachi Data Systems and Peer-to-peer Remote Copy (PPRC) Dynamic Address Switching (DAS) by IBM, Inc.
  • FIG. 9 shows a conventional server clustering system.
  • Clustering is a technique for increasing system availability.
  • host systems 0901 and 0909 each can be configured respectively with suitable clustering software 090103 and 090903 , to provide failover capability among the hosts.
  • FIG. 9 shows that Host 1 is connected to storage system A 0905 over an FC network 090301 .
  • Host 2 is connected to storage system B 0907 over an FC network 090309 .
  • Storage system A and storage system B are in data communication with each other over yet another FC network 090305 .
  • the network passes through a wide area network (WAN), meaning that Host 2 and storage system B can be located at a remote data center that is far from Host 1 and storage system A.
  • Host 1 accesses (read, write) Volume X-P in storage system A.
  • the disk controller A 090503 replicates data that is written to Volume X-P by Host 1 to Volume X-S in storage system B.
  • the replication is performed over the FC network 090305 .
  • the replication can occur synchronously, in which case the storage system A does not acknowledge a write request from the Host 1 until it is determined that the data associated with the write request has been replicated to storage system B.
  • the replication can occur asynchronously, in which case storage system A acknowledges the write request from Host 1 independently of when the data associated with the write request is replicated to the storage system B.
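  • The difference between the two modes can be summarized in a short sketch, assuming a replicate() callable that ships the written block to storage system B; the function names are illustrative, not the patent's.

      import queue

      replication_queue = queue.Queue()

      def write_synchronous(primary, blk, data, replicate):
          """Storage system A does not acknowledge the host until the data has
          been replicated to storage system B."""
          primary[blk] = data
          replicate(blk, data)
          return "ACK"

      def write_asynchronous(primary, blk, data):
          """Storage system A acknowledges immediately; a background task later
          drains replication_queue to storage system B in write order."""
          primary[blk] = data
          replication_queue.put((blk, data))
          return "ACK"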
  • Host 2 can detect a failure in Host 1 by using a heartbeat message, where Host 1 periodically transmits a message (“heartbeat”) to the Host 2.
  • a failure in Host 1 is indicated if Host 2 fails to receive the heartbeat message within a span of time. If the failure occurs in the storage system A, the Host 1 can detect such failure; e.g., by receiving a failure response from the storage system, by timing out waiting for a response, etc.
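  • A minimal sketch of heartbeat-based failure detection as described above, assuming Host 2 records the arrival time of each heartbeat and polls for a timeout; the timeout value is arbitrary.

      import time

      HEARTBEAT_TIMEOUT = 10.0   # seconds without a heartbeat before Host 1 is presumed failed

      class HeartbeatMonitor:
          def __init__(self):
              self.last_seen = time.monotonic()

          def receive_heartbeat(self):
              """Called by Host 2 whenever a heartbeat message arrives from Host 1."""
              self.last_seen = time.monotonic()

          def peer_failed(self):
              """Polled periodically by Host 2's clustering software."""
              return time.monotonic() - self.last_seen > HEARTBEAT_TIMEOUT

      # On detecting a failure, Host 2 would issue the split pair operation for
      # Volume X-P / Volume X-S, mount Volume X-S, and restart the applications.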
  • the clustering software 090103 in the Host 1 can signal the Host 2 of the occurrence.
  • When Host 2 detects the occurrence of a failure, it performs a split pair operation (in the case where remote copy technology is being used) between Volume X-P and Volume X-S. When the split pair operation is complete, Host 2 can mount Volume X-S and start the applications 090901 to resume operations on Host 2.
  • the split pair operation causes the data replication between Volume X-P and Volume X-S to complete without interruption; Host 1 cannot update Volume X-P during a split pair operation. This ensures that Volume X-S is a true copy of Volume X-P when Host 2 takes over for Host 1.
  • In this active-sleep failover configuration, Host 2 is not active (sleep, standby mode) from a user application perspective until a failure is detected in Host 1 or in storage system A.
  • Typical commercial systems include VERITAS Volume Manager™ by VERITAS Software Corporation and Oracle Real Application Clusters (RAC) 10g by Oracle Corp.
  • FIG. 10 shows a conventional remote data replication configuration (remote copy). This configuration is similar to the configuration shown in FIG. 9 except that the host 1101 in FIG. 10 is not clustered. Data written by applications 110101 to the Volume X-P is replicated by the disk controller A 110503 in the storage system A 1105 . The data is replicated to Volume X-S in storage system B 1107 over an FC network 110305 . Although it is not shown, the storage system B can be a remote system accessed over a WAN.
  • Typical commercial systems include Hitachi TrueCopy™ Remote Replication Software by Hitachi Data Systems, and VERITAS Storage Replicator and VERITAS Volume Replicator, both by VERITAS Software Corporation.
  • a data access method and system includes a host system having a virtual volume module.
  • the virtual volume module receives I/O operations originating from I/O requests made by applications executing on the host system.
  • the I/O operations are directed to a virtual volume.
  • the virtual volume module produces “corresponding I/O operations” that are directed to a target physical volume in a target storage system.
  • the target storage system can be selected from among two or more storage systems. Data written to the target storage system is replicated to another of the storage systems.
  • the virtual volume module designates another storage system as the target storage system for subsequent corresponding I/O operations.
  • FIG. 1 is a block diagram showing a configuration of a computer system to which first and second embodiments of the present invention are applied;
  • FIG. 2 illustrates in tabular format configuration information used by the virtual volume module
  • FIGS. 2A and 2B illustrate particular states of the configuration information
  • FIG. 2C shows a transition diagram of the typical pairing states of a remote copy pair
  • FIG. 3 is a block diagram showing a configuration of a computer system to which a third embodiment of the present invention is applied;
  • FIG. 4 is a block diagram showing a configuration of a computer system to which a fourth embodiment of the present invention is applied;
  • FIG. 4A shows failover processing when the production volume fails
  • FIG. 4B shows failover processing when the backup volume fails
  • FIG. 5 is a block diagram showing a configuration of a computer system to which a fifth embodiment of the present invention is applied;
  • FIG. 6 is a block diagram showing a configuration of a computer system to which a variation of the first embodiment of the present invention is applied;
  • FIG. 7 shows a conventional multipath configuration in a storage system
  • FIG. 8 shows a conventional data migration configuration in a storage system
  • FIG. 9 shows a conventional server clustering configuration in a storage system
  • FIG. 10 shows a conventional remote data replication configuration in a storage system.
  • FIG. 1 shows an illustrative embodiment of a first aspect of the present invention. This embodiment illustrates path failover between two storage systems, although the present invention can be extended to cover more than two storage systems.
  • the embodiment described is a multiple storage system which employs remote copy technology to provide failover recovery.
  • a virtual volume module is provided in a host system.
  • the host system is in data communication with a first storage system. Data written to the first storage system is duplicated, or otherwise replicated, to a second storage system.
  • the virtual volume module interacts with the first and second storage systems to provide virtual storage access for applications running or executing on the host system.
  • the virtual volume module can detect a failure that can occur in either or both of the first and second storage systems and direct subsequent data I/O requests to the surviving storage system, if there is a surviving storage system.
  • a system includes a host 0101 that is in data communication with storage systems 0105 , 0107 , via suitable communication network links.
  • a Fibre Channel (FC) network 010301 connects the host 0101 to a storage system 0105 (Storage System A).
  • An FC network 010303 connects the host 0101 to a storage system 0107 (Storage System B).
  • the storage systems 0105 , 0107 are linked by an FC network 010305 .
  • Fibre Channel switches 0109 can be used to create a storage area network (SAN) among the storage systems.
  • the FC networks shown in FIG. 1 can be individual networks, or part of the same network, or may comprise two or more different networks.
  • the host 0101 comprises standard hardware components typically found in a host computer system, including a data processing unit (e.g., CPU), memory (e.g., RAM, boot ROM, etc.), local hard disk storage, and so on.
  • the host 0101 further comprises one or more FC host bus adapters (FC HBAs) 010107 to connect to the storage systems 0105, 0107.
  • FIG. 1 shows two FC HBAs illustrated in phantom, each having a connection to one of the storage systems 0105 , 0107 .
  • the host 0101 further includes a virtual volume manager 010105 , a small computer system interface (SCSI) 010103 , and one or more applications 010101 .
  • the applications can be user-level software that runs on top of an operating system (OS), or system-level software that is a component of the OS.
  • the applications access (read, write) the storage systems 0105 , 0107 by making input/output (I/O) requests to the storage systems.
  • OSs include Unix, Linux, Windows 2000/XP/2003, MVS, and so on.
  • User-level applications include typical systems such as database systems, but of course can be any software that has occasion to access data on a storage system.
  • Typical system-level applications include system services such as file systems and volume managers. Typically, there is data associated with an access request, whether it is data to be read from storage or data to be written to storage.
  • the SCSI interface 010103 is a typical interface to access volumes provided by the storage systems 0105 , 0107 .
  • the virtual volume module 010105 presents “virtual volumes” to the host applications 010101 .
  • the virtual volume module interacts with the SCSI interface 010103 to map virtual volumes to physical volumes in storage systems 0105 , 0107 .
  • the OS is configured with one or more virtual volumes.
  • when the OS accesses the volume, it directs one or more suitable SCSI commands to the virtual volume, by way of the virtual volume module 010105.
  • the virtual volume module 010105 produces corresponding commands or operations that are targeted to one of the physical volumes (e.g., Volume X-P, Volume X-S) in the storage systems 0105, 0107.
  • the corresponding command or operation may be a modification of the original SCSI command or operation if a parameter of the command includes a reference to the virtual volume (e.g., open). The modification would be to replace the reference to the virtual volume with a reference to a physical volume (the target physical volume).
  • Subsequent commands need only be directed to the appropriate physical volume, including communicating over the appropriate communication interface ( 010107 , FIG. 1 ).
  • the application can make a file system call, which is translated by the OS to a series of SCSI accesses that are targeted to a virtual volume.
  • the virtual volume module in turn makes corresponding SCSI accesses to one of the physical volumes.
  • the OS provides the capability
  • the user-level application can make direct calls to the SCSI interface to access a virtual volume. Again, the virtual volume module would modify the calls to access one of the physical volumes. Further detail about this aspect of the present invention will be discussed below.
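  • A minimal sketch of the remapping step just described, with SCSI commands modeled as simple dictionaries; only the volume reference is rewritten before the command is forwarded over the interface that reaches the target physical volume. The mapping table and all names are illustrative, not the patent's.

      # Mapping maintained by the virtual volume module (compare the configuration
      # table, Table I, discussed below); values here are hypothetical.
      virtual_to_physical = {
          "VVolX": {"volume": "Volume X-P", "storage": "Storage System A", "hba": "HBA0"},
      }

      def forward(command, send):
          """Rewrite a command aimed at a virtual volume and forward it to the
          corresponding physical volume via the appropriate HBA."""
          target = virtual_to_physical[command["volume"]]
          physical_cmd = dict(command, volume=target["volume"])
          return send(target["hba"], physical_cmd)

      # Example: an application-level read of VVolX becomes a read of Volume X-P.
      def send(hba, cmd):
          return f"{cmd['op']} {cmd['volume']} via {hba}"

      print(forward({"op": "read", "volume": "VVolX", "lba": 128}, send))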
  • Each storage system 0105 , 0107 includes one or more FC interfaces 010501 , 010701 , one or more disk controllers 010503 , 010703 , one or more cache memories 010507 , 010707 , and one or more volumes 010505 , 010705 .
  • the FC interface is physically connected to the host 0101 or to the other storage system, and receives I/O and other operations from the connected device. The received operation is forwarded to the disk controller which then interacts with the storage device to process the I/O request.
  • the cache memory is a well known technique for improved read and write access.
  • Storage system 0105 provides a volume designated as Volume X-P 010505 for storing data.
  • Storage system 0107 likewise, provides a volume designated as Volume X-S 010705 for storing data.
  • a volume is a logical unit of storage that is composed of one or more physical disk drive units. The physical disk drive units that constitute the volume can be part of the storage system or can be external storage that is separate from the storage system.
  • the virtual volume module 010105 performs a discover operation.
  • In the example shown in FIG. 1, assume that Volume X-P and Volume X-S will be discovered.
  • a configuration file stored in the host 0101 will indicate to the virtual volume module 010105 that Volume X-S is the target of a replication operation that is performed on Volume X-P.
  • Table I below is an illustrative example of the relevant contents of a configuration file:

      TABLE I
      #Configuration File
      MultiPathSet Name: Pair 1
      Primary Volume: Volume X-P in Storage System A
      Secondary Volume: Volume X-S in Storage System B
      Virtual Volume Name: VVolX
      MultiPathSet Name: Pair 2
      Primary Volume: Volume Y-P in Storage System C
      Secondary Volume: Volume Y-S in Storage System B
      Virtual Volume Name: VVolY
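  • The configuration file above is a flat key/value listing; the following is a minimal parser sketch (not part of the patent) that turns it into the pair records the virtual volume module needs.

      def parse_config(text):
          """Parse 'MultiPathSet Name: ...' stanzas into a list of dictionaries."""
          pairs, current = [], None
          for line in text.splitlines():
              line = line.strip()
              if not line or line.startswith("#"):
                  continue
              key, _, value = line.partition(":")
              key, value = key.strip(), value.strip()
              if key == "MultiPathSet Name":
                  current = {"pair": value}
                  pairs.append(current)
              elif current is not None:
                  current[key] = value
          return pairs

      config_text = """#Configuration File
      MultiPathSet Name: Pair 1
      Primary Volume: Volume X-P in Storage System A
      Secondary Volume: Volume X-S in Storage System B
      Virtual Volume Name: VVolX
      """
      print(parse_config(config_text))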
  • a command line interface can be provided, allowing a user (e.g., system administrator) to interactively configure the virtual volume module 010105 .
  • interprocess communication (IPC) or some other similar mechanism can be used to signal the virtual volume module with the information contained in the configuration table (Table I).
  • volume X-P is referred to as a primary volume, meaning that it is the volume with which the host will perform I/O operations.
  • the storage system 0105 containing Volume X-P can be referred to as the primary system.
  • Volume X-S is referred to as the secondary volume; the storage system 0107 containing Volume X-S can be referred to as the secondary system.
  • data written to the primary volume is replicated to the secondary volume.
  • the secondary volume also serves as a failover volume in case the primary volume goes off line for some reason, whether scheduled (e.g., for maintenance activity), or unexpectedly (e.g., failure).
  • the example configuration file shown in Table I identifies two replication pairs; or, viewed from a failover point of view, two failover paths.
  • a virtual volume module issues a pair creation request to a primary storage system. The disk controller of the primary storage system then creates the requested pair, sets the pair status to SYNCING, and sends a completion response to the virtual volume module. After the pair is created, the disk controller of the primary storage system starts to copy data in the primary volume to the secondary volume in the secondary storage system. This is called the Initial Copy. The initial copy is asynchronous and independent of I/O request processing by the disk controller. The disk controller knows which blocks in the primary volume have been copied to the secondary volume by using a bitmap table. When the primary volume and the secondary volume become identical, the disk controller changes the pair status to SYNCED.
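  • A minimal sketch of the pair-creation and initial-copy sequence just described, using an in-memory block model; the statuses follow the names used here (SYNCING, SYNCED), while everything else is illustrative.

      class RemoteCopyPair:
          def __init__(self, primary, secondary):
              self.primary, self.secondary = primary, secondary
              self.status = "SYNCING"
              self.copied = [False] * len(primary)   # bitmap used by the initial copy

          def initial_copy_step(self):
              """Copy the next uncopied block; runs asynchronously from host I/O."""
              for blk, done in enumerate(self.copied):
                  if not done:
                      self.secondary[blk] = self.primary[blk]
                      self.copied[blk] = True
                      return
              self.status = "SYNCED"                 # volumes are now identical

          def host_write(self, blk, data):
              """While SYNCING or SYNCED, a write goes to the primary and is mirrored."""
              self.primary[blk] = data
              self.secondary[blk] = data
              self.copied[blk] = True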
  • although the virtual volume module 010105 and the storage systems 0105, 0107 use remote copy technology, it can be appreciated that embodiments of the present invention can be implemented with any suitable data replication or data backup technology or method.
  • the virtual volume module can be readily configured to operate according to the data replication or data backup technology that is provided by the storage systems.
  • the primary volume serves as the production volume for data I/O operations made by user-level and system-level applications running on the host.
  • the secondary volume serves as a backup volume for the production volume.
  • the backup volume can become the production volume if failure of the production volume is detected.
  • the terms primary volume and secondary volume do not refer to the particular underlying data replication or data backup technology but rather to the function being served, namely, production volume and backup volume. It will be further understood that some of the operations performed by the virtual volume module are dictated by the remote copy methodology of the storage systems.
  • the virtual volume module 010105 provides virtual volume access to the applications 010101 executing on the host 0101 via the SCSI interface 010103 .
  • the OS, and in some cases user-level applications, “see” a virtual volume that is presented by the virtual volume module 010105.
  • Table I shows a virtual volume that is identified as VVolX.
  • the applications (e.g., via the OS) send conventional SCSI commands, including but not limited to read and write operations, to the virtual volume.
  • the virtual volume module intercepts the SCSI commands and translates the commands to corresponding I/O operations that are suitable for accessing Volume X-P in the storage system 0105 or for accessing Volume X-S in the storage system 0107 .
  • in this example, Volume X-P is the primary volume and Volume X-S is the secondary volume.
  • the virtual volume module 010105 can learn of the availability status of the volumes (Volume X-P, Volume X-S) by issuing suitable I/O operations (or some other SCSI command) to the volumes.
  • the availability of the volumes can be determined based on the response. For example, if a response to a request issued to a volume is not received within a predetermined period, then it can be concluded that the volume is not available.
  • explicit commands may be provided to obtain this information.
  • FIG. 2 shows in tabular form information that is managed and used by the virtual volume manager 010105 .
  • the information indicates the availability and pairing state of the volumes.
  • a Pair Name field contains the name of the remote copy pair as shown in the configuration table (Table I); e.g., “Pair 1” and “Pair 2”.
  • a Volumes field contains the names of the volumes which constitute the identified pairs, also shown in the configuration table (Table I).
  • a Storage field contains the names of the storage systems in which the volumes reside; e.g., Storage System A ( 0105 ), Storage System B ( 0107 ).
  • a Roles field indicates which volume is acting as the primary volume and which volume is the corresponding secondary volume, for each remote copy pair.
  • An HBA field identifies the HBA from which a volume can be accessed.
  • An Availability field indicates if a volume is available or not.
  • a Pair field indicates the pair status of the pair; e.g., SYNCING, SYNCED, SPLIT, REVERSE-SYNCING, REVERSE-SYNCED, and DECOUPLED.
  • the paired volumes may be SPLIT, which means they are still considered paired volumes.
  • in the SPLIT state, remote copy operations are not performed when the primary volume receives and services write requests.
  • write requests can be serviced by the secondary volume.
  • the remote copy operations may be re-started. If during the SPLIT state, the secondary volume did not service any write requests, then we need only ensure that write requests performed by the primary volume are mirrored to the secondary volume; the volumes thus transition through the SYNCING state to the SYNCED state.
  • the secondary volume is permitted to service write requests in addition to the primary volume.
  • Each volume can receive write requests from the same host, or from different host machines.
  • the data state of each volume will diverge from the SYNCED state.
  • any data that had been written to the secondary volume is discarded.
  • data that was written to the primary volume during the SPLIT state is copied to the secondary volume.
  • any blocks that were updated in the secondary volume during the SPLIT state must be replaced with data from the corresponding blocks in the primary volume. In this way, the data state of the secondary volume is once again synchronized to the data state of the primary volume.
  • the pair status goes from SPLIT state, to SYNCING state, to SYNCED state.
  • any data that had been written to the primary volume is discarded.
  • data that was written to the secondary volume during the SPLIT state is copied to the primary volume.
  • any blocks that were updated in the primary volume during the SPLIT state must be replaced with data from the corresponding blocks in the secondary volume.
  • the data state of the primary volume is now synchronized to the data state of the secondary volume.
  • the pair status goes from SPLIT state, to REVERSE-SYNCING state, to REVERSE-SYNCED state because of the role reversal between the primary volume and the secondary volume.
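  • The pair-status transitions described above can be captured in a small transition table; this sketch lists only the transitions named in this description and validates requested changes (the event names are illustrative).

      TRANSITIONS = {
          ("SYNCING", "copy-complete"): "SYNCED",
          ("SYNCED", "split"): "SPLIT",
          ("SPLIT", "resync"): "SYNCING",                 # secondary-side writes discarded
          ("SPLIT", "reverse-sync"): "REVERSE-SYNCING",   # primary-side writes discarded
          ("REVERSE-SYNCING", "copy-complete"): "REVERSE-SYNCED",
          ("REVERSE-SYNCED", "split"): "SPLIT",
      }

      def next_status(status, event):
          try:
              return TRANSITIONS[(status, event)]
          except KeyError:
              raise ValueError(f"illegal transition: {event!r} while {status}")

      print(next_status("SYNCED", "split"))   # prints: SPLIT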
  • if the virtual volume module 010105 determines that Volume X-P is not available, and it is the primary volume (as determined from the configuration table, Table I), and the pair status is SYNCED, then the virtual volume module will instruct the storage system 0107 to split the remote copy pair (i.e., the Volume X-P and Volume X-S pair).
  • the storage system 0107 changes the pair status from SYNCED (which means that any updates on Volume X-P are reflected to Volume X-S so these two volumes remain identical, and it is not possible for a host to write data onto Volume X-S) to SPLIT (which means that Volume X-P and Volume X-S are still associated as a remote copy pair but updates made to Volume X-P are not reflected to Volume X-S and updates made to Volume X-S are not reflected to Volume X-P).
  • the virtual volume module subsequently uses Volume X-S in the storage system 0107 to service I/O requests made by the host, instead of Volume X-P.
  • the Availability and Pair fields for Volume X-P and for Volume X-S would be updated as shown.
  • the Availability field for Volume X-P would be “No”.
  • the Availability field for Volume X-S would be “Yes”.
  • the Pair field for Volume X-P and X-S would be SPLIT.
  • the table in FIG. 2 is managed by the virtual volume module. Information required to manage pairs of volumes is also maintained by both storage systems, because they need to know how to replicate volumes across storage systems; they keep the same information.
  • the virtual volume module needs to ask only one of the storage systems to change the pair status and then such changes are reflected to the other storage system by communications between the storage systems.
  • when Volume X-P is not available, the virtual volume module cannot determine where the problem lies. It therefore asks Storage System B to change the pair status, and it is then the storage system's responsibility to reflect the change to the other storage system. If storage system A is alive, the change is reflected to storage system A; otherwise it is not.
  • if the virtual volume module 010105 determines that Volume X-P is not available, and it is the primary volume (as determined from the configuration table, Table I), and the pair status is SYNCING, then the virtual volume module cannot process I/O requests from the applications and returns an error to them.
  • if the virtual volume module 010105 determines that Volume X-S is not available, and the role of Volume X-S is the primary volume, and the pair status is REVERSE-SYNCED, then the virtual volume module will communicate a command to the storage system 0105 to split the remote copy pair of Volume X-P and Volume X-S.
  • the storage system 0105 changes the pair status from REVERSE-SYNCED (which means any updates on Volume X-S are reflected to Volume X-P and the two volumes remain identical, and it is not possible for a host to write data onto Volume X-P) to SPLIT.
  • the virtual volume module subsequently forwards I/O operations (and other SCSI commands) to service I/O requests from the applications 010101 to the Volume X-P in the storage system 0105 , instead of Volume X-S.
  • the Availability and Pair fields for Volume X-P and for Volume X-S would be updated as shown.
  • the Availability field for Volume X-P would be “Yes”.
  • the Availability field for Volume X-S would be “No”.
  • the Pair field for Volume X-P and X-S would be SPLIT.
  • if the virtual volume module 010105 determines that Volume X-S is not available, and it is the primary volume (as determined from the configuration table, Table I), and the pair status is REVERSE-SYNCING, then the virtual volume module cannot process I/O requests from the applications and returns an error to them.
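  • The failure handling just described amounts to a small decision procedure; the sketch below assumes one row of the FIG. 2 table is available as a dictionary and that split(), use_volume(), and report_error() are callables supplied by the module. All names are illustrative, not the patent's.

      def on_volume_unavailable(entry, failed_volume, split, use_volume, report_error):
          """Handle an unavailable volume according to its role and the pair status."""
          role = entry["roles"][failed_volume]        # "Primary" or "Secondary"
          status = entry["pair_status"]
          if role == "Primary" and status in ("SYNCED", "REVERSE-SYNCED"):
              split(entry["pair"])                    # ask the surviving system to split
              entry["pair_status"] = "SPLIT"
              entry["availability"][failed_volume] = "No"
              survivor = next(v for v in entry["volumes"] if v != failed_volume)
              use_volume(survivor)                    # service subsequent I/O from it
          elif role == "Primary" and status in ("SYNCING", "REVERSE-SYNCING"):
              report_error("I/O cannot be serviced while the pair is resynchronizing")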
  • when Volume X-P becomes unavailable, the virtual volume module 010105 begins to use Volume X-S.
  • the virtual volume manager sends a reverse-sync request to the storage system 0107 . The purpose of doing this is to re-establish Volume X-P as the primary volume.
  • the reverse-sync request initiates an operation to copy data that was written to Volume X-S, during the time that Volume X-P was unavailable (i.e., subsequent to the SPLIT), back to Volume X-P. Recall that Volume X-P is initially the primary volume and Volume X-S is the secondary volume.
  • In response to receiving the reverse-sync request, the disk controller 010703 changes the pair status of the pair to REVERSE-SYNCING and responds with a suitable response to the host 0101.
  • the disk controller 010703 begins copying data that was written to Volume X-S during the SPLIT state to Volume X-P.
  • a write request logging mechanism is used to determine which blocks on the volume had changed and in which order.
  • the copy of changed blocks from Volume X-S to Volume X-P is performed asynchronously from processing new I/O requests from hosts, meaning that during this copy, the disk controller accepts I/O requests from the hosts to Volume X-S.
  • When the copy is complete, the volumes become identical, and the pair status is changed to REVERSE-SYNCED.
  • the virtual volume module 010105 then updates the table shown in FIG. 2 .
  • the virtual volume module then changes the role of Volume X-S to primary volume (the Role field for Volume X-S is set to “Primary”) and the role of Volume X-P to secondary volume (the Role field for Volume X-P is set to “Secondary”).
  • the Availability field for Volume X-P is changed to “Yes”.
  • the virtual volume module 010105 communicates with the storage system 0107 to determine whether the pair status is REVERSE-SYNCED or not. If not, then the virtual volume module 010105 waits for the state to be achieved.
  • the REVERSE-SYNCED state means data in Volume X-S is identical to the data in Volume X-P.
  • the virtual volume module stops processing any more I/O requests from applications. I/O requests are queued in a wait queue which a virtual volume module manages.
  • the virtual volume module 010105 splits the pair. Disk controller 010503 and disk controller 010703 change the status to SPLIT.
  • the disk controller 010503 informs the host 0101 that the SPLIT has occurred.
  • the virtual volume module 010105 then changes the role of Volume X-P to primary volume in the table shown in FIG. 2 , and the role of Volume X-S is changed to secondary volume.
  • the virtual volume module starts processing I/O requests in the wait queue, as well as new I/O requests from applications. At this time, I/O requests are issued by the host to Volume X-P.
  • the virtual volume module resyncs the pair comprising Volume X-P and Volume X-S.
  • Disk controllers 010503 and 010703 change the pair status to SYNCING. Data which has been written to Volume X-P is copied to Volume X-S. During this copy, a disk controller accepts I/O requests from a host to Volume X-P. Data that is subsequently written to Volume X-P will then be copied to Volume X-S synchronously. When the copy has completed, the volume pairs contain identical data.
  • the disk controller 010503 changes the pair status from SYNCING to SYNCED and then informs the host 0101 that the pair has been re-synced.
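  • The failback procedure above (re-establishing Volume X-P as the production volume) is essentially a fixed sequence of pair operations; this sketch strings them together, assuming reverse_sync/split/resync primitives on the storage system and a wait queue in the module. All names are illustrative, not the patent's.

      def fail_back(pair, module, storage):
          """'storage' stands for whichever storage system the module addresses."""
          storage.reverse_sync(pair)                      # copy SPLIT-era writes X-S -> X-P
          while storage.pair_status(pair) != "REVERSE-SYNCED":
              module.wait()                               # volumes are not identical yet

          module.hold_new_io()                            # queue new I/O in the wait queue
          storage.split(pair)                             # both disk controllers move to SPLIT
          module.set_roles(primary="Volume X-P", secondary="Volume X-S")
          module.resume_io(target="Volume X-P")           # drain the wait queue against X-P

          storage.resync(pair)                            # SYNCING: mirror X-P writes to X-S
          while storage.pair_status(pair) != "SYNCED":
              module.wait()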
  • the disk controller 010503 receives a data write request from the host 0101 . If the pair status of the pair consisting of Volume X-P and Volume X-S is in the SYNCING or SYNCED state, then the disk controller 010503 writes the data to Volume X-P. If there is a failure during the attempt to perform the write operation to service the write request, a suitable error message is returned to the host 0101 . Assuming the write operation to Volume X-P is successful, then the disk controller 010503 will send the data to the storage system 0107 via the FC network 010305 . It is noted that the data can be cached in the cache 010507 before being actually written to Volume X-P.
  • Upon receiving the data from the disk controller 010503, the disk controller 010703 in the storage system 0107 will write the data to Volume X-S. The disk controller 010703 sends a suitable response back to the disk controller 010503 indicating a successful write operation. Upon receiving a positive indication from the disk controller 010703, the disk controller 010503 in the storage system 0105 will communicate a response to the host 0101 indicating that the data was written to Volume X-P and to Volume X-S. It is noted that the data can be cached in the cache 010707 before being actually written to Volume X-S.
  • the disk controller 010703 encounters an error in writing to Volume X-S, then it will send a suitable negative response to the storage system 0105 .
  • the disk controller 010503, in response, will send a suitable response to the host 0101 indicating that the data was written to Volume X-P, but not to Volume X-S.
  • the disk controller 010503 receives a write request and the pair status of Volume X-P and Volume X-S is SPLIT.
  • the disk controller 010503 will perform a write operation to Volume X-P. If there is a failure during this attempt, then the disk controller will respond to the host 0101 with a response indicating the data could not be written to Volume X-P. If the write operation to Volume X-P succeeded, then a suitable positive response is sent back to the host 0101 . It is noted that the data can be cached in the cache 010507 before being actually written to Volume X-P. Since the pair status is SPLIT, there is no step of sending the data to the storage system 0107 .
  • the disk controller 010503 logs write requests in its memory or a temporary disk space. By using the log, when the pair status is changed to SYNCING, the disk controller 010503 can send the write requests kept in the log to the disk controller 010703 in the order in which the disk controller 010503 received the write requests from the host.
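  • A minimal sketch of the SPLIT-state write log, assuming a simple in-order list; when the pair returns to SYNCING, the logged writes are replayed to the secondary controller in arrival order. Names are illustrative.

      class SplitWriteLog:
          def __init__(self):
              self.entries = []        # (block, data) in the order received from the host

          def record(self, blk, data):
              self.entries.append((blk, data))

          def replay(self, send_to_secondary):
              """Called when the pair status changes back to SYNCING."""
              for blk, data in self.entries:
                  send_to_secondary(blk, data)
              self.entries.clear()

      log = SplitWriteLog()
      log.record(7, b"new data")
      log.replay(lambda blk, data: print("replicating block", blk))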
  • the disk controller 010703 receives a data write request from the host 0101 . If the status of the volume pair of Volume X-P and Volume X-S is SYNCING or SYNCED, then the disk controller 010703 will reject the request and send a response to the host 0101 indicating that the request is being rejected. No attempt to service the write request from the host 0101 will be made.
  • the disk controller 010703 will service the write operation and write the data to Volume X-S.
  • a suitable response indicating the success or failure of the write operation is then sent to the host 0101 . It is noted that the data can be cached in the cache 010707 before being actually written to Volume X-S.
  • the disk controller 010703 services the write request by writing to Volume X-S. If the write operation fails, a suitable response indicating the failure is sent to the host 0101.
  • the disk controller 010703 will send the data to the storage system 0105 via the FC Network 010305 .
  • the disk controller 010503 writes the received data to Volume X-P.
  • the disk controller 010503 will communicate a message to the storage system 0107 indicating the success or failure of the write operation. If the write operation to Volume X-P was successful, then the disk controller 010703 will send a response to the host 0101 indicating that the data was written to both Volume X-S and to Volume X-P. If an error occurred during the write attempt to Volume X-P, then the disk controller 010703 will send a message indicating the successful write to Volume X-S and a failed attempt to Volume X-P.
  • the disk controller 010503 receives a data write request from the host 0101 when the volume pair is in the REVERSE-SYNCING or REVERSE-SYNCED state.
  • the disk controller 010503 would respond with an error message to the host 0101 indicating that the request is being rejected, and thus no attempt to service the write request will be made.
  • FIG. 6 illustrates a variation of the embodiment of the present invention shown in FIG. 1 for load balancing of I/O.
  • the storage system 0605 includes a second volume 060509 (Volume Y-S).
  • the storage system 0607 includes a second volume 060709 (Volume Y-P).
  • Two path failover configurations are provided: Volume X-P and Volume X-S constitute one path failover configuration, where Volume X-P on the storage system 0605 serves as the production volume and Volume X-S serves as the backup.
  • Volume Y-P and Volume Y-S constitute another path failover configuration, where Volume Y-P on the storage system 0607 serves as the production volume and Volume Y-S serves as the backup.
  • the virtual volume module 060105 executing on the host machine 0101 in this variation of the embodiment shown in FIG. 1 can service I/O requests from the applications 010101 by sending corresponding I/O operations to either Volume X-P or Volume Y-P. Since the two production volumes are in separate storage systems, I/O can be load balanced between the two storage systems. Thus, the selection of Volume X-P or Volume Y-P can be made based on load-balancing criteria (e.g., load conditions in each of the volumes) in accordance with conventional load-balancing methods. This configuration thus offers load balancing together with the failover handling of the present invention.
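  • A minimal sketch of the selection step, assuming a per-volume count of outstanding requests as the load-balancing criterion; the metric and values are illustrative, not the patent's.

      production_volumes = {
          "Volume X-P": {"storage": "storage system 0605", "outstanding": 12},
          "Volume Y-P": {"storage": "storage system 0607", "outstanding": 4},
      }

      def pick_production_volume():
          """Send the corresponding I/O operation to the less-loaded production volume."""
          return min(production_volumes, key=lambda v: production_volumes[v]["outstanding"])

      print(pick_production_volume())   # prints: Volume Y-P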
  • FIG. 1 also illustrates a second aspect of the present invention. This aspect of the present invention relates to non-disruptive data migration.
  • a host system includes a virtual volume module in data communication with a first storage system.
  • a second storage system is provided.
  • the virtual volume module can initiate a copy operation in the first storage system so that data stored on the first storage system is migrated to the second storage system.
  • the virtual volume module can periodically monitor the status of the copy operation.
  • the virtual volume module receives I/O requests from applications running on the host and services them by accessing the first storage system.
  • the virtual volume module can direct I/O requests from the applications to the second storage system.
  • FIG. 1 the system configuration shown in FIG. 1 can be used to explain this aspect of the present invention.
  • Storage System A 0105 is a pre-existing (e.g., legacy) storage system.
  • Storage System B 0107 is a replacement storage system.
  • storage system 0107 will replace the legacy storage system 0105 . Consequently, it is desirable to copy (migrate) data from Volume X-P in the storage system 0105 to Volume X-S in storage system 0107 .
  • the virtual volume module discovers Volume X-P and Volume X-S.
  • a configuration file stored in the host 0101 includes the following information:

      TABLE II
      #Configuration File
      Data Migration Set: DMS1
      Primary Volume: Volume X-P in Storage System A
      Secondary Volume: Volume X-S in Storage System B
      Virtual Volume Name: VVolX
      Data Migration Set: DMS2
      Primary Volume: Volume Y-P in Storage System C
      Secondary Volume: Volume Y-S in Storage System B
      Virtual Volume Name: VVolY
  • This table can be used to initialize the virtual volume module 010105 .
  • a command line interface as discussed above can be used to communicate the above information to the virtual volume module.
  • This configuration table identifies data migration volume sets.
  • the primary volume indicates a legacy (old) storage volume.
  • the secondary volume designates a new storage volume.
  • the virtual volume module 010105 presents applications 010101 running on the host 0101 with a virtual storage volume.
  • the primary volume serves the role of the legacy storage system.
  • the secondary volume serves the role of a new storage system.
  • the virtual volume module 010105 communicates a request to the storage system 0105 to create a data replication pair between the primary volume that is specified in the configuration file (here, Volume X-P) and the secondary volume that is specified in the configuration file (here, Volume X-S).
  • the disk controller 010503 in response, will set the volume pair to the RESYNCING state.
  • the disk controller then initiates data copy operations from Volume X-P to Volume X-S. This is typically a background process, thus allowing for servicing of I/O requests from the host 0101 .
  • a bitmap or some similar mechanism is used to keep track of which blocks have been copied.
  • when a write request is received from the host 0101 during the migration, the disk controller in the storage system 0105 will write the data to the targeted data blocks in Volume X-P. After that, the disk controller 010503 will write the received data to the storage system 0107.
  • the disk controller 010703 will write the data to Volume X-S and respond to the disk controller 010503 accordingly.
  • the disk controller 010503 will then respond to the host 0101 accordingly.
  • the disk controller 010503 will change the volume pair status to SYNCED.
  • the virtual volume module 010105 provides a virtual volume to the applications 010101 running on the host 0101 via the SCSI interface 010103 .
  • the applications can issue any SCSI command (including I/O related commands) to the SCSI interface.
  • the virtual volume module 010105 intercepts the SCSI commands and issues suitable corresponding requests to the storage system 0105 to service the command.
  • the virtual volume module 010105 periodically checks the pair status of the Volume X-P/Volume X-S pair.
  • if the pair status is SYNCED, the virtual volume module will communicate a request to the disk controller 010503 to delete the pair.
  • the disk controller 010503 will then take steps to delete the volume pair, and will stop any data copy or data synchronization between Volume X-P and Volume X-S.
  • the disk controller 010503 will then respond to the host 0101 with a response indicating completion of the delete operation. It is noted that I/O requests from the host 0101 during this time are not processed. They are merely queued up. To the applications 010101 , it will appear as if the storage system (the virtual storage system as presented by the virtual volume module 010105 ) is behaving slowly.
  • When the virtual volume module 010105 receives a positive response from the disk controller 010503 indicating the delete operation has succeeded, the entry in the configuration table for the data migration pair consisting of Volume X-P and Volume X-S is eliminated. I/O requests that have queued up will now be serviced by the storage system 0107. Likewise, when the virtual volume module receives subsequent SCSI commands, it will direct them to the storage system 0107 via the FC channel 010303.
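  • The cutover at the end of the migration reduces to: poll the pair status, delete the pair once it is SYNCED, hold I/O while the delete is in flight, then redirect everything to the new storage system. Below is a minimal sketch with illustrative primitives (none of these names come from the patent).

      def migration_cutover(pair, module, old_system, new_system):
          """Performed by the virtual volume module when retiring the legacy system."""
          while old_system.pair_status(pair) != "SYNCED":
              module.wait()                      # background copy still running

          module.hold_new_io()                   # I/O queues up; the virtual volume just looks slow
          old_system.delete_pair(pair)           # stops any further copy or synchronization
          module.remove_migration_entry(pair)    # drop the Volume X-P / Volume X-S entry
          module.redirect(target=new_system)     # queued and future I/O now go to Volume X-S
          module.resume_io()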
  • This aspect of the present invention allows for data migration to take place in a transparent fashion. Moreover, when the migration has completed, the old storage system 0105 can be taken offline without disruption of service to the applications 010101 . This is made possible by the virtual volume module which transparently redirects I/O to the storage system 0107 via the communication link 010303 .
  • FIG. 3 shows an embodiment of a system according to a third aspect of the present invention. This aspect of the present invention reduces the time for failover processing.
  • a first host and a second host are configured for clustering. Each host can access a first storage system and a second storage system.
  • the first storage system serves as a production storage system.
  • the second storage system serves as a backup to the primary storage system.
  • a virtual volume module in each host provides a virtual volume view to applications running on the host. By default, the virtual volume modules access the first storage system (the production storage system) to service I/O requests from the hosts.
  • the virtual volume modules are configured to detect a failure in the first storage system. In response, subsequent access to storage is directed by the virtual volume modules to the second storage system. If the virtual volume module in the second host detects a failure of the first storage system, the virtual volume module will direct I/O requests to the second storage system.
  • FIG. 3 shows one or more FC networks.
  • An FC network 030301 connects a host 0301 to a storage system 0305 (Storage System A); Storage System A is associated with the host 0301 .
  • An FC network 030303 connects the host 0301 to a storage system 0307 (Storage System B).
  • An FC network 030307 connects a host 0309 to the storage system 0305 .
  • An FC network 030309 connects the host 0309 to the storage system 0307 ; Storage System B is associated with the host 0309 .
  • other types of networks can be used; e.g., InfiniBand and Ethernet.
  • FC switches which are not shown in the figure can be used to create Storage Area Networks (SAN) among the host and the storage systems.
  • the hosts 0301 , 0309 are configured in a manner similar to the host 0101 shown in FIG. 1 .
  • each host 0301 , 0309 includes respectively one or more FC HBA's 030107 , 030907 for connection to the respective FC network.
  • FIG. 3 shows that each host 0301 , 0309 includes two FC HBA's.
  • Each host 0301 , 0309 includes respectively a virtual volume module 030105 , 030905 , a SCSI Interface 030103 , 030903 , Cluster Software 030109 , 030909 , and one or more applications 030101 , 030901 .
  • the underlying OS on each host can be any suitable OS, such as Windows 2000/XP/2003, Linux, UNIX, MVS, etc. The OS can be different for each host.
  • User-level applications 030101, 030901 include typical applications such as database systems, but of course can be any software that has occasion to access data on a storage system.
  • Typical system-level applications include system services such as file systems and volume managers.
  • there is data associated with an access request, whether it is data to be read from storage or data to be written to storage.
  • the cluster software 030109 , 030909 cooperate to provide load balancing and failover capability.
  • a communication channel indicated by the dashed line provides a communication channel to facilitate operation of the cluster software in each host 0301 , 0309 .
  • a heartbeat signal can be passed between the software modules 030109 , 030909 to determine when a host has failed.
  • the cluster software components 030109, 030909 are configured for ACTIVE-ACTIVE operation.
  • each host can serve as a standby host for the other host. Both hosts are active and operate concurrently to provide load balancing between them and to serve as standby hosts for each other. Both hosts access the same volume, in this case Volume X-P.
  • the cluster software manages data consistency between the hosts.
  • An example of this kind of cluster software is Real Application Clusters by Oracle Corporation.
  • the SCSI interface 030103 , 030903 in each host 0301 , 0309 is configured as discussed above in FIG. 1 .
  • the virtual volume modules 030105 , 030905 are configured as in FIG. 1 , to provide virtual volumes to the applications running on their respective host machines 0301 , 0309 .
  • the storage systems 0305 and 0307 are similarly configured as described in FIG. 1 .
  • each virtual volume module 030105 , 030905 functions much in the same way as discussed in Embodiment 1.
  • the cluster software 030109 , 030909 both access Volume X-P 030505 in the storage system 0305 as the primary (production) volume; the secondary volume is provided by Volume X-S 030705 in the storage system 0307 and serves as a backup volume.
  • the virtual volume module configures Volume X-P and Volume X-S as a remote copy pair, by sending appropriate commands to the disk controller 030503 .
  • the volumes pair is initialized to be in the PAIR state by the disk controller 030503 . In the pair state, the disk controller 030503 copies data that is written to Volume X-P to Volume X-S.
  • the cluster software 030109 , 030909 is configured for ACTIVE-ACTIVE operation.
  • Each host 0301 , 0309 can access Volume X-P for I/O operations.
  • the cluster software is responsible for maintaining data integrity so that both hosts 0301 , 0309 can access the volume. For example, cluster software 030109 (or 030909 ) first obtains a lock on all or a portion of Volume X-P before it writes data to Volume X-P, so that only one host at a time can write data to the volume.
  • If a failure occurs in the storage system 0305, the virtual volume module in each host 0301, 0309 will detect the failure and perform a failover process as discussed in Embodiment 1. Thus, both virtual volume modules will issue a split command to the primary storage system 0305.
  • the disk controller 030503 will change the volume pair status to SPLIT in response to the first split command it receives, and will ignore the second split command.
  • the virtual volume modules 030105, 030905 will then reconfigure themselves so that subsequent I/O requests from the hosts 0301, 0309 can be serviced by communicating with Volume X-S.
  • the cluster software continues to operate without being aware of the failed storage system, since the failover processing was handled by the virtual volume modules 030105, 030905. If the pair status is SYNCING or REVERSE-SYNCING, the split command fails; as a result, the hosts cannot continue to operate.
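  • The split handling described above can be illustrated with a brief sketch. The following Python fragment is not part of the patent; the class and method names (DiskController, handle_split) are hypothetical and serve only to show that the first split command changes the pair status, a duplicate split is ignored, and a split issued while the pair is SYNCING or REVERSE-SYNCING fails.
    class DiskController:
        # Hypothetical sketch of the split handling described above.
        def __init__(self):
            self.pair_status = "PAIR"   # the pair starts out mirroring writes

        def handle_split(self):
            if self.pair_status in ("SYNCING", "REVERSE-SYNCING"):
                # The volumes are not identical yet, so the split is rejected.
                return "split failed"
            if self.pair_status == "PAIR":
                # Only the first split command changes the state.
                self.pair_status = "SPLIT"
                return "split performed"
            # A duplicate split from the other host's virtual volume module.
            return "split ignored"

    controller = DiskController()
    print(controller.handle_split())   # -> "split performed"
    print(controller.handle_split())   # -> "split ignored"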
  • If one of the hosts fails, the cluster software in the surviving host will perform failover processing to handle the failed host.
  • the virtual volume module in the surviving host will perform path failover as discussed above for Embodiment 1 to provide uninterrupted service to the applications running on the surviving host.
  • the virtual volume module in the surviving host will direct I/O requests to the surviving storage system. It is noted that no synchronization is required between the cluster software and the virtual volume module because the cluster software does not see any storage system or volume failure.
  • FIG. 4 shows an embodiment of a fourth aspect of the present invention, in which redundant data replication capability is provided.
  • a host is connected to first and second storage systems.
  • a virtual volume module executing on the host provides a virtual volume view to applications executing on the host machine.
  • the first storage system is backed up by the second storage system.
  • the virtual volume module can perform a failover to the second storage system if the first storage system fails.
  • Third and fourth storage systems serve as backup systems respectively for the first and second storage systems. Thus, data backup can continue if either the first storage system fails or if the second storage system fails.
  • an FC network 050301 provides a data connection between a host 0501 and a first storage system 0505 (Storage System A).
  • An FC network 050303 provides a data connection between the host 0501 and a second storage system 0507 (Storage System B).
  • An FC network 050305 provides a data connection between the storage system 0505 and the storage system 0507 .
  • An FC network 050309 provides a data connection between the storage system 0507 and a third storage system 0509 (Storage System C).
  • An FC network 050307 provides a data connection between the storage system 0505 and a fourth storage system 0511 (Storage System D).
  • FC switches can be used to create a storage area network (SAN) among the storage systems. It will be understood that other storage architectures can also be used.
  • the host 0501 and the storage systems 0505 , 0507 are located at a first data center in a location A.
  • the storage systems 0509 , 0511 are located in another data center at a location B that is separate from location A.
  • location B is a substantial distance from location A; e.g., different cities.
  • the two data centers can be connected by a WAN, so the FC networks 050307, 050309 pass through the WAN.
  • the host 0501 includes one or more FC HBA's 050107 .
  • the host includes two FC HBA's.
  • the host includes a virtual volume module 050105 , a SCSI interface 050103 , and one or more user applications 050101 .
  • a suitable OS is provided on the host 0501 , such as Windows 2000/XP/2003, Linux, UNIX, and MVS.
  • the virtual volume module 050105 provides a virtual volume view to the applications 050101 as discussed above.
  • the virtual volume module 050105 operates in the manner as discussed in connection with Embodiment 1. Particular aspects of the operation in accordance with this embodiment of the invention include the virtual volume module using Volume X-P 050505 in the storage system 0505 as the primary volume and Volume X-S 1 050705 in the storage system 0507 as the secondary volume.
  • the primary volume serves as the production volume for I/O operations made by the user-level and system-level applications 050101 running on the host 0501 .
  • the virtual volume module 050105 configures the storage systems for various data backup/replication operations, which will now be discussed.
  • the disk controller 050503 in the storage system 0505 is configured for remote copy operations using Volume X-P and Volume X-S 1 as the remote copy pair.
  • Volume X-P serves as the production volume to which the virtual volume module 050105 directs I/O operations to service data I/O requests from the applications 050101 .
  • remote copy takes place via the FC network 050305 , where Volume X-P is the primary volume and Volume X-S 1 is the secondary volume.
  • the remote copy operations are performed synchronously.
  • Redundant replication is provided by the storage system 0505 .
  • Volume X-P and Volume X-S 3 051105 are paired for remote copy operations via the FC network 050307 .
  • Volume X-P is the primary volume and Volume X-S 3 is the secondary volume.
  • the data transfers can be performed synchronously or asynchronously; the choice between synchronous and asynchronous replication is left to the user.
  • Synchronous replication avoids data loss but is limited to shorter replication distances and can slow the I/O performance of a host.
  • Asynchronous replication supports long-distance replication without degrading host I/O performance, but data may be lost if the primary volume fails. There is a tradeoff between the two.
  • synchronous data transfer from device A to device B means that device A writes data to its local volume, sends the data to device B, and waits for a response to the data transfer operation from device B before device A sends a response to the host.
  • In asynchronous data transfer, device A sends a response to the host immediately after device A writes the data to its local volume. The written data is transferred to device B after the response; this transfer is independent of device A's processing of I/O requests from the host.
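  • As a brief illustration of the difference, the following Python sketch (not part of the patent; the function names and the in-memory dictionaries standing in for the volumes are hypothetical) shows when the response is returned to the host relative to the transfer to device B.
    import queue
    import threading

    def synchronous_write(local_volume, remote_volume, block, data):
        # Device A writes locally, transfers the data to device B, and only
        # then acknowledges the host.
        local_volume[block] = data
        remote_volume[block] = data          # wait for device B before responding
        return "ack to host"

    def asynchronous_write(local_volume, transfer_queue, block, data):
        # Device A acknowledges the host immediately; the data is shipped to
        # device B later, independently of host I/O processing.
        local_volume[block] = data
        transfer_queue.put((block, data))
        return "ack to host"

    def background_transfer(transfer_queue, remote_volume):
        # Drains queued writes to device B, independently of host I/O.
        while True:
            block, data = transfer_queue.get()
            remote_volume[block] = data

    vol_a, vol_b, pending = {}, {}, queue.Queue()
    threading.Thread(target=background_transfer, args=(pending, vol_b), daemon=True).start()
    print(synchronous_write(vol_a, vol_b, 0, b"abc"))
    print(asynchronous_write(vol_a, pending, 1, b"def"))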
  • Volume X-S 1 and Volume X-S 2 050905 form a remote copy pair, where Volume X-S 1 is the primary volume and Volume X-S 2 is the secondary volume.
  • the data transfer can be synchronous or asynchronous.
  • the virtual volume module 050105 receives I/O requests via the SCSI interface 050103 , and directs corresponding I/O operations to Volume X-P, via the FC network 050301 , as shown in FIG. 4 by the bolded line.
  • Data replication (by way of remote copy operations) occurs between Volume X-P and Volume X-S 1 , where changes to Volume X-P are copied to Volume X-S 1 synchronously.
  • Data replication occurs between Volume X-P and Volume X-S 3 , where changes to Volume X-P are copied to Volume X-S 3 synchronously or asynchronously; this is a redundant replication since Volume X-S 1 also has a copy of Volume X-P.
  • Data replication occurs between Volume X-S 1 and Volume X-S 2 , where changes to Volume X-S 1 are copied to Volume X-S 2 synchronously or asynchronously.
  • Consider FIG. 4A, where the storage system 0505 has failed.
  • the virtual volume module 050105 will detect this and will perform failover processing to Volume X-S 1 as discussed in Embodiment 1.
  • I/O processing can continue with Volume X-S 1 .
  • data replication continues to be provided by the volume pair of Volume X-S 1 and Volume X-S 2.
  • Consider FIG. 4B, where the storage system 0507 has failed.
  • the virtual volume module 050105 will continue to direct I/O operations to Volume X-P, since Volume X-P remains operational.
  • Data replication will not occur between Volume X-P and Volume X-S 1 due to the failure of the storage system 0507 .
  • data replication will continue between Volume X-P and Volume X-S 3 .
  • The configuration of FIG. 4, therefore, is able to provide redundancy for data backup and/or replication capability.
  • This aspect of the invention provides for disaster recovery using redundant data replication.
  • a first host and a second host are each connected to a pair of storage systems.
  • One host is configured for standby operation and becomes active when the other host fails.
  • a virtual volume module is provided in each host.
  • the virtual volume module services I/O requests from applications running on the host by accessing one of the storage systems connected to the host.
  • Data replication is performed between the pair of storage systems associated with the host, and between the pairs of storage systems.
  • If the active host fails, the standby host takes over and uses the pair of storage systems associated with the standby host. Since data replication was being performed between the two pairs of storage systems, the standby host has access to the latest data; i.e., the data at the time of failure of the active host.
  • An FC network 150301 connects host 1501 to a storage system 1505 (Storage System A).
  • An FC network 150303 connects the host 1501 to a storage system 1507 (Storage System B).
  • An FC network 150305 connects the storage system 1505 to the storage system 1507 .
  • an FC network 150311 connects the host 1513 to a storage system 1509 (Storage System C).
  • An FC network 150313 connects the host 1513 to a storage system 1511 (Storage System D).
  • An FC network 150315 connects the storage system 1509 to the storage system 1511 .
  • An FC network 150307 connects the storage system 1505 to the storage system 1511 .
  • An FC network 150309 connects the storage system 1507 to the storage system 1509 .
  • the host 1501 and its associated storage systems 1505 , 1507 are located in a data center in a location A.
  • the host 1513 and its associated storage systems 1509 , 1511 are located in a data center at a location B.
  • the data centers can be connected in a WAN that includes FC networks 150307 , 150309 .
  • Each host 1501 , 1513 is configured as described in Embodiment 3.
  • each host 1501, 1513 includes respective cluster software 150109, 151309.
  • the cluster software is configured for ACTIVE-SLEEP operation (also known as active/passive mode).
  • one host is active (e.g., host 1501 )
  • the other host (e.g., host 1513) is in a standby mode.
  • when the standby host detects or otherwise determines that the active host has failed, it then becomes the active host.
  • Veritas Cluster Server by VERITAS Software Corporation provides this mode of cluster operation.
  • Volume X-P 150505 serves as the production volume.
  • Volume X-P and Volume X-S 1 150705 are configured as a remote copy pair via a suitable interaction between the virtual volume module 150105 and the disk controller 150503 .
  • Write operations made to Volume X-P are thereby replicated to Volume X-S 1 via the FC network 150305 synchronously.
  • Volume X-S 1 thus serves as the backup for the production volume.
  • the data transfer is a synchronous operation.
  • the host 1513 is in standby mode and thus the virtual volume module 151305 is inactive as well.
  • the virtual volume module 150105 configures the volumes for the following data replication and backup operations: Volume X-P and Volume X-S 3 151105 are also configured as a remote copy pair. Write operations made to Volume X-P are thereby replicated to Volume X-S 3 via the FC network 150307.
  • the data transfer can be synchronous or asynchronous.
  • Volume X-S 1 and Volume X-S 2 150905 are configured as a remote copy pair. Write operations made to Volume X-S 1 are thereby replicated to Volume X-S 2 via the FC network 150309 .
  • the data transfer can be synchronous or asynchronous.
  • Volume X-S 2 and Volume X-S 5 151109 are configured as a remote copy pair. Write operations made to Volume X-S 2 are thereby replicated to Volume X-S 5 via the FC network 150315 . The data transfer is synchronous.
  • Volume X-S 3 and Volume X-S 4 150909 are configured as a remote copy pair. Write operations made to Volume X-S 3 are thereby replicated to Volume X-S 4 via the FC network 150315 . The data transfer is synchronous.
  • If the storage system 1505 fails, the virtual volume module 150105 will detect this and perform a failover process as discussed in Embodiment 1. Subsequent I/O requests by the applications running on the host 1501 will be serviced by the virtual volume module 150105 by accessing Volume X-S 1. Note that data replication continues despite the failure of the storage system 1505 because Volume X-S 1 is backed up by Volume X-S 2.
  • If the host 1501 fails, the cluster software 151309 will detect the condition and activate the host 1513.
  • Applications 151301 will execute to take over the functions provided by the failed host 1501 .
  • the virtual volume module 151305 in the now-active host 1513 will access either Volume X-S 2 in storage system 1509 or Volume X-S 3 in storage system 1511 to service I/O requests from the applications. Since it is possible that the storage system 1505 or the storage system 1507 could have failed before their respective remote copy sites (i.e., storage system 1511 and storage system 1509 ) were fully synchronized, it is necessary to determine which storage system is synchronized.
  • This determination can be made by asking the storage system 1511 and the storage system 1509 for the statuses of the volume pairs, X-P to X-S 3 and X-S 1 to X-S 2. If one of the statuses is SYNCING or SYNCED, then the host splits the pair and uses the secondary volume of the pair as the primary volume of the host. If both statuses are SPLIT, the host checks when the pairs were split and selects the secondary volume of the last split pair as the primary volume for the host. To determine when the pairs were split, as one of the possible implementations, the storage system sends an error message to the host when the pair is split and the host records the error.
  • If Volume X-S 2 has the latest data, the virtual volume module 151305 will service I/O requests from the applications 151301 using Volume X-S 2.
  • Volume X-S 5 will serve as backup by virtue of the volume pair configuration discussed above.
  • If Volume X-S 3 has the latest data, the virtual volume module 151305 will service I/O requests from the applications 151301 using Volume X-S 3.
  • Volume X-S 4 will serve as backup by virtue of the volume pair configuration discussed above.
  • Failover processing by the standby host 1513 includes the cluster software 151309 instructing the disk controller 150903 to perform a SPLIT operation to split the volume pair Volume X-S 1 and Volume X-S 2 .
  • the virtual volume module also instructs the disk controller 151103 to split the Volume X-P and Volume X-S 3 pair.
  • the virtual volume module 151305 knows which volume (Volume X-S 2 or Volume X-S 3) has the latest data. If Volume X-S 2 has the latest data (or both volumes have the latest data, a situation where there was no failure at either of storage system 1505 or storage system 1507), then a script which is installed on the host and is started by the cluster software 151309 configures the virtual volume module 151305 to use Volume X-S 2 as the primary volume and Volume X-S 5 as the secondary volume. If, on the other hand, Volume X-S 3 has the latest data, then the script configures the virtual volume module to use Volume X-S 3 as the primary volume and Volume X-S 4 as the secondary volume.
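  • The selection just described can be summarized in a short sketch. The Python fragment below is hypothetical (the function name, the pair identifiers, and the use of recorded split timestamps are illustrative assumptions); it returns the secondary volume that the standby host should adopt as its new primary volume.
    def choose_recovery_volume(pair_status, split_time, split_pair):
        # pair_status: e.g. {"X-P/X-S3": "SPLIT", "X-S1/X-S2": "SYNCED"}
        # split_time:  recorded time of each split, taken from the error
        #              messages the host logged when a pair was split
        # split_pair:  callable that asks the storage system to split a pair
        secondary_of = {"X-P/X-S3": "X-S3", "X-S1/X-S2": "X-S2"}

        # If a pair is still SYNCING or SYNCED, split it and use its secondary.
        for pair, status in pair_status.items():
            if status in ("SYNCING", "SYNCED"):
                split_pair(pair)
                return secondary_of[pair]

        # Otherwise both pairs are SPLIT: use the secondary of the pair that
        # was split last, since it holds the most recent data.
        last_split = max(pair_status, key=lambda pair: split_time[pair])
        return secondary_of[last_split]

    statuses = {"X-P/X-S3": "SPLIT", "X-S1/X-S2": "SPLIT"}
    times = {"X-P/X-S3": 1000.0, "X-S1/X-S2": 1004.5}
    print(choose_recovery_volume(statuses, times, split_pair=lambda p: None))   # -> "X-S2"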
  • virtualized storage systems also include a virtualization component that can be located externally to the host, between the host machine and the storage system.
  • a storage virtualization product like the Cisco MDS 9000 provides a virtualization component (in the form of software) in the switch.
  • the functions performed by the virtualization component discussed above can be performed in the switch, if the virtualization component is part of the switch.
  • the virtualization component can be located in an intelligent storage system.
  • the intelligent storage system stores data not only in local volumes but also in external volumes.
  • the local volumes are volumes which the intelligent storage system has in itself.
  • the external volumes are volumes provided by external storage systems; the intelligent storage system accesses the external volumes via networking switches.
  • the virtual volume module running on the intelligent storage system performs the functions discussed above. In this case, the primary volumes can be the local volumes and the secondary volumes can be the external volumes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A virtual volume module in a host system provides a virtual volume view to user-level and system-level applications executing on the host system. The virtual volume module maps I/O operations from the applications, which are directed to a virtual volume, to a first physical volume in a first storage system. When necessary, the virtual volume module can map application I/Os to a second volume in a second storage system. The second storage system replicates data in the first storage system, so that when re-mapping occurs it is transparent to the applications running on the host system.

Description

    CROSS-REFERENCES TO RELATED APPLICATIONS
  • The present application is a Continuation Application of U.S. application Ser. No. 10/911,107, filed Aug. 3, 2004, which is herein incorporated by reference for all purposes.
  • BACKGROUND OF THE INVENTION
  • The present invention is related to data storage systems and in particular to failover processing and data migration.
  • A multitude of storage system configurations exist to provide solutions to the various storage requirements of modern businesses.
  • A traditional multipath system, shown in FIG. 7, uses multipath software to increase accessibility to a storage system. A host 0701 provides the hardware and underlying system software to support user applications 070101. Data communication paths 070301, 070303 provide an input-output (I/O) path to a storage facility 0705 (storage system). Multipath software 070105 is provided to increase accessibility to the storage system 0705. The software provides typical features including failover handling for failures on the I/O paths 070301, 070303 between the host 0701 and the storage system 0705.
  • In a multipath configuration, the host 0701 has two or more Fibre Channel (FC) host bus adapters 070107. The storage system 0705, likewise, includes multiple Fibre Channel interfaces 070501, where each interface is associated with a volume. In the example shown in FIG. 7, a single volume 070505 is shown. A disk controller (070503) handles I/O requests received from the host 0701 via the FC interfaces 070501. As noted above, the host has multiple physically independent paths 070301, 070303 to the volume(s) in the storage system 0705. Fibre Channel switches, which are not shown in the figure, can be used for connecting the host and the storage system. It can be appreciated of course that other suitable communication networks can be used; e.g., Ethernet and InfiniBand.
  • In a typical operation, user applications 070101 and system software (e.g., the OS file system, volume manager, etc.) issue I/O requests to the volume(s) 070505 in the storage system 0705 via SCSI (small computer system interface) 070103. The multipath software 070105 intercepts the requests and determines a path 070301, 070303 over which the request will be sent. The request is sent to the disk controller 070503 over the selected path.
  • Path selection depends on various criteria including, for example, whether or not all the paths are available. If multiple paths are available, the least loaded path can be selected. If one or more paths are unavailable, the multipath software selects one of the available paths. A path may be unavailable because of a failure of a physical cable that connects a host's HBA (host bus adapter) and a storage system's FC interface, a failure of an HBA, a failure of an FC interface, and so on. By providing the host 0701 with multiple physically independent paths to volumes in the storage system 0705, multipath software can increase the availability of the storage system from the I/O path perspective.
  • Typical commercial systems include Hitachi Dynamic Link Manager™ by Hitachi Data Systems; VERITAS Volume Manager™ by VERITAS Software Corporation; and EMC PowerPath by EMC Corporation.
  • FIG. 8 shows a storage system configured for data migration. Consider the situation where a user on the host machine 1301 has been accessing and storing data in a storage system A 1305; e.g., Volume X-P 130505. Suppose the user now wants to use the volume designated as Volume X-S 130705 on storage system B 1307. The host machine 1301 therefore needs to subsequently access Volume X-S.
  • To switch the host machine 1301 over to storage system B 1307, the data stored in Volume X-P needs to be migrated to Volume X-S (the assumption is that Volume X-S does not have a copy of the data on Volume X-P). In addition, a communication channel from the host machine 1301 to storage system B 1307 must be provided. For example, physical cabling 130301 that connects the host machine 1301 to storage system A 1305 needs to be reconnected to storage system B 1307. The reconnected cable is shown in dashed lines 130303.
  • Data migration from storage system A 1305 to storage system B 1307 is accomplished by the following steps. It is noted here that some or all of the data in storage system A can be migrated to storage system B. The amount of data that is migrated will depend on the particular situation. First, the user must stop all I/O activity with the storage system A 1305. This might involve stopping the user's applications 130101, or otherwise indicating to (signaling) the applications to suspend I/O operations to storage system A. Depending on the host machine, the host machine itself may have to be shut down. Next, the physical cabling 130301 must be reconfigured to connect the host machine 1301 to storage system B 1307. For example, in a Fibre Channel (FC) installation, a physical cable is disconnected from the FC interface 130501 of storage system A and connected to the FC interface 130701 of storage system B. Next, the host machine 1301 must be reconfigured to use Volume X-S in storage system B instead of Volume X-P in storage system A.
  • On the storage system side, the data in Volume X-P must be migrated to Volume X-S. To do this, the disk controller 130703 of storage system B initiates a copy operation to copy data from Volume X-P to Volume X-S. The data migration is performed over the FC network that connects the two storage systems. Once the data migration is under way, the user applications 130101 can once again resume their I/O activity, now with storage system B, where the migration operation continues as a background process. Depending on the host machine, this may involve restarting (rebooting) the host machine.
  • If the host machine 1301 makes a read access of a data block on Volume X-S that has not yet been updated by the migration operation, the disk controller B 130703 accesses the data of the requested data block from storage system A. Typically, the migration takes place on a block-by-block basis in sequential order. However, a read operation will likely access a block that is out of sequence with respect to the sequence of migration of the data blocks. The disk controller B can use a bitmap (or some other suitable mechanism) to keep track of which blocks have been updated by the migration operation and by the write operations. The bitmap can also be used to prevent a newly written block location from being over-written with data from Storage System A during the data migration process.
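  • A simplified sketch of this bookkeeping is shown below. The Python class is hypothetical and is not the disk controller's actual implementation; it only illustrates how a bitmap can let the migration target serve reads of not-yet-migrated blocks from the old storage system while protecting newly written blocks from being overwritten by the background copy.
    class MigrationTarget:
        # Hypothetical sketch of Volume X-S being populated from Volume X-P.
        def __init__(self, source_volume, num_blocks):
            self.source = source_volume           # stands in for Volume X-P
            self.blocks = [None] * num_blocks     # contents of Volume X-S
            self.updated = [False] * num_blocks   # bitmap: migrated or rewritten

        def write(self, block, data):
            # A host write marks the block so the background copy will not
            # later overwrite it with stale data from the old storage system.
            self.blocks[block] = data
            self.updated[block] = True

        def read(self, block):
            # A block that has not yet been migrated is fetched on demand
            # from the old storage system.
            if not self.updated[block]:
                self.blocks[block] = self.source[block]
                self.updated[block] = True
            return self.blocks[block]

        def migrate_step(self, block):
            # Background, block-by-block copy; blocks already updated are skipped.
            if not self.updated[block]:
                self.blocks[block] = self.source[block]
                self.updated[block] = True

    old_volume = {0: b"a", 1: b"b", 2: b"c"}
    target = MigrationTarget(old_volume, 3)
    target.write(1, b"new")                      # host write during migration
    for blk in range(3):
        target.migrate_step(blk)                 # background copy skips block 1
    print(target.read(1))                        # -> b"new"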
  • Typical commercial systems include Hitachi On-Line Data Migration by Hitachi Data Systems and Peer-to-peer Remote Copy (PPRC) Dynamic Address Switching (DAS) by IBM, Inc.
  • FIG. 9 shows a conventional server clustering system. Clustering is a technique for increasing system availability. Thus, host systems 0901 and 0909 each can be configured respectively with suitable clustering software 090103 and 090903, to provide failover capability among the hosts.
  • In a server cluster configuration, there are two or more physically independent host systems. There are two or more physically independent storage systems. FIG. 9, for example, shows that Host 1 is connected to storage system A 0905 over an FC network 090301. Similarly, Host 2 is connected to storage system B 0907 over an FC network 090309. Storage system A and storage system B are in data communication with each other over yet another FC network 090305. Although it is not shown, it can be appreciated that the network passes through a wide area network (WAN), meaning that Host 2 and storage system B can be located at a remote data center that is far from Host 1 and storage system A.
  • Under normal operations, Host 1 accesses (read, write) Volume X-P in storage system A. The disk controller A 090503 replicates data that is written to Volume X-P by Host 1 to Volume X-S in storage system B. The replication is performed over the FC network 090305. The replication can occur synchronously, in which case the storage system A does not acknowledge a write request from the Host 1 until it is determined that the data associated with the write request has been replicated to storage system B. Alternatively, the replication can occur asynchronously, in which case storage system A acknowledges the write request from Host 1 independently of when the data associated with the write request is replicated to the storage system B.
  • When a failure in either Host 1 or in storage system A occurs, failover processing takes place so that Host 2 can take over the tasks of Host 1. Host 2 can detect a failure in Host 1 by using a heartbeat message, where Host 1 periodically transmits a message (“heartbeat”) to the Host 2. A failure in Host 1 is indicated if Host 2 fails to receive the heartbeat message within a span of time. If the failure occurs in the storage system A, the Host 1 can detect such failure; e.g., by receiving a failure response from the storage system, by timing out waiting for a response, etc. The clustering software 090103 in the Host 1 can signal the Host 2 of the occurrence.
  • When the Host 2 detects the occurrence of a failure, it performs a split pair operation (in the case where remote copy technology is being used) between Volume X-P and Volume X-S. When the split pair operation is complete, the Host 2 can mount the Volume X-S and start the applications 090901 to resume operations in Host 2. The split pair operation causes the data replication between Volume X-P and Volume X-S to complete without interruption; Host 1 cannot update Volume X-P during a split pair operation. This ensures that the Volume X-S is a true copy of the Volume X-P when Host 2 takes over for Host 1. The foregoing is referred to as active-sleep failover. Host 2 is not active (sleep, standby mode) from a user application perspective until a failure is detected in Host 1 or in storage system A.
  • Typical commercial systems include VERITAS Volume Manager™ by VERITAS Software Corporation and Oracle Real Application Clusters (RAC) 10g by Oracle Corp.
  • FIG. 10 shows a conventional remote data replication configuration (remote copy). This configuration is similar to the configuration shown in FIG. 9 except that the host 1101 in FIG. 10 is not clustered. Data written by applications 110101 to the Volume X-P is replicated by the disk controller A 110503 in the storage system A 1105. The data is replicated to Volume X-S in storage system B 1107 over an FC network 110305. Although it is not shown, the storage system B can be a remote system accessed over a WAN.
  • Typical commercial systems include Hitachi TrueCopy™ Remote Replication Software by Hitachi Data Systems and VERITAS Storage Replicator and VERITAS Volume Replicator, both by VERITAS Software Corporation.
  • SUMMARY OF THE INVENTION
  • A data access method and system includes a host system having a virtual volume module. The virtual volume module receives I/O operations originating from I/O requests made by applications executing on the host system. The I/O operations are directed to a virtual volume. The virtual volume module produces "corresponding I/O operations" that are directed to a target physical volume in a target storage system. The target storage system can be selected from among two or more storage systems. Data written to the target storage system is replicated to another of the storage systems. When a failure occurs in a storage system that is designated as the target storage system, the virtual volume module designates another storage system as the target storage system for subsequent corresponding I/O operations.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Aspects, advantages and novel features of the present invention will become apparent from the following description of the invention presented in conjunction with the accompanying drawings, wherein:
  • FIG. 1 is a block diagram showing a configuration of a computer system to which first and second embodiments of the present invention are applied;
  • FIG. 2 illustrates in tabular format configuration information used by the virtual volume module;
  • FIGS. 2A and 2B illustrate particular states of the configuration information;
  • FIG. 2C shows a transition diagram of the typical pairing states of a remote copy pair;
  • FIG. 3 is a block diagram showing a configuration of a computer system to which a third embodiment of the present invention is applied;
  • FIG. 4 is a block diagram showing a configuration of a computer system to which a fourth embodiment of the present invention is applied;
  • FIG. 4A shows failover processing when the production volume fails;
  • FIG. 4B shows failover processing when the backup volume fails;
  • FIG. 5 is a block diagram showing a configuration of a computer system to which a fifth embodiment of the present invention is applied;
  • FIG. 6 is a block diagram showing a configuration of a computer system to which a variation of the first embodiment of the present invention is applied;
  • FIG. 7 shows a conventional multipath configuration in a storage system;
  • FIG. 8 shows a conventional data migration configuration in a storage system;
  • FIG. 9 shows a conventional server clustering configuration in a storage system; and
  • FIG. 10 shows a conventional remote data replication configuration in a storage system.
  • DESCRIPTION OF THE SPECIFIC EMBODIMENTS Embodiment 1
  • FIG. 1 shows an illustrative embodiment of a first aspect of the present invention. This embodiment illustrates path failover between two storage systems, although the present invention can be extended to cover more than two storage systems. The embodiment described is a multiple storage system which employs remote copy technology to provide failover recovery.
  • Generally, a virtual volume module is provided in a host system. The host system is in data communication with a first storage system. Data written to the first storage system is duplicated, or otherwise replicated, to a second storage system. The virtual volume module interacts with the first and second storage systems to provide virtual storage access for applications running or executing on the host system. The virtual volume module can detect a failure that can occur in either or both of the first and second storage systems and direct subsequent data I/O requests to the surviving storage system, if there is a surviving storage system. Following is a description of an illustrative embodiment of this aspect of the present invention.
  • A system according to one such embodiment includes a host 0101 that is in data communication with storage systems 0105, 0107, via suitable communication network links. According to the embodiment shown in FIG. 1, a Fibre Channel (FC) network 010301 connects the host 0101 to a storage system 0105 (Storage System A). An FC network 010303 connects the host 0101 to a storage system 0107 (Storage System B). The storage systems 0105, 0107 are linked by an FC network 010305. It can be appreciated of course that other types of networks can be used instead of Fibre Channel; for example, InfiniBand and Ethernet. It can be further appreciated that Fibre Channel switches 0109 can be used to create a storage area network (SAN) among the storage systems. It will be understood that other storage architectures can also be used. It is further understood that the FC networks shown in FIG. 1 (and in the subsequent embodiments) can be individual networks, or part of the same network, or may comprise two or more different networks.
  • Though not shown, it can be appreciated that the host 0101 comprises standard hardware components typically found in a host computer system, including a data processing unit (e.g., CPU), memory (e.g., RAM, boot ROM, etc.), local hard disk storage, and so on. The host 0101 further comprises one or more FC host bus adapters (FC HBAs) 010107 to connect to the storage systems 0105, 0107. The embodiment in FIG. 1 shows two FC HBAs illustrated in phantom, each having a connection to one of the storage systems 0105, 0107. The host 0101 further includes a virtual volume manager 010105, a small computer system interface (SCSI) 010103, and one or more applications 010101. The applications can be user-level software that runs on top of an operating system (OS), or system-level software that is a component of the OS. The applications access (read, write) the storage systems 0105, 0107 by making input/output (I/O) requests to the storage systems. Typical OSs include Unix, Linux, Windows 2000/XP/2003, MVS, and so on. User-level applications include typical systems such as database systems, but of course can be any software that has occasion to access data on a storage system. Typical system-level applications include system services such as file systems and volume managers. Typically, there is data associated with an access request, whether it is data to be read from storage or data to be written to storage.
  • The SCSI interface 010103 is a typical interface to access volumes provided by the storage systems 0105, 0107. The virtual volume module 010105 presents “virtual volumes” to the host applications 010101. The virtual volume module interacts with the SCSI interface 010103 to map virtual volumes to physical volumes in storage systems 0105, 0107.
  • For system-level applications, the OS is configured with one or more virtual volumes. When the OS accesses the volume it directs one or more suitable SCSI commands to the virtual volume, by way of the virtual volume module 010105. The virtual volume module 010105 produces corresponding commands or operations that are targeted to one of the physical volumes (e.g., Volume X-P, Volume X-S) in the storage systems 0105, 0107. The corresponding command or operation may be a modification of the original SCSI command or operation if a parameter of the command includes a reference to the virtual volume (e.g., open). The modification would be to replace the reference to the virtual volume with a reference to a physical volume (the target physical volume). Subsequent commands need only be directed to the appropriate physical volume, including communicating over the appropriate communication interface (010107, FIG. 1).
  • For user-level applications, the application can make a file system call, which is translated by the OS to a series of SCSI accesses that are targeted to a virtual volume. The virtual volume module in turn makes corresponding SCSI accesses to one of the physical volumes. If the OS provides the capability, the user-level application can make direct calls to the SCSI interface to access a virtual volume. Again, the virtual volume module would modify the calls to access one of the physical volumes. Further detail about this aspect of the present invention will be discussed below.
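  • A minimal sketch of this substitution is given below, assuming a command is represented as a simple dictionary; the function name and the mapping structure are hypothetical and only illustrate replacing a virtual volume reference with the currently selected physical volume.
    def remap_command(command, mapping, current_target):
        # command:        e.g. {"op": "open", "volume": "VVolX"}
        # mapping:        {"VVolX": {"primary": "Volume X-P", "secondary": "Volume X-S"}}
        # current_target: "primary" or "secondary"
        corresponding = dict(command)
        virtual_name = command.get("volume")
        if virtual_name in mapping:
            # Replace the reference to the virtual volume with the physical volume.
            corresponding["volume"] = mapping[virtual_name][current_target]
        return corresponding

    table = {"VVolX": {"primary": "Volume X-P", "secondary": "Volume X-S"}}
    print(remap_command({"op": "open", "volume": "VVolX"}, table, "primary"))
    # -> {'op': 'open', 'volume': 'Volume X-P'}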
  • Each storage system 0105, 0107 includes one or more FC interfaces 010501, 010701, one or more disk controllers 010503, 010703, one or more cache memories 010507, 010707, and one or more volumes 010505, 010705. The FC interface is physically connected to the host 0101 or to the other storage system, and receives I/O and other operations from the connected device. The received operation is forwarded to the disk controller which then interacts with the storage device to process the I/O request. The cache memory is a well-known mechanism for improving read and write access.
  • Storage system 0105 provides a volume designated as Volume X-P 010505 for storing data. Storage system 0107, likewise, provides a volume designated as Volume X-S 010705 for storing data. A volume is a logical unit of storage that is composed of one or more physical disk drive units. The physical disk drive units that constitute the volume can be part of the storage system or can be external storage that is separate from the storage system.
  • Operation of the virtual volume module 010105 will now be discussed in further detail. First, the virtual volume module 010105 performs a discover operation. In the example shown in FIG. 1, assume that Volume X-P and Volume X-S will be discovered. A configuration file stored in the host 0101 will indicate to the virtual volume module 010105 that Volume X-S is the target of a replication operation that is performed on Volume X-P. Table I below is an illustrative example of the relevant contents of a configuration file:
    TABLE I
    #Configuration File
    MultiPathSet Name: Pair 1
    Primary Volume: Volume X-P in Storage System A
    Secondary Volume: Volume X-S in Storage System B
    Virtual Volume Name: VVolX
    MultiPathSet Name: Pair 2
    Primary Volume: Volume Y-P in Storage System C
    Secondary Volume: Volume Y-S in Storage System B
    Virtual Volume Name: VVolY

    It is noted that instead of using a configuration file, a command line interface can be provided, allowing a user (e.g., system administrator) to interactively configure the virtual volume module 010105. For example, if the host 0101 is running on a UNIX OS, interprocess communication (IPC) or some other similar mechanism can be used to signal the virtual volume module with the information contained in the configuration table (TABLE I).
  • There is an entry for each pair of volumes that are configured as primary and secondary volumes (collectively referred to as a “remote copy pair”) for data replication (remote copy) operations. For example, Volume X-P is referred to as a primary volume, meaning that it is the volume with which the host will perform I/O operations. The storage system 0105 containing Volume X-P can be referred to as the primary system. Volume X-S is referred to as the secondary volume; the storage system 0107 containing Volume X-S can be referred to as the secondary system. In accordance with conventional replication operations, data written to the primary volume is replicated to the secondary volume. In accordance with the present invention, the secondary volume also serves as a failover volume in case the primary volume goes off line for some reason, whether scheduled (e.g., for maintenance activity), or unexpectedly (e.g., failure). The example configuration file shown in Table I identifies two replication pairs; or, viewed from a failover point of view, two failover paths.
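  • The configuration file of Table I can be read into per-pair records with a few lines of code. The following Python sketch is hypothetical (the patent does not prescribe a parser); it simply groups the Primary Volume, Secondary Volume, and Virtual Volume Name lines under each MultiPathSet Name.
    def parse_configuration(text):
        # Parse the Table I style configuration into a list of pair records.
        pairs, current = [], None
        for raw_line in text.splitlines():
            line = raw_line.strip()
            if not line or line.startswith("#"):
                continue                          # skip blanks and comments
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key == "MultiPathSet Name":
                current = {"MultiPathSet Name": value}
                pairs.append(current)
            elif current is not None:
                current[key] = value              # Primary Volume, Secondary Volume, ...
        return pairs

    sample = """#Configuration File
    MultiPathSet Name: Pair 1
    Primary Volume: Volume X-P in Storage System A
    Secondary Volume: Volume X-S in Storage System B
    Virtual Volume Name: VVolX"""
    for pair in parse_configuration(sample):
        print(pair["MultiPathSet Name"], "->", pair["Virtual Volume Name"])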
  • When a new pair of volumes is created in the configuration file, a virtual volume module issues a pair creation request to a primary storage system. The disk controller of the primary storage system then creates the requested pair, sets the pair status to SYNCING, and sends a completion response to the virtual volume module. After the pair is created, the disk controller of the primary storage system starts to copy data in the primary volume to the secondary volume in the secondary storage system. This is called the initial copy. The initial copy is asynchronous and independent of I/O request processing by the disk controller. The disk controller knows which blocks in the primary volume have been copied to the secondary volume by using a bitmap table. When the primary volume and the secondary volume become identical, the disk controller changes the pair status to SYNCED. In both the SYNCING and SYNCED states, when the disk controller receives a write request to the primary volume, the disk controller sends the write request to the disk controller of the secondary volume and waits for the response before the disk controller of the primary storage system returns a response to the host. This is called synchronous remote data replication.
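  • The pair creation and initial copy sequence can be sketched as follows. The Python class below is hypothetical and greatly simplified (the volumes are plain lists and the bitmap is a list of booleans); it shows the SYNCING-to-SYNCED transition and the synchronous forwarding of host writes to the secondary volume.
    class RemoteCopyPair:
        # Hypothetical sketch of pair creation, initial copy, and synchronous
        # remote data replication as described above.
        def __init__(self, primary, secondary):
            self.primary, self.secondary = primary, secondary
            self.copied = [False] * len(primary)      # bitmap for the initial copy
            self.status = "SYNCING"                   # set when the pair is created

        def initial_copy_step(self, block):
            # Runs asynchronously from host I/O, one block at a time.
            if not self.copied[block]:
                self.secondary[block] = self.primary[block]
                self.copied[block] = True
            if all(self.copied):
                self.status = "SYNCED"                # the volumes are identical

        def write(self, block, data):
            # In the SYNCING and SYNCED states a write is sent to the secondary
            # and acknowledged only after the secondary has responded.
            self.primary[block] = data
            self.secondary[block] = data
            self.copied[block] = True
            return "ack to host"

    pair = RemoteCopyPair([b"a", b"b"], [None, None])
    pair.write(0, b"new")                             # replicated synchronously
    for blk in range(2):
        pair.initial_copy_step(blk)
    print(pair.status)                                # -> "SYNCED"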
  • Though the virtual volume module 010105 and the storage systems 0105, 0107 use remote copy technology, it can be appreciated that embodiments of the present invention can be implemented with any suitable data replication or data backup technology or method. The virtual volume module can be readily configured to operate according to the data replication or data backup technology that is provided by the storage systems. Generally, the primary volume serves as the production volume for data I/O operations made by user-level and system-level applications running on the host. The secondary volume serves as a backup volume for the production volume. As will be explained, in various aspects of the present invention, the backup volume can become the production volume if failure of the production volume is detected. It will be understood therefore that the terms primary volume and secondary volume do not refer to the particular underlying data replication or data backup technology but rather to the function being served, namely, production volume and backup volume. It will be further understood that some of the operations performed by the virtual volume module are dictated by the remote copy methodology of the storage systems.
  • Continuing with FIG. 1, the virtual volume module 010105 provides virtual volume access to the applications 010101 executing on the host 0101 via the SCSI interface 010103. The OS, and in some cases user-level applications, “see” a virtual volume that is presented by the virtual volume module 010105. For example, Table I shows a virtual volume that is identified as VVolX. The applications (e.g., via the OS) send conventional SCSI commands (including but not limited to read and write operations) via the SCSI interface to access the virtual volume. The virtual volume module intercepts the SCSI commands and translates the commands to corresponding I/O operations that are suitable for accessing Volume X-P in the storage system 0105 or for accessing Volume X-S in the storage system 0107.
  • In this first embodiment of the present invention the selection between Volume X-P and Volume X-S as the target volume is made in accordance with the following situations, a simplified sketch of which is given after the list (assuming an initial pairing wherein Volume X-P is the primary volume and Volume X-S is the secondary volume):
      • a) If Volume X-P is a primary volume and is available, Virtual Volume Module services the I/O requests using Volume X-P via FC network (010301).
      • b) If Volume X-P is a primary volume but is not available and Volume X-S is available and the pair status is SYNCED, Virtual Volume Module services the I/O requests using Volume X-S via FC network (010303). Further detail about how the Virtual Volume Module achieves this is discussed below.
      • c) If Volume X-S is a primary volume and is available, Virtual Volume Module services the I/O requests using Volume X-S via FC network (010303). This situation can arise if the status of the remote copy pair is REVERSE-SYNCING or REVERSE-SYNCED, where data in the secondary volume (e.g., Volume X-S) has been copied to the primary volume (e.g., Volume X-P). The roles of primary volume and secondary volume are reversed in these states. Further detail about how the Virtual Volume Module achieves this is discussed below.
      • d) If Volume X-S is a primary volume and is not available and Volume X-P is available and the pair status is REVERSE-SYNCED, Virtual Volume Module services the I/O requests using Volume X-P via FC network (010301). Further detail about how the Virtual Volume Module achieves this is discussed below.
      • e) If both volumes are unavailable, Virtual Volume Module tells the requesting applications that it cannot complete the I/O request because of failures in both the primary and the secondary volumes. Further detail about how the Virtual Volume Module achieves this is discussed below.
      • f) If Volume X-P is a primary volume and is not available and the pair status is SYNCING, or if Volume X-S is a primary volume and is not available and the pair status is REVERSE-SYNCING, then Virtual Volume Module tells the requesting applications that it cannot complete the I/Os because of failures. The SYNCING status or the REVERSE-SYNCING indicates that Volume X-P and Volume X-S are not identical. Because the secondary volume is not updated, the Virtual Volume Module cannot process I/O requests from the secondary volume. For example, an application wants to read data which has been written to the primary volume and the data has not yet been copied to the secondary volume, and the primary volume is not available. The Virtual Volume Module cannot find the requested data in the secondary volume.
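  • The rules a) through f) above can be condensed into a short selection routine. The Python sketch below is hypothetical (the dictionaries describing roles, availability, and pair status are assumptions made for illustration); it returns the volume to which corresponding I/O operations should be directed, or raises an error when neither volume can be used.
    def select_target_volume(role_of, available, pair_status):
        # role_of:     {"Volume X-P": "primary", "Volume X-S": "secondary"} (or reversed)
        # available:   {"Volume X-P": True, "Volume X-S": False}
        # pair_status: SYNCING, SYNCED, SPLIT, REVERSE-SYNCING, or REVERSE-SYNCED
        primary = next(v for v, role in role_of.items() if role == "primary")
        secondary = next(v for v, role in role_of.items() if role == "secondary")

        if available[primary]:
            return primary                              # rules a) and c)
        if available[secondary] and pair_status in ("SYNCED", "REVERSE-SYNCED"):
            return secondary                            # rules b) and d)
        # Rules e) and f): both volumes are unavailable, or the pair is still
        # syncing so the secondary does not hold identical data.
        raise IOError("I/O request cannot be completed")

    roles = {"Volume X-P": "primary", "Volume X-S": "secondary"}
    print(select_target_volume(roles, {"Volume X-P": False, "Volume X-S": True}, "SYNCED"))
    # -> "Volume X-S"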
  • The virtual volume module 010105 can learn of the availability status of the volumes (Volume X-P, Volume X-S) by issuing suitable I/O operations (or some other SCSI command) to the volumes. The availability of the volumes can be determined based on the response. For example, if a response to a request issued to a volume is not received within a predetermined period, then it can be concluded that the volume is not available. Of course, depending on the storage systems used in a particular implementation, explicit commands may be provided to obtain this information.
  • FIG. 2 shows in tabular form information that is managed and used by the virtual volume manager 010105. The information indicates the availability and pairing state of the volumes. A Pair Name field contains the name of the remote copy pair as shown in the configuration table (Table I); e.g., “Pair 1” and “Pair 2”. A Volumes field contains the names of the volumes which constitute the identified pairs, also shown in the configuration table (Table I). A Storage field contains the names of the storage systems in which the volumes reside; e.g., Storage System A (0105), Storage System B (0107). A Roles field indicates which volume is acting as the primary volume and which volume is the corresponding secondary volume, for each remote copy pair. An HBA field identifies the HBA from which a volume can be accessed. An Availability field indicates if a volume is available or not. A Pair field indicates the pair status of the pair; e.g., SYNCING, SYNCED, SPLIT, REVERSE-SYNCING, REVERSE-SYNCED, and DECOUPLED.
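  • One possible in-memory representation of this table is sketched below. The dataclass and the sample HBA names are hypothetical; the fields simply mirror the columns described above for FIG. 2.
    from dataclasses import dataclass

    @dataclass
    class PairEntry:
        # One row of the FIG. 2 style table kept by the virtual volume module.
        pair_name: str      # e.g. "Pair 1"
        volume: str         # e.g. "Volume X-P"
        storage: str        # e.g. "Storage System A"
        role: str           # "Primary" or "Secondary"
        hba: str            # HBA through which the volume is reached (hypothetical names)
        available: bool     # availability of the volume
        pair_status: str    # SYNCING, SYNCED, SPLIT, REVERSE-SYNCING, REVERSE-SYNCED, ...

    table = [
        PairEntry("Pair 1", "Volume X-P", "Storage System A", "Primary", "HBA1", True, "SYNCED"),
        PairEntry("Pair 1", "Volume X-S", "Storage System B", "Secondary", "HBA2", True, "SYNCED"),
    ]
    print(table[0])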
  • Referring to FIG. 2C for a moment, a brief discussion of the different pairing states of a remote copy pair will be made. Consider two storage volumes. Initially, they have no relation to each other in terms of remote copy and so they exist in a NON-PAIR state. When one of the volumes communicates to the other volume a command to create a remote copy pair (typically performed by the disk controller), the volumes exist in a SYNCING pair state. This signifies that the two volumes are in the process of becoming a remote copy pair. This involves copying (mirroring) the data from one volume (the primary volume) to the other volume (the secondary volume). When the copy or mirroring operation is complete, the two volumes have identical data and are now in a SYNCED state. Typically in the SYNCED state, write requests can only be serviced by the primary volume; data that is written to the primary volume is mirrored to the secondary volume (remote copy operation). In the SYNCED state, read requests may be serviced by the secondary volume.
  • At some point, the paired volumes may be SPLIT, which means they are still considered as paired volumes. In the SPLIT state, remote copy operations are not performed when the primary volume receives and services write requests. In addition, write requests can be serviced by the secondary volume. At some point, the remote copy operations may be re-started. If, during the SPLIT state, the secondary volume did not service any write requests, then it is only necessary to ensure that write requests performed by the primary volume are mirrored to the secondary volume; the volumes thus transition through the SYNCING state to the SYNCED state.
  • During the SPLIT state, the secondary volume is permitted to service write requests in addition to the primary volume. Each volume can receive write requests from the same host, or from different host machines. As a result, the data state of each volume will diverge from the SYNCED state. When a subsequent re-sync operation is performed to synchronize the two volumes, there are two ways to incorporate data that had been written to the primary volume and data that had been written to the secondary volume. In the first case, any data that had been written to the secondary volume is discarded. Thus, data that was written to the primary volume during the SPLIT state is copied to the secondary volume. In addition, any blocks that were updated in the secondary volume during the SPLIT state must be replaced with data from the corresponding blocks in the primary volume. In this way, the data state of the secondary volume is once again synchronized to the data state of the primary volume. Thus, the pair status goes from the SPLIT state, to the SYNCING state, to the SYNCED state.
  • In the second case, any data that had been written to the primary volume is discarded. Thus, data that was written to the secondary volume during the SPLIT state is copied to the primary volume. In addition, any blocks that were updated in the primary volume during the SPLIT state must be replaced with data from the corresponding blocks in the secondary volume. In this way, the data state of the primary volume is synchronized to the data state of the secondary volume. In this situation, the pair status goes from the SPLIT state, to the REVERSE-SYNCING state, to the REVERSE-SYNCED state because of the role reversal between the primary volume and the secondary volume.
  • The foregoing is a general explanation of remote copy operations performed by storage systems. In the present invention, however, there is no case in which both the primary volume and the secondary volume service write requests during the SPLIT state. Only one of the volumes will receive write requests from a host machine, so there is no need to discard any write data.
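  • The pairing states described above and in FIG. 2C can be captured as a small transition table. The Python sketch below is hypothetical; the event names (create_pair, copy_complete, split, resync, reverse_sync) are illustrative labels for the operations discussed in the text.
    TRANSITIONS = {
        ("NON-PAIR", "create_pair"):          "SYNCING",
        ("SYNCING", "copy_complete"):         "SYNCED",
        ("SYNCED", "split"):                  "SPLIT",
        ("SPLIT", "resync"):                  "SYNCING",
        ("SPLIT", "reverse_sync"):            "REVERSE-SYNCING",
        ("REVERSE-SYNCING", "copy_complete"): "REVERSE-SYNCED",
        ("REVERSE-SYNCED", "split"):          "SPLIT",
    }

    def next_state(state, event):
        # Return the next pair state, or raise if the transition is not defined.
        try:
            return TRANSITIONS[(state, event)]
        except KeyError:
            raise ValueError("transition %r not allowed from %r" % (event, state))

    state = "NON-PAIR"
    for event in ("create_pair", "copy_complete", "split", "reverse_sync", "copy_complete"):
        state = next_state(state, event)
    print(state)   # -> "REVERSE-SYNCED"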
  • Returning to FIG. 2, if the virtual volume module 010105 determines that Volume X-P is not available, and it is the primary volume (as determined from the configuration table, Table I) and the pair status is SYNCED, then the virtual volume module will instruct the storage system 0107 to split the remote copy pair (i.e., the Volume X-P and Volume X-S pair). The storage system 0107 changes the pair status from SYNCED (which means that any updates on Volume X-P are reflected to Volume X-S so these two volumes remain identical, and it is not possible for a host to write data onto Volume X-S) to SPLIT (which means that Volume X-P and Volume X-S are still associated as a remote copy pair but updates made to Volume X-P are not reflected to Volume X-S and updates made to Volume X-S are not reflected to Volume X-P). The virtual volume module subsequently uses the Volume X-S in the storage system 0107 to service I/O requests made by the host, instead of Volume X-P. Referring to FIG. 2A, the Availability and Pair fields for Volume X-P and for Volume X-S would be updated as shown. Thus, in this situation, the Availability field for Volume X-P would be “No”. The Availability field for Volume X-S would be “Yes”. The Pair field for Volume X-P and X-S would be SPLIT.
  • The table in FIG. 2 is managed by the virtual volume module. Information required to manage pairs of volumes is managed by both storage systems, because they need to know how to replicate volumes across storage systems; they keep the same information. The virtual volume module needs to ask only one of the storage systems to change the pair status, and such changes are then reflected to the other storage system by communications between the storage systems. When Volume X-P is not available, the virtual volume module is not sure where the problem is, so it asks Storage System B to change the status; it is then the storage system's responsibility to reflect the change to the other storage system. If storage system A is alive, then the change is reflected to storage system A; otherwise it is not.
  • If the virtual volume module 010105 determines that Volume X-P is not available, that it is the primary volume (as determined from the configuration table, Table I), and that the pair status is SYNCING, then the virtual volume module cannot process I/O requests from applications, so it sends an error to the applications.
  • If the virtual volume module 010105 determines that Volume X-S is not available and the role of Volume X-S is the primary volume and the pair status is REVERSE-SYNCED, then the virtual volume module will communicate a command to the storage system 0105 to split the remote copy pair of Volume X-P and Volume X-S. The storage system 0105 changes the pair status from REVERSE-SYNCED (which means any updates on Volume X-S are reflected to Volume X-P and the two volumes remain identical, and it is not possible for a host to write data onto Volume X-P) to SPLIT. The virtual volume module subsequently forwards I/O operations (and other SCSI commands) to service I/O requests from the applications 010101 to the Volume X-P in the storage system 0105, instead of Volume X-S. Referring to FIG. 2B, the Availability and Pair fields for Volume X-P and for Volume X-S would be updated as shown. Thus, in this situation, the Availability field for Volume X-P would be "Yes". The Availability field for Volume X-S would be "No". The Pair field for Volume X-P and X-S would be SPLIT.
  • If the virtual volume module 010105 determines that Volume X-S is not available, and it is the primary volume (as determined from the configuration table, Table I) and the pair status is REVERSE-SYNCING, then the virtual volume module cannot process I/O requests from the applications and returns an error to the applications. A sketch of this failover decision logic follows.
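  • The failure-handling rules above amount to a simple decision procedure in the virtual volume module. The following Python sketch is illustrative only; the names used (StorageSystem, VolumeEntry, handle_unavailable_volume) are hypothetical, and the real module issues SCSI-level and remote copy commands rather than Python calls:
    from dataclasses import dataclass

    class StorageSystem:
        """Stand-in for the surviving storage system's remote copy interface."""
        def split_pair(self, primary, secondary):
            print(f"split remote copy pair ({primary}, {secondary})")

    @dataclass
    class VolumeEntry:
        name: str            # e.g. "Volume X-P"
        role: str            # "Primary" or "Secondary"
        available: bool
        pair_status: str     # SYNCED, SYNCING, SPLIT, REVERSE-SYNCING, REVERSE-SYNCED

    def handle_unavailable_volume(failed, survivor, surviving_system):
        """Decision made when the volume currently acting as primary is unavailable."""
        if failed.pair_status in ("SYNCED", "REVERSE-SYNCED"):
            # Split the pair via the surviving storage system, then service host
            # I/O with the surviving volume (the FIG. 2A / FIG. 2B table updates).
            surviving_system.split_pair(failed.name, survivor.name)
            failed.available = False
            failed.pair_status = survivor.pair_status = "SPLIT"
            survivor.available = True
            return survivor          # subsequent I/O is directed here
        if failed.pair_status in ("SYNCING", "REVERSE-SYNCING"):
            # The copy is still in progress, so failover is not possible and an
            # error is returned to the applications.
            raise IOError(f"{failed.name} unavailable while pair is {failed.pair_status}")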
  • As discussed above, when Volume X-P becomes unavailable, the virtual volume module 010105 begins to use Volume X-S. When Volume X-P becomes available again, the virtual volume module sends a reverse-sync request to the storage system 0107. This is done to re-establish Volume X-P as the primary volume. The reverse-sync request initiates an operation to copy data that was written to Volume X-S, during the time that Volume X-P was unavailable (i.e., subsequent to the SPLIT), back to Volume X-P. Recall that Volume X-P is initially the primary volume and Volume X-S is the secondary volume.
  • In response to receiving the reverse-sync request, the disk controller 010703 changes the pair status of the pair to REVERSE-SYNCING and responds with a suitable response to the host 0101. The disk controller 010703 begins copying data that was written to Volume X-S during the SPLIT state to Volume X-P. Typically, a write request logging mechanism is used to determine which blocks on the volume had changed and in which order. Typically, the copy of changed blocks from Volume X-S to Volume X-P is performed asynchronously from the processing of new I/O requests from hosts, meaning that during this copy, the disk controller accepts I/O requests from the hosts to Volume X-S. When the copy is complete, the volumes become identical, and the pair status is changed to REVERSE-SYNCED.
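  • As a minimal sketch of this reverse-sync copy (the helper names are hypothetical, and the real copy is performed by the disk controller rather than by host code), the changed blocks recorded in the write log are replayed to Volume X-P in their original write order while new host writes to Volume X-S continue to be accepted:
    def reverse_sync(changed_block_log, copy_block_to_xp):
        """Replay blocks written to Volume X-S while the pair was SPLIT.
        `changed_block_log` is ordered by the time of the original writes."""
        for block in changed_block_log:
            copy_block_to_xp(block)          # runs asynchronously w.r.t. new host I/O to X-S
        # Once the log is drained the two volumes are identical and the pair
        # status is changed to REVERSE-SYNCED.
        return "REVERSE-SYNCED"

    # Example usage with trivial stand-ins:
    log = [("block", 17), ("block", 3), ("block", 17)]
    print(reverse_sync(log, lambda block: None))   # -> REVERSE-SYNCED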
  • After the pair status has changed to REVERSE-SYNCING, the virtual volume module 010105 updates the table shown in FIG. 2. The virtual volume module changes the role of Volume X-S to primary volume (the Role field for Volume X-S is set to “Primary”) and the role of Volume X-P to secondary volume (the Role field for Volume X-P is set to “Secondary”). The Availability field for Volume X-P is changed to “Yes”.
  • If a user subsequently wants to use Volume X-P as the primary volume, the virtual volume module 010105 communicates with the storage system 0107 to determine whether the pair status is REVERSE-SYNCED or not. If not, then the virtual volume module 010105 waits for that state to be reached. The REVERSE-SYNCED state means the data in Volume X-S is identical to the data in Volume X-P. The switch-back then proceeds through the following steps (a sketch of the sequence follows the steps):
  • The virtual volume module stops processing any further I/O requests from the applications. Incoming I/O requests are queued in a wait queue managed by the virtual volume module.
  • The virtual volume module 010105 splits the pair. Disk controller 010503 and disk controller 010703 change the status to SPLIT.
  • The disk controller 010503 informs the host 0101 that the SPLIT has occurred. In response, the virtual volume module 010105 then changes the role of Volume X-P to primary volume in the table shown in FIG. 2, and the role of Volume X-S is changed to secondary volume.
  • The virtual volume module starts processing the I/O requests in the wait queue as well as new I/O requests from the applications. At this time, I/O requests are issued by the host to Volume X-P.
  • The virtual volume module resyncs the pair comprising Volume X-P and Volume X-S. Disk controllers 010503 and 010703 change the pair status to SYNCING. Data which has been written to Volume X-P is copied to Volume X-S. During this copy, a disk controller accepts I/O requests from a host to Volume X-P. Data that is subsequently written to Volume X-P will then be copied to Volume X-S synchronously. When the copy has completed, the volume pairs contain identical data. The disk controller 010503 changes the pair status from SYNCING to SYNCED and then informs the host 0101 that the pair has been re-synced.
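  • A compact sketch of this switch-back sequence, with the pair status and the FIG. 2 table modeled as plain Python dictionaries (all names here are hypothetical and purely illustrative):
    def restore_original_primary(pair, table, wait_queue, new_requests):
        """Steps to make Volume X-P the primary volume again, per the list above."""
        # 1. Proceed only once the reverse-sync has completed.
        assert pair["status"] == "REVERSE-SYNCED"
        # 2. Stop servicing application I/O; requests accumulate in the wait queue.
        wait_queue.extend(new_requests)
        # 3. Split the pair (both disk controllers record SPLIT).
        pair["status"] = "SPLIT"
        # 4. Swap the roles in the virtual volume module's table (FIG. 2).
        table["Volume X-P"]["role"] = "Primary"
        table["Volume X-S"]["role"] = "Secondary"
        # 5. Resume I/O: queued and new requests are now issued to Volume X-P.
        serviced = list(wait_queue)
        wait_queue.clear()
        # 6. Resync the pair; writes to Volume X-P are copied to Volume X-S
        #    until the pair reaches SYNCED again.
        pair["status"] = "SYNCING"
        return serviced

    pair = {"status": "REVERSE-SYNCED"}
    table = {"Volume X-P": {"role": "Secondary"}, "Volume X-S": {"role": "Primary"}}
    restore_original_primary(pair, table, [], ["write A", "read B"])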
  • Operation of the disk controllers 010503 and 010703 will now be discussed. Suppose the disk controller 010503 receives a data write request from the host 0101. If the pair status of the pair consisting of Volume X-P and Volume X-S is in the SYNCING or SYNCED state, then the disk controller 010503 writes the data to Volume X-P. If there is a failure during the attempt to perform the write operation to service the write request, a suitable error message is returned to the host 0101. Assuming the write operation to Volume X-P is successful, then the disk controller 010503 will send the data to the storage system 0107 via the FC network 010305. It is noted that the data can be cached in the cache 010507 before being actually written to Volume X-P.
  • Upon receiving the data from the disk controller 010503, the disk controller 010703 in the storage system 0107 will write the data to Volume X-S. The disk controller 010703 sends a suitable response back to the disk controller 010503 indicating a successful write operation. Upon receiving a positive indication from the disk controller 010703, the disk controller 010503 in the storage system 0105 will communicate a response to the host 0101 indicating that the data was written to Volume X-P and to Volume X-S. It is noted that the data can be cached in the cache 010707 before being actually written to Volume X-S.
  • If, on the other hand, the disk controller 010703 encounters an error in writing to Volume X-S, then it will send a suitable negative response to the storage system 0105. The disk controller 010503, in response, will send a suitable response to the host 0101 indicating that the data was written to Volume X-P, but not to Volume X-S.
  • Suppose the disk controller 010503 receives a write request and the pair status of Volume X-P and Volume X-S is SPLIT. The disk controller 010503 will perform a write operation to Volume X-P. If there is a failure during this attempt, then the disk controller will respond to the host 0101 with a response indicating the data could not be written to Volume X-P. If the write operation to Volume X-P succeeded, then a suitable positive response is sent back to the host 0101. It is noted that the data can be cached in the cache 010507 before being actually written to Volume X-P. Since the pair status is SPLIT, there is no step of sending the data to the storage system 0107. Instead, the disk controller 010503 logs write requests in its memory or in a temporary disk space. By using the log, when the pair status is changed to SYNCING, the disk controller 010503 can send the write requests kept in the log to the disk controller 010703 in the order in which the disk controller 010503 received them from the host.
  • Suppose that the disk controller 010703 receives a data write request from the host 0101. If the status of the volume pair of Volume X-P and Volume X-S is SYNCING or SYNCED, then the disk controller 010703 will reject the request and send a response to the host 0101 indicating that the request is being rejected. No attempt to service the write request from the host 0101 will be made.
  • If the status of the volume pair is SPLIT, then the disk controller 010703 will service the write operation and write the data to Volume X-S. A suitable response indicating the success or failure of the write operation is then sent to the host 0101. It is noted that the data can be cached in the cache 010707 before being actually written to Volume X-S.
  • If the status of the volume pair is REVERSE-SYNCING or REVERSE-SYNCED, then the disk controller 010703 services the write request by writing to Volume X-S. If the write operation fails, then a suitable response indicating the failure is sent to the host 0101.
  • If the write operation to Volume X-S was successful, then the disk controller 010703 will send the data to the storage system 0105 via the FC Network 010305. The disk controller 010503 writes the received data to Volume X-P. The disk controller 010503 will communicate a message to the storage system 0107 indicating the success or failure of the write operation. If the write operation to Volume X-P was successful, then the disk controller 010703 will send a response to the host 0101 indicating that the data was written to both Volume X-S and to Volume X-P. If an error occurred during the write attempt to Volume X-P, then the disk controller 010703 will send a message indicating the successful write to Volume X-S and a failed attempt to Volume X-P.
  • Suppose the disk controller 010503 receives a data write request from the host 0101 when the volume pair is in the REVERSE-SYNCING or REVERSE-SYNCED state. The disk controller 010503 would respond with an error message to the host 0101 indicating that the request is being rejected, and thus no attempt to service the write request will be made. A sketch summarizing the write handling of the two disk controllers follows.
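  • The write-handling behavior described above can be summarized as a small state machine keyed on the pair status. The sketch below is illustrative only; the callables passed in (write_to_xp, replicate_to_xs, and so on) are hypothetical stand-ins for the controllers' internal operations:
    def primary_controller_write(pair_status, write_to_xp, replicate_to_xs, log_write):
        """Disk controller 010503 (owning Volume X-P) handling a host write."""
        if pair_status in ("SYNCING", "SYNCED"):
            if not write_to_xp():
                return "error: write to Volume X-P failed"
            if replicate_to_xs():                # sent to storage system 0107 over FC
                return "ok: written to Volume X-P and Volume X-S"
            return "ok: written to Volume X-P, but not to Volume X-S"
        if pair_status == "SPLIT":
            if not write_to_xp():
                return "error: write to Volume X-P failed"
            log_write()                          # replayed in order when SYNCING resumes
            return "ok: written to Volume X-P (no replication while SPLIT)"
        # REVERSE-SYNCING or REVERSE-SYNCED: Volume X-P is not host-writable.
        return "rejected"

    def secondary_controller_write(pair_status, write_to_xs, replicate_to_xp):
        """Disk controller 010703 (owning Volume X-S) handling a host write."""
        if pair_status in ("SYNCING", "SYNCED"):
            return "rejected"                    # Volume X-S is not host-writable
        if pair_status == "SPLIT":
            return "ok" if write_to_xs() else "error: write to Volume X-S failed"
        # REVERSE-SYNCING or REVERSE-SYNCED
        if not write_to_xs():
            return "error: write to Volume X-S failed"
        if replicate_to_xp():                    # sent to storage system 0105 over FC
            return "ok: written to Volume X-S and Volume X-P"
        return "ok: written to Volume X-S, but not to Volume X-P"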
  • Refer for a moment to FIG. 6. This figure illustrates a variation of the embodiment of the present invention shown in FIG. 1 for load balancing of I/O. Here, the storage system 0605 includes a second volume 060509 (Volume Y-S). The storage system 0607 includes a second volume 060709 (Volume Y-P). Two path failover configurations are provided: Volume X-P and Volume X-S constitute one path failover configuration, where Volume X-P on the storage system 0605 serves as the production volume and Volume X-S serves as the backup. Volume Y-P and Volume Y-S constitute another path failover configuration, where Volume Y-P on the storage system 0607 serves as the production volume and Volume Y-S serves as the backup. The virtual volume module 060105 executing on the host machine 0101 in this variation of the embodiment shown in FIG. 1 can service I/O requests from the applications 010101 by sending corresponding I/O operations to either Volume X-P or Volume Y-P. Since the two production volumes are in separate storage systems, I/O can be load balanced between the two storage systems. Thus, the selection of Volume X-P or Volume Y-P can be made based on load-balancing criteria (e.g., load conditions in each of the volumes) in accordance with conventional load-balancing methods. This configuration thus offers load balancing together with the failover handling of the present invention; a minimal selection sketch follows.
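  • A minimal sketch of the production-volume selection for this load-balancing variant; the load metric shown (an outstanding-I/O count per volume) is an assumption, and any conventional load-balancing criterion could be substituted:
    outstanding_io = {"Volume X-P": 0, "Volume Y-P": 0}   # hypothetical per-volume counters

    def choose_production_volume():
        """Send the next I/O to the less-loaded of the two production volumes."""
        volume = min(outstanding_io, key=outstanding_io.get)
        outstanding_io[volume] += 1                       # decremented on completion (not shown)
        return volume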
  • Embodiment 2
  • FIG. 1 also illustrates a second aspect of the present invention. This aspect of the present invention relates to non-disruptive data migration.
  • Generally in accordance with this second aspect of the present invention, a host system includes a virtual volume module in data communication with a first storage system. A second storage system is provided. The virtual volume module can initiate a copy operation in the first storage system so that data stored on the first storage system is migrated to the second storage system. The virtual volume module can periodically monitor the status of the copy operation. In the meantime, the virtual volume module receives I/O requests from applications running on the host and services them by accessing the first storage system. When the migration operation has completed, the virtual volume module can direct I/O requests from the applications to the second storage system. Following is a discussion of an illustrative embodiment of this aspect of the present invention.
  • As mentioned, the system configuration shown in FIG. 1 can be used to explain this aspect of the present invention. For this aspect of the present invention, suppose Storage System A 0105 is a pre-existing (e.g., legacy) storage system. Suppose further that Storage System B 0107 is a replacement storage system. In this situation, it is assumed that storage system 0107 will replace the legacy storage system 0105. Consequently, it is desirable to copy (migrate) data from Volume X-P in the storage system 0105 to Volume X-S in storage system 0107. Moreover, it is desirable to do this on a “live” system, where users can access Volume X-P during the data migration.
  • As in the first aspect of the present invention, the virtual volume module discovers Volume X-P and Volume X-S. A configuration file stored in the host 0101 includes the following information:
    TABLE II
    #Configuration File
    Data Migration Set: DMS1
    Primary Volume: Volume X-P in Storage System A
    Secondary Volume: Volume X-S in Storage System B
    Virtual Volume Name: VVolX
    Data Migration Set: DMS2
    Primary Volume: Volume Y-P in Storage System C
    Secondary Volume: Volume Y-S in Storage System B
    Virtual Volume Name: VVolY

    This table can be used to initialize the virtual volume module 010105. Alternatively, a command line interface as discussed above can be used to communicate the above information to the virtual volume module.
  • This configuration table identifies data migration volume sets. The primary volume indicates a legacy (old) storage volume. The secondary volume designates a new storage volume. As in Embodiment 1, the virtual volume module 010105 presents applications 010101 running on the host 0101 with a virtual storage volume.
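  • A minimal illustrative parser for the configuration file format of TABLE II is sketched below. The parser itself is not part of the embodiment and its names are hypothetical; only the key names are taken verbatim from the table:
    def parse_config(text):
        """Return one dict per Data Migration Set found in the configuration file."""
        sets, current = [], None
        for raw in text.splitlines():
            line = raw.strip()
            if not line or line.startswith("#"):           # skip blanks and comments
                continue
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key == "Data Migration Set":
                current = {"Data Migration Set": value}
                sets.append(current)
            elif current is not None:
                current[key] = value   # Primary Volume, Secondary Volume, Virtual Volume Name
        return sets

    # For TABLE II, parse_config(config_text)[0] would yield:
    # {"Data Migration Set": "DMS1",
    #  "Primary Volume": "Volume X-P in Storage System A",
    #  "Secondary Volume": "Volume X-S in Storage System B",
    #  "Virtual Volume Name": "VVolX"}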
  • Remote copy technology is used in this embodiment of the present invention. However, it will be appreciated that any suitable data duplication technology can be adapted in accordance with the present invention. Thus, in this embodiment of the present invention, the primary volume serves as the volume on the legacy storage system, and the secondary volume serves as the volume on the new storage system.
  • When a data migration operation is initiated, the virtual volume module 010105 communicates a request to the storage system 0105 to create a data replication pair between the primary volume that is specified in the configuration file (here, Volume X-P) and the secondary volume that is specified in the configuration file (here, Volume X-S). The disk controller 010503, in response, will set the volume pair to the SYNCING state. The disk controller then initiates data copy operations from Volume X-P to Volume X-S. This is typically a background process, thus allowing for servicing of I/O requests from the host 0101. Typically, a bitmap or some similar mechanism is used to keep track of which blocks have been copied.
  • If the storage system 0105 receives a data write request during the data migration, the disk controller in the storage system 0105 will write the data to the targeted data blocks in Volume X-P. After that, the disk controller 010503 will write the received data to the storage system 0107. The disk controller 010703 will write the data to Volume X-S and respond to the disk controller 010503 accordingly. The disk controller 010503 will then respond to the host 0101 accordingly.
  • When the data migration has completed, the disk controller 010503 will change the volume pair status to SYNCED.
  • As discussed above, the virtual volume module 010105 provides a virtual volume to the applications 010101 running on the host 0101 via the SCSI interface 010103. The applications can issue any SCSI command (including I/O related commands) to the SCSI interface. The virtual volume module 010105 intercepts the SCSI commands and issues suitable corresponding requests to the storage system 0105 to service the command.
  • In accordance with this second aspect of the present invention, the virtual volume module 010105 periodically checks the pair status of the Volume X-P/Volume X-S pair. When the pair status is SYNCED, the virtual volume module will communicate a request to the disk controller 010503 to delete the pair. The disk controller 010503 will then take steps to delete the volume pair, and will stop any data copy or data synchronization between Volume X-P and Volume X-S. The disk controller 010503 will then respond to the host 0101 with a response indicating completion of the delete operation. It is noted that I/O requests from the host 0101 during this time are not processed. They are merely queued up. To the applications 010101, it will appear as if the storage system (the virtual storage system as presented by the virtual volume module 010105) is behaving slowly.
  • When the virtual volume module 010105 receives a positive response from the disk controller 010503 indicating the delete operation has succeeded, then the entry in the configuration table for the data migration pair consisting of Volume X-P and Volume X-S is eliminated. I/O requests that have queued up will now be serviced by the storage system 0107. Likewise, when the virtual volume module receives subsequent SCSI commands, it will direct them to the storage system 0107 via the FC channel 010303.
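  • The migration flow described in the preceding paragraphs can be sketched as a simple polling loop in the virtual volume module. The callables used below (create_pair, pair_status, delete_pair and the I/O hold/redirect helpers) are hypothetical stand-ins for the SCSI-level requests actually exchanged with the disk controllers:
    import time

    def migrate_and_cut_over(create_pair, pair_status, delete_pair,
                             hold_io, redirect_io, resume_io, poll_seconds=5.0):
        """Create the pair, wait for SYNCED, delete the pair, then redirect I/O."""
        create_pair("Volume X-P", "Volume X-S")          # background copy begins (SYNCING)
        while pair_status("Volume X-P", "Volume X-S") != "SYNCED":
            time.sleep(poll_seconds)                     # I/O is still serviced by Volume X-P
        hold_io()                                        # queue host I/O during the delete
        delete_pair("Volume X-P", "Volume X-S")          # stops copy/synchronization
        redirect_io("Storage System B")                  # table entry removed; new target set
        resume_io()                                      # queued I/O serviced by Volume X-S

    # Example with trivial stand-ins:
    migrate_and_cut_over(lambda p, s: None, lambda p, s: "SYNCED", lambda p, s: None,
                         lambda: None, lambda target: None, lambda: None, poll_seconds=0)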
  • This aspect of the present invention allows for data migration to take place in a transparent fashion. Moreover, when the migration has completed, the old storage system 0105 can be taken offline without disruption of service to the applications 010101. This is made possible by the virtual volume module which transparently redirects I/O to the storage system 0107 via the communication link 010303.
  • Operation of the disk controller 010503 and of the disk controller 010703 is as discussed above in connection with the first embodiment of the present invention.
  • Embodiment 3
  • FIG. 3 shows an embodiment of a system according to a third aspect of the present invention. This aspect of the present invention reduces the time for failover processing.
  • Generally in accordance with this third aspect of the present invention, a first host and a second host are configured for clustering. Each host can access a first storage system and a second storage system. The first storage system serves as a production storage system. The second storage system serves as a backup to the primary storage system. A virtual volume module in each host provides a virtual volume view to applications running on the host. By default, the virtual volume modules access the first storage system (the production storage system) to service I/O requests from the hosts. When a host detects that the other host is not operational, it performs conventional failover processing to take over the failed host. The virtual volume modules are configured to detect a failure in the first storage system. In response, subsequent access to storage is directed by the virtual volume modules to the second storage system. If the virtual volume module in the second host detects a failure of the first storage system, the virtual volume module will direct I/O requests to the second storage system. An illustrative embodiment of this aspect of the present invention will now be discussed.
  • FIG. 3 shows one or more FC networks. An FC network 030301 connects a host 0301 to a storage system 0305 (Storage System A); Storage System A is associated with the host 0301. An FC network 030303 connects the host 0301 to a storage system 0307 (Storage System B). An FC network 030307 connects a host 0309 to the storage system 0305. An FC network 030309 connects the host 0309 to the storage system 0307; Storage System B is associated with the host 0309. It can be appreciated that other types of networks can be used; e.g., InfiniBand and Ethernet. It can also be appreciated that FC switches which are not shown in the figure can be used to create Storage Area Networks (SAN) among the host and the storage systems.
  • The hosts 0301, 0309 are configured in a manner similar to the host 0101 shown in FIG. 1. For example, each host 0301, 0309 includes respectively one or more FC HBA's 030107, 030907 for connection to the respective FC network. FIG. 3 shows that each host 0301, 0309 includes two FC HBA's.
  • Each host 0301, 0309 includes respectively a virtual volume module 030105, 030905, a SCSI Interface 030103, 030903, Cluster Software 030109, 030909, and one or more applications 030101, 030901. The underlying OS on each host can be any suitable OS, such as Windows 2000/XP/2003, Linux, UNIX, MVS, etc. The OS can be different for each host.
  • User-level applications 030101, 030901 include typical applications such as database systems, but of course can be any software that has occasion to access data on a storage system. Typical system-level applications include system services such as file systems and volume managers. Typically, there is data associated with an access request, whether it is data to be read from storage or data to be written to storage.
  • The cluster software 030109, 030909 cooperate to provide load balancing and failover capability. A communication channel, indicated by the dashed line, facilitates operation of the cluster software in each host 0301, 0309. For example, a heartbeat signal can be passed between the software modules 030109, 030909 to determine when a host has failed. In the configuration shown, the cluster software components 030109, 030909 are configured for ACTIVE-ACTIVE operation. Thus, each host can serve as a standby host for the other host. Both hosts are active and operate concurrently to provide load balancing between them and to serve as standby hosts for each other. Both hosts access the same volume, in this case Volume X-P. The cluster software manages data consistency between the hosts. An example of this kind of cluster software is Real Application Clusters by Oracle Corporation.
  • The SCSI interface 030103, 030903 in each host 0301, 0309 is configured as discussed above in FIG. 1. Similarly, the virtual volume modules 030105, 030905 are configured as in FIG. 1, to provide virtual volumes to the applications running on their respective host machines 0301, 0309. The storage systems 0305 and 0307 are similarly configured as described in FIG. 1.
  • In operation, each virtual volume module 030105, 030905 functions much in the same way as discussed in Embodiment 1. The cluster software 030109, 030909 both access Volume X-P 030505 in the storage system 0305 as the primary (production) volume; the secondary volume is provided by Volume X-S 030705 in the storage system 0307 and serves as a backup volume. The virtual volume module configures Volume X-P and Volume X-S as a remote copy pair by sending appropriate commands to the disk controller 030503. The volume pair is initialized to be in the PAIR state by the disk controller 030503. In the PAIR state, the disk controller 030503 copies data that is written to Volume X-P to Volume X-S.
  • As mentioned above, the cluster software 030109, 030909 is configured for ACTIVE-ACTIVE operation. Each host 0301, 0309 can access Volume X-P for I/O operations. The cluster software is responsible for maintaining data integrity so that both hosts 0301, 0309 can access the volume. For example, cluster software 030109 (or 030909) first obtains a lock on all or a portion of Volume X-P before it writes data to Volume X-P, so that only one host at a time can write data to the volume.
  • If one host fails, applications running on the surviving host can continue to operate; the cluster software in the surviving host will perform the necessary failover processing for a failed host. The virtual volume module of the surviving host is not aware of such failure. Consequently, the virtual volume modules do not perform any failover processing, and will continue to access Volume X-P to service I/O requests from applications executing on the surviving host.
  • If, on the other hand, the storage system 0305 fails, the virtual volume module in each host 0301, 0309 will detect the failure and perform a failover process as discussed in Embodiment 1. Thus, both virtual volume modules will issue a split command to the primary storage system 0305. The disk controller 030503 will change the volume pair status to SPLIT in response to the first split command it receives, and will ignore the second split command. The virtual volume modules 030105, 030905 will then reconfigure themselves so that subsequent I/O requests from the hosts 0301, 0309 can be serviced by communicating with Volume X-S. The cluster software continues to operate without being aware of the failed storage system, since the failover processing was handled by the virtual volume modules 030105, 030905. If the pair status is SYNCING or REVERSE-SYNCING, however, the split command fails; as a result, the hosts cannot continue to operate.
  • If one of the hosts and the primary storage system both fail, then the cluster software in the surviving host will perform failover processing to handle the failed host. The virtual volume module in the surviving host will perform path failover as discussed above for Embodiment 1 to provide uninterrupted service to the applications running on the surviving host. The virtual volume module in the surviving host will direct I/O requests to the surviving storage system. It is noted that no synchronization is required between the cluster software and the virtual volume module, because the cluster software does not see any storage system or volume failure.
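  • The split handling described above (the first split command takes effect, a duplicate is ignored, and failover is not possible while the pair is mid-copy) can be sketched as follows; the function and dictionary names are hypothetical and the state is modeled as plain dicts:
    def controller_handle_split(pair):
        """Disk controller 030503 handling split commands issued by both hosts."""
        if pair["status"] in ("SYNCING", "REVERSE-SYNCING"):
            return "failed"                 # hosts cannot fail over in these states
        if pair["status"] != "SPLIT":
            pair["status"] = "SPLIT"        # first split command takes effect
            return "split"
        return "ignored"                    # second (duplicate) split command

    def host_failover(pair, host_table):
        """Each host's virtual volume module reacting to a failed primary storage system."""
        result = controller_handle_split(pair)
        if result == "failed":
            raise IOError("pair is mid-copy; failover is not possible")
        host_table["target"] = "Volume X-S" # redirect subsequent I/O

    pair = {"status": "SYNCED"}
    for table in ({"target": "Volume X-P"}, {"target": "Volume X-P"}):
        host_failover(pair, table)          # both hosts converge on Volume X-S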
  • Embodiment 4
  • FIG. 4 shows an embodiment of a fourth aspect of the present invention, in which redundant data replication capability is provided.
  • Generally in accordance with this fourth aspect of the present invention, a host is connected to first and second storage systems. A virtual volume module executing on the host provides a virtual volume view to applications executing on the host machine. The first storage system is backed up by the second storage system. The virtual volume module can perform a failover to the second storage system if the first storage system fails. Third and fourth storage systems serve as backup systems respectively for the first and second storage systems. Thus, data backup can continue if either the first storage system fails or if the second storage system fails. A discussion of an illustrative embodiment of this aspect of the present invention follows.
  • In the configuration shown in FIG. 4, an FC network 050301 provides a data connection between a host 0501 and a first storage system 0505 (Storage System A). An FC network 050303 provides a data connection between the host 0501 and a second storage system 0507 (Storage System B). An FC network 050305 provides a data connection between the storage system 0505 and the storage system 0507. An FC network 050309 provides a data connection between the storage system 0507 and a third storage system 0509 (Storage System C). An FC network 050307 provides a data connection between the storage system 0505 and a fourth storage system 0511 (Storage System D). It can be appreciated of course that other types of networks can be used instead of FC; for example, InfiniBand and Ethernet. It can be further appreciated that FC switches (not shown) can be used to create a storage area network (SAN) among the storage systems. It will be understood that other storage architectures can also be used.
  • The host 0501 and the storage systems 0505, 0507 are located at a first data center in a location A. The storage systems 0509, 0511 are located in another data center at a location B that is separate from location A. Typically, location B is a substantial distance from location A; e.g., different cities. The two data centers can be connected by a WAN, so the FC networks 050307, 050309 pass through the WAN.
  • The host 0501 includes one or more FC HBA's 050107. In the embodiment shown, the host includes two FC HBA's. The host includes a virtual volume module 050105, a SCSI interface 050103, and one or more user applications 050101. It can be appreciated that a suitable OS is provided on the host 0501, such as Windows 2000/XP/2003, Linux, UNIX, and MVS. The virtual volume module 050105 provides a virtual volume view to the applications 050101 as discussed above.
  • In operation, the virtual volume module 050105 operates in the manner as discussed in connection with Embodiment 1. Particular aspects of the operation in accordance with this embodiment of the invention include the virtual volume module using Volume X-P 050505 in the storage system 0505 as the primary volume and Volume X-S 1 050705 in the storage system 0507 as the secondary volume. The primary volume serves as the production volume for I/O operations made by the user-level and system-level applications 050101 running on the host 0501.
  • The virtual volume module 050105 configures the storage systems for various data backup/replication operations, which will now be discussed. The disk controller 050503 in the storage system 0505 is configured for remote copy operations using Volume X-P and Volume X-S 1 as the remote copy pair. Volume X-P serves as the production volume to which the virtual volume module 050105 directs I/O operations to service data I/O requests from the applications 050101. In the storage system 0505, remote copy takes place via the FC network 050305, where Volume X-P is the primary volume and Volume X-S 1 is the secondary volume. The remote copy operations are performed synchronously.
  • Redundant replication is provided by the storage system 0505. Volume X-P and Volume X-S3 051105 are paired for remote copy operations via the FC network 050307. Volume X-P is the primary volume and Volume X-S3 is the secondary volume. The data transfers can be performed synchronously or asynchronously; which is used is a user choice. Synchronous replication avoids data loss, but is limited to shorter replication distances and can sometimes slow the I/O performance of a host. Asynchronous replication supports long-distance replication and does not degrade host I/O performance, but data may be lost if the primary volume fails. There is a tradeoff between the two.
  • As mentioned above, synchronous data transfer from device A to device B means that device A writes data to its local volume, sends the data to device B, and then waits for a response to the data transfer operation from device B before device A sends a response to the host. With asynchronous data transfer, device A sends a response to the host immediately after device A writes the data to its local volume. The written data is transferred to device B after the response; this data transfer is independent of device A's processing of I/O requests from the host.
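  • A minimal sketch of the difference between the two transfer modes; the callables and the pending-copy list are hypothetical stand-ins for the devices' internal operations:
    pending_remote_copies = []        # drained later by a background replication process

    def synchronous_write(write_local, write_remote):
        """Device A acknowledges the host only after device B confirms its copy."""
        write_local()
        write_remote()                # wait here for device B's response
        return "ack to host"

    def asynchronous_write(write_local, data):
        """Device A acknowledges the host immediately; the copy to device B is
        transferred later, independently of host I/O processing."""
        write_local()
        pending_remote_copies.append(data)
        return "ack to host"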
  • Continuing, redundant replication is also provided by the storage system 0507. Volume X-S 1 and Volume X-S 2 050905 form a remote copy pair, where Volume X-S 1 is the primary volume and Volume X-S 2 is the secondary volume. The data transfer can be synchronous or asynchronous.
  • During normal operation, the virtual volume module 050105 receives I/O requests via the SCSI interface 050103, and directs corresponding I/O operations to Volume X-P, via the FC network 050301, as shown in FIG. 4 by the bolded line. Data replication (by way of remote copy operations) occurs between Volume X-P and Volume X-S 1, where changes to Volume X-P are copied to Volume X-S 1 synchronously. Data replication (also by way of remote copy operations) occurs between Volume X-P and Volume X-S3, where changes to Volume X-P are copied to Volume X-S3 synchronously or asynchronously; this is a redundant replication since Volume X-S 1 also has a copy of Volume X-P. Data replication (also by way of remote copy operations) occurs between Volume X-S 1 and Volume X-S 2, where changes to Volume X-S 1 are copied to Volume X-S 2 synchronously or asynchronously.
  • Consider FIG. 4A, where the storage system 0505 has failed. The virtual volume module 050105 will detect this and will perform failover processing to Volume X-S 1 as discussed in Embodiment 1. Thus, I/O processing can continue with Volume X-S 1. In addition, data replication (backup) continues to be provided by the volume pair of Volume X-S 1 and Volume X-S 2.
  • Consider FIG. 4B, where the storage system 0507 has failed. The virtual volume module 050105 will continue to direct I/O operations to Volume X-P, since Volume X-P remains operational. Data replication will not occur between Volume X-P and Volume X-S 1 due to the failure of the storage system 0507. However, data replication will continue between Volume X-P and Volume X-S3. The configuration of FIG. 4, therefore, is able to provide redundancy for data backup and/or replication capability.
  • Embodiment 5
  • Refer now to FIG. 5 for a discussion of an embodiment according to a fifth aspect of the present invention. This aspect of the invention provides for disaster recovery using redundant data replication.
  • Generally in accordance with this fifth aspect of the present invention, a first host and a second host each is connected to a pair of storage systems. One host is configured for standby operation and becomes active when the other host fails. A virtual volume module is provided in each host. In the active host, the virtual volume module services I/O requests from applications running on the host by accessing one of the storage systems connected to the host. Data replication is performed between the pair of storage systems associated with the host, and between the pairs of storage systems. When the active host fails, the standby host takes over and uses the pair of storage systems associated with the standby host. Since data replication was being performed between the two pairs of storage systems, the standby host has access to the latest data; i.e., the data at the time of failure of the active host. Following is a discussion of an illustrative embodiment of this aspect of the present invention.
  • Two hosts 1501, 1513 are coupled to storage systems via FC networks. An FC network 150301 connects host 1501 to a storage system 1505 (Storage System A). An FC network 150303 connects the host 1501 to a storage system 1507 (Storage System B). An FC network 150305 connects the storage system 1505 to the storage system 1507. For the host 1513, an FC network 150311 connects the host 1513 to a storage system 1509 (Storage System C). An FC network 150313 connects the host 1513 to a storage system 1511 (Storage System D). An FC network 150315 connects the storage system 1509 to the storage system 1511. An FC network 150307 connects the storage system 1505 to the storage system 1511. An FC network 150309 connects the storage system 1507 to the storage system 1509.
  • The host 1501 and its associated storage systems 1505, 1507 are located in a data center in a location A. The host 1513 and its associated storage systems 1509, 1511 are located in a data center at a location B. The data centers can be connected in a WAN that includes FC networks 150307, 150309.
  • Each host 1501, 1513 is configured as described in Embodiment 3. In particular, each host 1501, 1513 includes respective cluster software 150109, 151309. In this embodiment, however, the cluster software is configured for ACTIVE-SLEEP operation (also known as active/passive mode). In this mode of operating a cluster, one host is active (e.g., host 1501) and the other host (e.g., host 1513) is in a standby mode. Thus, from the point of view of storage access, there is only one active host. When the standby host detects or otherwise determines that the active host has failed, it then becomes the active host. For example, Veritas Cluster Server by VERITAS Software Corporation provides this mode of cluster operation.
  • Under normal operating conditions, applications 150101 executing in the active host 1501 make I/O requests. The virtual volume module 150105 services the requests by communicating corresponding I/O operations to Volume X-P 150505, which serves as the production volume. Volume X-P and Volume X-S 1 150705 are configured as a remote copy pair via a suitable interaction between the virtual volume module 150105 and the disk controller 150503. Write operations made to Volume X-P are thereby replicated synchronously to Volume X-S 1 via the FC network 150305; Volume X-S 1 thus serves as the backup for the production volume. The host 1513 is in standby mode and thus the virtual volume module 151305 is inactive as well.
  • The virtual volume module 150105 configures the volumes for the following data replication and backup operations: Volume X-P and Volume X-S3 151105 are also configured as a remote copy pair. Write operations made to Volume X-P are thereby replicated to Volume X-S3 via the FC network 150307. The data transfer can be synchronous or asynchronous.
  • Volume X-S 1 and Volume X-S 2 150905 are configured as a remote copy pair. Write operations made to Volume X-S 1 are thereby replicated to Volume X-S 2 via the FC network 150309. The data transfer can be synchronous or asynchronous.
  • Volume X-S 2 and Volume X-S5 151109 are configured as a remote copy pair. Write operations made to Volume X-S 2 are thereby replicated to Volume X-S5 via the FC network 150315. The data transfer is synchronous.
  • Volume X-S3 and Volume X-S4 150909 are configured as a remote copy pair. Write operations made to Volume X-S3 are thereby replicated to Volume X-S4 via the FC network 150315. The data transfer is synchronous.
  • Consider the failover situation in which the storage system 1505 fails. The virtual volume module 150105 will detect this and perform a failover process as discussed in Embodiment 1. Subsequent I/O requests by the applications running on the host 1501 will be serviced by the virtual volume module 150105 by accessing Volume X-S 1. Note that data replication continues despite the failure of the storage system 1505 because Volume X-S 1 is backed up by Volume X-S 2.
  • Consider the failover situation in which the storage system 1507 fails. Data I/O requests made by the applications running on the host 1501 will continue to be serviced by the virtual volume module 150105 by accessing Volume X-P. Moreover, data replication of Volume X-P continues with Volume X-S3, despite the failure of the storage system 1507.
  • Consider the failover condition in which the active host 1501 fails. The cluster software 151309 will detect the condition and activate the host 1513. Applications 151301 will execute to take over the functions provided by the failed host 1501. The virtual volume module 151305 in the now-active host 1513 will access either Volume X-S2 in storage system 1509 or Volume X-S3 in storage system 1511 to service I/O requests from the applications. Since it is possible that the storage system 1505 or the storage system 1507 could have failed before their respective remote copy sites (i.e., storage system 1511 and storage system 1509) were fully synchronized, it is necessary to determine which storage system is synchronized. This determination can be made by querying the storage system 1511 and the storage system 1509 for the statuses of the volume pairs, X-P to X-S3 and X-S 1 to X-S 2. If one of the statuses is SYNCING or SYNCED, then the host splits that pair and uses the secondary volume of the pair as the primary volume for the host. If both statuses are SPLIT, the host checks when the pairs were split and selects the secondary volume of the last-split pair as the primary volume for the host. To determine when the pairs were split, as one possible implementation, the storage system sends an error message to the host when the pair is split and the host records the error.
  • If it is determined that Volume X-S 2 has the latest data, then the virtual volume module 151305 will service I/O requests from the applications 151301 using Volume X-S 2. Volume X-S5 will serve as backup by virtue of the volume pair configuration discussed above. If it is determined that Volume X-S3 has the latest data, then the virtual volume module 151305 will service I/O requests from the applications 151301 using Volume X-S3. Volume X-S4 will serve as backup by virtue of the volume pair configuration discussed above.
  • Failover processing by the standby host 1513 includes the cluster software 151309 instructing the disk controller 150903 to perform a SPLIT operation to split the volume pair Volume X-S 1 and Volume X-S 2. The virtual volume module also instructs the disk controller 151103 to split the Volume X-P and Volume X-S3 pair.
  • As noted above, the virtual volume module 151305 knows which volume (Volume X-S2 or Volume X-S3) has the latest data. If Volume X-S 2 has the latest data (or both volumes have the latest data, a situation in which there was no failure at either storage system 1505 or storage system 1507), then a script which is installed on the host and is started by the cluster software 151309 configures the virtual volume module 151305 to use Volume X-S 2 as the primary volume and Volume X-S5 as the secondary volume. If, on the other hand, Volume X-S3 has the latest data, then the script configures the virtual volume module to use Volume X-S3 as the primary volume and Volume X-S4 as the secondary volume. This selection logic is summarized in the sketch below.
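  • The following sketch summarizes how the standby host could choose its new primary volume from the statuses of the X-P/X-S3 and X-S 1/X-S 2 pairs; the dictionary layout (including the recorded split time) is a hypothetical illustration of the error-record mechanism mentioned above:
    def pick_primary_after_site_failover(pairs):
        """Choose the standby host's new primary volume per the rules above."""
        for p in pairs.values():
            if p["status"] in ("SYNCING", "SYNCED"):
                # This pair was still replicating: split it and use its secondary.
                p["status"] = "SPLIT"
                return p["secondary"]
        # Both pairs already SPLIT: use the secondary of the pair that was split
        # last, based on the error records the host kept when each split occurred.
        last = max(pairs.values(), key=lambda p: p["split_time"])
        return last["secondary"]

    pairs = {
        "X-P/X-S3":  {"status": "SPLIT",  "secondary": "Volume X-S3", "split_time": 100},
        "X-S1/X-S2": {"status": "SYNCED", "secondary": "Volume X-S2", "split_time": 0},
    }
    print(pick_primary_after_site_failover(pairs))   # -> Volume X-S2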
  • The embodiments described above each have the virtualization module in the host. However, virtualized storage systems also include a virtualization component that can be located external to the host, between the host machine and the storage system. For example, a storage virtualization product like the Cisco MDS 9000 provides a virtualization component (in the form of software) in the switch. In accordance with the present invention, the functions performed by the virtualization component discussed above can be performed in the switch, if the virtualization component is part of the switch. The virtualization component can also be located in an intelligent storage system. An intelligent storage system stores data not only in local volumes but also in external volumes: the local volumes are volumes that the intelligent storage system contains in itself, while the external volumes are volumes provided by external storage systems that the intelligent storage system can access via networking switches. The virtual volume module running on the intelligent storage system performs the functions discussed above. In this case, the primary volumes can be the local volumes and the secondary volumes can be the external volumes.

Claims (11)

1. A method for accessing data from a host computer coupled to a first storage system and a second storage system, the method comprising:
receiving I/O (input/output) requests from one or more applications in the host computer;
designating one of the first storage system or the second storage system as a target storage system;
maintaining pairing information relating to a pairing state of storage volumes in the first storage system and the second storage system;
for each I/O request, producing one or more corresponding I/O operations that are directed to a target storage volume, the target storage volume being contained in the target storage system,
communicating the one or more corresponding I/O operations to the target storage system to conduct the I/O requests; and
communicating a request to initiate a data copy process in which data in one of the first and second storage system, designated as a primary system, is copied to another of the first and second storage system, designated as a secondary system, wherein the primary system is designated as the target storage system.
2. The method of claim 1, wherein at least one of the storage volumes is mapped to a storage volume in a third storage system which is coupled to the first and second storage system to store data from the host computer as an external storage system.
3. The method of claim 1 further comprising receiving an indication of an error in the primary system, wherein the data copy process is initiated in response to receiving the indication of the error.
4. The method of claim 3 further comprising detecting a failure in the primary system whereby the primary system cannot service the I/O requests and in response thereto designating the secondary system as the target storage system and designating a storage volume in the secondary system as the target volume, whereby the secondary system services subsequent I/O requests from the host computer.
5. The method of claim 4 further comprising detecting a recovery in the primary system whereby the primary system is capable of servicing the I/O requests, and in response thereto initiating a second data copy process to copy data in the secondary system to the primary system, and subsequent to completion of the second copy process designating the primary system as the target storage system and designating a storage volume in the primary system as the target volume, whereby the primary system services subsequent I/O requests from the host computer.
6. The method of claim 3 wherein the data copy process is a data migration operation, the method further comprising detecting completion of the data migration operation and in response thereto designating the secondary system as the target storage system and designating a storage volume in the secondary system as the target storage volume, whereby the primary system is no longer being used as the target storage system.
7. The method of claim 1 wherein the applications include user-level applications and system-level applications.
8. The method of claim 1 wherein the one or more corresponding I/O operations are SCSI commands.
9. The method of claim 1 wherein the step of communicating includes communicating over one or more FC (Fibre Channel) networks.
10. A host device coupled to a first storage system and a second storage system, the first storage system and the second storage system coupled to a third storage system which stores data written by the host device, the third storage system presenting at least one storage volume as a storage resource to the first storage system and the second storage system, the first storage system presenting a first storage volume mapped to the at least one storage volume in the third storage system and the second storage system presenting a second storage volume mapped to the at least one storage volume in the third storage system as a storage resource to the host device, the host device comprising:
a data processing unit operable to receive I/O operations and to produce corresponding I/O operations that are directed to a target storage volume being one of the first storage volume or the second storage volume, the data processing unit maintaining pairing information relating to a pairing state of storage volumes including the first and second storage volume in the first and second storage system;
a first communication interface configured for connection to a communication network; and
a second communication interface configured for connection to a communication network,
wherein the data processing unit is further operable to selectively communicate the corresponding I/O operations to the first storage system via the first communication interface and to the second storage system via the second communication interface.
11. The host device of claim 10 wherein the data processing unit is further operable to detect a failure in the first storage system when the target storage volume is the first storage volume and in response thereto to subsequently communicate corresponding I/O operations to the second storage system, the target storage volume being the second storage volume in the second storage system.
US11/393,596 2004-08-03 2006-03-29 Failover and data migration using data replication Abandoned US20060179170A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/393,596 US20060179170A1 (en) 2004-08-03 2006-03-29 Failover and data migration using data replication

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/911,107 US7058731B2 (en) 2004-08-03 2004-08-03 Failover and data migration using data replication
US11/393,596 US20060179170A1 (en) 2004-08-03 2006-03-29 Failover and data migration using data replication

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/911,107 Continuation US7058731B2 (en) 2004-08-03 2004-08-03 Failover and data migration using data replication

Publications (1)

Publication Number Publication Date
US20060179170A1 true US20060179170A1 (en) 2006-08-10

Family

ID=35758820

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/911,107 Expired - Fee Related US7058731B2 (en) 2004-08-03 2004-08-03 Failover and data migration using data replication
US11/393,596 Abandoned US20060179170A1 (en) 2004-08-03 2006-03-29 Failover and data migration using data replication

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/911,107 Expired - Fee Related US7058731B2 (en) 2004-08-03 2004-08-03 Failover and data migration using data replication

Country Status (2)

Country Link
US (2) US7058731B2 (en)
JP (1) JP4751117B2 (en)

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070220223A1 (en) * 2006-03-17 2007-09-20 Boyd Kenneth W Remote copying of updates to primary and secondary storage locations subject to a copy relationship
US20080104347A1 (en) * 2006-10-30 2008-05-01 Takashige Iwamura Information system and data transfer method of information system
US20080133831A1 (en) * 2006-12-01 2008-06-05 Lsi Logic Corporation System and method of volume group creation based on an automatic drive selection scheme
US20080228770A1 (en) * 2007-03-15 2008-09-18 Halcrow Michael A Method for Performing Recoverable Live Context Migration in a Stacked File System
US20090182996A1 (en) * 2008-01-14 2009-07-16 International Business Machines Corporation Methods and Computer Program Products for Swapping Synchronous Replication Secondaries from a Subchannel Set Other Than Zero to Subchannel Set Zero Using Dynamic I/O
US20090193292A1 (en) * 2008-01-25 2009-07-30 International Business Machines Corporation Methods And Computer Program Products For Defing Synchronous Replication Devices In A Subchannel Set Other Than Subchannel Set Zero
US20090271582A1 (en) * 2008-04-28 2009-10-29 Hitachi, Ltd. Information System and I/O Processing Method
US20100023647A1 (en) * 2008-07-28 2010-01-28 International Business Machines Corporation Swapping pprc secondaries from a subchannel set other than zero to subchannel set zero using control block field manipulation
US20100095066A1 (en) * 2008-09-23 2010-04-15 1060 Research Limited Method for caching resource representations in a contextual address space
US20100131950A1 (en) * 2008-11-27 2010-05-27 Hitachi, Ltd. Storage system and virtual interface management method
US7805565B1 (en) * 2005-12-23 2010-09-28 Oracle America, Inc. Virtualization metadata promotion
US20110202718A1 (en) * 2006-04-18 2011-08-18 Nobuhiro Maki Dual writing device and its control method
US20110289292A1 (en) * 2007-08-22 2011-11-24 Nobuhiro Maki Storage system performing virtual volume backup and method thereof
US20120042142A1 (en) * 2008-08-08 2012-02-16 Amazon Technologies, Inc. Providing executing programs with reliable access to non-local block data storage
US20120060006A1 (en) * 2008-08-08 2012-03-08 Amazon Technologies, Inc. Managing access of multiple executing programs to non-local block data storage
US20120303913A1 (en) * 2011-05-26 2012-11-29 International Business Machines Corporation Transparent file system migration to a new physical location
US8341119B1 (en) * 2009-09-14 2012-12-25 Netapp, Inc. Flexible copies having different sub-types
US8392753B1 (en) * 2010-03-30 2013-03-05 Emc Corporation Automatic failover during online data migration
US20130067163A1 (en) * 2011-09-09 2013-03-14 Vinu Velayudhan Methods and structure for transferring ownership of a logical volume by transfer of native-format metadata in a clustered storage environment
US8429360B1 (en) * 2009-09-28 2013-04-23 Network Appliance, Inc. Method and system for efficient migration of a storage object between storage servers based on an ancestry of the storage object in a network storage system
US20140082295A1 (en) * 2012-09-18 2014-03-20 Netapp, Inc. Detection of out-of-band access to a cached file system
US8914540B1 (en) * 2008-07-01 2014-12-16 Cisco Technology, Inc. Multi-fabric SAN based data migration
US9032172B2 (en) 2013-02-11 2015-05-12 International Business Machines Corporation Systems, methods and computer program products for selective copying of track data through peer-to-peer remote copy
US9552265B2 (en) 2014-03-28 2017-01-24 Fujitsu Limited Information processing apparatus and storage system
US9633038B2 (en) 2013-08-27 2017-04-25 Netapp, Inc. Detecting out-of-band (OOB) changes when replicating a source file system using an in-line system
CN107329709A (en) * 2017-07-05 2017-11-07 长沙开雅电子科技有限公司 A kind of Storage Virtualization New Virtual rolls up implementation method

Families Citing this family (87)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BR0108797A (en) * 2000-03-01 2003-02-18 Computer Ass Think Inc Method and system for updating a compressed file from a computer file
US7792797B2 (en) * 2002-12-24 2010-09-07 International Business Machines Corporation Fail over resource manager access in a content management system
JP2005018510A (en) * 2003-06-27 2005-01-20 Hitachi Ltd Data center system and its control method
US8266406B2 (en) 2004-04-30 2012-09-11 Commvault Systems, Inc. System and method for allocation of organizational resources
CA2564967C (en) 2004-04-30 2014-09-30 Commvault Systems, Inc. Hierarchical systems and methods for providing a unified view of storage information
JP4887618B2 (en) * 2004-11-19 2012-02-29 日本電気株式会社 Storage system, replication method and program thereof
US7337350B2 (en) * 2005-02-09 2008-02-26 Hitachi, Ltd. Clustered storage system with external storage systems
JP2006260141A (en) * 2005-03-17 2006-09-28 Fujitsu Ltd Control method for storage system, storage system, storage control device, control program for storage system, and information processing system
JP4843976B2 (en) * 2005-03-25 2011-12-21 日本電気株式会社 Replication systems and methods
JP4550717B2 (en) * 2005-10-28 2010-09-22 富士通株式会社 Virtual storage system control apparatus, virtual storage system control program, and virtual storage system control method
JP4813872B2 (en) * 2005-11-21 2011-11-09 株式会社日立製作所 Computer system and data replication method of computer system
US20110010518A1 (en) * 2005-12-19 2011-01-13 Srinivas Kavuri Systems and Methods for Migrating Components in a Hierarchical Storage Network
CN100423491C (en) * 2006-03-08 2008-10-01 杭州华三通信技术有限公司 Virtual network storing system and network storing equipment thereof
US7529887B1 (en) * 2006-03-30 2009-05-05 Emc Corporation Methods, systems, and computer program products for postponing bitmap transfers and eliminating configuration information transfers during trespass operations in a disk array environment
JP4530372B2 (en) * 2006-06-29 2010-08-25 シーゲイト テクノロジー エルエルシー Widespread distributed storage system
JP4902403B2 (en) * 2006-10-30 2012-03-21 株式会社日立製作所 Information system and data transfer method
JP5244332B2 (en) * 2006-10-30 2013-07-24 株式会社日立製作所 Information system, data transfer method, and data protection method
JP5020601B2 (en) * 2006-11-10 2012-09-05 株式会社日立製作所 Access environment construction system and method
JP5090022B2 (en) * 2007-03-12 2012-12-05 株式会社日立製作所 Computer system, access control method, and management computer
JP4744480B2 (en) 2007-05-30 2011-08-10 株式会社日立製作所 Virtual computer system
US7904743B2 (en) * 2007-08-27 2011-03-08 International Business Machines Corporation Propagation by a controller of reservation made by a host for remote storage
JP4898609B2 (en) * 2007-09-11 2012-03-21 株式会社日立製作所 Storage device, data recovery method, and computer system
JP2009104421A (en) * 2007-10-23 2009-05-14 Hitachi Ltd Storage access device
US8060710B1 (en) * 2007-12-12 2011-11-15 Emc Corporation Non-disruptive migration using device identity spoofing and passive/active ORS pull sessions
US7979652B1 (en) 2007-12-20 2011-07-12 Amazon Technologies, Inc. System and method for M-synchronous replication
JP4633837B2 (en) * 2008-01-22 2011-02-16 富士通株式会社 Address distribution system, method and program therefor
US8548956B2 (en) 2008-02-28 2013-10-01 Mcafee, Inc. Automated computing appliance cloning or migration
JP5026309B2 (en) * 2008-03-06 2012-09-12 株式会社日立製作所 Backup data management system and backup data management method
JP2009230239A (en) * 2008-03-19 2009-10-08 Hitachi Ltd Data migration method for tape device and tape management system
JP2009251972A (en) * 2008-04-07 2009-10-29 Nec Corp Storage system
JP5072692B2 (en) * 2008-04-07 2012-11-14 株式会社日立製作所 Storage system with multiple storage system modules
US8090907B2 (en) * 2008-07-09 2012-01-03 International Business Machines Corporation Method for migration of synchronous remote copy service to a virtualization appliance
JP5015351B2 (en) * 2008-08-08 2012-08-29 アマゾン テクノロジーズ インコーポレイテッド Realization of reliable access to non-local block data storage by executing programs
JP2010049634A (en) * 2008-08-25 2010-03-04 Hitachi Ltd Storage system, and data migration method in storage system
US9207984B2 (en) 2009-03-31 2015-12-08 Amazon Technologies, Inc. Monitoring and automatic scaling of data volumes
US8713060B2 (en) 2009-03-31 2014-04-29 Amazon Technologies, Inc. Control service for relational data management
US8332365B2 (en) 2009-03-31 2012-12-11 Amazon Technologies, Inc. Cloning and recovery of data volumes
US9705888B2 (en) 2009-03-31 2017-07-11 Amazon Technologies, Inc. Managing security groups for data instances
US9135283B2 (en) 2009-10-07 2015-09-15 Amazon Technologies, Inc. Self-service configuration for data environment
US8335765B2 (en) * 2009-10-26 2012-12-18 Amazon Technologies, Inc. Provisioning and managing replicated data instances
US8676753B2 (en) 2009-10-26 2014-03-18 Amazon Technologies, Inc. Monitoring of replicated data instances
US8751878B1 (en) * 2010-03-30 2014-06-10 Emc Corporation Automatic failover during online data migration
US8495019B2 (en) 2011-03-08 2013-07-23 Ca, Inc. System and method for providing assured recovery and replication
US8819374B1 (en) * 2011-06-15 2014-08-26 Emc Corporation Techniques for performing data migration
US9811272B1 (en) * 2011-12-28 2017-11-07 EMC IP Holding Company LLC Four site data replication using host based active/active model
US9229656B1 (en) * 2012-06-28 2016-01-05 Emc Corporation Managing settings and queries in host-based data migration
US8954783B2 (en) 2012-06-28 2015-02-10 Microsoft Technology Licensing, Llc Two-tier failover service for data disaster recovery
CN103631532B (en) * 2012-08-29 2016-06-15 International Business Machines Corporation Method and device for accessing data in a data storage system
US9098466B2 (en) * 2012-10-29 2015-08-04 International Business Machines Corporation Switching between mirrored volumes
US8904133B1 (en) 2012-12-03 2014-12-02 Hitachi, Ltd. Storage apparatus and storage apparatus migration method
US10379988B2 (en) 2012-12-21 2019-08-13 Commvault Systems, Inc. Systems and methods for performance monitoring
US9594822B1 (en) * 2013-03-13 2017-03-14 EMC IP Holding Company LLC Method and apparatus for bandwidth management in a metro cluster environment
US9697082B2 (en) 2013-03-14 2017-07-04 Hitachi, Ltd. Method and apparatus of disaster recovery virtualization
US9983992B2 (en) * 2013-04-30 2018-05-29 VMware, Inc. Trim support for a solid-state drive in a virtualized environment
US9031910B2 (en) 2013-06-24 2015-05-12 Sap Se System and method for maintaining a cluster setup
US9378219B1 (en) * 2013-09-30 2016-06-28 Emc Corporation Metro-cluster based on synchronous replication of virtualized storage processors
US10142424B2 (en) * 2013-10-16 2018-11-27 Empire Technology Development Llc Two-level cloud system migration
US9880777B1 (en) 2013-12-23 2018-01-30 EMC IP Holding Company LLC Embedded synchronous replication for block and file objects
US9489275B2 (en) * 2014-10-02 2016-11-08 Netapp, Inc. Techniques for error handling in parallel splitting of storage commands
US20160098331A1 (en) * 2014-10-07 2016-04-07 Netapp, Inc. Methods for facilitating high availability in virtualized cloud environments and devices thereof
US10089307B2 (en) * 2014-12-31 2018-10-02 International Business Machines Corporation Scalable distributed data store
JP2016143166A (en) * 2015-01-30 2016-08-08 Fujitsu Ltd Control apparatus, storage system, and control program
JP6000391B2 (en) * 2015-03-18 2016-09-28 Hitachi Ltd Storage system data migration method
US10275320B2 (en) 2015-06-26 2019-04-30 Commvault Systems, Inc. Incrementally accumulating in-process performance data and hierarchical reporting thereof for a data stream in a secondary copy operation
US10176036B2 (en) 2015-10-29 2019-01-08 Commvault Systems, Inc. Monitoring, diagnosing, and repairing a management database in a data storage management system
US11226985B2 (en) 2015-12-15 2022-01-18 Microsoft Technology Licensing, Llc Replication of structured data records among partitioned data storage spaces
US10235406B2 (en) 2015-12-15 2019-03-19 Microsoft Technology Licensing, Llc Reminder processing of structured data records among partitioned data storage spaces
US10599676B2 (en) 2015-12-15 2020-03-24 Microsoft Technology Licensing, Llc Replication control among redundant data centers
US10248709B2 (en) 2015-12-15 2019-04-02 Microsoft Technology Licensing, Llc Promoted properties in relational structured data
JP6668733B2 (en) * 2015-12-15 2020-03-18 Fujitsu Ltd Control device, management device, storage system, control program, management program, control method, and management method
US10223222B2 (en) * 2015-12-21 2019-03-05 International Business Machines Corporation Storage system-based replication for disaster recovery in virtualized environments
US10761767B2 (en) * 2016-07-12 2020-09-01 Hitachi, Ltd. Computer system and method for controlling storage apparatus that has replication direction from first logical device (in first storage) to second logical device (in second storage) and from said second logical device to third logical device (in said second storage), wherein said replication direction is reversed when second computer takes over for first computer
US10416905B2 (en) * 2017-02-09 2019-09-17 Hewlett Packard Enterprise Development Lp Modifying membership of replication groups via journal operations
US20180246648A1 (en) * 2017-02-28 2018-08-30 Dell Products L.P. Continuous disaster protection for migrated volumes of data
US11138226B2 (en) 2017-04-06 2021-10-05 Technion Research And Development Foundation Ltd. Moving replicated data in a cloud environment
US10664172B1 (en) 2017-12-18 2020-05-26 Seagate Technology Llc Coupling multiple controller chips to a host via a single host interface
US10990489B2 (en) 2017-12-28 2021-04-27 Amazon Technologies, Inc. Replication system with network failover
US10831591B2 (en) 2018-01-11 2020-11-10 Commvault Systems, Inc. Remedial action based on maintaining process awareness in data storage management
US10896094B2 (en) * 2018-07-23 2021-01-19 Mastercard International Incorporated Automated failover of data traffic routes for network-based applications
US10482911B1 (en) * 2018-08-13 2019-11-19 Seagate Technology Llc Multiple-actuator drive that provides duplication using multiple volumes
US10838625B2 (en) 2018-10-06 2020-11-17 International Business Machines Corporation I/O response times in data replication environments
US20200192572A1 (en) 2018-12-14 2020-06-18 Commvault Systems, Inc. Disk usage growth prediction system
US11048430B2 (en) 2019-04-12 2021-06-29 Netapp, Inc. Object store mirroring where during resync of two storage buckets, objects are transmitted to each of the two storage buckets
US11797312B2 (en) * 2021-02-26 2023-10-24 EMC IP Holding Company LLC Synchronization of multi-pathing settings across clustered nodes
CN113625944B (en) * 2021-06-25 2024-02-02 Jinan Inspur Data Technology Co Ltd Disaster recovery method and system based on multipath and remote replication technology
US11704289B2 (en) * 2021-07-06 2023-07-18 International Business Machines Corporation Role reversal of primary and secondary sites with minimal replication delay
US11438224B1 (en) 2022-01-14 2022-09-06 Bank Of America Corporation Systems and methods for synchronizing configurations across multiple computing clusters

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0736760A (en) * 1993-07-20 1995-02-07 Nippon Telegraph & Telephone Corp (NTT) Method for achieving high reliability for an external memory device provided with both a device multiplexing function and an inter-module sharing function
US5680640A (en) * 1995-09-01 1997-10-21 Emc Corporation System for migrating data by selecting a first or second transfer means based on the status of a data element map initialized to a predetermined state
JP4175764B2 (en) * 2000-05-18 2008-11-05 Hitachi Ltd Computer system
US6691245B1 (en) * 2000-10-10 2004-02-10 Lsi Logic Corporation Data storage with host-initiated synchronization and fail-over of remote mirror
JP4073161B2 (en) * 2000-12-06 2008-04-09 Hitachi Ltd Disk storage access system
WO2002065275A1 (en) 2001-01-11 2002-08-22 Yottayotta, Inc. Storage virtualization system and methods
US7231430B2 (en) * 2001-04-20 2007-06-12 Egenera, Inc. Reconfigurable, virtual processing system, cluster, network and method
US6832289B2 (en) 2001-10-11 2004-12-14 International Business Machines Corporation System and method for migrating data
US7263593B2 (en) 2002-11-25 2007-08-28 Hitachi, Ltd. Virtualization controller and data transfer control method
JP4438457B2 (en) 2003-05-28 2010-03-24 Hitachi Ltd Storage area allocation method, system, and virtualization apparatus

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6810491B1 (en) * 2000-10-12 2004-10-26 Hitachi America, Ltd. Method and apparatus for the takeover of primary volume in multiple volume mirroring
US20030131207A1 (en) * 2002-01-09 2003-07-10 Hiroshi Arakawa Virtualized volume snapshot formation method
US20040111485A1 (en) * 2002-12-09 2004-06-10 Yasuyuki Mimatsu Connecting device of storage device and computer system including the same connecting device
US20040260899A1 (en) * 2003-06-18 2004-12-23 Kern Robert Frederic Method, system, and program for handling a failover to a remote storage location
US20050102553A1 (en) * 2003-10-29 2005-05-12 Hewlett-Packard Development Company, L.P. System for preserving logical object integrity within a remote mirror cache
US20050278391A1 (en) * 2004-05-27 2005-12-15 Spear Gail A Fast reverse restore

Cited By (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7805565B1 (en) * 2005-12-23 2010-09-28 Oracle America, Inc. Virtualization metadata promotion
US7603581B2 (en) * 2006-03-17 2009-10-13 International Business Machines Corporation Remote copying of updates to primary and secondary storage locations subject to a copy relationship
US20070220223A1 (en) * 2006-03-17 2007-09-20 Boyd Kenneth W Remote copying of updates to primary and secondary storage locations subject to a copy relationship
US8127097B2 (en) 2006-04-18 2012-02-28 Hitachi, Ltd. Dual writing device and its control method
US8332603B2 (en) 2006-04-18 2012-12-11 Hitachi, Ltd. Dual writing device and its control method
US20110202718A1 (en) * 2006-04-18 2011-08-18 Nobuhiro Maki Dual writing device and its control method
US8595453B2 (en) 2006-10-30 2013-11-26 Hitachi, Ltd. Information system and data transfer method of information system
US20080104347A1 (en) * 2006-10-30 2008-05-01 Takashige Iwamura Information system and data transfer method of information system
US8832397B2 (en) 2006-10-30 2014-09-09 Hitachi, Ltd. Information system and data transfer method of information system
US7774545B2 (en) * 2006-12-01 2010-08-10 Lsi Corporation System and method of volume group creation based on an automatic drive selection scheme
US20080133831A1 (en) * 2006-12-01 2008-06-05 Lsi Logic Corporation System and method of volume group creation based on an automatic drive selection scheme
US20080228770A1 (en) * 2007-03-15 2008-09-18 Halcrow Michael A Method for Performing Recoverable Live Context Migration in a Stacked File System
US8612703B2 (en) * 2007-08-22 2013-12-17 Hitachi, Ltd. Storage system performing virtual volume backup and method thereof
US20110289292A1 (en) * 2007-08-22 2011-11-24 Nobuhiro Maki Storage system performing virtual volume backup and method thereof
US20090182996A1 (en) * 2008-01-14 2009-07-16 International Business Machines Corporation Methods and Computer Program Products for Swapping Synchronous Replication Secondaries from a Subchannel Set Other Than Zero to Subchannel Set Zero Using Dynamic I/O
US8307129B2 (en) 2008-01-14 2012-11-06 International Business Machines Corporation Methods and computer program products for swapping synchronous replication secondaries from a subchannel set other than zero to subchannel set zero using dynamic I/O
US7761610B2 (en) 2008-01-25 2010-07-20 International Business Machines Corporation Methods and computer program products for defining synchronous replication devices in a subchannel set other than subchannel set zero
US20090193292A1 (en) * 2008-01-25 2009-07-30 International Business Machines Corporation Methods And Computer Program Products For Defining Synchronous Replication Devices In A Subchannel Set Other Than Subchannel Set Zero
US8060777B2 (en) * 2008-04-28 2011-11-15 Hitachi, Ltd. Information system and I/O processing method
US20090271582A1 (en) * 2008-04-28 2009-10-29 Hitachi, Ltd. Information System and I/O Processing Method
US8595549B2 (en) 2008-04-28 2013-11-26 Hitachi, Ltd. Information system and I/O processing method
US8352783B2 (en) 2008-04-28 2013-01-08 Hitachi, Ltd. Information system and I/O processing method
US8914540B1 (en) * 2008-07-01 2014-12-16 Cisco Technology, Inc. Multi-fabric SAN based data migration
US20100023647A1 (en) * 2008-07-28 2010-01-28 International Business Machines Corporation Swapping pprc secondaries from a subchannel set other than zero to subchannel set zero using control block field manipulation
US8516173B2 (en) 2008-07-28 2013-08-20 International Business Machines Corporation Swapping PPRC secondaries from a subchannel set other than zero to subchannel set zero using control block field manipulation
US8806105B2 (en) * 2008-08-08 2014-08-12 Amazon Technologies, Inc. Managing access of multiple executing programs to non-local block data storage
US9529550B2 (en) 2008-08-08 2016-12-27 Amazon Technologies, Inc. Managing access of multiple executing programs to non-local block data storage
US11768609B2 (en) 2008-08-08 2023-09-26 Amazon Technologies, Inc. Managing access of multiple executing programs to nonlocal block data storage
US10824343B2 (en) 2008-08-08 2020-11-03 Amazon Technologies, Inc. Managing access of multiple executing programs to non-local block data storage
US9262273B2 (en) 2008-08-08 2016-02-16 Amazon Technologies, Inc. Providing executing programs with reliable access to non-local block data storage
US20120042142A1 (en) * 2008-08-08 2012-02-16 Amazon Technologies, Inc. Providing executing programs with reliable access to non-local block data storage
US8769186B2 (en) * 2008-08-08 2014-07-01 Amazon Technologies, Inc. Providing executing programs with reliable access to non-local block data storage
US20120060006A1 (en) * 2008-08-08 2012-03-08 Amazon Technologies, Inc. Managing access of multiple executing programs to non-local block data storage
US20100095066A1 (en) * 2008-09-23 2010-04-15 1060 Research Limited Method for caching resource representations in a contextual address space
US20100131950A1 (en) * 2008-11-27 2010-05-27 Hitachi, Ltd. Storage system and virtual interface management method
US8387044B2 (en) * 2008-11-27 2013-02-26 Hitachi, Ltd. Storage system and virtual interface management method using physical interface identifiers and virtual interface identifiers to facilitate setting of assignments between a host computer and a storage apparatus
US8341119B1 (en) * 2009-09-14 2012-12-25 Netapp, Inc. Flexible copies having different sub-types
US8429360B1 (en) * 2009-09-28 2013-04-23 Network Appliance, Inc. Method and system for efficient migration of a storage object between storage servers based on an ancestry of the storage object in a network storage system
US8392753B1 (en) * 2010-03-30 2013-03-05 Emc Corporation Automatic failover during online data migration
US20120303913A1 (en) * 2011-05-26 2012-11-29 International Business Machines Corporation Transparent file system migration to a new physical location
US9003149B2 (en) * 2011-05-26 2015-04-07 International Business Machines Corporation Transparent file system migration to a new physical location
CN103562879A (en) * 2011-05-26 2014-02-05 International Business Machines Corporation Transparent file system migration to a new physical location
US8806124B2 (en) * 2011-09-09 2014-08-12 Lsi Corporation Methods and structure for transferring ownership of a logical volume by transfer of native-format metadata in a clustered storage environment
US20130067163A1 (en) * 2011-09-09 2013-03-14 Vinu Velayudhan Methods and structure for transferring ownership of a logical volume by transfer of native-format metadata in a clustered storage environment
US20140082295A1 (en) * 2012-09-18 2014-03-20 Netapp, Inc. Detection of out-of-band access to a cached file system
US9032172B2 (en) 2013-02-11 2015-05-12 International Business Machines Corporation Systems, methods and computer program products for selective copying of track data through peer-to-peer remote copy
US10021148B2 (en) 2013-02-11 2018-07-10 International Business Machines Corporation Selective copying of track data through peer-to-peer remote copy
US9361026B2 (en) 2013-02-11 2016-06-07 International Business Machines Corporation Selective copying of track data based on track data characteristics through map-mediated peer-to-peer remote copy
US9633038B2 (en) 2013-08-27 2017-04-25 Netapp, Inc. Detecting out-of-band (OOB) changes when replicating a source file system using an in-line system
US9552265B2 (en) 2014-03-28 2017-01-24 Fujitsu Limited Information processing apparatus and storage system
CN107329709A (en) * 2017-07-05 2017-11-07 Changsha Kaiya Electronic Technology Co Ltd Method for implementing a new virtual volume in storage virtualization

Also Published As

Publication number Publication date
US7058731B2 (en) 2006-06-06
US20060031594A1 (en) 2006-02-09
JP4751117B2 (en) 2011-08-17
JP2006048676A (en) 2006-02-16

Similar Documents

Publication Publication Date Title
US7058731B2 (en) Failover and data migration using data replication
US20210176513A1 (en) Storage virtual machine relocation
US20220350817A1 (en) Non-disruptive baseline and resynchronization of a synchronous replication relationship
US11144211B2 (en) Low overhead resynchronization snapshot creation and utilization
US7542987B2 (en) Automatic site failover
US6629264B1 (en) Controller-based remote copy system with logical unit grouping
US6643795B1 (en) Controller-based bi-directional remote copy system with storage site failover capability
EP2883147B1 (en) Synchronous local and cross-site failover in clustered storage systems
US8335899B1 (en) Active/active remote synchronous mirroring
US9098466B2 (en) Switching between mirrored volumes
US20030188218A1 (en) System and method for active-active data replication
US11416354B2 (en) Techniques for providing intersite high availability of data nodes in a virtual cluster
JP2005267327A (en) Storage system
CN113849136B (en) Automatic FC block storage processing method and system based on domestic platform
US10552060B1 (en) Using replication facility to provide secure host network connectivity wherein a first logical device is used exclusively for sending messages from host to second host
US12086159B1 (en) Techniques for adding and removing storage objects from groups
US20240346045A1 (en) Techniques for implementing group modifications

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION