US20050022213A1 - Method and apparatus for synchronizing applications for data recovery using storage based journaling - Google Patents
Method and apparatus for synchronizing applications for data recovery using storage based journaling
- Publication number
- US20050022213A1 (application US 10/627,507)
- Authority
- US
- United States
- Prior art keywords
- marker
- journal
- data store
- data
- request
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1471—Saving, restoring, recovering or retrying involving logging of persistent data for recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/84—Using snapshots, i.e. a logical point-in-time copy of the data
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99951—File or database maintenance
- Y10S707/99952—Coherency, e.g. same view to multiple users
- Y10S707/99954—Version management
Definitions
- the present invention is related to computer storage and in particular to the recovery of data.
- Journaling is a backup and restore technique commonly used in database systems. An image of the data to be backed up is taken. Then, as changes are made to the data, a journal of the changes is maintained. Recovery of data is accomplished by applying the journal to an appropriate image to recover data at any point in time.
- Typical database systems, such as Oracle, can perform journaling.
- Recovering data at any point in time addresses the following types of administrative requirements. For example, a typical request might be, “I deleted a file by mistake at around 10:00 am yesterday. I have to recover the file just before it was deleted.”
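The image-plus-journal technique described above can be sketched in a few lines of Python. This is a minimal illustration only, not the patent's implementation; the `Journal` class, the dict-based volume, and the integer timestamps are all assumptions made for the example.

```python
import copy

# Minimal sketch of storage-based journaling: take a base image of the
# data, journal every subsequent write, and recover a point-in-time state
# by replaying journal entries onto the image.

class Journal:
    def __init__(self, volume):
        self.image = copy.deepcopy(volume)   # base image (the "snapshot")
        self.entries = []                    # ordered log of writes

    def write(self, volume, key, value, time):
        volume[key] = value                          # apply to live data
        self.entries.append((time, key, value))      # journal the write

    def recover(self, target_time):
        state = copy.deepcopy(self.image)
        for t, key, value in self.entries:
            if t > target_time:              # stop before the unwanted write
                break
            state[key] = value
        return state

vol = {"file_a": "v1"}
jnl = Journal(vol)
jnl.write(vol, "file_a", "v2", time=1)
jnl.write(vol, "file_a", None, time=2)       # "deleted a file by mistake"
restored = jnl.recover(target_time=1)        # state just before the delete
```

The administrator's request in the example above ("recover the file just before it was deleted") corresponds to choosing `target_time=1`, the last moment before the erroneous write.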
- a storage system exposes an application programming interface (API) for application programs running on a host.
- the API allows execution of program code to create marker journal entries.
- the API also provides for retrieval of marker journals, and recovery operations.
- Another aspect of the invention is the monitoring of operations being performed on a data store and the creation of marker journal entries upon detection of one or more predetermined operations.
- Still another aspect of the invention is the retrieval of marker journal entries to facilitate recovery of a desired data state.
- FIG. 1 is a high level generalized block diagram of an illustrative embodiment of the present invention
- FIG. 2 is a generalized illustration of an illustrative embodiment of a data structure for storing journal entries in accordance with the present invention
- FIG. 3 is a generalized illustration of an illustrative embodiment of a data structure for managing the snapshot volumes and the journal entry volumes in accordance with the present invention
- FIG. 4 is a high level flow diagram highlighting the processing between the recovery manager and the controller in the storage system
- FIG. 5 illustrates the relationship between a snapshot and a plurality of journal entries
- FIG. 5A illustrates the relationship among a plurality of snapshots and a plurality of journal entries
- FIG. 6 is a high level illustration of the data flow when an overflow condition arises
- FIG. 7 is a high level flow chart highlighting an aspect of the controller in the storage system to handle an overflow condition
- FIG. 7A illustrates an alternative to a processing step shown in FIG. 7 ;
- FIG. 8 illustrates the use of marker journal entries
- FIG. 9 shows a SCSI-based implementation of the embodiment shown in FIG. 8 ;
- FIG. 10 shows a block diagram of the API's according to another aspect of the invention.
- FIG. 11 is a flowchart highlighting the steps for a recovery operation.
- FIG. 1 is a high level generalized block diagram of an illustrative embodiment of a backup and recovery system according to the present invention.
- a snapshot is taken for production data volumes (DVOL) 101 .
- the term “snapshot” in this context conventionally refers to a data image of the data volume at a given point in time.
- the snapshot can be of the entire data volume, or some portion or portions of the data volume(s).
- a journal entry is made for every write operation issued from the host to the data volumes. As will be discussed below, by applying a series of journal entries to an appropriate snapshot, data can be recovered at any point in time.
- the backup and recovery system shown in FIG. 1 includes at least one storage system 100 . Though not shown, one of ordinary skill can appreciate that the storage system includes suitable processor(s), memory, and control circuitry to perform I/O between a host 110 and its storage media (e.g., disks). The backup and recovery system also requires at least one host 110 . A suitable communication path 130 is provided between the host and the storage system.
- the host 110 typically will have one or more user applications (APP) 112 executing on it. These applications will read and/or write data to storage media contained in the data volumes 101 of storage system 100 . Thus, applications 112 and the data volumes 101 represent the target resources to be protected. It can be appreciated that data used by the user applications can be stored in one or more data volumes.
- APP user applications
- journal group JNLG 102
- the data volumes 101 are organized into the journal group.
- a journal group is the smallest unit of data volumes where journaling of the write operations from the host 110 to the data volumes is guaranteed.
- the associated journal records the order of write operations from the host to the data volumes in proper sequence.
- the journal data produced by the journaling activity can be stored in one or more journal volumes (JVOL) 106 .
- the host 110 also includes a recovery manager (RM) 111 .
- RM recovery manager
- This component provides high level coordination of the backup and recovery operations. The recovery manager is discussed further below.
- the storage system 100 provides a snapshot (SS) 105 of the data volumes comprising a journal group.
- the snapshot 105 is representative of the data volumes 101 in the journal group 102 at the point in time that the snapshot was taken.
- Conventional methods are known for producing the snapshot image.
- One or more snapshot volumes (SVOL) 107 are provided in the storage system which contain the snapshot data.
- a snapshot can be contained in one or more snapshot volumes.
- the disclosed embodiment illustrates separate storage components for the journal data and the snapshot data, it can be appreciated that other implementations can provide a single storage component for storing the journal data and the snapshot data.
- a management table (MT) 108 is provided to store the information relating to the journal group 102 , the snapshot 105 , and the journal volume(s) 106 .
- FIG. 3 and the accompanying discussion below reveal additional detail about the management table.
- a controller component 140 is also provided which coordinates the journaling of write operations and snapshots of the data volumes, and the corresponding movement of data among the different storage components 101 , 106 , 107 . It can be appreciated that the controller component is a logical representation of a physical implementation which may comprise one or more sub-components distributed within the storage system 100 .
- FIG. 2 shows the data used in an implementation of the journal.
- a journal is generated in response.
- the journal comprises a Journal Header 219 and Journal Data 225 .
- the Journal Header 219 contains information about its corresponding Journal Data 225 .
- the Journal Data 225 comprises the data (write data) that is the subject of the write operation.
- the Journal Header 219 comprises an offset number (JH_OFS) 211 .
- the offset number identifies a particular data volume 101 in the journal group 102 .
- the data volumes are ordered as the 0th data volume, the 1st data volume, the 2nd data volume and so on.
- the offset numbers might be 0, 1, 2, etc.
- a starting address in the data volume (identified by the offset number 211 ) to which the write data is to be written is stored to a field in the Journal Header 219 to contain an address (JH_ADR) 212 .
- the address can be represented as a block number (LBA, Logical Block Address).
- a field in the Journal Header 219 stores a data length (JH_LEN) 213 , which represents the data length of the write data. Typically it is represented as a number of blocks.
- a field in the Journal Header 219 stores the write time (JH_TIME) 214 , which represents the time when the write request arrives at the storage system 100 .
- the write time can include the calendar date, hours, minutes, seconds and even milliseconds. This time can be provided by the disk controller 140 or by the host 110 .
- two or more mainframe hosts share a timer and can provide the time when a write command is issued.
- a sequence number (JH_SEQ) 215 is assigned to each write request.
- the sequence number is stored in a field in the Journal Header 219 . Every sequence number within a given journal group 102 is unique. The sequence number is assigned to a journal entry when it is created.
- a journal volume identifier (JH_JVOL) 216 is also stored in the Journal Header 219 .
- the volume identifier identifies the journal volume 106 associated with the Journal Data 225 .
- the identifier is indicative of the journal volume containing the Journal Data. It is noted that the Journal Data can be stored in a journal volume that is different from the journal volume which contains the Journal Header.
- a journal data address (JH_JADR) 217 stored in the Journal Header 219 contains the beginning address of the Journal Data 225 in the associated journal volume 106 that contains the Journal Data.
- FIG. 2 shows that the journal volume 106 comprises two data areas: a Journal Header Area 210 and a Journal Data Area 220 .
- the Journal Header Area 210 contains only Journal Headers 219
- Journal Data Area 220 contains only Journal Data 225 .
- the Journal Header is a fixed size data structure.
- a Journal Header is allocated sequentially from the beginning of the Journal Header Area. This sequential organization corresponds to the chronological order of the journal entries.
- data is provided that points to the first journal entry in the list, which represents the “oldest” journal entry. It is typically necessary to find the Journal Header 219 for a given sequence number (as stored in the sequence number field 215 ) or for a given write time (as stored in the time field 214 ).
- a journal type field (JH_TYPE) 218 identifies the type of journal entry. The value contained in this field indicates a type of MARKER or INTERNAL. If the type is MARKER, then the journal is a marker journal. The purpose of a MARKER type of journal will be discussed below. If the type is INTERNAL, then the journal records the data that is the subject of the write operation issued from the host 110 .
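The Journal Header fields enumerated above can be collected into a single sketch. The field names follow the patent's labels (JH_OFS 211 through JH_TYPE 218); the dataclass itself and the example values are illustrative assumptions, not the patent's storage layout.

```python
from dataclasses import dataclass

# Sketch of the Journal Header 219 of FIG. 2. Each field mirrors one of
# the labels described in the text; types are illustrative.

@dataclass
class JournalHeader:
    JH_OFS: int    # offset of the target data volume within the journal group
    JH_ADR: int    # starting address (LBA) in that data volume for the write
    JH_LEN: int    # length of the write data, typically in blocks
    JH_TIME: float # time the write request arrived at the storage system
    JH_SEQ: int    # sequence number, unique within the journal group
    JH_JVOL: int   # journal volume holding the corresponding Journal Data
    JH_JADR: int   # beginning address of the Journal Data in that volume
    JH_TYPE: str   # "INTERNAL" for a journaled write, "MARKER" for a marker

hdr = JournalHeader(JH_OFS=0, JH_ADR=1024, JH_LEN=8, JH_TIME=0.0,
                    JH_SEQ=42, JH_JVOL=0, JH_JADR=4096, JH_TYPE="INTERNAL")
```

Note that JH_JVOL and JH_JADR together let the header live in a different journal volume than its data, as the text points out.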
- Journal Header 219 and Journal Data 225 are contained in chronological order in their respective areas in the journal volume 106 .
- the order in which the Journal Header and the Journal Data are stored in the journal volume is the same order as the assigned sequence number.
- an aspect of the present invention is that the journal information 219 , 225 wrap within their respective areas 210 , 220 .
- FIG. 3 shows detail about the management table 108 ( FIG. 1 ).
- the management table maintains configuration information about a journal group 102 and the relationship between the journal group and its associated journal volume(s) 106 and snapshot image 105 .
- the management table 300 shown in FIG. 3 illustrates an example management table and its contents.
- the management table stores a journal group ID (GRID) 310 which identifies a particular journal group 102 in a storage system 100 .
- GRID journal group ID
- GRNAME journal group name
- a journal attribute (GRATTR) 312 is associated with the journal group 102 .
- two attributes are defined: MASTER and RESTORE.
- the MASTER attribute indicates the journal group is being journaled.
- the RESTORE attribute indicates that the journal group is being restored from a journal.
- a journal status (GRSTS) 315 is associated with the journal group 102 . There are two statuses: ACTIVE and INACTIVE.
- the management table includes a field to hold a sequence counter (SEQ) 313 .
- This counter serves as the source of sequence numbers used in the Journal Header 219 .
- SEQ sequence counter
- the number (NUM_DVOL) 314 of data volumes 101 contained in a given journal group 102 is stored in the management table.
- a data volume list (DVOL_LIST) 320 lists the data volumes in a journal group.
- DVOL_LIST is a pointer to the first entry of a data structure which holds the data volume information. This can be seen in FIG. 3 .
- Each data volume information comprises an offset number (DVOL_OFFS) 321 .
- DVOL_OFFS offset number
- a data volume identifier (DVOL_ID) 322 uniquely identifies a data volume within the entire storage system 100 .
- a pointer (DVOL_NEXT) 324 points to the data structure holding information for the next data volume in the journal group; it is a NULL value otherwise.
- the management table includes a field to store the number of journal volumes (NUM_JVOL) 330 that are being used to contain the data (journal header and journal data) associated with a journal group 102 .
- the Journal Header Area 210 contains the Journal Headers 219 for each journal; likewise for the Journal Data components 225 .
- an aspect of the invention is that the data areas 210 , 220 wrap. This allows for journaling to continue despite the fact that there is limited space in each data area.
- the management table includes fields to store pointers to different parts of the data areas 210 , 220 to facilitate wrapping. Fields are provided to identify where the next journal entry is to be stored.
- a field (JI_HEAD_VOL) 331 identifies the journal volume 106 that contains the Journal Header Area 210 which will store the next new Journal Header 219 .
- a field (JI_HEAD_ADR) 332 identifies an address on the journal volume of the location in the Journal Header Area where the next Journal Header will be stored.
- the journal volume that contains the Journal Data Area 220 into which the journal data will be stored is identified by information in a field (JI_DATA_VOL) 335 .
- A field (JI_DATA_ADR) 336 identifies the specific address in the Journal Data Area where the data will be stored. Thus, the next journal entry to be written is “pointed” to by the information contained in the “JI_” fields 331 , 332 , 335 , 336 .
- the management table also includes fields which identify the “oldest” journal entry. The use of this information will be described below.
- a field (JO_HEAD_VOL) 333 identifies the journal volume which stores the Journal Header Area 210 that contains the oldest Journal Header 219 .
- a field (JO_HEAD_ADR) 334 identifies the address within the Journal Header Area of the location of the journal header of the oldest journal.
- a field (JO_DATA_VOL) 337 identifies the journal volume which stores the Journal Data Area 220 that contains the data of the oldest journal. The location of the data in the Journal Data Area is stored in a field (JO_DATA_ADR) 338 .
- the management table includes a list of journal volumes (JVOL_LIST) 340 associated with a particular journal group 102 .
- JVOL_LIST is a pointer to a data structure of information for journal volumes.
- each data structure comprises an offset number (JVOL_OFS) 341 which identifies a particular journal volume 106 associated with a given journal group 102 .
- JVOL_OFS offset number
- a journal volume identifier (JVOL_ID) 342 uniquely identifies the journal volume within the storage system 100 .
- a pointer (JVOL_NEXT) 344 points to the next data structure entry pertaining to the next journal volume associated with the journal group; it is a NULL value otherwise.
- the management table includes a list (SS_LIST) 350 of snapshot images 105 associated with a given journal group 102 .
- SS_LIST is a pointer to snapshot information data structures, as indicated in FIG. 3 .
- Each snapshot information data structure includes a sequence number (SS_SEQ) 351 that is assigned when the snapshot is taken. As discussed above, the number comes from the sequence counter 313 .
- a time value (SS_TIME) 352 indicates the time when the snapshot was taken.
- a status (SS_STS) 358 is associated with each snapshot; valid values include VALID and INVALID.
- a pointer (SS_NEXT) 353 points to the next snapshot information data structure; it is a NULL value otherwise.
- Each snapshot information data structure also includes a list of snapshot volumes 107 ( FIG. 1 ) used to store the snapshot images 105 .
- a pointer (SVOL_LIST) 354 to a snapshot volume information data structure is stored in each snapshot information data structure.
- Each snapshot volume information data structure includes an offset number (SVOL_OFFS) 355 which identifies a snapshot volume that contains at least a portion of the snapshot image. It is possible that a snapshot image will be segmented or otherwise partitioned and stored in more than one snapshot volume.
- the offset identifies the i-th snapshot volume which contains a portion (segment, partition, etc.) of the snapshot image.
- Each snapshot volume information data structure further includes a snapshot volume identifier (SVOL_ID) 356 that uniquely identifies the snapshot volume in the storage system 100 .
- a pointer (SVOL_NEXT) 357 points to the next snapshot volume information data structure for a given snapshot image.
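The management table of FIG. 3, as described above, can be sketched as follows. This is an illustrative model only: the patent's linked lists (DVOL_NEXT, JVOL_NEXT, SS_NEXT, SVOL_NEXT) are flattened into Python lists, and the `next_seq` helper is an assumed name for the counter-increment behavior described for SEQ 313.

```python
from dataclasses import dataclass, field
from typing import List

# Sketch of the management table 300 (FIG. 3) and its snapshot list.

@dataclass
class SnapshotInfo:
    SS_SEQ: int          # sequence number assigned when the snapshot is taken
    SS_TIME: float       # time the snapshot was taken
    SS_STS: str = "VALID"                               # VALID or INVALID
    SVOL_IDS: List[int] = field(default_factory=list)   # snapshot volumes used

@dataclass
class ManagementTable:
    GRID: int                   # journal group ID
    GRATTR: str = "MASTER"      # MASTER (journaling) or RESTORE (restoring)
    GRSTS: str = "ACTIVE"       # ACTIVE or INACTIVE
    SEQ: int = 0                # shared sequence counter (journals + snapshots)
    DVOL_LIST: List[int] = field(default_factory=list)  # data volume IDs
    JVOL_LIST: List[int] = field(default_factory=list)  # journal volume IDs
    SS_LIST: List[SnapshotInfo] = field(default_factory=list)

    def next_seq(self) -> int:
        """Hand out the next sequence number, as done for JH_SEQ / SS_SEQ."""
        n = self.SEQ
        self.SEQ += 1
        return n

mt = ManagementTable(GRID=1, DVOL_LIST=[100, 101], JVOL_LIST=[200])
s0 = mt.next_seq()        # would be copied into a journal entry's JH_SEQ
mt.SS_LIST.append(SnapshotInfo(SS_SEQ=mt.next_seq(), SS_TIME=10.0))
```

The single `SEQ` counter shared by journal entries and snapshots is the point of the sketch; it is what later makes "which entries apply to which snapshot" a simple numeric comparison.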
- FIG. 4 shows a flowchart highlighting the processing performed by the recovery manager 111 and Storage System 100 to initiate backup processing in accordance with the illustrative embodiment of the invention as shown in the figures.
- a single sequence of numbers (SEQ) 313 is associated with each of one or more snapshots and journal entries, as they are created. The purpose of associating the same sequence of numbers to both the snapshots and the journal entries will be discussed below.
- the recovery manager 111 might define, in a step 410 , a journal group (JNLG) 102 if one has not already been defined. As indicated in FIG. 1 , this may include identifying one or more data volumes (DVOL) 101 for which journaling is performed, and identifying one or more journal volumes (JVOL) 106 which are used to store the journal-related information.
- the recovery manager performs a suitable sequence of interactions with the storage system 100 to accomplish this.
- the storage system may create a management table 108 ( FIG. 1 ), incorporating the various information shown in the table detail 300 illustrated in FIG. 3 .
- the process includes initializing the JVOL_LIST 340 to list the journal volumes which comprise the journal group 102 . Likewise, the list of data volumes DVOL_LIST 320 is created.
- the fields which identify the next journal entry (or in this case where the table is first created, the first journal entry) are initialized.
- JI_HEAD_VOL 331 might identify the first in the list of journal volumes and JI_HEAD_ADR 332 might point to the first entry in the Journal Header Area 210 located in the first journal volume.
- JI_DATA_VOL 335 might identify the first in the list of journal volumes and JI_DATA_ADR 336 might point to the beginning of the Journal Data Area 220 in the first journal volume.
- the header and the data areas 210 , 220 may reside on different journal volumes, so JI_DATA_VOL might identify a journal volume different from the first journal volume.
- In a step 420 , the recovery manager 111 will initiate the journaling process. Suitable communication(s) are made to the storage system 100 to perform journaling. In a step 425 , the storage system will make a journal entry for each write operation that issues from the host 110 .
- making a journal entry includes, among other things, identifying the location for the next journal entry.
- the fields JI_HEAD_VOL 331 and JI_HEAD_ADR 332 identify the journal volume 106 and the location in the Journal Header Area 210 of the next Journal Header 219 .
- the sequence counter (SEQ) 313 from the management table is copied to (associated with) the JH_SEQ 215 field of the next header.
- the sequence counter is then incremented and stored back to the management table.
- the sequence counter can be incremented first, copied to JH_SEQ, and then stored back to the management table.
- the fields JI_DATA_VOL 335 and JI_DATA_ADR 336 in the management table identify the journal volume and the beginning of the Journal Data Area 220 for storing the data associated with the write operation.
- the JI_DATA_VOL and JI_DATA_ADR fields are copied to JH_JVOL 216 and to JH_JADR 217 , respectively, of the Journal Header, thus providing the Journal Header with a pointer to its corresponding Journal Data.
- the data of the write operation is stored.
- JI_HEAD_VOL 331 and JI_HEAD_ADR 332 fields are updated to point to the next Journal Header 219 for the next journal entry. This involves taking the next contiguous Journal Header entry in the Journal Header Area 210 .
- the JI_DATA_ADR field (and perhaps JI_DATA_VOL field) is updated to reflect the beginning of the Journal Data Area for the next journal entry. This involves advancing to the next available location in the Journal Data Area.
- These fields therefore can be viewed as pointing to a list of journal entries. Journal entries in the list are linked together by virtue of the sequential organization of the Journal Headers 219 in the Journal Header Area 210 .
- the Journal Header 219 for the next journal entry wraps to the beginning of the Journal Header Area.
- likewise, the Journal Data 225 for the next journal entry wraps to the beginning of the Journal Data Area 220 when the end of that area is reached.
- the present invention provides for a procedure to free up entries in the journal volume 106 . This aspect of the invention is discussed below.
- the JO_HEAD_VOL field 333 , JO_HEAD_ADR field 334 , JO_DATA_VOL field 337 , and the JO_DATA_ADR field 338 are set to the contents of their corresponding “JI_” fields.
- the “JO_” fields point to the oldest journal entry.
- the “JO_” fields do not advance while the “JI_” fields do advance. Update of the “JO_” fields is discussed below.
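The entry-creation bookkeeping described in steps 420-425 (copy SEQ into JH_SEQ, increment the counter, store the header, advance the "JI_" pointer with wrap while the "JO_" pointer stays put) can be sketched as follows. This is an assumed, simplified model: the dict-based state, the `HDR_SLOTS` constant, and the `journal_write` name are illustrative, not the patent's structures.

```python
# Sketch of making journal entries. The Journal Header Area is modeled as
# a small ring of HDR_SLOTS slots; only the fields needed to show the
# pointer arithmetic are kept.

HDR_SLOTS = 4          # capacity of the Journal Header Area (illustrative)

state = {
    "SEQ": 0,          # sequence counter from the management table
    "JI_HEAD_ADR": 0,  # slot where the next Journal Header goes
    "JO_HEAD_ADR": 0,  # oldest Journal Header; does not advance on writes
    "headers": {},     # Journal Header Area, keyed by slot number
}

def journal_write(state, write_data, write_time):
    seq = state["SEQ"]                 # copy SEQ into JH_SEQ ...
    state["SEQ"] = seq + 1             # ... then increment and store back
    slot = state["JI_HEAD_ADR"]
    state["headers"][slot] = {"JH_SEQ": seq, "JH_TIME": write_time,
                              "JH_TYPE": "INTERNAL", "data": write_data}
    # advance to the next contiguous header slot, wrapping at the area's end
    state["JI_HEAD_ADR"] = (slot + 1) % HDR_SLOTS
    return seq

for t, d in enumerate([b"aa", b"bb", b"cc"]):
    journal_write(state, d, t)
```

After three writes the "JI_" pointer has advanced three slots while the "JO_" pointer still marks the oldest entry, matching the description above.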
- when the journaling process has been initiated, all write operations issuing from the host are journaled. Then in a step 430 , the recovery manager 111 will initiate taking a snapshot of the data volumes 101 .
- the storage system 100 receives an indication from the recovery manager to take a snapshot.
- the storage system performs the process of taking a snapshot of the data volumes. Among other things, this includes accessing SS_LIST 350 from the management table ( FIG. 3 ). A suitable amount of memory is allocated for fields 351 - 354 to represent the next snapshot.
- the sequence counter (SEQ) 313 is copied to the field SS_SEQ 351 and incremented, in the manner discussed above for JH_SEQ 215 .
- a sequence of numbers is produced from SEQ 313 , each number in the sequence being assigned either to a journal entry or a snapshot entry.
- the snapshot is stored in one (or more) snapshot volumes (SVOL) 107 .
- a suitable amount of memory is allocated for fields 355 - 357 .
- the information relating to the SVOLs for storing the snapshot are then stored into the fields 355 - 357 . If additional volumes are required to store the snapshot, then additional memory is allocated for fields 355 - 357 .
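The key point of steps 430-435 is that snapshots and journal entries draw numbers from the same SEQ 313 counter, so their relative order is fully determined. A small sketch (the `assign` helper and the `timeline` list are illustrative assumptions):

```python
# Sketch: one shared sequence counter for journal entries and snapshots.
# Whatever is created next, journal entry or snapshot, takes the next
# number, producing a single totally ordered sequence (cf. FIG. 5).

SEQ = 0
timeline = []                      # (kind, seq) in creation order

def assign(kind):
    global SEQ
    n, SEQ = SEQ, SEQ + 1
    timeline.append((kind, n))
    return n

assign("journal"); assign("journal")       # SEQ 0, SEQ 1: before the snapshot
ss_seq = assign("snapshot")                # the snapshot receives SS_SEQ = 2
assign("journal"); assign("journal")       # SEQ 3 and higher: journaled writes

# entries applicable to the snapshot are exactly those with seq > SS_SEQ
applicable = [s for k, s in timeline if k == "journal" and s > ss_seq]
```

This is why, as FIG. 5 illustrates, entries SEQ 0 and SEQ 1 can be discarded while SEQ 3 and higher are applied during recovery.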
- FIG. 5 illustrates the relationship between journal entries and snapshots.
- the snapshot 520 represents the first snapshot image of the data volumes 101 belonging to a journal group 102 .
- journal entries ( 510 ) having sequence numbers SEQ 0 and SEQ 1 have been made, and represent journal entries for two write operations. These entries show that journaling has been initiated at a time prior to the snapshot being taken (step 420 ).
- the recovery manager 111 initiates the taking of a snapshot, and since journaling has been initiated, any write operations occurring during the taking of the snapshot are journaled.
- the write operations 500 associated with the sequence numbers SEQ 3 and higher show that those operations are being journaled.
- the journal entries identified by sequence numbers SEQ 0 and SEQ 1 can be discarded or otherwise ignored.
- Recovering data typically requires recovering the data state of at least a portion of the data volumes 101 at a specific time. Generally, this is accomplished by applying one or more journal entries to a snapshot that was taken earlier in time relative to the journal entries.
- the sequence number SEQ 313 is incremented each time it is assigned to a journal entry or to a snapshot. Therefore, it is a simple matter to identify which journal entries can be applied to a selected snapshot; i.e., those journal entries whose associated sequence numbers (JH_SEQ, 215 ) are greater than the sequence number (SS_SEQ, 351 ) associated with the selected snapshot.
- the administrator may specify some point in time, presumably a time that is earlier than the time (the “target time”) at which the data in the data volume was lost or otherwise corrupted.
- the time field SS_TIME 352 for each snapshot is searched until a time earlier than the target time is found.
- the Journal Headers 219 in the Journal Header Area 210 are searched, beginning from the “oldest” Journal Header.
- the oldest Journal Header can be identified by the “JO_” fields 333 , 334 , 337 , and 338 in the management table.
- the Journal Headers are searched sequentially in the area 210 for the first header whose sequence number JH_SEQ 215 is greater than the sequence number SS_SEQ 351 associated with the selected snapshot.
- the selected snapshot is incrementally updated by applying each journal entry, one at a time, to the snapshot in sequential order, thus reproducing the sequence of write operations. This continues as long as the time field JH_TIME 214 of the journal entry is prior to the target time. The update ceases with the first journal entry whose time field 214 is past the target time.
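The recovery walk just described can be sketched end to end: choose the latest snapshot not past the target time, then apply journal entries with JH_SEQ greater than SS_SEQ, in order, stopping at the first entry whose JH_TIME is past the target. The `recover` function, the tuple layout of `snapshots`, and the dict-of-blocks image are all assumptions for illustration.

```python
import copy

# Sketch of point-in-time recovery. `snapshots` is a list of
# (SS_SEQ, SS_TIME, image) tuples; `journal` is a list of entries in
# JH_SEQ order; a snapshot image is modeled as {block_address: data}.

def recover(snapshots, journal, target_time):
    # select the snapshot taken before (or at) the target time
    ss_seq, _, image = max((s for s in snapshots if s[1] <= target_time),
                           key=lambda s: s[1])
    state = copy.deepcopy(image)
    for e in journal:
        if e["JH_SEQ"] <= ss_seq:
            continue                          # already reflected in snapshot
        if e["JH_TIME"] > target_time:
            break                             # first entry past target time
        state[e["JH_ADR"]] = e["data"]        # reproduce the write
    return state

snaps = [(2, 10.0, {0: "old"})]
jnl = [{"JH_SEQ": 3, "JH_TIME": 11.0, "JH_ADR": 0, "data": "new"},
       {"JH_SEQ": 4, "JH_TIME": 15.0, "JH_ADR": 0, "data": "too-late"}]
state = recover(snaps, jnl, target_time=12.0)
```

The two comparisons mirror the two searches in the text: SS_TIME against the target time to pick the snapshot, then JH_SEQ against SS_SEQ and JH_TIME against the target time to bound the replay.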
- a single snapshot is taken. All journal entries subsequent to that snapshot can then be applied to reconstruct the data state at a given time.
- multiple snapshots can be taken. This is shown in FIG. 5A where multiple snapshots 520 ′ are taken.
- each snapshot and journal entry is assigned a sequence number in the order in which the object (snapshot or journal entry) is recorded. It can be appreciated that there typically will be many journal entries 510 recorded between each snapshot 520 ′. Having multiple snapshots allows for quicker recovery time for restoring data. The snapshot closest in time to the target recovery time would be selected. The journal entries made subsequent to the snapshot could then be applied to restore the desired data state.
- FIG. 6 illustrates another aspect of the present invention.
- a journal entry is made for every write operation issued from the host; this can result in a rather large number of journal entries.
- the one or more journal volumes 106 defined by the recovery manager 111 for a journal group 102 will eventually fill up. At that time no more journal entries can be made. As a consequence, subsequent write operations would not be journaled and recovery of the data state subsequent to the time the journal volumes become filled would not be possible.
- FIG. 6 shows that the storage system 100 will apply journal entries to a suitable snapshot in response to detection of an “overflow” condition.
- An “overflow” is deemed to exist when the available space in the journal volume(s) falls below some predetermined threshold. It can be appreciated that many criteria can be used to determine if an overflow condition exists. A straightforward threshold is based on the total storage capacity of the journal volume(s) assigned for a journal group. When the free space becomes some percentage (say, 10%) of the total storage capacity, then an overflow condition exists. Another threshold might be used for each journal volume.
- the free space capacity in the journal volume(s) is periodically monitored. Alternatively, the free space can be monitored in an aperiodic manner. For example, the intervals between monitoring can be randomly spaced. As another example, the monitoring intervals can be spaced apart depending on the level of free space; i.e., the monitoring interval can vary as a function of the free space level.
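The overflow criterion and the variable-interval monitoring idea above can be made concrete. The 10% figure, the function names, and the linear interval rule are illustrative assumptions; the patent deliberately leaves the threshold and the monitoring schedule open.

```python
# Sketch of overflow detection: an overflow exists when free space in the
# journal volume(s) falls below some fraction of total capacity.

OVERFLOW_FRACTION = 0.10    # e.g., 10% of total capacity (illustrative)

def overflow(free_bytes, capacity_bytes, fraction=OVERFLOW_FRACTION):
    return free_bytes < fraction * capacity_bytes

def monitor_interval(free_fraction, base_seconds=10.0):
    # aperiodic variant: poll more frequently as free space shrinks,
    # i.e., the interval varies as a function of the free space level
    return max(1.0, base_seconds * free_fraction)
```

With a 1000-unit journal area, 200 units free is healthy while 50 units free trips the overflow condition; the monitoring interval shrinks toward its floor as free space approaches zero.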
- FIG. 7 highlights the processing which takes place in the storage system 100 to detect an overflow condition.
- the storage system periodically checks the total free space of the journal volume(s) 106 ; e.g., every ten seconds.
- the free space can easily be calculated since the pointers (e.g., JI_HEAD_VOL 331 , JI_HEAD_ADR 332 ) in the management table 300 maintain the current state of the storage consumed by the journal volumes. If the free space is above the threshold, then the monitoring process simply waits for a period of time to pass and then repeats its check of the journal volume free space.
- journal entries are applied to a snapshot to update the snapshot.
- the oldest journal entry(ies) are applied to the snapshot.
- the Journal Header 219 of the “oldest” journal entry is identified by the JO_HEAD_VOL field 333 and the JO_HEAD_ADR field 334 . These fields identify the journal volume and the location in the journal volume of the Journal Header Area 210 of the oldest journal entry.
- the Journal Data of the oldest journal entry is identified by the JO_DATA_VOL field 337 and the JO_DATA_ADR field 338 .
- the journal entry identified by these fields is applied to a snapshot.
- the snapshot that is selected is the snapshot having an associated sequence number closest to the sequence number of the journal entry and earlier in time than the journal entry.
- the snapshot having the sequence number closest to but less than the sequence number of the journal entry is selected (i.e., “earlier in time”).
- the applied journal entry is freed. This can simply involve updating the JO_HEAD_VOL field 333 , JO_HEAD_ADR field 334 , JO_DATA_VOL field 337 , and the JO_DATA_ADR field 338 to the next journal entry.
- sequence numbers will eventually wrap, and start counting from zero again. It is well within the level of ordinary skill to provide a suitable mechanism for keeping track of this when comparing sequence numbers.
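One suitable mechanism for comparing sequence numbers across the wrap point, hinted at above, is serial-number style arithmetic: treat `a` as before `b` when the forward distance from `a` to `b` is less than half the number space. The modulus and the function name are assumptions for illustration.

```python
# Sketch of wrap-safe sequence comparison (serial-number arithmetic).
# With a 32-bit counter, MOD - 1 correctly compares as "before" 0.

MOD = 2**32

def seq_before(a, b, mod=MOD):
    """True if sequence number a was assigned before b, modulo wrap."""
    return a != b and (b - a) % mod < mod // 2
```

This keeps the "which journal entries are newer than this snapshot" test valid even after the counter restarts from zero.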
- the free space can be compared against the threshold criterion used in step 710 .
- a different threshold can be used. For example, here a higher amount of free space may be required to terminate this process than was used to initiate the process. This avoids invoking the process too frequently, but once invoked the second higher threshold encourages recovering as much free space as is reasonable. It can be appreciated that these thresholds can be determined empirically over time by an administrator.
- in a step 730, if the threshold for stopping the process is met (i.e., free space exceeds the threshold), then the process stops. Otherwise, step 720 is repeated for the next oldest journal entry. Steps 730 and 720 are repeated until the free space level meets the threshold criterion used in step 730.
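- The two-threshold loop of steps 710 through 730 can be sketched as follows. This is an illustrative model, not the disclosed implementation: the Journal and Snapshot classes and the (address, data) entry format are invented stand-ins for the structures kept in the management table 300:

```python
class Snapshot:
    """Stand-in for a snapshot image: a map of block address to data."""
    def __init__(self):
        self.blocks = {}

    def update(self, entry):
        addr, data = entry
        self.blocks[addr] = data

class Journal:
    """Stand-in for the journal volumes: entries kept oldest-first."""
    def __init__(self, entries, capacity):
        self.entries = list(entries)
        self.capacity = capacity

    def free_space(self):
        return self.capacity - len(self.entries)

    def oldest_entry(self):
        return self.entries[0] if self.entries else None

    def free_oldest(self):
        self.entries.pop(0)  # i.e., advance the JO_ pointers

def reclaim_journal_space(journal, snapshot, start_threshold, stop_threshold):
    """Steps 710-730: once free space falls below start_threshold, apply
    the oldest entries to an earlier snapshot until free space reaches
    the higher stop_threshold (which keeps the process from thrashing)."""
    if journal.free_space() >= start_threshold:
        return                                    # step 710: just wait
    while journal.free_space() < stop_threshold:  # step 730
        entry = journal.oldest_entry()
        if entry is None:
            break
        snapshot.update(entry)                    # step 720
        journal.free_oldest()
```

Using a start threshold of 2 free slots and a stop threshold of 3 illustrates why the second, higher threshold recovers more space per invocation.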
- FIG. 7A highlights sub-steps for an alternative embodiment to step 720 shown in FIG. 7 .
- Step 720 frees up a journal entry by applying it to the latest snapshot that is not later in time than the journal entry. However, where multiple snapshots are available, it may be possible to avoid the time consuming process of applying the journal entry to a snapshot in order to update the snapshot.
- FIG. 7A shows details for a step 720 ′ that is an alternate to step 720 of FIG. 7 .
- a determination is made whether a snapshot exists that is later in time than the oldest journal entry. This determination can be made by searching for the first snapshot whose associated sequence number is greater than that of the oldest journal entry. Alternatively, this determination can be made by looking for a snapshot that is a predetermined amount of time later than the oldest journal entry; for example, the criterion may be that the snapshot must be at least one hour later in time than the oldest journal entry. Still another alternative is to use the sequence numbers associated with the snapshots and the journal entries, rather than time. For example, the criterion might be to select a snapshot whose sequence number is N increments away from the sequence number of the oldest journal entry.
- journal entries can be removed without having to apply them to a snapshot.
- the “JO_” fields (JO_HEAD_VOL 333 , JO_HEAD_ADR 334 , JO_DATA_VOL 337 , and JO_DATA_ADR 338 ) are simply moved to a point in the list of journal entries that is later in time than the selected snapshot. If no such snapshot can be found, then in a step 723 the oldest journal entry is applied to a snapshot that is earlier in time than the oldest journal entry, as discussed for step 720 .
- Still another alternative for step 721 is simply to select the most recent snapshot. All the journal entries whose sequence numbers are less than that of the most recent snapshot can be freed. Again, this simply involves updating the "JO_" fields so they point to the first journal entry whose sequence number is greater than that of the most recent snapshot.
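- The pointer update described above amounts to discarding every journal entry older than the most recent snapshot. A hypothetical sketch, with sequence-number lists standing in for the "JO_" pointer fields:

```python
def entries_kept_after_pruning(journal_seqs, snapshot_seqs):
    """Free all journal entries whose sequence numbers are less than
    that of the most recent snapshot; they need not be applied because
    the snapshot already reflects them. Returns the surviving entries,
    i.e., where the "JO_" fields would point afterward."""
    if not snapshot_seqs:
        return list(journal_seqs)
    newest = max(snapshot_seqs)
    return [s for s in journal_seqs if s > newest]
```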
- an aspect of the invention is being able to recover the data state for any desired point in time. This can be accomplished by storing as many journal entries as possible and then applying the journal entries to a snapshot to reproduce the write operations.
- This last embodiment has the potential effect of removing large numbers of journal entries, thus reducing the range of time within which the data state can be recovered. Nevertheless, for a particular configuration it may be desirable to remove large numbers of journal entries for a given operating environment.
- Another aspect of the present invention is the ability to place a “marker” among the journal entries.
- an application programming interface can be provided to manipulate these markers, referred to herein as marker journal entries, marker journals, etc.
- Marker journals can be created and inserted among the journal entries to note actions performed on the data volume (production volume) 101 or events in general (e.g., system boot up). Marker journals can be searched and used to identify previously marked actions and events.
- the API can be used by high-level (or user-level) applications.
- the API can include functions that are limited to system level processes.
- FIG. 8 shows additional detail in the block diagram illustrated in FIG. 1 .
- a Management Program (MP) 811 component comprises a Manager 814 and a Driver 813 .
- the Driver component provides a set of API's to provide journaling functions implemented in storage system 100 in accordance with this aspect of the invention.
- the Manager component represents an example of an application program that uses the API's provided by the Driver component.
- user applications 112 can use parts of the API provided by the Driver. Following is a usage exemplar, illustrating the basic functionality provided by an API in accordance with the present invention.
- the Manager component 814 can be configured to monitor operations on all or parts of a data volume (production data store) 101 such as a database, a directory, one or more files, or other objects of the file system.
- a user can be provided with access to the Manager via a suitable interface; e.g., command line interface, GUI, etc.
- the user can interact with the Manager to specify objects and operations on those objects to be monitored.
- when the Manager detects a specified operation on the object, it calls an appropriate marker journal function via the API to create a marker journal to mark the event or action.
- the marker journal can include information such as a filename, the detected operation, the name of the host 110 , and a timestamp.
- the Driver component 813 can interact with the storage system 100 accordingly to create the marker.
- the storage system 100 creates the marker journal in the same manner as discussed above for journal entries associated with write operations.
- the journal type field (JH_TYPE) 218 can be set to MARKER to indicate that the journal entry is a marker journal.
- Journal entries associated with write operations would have a field value of INTERNAL. Any information that is associated with the marker journal entry can be stored in the journal data area of the journal entry.
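- By way of illustration only, a marker journal entry might be modeled as below. The field names echo the JH_ fields of FIG. 2, but the Python structure itself is an assumption, not the disclosed on-disk format:

```python
from dataclasses import dataclass
import time

MARKER = "MARKER"      # journal type for marker journals
INTERNAL = "INTERNAL"  # journal type for ordinary write journals

@dataclass
class JournalEntry:
    jh_seq: int     # JH_SEQ: sequence number from the shared counter
    jh_type: str    # JH_TYPE: MARKER or INTERNAL
    jh_time: float  # JH_TIME: time the request arrived
    data: dict      # contents of the journal data area

def make_marker(seq, filename, operation, host):
    """Build a MARKER entry; the application information (filename,
    operation, host name) is carried in the journal data area."""
    info = {"filename": filename, "operation": operation, "host": host}
    return JournalEntry(jh_seq=seq, jh_type=MARKER,
                        jh_time=time.time(), data=info)
```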
- FIG. 9 illustrates an example for implementing an API based on a storage system 100 that implements the SCSI (small computer system interface) standard.
- a special device referred to herein as a command device (CMD) 902 , can be defined in the storage system 100 .
- the storage system 100 can intercept the request and treat it as a special command.
- a write request to the CMD device can contain data (write data) that indicates a function relating to a marker journal such as creating a marker journal.
- the write data can include marker information such as time range, filename, operation, and so on.
- the Manager component 814 can also specify to read special information from the storage system 100 .
- the write command indicates the information to be read, and a following read command to the CMD device 902 actually reads that information.
- a pair of write and read requests to the CMD device can be used to retrieve a marker journal entry and the data associated with the marker journal.
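- The write/read pairing can be illustrated with a toy model of the command device. The request encoding (a dict with an "op" field) and the helper names are purely assumptions made for this sketch:

```python
class CommandDevice:
    """Toy CMD device 902: a write to it is intercepted as a command,
    and a following read returns the result of that command."""
    def __init__(self, markers):
        self.markers = markers  # marker journals held by the storage system
        self.pending = None

    def write(self, request):
        self.pending = request  # intercepted, not stored as data

    def read(self):
        if self.pending and self.pending.get("op") == "GET_MARKER":
            crit = self.pending["criteria"]
            return [m for m in self.markers
                    if all(m.get(k) == v for k, v in crit.items())]
        return None

def get_marker(cmd_dev, **criteria):
    """Driver-side helper: issue the write/read pair to retrieve
    marker journal entries matching the given criteria."""
    cmd_dev.write({"op": "GET_MARKER", "criteria": criteria})
    return cmd_dev.read()
```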
- An alternative implementation is to extend the SCSI command set.
- the SCSI standard allows developers to extend the SCSI common command set (CCS) which describes the core set of commands supported by SCSI.
- special commands can be defined to provide the API functionality.
- FIG. 10 illustrates the interaction among the components shown in FIG. 1 .
- a user 1002 on the host 110 can interact via a suitable API with the Manager component 814 or directly with the Driver component 813 to manipulate marker journals.
- the user can be an application level user or a system administrator.
- the “user” can be a machine that is suitably interfaced to the Manager component and/or the Driver component.
- the Manager component 814 can provide its own API 814 a to the user 1002 .
- the functions provided by this API can be similar to the marker journal functions provided by the API 813 a of the Driver component 813 .
- since the Manager component provides a higher level of functionality, its API is likely to include functions not needed for managing marker journals. It can be appreciated that in other embodiments of the invention, a single API can be defined which includes the functionality of API's 813 a and 814 a.
- the Driver component 813 communicates with the storage system 100 to initiate the desired action.
- typical actions include, among others, generating marker journals, periodically retrieving journal entries, and recovery using marker journals.
- Objects can be monitored for certain actions.
- the Manager component 814 can be configured to monitor the data volume 101 for user-specified activity (data operations) to be performed on objects contained in the volume.
- the object can be the entire volume, a file system or portions of a file system.
- the object can include application objects such as files, database components, and so on. Activities include, among others, closing a file, removing an object, manipulation (creation, deletion, etc) of symbolic links to files and/or directories, formatting all or a portion of a volume, and so on.
- a user can specify which actions to detect.
- when the Manager 814 detects a specified operation, it can issue a GENERATE MARKER request to mark the event.
- the user can specify an action or actions to be performed on an object or objects.
- a GENERATE MARKER request can be issued to mark the occurrence of that event.
- the user can also mark events that take place within the data volume 101 .
- for example, she might issue a SYNC command (in the case of a UNIX OS) to sync the file system and also invoke the GENERATE MARKER command to mark the event of syncing the file system. She might mark the event of booting up the system.
- the Manager component 814 can be configured to detect and automatically act on these events as well. It is observed that an event can be marked before or after the occurrence of the event. For example, the actions of deleting a file or SYNC'ing a file system are preferably performed prior to marking the action. If a major update of a data file or a database is about to be performed, it might be prudent to create a marker journal before proceeding; this can be referred to as "pre-marking" the event.
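- Pre-marking can be sketched as a simple wrapper: the marker is journaled first, so its sequence number identifies the data state just before the operation. Everything here (the in-memory journal list, GENERATE MARKER as a plain function) is an illustrative stand-in:

```python
journal = []     # stand-in for the journal volume 106
_next_seq = [0]  # stand-in for the sequence counter SEQ 313

def generate_marker(info):
    """Stand-in for a GENERATE MARKER request."""
    _next_seq[0] += 1
    journal.append({"seq": _next_seq[0], "type": "MARKER", **info})
    return _next_seq[0]

def premarked_delete(filename, delete_fn):
    """Pre-mark a file deletion: write the marker, then delete, so a
    later recovery to the marker restores the file."""
    seq = generate_marker({"operation": "delete", "filename": filename})
    delete_fn(filename)
    return seq
```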
- the foregoing mechanisms for manipulating marker journals can be used to facilitate recovery. For example, suppose a system administrator configures the Manager component 814 to mark every “delete” operation that is performed on “file” objects. Each time a user in the host 110 performs a file delete, a marker journal entry can be created (using the GENERATE MARKER command) and stored in the journal volume 106 . This operation is a type where it might be desirable to “pre-mark” each such event; that is, a marker journal entry is created prior to carrying out the delete operation to mark a point in time just prior to the operation. Thus, over time, the journal entries contained in the journal volumes will be sprinkled with marker journal entries identifying points in time prior to each file deletion operation.
- the marker journals can be used to find a suitable recovery point. For example, the user is likely to know roughly when he deleted a file.
- a GET MARKER command that specifies a time prior to the estimated time of deletion and further specifying an operation of “delete” on objects of “file” with the name of the deleted file as an object can be issued to the storage system 100 .
- the matching marker journal entry is then retrieved. This journal entry identifies a point in time prior to the delete operation, and can then serve as the recovery point for a subsequent recovery operation.
- all journal entries, including marker journals, have a sequence number.
- the sequence number of the retrieved marker journal entry can be used to determine the latest journal entry just prior to the deletion action.
- a suitable snapshot is obtained and updated with journal entries of type INTERNAL, up to the latest journal entry.
- the data state of the volume reflects the time just before the file was deleted, thus allowing for the deleted file to be restored.
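- The lookup just described can be sketched as a filter over the journal. The parameter names mirror the GET MARKER criteria described in the text, but the function itself is hypothetical:

```python
def find_recovery_seq(entries, before_time, operation, filename):
    """Find the marker journal entry matching the given operation and
    filename at or before the given time; its sequence number bounds
    the journal entries to apply during recovery."""
    hits = [e["seq"] for e in entries
            if e["type"] == "MARKER"
            and e["time"] <= before_time
            and e.get("operation") == operation
            and e.get("filename") == filename]
    return max(hits) if hits else None
```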
- FIG. 11 illustrates recovery processing according to an illustrative embodiment of the present invention.
- the storage system 100 determines in a step 1110 whether recovery is possible.
- a snapshot must have been taken between the oldest journal entry and the latest journal entry.
- every snapshot has a sequence number taken from the same sequence of numbers used for the journal entries. The sequence number can be used to identify a suitable snapshot. If the sequence number of a candidate snapshot is greater than that of the oldest journal and smaller than that of the latest journal, then the snapshot is suitable.
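- Because snapshots and journal entries share one sequence counter, the step-1110 test reduces to a range check. An illustrative sketch:

```python
def recovery_possible(snapshot_seqs, oldest_seq, latest_seq):
    """Step 1110: recovery is possible only if at least one snapshot's
    sequence number lies strictly between those of the oldest and
    latest journal entries."""
    return any(oldest_seq < s < latest_seq for s in snapshot_seqs)
```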
- the recovery volume is set to an offline state.
- recovery volume is used in a generic sense to refer to one or more volumes on which the data recovery process is being performed.
- offline is taken to mean that the user, and more generally the host device 110 , cannot access the recovery volume.
- in the case that the production volume is being used as the recovery volume, it is likely to be desirable that the host 110 be prevented at least from issuing write operations to the volume. Also, the host typically will not be permitted to perform read operations.
- the storage system itself has full access to the recovery volume in order to perform the recovery task.
- the snapshot is copied to the recovery volume in preparation for the recovery operation.
- the production volume itself can be the recovery volume.
- the recovery manager 111 can allow the user to specify a volume other than the production volume to serve as the target of the data recovery operation.
- the recovery volume can be the volume on which the snapshot is stored. Using a volume other than the production volume to perform the recovery operation may be preferred where it is desirable to provide continued use of the production volume.
- journal entries are applied to update the snapshot volume in the manner as discussed previously. Enough journal entries are applied to update the snapshot to a point in time just prior to the occurrence of the file deletion. At that point the recovery volume can be brought “online.” In the context of the present invention, the “online” state is taken to mean that the host device 110 is given access to the recovery volume.
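- The offline, copy, apply, online sequence might look like the following. The dict-based volume model and the target_seq parameter are assumptions made for the sketch:

```python
def recover(volume, snapshot, entries, target_seq):
    """FIG. 11 in miniature: take the recovery volume offline, copy the
    snapshot onto it, apply INTERNAL entries up to target_seq, then
    bring the volume back online for the host."""
    volume["online"] = False            # block host access
    volume["blocks"] = dict(snapshot)   # copy snapshot to the volume
    for e in sorted(entries, key=lambda e: e["seq"]):
        if e["seq"] > target_seq:
            break
        if e["type"] == "INTERNAL":
            volume["blocks"][e["addr"]] = e["data"]
    volume["online"] = True             # restore host access
    return volume
```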
- periodic retrievals of marker journal entries can be made and stored locally in the host 110 using the GET MARKER command and specifying suitable criteria.
- the Driver component 813 might periodically issue a GET MARKER for “delete” operations performed on “file” objects.
- Other retrieval criteria can be specified. Having a locally accessible copy of certain marker journals avoids the delay of retrieving marker journals one at a time from the storage system 100. This can greatly speed up a search for a recovery point.
- the API definition can be readily extended to provide additional functionality.
- the disclosed embodiments typically can be provided using a combination of hardware and software implementations; e.g., combinations of software, firmware, and/or custom logic such as ASICs (application specific ICs) are possible.
- One of ordinary skill can readily appreciate that the underlying technical implementation will be determined based on factors including but not limited to system cost, system performance, the existence of legacy software and legacy hardware, operating environment, and so on.
- the disclosed embodiments can be readily reduced to specific implementations without undue experimentation by those of ordinary skill in the relevant art.
Abstract
Disclosed is a method to synchronize the state of an application and an application's objects with data stored on the storage system. The storage system provides API's to create special data, called a marker journal, and stores it on a journal volume. The marker contains application information, e.g., file name, operation on the file, timestamp, etc. Since the journal volume contains markers as well as any changed data in chronological order, I/O activities to the storage system and application activities can be synchronized.
Description
- This application is related to the following commonly owned and co-pending U.S. applications:
-
- “Method and Apparatus for Data Recovery Using Storage Based Journaling,” Attorney Docket Number 16869B-082700US, and
- “Method and Apparatus for Data Recovery Using Storage Based Journaling,” Attorney Docket Number 16869B-082800US,
both of which are herein incorporated by reference for all purposes.
- The present invention is related to computer storage and in particular to the recovery of data.
- Several methods are conventionally used to prevent the loss of data. Typically, data is backed up in a periodic manner (e.g., once a day) by a system administrator. Many systems are commercially available which provide backup and recovery of data; e.g., Veritas NetBackup, Legato/Networker, and so on. Another technique is known as volume shadowing. This technique produces a mirror image of data onto a secondary storage system as it is being written to the primary storage system.
- Journaling is a backup and restore technique commonly used in database systems. An image of the data to be backed up is taken. Then, as changes are made to the data, a journal of the changes is maintained. Recovery of data is accomplished by applying the journal to an appropriate image to recover data at any point in time. Typical database systems, such as Oracle, can perform journaling.
- Except for database systems, however, there is no way to recover data at any point in time. Even for database systems, applying a journal takes time since the procedure includes:
-
- reading the journal data from storage (e.g., disk)
- analyzing the journal to determine where in the journal the desired data can be found
- applying the journal data to a suitable image of the data to reproduce the activities performed on the data; this usually involves accessing the image and writing out data as the journal is applied
- Recovering data at any point in time addresses the following types of administrative requirements. For example, a typical request might be, “I deleted a file by mistake at around 10:00 am yesterday. I have to recover the file just before it was deleted.”
- If the data is not in a database system, this kind of request cannot be conveniently, if at all, serviced. A need therefore exists for processing data in a manner that facilitates recovery of lost data. A need exists for being able to provide data processing that facilitates data recovery in user environments other than in a database application.
- In accordance with an aspect of the present invention, a storage system exposes an application programmer's interface (API) for application programs running on a host. The API allows execution of program code to create marker journal entries. The API also provides for retrieval of marker journals, and recovery operations. Another aspect of the invention is the monitoring of operations being performed on a data store and the creation of marker journal entries upon detection of one or more predetermined operations. Still another aspect of the invention is the retrieval of marker journal entries to facilitate recovery of a desired data state.
- Aspects, advantages and novel features of the present invention will become apparent from the following description of the invention presented in conjunction with the accompanying drawings:
-
FIG. 1 is a high level generalized block diagram of an illustrative embodiment of the present invention; -
FIG. 2 is a generalized illustration of an illustrative embodiment of a data structure for storing journal entries in accordance with the present invention; -
FIG. 3 is a generalized illustration of an illustrative embodiment of a data structure for managing the snapshot volumes and the journal entry volumes in accordance with the present invention; -
FIG. 4 is a high level flow diagram highlighting the processing between the recovery manager and the controller in the storage system; -
FIG. 5 illustrates the relationship between a snapshot and a plurality of journal entries; -
FIG. 5A illustrates the relationship among a plurality of snapshots and a plurality of journal entries; -
FIG. 6 is a high level illustration of the data flow when an overflow condition arises; -
FIG. 7 is a high level flow chart highlighting an aspect of the controller in the storage system to handle an overflow condition; -
FIG. 7A illustrates an alternative to a processing step shown in FIG. 7 ; -
FIG. 8 illustrates the use of marker journal entries; -
FIG. 9 shows a SCSI-based implementation of the embodiment shown in FIG. 8 ; -
FIG. 10 shows a block diagram of the API's according to another aspect of the invention; and -
FIG. 11 is a flowchart highlighting the steps for a recovery operation. -
FIG. 1 is a high level generalized block diagram of an illustrative embodiment of a backup and recovery system according to the present invention. When the system is activated, a snapshot is taken for production data volumes (DVOL) 101. The term "snapshot" in this context conventionally refers to a data image of the data volume at a given point in time. Depending on system requirements, implementation, and so on, the snapshot can be of the entire data volume, or some portion or portions of the data volume(s). During the normal course of operation of the system in accordance with the invention, a journal entry is made for every write operation issued from the host to the data volumes. As will be discussed below, by applying a series of journal entries to an appropriate snapshot, data can be recovered at any point in time. - The backup and recovery system shown in
FIG. 1 includes at least one storage system 100. Though not shown, one of ordinary skill can appreciate that the storage system includes suitable processor(s), memory, and control circuitry to perform I/O between a host 110 and its storage media (e.g., disks). The backup and recovery system also requires at least one host 110. A suitable communication path 130 is provided between the host and the storage system. - The
host 110 typically will have one or more user applications (APP) 112 executing on it. These applications will read and/or write data to storage media contained in the data volumes 101 of storage system 100. Thus, applications 112 and the data volumes 101 represent the target resources to be protected. It can be appreciated that data used by the user applications can be stored in one or more data volumes. - In accordance with the invention, a journal group (JNLG) 102 is defined. The
data volumes 101 are organized into the journal group. In accordance with the present invention, a journal group is the smallest unit of data volumes where journaling of the write operations from the host 110 to the data volumes is guaranteed. The associated journal records the order of write operations from the host to the data volumes in proper sequence. The journal data produced by the journaling activity can be stored in one or more journal volumes (JVOL) 106. - The
host 110 also includes a recovery manager (RM) 111. This component provides high-level coordination of the backup and recovery operations. The recovery manager is discussed further below. - The
storage system 100 provides a snapshot (SS) 105 of the data volumes comprising a journal group. For example, the snapshot 105 is representative of the data volumes 101 in the journal group 106 at the point in time that the snapshot was taken. Conventional methods are known for producing the snapshot image. One or more snapshot volumes (SVOL) 107 are provided in the storage system which contain the snapshot data. A snapshot can be contained in one or more snapshot volumes. Though the disclosed embodiment illustrates separate storage components for the journal data and the snapshot data, it can be appreciated that other implementations can provide a single storage component for storing the journal data and the snapshot data. - A management table (MT) 108 is provided to store the information relating to the
journal group 102, the snapshot 105, and the journal volume(s) 106. FIG. 3 and the accompanying discussion below reveal additional detail about the management table. - A
controller component 140 is also provided which coordinates the journaling of write operations and snapshots of the data volumes, and the corresponding movement of data among the different storage components of the storage system 100. -
FIG. 2 shows the data used in an implementation of the journal. When a write request from the host 110 arrives at the storage system 100, a journal is generated in response. The journal comprises a Journal Header 219 and Journal Data 225. The Journal Header 219 contains information about its corresponding Journal Data 225. The Journal Data 225 comprises the data (write data) that is the subject of the write operation. - The
Journal Header 219 comprises an offset number (JH_OFS) 211. The offset number identifies a particular data volume 101 in the journal group 102. In this particular implementation, the data volumes are ordered as the 0th data volume, the 1st data volume, the 2nd data volume and so on. The offset numbers might be 0, 1, 2, etc. - A starting address in the data volume (identified by the offset number 211) to which the write data is to be written is stored in a field in the
Journal Header 219, namely an address field (JH_ADR) 212. For example, the address can be represented as a block number (LBA, Logical Block Address). - A field in the
Journal Header 219 stores a data length (JH_LEN) 213, which represents the data length of the write data. Typically it is represented as a number of blocks. - A field in the
Journal Header 219 stores the write time (JH_TIME) 214, which represents the time when the write request arrives at the storage system 100. The write time can include the calendar date, hours, minutes, seconds and even milliseconds. This time can be provided by the disk controller 140 or by the host 110. For example, in a mainframe computing environment, two or more mainframe hosts share a timer and can provide the time when a write command is issued. - A sequence number (JH_SEQ) 215 is assigned to each write request. The sequence number is stored in a field in the
Journal Header 219. Every sequence number within a given journal group 102 is unique. The sequence number is assigned to a journal entry when it is created. - A journal volume identifier (JH_JVOL) 216 is also stored in the
Journal Header 219. The volume identifier identifies the journal volume 106 associated with the Journal Data 225. The identifier is indicative of the journal volume containing the Journal Data. It is noted that the Journal Data can be stored in a journal volume that is different from the journal volume which contains the Journal Header. - A journal data address (JH_JADR) 217 stored in the
Journal Header 219 contains the beginning address of the Journal Data 225 in the associated journal volume 106 that contains the Journal Data. -
FIG. 2 shows that the journal volume 106 comprises two data areas: a Journal Header Area 210 and a Journal Data Area 220. The Journal Header Area 210 contains only Journal Headers 219, and the Journal Data Area 220 contains only Journal Data 225. The Journal Header is a fixed size data structure. A Journal Header is allocated sequentially from the beginning of the Journal Header Area. This sequential organization corresponds to the chronological order of the journal entries. As will be discussed, data is provided that points to the first journal entry in the list, which represents the "oldest" journal entry. It is typically necessary to find the Journal Header 219 for a given sequence number (as stored in the sequence number field 215) or for a given write time (as stored in the time field 214). - A journal type field (JH_TYPE) 218 identifies the type of journal entry. The value contained in this field indicates a type of MARKER or INTERNAL. If the type is MARKER, then the journal is a marker journal. The purpose of a MARKER type of journal will be discussed below. If the type is INTERNAL, then the journal records the data that is the subject of the write operation issued from the
host 110. -
Journal Header 219 and Journal Data 225 are contained in chronological order in their respective areas in the journal volume 106. Thus, the order in which the Journal Header and the Journal Data are stored in the journal volume is the same order as the assigned sequence number. As will be discussed below, an aspect of the present invention is that the journal information in the respective areas 210, 220 wraps around to the beginning of each area when the end is reached. -
FIG. 3 shows detail about the management table 108 (FIG. 1). In order to manage the Journal Header Area 210 and Journal Data Area 220, pointers for each area are needed. As mentioned above, the management table maintains configuration information about a journal group 102 and the relationship between the journal group and its associated journal volume(s) 106 and snapshot image 105. - The management table 300 shown in
FIG. 3 illustrates an example management table and its contents. The management table stores a journal group ID (GRID) 310 which identifies a particular journal group 102 in a storage system 100. A journal group name (GRNAME) 311 can also be provided to identify the journal group with a human recognizable identifier. - A journal attribute (GRATTR) 312 is associated with the
journal group 102. In accordance with this particular implementation, two attributes are defined: MASTER and RESTORE. The MASTER attribute indicates the journal group is being journaled. The RESTORE attribute indicates that the journal group is being restored from a journal. - A journal status (GRSTS) 315 is associated with the
journal group 102. There are two statuses: ACTIVE and INACTIVE. - The management table includes a field to hold a sequence counter (SEQ) 313. This counter serves as the source of sequence numbers used in the
Journal Header 219. When creating a new journal, the sequence number 313 is read and assigned to the new journal. Then, the sequence number is incremented and written back into the management table. - The number (NUM_DVOL) 314 of
data volumes 101 contained in a given journal group 102 is stored in the management table. - A data volume list (DVOL_LIST) 320 lists the data volumes in a journal group. In a particular implementation, DVOL_LIST is a pointer to the first entry of a data structure which holds the data volume information. This can be seen in
FIG. 3. Each data volume information entry comprises an offset number (DVOL_OFFS) 321. For example, if the journal group 102 comprises three data volumes, the offset values could be 0, 1 and 2. A data volume identifier (DVOL_ID) 322 uniquely identifies a data volume within the entire storage system 100. A pointer (DVOL_NEXT) 324 points to the data structure holding information for the next data volume in the journal group; it is a NULL value otherwise. - The management table includes a field to store the number of journal volumes (NUM_JVOL) 330 that are being used to contain the data (journal header and journal data) associated with a
journal group 102. - As described in
FIG. 2, the Journal Header Area 210 contains the Journal Headers 219 for each journal; likewise for the Journal Data components 225. As mentioned above, an aspect of the invention is that the data areas 210, 220 are managed in a wrap-around fashion. - The management table includes fields to store pointers to different parts of the
data areas 210, 220. A field (JI_HEAD_VOL) 331 identifies the journal volume 106 that contains the Journal Header Area 210 which will store the next new Journal Header 219. A field (JI_HEAD_ADR) 332 identifies an address on the journal volume of the location in the Journal Header Area where the next Journal Header will be stored. The journal volume that contains the Journal Data Area 220 into which the journal data will be stored is identified by information in a field (JI_DATA_VOL) 335. A field (JI_DATA_ADR) 336 identifies the specific address in the Journal Data Area where the data will be stored. Thus, the next journal entry to be written is "pointed" to by the information contained in the "JI_" fields 331, 332, 335, 336. - The management table also includes fields which identify the "oldest" journal entry. The use of this information will be described below. A field (JO_HEAD_VOL) 333 identifies the journal volume which stores the
Journal Header Area 210 that contains theoldest Journal Header 219. A field (JO_HEAD_ADR) 334 identifies the address within the Journal Header Area of the location of the journal header of the oldest journal. A field (JO_DATA_VOL) 337 identifies the journal volume which stores theJournal Data Area 220 that contains the data of the oldest journal. The location of the data in the Journal Data Area is stored in a field (JO_DATA_ADR) 338. - The management table includes a list of journal volumes (JVOL_LIST) 340 associated with a
particular journal group 102. In a particular implementation, JVOL_LIST is a pointer to a data structure of information for journal volumes. As can be seen inFIG. 3 , each data structure comprises an offset number (JVOL_OFS) 341 which identifies aparticular journal volume 106 associated with a givenjournal group 102. For example, if a journal group is associated with twojournal volumes 106, then each journal volume might be identified by a 0 or a 1. A journal volume identifier (JVOL_ID) 342 uniquely identifies the journal volume within thestorage system 100. Finally, a pointer (JVOL_NEXT) 344 points to the next data structure entry pertaining to the next journal volume associated with the journal group; it is a NULL value otherwise. - The management table includes a list (SS_LIST) 350 of
snapshot images 105 associated with a givenjournal group 102. In this particular implementation, SS_LIST is a pointer to snapshot information data structures, as indicated inFIG. 3 . Each snapshot information data structure includes a sequence number (SS_SEQ) 351 that is assigned when the snapshot is taken. As discussed above, the number comes from thesequence counter 313. A time value (SS_TIME) 352 indicates the time when the snapshot was taken. A status (SS_STS) 358 is associated with each snapshot; valid values include VALID and INVALID. A pointer (SS_NEXT) 353 points to the next snapshot information data structure; it is a NULL value otherwise. - Each snapshot information data structure also includes a list of snapshot volumes 107 (
FIG. 1 ) used to store thesnapshot images 105. As can be seen inFIG. 3 , a pointer (SVOL_LIST) 354 to a snapshot volume information data structure is stored in each snapshot information data structure. Each snapshot volume information data structure includes an offset number (SVOL_OFFS) 355 which identifies a snapshot volume that contains at least a portion of the snapshot image. It is possible that a snapshot image will be segmented or otherwise partitioned and stored in more than one snapshot volume. In this particular implementation, the offset identifies the ith snapshot volume which contains a portion (segment, partition, etc) of the snapshot image. In one implementation, the ith segment of the snapshot image might be stored in the ith snapshot volume. Each snapshot volume information data structure further includes a snapshot volume identifier (SVOL_ID) 356 that uniquely identifies the snapshot volume in thestorage system 100. A pointer (SVOL_NEXT) 357 points to the next snapshot volume information data structure for a given snapshot image. -
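Taken together, the lists above make up the management table of FIG. 3. The sketch below models the table and its linked lists with the field names used in this section; Python and the class shapes are illustrative assumptions, since the patent does not specify an implementation language.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataVolumeInfo:                  # one DVOL_LIST 320 entry
    dvol_offs: int                     # DVOL_OFFS 321: offset within the journal group
    dvol_id: str                       # DVOL_ID 322: unique within the storage system
    dvol_next: Optional["DataVolumeInfo"] = None     # DVOL_NEXT 324; None at list end

@dataclass
class JournalVolumeInfo:               # one JVOL_LIST 340 entry
    jvol_offs: int                     # JVOL_OFS 341
    jvol_id: str                       # JVOL_ID 342
    jvol_next: Optional["JournalVolumeInfo"] = None  # JVOL_NEXT 344

@dataclass
class SnapshotInfo:                    # one SS_LIST 350 entry
    ss_seq: int                        # SS_SEQ 351: taken from the sequence counter
    ss_time: float                     # SS_TIME 352
    ss_sts: str = "VALID"              # SS_STS 358: VALID or INVALID
    ss_next: Optional["SnapshotInfo"] = None         # SS_NEXT 353

@dataclass
class ManagementTable:                 # table detail 300 (journal-entry pointers omitted)
    seq: int = 0                       # SEQ 313: shared sequence counter
    num_jvol: int = 0                  # NUM_JVOL 330
    dvol_list: Optional[DataVolumeInfo] = None       # DVOL_LIST 320
    jvol_list: Optional[JournalVolumeInfo] = None    # JVOL_LIST 340
    ss_list: Optional[SnapshotInfo] = None           # SS_LIST 350

def dvol_offsets(table: ManagementTable) -> list:
    """Walk DVOL_LIST via the DVOL_NEXT pointers, yielding offsets 0, 1, 2, ..."""
    out, node = [], table.dvol_list
    while node is not None:
        out.append(node.dvol_offs)
        node = node.dvol_next
    return out
```

The NULL-terminated pointer chains mirror the DVOL_NEXT, JVOL_NEXT, and SS_NEXT conventions described above.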
FIG. 4 shows a flowchart highlighting the processing performed by the recovery manager 111 and Storage System 100 to initiate backup processing in accordance with the illustrative embodiment of the invention as shown in the figures. If journal entries are not recorded during the taking of a snapshot, the write operations corresponding to those journal entries would be lost and data corruption could occur during a data restoration operation. Thus, in accordance with an aspect of the invention, the journaling process is started prior to taking the first snapshot. Doing this ensures that any write operations which occur during the taking of a snapshot are journaled. As a note, any journal entries recorded prior to the completion of the snapshot can be ignored. - Further in accordance with the invention, a single sequence of numbers (SEQ) 313 is associated with each of one or more snapshots and journal entries, as they are created. The purpose of associating the same sequence of numbers to both the snapshots and the journal entries will be discussed below.
- Continuing with
FIG. 4 , the recovery manager 111 might define, in a step 410, a journal group (JNLG) 102 if one has not already been defined. As indicated in FIG. 1 , this may include identifying one or more data volumes (DVOL) 101 for which journaling is performed, and identifying one or more journal volumes (JVOL) 106 which are used to store the journal-related information. The recovery manager performs a suitable sequence of interactions with the storage system 100 to accomplish this. In a step 415, the storage system may create a management table 108 (FIG. 1 ), incorporating the various information shown in the table detail 300 illustrated in FIG. 3 . Among other things, the process includes initializing the JVOL_LIST 340 to list the journal volumes which comprise the journal group 102. Likewise, the list of data volumes DVOL_LIST 320 is created. The fields which identify the next journal entry (or in this case where the table is first created, the first journal entry) are initialized. Thus, JI_HEAD_VOL 331 might identify the first in the list of journal volumes and JI_HEAD_ADR 332 might point to the first entry in the Journal Header Area 210 located in the first journal volume. Likewise, JI_DATA_VOL 335 might identify the first in the list of journal volumes and JI_DATA_ADR 336 might point to the beginning of the Journal Data Area 220 in the first journal volume. Note that the header and the data areas need not reside on the same journal volume. - In a
step 420, therecovery manager 111 will initiate the journaling process. Suitable communication(s) are made to thestorage system 100 to perform journaling. In a step 425, the storage system will make a journal entry for each write operation that issues from thehost 110. - With reference to
FIG. 3 , making a journal entry includes, among other things, identifying the location for the next journal entry. The fields JI_HEAD_VOL 331 and JI_HEAD_ADR 332 identify the journal volume 106 and the location in the Journal Header Area 210 of the next Journal Header 219. The sequence counter (SEQ) 313 from the management table is copied to (associated with) the JH_SEQ 215 field of the next header. The sequence counter is then incremented and stored back to the management table. Of course, the sequence counter can be incremented first, copied to JH_SEQ, and then stored back to the management table. - The fields JI_DATA_VOL 335 and JI_DATA_ADR 336 in the management table identify the journal volume and the beginning of the
Journal Data Area 220 for storing the data associated with the write operation. The JI_DATA_VOL and JI_DATA_ADR fields are copied to JH_JVOL 216 and toJH_ADR 212, respectively, of the Journal Header, thus providing the Journal Header with a pointer to its corresponding Journal Data. The data of the write operation is stored. - The
JI_HEAD_VOL 331 andJI_HEAD_ADR 332 fields are updated to point to thenext Journal Header 219 for the next journal entry. This involves taking the next contiguous Journal Header entry in theJournal Header Area 210. Likewise, the JI_DATA_ADR field (and perhaps JI_DATA_VOL field) is updated to reflect the beginning of the Journal Data Area for the next journal entry. This involves advancing to the next available location in the Journal Data Area. These fields therefore can be viewed as pointing to a list of journal entries. Journal entries in the list are linked together by virtue of the sequential organization of theJournal Headers 219 in theJournal Header Area 210. - When the end of the
Journal Header Area 210 is reached, theJournal Header 219 for the next journal entry wraps to the beginning of the Journal Header Area. Similarly for theJournal Data 225. To prevent overwriting earlier journal entries, the present invention provides for a procedure to free up entries in thejournal volume 106. This aspect of the invention is discussed below. - For the very first journal entry, the
JO_HEAD_VOL field 333, JO_HEAD_ADR field 334, JO_DATA_VOL field 337, and the JO_DATA_ADR field 338 are set to the contents of their corresponding “JI_” fields. As will be explained, the “JO_” fields point to the oldest journal entry. Thus, as new journal entries are made, the “JO_” fields do not advance while the “JI_” fields do advance. Update of the “JO_” fields is discussed below. - Continuing with the flowchart of
FIG. 4 , when the journaling process has been initiated, all write operations issuing from the host are journaled. Then in astep 430, therecovery manager 111 will initiate taking a snapshot of thedata volumes 101. Thestorage system 100 receives an indication from the recovery manager to take a snapshot. In a step 435, the storage system performs the process of taking a snapshot of the data volumes. Among other things, this includes accessingSS_LIST 350 from the management table (FIG. 3 ). A suitable amount of memory is allocated for fields 351-354 to represent the next snapshot. The sequence counter (SEQ) 313 is copied to thefield SS_SEQ 351 and incremented, in the manner discussed above forJH_SEQ 215. Thus, over time, a sequence of numbers is produced fromSEQ 313, each number in the sequence being assigned either to a journal entry or a snapshot entry. - The snapshot is stored in one (or more) snapshot volumes (SVOL) 107. A suitable amount of memory is allocated for fields 355-357. The information relating to the SVOLs for storing the snapshot are then stored into the fields 355-357. If additional volumes are required to store the snapshot, then additional memory is allocated for fields 355-357.
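The bookkeeping just described, in which journal entries and snapshots draw numbers from the single sequence counter SEQ 313, can be sketched as follows. This is a simplified model: the area capacities, the dictionary representation, and Python itself are illustrative assumptions, not the patent's implementation.

```python
HEADER_SLOTS = 8        # assumed capacity of the Journal Header Area 210
DATA_AREA_SIZE = 4096   # assumed capacity (bytes) of the Journal Data Area 220

def make_journal_entry(table, write_data):
    """Step 425: journal one host write.  Copies SEQ 313 into JH_SEQ 215,
    points the header at its data, then advances the 'JI_' fields."""
    header = {
        "JH_SEQ": table["SEQ"],
        "JH_JVOL": table["JI_DATA_VOL"],   # JH_JVOL 216: volume holding the data
        "JH_ADR": table["JI_DATA_ADR"],    # JH_ADR 212: location of the data
        "JH_LEN": len(write_data),
        "JH_TYPE": "INTERNAL",             # JH_TYPE 218 for write-op entries
    }
    table["SEQ"] += 1                      # increment and store back
    # Next contiguous header slot, wrapping to the start of the header area.
    table["JI_HEAD_ADR"] = (table["JI_HEAD_ADR"] + 1) % HEADER_SLOTS
    # Next free byte of the Journal Data Area, likewise wrapping.
    table["JI_DATA_ADR"] = (table["JI_DATA_ADR"] + len(write_data)) % DATA_AREA_SIZE
    return header

def take_snapshot(table, now):
    """Step 435: a snapshot draws its SS_SEQ 351 from the same counter."""
    snap = {"SS_SEQ": table["SEQ"], "SS_TIME": now, "SS_STS": "VALID"}
    table["SEQ"] += 1
    return snap
```

Because both paths increment the one counter, the interleaving of write operations and snapshots is totally ordered, which is what the recovery procedure relies on.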
-
FIG. 5 illustrates the relationship between journal entries and snapshots. The snapshot 520 represents the first snapshot image of the data volumes 101 belonging to a journal group 102. Note that journal entries (510) having sequence numbers SEQ0 and SEQ1 have been made, and represent journal entries for two write operations. These entries show that journaling has been initiated at a time prior to the snapshot being taken (step 420). Thus, at a time corresponding to the sequence number SEQ2, the recovery manager 111 initiates the taking of a snapshot, and since journaling has been initiated, any write operations occurring during the taking of the snapshot are journaled. Thus, the write operations 500 associated with the sequence numbers SEQ3 and higher show that those operations are being journaled. As an observation, the journal entries identified by sequence numbers SEQ0 and SEQ1 can be discarded or otherwise ignored. - Recovering data typically requires recovering the data state of at least a portion of the
data volumes 101 at a specific time. Generally, this is accomplished by applying one or more journal entries to a snapshot that was taken earlier in time relative to the journal entries. In the disclosed illustrative embodiment, thesequence number SEQ 313 is incremented each time it is assigned to a journal entry or to a snapshot. Therefore, it is a simple matter to identify which journal entries can be applied to a selected snapshot; i.e., those journal entries whose associated sequence numbers (JH_SEQ, 215) are greater than the sequence number (SS_SEQ, 351) associated with the selected snapshot. - For example, the administrator may specify some point in time, presumably a time that is earlier than the time (the “target time”) at which the data in the data volume was lost or otherwise corrupted. The
time field SS_TIME 352 for each snapshot is searched until a time earlier than the target time is found. Next, the Journal Headers 219 in the Journal Header Area 210 are searched, beginning from the “oldest” Journal Header. The oldest Journal Header can be identified by the “JO_” fields 333, 334, 337, and 338 in the management table. The Journal Headers are searched sequentially in the area 210 for the first header whose sequence number JH_SEQ 215 is greater than the sequence number SS_SEQ 351 associated with the selected snapshot. The selected snapshot is incrementally updated by applying each journal entry, one at a time, to the snapshot in sequential order, thus reproducing the sequence of write operations. This continues as long as the time field JH_TIME 214 of the journal entry is prior to the target time. The update ceases with the first journal entry whose time field 214 is past the target time. - In accordance with one aspect of the invention, a single snapshot is taken. All journal entries subsequent to that snapshot can then be applied to reconstruct the data state at a given time. In accordance with another aspect of the present invention, multiple snapshots can be taken. This is shown in
FIG. 5A wheremultiple snapshots 520′ are taken. In accordance with the invention, each snapshot and journal entry is assigned a sequence number in the order in which the object (snapshot or journal entry) is recorded. It can be appreciated that there typically will bemany journal entries 510 recorded between eachsnapshot 520′. Having multiple snapshots allows for quicker recovery time for restoring data. The snapshot closest in time to the target recovery time would be selected. The journal entries made subsequent to the snapshot could then be applied to restore the desired data state. -
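The restore procedure just described, selecting the snapshot closest to the target time and then replaying journal entries, can be sketched as follows. Snapshots and journal entries are modeled as plain dictionaries for illustration; the patent does not prescribe this representation.

```python
def plan_recovery(snapshots, journal, target_time):
    """Choose the snapshot closest to (but not after) target_time, then
    list the journal entries to replay: those whose JH_SEQ 215 exceeds
    the snapshot's SS_SEQ 351 and whose JH_TIME 214 is not past the
    target.  journal must be ordered oldest first."""
    candidates = [s for s in snapshots if s["SS_TIME"] <= target_time]
    if not candidates:
        raise ValueError("no snapshot earlier than the target time")
    snap = max(candidates, key=lambda s: s["SS_TIME"])
    to_apply = []
    for entry in journal:                  # sequential search from the oldest
        if entry["JH_SEQ"] <= snap["SS_SEQ"]:
            continue                       # already reflected in the snapshot
        if entry["JH_TIME"] > target_time:
            break                          # the update ceases past the target time
        to_apply.append(entry)
    return snap, to_apply
```

With multiple snapshots available, the later the selected snapshot, the shorter the replay, which is the quicker-recovery benefit noted above.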
FIG. 6 illustrates another aspect of the present invention. In accordance with the invention, a journal entry is made for every write operation issued from the host; this can result in a rather large number of journal entries. As time passes and journal entries accumulate, the one ormore journal volumes 106 defined by therecovery manager 111 for ajournal group 102 will eventually fill up. At that time no more journal entries can be made. As a consequence, subsequent write operations would not be journaled and recovery of the data state subsequent to the time the journal volumes become filled would not be possible. -
FIG. 6 shows that thestorage system 100 will apply journal entries to a suitable snapshot in response to detection of an “overflow” condition. An “overflow” is deemed to exist when the available space in the journal volume(s) falls below some predetermined threshold. It can be appreciated that many criteria can be used to determine if an overflow condition exists. A straightforward threshold is based on the total storage capacity of the journal volume(s) assigned for a journal group. When the free space becomes some percentage (say, 10%) of the total storage capacity, then an overflow condition exists. Another threshold might be used for each journal volume. In an aspect of the invention, the free space capacity in the journal volume(s) is periodically monitored. Alternatively, the free space can be monitored in an aperiodic manner. For example, the intervals between monitoring can be randomly spaced. As another example, the monitoring intervals can be spaced apart depending on the level of free space; i.e., the monitoring interval can vary as a function of the free space level. -
FIG. 7 highlights the processing which takes place in the storage system 100 to detect an overflow condition. Thus, in a step 710, the storage system periodically checks the total free space of the journal volume(s) 106; e.g., every ten seconds. The free space can easily be calculated since the pointers (e.g., JI_HEAD_VOL 331, JI_HEAD_ADR 332) in the management table 300 maintain the current state of the storage consumed by the journal volumes. If the free space is above the threshold, then the monitoring process simply waits for a period of time to pass and then repeats its check of the journal volume free space. - If the free space falls below a predetermined threshold, then in a
step 720 some of the journal entries are applied to a snapshot to update the snapshot. In particular, the oldest journal entry(ies) are applied to the snapshot. - Referring to
FIG. 3 , the Journal Header 219 of the “oldest” journal entry is identified by the JO_HEAD_VOL field 333 and the JO_HEAD_ADR field 334. These fields identify the journal volume and the location in the journal volume of the Journal Header Area 210 of the oldest journal entry. Likewise, the Journal Data of the oldest journal entry is identified by the JO_DATA_VOL field 337 and the JO_DATA_ADR field 338. The journal entry identified by these fields is applied to a snapshot. The snapshot that is selected is the snapshot having an associated sequence number closest to the sequence number of the journal entry and earlier in time than the journal entry. Thus, in this particular implementation where the sequence number is incremented each time, the snapshot having the sequence number closest to but less than the sequence number of the journal entry is selected (i.e., “earlier in time”). When the snapshot is updated by applying the journal entry to it, the applied journal entry is freed. This can simply involve updating the JO_HEAD_VOL field 333, JO_HEAD_ADR field 334, JO_DATA_VOL field 337, and the JO_DATA_ADR field 338 to the next journal entry. - As an observation, it can be appreciated by those of ordinary skill, that the sequence numbers will eventually wrap, and start counting from zero again. It is well within the level of ordinary skill to provide a suitable mechanism for keeping track of this when comparing sequence numbers.
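The overflow handling of FIG. 7 can be sketched as a single monitoring pass. The threshold fractions, the dictionary entry model, and the callback are illustrative assumptions; the patent only requires that entries be applied oldest first until free space recovers.

```python
def overflow_pass(journal, capacity, start_free=0.10, stop_free=0.20,
                  apply_to_snapshot=lambda entry: None):
    """Steps 710-730: if free space falls below start_free, apply the
    oldest entries (those the 'JO_' fields point at) to a snapshot and
    free them until free space reaches the higher stop_free threshold."""
    def free_fraction():
        return (capacity - sum(e["JH_LEN"] for e in journal)) / capacity
    if free_fraction() >= start_free:      # step 710: no overflow condition
        return 0
    freed = 0
    while journal and free_fraction() < stop_free:
        oldest = journal.pop(0)            # entry addressed by the JO_ fields
        apply_to_snapshot(oldest)          # step 720: update the snapshot
        freed += 1                         # freeing == advancing the JO_ fields
    return freed
```

Using a stop threshold above the start threshold keeps the process from being re-invoked on every check, matching the design note accompanying FIG. 7.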
- Continuing with
FIG. 7 , after applying the journal entry to the snapshot to update the snapshot, a check is made of the increase in the journal volume free space as a result of the applied journal entry being freed up (step 730). The free space can be compared against the threshold criterion used instep 710. Alternatively, a different threshold can be used. For example, here a higher amount of free space may be required to terminate this process than was used to initiate the process. This avoids invoking the process too frequently, but once invoked the second higher threshold encourages recovering as much free space as is reasonable. It can be appreciated that these thresholds can be determined empirically over time by an administrator. - Thus, in
step 730, if the threshold for stopping the process is met (i.e., free space exceeds the threshold), then the process stops. Otherwise, step 720 is repeated for the next oldest journal entry. Steps 720 and 730 are repeated until the stopping criterion of step 730 is satisfied. -
FIG. 7A highlights sub-steps for an alternative embodiment to step 720 shown inFIG. 7 . Step 720 frees up a journal entry by applying it to the latest snapshot that is not later in time than the journal entry. However, where multiple snapshots are available, it may be possible to avoid the time consuming process of applying the journal entry to a snapshot in order to update the snapshot. -
FIG. 7A shows details for a step 720′ that is an alternative to step 720 of FIG. 7 . At a step 721, a determination is made whether a snapshot exists that is later in time than the oldest journal entry. This determination can be made by searching for the first snapshot whose associated sequence number is greater than that of the oldest journal entry. Alternatively, a snapshot that is a predetermined amount of time later than the oldest journal entry can be selected; for example, the criterion may be that the snapshot must be at least one hour later in time than the oldest journal entry. Still another alternative is to use the sequence numbers associated with the snapshots and the journal entries, rather than time. For example, the criterion might be to select a snapshot whose sequence number is N increments away from the sequence number of the oldest journal entry. - If such a snapshot can be found in
step 721, then the earlier journal entries can be removed without having to apply them to a snapshot. Thus, in astep 722, the “JO_” fields (JO_HEAD_VOL 333,JO_HEAD_ADR 334,JO_DATA_VOL 337, and JO_DATA_ADR 338) are simply moved to a point in the list of journal entries that is later in time than the selected snapshot. If no such snapshot can be found, then in astep 723 the oldest journal entry is applied to a snapshot that is earlier in time than the oldest journal entry, as discussed forstep 720. - Still another alternative for
step 721 is simply to select the most recent snapshot. All the journal entries whose sequence numbers are less than that of the most recent snapshot can be freed. Again, this simply involves updating the “JO_” fields so they point to the first journal entry whose sequence number is greater than that of the most recent snapshot. Recall that an aspect of the invention is being able to recover the data state for any desired point in time. This can be accomplished by storing as many journal entries as possible and then applying the journal entries to a snapshot to reproduce the write operations. This last embodiment has the potential effect of removing large numbers of journal entries, thus reducing the range of time within which the data state can be recovered. Nevertheless, it may be desirable in a given operating environment to remove large numbers of journal entries. - Another aspect of the present invention is the ability to place a “marker” among the journal entries. In accordance with an illustrative embodiment of this aspect of the invention, an application programming interface (API) can be provided to manipulate these markers, referred to herein as marker journal entries, marker journals, etc. Marker journals can be created and inserted among the journal entries to note actions performed on the data volume (production volume) 101 or events in general (e.g., system boot up). Marker journals can be searched and used to identify previously marked actions and events. The API can be used by high-level (or user-level) applications. The API can include functions that are limited to system level processes.
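The journal-freeing alternative of FIG. 7A (steps 721 through 723) can be sketched as follows. Journal entries and snapshots are modeled as dictionaries purely for illustration, and the snapshot-selection criterion shown is the sequence-number variant.

```python
def free_entries_fast(journal, snapshots, apply_oldest=lambda e: None):
    """Step 720': when some snapshot is later than the oldest journal
    entry, advance the JO_ fields past it instead of applying entries."""
    if journal and snapshots:
        newest_ss = max(s["SS_SEQ"] for s in snapshots)
        if newest_ss > journal[0]["JH_SEQ"]:   # step 721: a later snapshot exists
            # Step 722: keep only entries newer than that snapshot; the
            # dropped entries are freed without ever being applied.
            return [e for e in journal if e["JH_SEQ"] > newest_ss]
    # Step 723: no later snapshot; fall back to applying the oldest entry.
    if journal:
        apply_oldest(journal.pop(0))
    return journal
```

The fast path avoids the time-consuming snapshot update at the cost of shrinking the window of recoverable points in time, as discussed above.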
-
FIG. 8 shows additional detail in the block diagram illustrated inFIG. 1 . A Management Program (MP) 811 component comprises aManager 814 and aDriver 813. The Driver component provides a set of API's to provide journaling functions implemented instorage system 100 in accordance with this aspect of the invention. The Manager component represents an example of an application program that uses the API's provided by the Driver component. As will be discussed below,user applications 112 can use parts of the API provided by the Driver. Following is a usage exemplar, illustrating the basic functionality provided by an API in accordance with the present invention. - The
Manager component 814 can be configured to monitor operations on all or parts of a data volume (production data store) 101 such as a database, a directory, one or more files, or other objects of the file system. A user can be provided with access to the Manager via a suitable interface; e.g., command line interface, GUI, etc. The user can interact with the Manager to specify objects and operations on those objects to be monitored. When the Manager detects a specified operation on the object, it calls an appropriate marker journal function via the API to create a marker journal to mark the event or action. Among other things, the marker journal can include information such as a filename, the detected operation, the name of the host 110, and a timestamp. - The
Driver component 813 can interact with thestorage system 100 accordingly to create the marker. In response, thestorage system 100 creates the marker journal in the same manner as discussed above for journal entries associated with write operations. Referring for a moment toFIG. 2 , the journal type field (JH_TYPE) 218 can be set to MARKER to indicate that journal entry is a marker journal. Journal entries associated with write operations would have a field value of INTERNAL. Any information that is associated with the marker journal entry can be stored in the journal data area of the journal entry. -
FIG. 9 illustrates an example for implementing an API based on astorage system 100 that implements the SCSI (small computer system interface) standard. A special device, referred to herein as a command device (CMD) 902, can be defined in thestorage system 100. When theDriver component 813 issues a read request or a write request to the CMD device, thestorage system 100 can intercept the request and treat it as a special command. For example, a write request to the CMD device can contain data (write data) that indicates a function relating to a marker journal such as creating a marker journal. Other functions will be discussed below. The write data can include marker information such as time range, filename, operation, and so on. - With a write command, the
Manager component 814 can also specify to read special information from thestorage system 100. In this case, the write command indicates information to be read, and following a read command to theCMD device 902 actually reads the information. Thus, for example, a pair of write and read requests to the CMD device can be used to retrieve a marker journal entry and the data associated with the marker journal. - An alternative implementation is to extend the SCSI command set. For example, the SCSI standard allows developers to extend the SCSI common command set (CCS) which describes the core set of commands supported by SCSI. Thus, special commands can be defined to provide the API functionality. From these implementation examples, one of ordinary skill in the relevant arts can readily appreciate that other implementations are possible.
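One way to picture the CMD-device convention is sketched below. The patent does not define a wire format, so the opcode values and the JSON payload encoding are invented here purely for illustration.

```python
import json
import struct

# Hypothetical opcodes for marker functions carried in CMD-device writes.
OP_GENERATE_MARKER = 1
OP_GET_MARKER = 2

def build_cmd_write(opcode, params):
    """Pack one write payload for the CMD device 902: a 4-byte big-endian
    opcode followed by JSON-encoded marker information (time range,
    filename, operation, and so on)."""
    return struct.pack(">I", opcode) + json.dumps(params).encode()

def parse_cmd_write(payload):
    """Storage-system side: intercept the write and decode the request."""
    (opcode,) = struct.unpack(">I", payload[:4])
    return opcode, json.loads(payload[4:].decode())
```

A GET MARKER exchange would then be a write built this way, followed by a read of the CMD device that actually returns the selected marker data.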
-
FIG. 10 illustrates the interaction among the components shown inFIG. 1 . Auser 1002 on thehost 110 can interact via a suitable API with theManager component 814 or directly with theDriver component 813 to manipulate marker journals. The user can be an application level user or a system administrator. The “user” can be a machine that is suitably interfaced to the Manager component and/or the Driver component. - The
Manager component 814 can provide itsown API 814 a to theuser 1002. The functions provided by this API can be similar to the marker journal functions provided by theAPI 813 a of theDriver component 813. However, since the Manager component provides a higher level of functionality, its API is likely to include functions not needed for managing marker journals. It can be appreciated that in other embodiments of the invention, a single API can be defined which includes the functionality of API's 813 a and 814 a. - The
Driver component 813 communicates with thestorage system 100 to initiate the desired action. As illustrated inFIG. 10 , typical actions include, among others, generating marker journals, periodically retrieving journal entries, and recovery using marker journals. - Following is a list of functions provided by the API's according to an embodiment of the present invention:
- GENERATE MARKER
-
- This function will generate a marker journal entry. This function can be invoked by the user or by the Manager component 114 to generate a marker journal. The following information can be provided:
- 1. operation—this specifies a data operation that is being performed on the object; e.g., deletion, re-format, closing a file, renaming, etc. It is possible that no data operation is specified. The user may simply wish to create a marker journal to identify the data state of the
data volume 101 at some point in time. - 2. timestamp
- 3. object name, e.g., filename, volume name, a database identifier, etc.
- 4. hostname
- 5. host IP Address
- 6. comments
- 1. operation—this specifies a data operation that is being performed on the object; e.g., deletion, re-format, closing a file, renaming, etc. It is possible that no data operation is specified. The user may simply wish to create a marker journal to identify the data state of the
- The GENERATE MARKER request is sent through the Driver component 113 to the
storage system 100. The storage system performs the following:- 1. Assign the next number in the
sequence number SEQ 313 to the marker. In addition, a time value can be placed in the JH_TIME 214 field, thus associating a time of creation with the marker journal. - 2. Store the marker on the
journal volume JVOL 106. The accompanying information is stored in thejournal data area 225.
- 1. Assign the next number in the
- The created marker journal entry is now inserted, in timewise sequence, into the list of journal entries.
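The storage-side handling of GENERATE MARKER can be sketched as follows. Journal entries are modeled as dictionaries for illustration; the field set mirrors items 1 through 6 above.

```python
def generate_marker(table, journal, operation=None, object_name=None,
                    hostname=None, host_ip=None, comments=None, now=0.0):
    """Handle a GENERATE MARKER request: assign the next SEQ 313, stamp
    JH_TIME 214, tag the entry MARKER (JH_TYPE 218), and store the
    accompanying information in the journal data area."""
    marker = {
        "JH_SEQ": table["SEQ"],        # step 1: next number in the sequence
        "JH_TIME": now,                # time of creation
        "JH_TYPE": "MARKER",           # write-op entries use INTERNAL instead
        "data": {                      # step 2: kept in journal data area 225
            "operation": operation, "object": object_name,
            "host": hostname, "ip": host_ip, "comments": comments,
        },
    }
    table["SEQ"] += 1
    journal.append(marker)             # inserted in timewise sequence
    return marker
```

Because the marker consumes a sequence number like any other journal entry, it sorts naturally among the write-operation entries.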
- GET MARKER
- Retrieve one or more marker journal entries by specifying at least one or more of the following retrieval criteria:
- 1. time—This can be a range of times, or a single time value. If a single time value is provided, the marker journals prior to the time value or subsequent to the time value can be retrieved. Some convention would be required to specify whether prior-in-time marker journals are obtained, or subsequent-in-time marker journals are obtained; e.g., a “+” sign and a “−” sign can be used.
- 2. object name, e.g., filename, volume name, a database identifier, etc.
- 3. operation—A specific operation can be used to specify which marker journal(s) to obtain.
- Generally, any of the data in the marker journal entry can be used as the retrieval criterion (a). For example, it may be desirable to allow a user to search the “comment” that is stored with the marker journal.
- The following information from the retrieved marker journals can be obtained, although it is understood that any information associated with the marker journal can be obtained.
- sequence number
- timestamp
- other information in
journal data area 225
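A GET MARKER lookup over such entries might be sketched like this. Journal entries are modeled as dictionaries for illustration, and the tuple encoding of the time criterion is one assumed reading of the "+"/"−" sign convention described above.

```python
def get_markers(journal, time_range=None, object_name=None, operation=None):
    """Return marker journal entries matching every supplied criterion.
    time_range may be (t0, t1) for an interval, (t, "+") for markers at
    or after t, or (t, "-") for markers at or before t."""
    matches = []
    for e in journal:
        if e.get("JH_TYPE") != "MARKER":
            continue                       # skip INTERNAL (write-op) entries
        if time_range is not None:
            t, bound = time_range
            if bound == "+":
                if e["JH_TIME"] < t:
                    continue
            elif bound == "-":
                if e["JH_TIME"] > t:
                    continue
            elif not (t <= e["JH_TIME"] <= bound):
                continue
        data = e.get("data", {})
        if object_name is not None and data.get("object") != object_name:
            continue
        if operation is not None and data.get("operation") != operation:
            continue
        matches.append(e)
    return matches
```

Any other stored field, such as the comment, could be matched the same way.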
- READ HEADER
- The next two functions allow a user to see markers stored to the journal volume JVOL 106 at any time. The Driver 813 searches for the markers that a user wants to see. In order to speed up the search, the Driver 813 periodically reads journal headers 219, finds markers, reads journal data 225, and stores them to a file. This makes all the markers available in a file in advance.
- A sequence number is provided to identify which journal header to read next. This is used to calculate the location of the first header.
- The number of journal headers is provided to indicate how many journal headers are to be communicated to the
driver 813.
- The next two function allow a user to see makers stored to the journal volume JVOL106 at any time. The
- READ JOURNAL
- This function reads a journal entry.
- A sequence number is provided to identify which journal header to read next. This is used to calculate the location of the first header.
- The location and length of the journal data are obtained from the JH_JNL 216,
JH_JADR 217 and JH_LEN 213 fields. This information determines how much data is in a given marker journal. - INVOKE RECOVERY
- This invokes a recovery action. A user can invoke recovery using the following parameters:
- timestamp as the recovery target time, or
- sequence number as the recovery target time.
- This function will generate a marker journal entry. This function can be invoked by the user or by the Manager component 114 to generate a marker journal. The following information can be provided:
- Objects can be monitored for certain actions. For example, the
Manager component 814 can be configured to monitor thedata volume 101 for user-specified activity (data operations) to be performed on objects contained in the volume. The object can be the entire volume, a file system or portions of a file system. The object can include application objects such as files, database components, and so on. Activities include, among others, closing a file, removing an object, manipulation (creation, deletion, etc) of symbolic links to files and/or directories, formatting all or a portion of a volume, and so on. - A user can specify which actions to detect. When the
Manager 814 detects a specified operation, the Manager can issue a GENERATE MARKER request to mark the event. Similarly, the user can specify an action or actions to be performed on an object or objects. When the Manager detects a specified action on a specified object, a GENERATE MARKER request can be issued to mark the occurrence of that event. - The user can also mark events that take place within the
data volume 101. For example, when the user shuts down the system, she might issue a SYNC command (in the case of a UNIX OS) to sync the file system and also invoke the GENERATE MARKER command to mark the event of syncing the file system. She might mark the event of booting up the system. It can be appreciated that the Manager component 114 can be configured to detect and automatically act on these events as well. It is observed that an event can be marked before or after the occurrence of the event. For example, the actions of deleting a file or SYNC'ing a file system are preferably performed prior to marking the action. If a major update of a data file or a database is about to be performed, it might be prudent to create a marker journal before proceeding; this can be referred to as “pre-marking” the event. - The foregoing mechanisms for manipulating marker journals can be used to facilitate recovery. For example, suppose a system administrator configures the
Manager component 814 to mark every “delete” operation that is performed on “file” objects. Each time a user in the host 110 performs a file delete, a marker journal entry can be created (using the GENERATE MARKER command) and stored in the journal volume 106. This operation is a type where it might be desirable to “pre-mark” each such event; that is, a marker journal entry is created prior to carrying out the delete operation to mark a point in time just prior to the operation. Thus, over time, the journal entries contained in the journal volumes will be sprinkled with marker journal entries identifying points in time prior to each file deletion operation. - If a user later wishes to recover an inadvertently deleted file, the marker journals can be used to find a suitable recovery point. For example, the user is likely to know roughly when he deleted a file. A GET MARKER command that specifies a time prior to the estimated time of deletion and further specifies an operation of “delete” on objects of “file” with the name of the deleted file as an object can be issued to the
storage system 100. The matching marker journal entry is then retrieved. This journal entry identifies a point in time prior to the delete operation, and can then serve as the recovery point for a subsequent recovery operation. As can be seen in FIG. 2, all journal entries, including marker journals, have a sequence number. Thus, the sequence number of the retrieved marker journal entry can be used to determine the latest journal entry just prior to the deletion action. A suitable snapshot is obtained and updated with journal entries of type INTERNAL, up to the latest journal entry. At that point, the data state of the volume reflects the time just before the file was deleted, thus allowing for the deleted file to be restored. -
FIG. 11 illustrates recovery processing according to an illustrative embodiment of the present invention. The storage system 100 determines in a step 1110 whether recovery is possible. A snapshot must have been taken between the oldest journal entry and latest journal entry. As discussed above, every snapshot has a sequence number taken from the same sequence of numbers used for the journal entries. The sequence number can be used to identify a suitable snapshot. If the sequence number of a candidate snapshot is greater than that of the oldest journal and smaller than that of the latest journal, then the snapshot is suitable. - Then in a
step 1120, the recovery volume is set to an offline state. The term “recovery volume” is used in a generic sense to refer to one or more volumes on which the data recovery process is being performed. In the context of the present invention, “offline” is taken to mean that the user, and more generally the host device 110, cannot access the recovery volume. For example, in the case that the production volume is being used as the recovery volume, it is likely to be desirable that the host 110 be prevented at least from issuing write operations to the volume. Also, the host typically will not be permitted to perform read operations. Of course, the storage system itself has full access to the recovery volume in order to perform the recovery task. - In a
step 1130, the snapshot is copied to the recovery volume in preparation for the recovery operation. The production volume itself can be the recovery volume. However, it can be appreciated that the recovery manager 111 can allow the user to specify a volume other than the production volume to serve as the target of the data recovery operation. For example, the recovery volume can be the volume on which the snapshot is stored. Using a volume other than the production volume to perform the recovery operation may be preferred where it is desirable to provide continued use of the production volume. - In a
step 1140, one or more journal entries are applied to update the snapshot volume in the manner discussed previously. Enough journal entries are applied to update the snapshot to a point in time just prior to the occurrence of the file deletion. At that point the recovery volume can be brought “online.” In the context of the present invention, the “online” state is taken to mean that the host device 110 is given access to the recovery volume. - Referring again to
FIG. 10, according to another aspect of the invention, periodic retrievals of marker journal entries can be made and stored locally in the host 110 using the GET MARKER command and specifying suitable criteria. For example, the Driver component 813 might periodically issue a GET MARKER for “delete” operations performed on “file” objects. Other retrieval criteria can be specified. Having a locally accessible copy of certain marker journals avoids the delay of retrieving marker journals one at a time from the storage system 100. This can greatly speed up a search for a recovery point. - From the foregoing, it can be appreciated that the API definition can be readily extended to provide additional functionality. The disclosed embodiments typically can be provided using a combination of hardware and software implementations; e.g., combinations of software, firmware, and/or custom logic such as ASICs (application-specific ICs) are possible. One of ordinary skill can readily appreciate that the underlying technical implementation will be determined based on factors including but not limited to system cost, system performance, the existence of legacy software and legacy hardware, operating environment, and so on. The disclosed embodiments can be readily reduced to specific implementations without undue experimentation by those of ordinary skill in the relevant art.
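As a non-authoritative sketch of the pre-marking and recovery-point search described above: the journal-entry format and the generate_marker/get_marker functions standing in for the GENERATE MARKER and GET MARKER commands are assumptions, not part of the disclosure.

```python
import itertools

journal = []               # stand-in for entries on the journal volume 106
_seq = itertools.count(1)  # shared sequence numbering for journal entries

def generate_marker(operation: str, obj: str, when: float) -> int:
    """GENERATE MARKER stand-in: append a marker journal entry."""
    seq = next(_seq)
    journal.append({"seq": seq, "type": "MARKER",
                    "op": operation, "object": obj, "time": when})
    return seq

def delete_file(files: dict, name: str, when: float) -> None:
    """Pre-mark the event, then carry out the delete."""
    generate_marker("delete", name, when)  # the marker precedes the operation
    del files[name]

def get_marker(op: str, obj: str, before: float):
    """GET MARKER stand-in: latest matching marker prior to `before`."""
    hits = [e for e in journal
            if e["type"] == "MARKER" and e["op"] == op
            and e["object"] == obj and e["time"] < before]
    return max(hits, key=lambda e: e["seq"]) if hits else None
```

The sequence number of the retrieved marker then identifies the journal position just prior to the deletion, from which a recovery point can be chosen.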
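The recovery processing of FIG. 11 can be sketched similarly. The snapshot and journal-entry representations below are assumptions; only the sequence-number suitability test of step 1110 and the apply-entries-up-to-the-marker behavior of steps 1130-1140 follow the text.

```python
def snapshot_is_suitable(snap_seq: int, oldest_seq: int, latest_seq: int) -> bool:
    """Step 1110: recovery is possible only if the snapshot's sequence
    number lies between the oldest and latest journal entries."""
    return oldest_seq < snap_seq < latest_seq

def recover(snapshot: dict, entries: list, marker_seq: int) -> dict:
    """Steps 1120-1140: with the recovery volume offline (implicit here),
    copy the snapshot to it, then apply INTERNAL journal entries up to,
    but not including, the pre-mark's sequence number."""
    volume = dict(snapshot)                   # step 1130: snapshot copy
    for e in sorted(entries, key=lambda e: e["seq"]):
        if e["seq"] >= marker_seq:            # stop just before the marked event
            break
        if e["type"] == "INTERNAL":           # marker journals are never applied
            volume[e["addr"]] = e["data"]
    return volume                             # volume can now be brought online
```

Stopping at the marker's sequence number leaves the volume in the data state just before the marked operation, which is exactly the recovery point established by pre-marking.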
Claims (30)
1. A method for accessing data contained in a data store comprising:
detecting a user-request to perform an operation on an object stored in a data store and in response thereto communicating a request to the data store to perform the operation and communicating a marker request to the data store, the marker request including information indicative of the operation and the object, wherein the marker request produces a marker journal entry;
detecting a user-request to retrieve a specified marker journal entry and in response thereto communicating a request to the data store to retrieve the specified marker journal entry; and
detecting a user-request to perform a recovery operation and in response thereto communicating a recovery request to the data store to restore a data state of the data store, the user-request including information including a target time of the data state, the target time being based on a time associated with a previously retrieved marker journal entry.
2. The method of claim 1 wherein the user-request to retrieve a specified marker journal entry includes information indicating at least one of a target time, an operation, and an object name.
3. The method of claim 1 further comprising obtaining the previously retrieved marker journal entry based on one of an operation on an object and an object name.
4. The method of claim 1 further comprising retrieving a plurality of marker journal entries and presenting one or more of the marker journal entries to a user, wherein the previously retrieved marker journal entry is a user selected one of the marker journal entries.
5. The method of claim 1 wherein the marker journal entries are retrieved periodically over a span of time.
6. A method for processing data on a data store comprising:
receiving user-requests for operations to be performed on a data store;
for each user-request, communicating one or more requests to the data store to perform the user-request;
monitoring the user-requests; and
if a user-request is a predetermined operation, then communicating a marker journal request to the data store in addition to communicating the one or more requests, thereby creating a marker journal entry to mark a time of occurrence of the predetermined operation,
wherein the marker journal request includes information representative of the predetermined operation,
wherein communicating a marker journal request includes invoking first application program interface (API) program code to transmit the marker journal request to the data store.
7. The method of claim 6 further comprising receiving a user-request to retrieve a marker journal entry and in response thereto communicating a marker retrieval request to the data store, wherein the marker retrieval request includes one or more retrieval criteria, wherein the communicating includes invoking second API program code to transmit the marker retrieval request to the data store.
8. The method of claim 7 further comprising receiving a retrieved marker journal entry from the data store and storing the retrieved marker journal entry, wherein the retrieved marker journal entry satisfies the one or more retrieval criteria.
9. The method of claim 8 further comprising communicating additional marker retrieval requests to the data store and storing additional retrieved marker journal entries.
10. The method of claim 6 further comprising receiving user-information indicative of one or more predetermined operations to be monitored.
11. A method for processing data contained in a data store comprising:
receiving user-requests for operations to be performed on a data store;
for each user-request, communicating one or more associated requests to the data store to perform the user-request;
for at least some of the user-requests, communicating a marker journal request to the data store in addition to communicating the one or more associated requests, thereby creating one or more marker journal entries to mark a time of occurrence of some of the user-requests;
retrieving one or more first marker journal entries from the data store, based on one or more retrieval criteria;
displaying the first marker journal entries;
receiving a user-selected one of the first marker journal entries; and
performing a recovery operation based on a target time associated with the user-selected one of the first marker journal entries.
12. The method of claim 11 wherein communicating a marker journal request includes invoking first API program code to communicate with the data store.
13. The method of claim 12 wherein retrieving one or more first marker journal entries includes performing one or more invocations of second API program code to communicate with the data store.
14. The method of claim 13 wherein performing a recovery operation includes performing one or more invocations of third API program code to communicate with the data store.
15. The method of claim 11 further comprising receiving user-information representative of the at least some of the user-requests.
16. The method of claim 15 wherein the user-information includes one or more of an operation to be performed in the data store and an object contained in the data store.
17. A method for processing data in a data store comprising:
producing one or more snapshots of a data store;
detecting write requests directed to the data store and in response thereto producing journal entries corresponding to the write requests, wherein the journal entries can be applied to one of the snapshots to recreate one or more data states of the data store;
detecting a marker request and in response thereto producing a marker journal entry, wherein the journal entries and the marker journal entries are ordered according to the time of their respective write requests and marker requests;
detecting a request to retrieve a specified marker journal entry and in response thereto accessing the specified marker journal entry; and
detecting a request to perform a recovery operation, the request including a target time based on a time associated with a previously retrieved marker journal entry.
18. The method of claim 17 further comprising assigning a sequence number to each journal entry and to the marker journal entry in the order in which the entries are produced.
19. The method of claim 17 wherein the marker request is detected as part of performing a predetermined operation on an object stored on the data store.
20. Computer apparatus for processing data contained in a data store comprising:
a data processing component;
a communication component configured to communicate between a host device and a data store; and
computer program code configured to operate one or more of the data processing component and the communication component to perform steps of:
communicating marker journal requests to the data store, to create a plurality of marker journals;
communicating marker retrieval requests to the data store, to retrieve one or more of the marker journal entries; and
communicating a data recovery request to the data store, to perform a recovery operation to recover a data state in the data store;
wherein the computer program code is configured as an application programming interface (API) to allow an application program to perform one or more of the steps of communicating.
21. The computer apparatus of claim 20 wherein each marker journal request includes information indicative of one of an object contained in the data store and an operation to be performed on an object contained in the data store.
22. The computer apparatus of claim 20 wherein the marker retrieval requests are based on information associated with the marker journal entries.
23. The computer apparatus of claim 20 wherein the data recovery request includes a target time indicative of the data state to be recovered.
24. The computer apparatus of claim 23 wherein the target time is based on a time associated with a previously retrieved marker journal entry.
25. A computer program product for processing data on a data store comprising:
a storage component having stored therein computer program code,
the computer program code comprising an application program interface (API), the API comprising:
a first API component configured to allow execution of first program code, the first program code configured to communicate a marker journal request to a data store to create a marker journal entry, the marker journal request including marker information indicative of one or more of an object contained in the data store and an operation on an object contained in the data store, the marker information being associated with the marker journal entry;
a second API component configured to allow execution of second program code, the second program code configured to communicate a marker retrieval request to the data store to retrieve at least one marker journal entry, the marker retrieval request including retrieval criteria based on the marker information; and
a third API component configured to allow execution of third program code, the third program code configured to communicate a recovery request to the data store to recover a data state of the data store.
26. The computer program product of claim 25 wherein the recovery request includes a target time that is based on a time associated with a previously retrieved marker journal entry.
27. The computer program product of claim 25 wherein the API further comprises a fourth API component configured to allow execution of fourth program code, the fourth program code configured to monitor one or more operations on one or more objects contained in the data store.
28. The computer program product of claim 27 wherein the API further comprises a fifth API component configured to allow execution of fifth program code, the fifth program code configured to communicate a marker retrieval request to the data store to retrieve a marker journal entry.
29. The computer program product of claim 28 wherein the fifth program code is further configured to communicate a plurality of marker retrieval requests to retrieve a plurality of retrieved marker journal entries, wherein the recovery request includes a target time that is based on a time associated with one of the retrieved marker journal entries.
30. The computer program product of claim 27 wherein the API further comprises:
a fifth API component configured to allow execution of fifth program code, the fifth program code configured to communicate a plurality of marker retrieval requests to the data store to retrieve a plurality of marker journal entries; and
a sixth API component configured to allow execution of sixth program code, the sixth program code configured to display the plurality of marker journal entries, wherein the recovery request includes a target time that is based on a time associated with one of the retrieved marker journal entries.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/627,507 US20050022213A1 (en) | 2003-07-25 | 2003-07-25 | Method and apparatus for synchronizing applications for data recovery using storage based journaling |
US11/365,085 US7555505B2 (en) | 2003-07-25 | 2006-02-28 | Method and apparatus for synchronizing applications for data recovery using storage based journaling |
US12/473,415 US8005796B2 (en) | 2003-07-25 | 2009-05-28 | Method and apparatus for synchronizing applications for data recovery using storage based journaling |
US13/181,055 US8296265B2 (en) | 2003-07-25 | 2011-07-12 | Method and apparatus for synchronizing applications for data recovery using storage based journaling |
US13/551,892 US9092379B2 (en) | 2003-06-26 | 2012-07-18 | Method and apparatus for backup and recovery using storage based journaling |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/627,507 US20050022213A1 (en) | 2003-07-25 | 2003-07-25 | Method and apparatus for synchronizing applications for data recovery using storage based journaling |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/365,085 Continuation US7555505B2 (en) | 2003-07-25 | 2006-02-28 | Method and apparatus for synchronizing applications for data recovery using storage based journaling |
Publications (1)
Publication Number | Publication Date |
---|---|
US20050022213A1 true US20050022213A1 (en) | 2005-01-27 |
Family
ID=34080659
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/627,507 Abandoned US20050022213A1 (en) | 2003-06-26 | 2003-07-25 | Method and apparatus for synchronizing applications for data recovery using storage based journaling |
US11/365,085 Expired - Fee Related US7555505B2 (en) | 2003-07-25 | 2006-02-28 | Method and apparatus for synchronizing applications for data recovery using storage based journaling |
US12/473,415 Expired - Fee Related US8005796B2 (en) | 2003-07-25 | 2009-05-28 | Method and apparatus for synchronizing applications for data recovery using storage based journaling |
US13/181,055 Expired - Fee Related US8296265B2 (en) | 2003-07-25 | 2011-07-12 | Method and apparatus for synchronizing applications for data recovery using storage based journaling |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/365,085 Expired - Fee Related US7555505B2 (en) | 2003-07-25 | 2006-02-28 | Method and apparatus for synchronizing applications for data recovery using storage based journaling |
US12/473,415 Expired - Fee Related US8005796B2 (en) | 2003-07-25 | 2009-05-28 | Method and apparatus for synchronizing applications for data recovery using storage based journaling |
US13/181,055 Expired - Fee Related US8296265B2 (en) | 2003-07-25 | 2011-07-12 | Method and apparatus for synchronizing applications for data recovery using storage based journaling |
Country Status (1)
Country | Link |
---|---|
US (4) | US20050022213A1 (en) |
Cited By (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050165863A1 (en) * | 2004-01-23 | 2005-07-28 | Atul Mukker | File recovery under Linux operating system |
US20050278382A1 (en) * | 2004-05-28 | 2005-12-15 | Network Appliance, Inc. | Method and apparatus for recovery of a current read-write unit of a file system |
US20070174694A1 (en) * | 2005-11-16 | 2007-07-26 | Hitachi, Ltd. | Data recovery method for computer system |
EP1814032A2 (en) * | 2006-01-31 | 2007-08-01 | Hitachi, Ltd. | Recovery for storage system |
EP1845449A2 (en) | 2006-04-14 | 2007-10-17 | Hitachi, Ltd. | System and method for processing a plurality kinds of event markers of a continuous data protection |
US20070271429A1 (en) * | 2006-05-18 | 2007-11-22 | Hitachi, Ltd. | Storage System and method of producing recovery volume |
US20070271422A1 (en) * | 2006-05-19 | 2007-11-22 | Nobuyuki Osaki | Method and apparatus for data recovery |
US20070294274A1 (en) * | 2006-06-19 | 2007-12-20 | Hitachi, Ltd. | System and method for managing a consistency among volumes in a continuous data protection environment |
US20070300013A1 (en) * | 2006-06-21 | 2007-12-27 | Manabu Kitamura | Storage system having transaction monitoring capability |
US20080027998A1 (en) * | 2006-07-27 | 2008-01-31 | Hitachi, Ltd. | Method and apparatus of continuous data protection for NAS |
US20080091744A1 (en) * | 2006-10-11 | 2008-04-17 | Hidehisa Shitomi | Method and apparatus for indexing and searching data in a storage system |
US20080162840A1 (en) * | 2007-01-03 | 2008-07-03 | Oliver Augenstein | Methods and infrastructure for performing repetitive data protection and a corresponding restore of data |
US20080168218A1 (en) * | 2007-01-05 | 2008-07-10 | Hitachi, Ltd. | Backup system with continuous data protection |
US20090037482A1 (en) * | 2007-08-01 | 2009-02-05 | Hitachi, Ltd. | Method and apparatus for achieving consistency of files in continuous data protection |
US8112398B1 (en) * | 2007-06-28 | 2012-02-07 | Emc Corporation | Methods, systems, and computer program products for selectively marking and retrieving data from an event log file |
WO2016069423A1 (en) * | 2014-10-28 | 2016-05-06 | Microsoft Technology Licensing, Llc | Point in time database restore from storage snapshots |
US20160259559A1 (en) * | 2014-05-12 | 2016-09-08 | Hitachi, Ltd. | Storage system and control method thereof |
US9547560B1 (en) * | 2015-06-26 | 2017-01-17 | Amazon Technologies, Inc. | Amortized snapshots |
EP3159796A1 (en) * | 2015-09-21 | 2017-04-26 | Zerto Ltd. | System and method for generating backups of a protected system from a recovery system |
US10228962B2 (en) | 2015-12-09 | 2019-03-12 | Commvault Systems, Inc. | Live synchronization and management of virtual machines across computing and virtualization platforms and using live synchronization to support disaster recovery |
US10387266B2 (en) * | 2015-12-23 | 2019-08-20 | Commvault Systems, Inc. | Application-level live synchronization across computing platforms including synchronizing co-resident applications to disparate standby destinations and selectively synchronizing some applications and not others |
US10423493B1 (en) * | 2015-12-21 | 2019-09-24 | Amazon Technologies, Inc. | Scalable log-based continuous data protection for distributed databases |
US10567500B1 (en) | 2015-12-21 | 2020-02-18 | Amazon Technologies, Inc. | Continuous backup of data in a distributed data store |
US10621049B1 (en) | 2018-03-12 | 2020-04-14 | Amazon Technologies, Inc. | Consistent backups based on local node clock |
US10754844B1 (en) | 2017-09-27 | 2020-08-25 | Amazon Technologies, Inc. | Efficient database snapshot generation |
US10831614B2 (en) | 2014-08-18 | 2020-11-10 | Amazon Technologies, Inc. | Visualizing restoration operation granularity for a database |
US10922319B2 (en) * | 2017-04-19 | 2021-02-16 | Ebay Inc. | Consistency mitigation techniques for real-time streams |
US10990581B1 (en) | 2017-09-27 | 2021-04-27 | Amazon Technologies, Inc. | Tracking a size of a database change log |
US11042454B1 (en) | 2018-11-20 | 2021-06-22 | Amazon Technologies, Inc. | Restoration of a data source |
US11042503B1 (en) | 2017-11-22 | 2021-06-22 | Amazon Technologies, Inc. | Continuous data protection and restoration |
US11126505B1 (en) | 2018-08-10 | 2021-09-21 | Amazon Technologies, Inc. | Past-state backup generator and interface for database systems |
US11182372B1 (en) | 2017-11-08 | 2021-11-23 | Amazon Technologies, Inc. | Tracking database partition change log dependencies |
US11269731B1 (en) | 2017-11-22 | 2022-03-08 | Amazon Technologies, Inc. | Continuous data protection |
US11269737B2 (en) * | 2019-09-16 | 2022-03-08 | Microsoft Technology Licensing, Llc | Incrementally updating recovery map data for a memory system |
US11327663B2 (en) | 2020-06-09 | 2022-05-10 | Commvault Systems, Inc. | Ensuring the integrity of data storage volumes used in block-level live synchronization operations in a data storage management system |
US11385969B2 (en) | 2009-03-31 | 2022-07-12 | Amazon Technologies, Inc. | Cloning and recovery of data volumes |
US11755415B2 (en) | 2014-05-09 | 2023-09-12 | Amazon Technologies, Inc. | Variable data replication for storage implementing data backup |
US12229011B2 (en) | 2019-09-18 | 2025-02-18 | Amazon Technologies, Inc. | Scalable log-based continuous data protection for distributed databases |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6746483B1 (en) * | 2000-03-16 | 2004-06-08 | Smith & Nephew, Inc. | Sheaths for implantable fixation devices |
US20050022213A1 (en) | 2003-07-25 | 2005-01-27 | Hitachi, Ltd. | Method and apparatus for synchronizing applications for data recovery using storage based journaling |
US7111136B2 (en) * | 2003-06-26 | 2006-09-19 | Hitachi, Ltd. | Method and apparatus for backup and recovery system using storage based journaling |
FI20035235A0 (en) * | 2003-12-12 | 2003-12-12 | Nokia Corp | Arrangement for processing files at a terminal |
US7716260B2 (en) * | 2004-12-16 | 2010-05-11 | Oracle International Corporation | Techniques for transaction semantics for a database server performing file operations |
US8615482B1 (en) * | 2005-06-20 | 2013-12-24 | Symantec Operating Corporation | Method and apparatus for improving the utilization of snapshots of server data storage volumes |
US7809675B2 (en) * | 2005-06-29 | 2010-10-05 | Oracle International Corporation | Sharing state information among a plurality of file operation servers |
JP2007219609A (en) * | 2006-02-14 | 2007-08-30 | Hitachi Ltd | Snapshot management device and method |
US7644308B2 (en) * | 2006-03-06 | 2010-01-05 | Hewlett-Packard Development Company, L.P. | Hierarchical timestamps |
US7809778B2 (en) * | 2006-03-08 | 2010-10-05 | Omneon Video Networks | Idempotent journal mechanism for file system |
US9417969B2 (en) * | 2010-05-13 | 2016-08-16 | Sony Corporation | Distributed network backup of multimedia files |
US20130007028A1 (en) * | 2011-06-29 | 2013-01-03 | International Business Machines Corporation | Discovering related files and providing differentiating information |
US9652520B2 (en) | 2013-08-29 | 2017-05-16 | Oracle International Corporation | System and method for supporting parallel asynchronous synchronization between clusters in a distributed data grid |
WO2016077570A1 (en) * | 2014-11-13 | 2016-05-19 | Virtual Software Systems, Inc. | System for cross-host, multi-thread session alignment |
US10853182B1 (en) * | 2015-12-21 | 2020-12-01 | Amazon Technologies, Inc. | Scalable log-based secondary indexes for non-relational databases |
US10353813B2 (en) | 2016-06-29 | 2019-07-16 | Western Digital Technologies, Inc. | Checkpoint based technique for bootstrapping forward map under constrained memory for flash devices |
US10235287B2 (en) | 2016-06-29 | 2019-03-19 | Western Digital Technologies, Inc. | Efficient management of paged translation maps in memory and flash |
US10175896B2 (en) | 2016-06-29 | 2019-01-08 | Western Digital Technologies, Inc. | Incremental snapshot based technique on paged translation systems |
US11216361B2 (en) | 2016-06-29 | 2022-01-04 | Western Digital Technologies, Inc. | Translation lookup and garbage collection optimizations on storage system with paged translation table |
US10229048B2 (en) | 2016-06-29 | 2019-03-12 | Western Digital Technologies, Inc. | Unified paging scheme for dense and sparse translation tables on flash storage systems |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5085502A (en) * | 1987-04-30 | 1992-02-04 | Eastman Kodak Company | Method and apparatus for digital morie profilometry calibrated for accurate conversion of phase information into distance measurements in a plurality of directions |
US5404508A (en) * | 1992-12-03 | 1995-04-04 | Unisys Corporation | Data base backup and recovery system and method |
US6301877B1 (en) * | 1995-11-13 | 2001-10-16 | United Technologies Corporation | Ejector extension cooling for exhaust nozzle |
US20050193031A1 (en) * | 1999-12-16 | 2005-09-01 | Livevault Corporation | Systems and methods for backing up data files |
US6981114B1 (en) * | 2002-10-16 | 2005-12-27 | Veritas Operating Corporation | Snapshot reconstruction from an existing snapshot and one or more modification logs |
Family Cites Families (94)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US588242A (en) * | 1897-08-17 | John j | ||
US4077059A (en) | 1975-12-18 | 1978-02-28 | Cordi Vincent A | Multi-processing system with a hierarchial memory having journaling and copyback |
US4823261A (en) | 1986-11-24 | 1989-04-18 | International Business Machines Corp. | Multiprocessor system for updating status information through flip-flopping read version and write version of checkpoint data |
US5065311A (en) | 1987-04-20 | 1991-11-12 | Hitachi, Ltd. | Distributed data base system of composite subsystem type, and method fault recovery for the system |
GB8915875D0 (en) * | 1989-07-11 | 1989-08-31 | Intelligence Quotient United K | A method of operating a data processing system |
JPH03103941A (en) | 1989-09-18 | 1991-04-30 | Nec Corp | Automatic commitment control system |
US5479654A (en) | 1990-04-26 | 1995-12-26 | Squibb Data Systems, Inc. | Apparatus and method for reconstructing a file from a difference signature and an original file |
US6816872B1 (en) | 1990-04-26 | 2004-11-09 | Timespring Software Corporation | Apparatus and method for reconstructing a file from a difference signature and an original file |
US5369757A (en) | 1991-06-18 | 1994-11-29 | Digital Equipment Corporation | Recovery logging in the presence of snapshot files by ordering of buffer pool flushing |
JPH052517A (en) | 1991-06-26 | 1993-01-08 | Nec Corp | Data base journal control system |
US5701480A (en) | 1991-10-17 | 1997-12-23 | Digital Equipment Corporation | Distributed multi-version commitment ordering protocols for guaranteeing serializability during transaction processing |
US5263154A (en) | 1992-04-20 | 1993-11-16 | International Business Machines Corporation | Method and system for incremental time zero backup copying of data |
JPH0827754B2 (en) | 1992-05-21 | 1996-03-21 | インターナショナル・ビジネス・マシーンズ・コーポレイション | File management method and file management system in computer system |
US5416915A (en) | 1992-12-11 | 1995-05-16 | International Business Machines Corporation | Method and system for minimizing seek affinity and enhancing write sensitivity in a DASD array |
US5555371A (en) | 1992-12-17 | 1996-09-10 | International Business Machines Corporation | Data backup copying with delayed directory updating and reduced numbers of DASD accesses at a back up site using a log structured array data storage |
ATE153149T1 (en) | 1993-01-21 | 1997-05-15 | Apple Computer | DEVICE AND METHOD FOR DATA BACKUP OF STORAGE UNITS IN A COMPUTER NETWORK |
JP3250156B2 (en) | 1993-01-21 | 2002-01-28 | アップル コンピューター インコーポレーテッド | Method and apparatus for data transfer and data storage in a highly parallel computer network environment |
JPH0869404A (en) | 1994-08-29 | 1996-03-12 | Fujitsu Ltd | Data backup method and data processing apparatus using the same |
US5835953A (en) | 1994-10-13 | 1998-11-10 | Vinca Corporation | Backup system that takes a snapshot of the locations in a mass storage device that has been identified for updating prior to updating |
US5644696A (en) | 1995-06-06 | 1997-07-01 | International Business Machines Corporation | Recovering multi-volume data sets during volume recovery |
US5720029A (en) | 1995-07-25 | 1998-02-17 | International Business Machines Corporation | Asynchronously shadowing record updates in a remote copy session using track arrays |
US5680640A (en) | 1995-09-01 | 1997-10-21 | Emc Corporation | System for migrating data by selecting a first or second transfer means based on the status of a data element map initialized to a predetermined state |
US5870758A (en) | 1996-03-11 | 1999-02-09 | Oracle Corporation | Method and apparatus for providing isolation levels in a database system |
US6959387B2 (en) * | 1996-03-21 | 2005-10-25 | Walker Digital, Llc | Method and apparatus for verifying secure document timestamping |
US6889214B1 (en) | 1996-10-02 | 2005-05-03 | Stamps.Com Inc. | Virtual security device |
CA2221216A1 (en) * | 1996-11-15 | 1998-05-15 | Mark Squibb | System and apparatus for merging a write event journal and an original storage to produce an updated storage using an event map |
US6081875A (en) | 1997-05-19 | 2000-06-27 | Emc Corporation | Apparatus and method for backup of a disk storage system |
US6490610B1 (en) | 1997-05-30 | 2002-12-03 | Oracle Corporation | Automatic failover for clients accessing a resource through a server |
US5991772A (en) * | 1997-10-31 | 1999-11-23 | Oracle Corporation | Method and apparatus for restoring a portion of a database |
US6128630A (en) | 1997-12-18 | 2000-10-03 | International Business Machines Corporation | Journal space release for log-structured storage systems |
JPH11272427A (en) | 1998-03-24 | 1999-10-08 | Hitachi Ltd | Method for saving data and outside storage device |
US6324654B1 (en) | 1998-03-30 | 2001-11-27 | Legato Systems, Inc. | Computer network remote data mirroring system |
US6154852A (en) | 1998-06-10 | 2000-11-28 | International Business Machines Corporation | Method and apparatus for data backup and recovery |
JPH11353215A (en) | 1998-06-11 | 1999-12-24 | Nec Corp | Journal-after-update collecting process system |
US6189016B1 (en) * | 1998-06-12 | 2001-02-13 | Microsoft Corporation | Journaling ordered changes in a storage volume |
US6269381B1 (en) | 1998-06-30 | 2001-07-31 | Emc Corporation | Method and apparatus for backing up data before updating the data and for restoring from the backups |
US6298345B1 (en) | 1998-07-10 | 2001-10-02 | International Business Machines Corporation | Database journal mechanism and method that supports multiple simultaneous deposits |
US6353878B1 (en) | 1998-08-13 | 2002-03-05 | Emc Corporation | Remote control of backup media in a secondary storage subsystem through access to a primary storage subsystem |
US6269431B1 (en) | 1998-08-13 | 2001-07-31 | Emc Corporation | Virtual storage and block level direct access of secondary storage for recovery of backup data |
US6260124B1 (en) | 1998-08-13 | 2001-07-10 | International Business Machines Corporation | System and method for dynamically resynchronizing backup data |
US6397351B1 (en) | 1998-09-28 | 2002-05-28 | International Business Machines Corporation | Method and apparatus for rapid data restoration including on-demand output of sorted logged changes |
JP2000155708A (en) | 1998-11-24 | 2000-06-06 | Nec Corp | Automatic monitoring method for use state of journal file |
JP2000284987A (en) | 1999-03-31 | 2000-10-13 | Fujitsu Ltd | Computer, computer network system and recording medium |
US6829819B1 (en) | 1999-05-03 | 2004-12-14 | Western Digital (Fremont), Inc. | Method of forming a magnetoresistive device |
US7099875B2 (en) | 1999-06-29 | 2006-08-29 | Emc Corporation | Method and apparatus for making independent data copies in a data processing system |
US6539462B1 (en) | 1999-07-12 | 2003-03-25 | Hitachi Data Systems Corporation | Remote data copy using a prospective suspend command |
TW454120B (en) | 1999-11-11 | 2001-09-11 | Miralink Corp | Flexible remote data mirroring |
US7203732B2 (en) | 1999-11-11 | 2007-04-10 | Miralink Corporation | Flexible remote data mirroring |
US6560614B1 (en) | 1999-11-12 | 2003-05-06 | Xosoft Inc. | Nonintrusive update of files |
US6711409B1 (en) | 1999-12-15 | 2004-03-23 | Bbnt Solutions Llc | Node belonging to multiple clusters in an ad hoc wireless network |
JP4115060B2 (en) | 2000-02-02 | 2008-07-09 | 株式会社日立製作所 | Data recovery method for information processing system and disk subsystem |
US7065538B2 (en) * | 2000-02-11 | 2006-06-20 | Quest Software, Inc. | System and method for reconciling transactions between a replication system and a recovered database |
US6473775B1 (en) | 2000-02-16 | 2002-10-29 | Microsoft Corporation | System and method for growing differential file on a base volume of a snapshot |
US6587970B1 (en) | 2000-03-22 | 2003-07-01 | Emc Corporation | Method and apparatus for performing site failover |
US6971018B1 (en) * | 2000-04-28 | 2005-11-29 | Microsoft Corporation | File protection service for a computer system |
JP3968207B2 (en) | 2000-05-25 | 2007-08-29 | 株式会社日立製作所 | Data multiplexing method and data multiplexing system |
US6711572B2 (en) | 2000-06-14 | 2004-03-23 | Xosoft Inc. | File system for distributing content in a data network and related methods |
US6665815B1 (en) | 2000-06-22 | 2003-12-16 | Hewlett-Packard Development Company, L.P. | Physical incremental backup using snapshots |
US7031986B2 (en) | 2000-06-27 | 2006-04-18 | Fujitsu Limited | Database system with backup and recovery mechanisms |
US6732125B1 (en) | 2000-09-08 | 2004-05-04 | Storage Technology Corporation | Self archiving log structured volume with intrinsic data protection |
US6691245B1 (en) | 2000-10-10 | 2004-02-10 | Lsi Logic Corporation | Data storage with host-initiated synchronization and fail-over of remote mirror |
US6324854B1 (en) * | 2000-11-22 | 2001-12-04 | Copeland Corporation | Air-conditioning servicing system and method |
US7730213B2 (en) | 2000-12-18 | 2010-06-01 | Oracle America, Inc. | Object-based storage device with improved reliability and fast crash recovery |
US6662281B2 (en) | 2001-01-31 | 2003-12-09 | Hewlett-Packard Development Company, L.P. | Redundant backup device |
US6742138B1 (en) | 2001-06-12 | 2004-05-25 | Emc Corporation | Data recovery method and apparatus |
US6745209B2 (en) * | 2001-08-15 | 2004-06-01 | Iti, Inc. | Synchronization of plural databases in a database replication system |
US6978282B1 (en) | 2001-09-04 | 2005-12-20 | Emc Corporation | Information replication system having automated replication storage |
EP1436873B1 (en) | 2001-09-28 | 2009-04-29 | Commvault Systems, Inc. | System and method for generating and managing quick recovery volumes |
US6832289B2 (en) | 2001-10-11 | 2004-12-14 | International Business Machines Corporation | System and method for migrating data |
JP4108973B2 (en) | 2001-12-26 | 2008-06-25 | 株式会社日立製作所 | Backup system |
US7036043B2 (en) * | 2001-12-28 | 2006-04-25 | Storage Technology Corporation | Data management with virtual recovery mapping and backward moves |
US6898688B2 (en) * | 2001-12-28 | 2005-05-24 | Storage Technology Corporation | Data management appliance |
US6839819B2 (en) | 2001-12-28 | 2005-01-04 | Storage Technology Corporation | Data management appliance |
US7237075B2 (en) | 2002-01-22 | 2007-06-26 | Columbia Data Products, Inc. | Persistent snapshot methods |
US20030177306A1 (en) | 2002-03-14 | 2003-09-18 | Cochran Robert Alan | Track level snapshot |
US7225204B2 (en) | 2002-03-19 | 2007-05-29 | Network Appliance, Inc. | System and method for asynchronous mirroring of snapshots at a destination using a purgatory directory and inode mapping |
US7778958B2 (en) | 2002-04-11 | 2010-08-17 | Quantum Corporation | Recovery of data on a primary data volume |
AU2003214624A1 (en) | 2002-04-25 | 2003-11-10 | Kashya Israel Ltd. | An apparatus for continuous compression of large volumes of data |
US20030220935A1 (en) | 2002-05-21 | 2003-11-27 | Vivian Stephen J. | Method of logical database snapshot for log-based replication |
JP2004013367A (en) | 2002-06-05 | 2004-01-15 | Hitachi Ltd | Data storage subsystem |
US7844577B2 (en) | 2002-07-15 | 2010-11-30 | Symantec Corporation | System and method for maintaining a backup storage system for a computer system |
US6842825B2 (en) | 2002-08-07 | 2005-01-11 | International Business Machines Corporation | Adjusting timestamps to preserve update timing information for cached data objects |
US7020755B2 (en) | 2002-08-29 | 2006-03-28 | International Business Machines Corporation | Method and apparatus for read-only recovery in a dual copy storage system |
US8219777B2 (en) | 2002-10-03 | 2012-07-10 | Hewlett-Packard Development Company, L.P. | Virtual storage systems, virtual storage methods and methods of over committing a virtual raid storage system |
CA2508089A1 (en) | 2002-10-07 | 2004-04-22 | Commvault Systems, Inc. | System and method for managing stored data |
US20040153558A1 (en) | 2002-10-31 | 2004-08-05 | Mesut Gunduc | System and method for providing java based high availability clustering framework |
US7007043B2 (en) | 2002-12-23 | 2006-02-28 | Storage Technology Corporation | Storage backup system that creates mountable representations of past contents of storage volumes |
US7010645B2 (en) | 2002-12-27 | 2006-03-07 | International Business Machines Corporation | System and method for sequentially staging received data to a write cache in advance of storing the received data |
US7231544B2 (en) | 2003-02-27 | 2007-06-12 | Hewlett-Packard Development Company, L.P. | Restoring data from point-in-time representations of the data |
US20050039069A1 (en) | 2003-04-03 | 2005-02-17 | Anand Prahlad | Remote disaster data recovery system and method |
US7181476B2 (en) * | 2003-04-30 | 2007-02-20 | Oracle International Corporation | Flashback database |
US20040225689A1 (en) | 2003-05-08 | 2004-11-11 | International Business Machines Corporation | Autonomic logging support |
US7143317B2 (en) | 2003-06-04 | 2006-11-28 | Hewlett-Packard Development Company, L.P. | Computer event log overwriting intermediate events |
US20050022213A1 (en) * | 2003-07-25 | 2005-01-27 | Hitachi, Ltd. | Method and apparatus for synchronizing applications for data recovery using storage based journaling |
- 2003-07-25 US US10/627,507 patent/US20050022213A1/en not_active Abandoned
- 2006-02-28 US US11/365,085 patent/US7555505B2/en not_active Expired - Fee Related
- 2009-05-28 US US12/473,415 patent/US8005796B2/en not_active Expired - Fee Related
- 2011-07-12 US US13/181,055 patent/US8296265B2/en not_active Expired - Fee Related
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5085502A (en) * | 1987-04-30 | 1992-02-04 | Eastman Kodak Company | Method and apparatus for digital morie profilometry calibrated for accurate conversion of phase information into distance measurements in a plurality of directions |
US5404508A (en) * | 1992-12-03 | 1995-04-04 | Unisys Corporation | Data base backup and recovery system and method |
US6301877B1 (en) * | 1995-11-13 | 2001-10-16 | United Technologies Corporation | Ejector extension cooling for exhaust nozzle |
US20050193031A1 (en) * | 1999-12-16 | 2005-09-01 | Livevault Corporation | Systems and methods for backing up data files |
US6981114B1 (en) * | 2002-10-16 | 2005-12-27 | Veritas Operating Corporation | Snapshot reconstruction from an existing snapshot and one or more modification logs |
Cited By (71)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050165863A1 (en) * | 2004-01-23 | 2005-07-28 | Atul Mukker | File recovery under Linux operating system |
US7921082B2 (en) * | 2004-01-23 | 2011-04-05 | Lsi Corporation | File recovery under linux operating system |
US20050278382A1 (en) * | 2004-05-28 | 2005-12-15 | Network Appliance, Inc. | Method and apparatus for recovery of a current read-write unit of a file system |
US7506117B2 (en) | 2005-11-16 | 2009-03-17 | Hitachi, Ltd. | Data recovery method for computer system |
EP1788483A3 (en) * | 2005-11-16 | 2008-04-09 | Hitachi, Ltd. | Data recovery method for computer system |
US20070174694A1 (en) * | 2005-11-16 | 2007-07-26 | Hitachi, Ltd. | Data recovery method for computer system |
US7571348B2 (en) | 2006-01-31 | 2009-08-04 | Hitachi, Ltd. | Storage system creating a recovery request point enabling execution of a recovery |
EP1814032A3 (en) * | 2006-01-31 | 2008-07-02 | Hitachi, Ltd. | Recovery for storage system |
US20090276661A1 (en) * | 2006-01-31 | 2009-11-05 | Akira Deguchi | Storage system creating a recovery request point enabling execution of a recovery |
EP1814032A2 (en) * | 2006-01-31 | 2007-08-01 | Hitachi, Ltd. | Recovery for storage system |
US8327183B2 (en) | 2006-01-31 | 2012-12-04 | Hitachi, Ltd. | Storage system creating a recovery request enabling execution of a recovery and comprising a switch that detects recovery request point events |
JP2007206759A (en) * | 2006-01-31 | 2007-08-16 | Hitachi Ltd | Storage system |
EP1845449A2 (en) | 2006-04-14 | 2007-10-17 | Hitachi, Ltd. | System and method for processing a plurality kinds of event markers of a continuous data protection |
EP1845449A3 (en) * | 2006-04-14 | 2008-09-10 | Hitachi, Ltd. | System and method for processing a plurality kinds of event markers of a continuous data protection |
US20070245107A1 (en) * | 2006-04-14 | 2007-10-18 | Hitachi, Ltd. | System and method for processing a plurality kinds of event markers of a continuous data protection |
EP1860559A3 (en) * | 2006-05-18 | 2008-01-23 | Hitachi, Ltd. | Storage system and method of producing recovery volume |
EP2131284A1 (en) | 2006-05-18 | 2009-12-09 | Hitachi, Ltd. | Storage system and method of producing recovery volume |
US20110055506A1 (en) * | 2006-05-18 | 2011-03-03 | Hitachi, Ltd. | Storage System and Method of Producing Recovery Volume |
US7840766B2 (en) | 2006-05-18 | 2010-11-23 | Hitachi, Ltd. | Storage system and method of producing recovery volume |
US8131962B2 (en) | 2006-05-18 | 2012-03-06 | Hitachi, Ltd. | Storage system and method of producing recovery volume |
US20070271429A1 (en) * | 2006-05-18 | 2007-11-22 | Hitachi, Ltd. | Storage System and method of producing recovery volume |
EP1860559A2 (en) | 2006-05-18 | 2007-11-28 | Hitachi, Ltd. | Storage system and method of producing recovery volume |
US7581136B2 (en) | 2006-05-19 | 2009-08-25 | Hitachi, Ltd. | Method and apparatus for data recovery |
US20070271422A1 (en) * | 2006-05-19 | 2007-11-22 | Nobuyuki Osaki | Method and apparatus for data recovery |
US20070294274A1 (en) * | 2006-06-19 | 2007-12-20 | Hitachi, Ltd. | System and method for managing a consistency among volumes in a continuous data protection environment |
US7647360B2 (en) * | 2006-06-19 | 2010-01-12 | Hitachi, Ltd. | System and method for managing a consistency among volumes in a continuous data protection environment |
US20070300013A1 (en) * | 2006-06-21 | 2007-12-27 | Manabu Kitamura | Storage system having transaction monitoring capability |
US20080027998A1 (en) * | 2006-07-27 | 2008-01-31 | Hitachi, Ltd. | Method and apparatus of continuous data protection for NAS |
US20080091744A1 (en) * | 2006-10-11 | 2008-04-17 | Hidehisa Shitomi | Method and apparatus for indexing and searching data in a storage system |
US20080162840A1 (en) * | 2007-01-03 | 2008-07-03 | Oliver Augenstein | Methods and infrastructure for performing repetitive data protection and a corresponding restore of data |
US20080168218A1 (en) * | 2007-01-05 | 2008-07-10 | Hitachi, Ltd. | Backup system with continuous data protection |
US7747830B2 (en) | 2007-01-05 | 2010-06-29 | Hitachi, Ltd. | Backup system with continuous data protection |
US8112398B1 (en) * | 2007-06-28 | 2012-02-07 | Emc Corporation | Methods, systems, and computer program products for selectively marking and retrieving data from an event log file |
US20090037482A1 (en) * | 2007-08-01 | 2009-02-05 | Hitachi, Ltd. | Method and apparatus for achieving consistency of files in continuous data protection |
US11914486B2 (en) | 2009-03-31 | 2024-02-27 | Amazon Technologies, Inc. | Cloning and recovery of data volumes |
US11385969B2 (en) | 2009-03-31 | 2022-07-12 | Amazon Technologies, Inc. | Cloning and recovery of data volumes |
US11755415B2 (en) | 2014-05-09 | 2023-09-12 | Amazon Technologies, Inc. | Variable data replication for storage implementing data backup |
US20160259559A1 (en) * | 2014-05-12 | 2016-09-08 | Hitachi, Ltd. | Storage system and control method thereof |
US9563383B2 (en) * | 2014-05-12 | 2017-02-07 | Hitachi, Ltd. | Storage system with primary and secondary data storage groups and control method thereof |
US10831614B2 (en) | 2014-08-18 | 2020-11-10 | Amazon Technologies, Inc. | Visualizing restoration operation granularity for a database |
CN107077404A (en) * | 2014-10-28 | 2017-08-18 | 微软技术许可有限责任公司 | From the time point database recovery of storage snapshot |
US9558078B2 (en) | 2014-10-28 | 2017-01-31 | Microsoft Technology Licensing, Llc | Point in time database restore from storage snapshots |
WO2016069423A1 (en) * | 2014-10-28 | 2016-05-06 | Microsoft Technology Licensing, Llc | Point in time database restore from storage snapshots |
US10019184B2 (en) | 2015-06-26 | 2018-07-10 | Amazon Technologies, Inc. | Amortized snapshots |
US9547560B1 (en) * | 2015-06-26 | 2017-01-17 | Amazon Technologies, Inc. | Amortized snapshots |
EP3159796A1 (en) * | 2015-09-21 | 2017-04-26 | Zerto Ltd. | System and method for generating backups of a protected system from a recovery system |
US10949240B2 (en) | 2015-12-09 | 2021-03-16 | Commvault Systems, Inc. | Live synchronization and management of virtual machines across computing and virtualization platforms and using live synchronization to support disaster recovery |
US10228962B2 (en) | 2015-12-09 | 2019-03-12 | Commvault Systems, Inc. | Live synchronization and management of virtual machines across computing and virtualization platforms and using live synchronization to support disaster recovery |
US11803411B2 (en) | 2015-12-09 | 2023-10-31 | Commvault Systems, Inc. | Live synchronization and management of virtual machines across computing and virtualization platforms including in cloud computing environments |
US10423493B1 (en) * | 2015-12-21 | 2019-09-24 | Amazon Technologies, Inc. | Scalable log-based continuous data protection for distributed databases |
US11153380B2 (en) | 2015-12-21 | 2021-10-19 | Amazon Technologies, Inc. | Continuous backup of data in a distributed data store |
US10567500B1 (en) | 2015-12-21 | 2020-02-18 | Amazon Technologies, Inc. | Continuous backup of data in a distributed data store |
US11042446B2 (en) | 2015-12-23 | 2021-06-22 | Commvault Systems, Inc. | Application-level live synchronization across computing platforms such as cloud platforms |
US10387266B2 (en) * | 2015-12-23 | 2019-08-20 | Commvault Systems, Inc. | Application-level live synchronization across computing platforms including synchronizing co-resident applications to disparate standby destinations and selectively synchronizing some applications and not others |
US10922319B2 (en) * | 2017-04-19 | 2021-02-16 | Ebay Inc. | Consistency mitigation techniques for real-time streams |
US10990581B1 (en) | 2017-09-27 | 2021-04-27 | Amazon Technologies, Inc. | Tracking a size of a database change log |
US10754844B1 (en) | 2017-09-27 | 2020-08-25 | Amazon Technologies, Inc. | Efficient database snapshot generation |
US11182372B1 (en) | 2017-11-08 | 2021-11-23 | Amazon Technologies, Inc. | Tracking database partition change log dependencies |
US12210419B2 (en) | 2017-11-22 | 2025-01-28 | Amazon Technologies, Inc. | Continuous data protection |
US11042503B1 (en) | 2017-11-22 | 2021-06-22 | Amazon Technologies, Inc. | Continuous data protection and restoration |
US11269731B1 (en) | 2017-11-22 | 2022-03-08 | Amazon Technologies, Inc. | Continuous data protection |
US11860741B2 (en) | 2017-11-22 | 2024-01-02 | Amazon Technologies, Inc. | Continuous data protection |
US10621049B1 (en) | 2018-03-12 | 2020-04-14 | Amazon Technologies, Inc. | Consistent backups based on local node clock |
US11579981B2 (en) | 2018-08-10 | 2023-02-14 | Amazon Technologies, Inc. | Past-state backup generator and interface for database systems |
US11126505B1 (en) | 2018-08-10 | 2021-09-21 | Amazon Technologies, Inc. | Past-state backup generator and interface for database systems |
US12013764B2 (en) | 2018-08-10 | 2024-06-18 | Amazon Technologies, Inc. | Past-state backup generator and interface for database systems |
US11042454B1 (en) | 2018-11-20 | 2021-06-22 | Amazon Technologies, Inc. | Restoration of a data source |
US11269737B2 (en) * | 2019-09-16 | 2022-03-08 | Microsoft Technology Licensing, Llc | Incrementally updating recovery map data for a memory system |
US12229011B2 (en) | 2019-09-18 | 2025-02-18 | Amazon Technologies, Inc. | Scalable log-based continuous data protection for distributed databases |
US11803308B2 (en) | 2020-06-09 | 2023-10-31 | Commvault Systems, Inc. | Ensuring the integrity of data storage volumes used in block-level live synchronization operations in a data storage management system |
US11327663B2 (en) | 2020-06-09 | 2022-05-10 | Commvault Systems, Inc. | Ensuring the integrity of data storage volumes used in block-level live synchronization operations in a data storage management system |
Also Published As
Publication number | Publication date |
---|---|
US20090240743A1 (en) | 2009-09-24 |
US20110271068A1 (en) | 2011-11-03 |
US7555505B2 (en) | 2009-06-30 |
US8005796B2 (en) | 2011-08-23 |
US20060149792A1 (en) | 2006-07-06 |
US8296265B2 (en) | 2012-10-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7555505B2 (en) | Method and apparatus for synchronizing applications for data recovery using storage based journaling | |
US8868507B2 (en) | Method and apparatus for data recovery using storage based journaling | |
US7162601B2 (en) | Method and apparatus for backup and recovery system using storage based journaling | |
US7979741B2 (en) | Method and apparatus for data recovery system using storage based journaling | |
CA2548542C (en) | System and method for performing a snapshot and for restoring data | |
EP1470485B1 (en) | Method and system for providing image incremental and disaster recovery | |
US7167880B2 (en) | Method and apparatus for avoiding journal overflow on backup and recovery system using storage based journaling | |
US6023710A (en) | System and method for long-term administration of archival storage | |
US6665815B1 (en) | Physical incremental backup using snapshots | |
EP2161661B1 (en) | Computer system and backup method therefor | |
US8015155B2 (en) | Non-disruptive backup copy in a database online reorganization environment | |
KR20090110823A (en) | Data Shadowing System and How to Store Automatic Backups of Data | |
US20110282843A1 (en) | Method and system for data backup and replication |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | Owner name: HITACHI, LTD., JAPAN; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:YAMAGAMI, KENJI;REEL/FRAME:014346/0598; Effective date: 20030724 |
STCB | Information on status: application discontinuation | Free format text: EXPRESSLY ABANDONED -- DURING EXAMINATION |