US20050141312A1 - Non-volatile memory and method with non-sequential update block management - Google Patents
Non-volatile memory and method with non-sequential update block management
- Publication number
- US20050141312A1 (application Ser. No. 10/917,867)
- Authority
- US
- United States
- Prior art keywords
- block
- logical
- update
- data
- memory
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/1666—Error detection or correction of the data by redundancy in hardware where the redundant component is memory or memory area
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C16/00—Erasable programmable read-only memories
- G11C16/02—Erasable programmable read-only memories electrically programmable
- G11C16/06—Auxiliary circuits, e.g. for writing into memory
- G11C16/10—Programming or data input circuits
- G11C16/14—Circuits for erasing electrically, e.g. erase voltage switching circuits
- G11C16/16—Circuits for erasing electrically, e.g. erase voltage switching circuits for erasing blocks, e.g. arrays, words, groups
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1008—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices
- G06F11/1072—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices in multilevel memories
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1405—Saving, restoring, recovering or retrying at machine instruction level
- G06F11/141—Saving, restoring, recovering or retrying at machine instruction level for bus or memory accesses
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1415—Saving, restoring, recovering or retrying at system level
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/1658—Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0238—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
- G06F12/0246—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/56—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency
- G11C11/5621—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency using charge storage in a floating gate
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/56—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency
- G11C11/5621—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency using charge storage in a floating gate
- G11C11/5628—Programming or writing circuits; Data input circuits
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C16/00—Erasable programmable read-only memories
- G11C16/02—Erasable programmable read-only memories electrically programmable
- G11C16/06—Auxiliary circuits, e.g. for writing into memory
- G11C16/10—Programming or data input circuits
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C16/00—Erasable programmable read-only memories
- G11C16/02—Erasable programmable read-only memories electrically programmable
- G11C16/06—Auxiliary circuits, e.g. for writing into memory
- G11C16/10—Programming or data input circuits
- G11C16/102—External programming circuits, e.g. EPROM programmers; In-circuit programming or reprogramming; EPROM emulators
- G11C16/105—Circuits or methods for updating contents of nonvolatile memory, especially with 'security' features to ensure reliable replacement, i.e. preventing that old data is lost before new data is reliably written
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C16/00—Erasable programmable read-only memories
- G11C16/02—Erasable programmable read-only memories electrically programmable
- G11C16/06—Auxiliary circuits, e.g. for writing into memory
- G11C16/10—Programming or data input circuits
- G11C16/20—Initialising; Data preset; Chip identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7202—Allocation control and policies
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7203—Temporary buffering, e.g. using volatile buffer or dedicated buffer blocks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7205—Cleaning, compaction, garbage collection, erase control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7208—Multiple device management, e.g. distributing data over multiple flash devices
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C16/00—Erasable programmable read-only memories
- G11C16/02—Erasable programmable read-only memories electrically programmable
- G11C16/06—Auxiliary circuits, e.g. for writing into memory
- G11C16/10—Programming or data input circuits
- G11C16/102—External programming circuits, e.g. EPROM programmers; In-circuit programming or reprogramming; EPROM emulators
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C2211/00—Indexing scheme relating to digital stores characterized by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C2211/56—Indexing scheme relating to G11C11/56 and sub-groups for features not covered by these groups
- G11C2211/564—Miscellaneous aspects
- G11C2211/5641—Multilevel memory having cells with different number of storage levels
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C29/00—Checking stores for correct operation ; Subsequent repair; Testing stores during standby or offline operation
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C29/00—Checking stores for correct operation ; Subsequent repair; Testing stores during standby or offline operation
- G11C29/70—Masking faults in memories by using spares or by reconfiguring
- G11C29/76—Masking faults in memories by using spares or by reconfiguring using address translation or modifications
Definitions
- This invention relates generally to non-volatile semiconductor memory and specifically to those having a memory block management system with efficient handling of update data in a block.
- Solid-state memory capable of nonvolatile storage of charge, particularly in the form of EEPROM and flash EEPROM packaged as a small form factor card, has recently become the storage of choice in a variety of mobile and handheld devices, notably information appliances and consumer electronics products.
- Unlike RAM (random access memory), flash memory is non-volatile, retaining its stored data even after power is turned off.
- Also, unlike ROM (read only memory), flash memory is rewritable similar to a disk storage device.
- flash memory is increasingly being used in mass storage applications.
- Conventional mass storage, based on rotating magnetic media such as hard drives and floppy disks, is unsuitable for the mobile and handheld environment.
- disk drives tend to be bulky, are prone to mechanical failure and have high latency and high power requirements. These undesirable attributes make disk-based storage impractical in most mobile and portable applications.
- flash memory both embedded and in the form of a removable card is ideally suited in the mobile and handheld environment because of its small size, low power consumption, high speed and high reliability features.
- Flash EEPROM is similar to EEPROM (electrically erasable and programmable read-only memory) in that it is a non-volatile memory that can be erased and have new data written or “programmed” into their memory cells. Both utilize a floating (unconnected) conductive gate, in a field effect transistor structure, positioned over a channel region in a semiconductor substrate, between source and drain regions. A control gate is then provided over the floating gate. The threshold voltage characteristic of the transistor is controlled by the amount of charge that is retained on the floating gate. That is, for a given level of charge on the floating gate, there is a corresponding voltage (threshold) that must be applied to the control gate before the transistor is turned “on” to permit conduction between its source and drain regions.
- flash memory such as Flash EEPROM allows entire blocks of memory cells to be erased at the same time.
- the floating gate can hold a range of charges and therefore can be programmed to any threshold voltage level within a threshold voltage window.
- the size of the threshold voltage window is delimited by the minimum and maximum threshold levels of the device, which in turn correspond to the range of the charges that can be programmed onto the floating gate.
- the threshold window generally depends on the memory device's characteristics, operating conditions and history. Each distinct, resolvable threshold voltage level range within the window may, in principle, be used to designate a definite memory state of the cell.
- the transistor serving as a memory cell is typically programmed to a “programmed” state by one of two mechanisms.
- In “hot electron injection,” a high voltage applied to the drain accelerates electrons across the substrate channel region.
- At the same time, a high voltage applied to the control gate pulls the hot electrons through a thin gate dielectric onto the floating gate.
- In “tunnel injection,” a high voltage is applied to the control gate relative to the substrate. In this way, electrons are pulled from the substrate to the intervening floating gate.
- While the term “program” has been used historically to describe writing to a memory by injecting electrons into an initially erased charge storage unit of the memory cell so as to alter the memory state, it is now used interchangeably with more common terms such as “write” or “record.”
- the memory device may be erased by a number of mechanisms.
- a memory cell is electrically erasable, by applying a high voltage to the substrate relative to the control gate so as to induce electrons in the floating gate to tunnel through a thin oxide to the substrate channel region (i.e., Fowler-Nordheim tunneling.)
- the EEPROM is erasable byte by byte.
- the memory is electrically erasable either all at once or one or more minimum erasable blocks at a time, where a minimum erasable block may consist of one or more sectors and each sector may store 512 bytes or more of data.
- the memory device typically comprises one or more memory chips that may be mounted on a card.
- Each memory chip comprises an array of memory cells supported by peripheral circuits such as decoders and erase, write and read circuits.
- the more sophisticated memory devices also come with a controller that performs intelligent and higher level memory operations and interfacing.
- There are many commercially successful non-volatile solid-state memory devices being used today. These memory devices may be flash EEPROM or may employ other types of nonvolatile memory cells. Examples of flash memory and systems and methods of manufacturing them are given in U.S. Pat. Nos. 5,070,032, 5,095,344, 5,315,541, 5,343,063, 5,661,053, 5,313,421 and 6,222,762. In particular, flash memory devices with NAND string structures are described in U.S. Pat. Nos. 5,570,315, 5,903,495, and 6,046,935. Nonvolatile memory devices are also manufactured from memory cells with a dielectric layer for storing charge. Instead of the conductive floating gate elements described earlier, a dielectric layer is used.
- Such memory devices utilizing dielectric storage element have been described by Eitan et al., “NROM: A Novel Localized Trapping, 2-Bit Nonvolatile Memory Cell,” IEEE Electron Device Letters, vol. 21, no. 11, November 2000, pp. 543-545.
- An ONO dielectric layer extends across the channel between source and drain diffusions. The charge for one data bit is localized in the dielectric layer adjacent to the drain, and the charge for the other data bit is localized in the dielectric layer adjacent to the source.
- U.S. Pat. Nos. 5,768,192 and 6,011,725 disclose a nonvolatile memory cell having a trapping dielectric sandwiched between two silicon dioxide layers. Multi-state data storage is implemented by separately reading the binary states of the spatially separated charge storage regions within the dielectric.
- a “page” of memory elements is read or programmed together.
- a row typically contains several interleaved pages or it may constitute one page. All memory elements of a page will be read or programmed together.
- An erase operation may take as much as an order of magnitude longer than read and program operations. Thus, it is desirable to have the erase block of substantial size. In this way, the erase time is amortized over a large aggregate of memory cells.
- flash memory predicates that data must be written to an erased memory location. If data of a certain logical address from a host is to be updated, one way is to rewrite the update data in the same physical memory location. That is, the logical to physical address mapping is unchanged. However, this will mean the entire erase block containing that physical location will have to be first erased and then rewritten with the updated data. This method of update is inefficient, as it requires an entire erase block to be erased and rewritten, especially if the data to be updated only occupies a small portion of the erase block. It will also result in a higher frequency of erase recycling of the memory block, which is undesirable in view of the limited endurance of this type of memory device.
- U.S. Pat. No. 6,567,307 discloses a method of dealing with sector updates among large erase blocks, including recording the update data in multiple erase blocks acting as a scratch pad and eventually consolidating the valid sectors among the various blocks and rewriting the sectors after rearranging them in logically sequential order. In this way, a block need not be erased and rewritten at every slightest update.
- WO 03/027828 and WO 00/49488 both disclose a memory system dealing with updates among large erase blocks, including partitioning the logical sector addresses in zones.
- a small zone of logical address range is reserved for active system control data separate from another zone for user data. In this way, manipulation of the system control data in its own zone will not interact with the associated user data in another zone.
- Updates are at the logical sector level and a write pointer points to the corresponding physical sectors in a block to be written.
- the mapping information is buffered in RAM and eventually stored in a sector allocation table in the main memory.
- the latest version of a logical sector will obsolete all previous versions among existing blocks, which become partially obsolete. Garbage collection is performed to keep partially obsolete blocks to an acceptable number.
- a non-volatile memory system is organized in physical groups of physical memory locations. Each physical group (metablock) is erasable as a unit and can be used to store a logical group of data.
- a memory management system allows for update of a logical group of data by allocating a metablock dedicated to recording the update data of the logical group.
- the update metablock records update data in the order received and has no restriction on whether the recording is in the correct logical order as originally stored (sequential) or not (chaotic). Eventually the update metablock is closed to further recording.
- One of several processes will take place, but each ultimately ends with a fully filled metablock in the correct order, which replaces the original metablock.
- directory data is maintained in the non-volatile memory in a manner that is conducive to frequent updates.
- the system supports multiple logical groups being updated concurrently.
- One feature of the invention allows data to be updated logical-group by logical-group.
- the distribution of logical units is limited in range. This is especially true when the logical group is normally contained within a physical block.
- garbage collection need only be performed over a relatively fewer number of blocks. Garbage collection of a chaotic block may be performed by either consolidation or compaction.
- a memory array organized into erasable blocks and constituted from multiple memory planes so that logical units can be read or programmed into the multiple planes in parallel
- provisions are made to keep the updated logical unit in the same plane as the original. This is accomplished by recording the updated logical unit to the next available location of a second block but in the same plane.
- the logical unit is stored with an identical offset position in the plane as its other versions so that all versions of a given logical unit are serviced by an identical set of sensing circuits.
- any intervening gaps from the last programmed memory unit to the next available plane-aligned memory unit are padded accordingly with the current versions of logical units.
- the padding is accomplished by filling in a gap with current versions of the logical units that logically follow the last programmed logical unit, and with current versions of the logical units that logically precede the logical unit stored in the next available plane-aligned memory unit.
- each memory unit across the plane is either updated or padded with the latest versions.
- This scheme reduces the time for consolidation of a chaotic block by allowing on-plane rearrangement of the latest versions of the logical units of a logical group and avoiding having to gather the latest versions from different memory planes. This is of benefit where the performance specification for the host interface defines a maximum latency for completion of a sector write operation by the memory system.
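- As a rough illustration of the plane-alignment rule just described, the following Python sketch (the names, the plane count, and the padding policy are assumptions of ours, not taken from the patent) appends an updated logical unit to an update block so that it lands in the same plane as its original copy, padding any intervening gap with current versions of preceding logical units.

```python
# Sketch only: plane-aligned recording of an updated logical unit.
# PLANES and the data structures are illustrative assumptions.

PLANES = 4  # assumed number of memory planes interleaved across a metablock

def plane_of(physical_slot):
    """Plane serviced by a physical slot when slots are interleaved by plane."""
    return physical_slot % PLANES

def write_plane_aligned(update_block, logical_offset, data, current_version):
    """Append (logical_offset, data) so it occupies a slot in the same plane
    as the original copy of that logical unit (plane = logical_offset % PLANES).
    `update_block` is a list of (logical_offset, data) records in write order.
    `current_version(off)` returns the latest data for logical offset `off`
    and is used to pad the gap, as described in the text above.
    Assumes logical_offset >= PLANES - 1 so the padding offsets are valid.
    """
    target_plane = plane_of(logical_offset)
    gap = (target_plane - plane_of(len(update_block))) % PLANES
    # Pad the gap with current versions of the logical units that immediately
    # precede the updated unit, keeping them in logical order.
    for off in range(logical_offset - gap, logical_offset):
        update_block.append((off, current_version(off)))
    update_block.append((logical_offset, data))

# Example: with 4 planes, updating logical unit 6 into an empty block first
# pads units 4 and 5 so that unit 6 falls in plane 2, its original plane.
block = []
write_plane_aligned(block, 6, "LS6'", lambda off: f"LS{off}")
assert [off for off, _ in block] == [4, 5, 6]
```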
- program failure in a block during a time-critical memory operation is handled by continuing the programming operation in a breakout block. Later, at a less critical time, the data recorded in the failed block prior to the interruption is transferred to another block, which could also be the breakout block. The failed block can then be discarded. In this way, when a defective block is encountered, it can be handled without loss of data and without exceeding a specified time limit by having to transfer the stored data in the defective block on the spot.
- This error handling is especially critical for a garbage collection operation so that the entire operation need not be repeated on a fresh block during a critical time. Subsequently, at an opportune time, the data from the defective block can be salvaged by relocation to another block.
- the program failure handling is especially critical during a consolidation operation.
- a normal consolidation operation consolidates into a consolidation block the current versions of all logical units of a logical group residing among an original block and an update block.
- another block acting as a breakout consolidation block will be provisioned to receive the consolidation of the remaining logical units. In this way, no logical units need be copied more than once and the operation with exception handling can still be completed within a period specified for a normal consolidation operation.
- the consolidation operation can be completed by consolidating all outstanding logical units of the group into the breakout block. The opportune time will be during some other period outside the current host write operation when there is time to perform the consolidation. One such opportune time is during another host write where there is an update but no associated consolidation operation.
- the consolidation with the program failure handling can be regarded as being implemented in multiple phases.
- In a first phase, the logical units are consolidated into more than one block after a program failure occurs, in order to avoid consolidating each logical unit more than once.
- the final phase is completed at an opportune time, where the logical group is consolidated into one block, preferably by collecting all the logical units into the breakout consolidation block in sequential order.
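- The following Python sketch (our own simplification; function names and the failure model are assumptions) illustrates the first phase of this handling: consolidation continues in a breakout block after a program failure, so that no logical unit is copied twice during the time-critical operation, and a flag records that a final single-block consolidation is still owed.

```python
# Sketch only: phase-one consolidation with a breakout block on program failure.

class ProgramFailure(Exception):
    """Raised by `program` when a physical write to the block fails."""

def consolidate_with_breakout(units, program, allocate_block):
    """Copy the current versions of a logical group's units into a new block.
    `program(block, unit)` writes one unit and may raise ProgramFailure.
    `allocate_block()` returns a fresh erased block.
    Returns (blocks_used, needs_final_phase): if a failure occurred, the group
    temporarily spans two blocks and a later phase must gather it into one.
    """
    block = allocate_block()
    blocks_used = [block]
    for unit in units:
        try:
            program(block, unit)
        except ProgramFailure:
            # Breakout: continue in a fresh block instead of restarting the
            # whole consolidation within the time-critical window.
            block = allocate_block()
            blocks_used.append(block)
            program(block, unit)
    return blocks_used, len(blocks_used) > 1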
- an index of the logical units in a non-sequential update block is buffered in RAM and stored periodically into the non-volatile memory.
- the index is stored in a block dedicated for storing indices.
- the index is stored in the update block itself.
- the index is stored in the header of each logical unit.
- the logical units written after the last index update but before the next have their indexing information stored in the header of each logical unit. In this way, after a power outage, the location of recently written logical units can be determined without having to perform a scanning during initialization.
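- A minimal Python sketch of this indexing scheme follows (the flush interval N, the header layout, and the class structure are illustrative assumptions): the full index of a chaotic block is kept in RAM and flushed to non-volatile storage every N writes, while each written sector's header carries the indexing entries accumulated since the last flush, so that after a power loss only the most recently written header needs to be read.

```python
# Sketch only: RAM-buffered chaotic block index with periodic CBI flushes and
# per-sector intermediate indexing in the header.

N = 8  # assumed number of sector writes between CBI sector writes

class ChaoticUpdateBlock:
    def __init__(self, flush_cbi):
        self.slots = []          # (header, data) records in physical write order
        self.ram_index = {}      # logical offset -> physical slot (full index)
        self.since_flush = {}    # entries not yet captured in a stored CBI sector
        self.flush_cbi = flush_cbi  # callback that records a CBI sector

    def write(self, logical_offset, data):
        slot = len(self.slots)
        # The header carries the intermediate index so recently written sectors
        # can be located after power loss without scanning the whole block.
        header = {"offset": logical_offset,
                  "intermediate_index": dict(self.since_flush)}
        self.slots.append((header, data))
        self.ram_index[logical_offset] = slot
        self.since_flush[logical_offset] = slot
        if (slot + 1) % N == 0:
            self.flush_cbi(dict(self.ram_index))  # periodic CBI sector write
            self.since_flush.clear()
```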
- a block is managed as partially sequential and partially non-sequential, directed to more than one logical subgroup.
- critical data such as some or all of the control data is guaranteed an extra level of reliability if it is maintained in duplicates.
- the duplication is performed in a manner such that, for a multi-state memory system employing a two-pass programming technique for successively programming the multiple bits of the same set of memory cells, any programming error in the second pass will not corrupt the data established by the first pass.
- the duplication also helps with detection of write aborts, detection of misdetection (i.e., both copies have good ECC but data are different), and adds an extra level of reliability.
- Several techniques of data duplication are contemplated.
- a subsequent programming pass avoids programming the memory cells storing at least one of the two copies. In this way, at least one of the two copies will be unaffected in the event the subsequent programming pass aborts before completion and corrupts the data of the earlier pass.
- the two copies of a given data are stored in two different blocks where at most one of the two copies has its memory cells programmed in a subsequent programming pass.
- the two copies of a given data are programmed into a multi-state memory in a binary programming mode, so that no further programming on the programmed memory cells will take place.
- a fault-tolerant code is employed to code the multiple memory states such that data established by an earlier programming pass is insensitive to errors in a subsequent programming pass.
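- As an illustration of how duplicated critical data can be validated on read (this is our own sketch of the kind of decision summarized later in the table of FIG. 47, not the patent's exact table), the two copies and their ECC results can be combined as follows:

```python
# Sketch only: deciding which of two duplicate copies of a critical sector
# to trust, given the ECC result of each copy.

def resolve_duplicates(copy1, ecc_ok1, copy2, ecc_ok2):
    """Return (data, status) for the two stored copies of the same sector."""
    if ecc_ok1 and ecc_ok2:
        if copy1 == copy2:
            return copy1, "valid"
        # Both copies decode correctly but differ: likely a write abort or a
        # misdetection between the two programming operations.
        return copy1, "mismatch: possible write abort, data suspect"
    if ecc_ok1:
        return copy1, "copy 2 corrupted, using copy 1"
    if ecc_ok2:
        return copy2, "copy 1 corrupted, using copy 2"
    return None, "both copies corrupted"
```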
- a “control garbage collection” or preemptive relocation of a memory block is implemented to avoid the situation where a large number of update blocks all happen to need relocation concurrently. For example, this situation can happen when updating control data used for controlling the operation of the block management system.
- a hierarchy of control data types can exist with varying degrees of update frequency, resulting in their associated update blocks requiring garbage collection or relocation at different rates. There will be certain times when the garbage collection operations of more than one control data type coincide. In the extreme situation, the relocation phases of the update blocks for all control data types could line up, resulting in all of the update blocks requiring relocation at the same time.
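- A simple way to stagger such relocations is sketched below in Python (the margin value and data layout are assumptions): control-data blocks are checked against a fill margin and at most one, the fullest, is preemptively relocated per operation, so that blocks for different control data types do not all demand relocation at the same time.

```python
# Sketch only: preemptive ("control garbage collection") relocation of the
# control-data block that is closest to filling up.

MARGIN = 4  # assumed number of unwritten sectors that triggers relocation

def preemptive_relocation(control_blocks, relocate):
    """`control_blocks` maps a control data type (e.g. 'GAT', 'CBI', 'MAP')
    to (sectors_written, block_capacity). Relocates at most one block per
    call, chosen as the one with the least free space left."""
    candidates = [(capacity - written, name)
                  for name, (written, capacity) in control_blocks.items()
                  if capacity - written <= MARGIN]
    if candidates:
        _, name = min(candidates)  # the block closest to full goes first
        relocate(name)
```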
- FIG. 1 illustrates schematically the main hardware components of a memory system suitable for implementing the present invention.
- FIG. 2 illustrates the memory being organized into physical groups of sectors (or metablocks) and managed by a memory manager of the controller, according to a preferred embodiment of the invention.
- FIGS. 3 A(i)- 3 A(iii) illustrate schematically the mapping between a logical group and a metablock, according to a preferred embodiment of the present invention.
- FIG. 3B illustrates schematically the mapping between logical groups and metablocks.
- FIG. 4 illustrates the alignment of a metablock with structures in physical memory.
- FIG. 5A illustrates metablocks being constituted from linking of minimum erase units of different planes.
- FIG. 5B illustrates one embodiment in which one minimum erase unit (MEU) is selected from each plane for linking into a metablock.
- FIG. 5C illustrates another embodiment in which more than one MEU are selected from each plane for linking into a metablock.
- FIG. 6 is a schematic block diagram of the metablock management system as implemented in the controller and flash memory.
- FIG. 7A illustrates an example of sectors in a logical group being written in sequential order to a sequential update block.
- FIG. 7B illustrates an example of sectors in a logical group being written in chaotic order to a chaotic update block.
- FIG. 8 illustrates an example of sectors in a logical group being written in sequential order to a sequential update block as a result of two separate host write operations that have a discontinuity in logical addresses.
- FIG. 9 is a flow diagram illustrating a process by the update block manager to update a logical group of data, according to a general embodiment of the invention.
- FIG. 10 is a flow diagram illustrating a process by the update block manager to update a logical group of data, according to a preferred embodiment of the invention.
- FIG. 11A is a flow diagram illustrating in more detail the consolidation process of closing a chaotic update block shown in FIG. 10 .
- FIG. 11B is a flow diagram illustrating in more detail the compaction process for closing a chaotic update block shown in FIG. 10 .
- FIG. 12A illustrates all possible states of a Logical Group, and the possible transitions between them under various operations.
- FIG. 12B is a table listing the possible states of a Logical Group.
- FIG. 13A illustrates all possible states of a metablock, and the possible transitions between them under various operations.
- a metablock is a Physical Group corresponding to a Logical Group.
- FIG. 13B is a table listing the possible states of a metablock.
- FIGS. 14 (A)- 14 (J) are state diagrams showing the effect of various operations on the state of the logical group and also on the physical metablock.
- FIG. 15 illustrates a preferred embodiment of the structure of an allocation block list (ABL) for keeping track of opened and closed update blocks and erased blocks for allocation.
- FIG. 16A illustrates the data fields of a chaotic block index (CBI) sector.
- FIG. 16B illustrates an example of the chaotic block index (CBI) sectors being recorded in a dedicated metablock.
- FIG. 16C is a flow diagram illustrating access to the data of a logical sector of a given logical group undergoing chaotic update.
- FIG. 16D is a flow diagram illustrating access to the data of a logical sector of a given logical group undergoing chaotic update, according to an alternative embodiment in which logical group has been partitioned into subgroups.
- FIG. 16E illustrates examples of Chaotic Block Indexing (CBI) sectors and their functions for the embodiment where each logical group is partitioned into multiple subgroups.
- FIG. 17A illustrates the data fields of a group address table (GAT) sector.
- FIG. 17B illustrates an example of the group address table (GAT) sectors being recorded in a GAT block.
- FIG. 18 is a schematic block diagram illustrating the distribution and flow of the control and directory information for usage and recycling of erased blocks.
- FIG. 19 is a flow chart showing the process of logical to physical address translation.
- FIG. 20 illustrates the hierarchy of the operations performed on control data structures in the course of the operation of the memory management.
- FIG. 21 illustrates a memory array constituted from multiple memory planes.
- FIG. 22A is a flow diagram illustrating a method of update with plane alignment, according to a general implementation of the invention.
- FIG. 22B illustrates a preferred embodiment of the step of storing updates in the flow diagram shown in FIG. 22A .
- FIG. 23A illustrates an example of logical units being written in sequential order to a sequential update block without regard to plane alignment.
- FIG. 23B illustrates an example of logical units being written in non-sequential order to a chaotic update block without regard to plane alignment.
- FIG. 24A illustrates the sequential update example of FIG. 23A with plane alignment and padding, according to a preferred embodiment of the invention.
- FIG. 24B illustrates the chaotic update example of FIG. 23B with plane alignment and with no padding, according to one preferred embodiment of the invention.
- FIG. 24C illustrates the chaotic update example of FIG. 23B with plane alignment and padding, according to another preferred embodiment of the invention.
- FIG. 25 illustrates an example memory organization in which each page contains two memory units for storing two logical units, such as two logical sectors.
- FIG. 26A is similar to the memory structure of FIG. 21 except each page contains two sectors instead of one.
- FIG. 26B illustrates metablocks shown in FIG. 26A having memory units laid out in a schematic linear fashion.
- FIG. 27 illustrates an alternative scheme for plane alignment in an update block without padding logical units to be copied from one location to another.
- FIG. 28 illustrates a scheme in which when a program failure occurs in a defective block during a consolidation operation, the consolidation operation is repeated on another block.
- FIG. 29 illustrates schematically, a host write operation with a timing or write latency that allows enough time to complete a write (update) operation as well as a consolidation operation.
- FIG. 30 illustrates a flow chart of the program failure handling according to a general scheme of the invention.
- FIG. 31A illustrates one embodiment of program failure handling in which the third (final relocation) block is different from the second (breakout) block.
- FIG. 31B illustrates another embodiment of program failure handling in which the third (final relocation) block is the same as the second (breakout) block.
- FIG. 32A illustrates a flow chart of initial update operation that results in a consolidation operation.
- FIG. 32B illustrates a flow chart of the multiple phase consolidation operation, according to a preferred embodiment of the invention.
- FIG. 33 illustrates an example timing of the first and final phases of the multiple phase consolidation operation.
- FIG. 34A illustrates the case where the breakout consolidation block is not used as an update block but more as a consolidation block whose consolidation operation has been interrupted.
- FIG. 34B illustrates the third and final phase of the multiple phase consolidation started in FIG. 34A .
- FIG. 35A illustrates the case where the breakout consolidation block is maintained as an update block to receive host writes rather than as a consolidation block.
- FIG. 35B illustrates the third and final phase of the multiple phase consolidation started in FIG. 35A for the second case.
- FIG. 36A illustrates the phased program error handling method as applied to the scenario when the host write triggers the closure of an update block, and the update block is sequential.
- FIG. 36B illustrates the phased program error handling method as it can be applied to a Partial Blocks system in the case of an update of Update blocks.
- FIG. 36C illustrates the phased program error handling method as applied to the garbage collection operation, or clean-up, in a memory block management system that does not support a logical group mapped to a metablock.
- FIG. 37 illustrates an example of a schedule of writing of a CBI sector to an associated chaotic index sector block after every N sector writes of the same logical group.
- FIG. 38A illustrates an update block up to the point when a CBI sector is recorded in it after a predetermined number of writes.
- FIG. 38B illustrates the update block of FIG. 38A with data pages 1 , 2 and 4 further recorded to it after the index sector.
- FIG. 38C illustrates the update block of FIG. 38B having another logical sector written to trigger the next recording of an index sector.
- FIG. 39A illustrates an intermediate index for the intermediate writes being stored in a header of each data sector in the chaotic update block
- FIG. 39B illustrates an example of storing an intermediate index for the intermediate writes in the header of each sector written.
- FIG. 40 illustrates the information in the chaotic index field being stored in a header of each data sector in the chaotic update block.
- FIG. 41A illustrates threshold voltage distributions of a 4-state memory array when each memory cell stores two bits of data.
- FIG. 41B illustrates an existing, 2-pass programming scheme using a Gray code.
- FIG. 42 illustrates a way of safe guarding critical data by saving each sector in duplicates. For example, the sectors A, B, C, and D are saved in duplicate copies. If there is corruption of the data in one sector copy, then the other one can be read instead.
- FIG. 43 illustrates the non-robustness of the manner in which duplicate sectors are typically saved in a multi-state memory.
- FIG. 44A illustrates one embodiment of saving staggered duplicate copies of critical data to a multi-state memory.
- FIG. 44B illustrates another embodiment of saving duplicate copies of critical data only to the logical upper page of a multi-state memory.
- FIG. 44C illustrates yet another embodiment of saving duplicate copies of critical data in binary mode of a multi-state memory.
- FIG. 45 illustrates yet another embodiment of saving duplicate copies of critical data concurrently to two different metablocks.
- FIG. 46A is similar to FIG. 41A in illustrating threshold voltage distributions of a 4-state memory array and is shown as a reference for FIG. 46B .
- FIG. 46B illustrates yet another embodiment of saving duplicate copies of critical data concurrently by the use of a fault-tolerant code.
- FIG. 47 is a table showing the possible states of the two copies of data, and the validity of the data.
- FIG. 48 illustrates a flow chart of preemptive relocation of memory block storing control data.
- FIG. 1 illustrates schematically the main hardware components of a memory system suitable for implementing the present invention.
- the memory system 20 typically operates with a host 10 through a host interface.
- the memory system is typically in the form of a memory card or an embedded memory system.
- the memory system 20 includes a memory 200 whose operations are controlled by a controller 100 .
- the memory 200 comprises one or more arrays of non-volatile memory cells distributed over one or more integrated circuit chips.
- the controller 100 includes an interface 110 , a processor 120 , an optional coprocessor 121 , ROM 122 (read-only-memory), RAM 130 (random access memory) and optionally programmable nonvolatile memory 124 .
- the interface 110 has one component interfacing the controller to a host and another component interfacing to the memory 200 .
- Firmware stored in nonvolatile ROM 122 and/or the optional nonvolatile memory 124 provides codes for the processor 120 to implement the functions of the controller 100 . Error correction codes may be processed by the processor 120 or the optional coprocessor 121 .
- the controller 100 is implemented by a state machine (not shown.) In yet another embodiment, the controller 100 is implemented within the host.
- FIG. 2 illustrates the memory being organized into physical groups of sectors (or metablocks) and managed by a memory manager of the controller, according to a preferred embodiment of the invention.
- the memory 200 is organized into metablocks, where each metablock is a group of physical sectors S 0 , . . . , S N-1 that are erasable together.
- the host 10 accesses the memory 200 when running an application under a file system or operating system.
- the host system addresses data in units of logical sectors where, for example, each sector may contain 512 bytes of data.
- an optional host-side memory manager may exist to perform lower level memory management at the host.
- In most cases during read or write operations, the host 10 essentially issues a command to the memory system 20 to read or write a segment containing a string of logical sectors of data with contiguous addresses.
- a memory-side memory manager is implemented in the controller 100 of the memory system 20 to manage the storage and retrieval of the data of host logical sectors among metablocks of the flash memory 200 .
- the memory manager contains a number of software modules for managing erase, read and write operations of the metablocks.
- the memory manager also maintains system control and directory data associated with its operations among the flash memory 200 and the controller RAM 130 .
- FIGS. 3 A(i)- 3 A(iii) illustrate schematically the mapping between a logical group and a metablock, according to a preferred embodiment of the present invention.
- the metablock of the physical memory has N physical sectors for storing N logical sectors of data of a logical group.
- FIG. 3A (i) shows the data from a logical group LG i , where the logical sectors are in contiguous logical order 0 , 1 , . . . , N- 1 .
- FIG. 3A (ii) shows the same data being stored in the metablock in the same logical order.
- the metablock when stored in this manner is said to be “sequential.”
- the metablock may have data stored in a different order, in which case the metablock is said to be “non-sequential” or “chaotic.”
- the logical sector addresses wrap around as a loop from bottom back to top of the logical group within the metablock.
- For example, the metablock may store data in its first location beginning with the data of logical sector k.
- When the last logical sector N-1 is reached, the sequence wraps around to sector 0, and data associated with logical sector k-1 is finally stored in the last physical sector.
- a page tag is used to identify any offset, such as identifying the starting logical sector address of the data stored in the first physical sector of the metablock. Two blocks will be considered to have their logical sectors stored in similar order when they only differ by a page tag.
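- The page-tag mapping can be expressed compactly; the short Python sketch below (names are ours) converts a logical sector offset within the group to its physical sector position in a metablock whose first physical sector holds logical sector k (the page tag).

```python
# Sketch only: logical-to-physical sector offset within one metablock,
# given the page tag (the logical offset stored in the first physical sector).

def physical_offset(logical_offset, page_tag, sectors_per_group):
    return (logical_offset - page_tag) % sectors_per_group

# Example for the wrap-around case of FIG. 3A(iii): a group of N = 8 sectors
# stored starting with logical sector k = 3.
assert physical_offset(3, 3, 8) == 0   # sector k goes to the first location
assert physical_offset(2, 3, 8) == 7   # sector k-1 ends up in the last location
```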
- FIG. 3B illustrates schematically the mapping between logical groups and metablocks.
- Each logical group is mapped to a unique metablock, except for a small number of logical groups in which data is currently being updated. After a logical group has been updated, it may be mapped to a different metablock.
- the mapping information is maintained in a set of logical to physical directories, which will be described in more detail later.
- metablocks with variable size are disclosed in co-pending and co-owned United States patent application, entitled, “Adaptive Metablocks,” filed by Alan Sinclair, on the same day as the present application. The entire disclosure of the co-pending application is hereby incorporated herein by reference.
- One feature of the invention is that the system operates with a single logical partition, and groups of logical sectors throughout the logical address range of the memory system are treated identically. For example, sectors containing system data and sectors containing user data can be distributed anywhere among the logical address space.
- System sectors (i.e., sectors relating to file allocation tables, directories or sub-directories) are therefore handled in the same way as sectors containing user data.
- the present scheme of updating logical groups of sectors will efficiently handle the patterns of access that are typical of system sectors, as well as those typical of file data.
- FIG. 4 illustrates the alignment of a metablock with structures in physical memory.
- Flash memory comprises blocks of memory cells which are erasable together as a unit. Such erase blocks are the minimum unit of erasure of flash memory or minimum erasable unit (MEU) of the memory.
- the minimum erase unit is a hardware design parameter of the memory, although in some memory systems that support erasing multiple MEUs, it is possible to configure a “super MEU” comprising more than one MEU.
- the metablock represents, at the system level, a group of memory locations, e.g., sectors that are erasable together.
- the physical address space of the flash memory is treated as a set of metablocks, with a metablock being the minimum unit of erasure.
- the terms “metablock” and “block” are used synonymously to define the minimum unit of erasure at the system level for media management, and the term “minimum erase unit” or MEU is used to denote the minimum unit of erasure of flash memory.
- a page is a grouping of memory cells that may be programmed together in a single operation.
- a page may comprise one or more sectors.
- a memory array may be partitioned into more than one plane, where only one MEU within a plane may be programmed or erased at a time.
- the planes may be distributed among one or more memory chips.
- the MEUs may comprise one or more pages.
- MEUs within a flash memory chip may be organized in planes. Since one MEU from each plane may be programmed or erased concurrently, it is expedient to form a multiple MEU metablock by selecting one MEU from each plane (see FIG. 5B below.)
- FIG. 5A illustrates metablocks being constituted from linking of minimum erase units of different planes.
- Each metablock such as MB 0 , MB 1 , . . . , is constituted from MEUs from different planes of the memory system, where the different planes may be distributed among one or more chips.
- the metablock link manager 170 shown in FIG. 2 manages the linking of the MEUs for each metablock.
- Each metablock is configured during an initial formatting process, and retains its constituent MEUs throughout the life of the system, unless there is a failure of one of the MEUs.
- FIG. 5B illustrates one embodiment in which one minimum erase unit (MEU) is selected from each plane for linking into a metablock.
- FIG. 5C illustrates another embodiment in which more than one MEU are selected from each plane for linking into a metablock.
- more than one MEU may be selected from each plane to form a super MEU.
- a super MEU may be formed from two MEUs. In this case, it may take more than one pass for a read or write operation.
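- A possible linking routine, sketched in Python below (structure and names are assumptions, not the patent's implementation), picks one MEU, or several MEUs for a super MEU, from each plane to form a metablock as in FIGS. 5B and 5C.

```python
# Sketch only: forming a metablock by linking MEUs taken from every plane.

def link_metablock(free_meus_per_plane, meus_per_plane=1):
    """`free_meus_per_plane` is a list with one entry per plane, each a list of
    available MEU numbers in that plane. Returns a list of (plane, meu) pairs
    forming one metablock, or None if some plane cannot contribute enough MEUs
    for a fully parallel metablock."""
    metablock = []
    for plane, free in enumerate(free_meus_per_plane):
        if len(free) < meus_per_plane:
            return None
        for _ in range(meus_per_plane):
            metablock.append((plane, free.pop(0)))
    return metablock

# One MEU per plane (as in FIG. 5B): a 4-plane metablock.
planes = [[0, 4], [1], [2, 6], [3]]
assert link_metablock(planes) == [(0, 0), (1, 1), (2, 2), (3, 3)]
```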
- FIG. 6 is a schematic block diagram of the metablock management system as implemented in the controller and flash memory.
- the metablock management system comprises various functional modules implemented in the controller 100 and maintains various control data (including directory data) in tables and lists hierarchically distributed in the flash memory 200 and the controller RAM 130 .
- the functional modules implemented in the controller 100 include an interface module 110 , a logical-to-physical address translation module 140 , an update block manager module 150 , an erase block manager module 160 and a metablock link manager 170 .
- the interface 110 allows the metablock management system to interface with a host system.
- the logical to physical address translation module 140 maps the logical address from the host to a physical memory location.
- the update block manager module 150 manages data update operations in memory for a given logical group of data.
- the erased block manager 160 manages the erase operation of the metablocks and their allocation for storage of new information.
- a metablock link manager 170 manages the linking of subgroups of minimum erasable blocks of sectors to constitute a given metablock. Detailed description of these modules will be given in their respective sections.
- The metablock management system generates and works with control data such as addresses, control and status information. Since much of the control data tends to be frequently changing data of small size, it cannot be readily stored and maintained efficiently in a flash memory with a large block structure.
- a hierarchical and distributed scheme is employed to store the more static control data in the nonvolatile flash memory while locating the smaller amount of the more varying control data in controller RAM for more efficient update and access.
- the scheme allows the control data in the volatile controller RAM to be rebuilt quickly by scanning a small set of control data in the nonvolatile memory. This is possible because the invention restricts the number of blocks associated with the possible activity of a given logical group of data. In this way, the scanning is confined.
- control data that requires persistence is stored in a nonvolatile metablock that can be updated sector-by-sector, with each update resulting in a new sector being recorded that supersedes a previous one.
- a sector indexing scheme is employed for control data to keep track of the sector-by-sector updates in a metablock.
- the non-volatile flash memory 200 stores the bulk of control data that are relatively static. This includes group address tables (GAT) 210 , chaotic block indices (CBI) 220 , erased block lists (EBL) 230 and MAP 240 .
- GAT 210 keeps track of the mapping between logical groups of sectors and their corresponding metablocks. The mappings do not change except for those undergoing updates.
- the CBI 220 keeps track of the mapping of logically non-sequential sectors during an update.
- the EBL 230 keeps track of the pool of metablocks that have been erased.
- MAP 240 is a bitmap showing the erase status of all metablocks in the flash memory.
- the volatile controller RAM 130 stores a small portion of control data that are frequently changing and accessed. This includes an allocation block list (ABL) 134 and a cleared block list (CBL) 136 .
- the ABL 134 keeps track of the allocation of metablocks for recording update data while the CBL 136 keeps track of metablocks that have been deallocated and erased.
- the RAM 130 acts as a cache for control data stored in flash memory 200 .
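- The split between flash-resident and RAM-resident control data might be pictured with a structure like the following Python sketch (field names and types are ours; the patent stores these as sectors in dedicated metablocks rather than as in-memory objects):

```python
# Sketch only: the control data hierarchy, split between structures held in
# non-volatile flash (relatively static) and lists cached in controller RAM
# (frequently changing).

from dataclasses import dataclass, field

@dataclass
class ControlData:
    # Stored in flash memory 200
    gat: dict = field(default_factory=dict)   # logical group -> metablock (GAT 210)
    cbi: dict = field(default_factory=dict)   # chaotic block -> sector index (CBI 220)
    ebl: list = field(default_factory=list)   # pool of erased metablocks (EBL 230)
    map: dict = field(default_factory=dict)   # metablock -> erased status (MAP 240)
    # Cached in controller RAM 130
    abl: list = field(default_factory=list)   # blocks allocated for update data (ABL 134)
    cbl: list = field(default_factory=list)   # deallocated and erased blocks (CBL 136)
```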
- the update block manager 150 (shown in FIG. 2 ) handles the update of logical groups.
- each logical group of sectors undergoing an update is allocated a dedicated update metablock for recording the update data.
- any segment of one or more sectors of the logical group will be recorded in the update block.
- An update block can be managed to receive updated data in either sequential order or non-sequential (also known as chaotic) order.
- a chaotic update block allows sector data to be updated in any order within a logical group, and with any repetition of individual sectors.
- a sequential update block can become a chaotic update block, without need for relocation of any data sectors.
- Data of a complete logical group of sectors is preferably stored in logically sequential order in a single metablock.
- the index to the stored logical sectors is predefined.
- When the metablock has in store all the sectors of a given logical group in a predefined order, it is said to be “intact.”
- As for an update block, when it eventually fills up with update data in logically sequential order, the update block will become an updated intact metablock that readily replaces the original metablock.
- If, on the other hand, the update block fills up with update data in a logically different order from that of the intact block, the update block is a non-sequential or chaotic update block, and the out-of-order segments must be further processed so that eventually the update data of the logical group is stored in the same order as that of the intact block. In the preferred case, it is in logically sequential order in a single metablock.
- the further processing involves consolidating the updated sectors in the update block with unchanged sectors in the original block into yet another update metablock.
- the consolidated update block will then be in logically sequential order and can be used to replace the original block.
- the consolidation process is preceded by one or more compaction processes.
- the compaction process simply re-records the sectors of the chaotic update block into a replacing chaotic update block while eliminating any duplicate logical sector that has been rendered obsolete by a subsequent update of the same logical sector.
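- The difference between the two closing operations can be seen in a short Python sketch (the data layout is ours; physical write order and headers are omitted): consolidation merges the update block with the original block into a sequentially ordered block, while compaction only re-records the latest version of each updated sector.

```python
# Sketch only: closing a chaotic update block by consolidation or compaction.
# `update_records` is a list of (logical_offset, data) pairs in write order,
# possibly containing several versions of the same sector.

def consolidate(original, update_records, group_size):
    """Gather the latest version of every sector of the logical group (the
    update block wins over the original block) and lay them out in logically
    sequential order in a new block that replaces the original."""
    latest = dict(update_records)            # later writes overwrite earlier ones
    return {off: latest.get(off, original[off]) for off in range(group_size)}

def compact(update_records):
    """Re-record only the latest version of each updated sector into a
    replacement chaotic update block, dropping obsolete duplicates."""
    return dict(update_records)

# Example: sectors 5 and 6 updated, sector 5 twice.
original = {off: f"LS{off}" for off in range(8)}
updates = [(5, "LS5'"), (6, "LS6'"), (5, "LS5''")]
assert compact(updates) == {5: "LS5''", 6: "LS6'"}
assert consolidate(original, updates, 8)[5] == "LS5''"
```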
- the update scheme allows for multiple update threads running concurrently, up to a predefined maximum.
- Each thread is a logical group undergoing updates using its dedicated update metablock.
- a metablock is allocated and dedicated as an update block for the update data of the logical group.
- the update block is allocated when a command is received from the host to write a segment of one or more sectors of the logical group for which an existing metablock has been storing all its sectors intact.
- a first segment of data is recorded on the update block. Since each host write is a segment of one or more sectors with contiguous logical addresses, it follows that the first update is always sequential in nature.
- update segments within the same logical group are recorded in the update block in the order received from the host.
- a block continues to be managed as a sequential update block whilst sectors updated by the host within the associated logical group remain logically sequential. All sectors updated in this logical group are written to this sequential update block, until the block is either closed or converted to a chaotic update block.
- FIG. 7A illustrates an example of sectors in a logical group being written in sequential order to a sequential update block as a result of two separate host write operations, whilst the corresponding sectors in the original block for the logical group become obsolete.
- In host write operation # 1 , the data in the logical sectors LS 5 -LS 8 are being updated.
- the updated data as LS 5 ′-LS 8 ′ are recorded in a newly allocated dedicated update block.
- the first sector to be updated in the logical group is recorded in the dedicated update block starting from the first physical sector location.
- the first logical sector to be updated is not necessarily the logical first sector of the group, and there may therefore be an offset between the start of the logical group and the start of the update block. This offset is known as page tag as described previously in connection with FIG. 3A .
- Subsequent sectors are updated in logically sequential order. When the last sector of the logical group is written, group addresses wrap around and the write sequence continues with the first sector of the group.
- In host write operation # 2 , the segment of data in the logical sectors LS 9 -LS 12 is being updated.
- the updated data as LS 9 ′-LS 12 ′ are recorded in the dedicated update block in a location directly following where the last write ends. It can be seen that the two host writes are such that the update data has been recorded in the update block in logically sequential order, namely LS 5 ′-LS 12 ′.
- the update block is regarded as a sequential update block since it has been filled in logically sequential order.
- the update data recorded in the update block obsoletes the corresponding ones in the original block.
- Chaotic update block management may be initiated for an existing sequential update block when any sector updated by the host within the associated logical group is logically non-sequential.
- a chaotic update block is a form of data update block in which logical sectors within an associated logical group may be updated in any order and with any amount of repetition. It is created by conversion from a sequential update block when a sector written by a host is logically non-sequential to the previously written sector within the logical group being updated. All sectors subsequently updated in this logical group are written in the next available sector location in the chaotic update block, whatever their logical sector address within the group.
- FIG. 7B illustrates an example of sectors in a logical group being written in chaotic order to a chaotic update block as a result of five separate host write operations, whilst superseded sectors in the original block for the logical group and duplicated sectors in the chaotic update block become obsolete.
- In host write operation # 1 , the logical sectors LS 10 -LS 11 of a given logical group stored in an original metablock are updated.
- the updated logical sectors LS 10 ′-LS 11 ′ are stored in a newly allocated update block.
- the update block is a sequential one.
- In host write operation # 2 , the logical sectors LS 5 -LS 6 are updated as LS 5 ′-LS 6 ′ and recorded in the update block in the location immediately following the last write.
- FIG. 8 illustrates an example of sectors in a logical group being written in sequential order to a sequential update block as a result of two separate host write operations that have a discontinuity in logical addresses.
- the update data in the logical sectors LS 5 -LS 8 is recorded in a dedicated update block as LS 5 ′-LS 8 ′.
- the update data in the logical sectors LS 14 -LS 16 is being recorded in the update block following the last write as LS 14 ′-LS 16 ′.
- there is an address jump between LS 8 and LS 14 and the host write # 2 would normally render the update block non-sequential.
- one option is to first perform a padding operation (# 2 A) by copying the data of the intervening sectors from the original block to the update block before executing host write # 2 . In this way, the sequential nature of the update block is preserved.
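- The padding operation # 2 A amounts to a short copy loop guarded by the address-jump limit C B introduced with FIG. 10 below. The following C fragment is a minimal sketch under assumed names; copy_sector_from_original(), the value of C B and the example in main() are illustrative and not taken from the specification.

```c
/* Hypothetical sketch of the padding operation (#2A) of FIG. 8.
 * The helper below stands in for an unspecified flash copy primitive. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CB_MAX_ADDRESS_JUMP 8u   /* assumed value of the design parameter C_B */

/* Stub: copy one logical sector from the original block into the next
 * physical location of the sequential update block. */
static void copy_sector_from_original(uint32_t logical_sector)
{
    printf("pad: copy LS%u from original block to update block\n", (unsigned)logical_sector);
}

/* Pad the update block so a host write starting at first_new_sector can still
 * be recorded sequentially; last_written_sector is the last sector already
 * recorded. Returns false when the gap exceeds C_B and padding is not done. */
static bool pad_sequential_gap(uint32_t last_written_sector, uint32_t first_new_sector)
{
    if (first_new_sector <= last_written_sector + 1u)
        return true;                               /* contiguous: nothing to pad */
    if (first_new_sector - last_written_sector - 1u > CB_MAX_ADDRESS_JUMP)
        return false;                              /* jump too large for padding */
    for (uint32_t ls = last_written_sector + 1u; ls < first_new_sector; ls++)
        copy_sector_from_original(ls);             /* LS9..LS13 in the FIG. 8 example */
    return true;
}

int main(void)
{
    /* FIG. 8: the update block ends at LS8'; host write #2 starts at LS14. */
    if (pad_sequential_gap(8u, 14u))
        printf("update block remains sequential\n");
    return 0;
}
```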
- FIG. 9 is a flow diagram illustrating a process by the update block manager to update a logical group of data, according to a general embodiment of the invention.
- the update process comprises the following steps:
- STEP 260 The memory is organized into blocks, each block partitioned into memory units that are erasable together, each memory unit for storing a logical unit of data.
- STEP 262 The data is organized into logical groups, each logical group partitioned into logical units.
- STEP 264 In the standard case, all logical units of a logical group are stored among the memory units of an original block according to a first prescribed order, preferably, in logically sequential order. In this way, the index for accessing the individual logical units in the block is known.
- STEP 270 For a given logical group (e.g., LG X ) of data, a request is made to update a logical unit within LG X .
- a logical unit update is given as an example. In general the update will be a segment of one or more contiguous logical units within LG X .
- STEP 272 The requested update logical unit is to be stored in a second block, dedicated to recording the updates of LG X .
- the recording order is according to a second order, typically, the order the updates are requested.
- One feature of the invention allows an update block to be set up initially generic to recording data in logically sequential or chaotic order. So depending on the second order, the second block can be a sequential one or a chaotic one.
- STEP 274 The second block continues to have requested logical units recorded as the process loops back to STEP 270 .
- the second block will be closed to receiving further updates when a predetermined condition for closure materializes. In that case, the process proceeds to STEP 276 .
- STEP 276 Determination is made whether or not the closed, second block has its update logical units recorded in a similar order as that of the original block. The two blocks are considered to have similar order when their recorded logical units differ by only a page tag, as described in connection with FIG. 3A . If the two blocks have similar order the process proceeds to STEP 280 , otherwise, some sort of garbage collection needs to be performed in STEP 290 .
- STEP 280 Since the second block has the same order as the first block, it is used to replace the original, first block. The update process then ends at STEP 299 .
- STEP 290 The latest version of each logical unit of the given logical group is gathered from among the second block (update block) and the first block (original block). The consolidated logical units of the given logical group are then written to a third block in an order similar to the first block.
- STEP 292 Since the third block (consolidated block) has a similar order to the first block, it is used to replace the original, first block. The update process then ends at STEP 299 .
- STEP 299 When a closeout process creates an intact update block, it becomes the new standard block for the given logical group. The update thread for the logical group will be terminated.
- FIG. 10 is a flow diagram illustrating a process by the update block manager to update a logical group of data, according to a preferred embodiment of the invention.
- the update process comprises the following steps:
- STEP 310 For a given logical group (e.g., LG X ) of data, a request is made to update a logical sector within LG X .
- a sector update is given as an example. In general the update will be a segment of one or more contiguous logical sectors within LG X .
- STEP 312 If an update block dedicated to LG X does not already exist, proceed to STEP 410 to initiate a new update thread for the logical group. This will be accomplished by allocating an update block dedicated to recording update data of the logical group. If there is already an update block open, proceed to STEP 314 to begin recording the update sector onto the update block.
- STEP 314 If the current update block is already chaotic (i.e., non-sequential) then simply proceed to STEP 510 for recording the requested update sector onto the chaotic update block. If the current update block is sequential, proceed to STEP 316 for processing of a sequential update block.
- STEP 316 One feature of the invention allows an update block to be set up initially generic to recording data in logically sequential or chaotic order. However, since the logical group ultimately has its data stored in a metablock in a logically sequential order, it is desirable to keep the update block sequential as far as possible. Less processing will then be required when an update block is closed to further updates as garbage collection will not be needed.
- a forced sequential process STEP 320 is optionally performed to preserve the sequential update block as far as possible in view of a pending chaotic update.
- the forced sequential process STEP 320 comprises the following substeps:
- STEP 330 If the update creates a logical address jump not greater than a predetermined amount, C B , the process proceeds to a forced sequential update process in STEP 350 , otherwise the process proceeds to STEP 340 to consider if it qualifies for a forced sequential closeout (an illustrative decision sketch follows this step list).
- STEP 340 If the number of unfilled physical sectors exceeds a predetermined design parameter, C C , whose typical value is half of the size of the update block, then the update block is relatively unused and will not be prematurely closed. The process proceeds to STEP 370 and the update block will become chaotic. On the other hand, if the update block is substantially filled, it is considered to have been well utilized already and therefore is directed to STEP 360 for forced sequential closeout.
- STEP 350 Forced sequential update allows current sequential update block to remain sequential as long as the address jump does not exceed a predetermined amount, C B . Essentially, sectors from the update block's associated original block are copied to fill the gap spanned by the address jump. Thus, the sequential update block will be padded with data in the intervening addresses before proceeding to STEP 510 to record the current update sequentially.
- STEP 360 Forced sequential closeout allows the currently sequential update block to be closed out if it is already substantially filled rather than converted to a chaotic one by the pending chaotic update.
- a chaotic or non-sequential update is defined as one with a forward address transition not covered by the address jump exception described above, a backward address transition, or an address repetition.
- the unwritten sector locations of the update block are filled by copying sectors from the update block's associated original partly-obsolete block. The original block is then fully obsolete and may be erased.
- the current update block now has the full set of logical sectors and is then closed out as an intact metablock replacing the original metablock.
- the process then proceeds to STEP 430 to have a new update block allocated in its place to accept the recording of the pending sector update that was first requested in STEP 310 .
- STEP 370 When the pending update is not in sequential order and optionally, if the forced sequential conditions are not satisfied, the sequential update block is allowed to be converted to a chaotic one by virtue of allowing the pending update sector, with non-sequential address, to be recorded on the update block when the process proceeds to STEP 510 . If the maximum number of chaotic update blocks exist, it is necessary to close the least recently accessed chaotic update block before allowing the conversion to proceed; thus preventing the maximum number of chaotic blocks from being exceeded. The identification of the least recently accessed chaotic update block is the same as the general case described in STEP 420 , but is constrained to chaotic update blocks only. Closing a chaotic update block at this time is achieved by consolidation as described in STEP 550 .
- STEP 410 The process of allocating an erase metablock as an update block begins with the determination whether a predetermined system limitation is exceeded or not. Due to finite resources, the memory management system typically allows a predetermined maximum number of update blocks, C A , to exist concurrently. This limit is the aggregate of sequential update blocks and chaotic update blocks, and is a design parameter. In a preferred embodiment, the limit is, for example, a maximum of 8 update blocks. Also, due to the higher demand on system resources, there may also be a corresponding predetermined limit on the maximum number of chaotic update blocks that can be open concurrently (e.g., 4.)
- STEP 420 In the event the maximum number of update blocks, C A , is exceeded, the least-recently accessed update block is closed and garbage collection is performed.
- the least recently accessed update block is identified as the update block associated with the logical block that has been accessed least recently. For the purpose of determining the least recently accessed blocks, an access includes writes and optionally reads of logical sectors. A list of open update blocks is maintained in order of access; at initialization, no access order is assumed.
- the closure of an update block follows along the similar process described in connection with STEP 360 and STEP 530 when the update block is sequential, and in connection with STEP 540 when the update block is chaotic. The closure makes room for the allocation of a new update block in STEP 430 .
- STEP 430 The allocation request is fulfilled with the allocation of a new metablock as an update block dedicated to the given logical group LG X . The process then proceeds to STEP 510 .
- STEP 510 The requested update sector is recorded onto next available physical location of the update block. The process then proceeds to STEP 520 to determine if the update block is ripe for closeout.
- STEP 520 If the update block still has room for accepting additional updates, proceed to STEP 570 . Otherwise proceed to STEP 522 to closeout the update block.
- In one implementation, the write request is split into two portions, with the first portion writing up to the last physical sector of the block. The block is then closed and the second portion of the write will be treated as the next requested write. In the other implementation, the requested write is withheld while the block has its remaining sectors padded and is then closed. The requested write will be treated as the next requested write.
- STEP 522 If the update block is sequential, proceed to STEP 530 for sequential closure. If the update block is chaotic, proceed to STEP 540 for chaotic closure.
- STEP 530 Since the update block is sequential and fully filled, the logical group stored in it is intact. The metablock is intact and replaces the original one. At this time, the original block is fully obsolete and may be erased. The process then proceeds to STEP 570 where the update thread for the given logical group ends.
- STEP 540 Since the update block is non-sequentially filled and may contain multiple updates of some logical sectors, garbage collection is performed to salvage the valid data in it.
- the chaotic update block will either be compacted or consolidated. Which process to perform will be determined in STEP 542 .
- STEP 542 To perform compaction or consolidation will depend on the degeneracy of the update block. If a logical sector is updated multiple times, its logical address is highly degenerate. There will be multiple versions of the same logical sector recorded on the update block and only the last recorded version is the valid one for that logical sector. In an update block containing logical sectors with multiple versions, the number of distinct logical sectors will be much less than that of a logical group.
- When the number of distinct logical sectors in the update block exceeds a predetermined design parameter, C D , whose typical value is half of the size of a logical group, the closeout process will perform a consolidation in STEP 550 , otherwise the process will proceed to compaction in STEP 560 .
- STEP 550 If the chaotic update block is to be consolidated, the original block and the update block will be replaced by a new standard metablock containing the consolidated data. After consolidation the update thread will end in STEP 570 .
- STEP 560 If the chaotic update block is to be compacted, it will be replaced by a new update block carrying the compacted data. After compaction the processing of the compacted update block will end in STEP 570 . Alternatively, compaction can be delayed until the update block is written to again, thus removing the possibility of compaction being followed by consolidation without intervening updates. The new update block will then be used in further updating of the given logical block when a next request for update in LG X appears in STEP 502 .
- STEP 570 When a closeout process creates an intact update block, it becomes the new standard block for the given logical group. The update thread for the logical group will be terminated. When a closeout process creates a new update block replacing an existing one, the new update block will be used to record the next update requested for the given logical group. When an update block is not closed out, the processing will continue when a next request for update in LG X appears in STEP 310 .
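- For illustration only, the decision made in STEP 330 , STEP 340 , STEP 350 , STEP 360 and STEP 370 can be pictured as a small classification routine. The block size and the values of C B and C C below are assumptions; the description above only requires C C to be typically half of the update block size.

```c
/* A hedged sketch of the forced-sequential decision of FIG. 10 (STEPs 330-370). */
#include <stdint.h>

#define BLOCK_SECTORS       64u                    /* assumed update-block size */
#define CB_MAX_ADDRESS_JUMP 8u                     /* assumed C_B */
#define CC_UNFILLED_LIMIT   (BLOCK_SECTORS / 2u)   /* assumed C_C: half of the block */

enum pending_update_action {
    FORCED_SEQUENTIAL_UPDATE,    /* STEP 350: pad the gap and stay sequential */
    FORCED_SEQUENTIAL_CLOSEOUT,  /* STEP 360: fill the block and close it out intact */
    CONVERT_TO_CHAOTIC           /* STEP 370: record the pending update chaotically */
};

/* forward_jump is nonzero only for a forward address transition; a backward
 * transition or an address repetition never qualifies for padding (STEP 330). */
enum pending_update_action classify_pending_update(int forward_jump,
                                                   uint32_t jump_length,
                                                   uint32_t sectors_written)
{
    if (forward_jump && jump_length <= CB_MAX_ADDRESS_JUMP)
        return FORCED_SEQUENTIAL_UPDATE;

    uint32_t unfilled = BLOCK_SECTORS - sectors_written;
    if (unfilled <= CC_UNFILLED_LIMIT)       /* block substantially filled (STEP 340) */
        return FORCED_SEQUENTIAL_CLOSEOUT;

    return CONVERT_TO_CHAOTIC;               /* block relatively unused: let it go chaotic */
}
```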
- When a chaotic update block is closed, the update data recorded on it is further processed.
- its valid data is garbage collected either by a process of compaction to another chaotic block, or by a process of consolidation with its associated original block to form a new standard sequential block.
- FIG. 11A is a flow diagram illustrating in more detail the consolidation process of closing a chaotic update block shown in FIG. 10 .
- Chaotic update block consolidation is one of two possible processes performed when the update block is being closed out, e.g., when the update block is full with its last physical sector location written. Consolidation is chosen when the number of distinct logical sectors written in the block exceeds a predetermined design parameter, C D .
- the consolidation process STEP 550 shown in FIG. 10 comprises the following substeps:
- STEP 551 When a chaotic update block is being closed, a new metablock replacing it will be allocated.
- STEP 552 Gather the latest version of each logical sector among the chaotic update block and its associated original block, ignoring all the obsolete sectors.
- STEP 554 Record the gathered valid sectors onto the new metablock in logically sequential order to form an intact block, i.e., a block with all the logical sectors of a logical group recorded in sequential order.
- STEP 556 Replace the original block with the new intact block.
- STEP 558 Erase the closed out update block and the original block.
- FIG. 11B is a flow diagram illustrating in more detail the compaction process for closing a chaotic update block shown in FIG. 10 .
- Compaction is chosen when the number of distinct logical sectors written in the block is below a predetermined design parameter, C D .
- the compaction process STEP 560 shown in FIG. 10 comprises the following substeps:
- STEP 561 When a chaotic update block is being compacted, a new metablock replacing it will be allocated.
- STEP 562 Gather the latest version of each logical sector among the existing chaotic update block to be compacted.
- STEP 564 Record the gathered sectors onto the new update block to form a new update block having compacted sectors.
- STEP 566 Replace the existing update block with the new update block having compacted sectors.
- STEP 568 Erase the closed out update block.
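- As a rough illustration of the two closeout paths of FIG. 11A and FIG. 11B , the following C sketch models a block as an in-memory array of sectors and a chaotic block index as an array of offsets. All names and sizes here are assumptions; the real firmware operates on flash pages rather than RAM arrays.

```c
/* Minimal sketch of consolidation (FIG. 11A) and compaction (FIG. 11B). */
#include <stdint.h>

#define GROUP_SECTORS 64u          /* assumed number of sectors per logical group */
#define SECTOR_BYTES  512u
#define NO_COPY       0xFFFFFFFFu  /* this logical sector was never written chaotically */

typedef struct { uint8_t data[SECTOR_BYTES]; } sector_t;
typedef struct { sector_t s[GROUP_SECTORS]; } block_t;   /* a metablock, much simplified */

/* chaotic_index[ls] is the offset of the latest copy of logical sector ls in
 * the chaotic update block, or NO_COPY if the original block holds the valid copy. */

/* FIG. 11A, STEPs 552-558: gather the latest versions from both blocks and
 * record them in logically sequential order in a new intact block. */
void consolidate(const uint32_t chaotic_index[GROUP_SECTORS],
                 const block_t *chaotic, const block_t *original, block_t *out)
{
    for (uint32_t ls = 0; ls < GROUP_SECTORS; ls++) {
        if (chaotic_index[ls] != NO_COPY)
            out->s[ls] = chaotic->s[chaotic_index[ls]];   /* latest update wins */
        else
            out->s[ls] = original->s[ls];                 /* sector never updated */
    }
    /* The new intact block then replaces the original block; the chaotic update
     * block and the old original block may both be erased (not modelled here). */
}

/* FIG. 11B, STEPs 562-568: copy only the latest written versions into a new
 * update block; the associated original block is unaffected. */
uint32_t compact(const uint32_t chaotic_index[GROUP_SECTORS],
                 const block_t *chaotic, block_t *out)
{
    uint32_t next = 0;
    for (uint32_t ls = 0; ls < GROUP_SECTORS; ls++)
        if (chaotic_index[ls] != NO_COPY)
            out->s[next++] = chaotic->s[chaotic_index[ls]];
    return next;   /* number of compacted sectors carried into the new update block */
}
```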
- FIG. 12A illustrates all possible states of a Logical Group, and the possible transitions between them under various operations.
- FIG. 12B is a table listing the possible states of a Logical Group.
- the Logical Group states are defined as follows:
- FIG. 13A illustrates all possible states of a metablock, and the possible transitions between them under various operations.
- FIG. 13B is a table listing the possible states of a metablock.
- the metablock states are defined as follows:
- FIGS. 14 (A)- 14 (J) are state diagrams showing the effect of various operations on the state of the logical group and also on the physical metablock.
- FIG. 14 (A) shows state diagrams corresponding to the logical group and the metablock transitions for a first write operation.
- the host writes one or more sectors of a previously unwritten Logical Group in logically sequential order to a newly allocated Erased metablock.
- the Logical Group and the metablock go to the Sequential Update state.
- FIG. 14 (B) shows state diagrams corresponding to the logical group and the metablock transitions for a first intact operation.
- a previously unwritten Sequential Update Logical Group becomes Intact as all the sectors are written sequentially by the host. The transition can also happen if the card fills up the group by filling the remaining unwritten sectors with a predefined data pattern.
- the metablock becomes Intact.
- FIG. 14 (C) shows state diagrams corresponding to the logical group and the metablock transitions for a first chaotic operation.
- a previously unwritten Sequential Update Logical Group becomes Chaotic when at least one sector has been written non-sequentially by the host.
- FIG. 14 (D) shows state diagrams corresponding to the logical group and the metablock transitions for a first compaction operation. All valid sectors within a previously unwritten Chaotic Update Logical Group are copied to a new Chaotic metablock from the old block, which is then erased.
- FIG. 14 (E) shows state diagrams corresponding to the logical group and the metablock transitions for a first consolidation operation. All valid sectors within a previously unwritten Chaotic Update Logical Group are moved from the old Chaotic block to fill a newly allocated Erased block in logically sequential order. Sectors unwritten by the host are filled with a predefined data pattern. The old chaotic block is then erased.
- FIG. 14 (F) shows state diagrams corresponding to the logical group and the metablock transitions for a sequential write operation.
- the host writes one or more sectors of an Intact Logical Group in logically sequential order to a newly allocated Erased metablock.
- the Logical Group and the metablock go to Sequential Update state.
- the previously Intact metablock becomes an Original metablock.
- FIG. 14 (G) shows state diagrams corresponding to the logical group and the metablock transitions for a sequential fill operation.
- a Sequential Update Logical Group becomes Intact when all its sectors are written sequentially by the host. This may also occur during garbage collection when the Sequential Update Logical Group is filled with valid sectors from the original block in order to make it Intact, after which the original block is erased.
- FIG. 14 (H) shows state diagrams corresponding to the logical group and the metablock transitions for a non-sequential write operation.
- a Sequential Update Logical Group becomes Chaotic when at least one sector is written non-sequentially by the host.
- the non-sequential sector writes may cause valid sectors in either the Update block or the corresponding Original block to become obsolete.
- FIG. 14 (I) shows state diagrams corresponding to the logical group and the metablock transitions for a compaction operation. All valid sectors within a Chaotic Update Logical Group are copied into a new chaotic metablock from the old block, which is then erased. The Original block is unaffected.
- FIG. 14 (J) shows state diagrams corresponding to the logical group and the metablock transitions for a consolidation operation. All valid sectors within a Chaotic Update Logical Group are copied from the old chaotic block and the Original block to fill a newly allocated Erased block in logically sequential order. The old chaotic block and the Original block are then erased.
- FIG. 15 illustrates a preferred embodiment of the structure of an allocation block list (ABL) for keeping track of opened and closed update blocks and erased blocks for allocation.
- the allocation block list (ABL) 610 is held in controller RAM 130 , to allow management of allocation of erased blocks, allocated update blocks, associated blocks and control structures, and to enable correct logical to physical address translation.
- the ABL includes a list of erased blocks, an open update block list 614 and a closed update block list 616 .
- the open update block list 614 is the set of block entries in the ABL with the attributes of Open Update Block.
- the open update block list has one entry for each data update block currently open. Each entry holds the following information.
- LG is the logical group address the current update metablock is dedicated to.
- Sequential/Chaotic is a status indicating whether the update block has been filled with sequential or chaotic update data.
- MB is the metablock address of the update block.
- Page tag is the starting logical sector recorded at the first physical location of the update block. Number of sectors written indicates the number of sectors currently written onto the update block.
- MB 0 is the metablock address of the associated original block. Page Tag 0 is the page tag of the associated original block.
- the closed update block list 616 is a subset of the Allocation Block List (ABL). It is the set of block entries in the ABL with the attributes of Closed Update Block.
- the closed update block list has one entry for each data update block which has been closed, but whose entry has not yet been updated in the main logical-to-physical directory. Each entry holds the following information.
- LG is the logical group address the current update block is dedicated to.
- MB is the metablock address of the update block.
- Page tag is the starting logical sector recorded at the first physical location of the update block.
- MB 0 is the metablock address of the associated original block.
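- Rendered as C structures, the two kinds of list entries described above might look as follows. The field widths are illustrative assumptions; only the field names correspond to the description of FIG. 15 .

```c
/* Sketch of the per-block entries of the open and closed update block lists. */
#include <stdbool.h>
#include <stdint.h>

/* One entry per data update block currently open. */
struct open_update_block_entry {
    uint32_t logical_group;      /* LG: logical group the update metablock is dedicated to */
    bool     is_chaotic;         /* Sequential/Chaotic status of the recorded update data */
    uint32_t metablock;          /* MB: metablock address of the update block */
    uint32_t page_tag;           /* starting logical sector at the first physical location */
    uint32_t sectors_written;    /* number of sectors currently written onto the update block */
    uint32_t original_metablock; /* MB0: metablock address of the associated original block */
    uint32_t original_page_tag;  /* Page Tag0: page tag of the associated original block */
};

/* One entry per data update block that has been closed but whose entry has not
 * yet been folded into the main logical-to-physical directory. */
struct closed_update_block_entry {
    uint32_t logical_group;      /* LG */
    uint32_t metablock;          /* MB */
    uint32_t page_tag;           /* starting logical sector of the update block */
    uint32_t original_metablock; /* MB0 */
};
```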
- a sequential update block has the data stored in logically sequential order, thus any logical sector within the block can be located easily.
- a chaotic update block has its logical sectors stored out of order and may also store multiple update generations of a logical sector. Additional information must be maintained to keep track of where each valid logical sector is located in the chaotic update block.
- chaotic block indexing data structures allow tracking and fast access of all valid sectors in a chaotic block.
- Chaotic block indexing independently manages small regions of logical address space, and efficiently handles system data and hot regions of user data.
- the indexing data structures essentially allow indexing information to be maintained in flash memory with infrequent update requirement so that performance is not significantly impacted.
- lists of recently written sectors in chaotic blocks are held in a chaotic sector list in controller RAM.
- a cache of index information from flash memory is held in controller RAM in order to minimize the number of flash sector accesses for address translation.
- Indexes for each chaotic block are stored in chaotic block index (CBI) sectors in flash memory.
- FIG. 16A illustrates the data fields of a chaotic block index (CBI) sector.
- a Chaotic Block Index Sector (CBI sector) contains an index for each sector in a logical group mapped to a chaotic update block, defining the location of each sector of the logical group within the chaotic update block or its associated original block.
- a CBI sector includes a chaotic block index field for keeping track of valid sectors within the chaotic block, a chaotic block info field for keeping track of address parameters for the chaotic block, and a sector index field for keeping track of the valid CBI sectors within the metablock (CBI block) storing the CBI sectors.
- FIG. 16B illustrates an example of the chaotic block index (CBI) sectors being recorded in a dedicated metablock.
- the dedicated metablock will be referred to as a CBI block 620 .
- When a CBI sector is updated, it is written in the next available physical sector location in the CBI block 620 . Multiple copies of a CBI sector may therefore exist in the CBI block, with only the last written copy being valid. For example the CBI sector for the logical group LG 1 has been updated three times with the latest version being the valid one.
- the location of each valid sector in the CBI block is identified by a set of indices in the last written CBI sector in the block.
- the last written CBI sector in the block is the CBI sector for LG 136 and its set of indices is the valid one superseding all previous ones.
- When the CBI block becomes full, it is compacted during a control write operation by rewriting all valid sectors to a new block location. The full block is then erased.
- the chaotic block index field within a CBI sector contains an index entry for each logical sector within a logical group or sub-group mapped to a chaotic update block. Each index entry signifies an offset within the chaotic update block at which valid data for the corresponding logical sector is located. A reserved index value indicates that no valid data for the logical sector exists in the chaotic update block, and that the corresponding sector in the associated original block is valid. A cache of some chaotic block index field entries is held in controller RAM.
- the chaotic block info field within a CBI sector contains one entry for each chaotic update block that exists in the system, recording address parameter information for the block. Information in this field is only valid in the last written sector in the CBI block. This information is also present in data structures in RAM.
- the entry for each chaotic update block includes three address parameters. The first is the logical address of the logical group (or logical group number) associated with the chaotic update block. The second is the metablock address of the chaotic update block. The third is the physical address offset of the last sector written in the chaotic update block. The offset information sets the start point for scanning of the chaotic update block during initialization, to rebuild data structures in RAM.
- the sector index field contains an entry for each valid CBI sector in the CBI block. It defines the offsets within the CBI block at which the most recently written CBI sectors relating to each permitted chaotic update block are located. A reserved value of an offset in the index indicates that a permitted chaotic update block does not exist.
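- The three CBI sector fields just described can be pictured as the following C structure. The array sizes and reserved values are assumptions for illustration (256 sectors per tracked sub-group and 8 chaotic update blocks are mentioned below in connection with FIG. 16D ); the specification does not fix a binary layout.

```c
/* Sketch of the CBI sector fields of FIG. 16A. */
#include <stdint.h>

#define TRACKED_SECTORS     256u     /* sectors tracked by one CBI sector (see FIG. 16D) */
#define MAX_CHAOTIC_BLOCKS  8u       /* assumed number of permitted chaotic update blocks */
#define CBI_ENTRY_RESERVED  0xFFFFu  /* no valid data in the chaotic block: original is valid */
#define CBI_OFFSET_RESERVED 0xFFFFu  /* the referenced CBI sector or block does not exist */

/* Chaotic block info: one entry per chaotic update block existing in the system;
 * valid only in the last written CBI sector of the CBI block. */
struct chaotic_block_info {
    uint32_t logical_group;        /* logical group (or group number) being updated */
    uint32_t metablock;            /* metablock address of the chaotic update block */
    uint16_t last_written_offset;  /* start point for scanning during initialization */
};

struct cbi_sector {
    /* Chaotic block index: offset in the chaotic update block where the valid data
     * of each logical sector is located, or CBI_ENTRY_RESERVED if unchanged. */
    uint16_t chaotic_block_index[TRACKED_SECTORS];

    /* Address parameters for every chaotic update block in the system. */
    struct chaotic_block_info chaotic_block_info[MAX_CHAOTIC_BLOCKS];

    /* Sector index: offset within the CBI block of the most recently written CBI
     * sector for each permitted chaotic update block. */
    uint16_t sector_index[MAX_CHAOTIC_BLOCKS];
};
```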
- FIG. 16C is a flow diagram illustrating access to the data of a logical sector of a given logical group undergoing chaotic update.
- the update data is recorded in the chaotic update block while the unchanged data remains in the original metablock associated with the logical group.
- the process of accessing a logical sector of the logical group under chaotic update is as follows:
- STEP 650 Begin locating a given logical sector of a given logical group.
- STEP 652 Locate last written CBI sector in the CBI block.
- STEP 654 Locate the chaotic update block or original block associated with the given logical group by looking up the Chaotic Block Info field of the last written CBI sector. This step can be performed any time just before STEP 662 .
- STEP 658 If the last written CBI sector is directed to the given logical group, the CBI sector is located. Proceed to STEP 662 . Otherwise, proceed to STEP 660 .
- STEP 660 Locate the CBI sector for the given logical group by looking up the sector index field of the last written CBI sector.
- STEP 662 Locate the given logical sector among either the chaotic block or the original block by looking up the Chaotic Block Index field of the located CBI sector.
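- Once STEP 652 to STEP 660 have identified the CBI sector directed to the given logical group, the final lookup of STEP 662 amounts to reading one index entry. The sketch below is a hypothetical rendering of that step; the cbi_view structure and the treatment of the page tag are simplifications.

```c
/* Hedged sketch of STEP 662: resolving one logical sector of a group under
 * chaotic update, given the CBI sector located in STEPs 652-660. */
#include <stdint.h>

#define CBI_ENTRY_RESERVED 0xFFFFu   /* valid data lives in the original block */

struct cbi_view {                        /* the parts of a located CBI sector used here */
    uint32_t chaotic_metablock;          /* from the chaotic block info field */
    uint32_t original_metablock;         /* associated original block of the group */
    const uint16_t *chaotic_block_index; /* one entry per logical sector of the group */
};

struct sector_location { uint32_t metablock; uint32_t offset; };

struct sector_location locate_logical_sector(const struct cbi_view *cbi,
                                             uint32_t sector_in_group)
{
    uint16_t entry = cbi->chaotic_block_index[sector_in_group];
    struct sector_location loc;
    if (entry == CBI_ENTRY_RESERVED) {
        loc.metablock = cbi->original_metablock;  /* never updated: read the original block */
        loc.offset    = sector_in_group;          /* page-tag wrap-around ignored here */
    } else {
        loc.metablock = cbi->chaotic_metablock;   /* latest copy is in the chaotic block */
        loc.offset    = entry;
    }
    return loc;
}
```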
- FIG. 16D is a flow diagram illustrating access to the data of a logical sector of a given logical group undergoing chaotic update, according to an alternative embodiment in which the logical group has been partitioned into subgroups.
- Owing to its finite capacity, a CBI sector can only keep track of a predetermined maximum number of logical sectors.
- the logical group is partitioned into multiple subgroups with a CBI sector assigned to each subgroup.
- each CBI sector has enough capacity for tracking a logical group consisting of 256 sectors and up to 8 chaotic update blocks.
- CBI sectors may exist for up to 8 sub-groups within a logical group, giving support for logical groups up to 2048 sectors in size.
- an indirect indexing scheme is employed to facilitate management of the index.
- Each entry of the sector index has direct and indirect fields.
- the direct sector index defines the offsets within the CBI block at which all possible CBI sectors relating to a specific chaotic update block are located. Information in this field is only valid in the last written CBI sector relating to that specific chaotic update block. A reserved value of an offset in the index indicates that the CBI sector does not exist because the corresponding logical subgroup relating to the chaotic update block either does not exist, or has not been updated since the update block was allocated.
- the indirect sector index defines the offsets within the CBI block at which the most recently written CBI sectors relating to each permitted chaotic update block are located.
- a reserved value of an offset in the index indicates that a permitted chaotic update block does not exist.
- FIG. 16D shows the process of accessing a logical sector of the logical group under chaotic update as follows:
- STEP 670 Partition each Logical Group into multiple subgroups and assign a CBI sector to each subgroup.
- STEP 680 Begin locating a given logical sector of a given subgroup of a given logical group.
- STEP 682 Locate the last written CBI sector in the CBI block.
- STEP 684 Locate the chaotic update block or original block associated with the given subgroup by looking up the Chaotic Block Info field of the last written CBI sector. This step can be performed any time just before STEP 696 .
- STEP 686 If the last written CBI sector is directed to the given logical group, proceed to STEP 691 . Otherwise, proceed to STEP 690 .
- STEP 690 Locate the last written of the multiple CBI sectors for the given logical group by looking up the Indirect Sector Index field of the last written CBI sector.
- STEP 691 At least a CBI sector associated with one of the subgroups for the given logical group has been located. Continue.
- STEP 692 If the located CBI sector is directed to the given subgroup, the CBI sector for the given subgroup is located. Proceed to STEP 696 . Otherwise, proceed to STEP 694 .
- STEP 694 Locate the CBI sector for the given subgroup by looking up the direct sector index field of the currently located CBI sector.
- STEP 696 Locate the given logical sector among either the chaotic block or the original block by looking up the Chaotic Block Index field of the CBI sector for the given subgroup.
- FIG. 16E illustrates examples of Chaotic Block Indexing (CBI) sectors and their functions for the embodiment where each logical group is partitioned into multiple subgroups.
- a logical group 700 originally has its intact data stored in an original metablock 702 . The logical group is then undergoing updates with the allocation of a dedicated chaotic update block 704 .
- the logical group 700 is partitioned into subgroups, such as subgroups A, B, C, D, each having 256 sectors.
- To locate a given logical sector, e.g., the ith sector in subgroup B, the last written CBI sector in the CBI block 620 is first located.
- the chaotic block info field of the last written CBI sector provides the address to locate the chaotic update block 704 for the given logical group. At the same time it provides the location of the last sector written in the chaotic block. This information is useful in the event of scanning and rebuilding indices.
- If the last written CBI sector turns out to be one of the four CBI sectors of the given logical group, it will be further determined if it is exactly the CBI sector for the given subgroup B that contains the ith logical sector. If it is, then the CBI sector's chaotic block index will point to the metablock location for storing the data for the ith logical sector. The sector location could be either in the chaotic update block 704 or the original block 702 .
- If the last written CBI sector turns out to be one of the four CBI sectors of the given logical group but is not exactly for the subgroup B, then its direct sector index is looked up to locate the CBI sector for the subgroup B. Once this exact CBI sector is located, its chaotic block index is looked up to locate the ith logical sector among the chaotic update block 704 and the original block 702 .
- If the last written CBI sector turns out not to be any one of the four CBI sectors of the given logical group, its indirect sector index is looked up to locate one of the four.
- the CBI sector for subgroup C is located.
- this CBI sector for subgroup C has its direct sector index looked up to locate the exact CBI sector for the subgroup B.
- the example shows that when its chaotic block index is looked up, the ith logical sector is found to be unchanged and its valid data will be located in the original block.
- a list of chaotic sectors exists in controller RAM for each chaotic update block in the system.
- Each list contains a record of sectors written in the chaotic update block since a related CBI sector was last updated in flash memory.
- the number of logical sector addresses for a specific chaotic update block, which can be held in a chaotic sector list, is a design parameter with a typical value of 8 to 16.
- the optimum size of the list is determined as a tradeoff between its effects on overhead for chaotic data-write operations and sector scanning time during initialization.
- During initialization, each chaotic update block is scanned as necessary to identify valid sectors written since the previous update of one of its associated CBI sectors.
- a chaotic sector list in controller RAM for each chaotic update block is constructed. Each block need only be scanned from the last sector address defined in its chaotic block info field in the last written CBI sector.
- When a chaotic update block is allocated, a CBI sector is written to correspond to all updated logical sub-groups.
- the logical and physical addresses for the chaotic update block are written in an available chaotic block info field in the sector, with null entries in the chaotic block index field.
- a chaotic sector list is opened in controller RAM.
- the corresponding chaotic sector list in controller RAM is modified to include records of sectors written to a chaotic update block.
- When a chaotic sector list in controller RAM has no available space for records of further sector writes to a chaotic update block, updated CBI sectors are written for logical sub-groups relating to sectors in the list, and the list is cleared.
- the logical to physical address translation module 140 shown in FIG. 2 is responsible for relating a host's logical address to a corresponding physical address in flash memory. Mapping between logical groups and physical groups (metablocks) is stored in a set of tables and lists distributed among the nonvolatile flash memory 200 and the volatile but more agile RAM 130 (see FIG. 1 .) An address table is maintained in flash memory, containing a metablock address for every logical group in the memory system. In addition, logical to physical address records for recently written sectors are temporarily held in RAM. These volatile records can be reconstructed from block lists and data sector headers in flash memory when the system is initialized after power-up. Thus, the address table in flash memory need be updated only infrequently, leading to a low percentage of overhead write operations for control data.
- the hierarchy of address records for logical groups includes the open update block list, the closed update block list in RAM and the group address table (GAT) maintained in flash memory.
- the open update block list is a list in controller RAM of data update blocks which are currently open for writing updated host sector data.
- the entry for a block is moved to the closed update block list when the block is closed.
- the closed update block list is a list in controller RAM of data update blocks which have been closed. A subset of the entries in the list is moved to a sector in the Group Address Table during a control write operation.
- the Group Address Table is a list of metablock addresses for all logical groups of host data in the memory system.
- the GAT contains one entry for each logical group, ordered sequentially according to logical address.
- the nth entry in the GAT contains the metablock address for the logical group with address n.
- it is a table in flash memory, comprising a set of sectors (referred to as GAT sectors) with entries defining metablock addresses for every logical group in the memory system.
- the GAT sectors are located in one or more dedicated control blocks (referred to as GAT blocks) in flash memory.
- FIG. 17A illustrates the data fields of a group address table (GAT) sector.
- a GAT sector may for example have sufficient capacity to contain GAT entries for a set of 128 contiguous logical groups.
- Each GAT sector includes two components, namely a set of GAT entries for the metablock address of each logical group within a range, and a GAT sector index.
- the first component contains information for locating the metablock associated with the logical address.
- the second component contains information for locating all valid GAT sectors within the GAT block.
- Each GAT entry has three fields, namely, the metablock number, the page tag as defined earlier in connection with FIG. 3A (iii), and a flag indicating whether the metablock has been relinked.
- the GAT sector index lists the positions of valid GAT sectors in a GAT block. This index is in every GAT sector but is superseded by the version of the next written GAT sector in the GAT block. Thus only the version in the last written GAT sector is valid.
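- A GAT entry and a GAT sector might be declared as follows; the 128 entries per sector come from the description above, while the field widths and the limit of 64 index positions (stated further below) are otherwise assumptions.

```c
/* Sketch of the GAT sector fields of FIG. 17A. */
#include <stdint.h>

#define GAT_ENTRIES_PER_SECTOR 128u  /* contiguous logical groups covered by one GAT sector */
#define MAX_VALID_GAT_SECTORS  64u   /* maximum number of valid GAT sectors per GAT block */

struct gat_entry {
    uint32_t metablock;  /* metablock number storing the logical group */
    uint16_t page_tag;   /* starting-sector offset, as defined for FIG. 3A(iii) */
    uint8_t  relinked;   /* flag indicating whether the metablock has been relinked */
};

struct gat_sector {
    /* GAT entries for 128 contiguous logical groups within this sector's range. */
    struct gat_entry entries[GAT_ENTRIES_PER_SECTOR];

    /* GAT sector index: positions of the valid GAT sectors in the GAT block.
     * Only the copy in the last written GAT sector is valid. */
    uint16_t sector_index[MAX_VALID_GAT_SECTORS];
};
```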
- FIG. 17B illustrates an example of the group address table (GAT) sectors being recorded in one or more GAT blocks.
- a GAT block is a metablock dedicated to recording GAT sectors.
- When a GAT sector is updated, it is written in the next available physical sector location in the GAT block 720 . Multiple copies of a GAT sector may therefore exist in the GAT block, with only the last written copy being valid.
- the GAT sector 255 (containing pointers for the logical groups LG 3968 -LG 4095 ) has been updated at least two times with the latest version being the valid one.
- the location of each valid sector in the GAT block is identified by a set of indices in the last written GAT sector in the block.
- the last written GAT sector in the block is GAT sector 236 and its set of indices is the valid one superseding all previous ones.
- When the GAT block becomes full, it is compacted during a control write operation by rewriting all valid sectors to a new block location. The full block is then erased.
- a GAT block contains entries for a logically contiguous set of groups in a region of logical address space.
- GAT sectors within a GAT block each contain logical to physical mapping information for 128 contiguous logical groups.
- the GAT sectors required to store entries for all logical groups within the address range spanned by a GAT block occupy only a fraction of the total sector positions in the block.
- a GAT sector may therefore be updated by writing it at the next available sector position in the block.
- An index of all valid GAT sectors and their position in the GAT block is maintained in an index field in the most recently written GAT sector.
- the fraction of the total sectors in a GAT block occupied by valid GAT sectors is a system design parameter, which is typically 25%. However, there is a maximum of 64 valid GAT sectors per GAT block. In systems with large logical capacity, it may be necessary to store GAT sectors in more than one GAT block. In this case, each GAT block is associated with a fixed range of logical groups.
- a GAT update is performed as part of a control write operation, which is triggered when the ABL runs out of blocks for allocation (see FIG. 18 .) It is performed concurrently with ABL fill and CBL empty operations.
- During a GAT update operation, one GAT sector has entries updated with information from corresponding entries in the closed update block list.
- any corresponding entries are removed from the closed update block list (CUBL).
- the GAT sector to be updated is selected on the basis of the first entry in the closed update block list. The updated sector is written to the next available sector location in the GAT block.
- a GAT rewrite operation occurs during a control write operation when no sector location is available for an updated GAT sector.
- a new GAT block is allocated, and valid GAT sectors as defined by the GAT index are copied in sequential order from the full GAT block. The full GAT block is then erased.
- a GAT cache is a copy in controller RAM 130 of entries in a subdivision of the 128 entries in a GAT sector.
- the number of GAT cache entries is a system design parameter, with typical value 32.
- a GAT cache for the relevant sector subdivision is created each time an entry is read from a GAT sector. Multiple GAT caches are maintained. The number is a design parameter with a typical value of 4.
- a GAT cache is overwritten with entries for a different sector subdivision on a least-recently-used basis.
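- A possible shape for these caches is sketched below, with 4 caches of 32 entries each (the typical values given above) and a simple use counter for least-recently-used replacement. The structure and the packed entry format are illustrative assumptions.

```c
/* Hedged sketch of the GAT caches held in controller RAM. */
#include <stddef.h>
#include <stdint.h>

#define GAT_CACHE_ENTRIES 32u   /* entries per cache: one subdivision of a GAT sector */
#define NUM_GAT_CACHES    4u    /* number of caches maintained concurrently */

struct gat_cache {
    int      valid;
    uint32_t first_group;                 /* first logical group of the cached subdivision */
    uint32_t entry[GAT_CACHE_ENTRIES];    /* metablock address and page tag, packed */
    uint32_t last_used;                   /* use counter for LRU replacement */
};

static struct gat_cache caches[NUM_GAT_CACHES];
static uint32_t use_clock;

/* Return the cache covering 'group', or NULL when the GAT sector must be read. */
struct gat_cache *gat_cache_lookup(uint32_t group)
{
    for (unsigned i = 0; i < NUM_GAT_CACHES; i++) {
        struct gat_cache *c = &caches[i];
        if (c->valid && group >= c->first_group &&
            group < c->first_group + GAT_CACHE_ENTRIES) {
            c->last_used = ++use_clock;
            return c;
        }
    }
    return NULL;
}

/* Pick the cache to overwrite with a different sector subdivision (LRU). */
struct gat_cache *gat_cache_victim(void)
{
    struct gat_cache *victim = &caches[0];
    for (unsigned i = 1; i < NUM_GAT_CACHES; i++)
        if (!caches[i].valid || caches[i].last_used < victim->last_used)
            victim = &caches[i];
    return victim;
}
```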
- the erase block manager 160 shown in FIG. 2 manages erase blocks using a set of lists for maintaining directory and system control information. These lists are distributed among the controller RAM 130 and flash memory 200 . When an erased metablock must be allocated for storage of user data, or for storage of system control data structures, the next available metablock number in the allocation block list (ABL) (see FIG. 15 ) held in controller RAM is selected. Similarly, when a metablock is erased after it has been retired, its number is added to a cleared block list (CBL) also held in controller RAM. Relatively static directory and system control data are stored in flash memory. These include erased block lists and a bitmap (MAP) listing the erased status of all metablocks in the flash memory. The erased block lists and MAP are stored in individual sectors and are recorded to a dedicated metablock, known as a MAP block. These lists, distributed among the controller RAM and flash memory, provide a hierarchy of erased block records to efficiently manage erased metablock usage.
- FIG. 18 is a schematic block diagram illustrating the distribution and flow of the control and directory information for usage and recycling of erased blocks.
- the control and directory data are maintained in lists which are held either in controller RAM 130 or in a MAP block 750 residing in flash memory 200 .
- the controller RAM 130 holds the allocation block list (ABL) 610 and a cleared block list (CBL) 740 .
- the allocation block list (ABL) keeps track of which metablocks have recently been allocated for storage of user data, or for storage of system control data structures. When a new erased metablock needs to be allocated, the next available metablock number in the allocation block list (ABL) is selected.
- the cleared block list (CBL) is used to keep track of update metablocks that have been de-allocated and erased.
- the ABL and CBL are held in controller RAM 130 (see FIG. 1 ) for speedy access and easy manipulation when tracking the relatively active update blocks.
- the allocation block list keeps track of a pool of erased metablocks and the allocation of the erased metablocks to be an update block.
- each of these metablocks may be described by an attribute designating whether it is an erased block in the ABL pending allocation, an open update block, or a closed update block.
- FIG. 18 shows the ABL containing an erased ABL list 612 , the open update block list 614 and the closed update block list 616 .
- associated with the open update block list 614 is the associated original block list 615 .
- associated with the closed update block list is the associated erased original block list 617 .
- these associated lists are subsets of the open update block list 614 and the closed update block list 616 respectively.
- the erased ABL block list 612 , the open update block list 614 , and the closed update block list 616 are all subsets of the allocation block list (ABL) 610 , the entries in each having respectively the corresponding attribute.
- the MAP block 750 is a metablock dedicated to storing erase management records in flash memory 200 .
- the MAP block stores a time series of MAP block sectors, with each MAP sector being either an erase block management (EBM) sector 760 or a MAP sector 780 .
- EBM erase block management
- the associated control and directory data is preferably contained in a logical sector which may be updated in the MAP block, with each instance of update data being recorded to a new block sector.
- Multiple copies of EBM sectors 760 and MAP sectors 780 may exist in the MAP block 750 , with only the latest version being valid.
- An index to the positions of valid MAP sectors is contained in a field in the EBM sector.
- a valid EBM sector is always written last in the MAP block during a control write operation.
- When the MAP block 750 is full, it is compacted during a control write operation by rewriting all valid sectors to a new block location. The full block is then erased.
- Each EBM sector 760 contains erased block lists (EBL) 770 , which are lists of addresses of a subset of the population of erased blocks.
- the erased block lists (EBL) 770 act as a buffer containing erased metablock numbers, from which metablock numbers are periodically taken to re-fill the ABL, and to which metablock numbers are periodically added to re-empty the CBL.
- the EBL 770 includes the available block buffer (ABB) 772 , the erased block buffer (EBB) 774 and the cleared block buffer (CBB) 776 .
- the available block buffer (ABB) 772 contains a copy of the entries in the ABL 610 immediately following the previous ABL fill operation. It is in effect a backup copy of the ABL just after an ABL fill operation.
- the erased block buffer (EBB) 774 contains erased block addresses which have been previously transferred either from MAP sectors 780 or from the CBB list 776 (described below), and which are available for transfer to the ABL 610 during an ABL fill operation.
- the cleared block buffer (CBB) 776 contains addresses of erased blocks which have been transferred from the CBL 740 during a CBL empty operation and which will be subsequently transferred to MAP sectors 780 or to the EBB list 774 .
- Each of the MAP sectors 780 contains a bitmap structure referred to as MAP.
- the MAP uses one bit for each metablock in flash memory, which is used to indicate the erase status of each block. Bits corresponding to block addresses listed in the ABL, CBL, or erased block lists in the EBM sector are not set to the erased state in the MAP.
- Any block which does not contain valid data structures and which is not designated as an erased block within the MAP, erased block lists, ABL or CBL is never used by the block allocation algorithm and is therefore inaccessible for storage of host or control data structures. This provides a simple mechanism for excluding blocks with defective locations from the accessible flash memory address space.
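- A bitmap of this kind reduces to ordinary bit manipulation; the fragment below is a sketch under an assumed 512-byte MAP sector and bit ordering, neither of which is specified here.

```c
/* Sketch of the MAP bitmap: one bit per metablock recording its erased state. */
#include <stdbool.h>
#include <stdint.h>

#define MAP_SECTOR_BYTES      512u
#define BLOCKS_PER_MAP_SECTOR (MAP_SECTOR_BYTES * 8u)

struct map_sector { uint8_t bits[MAP_SECTOR_BYTES]; };

/* Set or clear the erased bit of a metablock. A block currently listed in the
 * ABL, the CBL or the erased block lists of the EBM sector would have its bit
 * left clear, as described above. */
void map_set_erased(struct map_sector *m, uint32_t block, bool erased)
{
    uint32_t bit = block % BLOCKS_PER_MAP_SECTOR;
    if (erased)
        m->bits[bit / 8u] |= (uint8_t)(1u << (bit % 8u));
    else
        m->bits[bit / 8u] &= (uint8_t)~(1u << (bit % 8u));
}

bool map_is_erased(const struct map_sector *m, uint32_t block)
{
    uint32_t bit = block % BLOCKS_PER_MAP_SECTOR;
    return (m->bits[bit / 8u] >> (bit % 8u)) & 1u;
}
```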
- the hierarchy shown in FIG. 18 allows erased block records to be managed efficiently and provides full security of the block address lists stored in the controller's RAM. Erased block entries are exchanged between these block address lists and one or more MAP sectors 780 , on an infrequent basis. These lists may be reconstructed during system initialization after a power-down, via information in the erased block lists and address translation tables stored in sectors in flash memory, and limited scanning of a small number of referenced data blocks in flash memory.
- the algorithms adopted for updating the hierarchy of erased metablock records result in erased blocks being allocated for use in an order which interleaves bursts of blocks in address order from the MAP block 750 with bursts of block addresses from the CBL 740 which reflect the order blocks were updated by the host.
- a single MAP sector can provide a bitmap for all metablocks in the system. In this case, erased blocks are always allocated for use in address order as recorded in this MAP sector.
- the ABL 610 is a list with address entries for erased metablocks which may be allocated for use, and metablocks which have recently been allocated as data update blocks.
- the actual number of block addresses in the ABL lies between maximum and minimum limits, which are system design variables.
- the number of ABL entries formatted during manufacturing is a function of the card type and capacity.
- the number of entries in the ABL may be reduced near the end of life of the system, as the number of available erased blocks is reduced by failure of blocks during life. For example, after a fill operation, entries in the ABL may designate blocks available for the following purposes: entries for partially written data update blocks, with one entry per block, not exceeding a system limit for a maximum number of concurrently opened update blocks; between one and twenty entries for erased blocks for allocation as data update blocks; and four entries for erased blocks for allocation as control blocks.
- As the ABL 610 becomes depleted through allocations, it will need to be refilled.
- An operation to fill the ABL occurs during a control write operation. This is triggered when a block must be allocated, but the ABL contains insufficient erased block entries available for allocation as a data update block, or for some other control data update block.
- the ABL fill operation is concurrent with a GAT update operation.
- ABL entries with attributes of closed data update blocks are retained, unless an entry for the block is being written in the concurrent GAT update operation, in which case the entry is removed from the ABL.
- the ABL is compacted to remove gaps created by removal of entries, maintaining the order of entries.
- the ABL is completely filled by appending the next available entries from the EBB list.
- the ABB list is over-written with the current entries in the ABL.
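- Treated as plain list manipulation, the fill operation might look like the sketch below, with the lists modelled as small in-RAM arrays. List sizes, the abl_entry fields and the EBB representation are all assumptions for illustration.

```c
/* Hedged sketch of the ABL fill operation (retain, compact, append, back up). */
#include <stdbool.h>
#include <stdint.h>

#define ABL_SIZE 16u   /* assumed maximum number of ABL entries */
#define EBB_SIZE 64u   /* assumed capacity of the erased block buffer */

struct abl_entry { uint32_t metablock; bool closed_update; bool written_to_gat; };
struct abl_list  { struct abl_entry e[ABL_SIZE]; uint32_t count; };
struct ebb_list  { uint32_t metablock[EBB_SIZE]; uint32_t head; uint32_t count; };

void abl_fill(struct abl_list *abl, struct abl_list *abb_backup, struct ebb_list *ebb)
{
    /* Retain closed-update-block entries unless the concurrent GAT update has
     * just recorded them, and compact the list to remove gaps, keeping order. */
    uint32_t kept = 0;
    for (uint32_t i = 0; i < abl->count; i++)
        if (!(abl->e[i].closed_update && abl->e[i].written_to_gat))
            abl->e[kept++] = abl->e[i];
    abl->count = kept;

    /* Completely fill the ABL by appending the next available EBB entries. */
    while (abl->count < ABL_SIZE && ebb->count > 0) {
        struct abl_entry fresh = { ebb->metablock[ebb->head], false, false };
        abl->e[abl->count++] = fresh;
        ebb->head = (ebb->head + 1u) % EBB_SIZE;
        ebb->count--;
    }

    /* The ABB list is over-written with the current entries in the ABL. */
    *abb_backup = *abl;
}
```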
- the CBL is a list of erased block addresses in controller RAM with the same limitation on the number of erased block entries as the ABL.
- An operation to empty the CBL occurs during a control write operation. It is therefore concurrent with ABL fill/GAT update operations, or CBI block write operations.
- entries are removed from the CBL 740 and written to the CBB list 776 .
- a MAP exchange operation between the erase block information in the MAP sectors 780 and the EBM sectors 760 may occur periodically during a control write operation, when the EBB list 774 is empty. If all erased metablocks in the system are recorded in the EBM sector 760 , no MAP sector 780 exists and no MAP exchange is performed.
- In a MAP exchange operation, a MAP sector feeding the EBB 774 with erased blocks is regarded as a source MAP sector 782 .
- a MAP sector receiving erased blocks from the CBB 776 is regarded as a destination MAP sector 784 . If only one MAP sector exists, it acts as both source and destination MAP sector, as defined below.
- a source MAP sector is selected, on the basis of an incremental pointer.
- a destination MAP sector is selected, on the basis of the block address in the first CBB entry that is not in the source MAP sector.
- the destination MAP sector is updated, as defined by relevant entries in the CBB, and the entries are removed from the CBB.
- the updated destination MAP sector is written in the MAP block, unless no separate source MAP sector exists.
- the source MAP sector is updated, as defined by relevant entries in the CBB, and the entries are removed from the CBB.
- the EBB is filled to the extent possible with erased block addresses defined from the source MAP sector.
- the updated source MAP sector is written in the MAP block.
- An updated EBM sector is written in the MAP block.
- FIG. 18 shows the distribution and flow of the control and directory information between the various lists.
- operations to move entries between elements of the lists or to change the attributes of entries, identified in FIG. 18 as [A] to [O], are as follows.
- the logical to physical address translation module 140 shown in FIG. 2 performs a logical to physical address translation. Except for those logical groups that have recently been updated, the bulk of the translation could be performed using the group address table (GAT) residing in the flash memory 200 or the GAT cache in controller RAM 130 . Address translations for the recently updated logical groups will require looking up address lists for update blocks which reside mainly in controller RAM 130 . The process for logical to physical address translation for a logical sector address is therefore dependent on the type of block associated with the logical group within which the sector is located. The types of blocks are: intact block, sequential data update block, chaotic data update block, closed data update block.
- FIG. 19 is a flow chart showing the process of logical to physical address translation. Essentially, the corresponding metablock and the physical sector is located by using the logical sector address first to lookup the various update directories such as the open update block list and the close update block list. If the associated metablock is not part of an update process, then directory information is provided by the GAT.
- the logical to physical address translation includes the following steps:
- STEP 800 A logical sector address is given.
- STEP 810 Look up given logical address in the open update blocks list 614 (see FIGS. 15 and 18 ) in controller RAM. If lookup fails, proceed to STEP 820 , otherwise proceed to STEP 830 .
- STEP 820 Look up given logical address in the closed update block list 616 . If lookup fails, the given logical address is not part of any update process; proceed to STEP 870 for GAT address translation. Otherwise proceed to STEP 860 for closed update block address translation.
- STEP 830 If the update block containing the given logical address is sequential, proceed to STEP 840 for sequential update block address translation. Otherwise proceed to STEP 850 for chaotic update block address translation.
- STEP 840 Obtain the metablock address using sequential update block address translation. Proceed to STEP 880 .
- STEP 850 Obtain the metablock address using chaotic update block address translation. Proceed to STEP 880 .
- STEP 860 Obtain the metablock address using closed update block address translation. Proceed to STEP 880 .
- STEP 870 Obtain the metablock address using group address table (GAT) translation. Proceed to STEP 880 .
- STEP 880 Convert the Metablock Address to a physical address. The translation method depends on whether the metablock has been relinked.
- STEP 890 Physical sector address obtained.
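- A minimal sketch of the dispatch of FIG. 19 (STEPs 800-890) is given below. The dictionaries stand in for the RAM-resident open and closed update block lists and the GAT; the page-tag arithmetic and all field names are illustrative assumptions, not the actual firmware structures.

```python
# Minimal sketch of the FIG. 19 dispatch (STEPs 800-890). The dictionaries stand
# in for the RAM-resident open/closed update block lists and the GAT; the
# page-tag arithmetic and all field names are illustrative assumptions.

SECTORS_PER_GROUP = 2048          # assumed logical group size

open_update_list = {}    # group -> {"sequential", "metablock", "page_tag", "chaotic_index"}
closed_update_list = {}  # group -> {"metablock", "page_tag"}
gat = {}                 # group -> {"metablock", "page_tag"}

def _offset_from_page_tag(entry, group_offset):
    # Assumed convention: the page tag records the logical offset of the first
    # sector written to the block.
    return (group_offset - entry["page_tag"]) % SECTORS_PER_GROUP

def logical_to_physical(lba):                               # STEP 800
    group, offset = divmod(lba, SECTORS_PER_GROUP)
    if group in open_update_list:                           # STEP 810
        entry = open_update_list[group]
        if entry["sequential"]:                             # STEP 830
            pos = _offset_from_page_tag(entry, offset)      # STEP 840
        else:
            pos = entry["chaotic_index"][offset]            # STEP 850 (CBI lookup)
    elif group in closed_update_list:                       # STEP 820
        entry = closed_update_list[group]
        pos = _offset_from_page_tag(entry, offset)          # STEP 860
    else:
        entry = gat[group]                                  # STEP 870
        pos = _offset_from_page_tag(entry, offset)
    # STEP 880: convert the metablock address to a physical address (relinked
    # metablocks would be resolved here); STEP 890: physical sector obtained.
    return entry["metablock"], pos
```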
- Address translation for a target logical sector address in a logical group associated with a sequential update block can be accomplished directly from information in the open update block list 614 ( FIGS. 15 and 18 ), as follows.
- the address translation sequence for a target logical sector address in a logical group associated with a chaotic update block is as follows.
- address translation may be accomplished directly from its position in this list.
- the most recently written sector in the CBI block contains, within its chaotic block data field, the physical address of the chaotic update block relevant to the target logical sector address. It also contains, within its indirect sector index field, the offset within the CBI block of the last written CBI sector relating to this chaotic update block (see FIGS. 16A-16E ).
- the direct sector index field for the most recently accessed chaotic update sub-group is cached in RAM, eliminating the need to perform the read at step 4 for repeated accesses to the same chaotic update block.
- the direct sector index field read at step 4 or step 5 identifies in turn the CBI sector relating to the logical sub-group containing the target logical sector address.
- the chaotic block index entry for the target logical sector address is read from the CBI sector identified in step 6.
- the most recently read chaotic block index field may be cached in controller RAM, eliminating the need to perform the reads at step 4 and step 7 for repeated accesses to the same logical sub-group.
- the chaotic block index entry defines the location of the target logical sector either in the chaotic update block or in the associated original block. If the valid copy of the target logical sector is in the original block, it is located by use of the original metablock and page tag information.
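- The two-level CBI lookup just described can be sketched as follows; the CBI block is modeled as a list of sector dictionaries, and the field names echo FIGS. 16A-16E but are illustrative assumptions rather than the on-flash layout.

```python
# Sketch of the two-level CBI lookup. The CBI block is modeled as a list of
# sector dictionaries; all field names are illustrative assumptions.

SECTORS_PER_SUBGROUP = 256        # assumed sub-group size

def chaotic_translate(cbi_block, group_offset, original_block):
    last = cbi_block[-1]                          # most recently written CBI sector
    update_block = last["chaotic_block_address"]
    subgroup = group_offset // SECTORS_PER_SUBGROUP
    # The indirect sector index gives, per sub-group, the offset within the CBI
    # block of the last written CBI sector for that sub-group.
    direct_cbi = cbi_block[last["indirect_index"][subgroup]]
    # The chaotic block index entry locates the target sector in the update
    # block, or indicates that the valid copy is still in the original block.
    entry = direct_cbi["chaotic_block_index"][group_offset % SECTORS_PER_SUBGROUP]
    if entry is None:
        return original_block, group_offset       # located via original block / page tag
    return update_block, entry
```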
- Address translation for a target logical sector address in a logical group associated with a closed update block can be accomplished directly from information in the closed block update list (see FIG. 18 ), as follows.
- the metablock address assigned to the target logical group is read from the list.
- the sector address within the metablock is determined from the “page tag” field in the list.
- the address translation sequence for a target logical sector address in a logical group referenced by the GAT is as follows.
- the ranges of the available GAT caches in RAM are evaluated to determine if an entry for the target logical group is contained in a GAT cache.
- the GAT cache contains full group address information, including both metablock address and page tag, allowing translation of the target logical sector address.
- the GAT index must be read for the target GAT block, to identify the location of the GAT sector relating to the target logical group address.
- the GAT index for the last accessed GAT block is held in controller RAM, and may be accessed without need to read a sector from flash memory.
- a list of metablock addresses for every GAT block, and the number of sectors written in each GAT block, is held in controller RAM. If the required GAT index is not available at step 4, it may therefore be read immediately from flash memory.
- the GAT sector relating to the target logical group address is read from the sector location in the GAT block defined by the GAT index obtained at step 4 or step 6.
- a GAT cache is updated with the subdivision of the sector containing the target entry.
- the target sector address is obtained from the metablock address and “page tag” fields within the target GAT entry.
- the relevant LT sector is read from the BLM block, to determine the erase block address for the target sector address. Otherwise, the erase block address is determined directly from the metablock address.
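- The GAT translation sequence above may be sketched as below, with dictionaries standing in for the RAM GAT caches, the cached GAT indices and the GAT sectors in flash; read_sector() and every field name are assumptions for illustration.

```python
# Sketch of the GAT translation sequence. All structures and names are assumed.

def gat_translate(group, gat_caches, gat_index_cache, gat_block_of, read_sector):
    # Check the RAM-resident GAT caches first.
    for cache in gat_caches:
        if group in cache:
            entry = cache[group]
            return entry["metablock"], entry["page_tag"]
    # Obtain the GAT index for the GAT block covering this group, from RAM if it
    # is the last accessed GAT block, otherwise from flash.
    block = gat_block_of(group)
    index = gat_index_cache.get(block)
    if index is None:
        index = read_sector(block, "index")
    # Read the GAT sector holding the target entry.
    gat_sector = read_sector(block, index[group])
    # Refresh a GAT cache with the subdivision containing the target entry.
    gat_caches[0].update(gat_sector["entries"])
    # Return metablock address and page tag from the target GAT entry.
    entry = gat_sector["entries"][group]
    return entry["metablock"], entry["page_tag"]
```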
- FIG. 20 illustrates the hierarchy of the operations performed on control data structures in the course of the operation of the memory management.
- Data Update Management Operations act on the various lists that reside in RAM.
- Control write operations act on the various control data sectors and dedicated blocks in flash memory and also exchange data with the lists in RAM.
- Data update management operations are performed in RAM on the ABL, the CBL and the chaotic sector list.
- the ABL is updated when an erased block is allocated as an update block or a control block, or when an update block is closed.
- the CBL is updated when a control block is erased or when an entry for a closed update block is written to the GAT.
- the update chaotic sector list is updated when a sector is written to a chaotic update block.
- a control write operation causes information from control data structures in RAM to be written to control data structures in flash memory, with consequent update of other supporting control data structures in flash memory and RAM, if necessary. It is triggered either when the ABL contains no further entries for erased blocks to be allocated as update blocks, or when the CBI block is rewritten.
- the ABL fill operation, the CBL empty operation and the EBM sector update operation are performed during every control write operation.
- When the MAP block containing the EBM sector becomes full, valid EBM and MAP sectors are copied to an allocated erased block, and the previous MAP block is erased.
- a CBI sector is written, as described earlier, after certain chaotic sector write operations.
- When the CBI block becomes full, valid CBI sectors are copied to an allocated erased block, and the previous CBI block is erased.
- a MAP exchange operation is performed when there are no further erased block entries in the EBB list in the EBM sector.
- a MAP Address (MAPA) sector which records the current address of the MAP block, is written in a dedicated MAPA block on each occasion the MAP block is rewritten.
- a Boot sector is written in a current Boot block on each occasion the MAPA block is rewritten.
- the valid Boot sector is copied from the current version of the Boot block to the backup version, which then becomes the current version.
- the previous current version is erased and becomes the backup version, and the valid Boot sector is written back to it.
- each plane has its own set of sense amplifiers as part of read and program circuits to service in parallel a corresponding page of memory cells that spans the plane.
- a memory array organized into erasable blocks and constituted from multiple memory planes so that logical units can be read or programmed into the multiple planes in parallel
- provisions are made to keep the updated logical unit in the same plane as the original. This is accomplished by recording the updated logical unit to the next available location of a second block but in the same plane.
- the logical unit is stored with an identical offset position in the plane as its other versions so that all versions of a given logical unit are serviced by an identical set of sensing circuits.
- any intervening gaps from the last programmed memory unit to the next available plane-aligned memory unit are padded accordingly with the current versions of logical units.
- the padding is accomplished by filling in a gap with current versions of the logical units that follow logically from the last programmed logical unit, and with current versions of the logical units that logically precede the logical unit stored in the next available plane-aligned memory unit.
- each memory unit across each plane is either updated or padded with the latest versions.
- This scheme reduces the time for consolidation of a chaotic block by allowing on-plane rearrangement of the latest versions of the logical units of a logical group and avoiding having to gather the latest versions from different memory planes. This is of benefit where the performance specification for the host interface defines a maximum latency for completion of a sector write operation by the memory system.
- FIG. 21 illustrates a memory array constituted from multiple memory planes.
- the memory planes could be from the same memory chip or from multiple memory chips.
- Each plane 910 has its own set of read and program circuits 912 to service a page 914 of memory cells in parallel. Without loss of generality, in the example shown, the memory array has four planes operating in parallel.
- a logical unit is a minimum unit of access by a host system. Typically a logical unit is a sector of size 512 bytes.
- a page is a maximum unit of parallel read or program in a plane. Typically a logical page contains one or more logical units. Therefore, when combining multiple planes, a maximum aggregated unit of parallel read or program could be regarded as a metapage of memory cells, where the metapage is constituted by a page from each of the multiple planes. For example, a metapage such as MP 0 has four pages, one from each of the planes, P 0 , P 1 , P 2 and P 3 , storing in parallel logical pages LP 0 , LP 1 , LP 2 , LP 3 . Thus, the read and write performance of the memory is increased four fold as compared to operating only in one plane.
- the memory array is further organized into metablocks such as MB 0 , . . . , MB j where all memory cells within each metablock are erasable together as a unit.
- a metablock such as MB 0 is constituted from multiple memory locations, for storing logical pages 914 of data, such as LP 0 -LP N-1 .
- the logical pages in a metablock are distributed over the four planes P 0 , P 1 , P 2 and P 3 in a predetermined sequence according to the order they are filled in the metablock. For example, when the logical pages are filled in logically sequential order, the planes are visited in cyclic order with the first page in the first plane, the second page in the second plane, etc. After the last plane is reached, the filling returns cyclically to start from the first plane again in the next metapage. In this way, a contiguous run of logical pages can be accessed in parallel when all planes are operated in parallel.
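- The cyclic fill order can be summarized by a small runnable sketch: with four planes, the i-th logical page written to a metablock lands in plane i mod 4 of metapage i div 4. The helper names below are illustrative.

```python
# Small sketch of the cyclic fill order described above.

NUM_PLANES = 4

def plane_of(page_index):
    return page_index % NUM_PLANES

def metapage_of(page_index):
    return page_index // NUM_PLANES

# Logical pages LP0..LP7 fill metapages MP0 and MP1 across planes P0..P3.
assert [plane_of(i) for i in range(8)] == [0, 1, 2, 3, 0, 1, 2, 3]
assert [metapage_of(i) for i in range(8)] == [0, 0, 0, 0, 1, 1, 1, 1]
```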
- the memory operations in each memory plane are performed by the set of read/write circuits 912 .
- Data in and out of each of the read/write circuits is transferred through a data bus 930 under the control of a controller 920 .
- a buffer 922 in the controller 920 helps to buffer the transfer of data via the data bus 930 .
- if data is to be transferred from one plane to another, a two-step process is required. The data is first read out from the second plane by the controller and then transferred to the first plane via the data bus and the buffer.
- transfer of data between two different bit lines also requires the data to be exchanged through the data bus 930 .
- the present invention provides structures and schemes for memory block management such that access of data by one plane from another is avoided in order to maximize performance.
- a metapage is formed by multiple logical pages, one in each plane.
- Each logical page may consist of one or more logical units.
- a logical unit is taken as a logical sector of 512 bytes and a logical page is also a logical unit wide. Since flash memory does not allow rewrite of a portion of a block without first erasing the whole block, an update to a logical page is not written over the existing location but is recorded in an unused location of the block. The previous version(s) of the logical unit is then deemed obsolete. After a number of updates, a block may contain a number of logical units that have since been updated and thus become obsolete.
- the block is then said to be “dirty” and a garbage collection operation will ignore the dirty logical units but collect the latest version of each individual logical unit and re-record them in logically sequential order in one or more new blocks.
- the dirty block is then erased and recycled.
- When the updated logical unit is recorded in the next unused location in a block, it will generally not be recorded in the same memory plane as its previous versions.
- In a garbage collection operation such as a consolidation or compaction, the latest version of a logical unit would be re-recorded in the same plane as the original to maintain the original order. However, performance would degrade if the latest version has to be retrieved from another plane.
- when an original logical unit of a first block stored in a given plane is to be updated, provisions are made to keep the updated logical unit in the same plane as the original. This is accomplished by recording the updated logical unit to the next available location of a second block but in the same plane. In a preferred embodiment, any intervening gaps from the last programmed memory unit to the next available plane-aligned memory unit are padded (i.e., filled in by copying) with the current versions of logical units in the same relative location as the original logical units in the original block.
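- A hedged sketch of this plane-aligned update with padding follows; the update block is modeled as a pre-allocated list of memory units (one logical unit each, four per metapage), and current_version() is an assumed stand-in for reading the latest copy of the logical unit belonging at a given plane-aligned position.

```python
# Hedged sketch of a plane-aligned update with padding. The update block is a
# pre-allocated list (e.g., [None] * block_size); current_version(pos) is an
# assumed stand-in returning the latest copy of the unit belonging at pos.

NUM_PLANES = 4

def write_plane_aligned(update_block, write_ptr, updated_unit, original_plane, current_version):
    # Advance to the next free unit whose plane matches the plane holding the
    # original version of this logical unit.
    target = write_ptr
    while target % NUM_PLANES != original_plane:
        target += 1
    # Pad the intervening gap with current versions of the logical units that
    # keep the run logically sequential (see FIG. 24A / FIG. 24C).
    for pos in range(write_ptr, target):
        update_block[pos] = current_version(pos)
    update_block[target] = updated_unit
    return target + 1            # new write pointer for the next host write
```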
- FIG. 22A is a flow diagram illustrating a method of update with plane alignment, according to a general implementation of the invention.
- FIG. 22B illustrates a preferred embodiment of the step of storing updates in the flow diagram shown in FIG. 22A .
- FIG. 23A illustrates an example of logical units being written in sequential order to a sequential update block without regard to plane alignment.
- the example shows each logical page being the size of a logical sector, such as LS 0 , LS 1 , . . . .
- each block, such as MB 0 can be regarded as partitioned into metapages MP 0 , MP 1 , . . . , with each metapage, such as MP 0 containing four sectors such as LS 0 , LS 1 , LS 2 and LS 3 , respectively from the planes P 0 , P 1 , P 2 and P 3 . Therefore the block is filled with logical units, sector by sector, in cyclic order across planes P 0 , P 1 , P 2 and P 3 .
- the segment of data in the logical sectors LS 9 -LS 12 is being updated.
- the updated data as LS 9 ′-LS 12 ′ are recorded in the update block in a location directly following where the last write ends. It can be seen that the two host writes are such that the update data has been recorded in the update block in logically sequential order, namely LS 5 ′-LS 12 ′.
- the update block is regarded as a sequential update block since it has been filled in logically sequential order.
- the update data recorded in the update block obsoletes the corresponding ones in the original block.
- the update logical sectors are being recorded in the update block according to the next available location but without regard to plane alignment.
- the sector LS 5 is originally recorded in plane P 1 , but the updated LS 5 ′ is now recorded in P 0 .
- the other update sectors are all misaligned.
- FIG. 23B illustrates an example of logical units being written in non-sequential order to a chaotic update block without regard to plane alignment.
- the logical sectors LS 10 -LS 11 of a given logical group stored in an original metablock are updated.
- the updated logical sectors LS 10 ′-LS 11 ′ are stored in a newly allocated update block.
- the update block is a sequential one.
- the logical sectors LS 5 -LS 6 are updated as LS 5 ′-LS 6 ′ and recorded in the update block in the location immediately following the last write. This converts the update block from a sequential to a chaotic one.
- the logical sector LS 10 ′ is being updated again and is recorded in the next location of the update block as LS 10 ′′.
- LS 10 ′′ in the update block supersedes LS 10 ′ in a previous recording which in turn supersedes LS 10 in the original block.
- the data in the logical sector LS 10 ′′ is again updated and is recorded in the next location of the update block as LS 10 ′′′.
- LS 10 ′′′ is now the latest and only valid version for the logical sector LS 10 .
- All previous versions of LS 10 are now obsolete.
- in host write operation # 5 , the data in logical sector LS 30 is being updated and recorded in the update block as LS 30 ′.
- logical units within a logical group can be written in a chaotic update block in any order and with any repetition.
- the update logical sectors are being recorded in the update block according to the next available location but without regard to plane alignment.
- the sector LS 10 is originally recorded in plane P 2 (i.e., MP 2 , 3 rd plane), but the updated LS 10 ′ is now recorded in P 0 (i.e., MP 0 ′, 1 st plane).
- the logical sector LS 10 ′ is updated again as LS 10 ′′ and is placed in the next available location which turns out to be also in plane P 0 (1 st plane in MP 1 ′).
- FIG. 24A illustrates the sequential update example of FIG. 23A with plane alignment and padding, according to a preferred embodiment of the invention.
- the updated data as LS 5 ′-LS 8 ′ are recorded in a newly allocated update block starting from the first available plane-aligned location.
- LS 5 is originally in P 1 , which is the 2 nd plane of a metapage. Therefore, LS 5 ′-LS 7 ′ will be programmed in corresponding planes in the first available metapage, MP 0 , of the update block.
- the gap of the unused 1 st plane in MP 0 ′ is being padded with a current version of the logical sector LS 4 that precedes LS 5 in the metapage of the original block.
- the original LS 4 is then treated as obsolete data.
- the left over LS 8 ′ is then recorded in the 1 st plane of the next metapage, MP 1 ′, and is plane-aligned.
- the updated data as LS 9 ′-LS 12 ′ are recorded in the update block in the next available plane-aligned location. Therefore, LS 9 ′ is recorded in the next available plane-aligned memory unit, which is the 2 nd plane of MP 1 ′. In this case, no gaps result and no padding is necessary.
- the update block is regarded as a sequential update block since it has been filled in logically sequential order. Furthermore, it is plane-aligned since each update logical unit is in the same plane as its original.
- FIG. 24B illustrates the chaotic update example of FIG. 23B with plane alignment and with no padding, according to one preferred embodiment of the invention.
- the updated logical sectors LS 10 ′-LS 11 ′ are stored in a newly allocated update block. Instead of storing them in the next available memory units, they are stored in the next available plane-aligned memory units. Since LS 10 ′ and LS 11 ′ are originally stored in planes P 2 and P 3 (3 rd and 4 th planes of MP 2 of the original block) respectively, the next available plane-aligned memory units would be that in the 3 rd and 4 th planes of MP 0 ′ of the update block. At this point, the update block is non-sequential, with the pages of metapage MP 0 ′ filled in the order UNFILLED, UNFILLED, LS 10 ′ and LS 11 ′.
- the logical sectors LS 5 -LS 6 are updated as LS 5 ′-LS 6 ′ and recorded in the update block in the next available plane-aligned location.
- LS 5 ′ and LS 6 ′, being in the 2 nd (P 1 ) and 3 rd (P 2 ) planes or memory units of MP 1 of the original block, would be programmed into corresponding planes in the next available metapage MP 1 ′ in the update block. This leaves a preceding, unused 1 st plane in MP 1 ′.
- LS 10 ′′′ is now the latest and only valid version for the logical sector LS 10 . This leaves a gap consisting of the last plane in MP 2 ′ and the first two planes in MP 3 ′.
- the data in logical sector LS 30 is being updated and recorded in the update block as LS 30 ′. Since the original LS 30 resides in P 2 or the 3 rd plane of a metapage, it will be written to the next available 3 rd plane in the update block. In this case, it will be the 3 rd plane of MP 4 ′. A gap is created from the last plane of MP 3 ′ to the first two planes of MP 4 ′.
- the example illustrates that logical sectors within a logical group can be written in a chaotic update block in any order and with any repetition in a plane-aligned manner. In a subsequent garbage collection operation, all versions, especially the latest version of a given logical sector will be conveniently serviced by an identical set of sensing circuits.
- FIG. 24C illustrates the chaotic update example of FIG. 23B with plane alignment and padding, according to another preferred embodiment of the invention.
- the operation is similar to that shown in FIG. 24B , except the intervening gaps are first filled in with padding.
- the gap created by the 1 st and 2 nd unused planes of the metapage MP 0 ′ is first padded with current versions of LS 8 and LS 9 , which reside in the original block. This renders the LS 8 and LS 9 in the original block obsolete.
- the update block is a sequential one, with the metapage MP 0 ′ filled in the order LS 8 , LS 9 , LS 10 ′ and LS 11 ′.
- a gap is created by a preceding, unused 1 st plane in MP 1 ′ and it is first padded with LS 4 . This will render the LS 4 in the original block obsolete.
- the second write converts the update block from a sequential to a chaotic one.
- a gap is created by the unused last plane in MP 1 ′ and the first two planes of MP 2 ′.
- the last plane of MP 1 ′ is first padded with LS 7 which follows the last programmed LS 6 ′, and the first two planes of MP 2 ′ are padded with logical units preceding LS 10 , namely LS 8 and LS 9 . This obsoletes LS 10 ′ in MP 0 ′ and LS 7 -LS 9 in the original block.
- a gap is created consisting of the last plane in MP 2 ′ and the first two planes in MP 3 ′.
- the last plane of MP 2 ′ is padded by LS 11 ′ which is the current version of the logical unit following the last written LS 10 ′′ in the metapage MP 2 ′.
- the first two planes of MP 3 ′ are padded by LS 8 and LS 9 respectively, as the logical units preceding LS 10 ′′′ in the metapage MP 3 ′.
- a gap from the last plane of MP 3 ′ to the first two planes of MP 4 ′ will be padded accordingly, namely with LS 11 ′, LS 28 and LS 29 respectively.
- the example illustrates that logical sectors within a logical group can be written in a chaotic update block in any order and with any repetition in a plane-aligned manner.
- a metapage contains a cyclic run of pages from the individual planes. Since a metapage can be read or programmed in parallel, it is expedient to implement each host update with the granularity of a metapage. The padding, if any, is recorded together with the updated logical units, metapage by metapage.
- padding is performed on unused memory units preceding the plane-aligned memory unit to which the update is to be programmed.
- action on any unused memory units following the last programmed memory unit is deferred until the next host write.
- any preceding unused memory units are padded within each metapage's boundary. In other words, if a preceding gap straddles two metapages, padding will be performed on each metapage in a logically sequential order appropriate to each metapage, but without regard to continuity across the boundary. In the event of a consolidation of the block, the last written metapage, if partially written, will be completely filled by padding.
- any partially filled metapage is fully padded before moving to the next metapage.
- a physical page may contain one or more memory units. If each memory unit can store a sector of data, then a physical page may store one or more sectors.
- Some memory architectures support partial-page programming, in which, by inhibiting programming of selected memory units within a page, selected logical units may be programmed individually at different times over multiple programming passes.
- a logical group of logical units is stored in logically sequential order in an original block.
- subsequent versions of the logical units are stored in an update block.
- a garbage collection is eventually performed to collect the latest versions of the logical units among the original block and the update block and to consolidate them sequentially into a new original block.
- the garbage collection operation will be more efficient if the updated versions of a given logical unit are all stored in the update block in alignment with its original version in the original block such that all versions are accessible by an identical set of sensing circuits.
- FIG. 25 illustrates an example memory organization in which each page contains two memory units for storing two logical units, such as two logical sectors.
- the logical sectors are stored in logically sequential order: the logical sectors LS 0 and LS 1 are stored in the page P 0 , the logical sectors LS 2 and LS 3 are stored in the page P 1 , and the logical sectors LS 4 and LS 5 are stored in the page P 2 , etc. It will be seen that in this two-sector page, the first sector from the left has a page offset of “0”, and the second sector has a page offset of “1”.
- the updated logical sectors are recorded in an update block.
- the logical sector LS 2 resides in page P 1 with offset “0” in the original block. If in a first write LS 2 is updated to LS 2 ′, it will be stored in the first available location in the update block having the same page offset “0”. This will be in the first memory unit of the page P 0 ′. If in a second write LS 5 is updated to LS 5 ′, it will be stored in the first available location in the update block having the same page offset “1”. This will be in the second memory unit with offset “1” of the page P 1 ′.
- the unused memory units with offset “1” in P 0 ′ and offset “0” in P 1 ′ will first be padded by copying to them the latest versions of the logical sectors that will maintain logical sequential order at least within each page.
- LS 3 will be copied to the offset “1” location in P 0 ′ and LS 4 to the offset “0” location in P 1 ′.
- if in a third write LS 2 ′ is again updated to LS 2 ′′, it will be stored in offset “0” of P 2 ′.
- if in a fourth write LS 22 and LS 23 are respectively updated to LS 22 ′ and LS 23 ′, they will be stored respectively in offsets “0” and “1” of P 3 ′. However, before that, the unused memory unit with offset “1” in P 2 ′ is padded with LS 3 .
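- The sequence of writes just described can be tabulated as a short, runnable trace; the page labels follow the example, and the closing check confirms that every version keeps its original page offset.

```python
# Runnable trace of the FIG. 25 example above: each entry is (logical sector
# version, page in the update block, page offset). Page labels are illustrative.

update_block_writes = [
    ("LS2'",  "P0'", 0),   # first host write: same offset "0" as original LS2
    ("LS3",   "P0'", 1),   # padding keeps page P0' logically sequential
    ("LS4",   "P1'", 0),   # padding
    ("LS5'",  "P1'", 1),   # second host write: same offset "1" as original LS5
    ("LS2''", "P2'", 0),   # third host write
    ("LS3",   "P2'", 1),   # padding before the fourth write
    ("LS22'", "P3'", 0),   # fourth host write
    ("LS23'", "P3'", 1),
]
assert all(offset == i % 2 for i, (_, _, offset) in enumerate(update_block_writes))
```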
- the unit of program may have the granularity of a metapage. If the granularity of writing to a chaotic update block becomes a metapage, the entries in the CBI block described in connection with FIGS. 16A and 16B , may relate to metapages instead of sectors. The increased granularity reduces the number of entries that must be recorded for a chaotic update block, and allows direct indices to be eliminated and a single CBI sector to be used per metablock.
- FIG. 26A is similar to the memory structure of FIG. 21 except each page contains two sectors instead of one.
- a metapage MP 0 now has each of its pages capable of storing two logical units of data. If each logical unit is a sector then logical sectors are stored sequentially in MP 0 with LS 0 and LS 1 in plane P 0 , LS 2 and LS 3 in plane P 1 , etc.
- FIG. 26B illustrates metablocks shown in FIG. 26A having memory units laid out in a schematic linear fashion. As compared to the single sector page of FIG. 21 , the logical sectors are stored cyclically among the four pages with two sectors in each page.
- FIG. 27 illustrates an alternative scheme for plane alignment in an update block without padding logical units to be copied from one location to another.
- the portions of the four planes that intersect with the update block may be regarded as four buffers that collect plane-aligned updated logical units received from a host.
- Each logical unit received from the host is programmed without padding in the next available memory unit of the appropriate buffer.
- a different number of logical units are likely to have been programmed in each plane, according to the sequence of logical unit addresses received from the host.
- the chaotic update block MB′ 1 may contain updated versions of all logical units of a logical metapage, such as for MP′ 0 . It may also contain less than all logical units of a metapage, such as for MP′ 1 . In the case of MP′ 1 , the missing logical unit, LS 4 can be obtained from the corresponding original block MB 0 .
- This alternative scheme is particularly efficient if the memory architecture supports parallel read of an arbitrary logical page from each plane. In this way, all logical pages of a metapage can be read in a single parallel read operation even if the individual logical pages are not from the same row.
- FIG. 28 illustrates a scheme in which when a program failure occurs in a defective block during a consolidation operation, the consolidation operation is repeated on another block.
- Block 1 is an original block storing the complete logical units of a logical group in logically sequential order.
- the original block contains sections A, B, C, and D, each storing a subgroup of logical units.
- the newer versions of the logical units are recorded in an update block, namely, Block 2 .
- the updates could record the logical units in sequential or in non-sequential (chaotic) order.
- the update block is closed to receiving further updates either because it is full or for some other reasons.
- when the update block (Block 2 ) is closed, the current versions of the logical units that reside either on the update block or the original block (Block 1 ) are consolidated onto a new block (Block 3 ) to form a new original block for the logical group.
- the example shows the update block containing the newer versions of the logical units in sections B and D.
- the sections B and D are illustrated schematically in Block 2 not necessarily in the locations they are recorded but rather aligned to their original locations in Block 1 .
- the current versions of all the logical units of the logical group that originally reside in Block 1 are recorded in sequential order into a consolidation block (Block 3 ).
- the logical units of section A will first be copied from Block 1 to Block 3 , followed by those of section B from Block 2 to Block 3 .
- a defect in Block 3 results in a program failure.
- One way of handling the program failure is to restart the consolidation process on a fresh block (Block 4 ).
- the sections A, B, C, D are copied onto Block 4 , and the defective Block 3 is then discarded.
- this would mean performing two consolidation operations in tandem, which may result in copying as much as two blocks full of logical units.
- Memory devices have a specific time allowance to complete a certain operation. For example, when a host writes to a memory device, it expects the write operation to be completed within a specified time, known as “write latency”. While the memory device, such as a memory card, is busy writing the data from the host, it signals a “BUSY” state to the host. If the “BUSY” state lasts longer than the write latency period, the host will time-out the write operation and register an exception or error to the write operation.
- FIG. 29 illustrates schematically, a host write operation with a timing or write latency that allows enough time to complete a write (update) operation as well as a consolidation operation.
- the host write operation has a write latency, T W , that will provide sufficient time for the completion of an update operation 972 of writing host data to an update block ( FIG. 29 (A)).
- a host write to an update block may trigger a consolidation operation.
- the timing also allows for a consolidation operation 974 ( FIG. 29 (B)) in addition to the update operation 972 .
- having to restart the consolidation operation anew in response to a failed one may take too much time and exceed the specified write latency.
- program failure in a block during a time-critical memory operation is handled by continuing the programming operation in a breakout block. Later, at a less critical time, the data recorded in the failed block prior to the interruption is transferred to another block, which could also be the breakout block. The failed block can then be discarded. In this way, when a defective block is encountered, it can be handled without loss of data and without exceeding a specified time limit by having to transfer the stored data in the defective block on the spot.
- This error handling is especially critical for a garbage collection operation so that the entire operation need not be repeated on a fresh block during a critical time. Subsequently, at an opportune time, the data from the defective block can be salvaged by relocation to another block.
- FIG. 30 illustrates a flow chart of the program failure handling according to a general scheme of the invention.
- FIG. 31A illustrates one embodiment of program failure handling in which the third (final relocation) block is different from the second (breakout) block.
- In Phase I, a sequence of logical units is being recorded on a first block. If the logical units are from a host write, the first block may be regarded as an update block. If the logical units are from a consolidation or compaction operation, the first block may be regarded as a relocation block. If at some point a program failure is encountered in block 1 , a second block acting as a breakout block is provisioned. The logical unit that failed to be recorded in block 1 and any subsequent logical units are recorded on the breakout block instead. In this way, no additional time is required to replace the failed block 1 and the data residing on it.
- in a final Phase III, the failed block 1 and the data residing on it are replaced by relocating the logical units to block 3 , which serves as a relocation block.
- the data in the failed block is salvaged and the failed block can then be discarded.
- the final phase is timed so that it does not conflict with the timing of any contemporary memory operations.
- the relocation block 3 is distinct from the breakout block 2 . This is expedient when the breakout block has been recorded with additional logical units during the intermediate phase. Thus, the breakout block has turned into an update block and may not be suitable for relocating the logical units from the defective block 1 thereto.
- FIG. 31B illustrates another embodiment of program failure handling in which the third (final relocation) block is the same as the second (breakout) block.
- Phases I and II are similar to the first embodiment shown in FIG. 31A .
- In Phase III, the logical units from the defective block 1 are relocated to the breakout block 2 . This is expedient when the breakout block 2 has not been recorded with additional logical units other than the original sequence from the previous write operation. In this way, a minimum of blocks is needed to store the logical units in question.
- the program failure handling is especially critical during a consolidation operation.
- a normal consolidation operation consolidates into a consolidation block the current versions of all logical units of a logical group residing among an original block and an update block.
- if a program failure occurs in the consolidation block, another block acting as a breakout consolidation block will be provisioned to receive the consolidation of the remaining logical units. In this way, no logical units need be copied more than once and the operation with exception handling can still be completed within a period specified for a normal consolidation operation.
- the consolidation operation can be completed by consolidating all outstanding logical units of the group into the breakout block. The opportune time will be during some other period outside the current host write operation when there is time to perform the consolidation. One such opportune time is during another host write where there is an update but no associated consolidation operation.
- the consolidation with the program failure handling can be regarded as being implemented in multiple phases.
- in a first phase, the logical units are consolidated into more than one block after a program failure occurs, in order to avoid consolidating each logical unit more than once.
- the final phase is completed at an opportune time, where the logical group is consolidated into one block, preferably by collecting all the logical units into the breakout consolidation block in sequential order.
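- A minimal sketch of the Phase I behavior is given below, assuming hypothetical program() and allocate_erased_block() helpers: on a program failure the copy simply continues into a freshly provisioned breakout block, so no logical unit is copied twice within the time-critical window.

```python
# Minimal sketch of Phase I with a breakout consolidation block. ProgramError,
# program() and allocate_erased_block() are illustrative stand-ins for the
# flash programming primitive and block allocator.

class ProgramError(Exception):
    """Raised by program() when a page fails to program."""

def phase1_consolidate(current_logical_units, program, allocate_erased_block):
    consolidation_block = allocate_erased_block()
    blocks = [consolidation_block]
    target = consolidation_block
    for unit in current_logical_units:        # current versions, in sequential order
        try:
            program(target, unit)
        except ProgramError:
            target = allocate_erased_block()  # breakout consolidation block
            blocks.append(target)
            program(target, unit)             # resume with the unit that failed
    return blocks    # more than one block; Phase III later merges them into one
```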
- FIG. 32A illustrates a flow chart of initial update operation that results in a consolidation operation.
- FIG. 32B illustrates a flow chart of the multiple phase consolidation operation, according to a preferred embodiment of the invention.
- Phase I operation 1120 comprises STEP 1122 and STEP 1124 .
- Blocks 1 and 2 can be erased in order to free up space.
- Block 2 can be immediately released to the EBL (Erased Block List, see FIG. 18 ) and reused.
- Block 1 can only be released if it is a Closed Update block and the corresponding GAT entry points to another block.
- Block 3 becomes the original block for the logical group and Block 4 becomes a replacement sequential update block for Block 3 .
- After Phase I consolidation has completed, the memory device signals the host by releasing the BUSY signal.
- In Phase II, the intermediate operation 1130 may take place before the Phase III consolidation operation 1140 .
- A number of scenarios are possible, as given by any one of STEPs 1132 , 1134 and 1136 .
- Block 4 which is the breakout consolidation block and which has up to now assumed the role of a replacement sequential update block, will be used as a normal update block. Depending on the host writes, it may remain sequential or be turned into a chaotic state. As an update block it will at some point trigger closure of another chaotic block as described in an earlier preferred embodiment. If the host writes to another logical group, proceed directly to Phase III operation.
- logical units from the sections A and B will be read from Block 3 as the logical group original block, and logical units from the sections C and D will be read from Block 4 as the group's sequential update block.
- Another possibility for the intermediate phase is for the memory device to be powered off and then re-started.
- the blocks in the Allocation Block List (erase pool's blocks to be used, see FIG. 15 and FIG. 18 ) are scanned to identify the defective consolidation block that has become a special status original block (Block 3 ) and the associated sequential update block (Block 4 ) for the logical group.
- the flag in the first logical unit of the breakout block (Block 4 ) will indicate that the associated block is an original block (Block 3 ) that has suffered a program error.
- Block 3 can then be located.
- a flag is programmed into the first logical unit in the breakout consolidation block (Block 4 ). This helps to indicate the special status of the logical group: it has been consolidated into two blocks, namely Block 3 and Block 4 .
- An alternative to using a flag to identify the logical group with a defective block is to detect the block as defective during scanning by the property that it is not full as an original block should be (unless the error happened in the last page, and the last page does not have ECC error). Also, depending on implementation, there could be an information record about the failed group/block stored in a control data structure in flash memory, not just a flag in the header area of the first sector written to the breakout consolidation block (Block 4 ).
- the final consolidation in Phase III is executed whenever there is opportunity without violating any specified time limits.
- a preferred occasion is to “piggy-back” on the next host write slot when there is an update operation to another logical group with no accompanying consolidation operation. If the host write to another logical group triggers a garbage collection itself, then the Phase III consolidation will be deferred.
- FIG. 33 illustrates an example timing of the first and final phases of the multiple phase consolidation operation.
- the host write latency is the width of each host write time slot with a period T W .
- Host Write 1 is a simple update, and the current versions of a first set of logical units in the logical group LG 1 are recorded on an associated update block.
- in Host Write 2 , the update of LG 1 happens to result in the update block being closed (e.g., full).
- a new update block will be provisioned for recording the rest of the updates.
- the provisioning of a new update block may trigger a garbage collection that results in a consolidation operation on LG 4 in order to recycle a block for reuse.
- the current logical units of the LG 4 group are being recorded on a consolidation block in sequential order.
- the consolidation operation proceeds until a defect is encountered in the consolidation block.
- Phase I consolidation is then invoked where the consolidation operation continues on a breakout consolidation block.
- the final consolidation of LG 4 (Phase III) awaits the next opportunity.
- in Host Write 4 , the operation merely records some logical units of LG 2 to its update block.
- the surplus time in the time slot provides the opportunity to execute the final consolidation of LG 4 .
- FIG. 34A and FIG. 34B illustrate respectively a first case for Phase I and Phase III operations of the multiple phase consolidation as applied to the examples in FIG. 28 and FIG. 31 .
- FIG. 34A illustrates the case where the breakout consolidation block is not used as an update block but more as a consolidation block whose consolidation operation has been interrupted.
- FIG. 34A refers to the Host Write # 2 shown in FIG. 33 in which the host writes updates of logical units belonging to the logical group LG 1 and during which the operation also triggers a consolidation of blocks associated with another logical group LG 4 .
- the formation of the original block (Block 1 ) and the update block (Block 2 ) is in the same manner as the example in FIG. 28 .
- the consolidation block (Block 3 ) is found to have a defect at the point of consolidating the logical units in section C.
- the present multiple phase scheme continues the consolidation operation onto a newly provisioned block (Block 4 ) that serves as a breakout consolidation block.
- in the Phase I consolidation operation, logical units in sections A and B have been consolidated into the consolidation block (Block 3 ).
- the remaining logical units in sections C and D are copied sequentially into the breakout consolidation block (Block 4 ).
- the breakout consolidation block (Block 4 ) is not used to record any update data outside the consolidation operation and remains a breakout consolidation block that has yet to be completed.
- Since the data in Blocks 1 and 2 are now fully contained in the other blocks (Blocks 3 and 4 ), they can be erased for recycling.
- the address table (GAT) is updated to point to Block 3 as an original block for the logical group.
- the directory information (in the ACL, see FIG. 15 and FIG. 18 ) for the update block is also updated to point to Block 4 , which has become the sequential update block for the logical group (e.g., LG 4 ).
- the consolidated logical group is not localized in one block, but distributed over the defective consolidation block (Block 3 ) and the breakout consolidation block (Block 4 ).
- the important feature of this scheme is that logical units in the group are only consolidated once during this phase at the expense of spreading the consolidation over more than one block. In this way, the consolidation operation can be completed within the normal specified time.
- FIG. 34B illustrates the third and final phase of the multiple phase consolidation started in FIG. 34A .
- Phase III consolidation is executed at an opportune time after the first phase, such as during a subsequent host write that does not trigger an accompanying consolidation operation.
- FIG. 34B refers to the time slot in which Host Write # 4 shown in FIG. 33 takes place. During that period the host write updates logical units belonging to the logical group LG 2 without triggering another additional consolidation operation. Thus, the surplus time in the time slot can expediently be utilized for Phase III operation for completion of the consolidation of logical group LG 4 .
- the operation consolidates all outstanding logical units of LG 4 not already in the breakout block into the breakout block.
- the recorded sequence will be considered equivalent to the sequential order of A, B, C, D.
- the current versions of the outstanding logical units to be copied are preferably obtained from Block 3 , since it is already in consolidated form, but may also be collected from Block 1 and Block 2 if they have not been erased.
- FIG. 35A and FIG. 35B illustrate respectively a second case for Phase I and Phase III operations of the multiple phase consolidation as applied to the examples in FIG. 28 and FIG. 33 .
- FIG. 35A illustrates the case where the breakout consolidation block is maintained as an update block to receive host writes rather than as a consolidation block. This is applicable to a host write that for example updates the logical group LG 4 , and in the process also triggers a consolidation in the same logical group.
- the consolidation of Block 1 and Block 2 onto Block 3 proceeds until it encounters a program failure while processing section C.
- the consolidation then continues on a breakout consolidation block (Block 4 ).
- the breakout block is maintained as an update block.
- This case is particularly applicable to a scenario where the host write updates a logical group and triggers a consolidation of the same logical group. In the example, this enables the host updates for the logical group LG 4 to be recorded in the breakout consolidation block (Block 4 ) instead of in a new update block.
- the update block (previously the breakout consolidation block (Block 4 )) can be sequential or become chaotic depending on the host data recorded to it.
- Block 4 has turned chaotic since a subsequent newer version of logical units in section C renders the earlier ones in Block 4 obsolete.
- Block 3 will be regarded as the original block for LG 4
- Block 4 will be the associated update block.
- FIG. 35B illustrates the third and final phase of the multiple phase consolidation started in FIG. 35A for the second case.
- Phase III consolidation is executed at an opportune time after the first phase, such as during a subsequent host write that does not trigger an accompanying consolidation operation. During that period the host write updates logical units belonging to a logical group without triggering a consolidation operation. Thus, the surplus time in the time slot can expediently be utilized for Phase III operation for the completion of the consolidation of logical group LG 4 .
- the logical group LG 4 is then garbage collected from Blocks 3 and 4 to a new consolidation block (Block 5 ).
- Block 3 then gets marked as bad, Block 4 is recycled and the new consolidation block (Block 5 ) will become the new original block for the logical group LG 4 .
- the other blocks, Block 1 and Block 2 are also erased and recycled.
- FIGS. 31A, 31B , 34 A, 34 B, 35 A and 35 B apply to a preferred block management system with each physical block (metablock) storing only logical units belonging to the same logical group.
- the invention is equally applicable to other block management systems where there is no logical group to physical block alignment, such as those disclosed in WO 03/027828 and WO 00/49488.
- Some examples of implementing the phased program failure handling method in these other systems are illustrated in FIGS. 36A, 36B and 36 C.
- FIG. 36A illustrates the phased program error handling method as applied to the scenario when the host write triggers the closure of an update block, and the update block is sequential.
- the closure in this case is done by copying the remaining valid data (B and C) from the original Block 2 to the sequential update Block 3 .
- the portion C will be programmed to the reserved Block 4 .
- the new host data can then be written to a new update Block 5 (not shown).
- the phases II and III of the method are the same as for the case of chaotic block closure.
- FIG. 36B illustrates the phased program error handling method as it can be applied to a Partial Blocks system in the case of an update of update blocks.
- the logical group is stored in original Block 1 , and other update blocks.
- Consolidation operation includes copying of the data to one of the update blocks (to be chosen according to some rule, Block 3 on the diagram) from the original Block 1 and the other update Block 2 .
- Block 3 was already partially written.
- FIG. 36C illustrates the phased program error handling applied to the garbage collection operation, or clean-up, in a memory block management system that does not support a logical group mapped to a metablock.
- a memory block management (Cyclic Storage) system is described in WO 03/027828 A1.
- the distinctive feature of the Cyclic Storage system is that blocks are not allocated for a single logical group. There is support for multiple logical groupings of control data in a metablock.
- the garbage collection involved taking the valid data sectors, which may have no relationship (random logical block addresses), from a partially obsolete block to the Relocation block, which can have some data there already. If the Relocation block gets full during the operation, another one would be opened.
- a CBI sector is used to store an index that keeps track of the locations of the logical sectors stored randomly in a chaotic or non-sequential update block.
- an index of the logical units in an update block buffered in RAM is periodically stored in the non-volatile memory.
- the index is stored in a block dedicated for storing indices.
- the index is stored in the update block.
- the index is stored in the header of each logical unit.
- the logical units written after the last index update but before the next have their indexing information stored in the header of each logical unit. In this way, after a power interruption, the location of recently written logical units can be determined without having to perform a scan during initialization.
- a block is managed as partially sequential and partially non-sequential, directed to more than one logical subgroup.
- the list of recently written sectors for the logical group is held in controller RAM.
- the list is lost if the memory device suffers a power shut-down, but can be rebuilt by scanning the updated blocks on an initialization after a power turn-on.
- FIG. 37 illustrates an example of a schedule of writing of a CBI sector to an associated chaotic index sector block after every N sector writes of the same logical group.
- the example shows two logical groups LG 3 and LG 11 undergoing concurrent updates. Initially, the logical sectors of LG 3 were stored in sequential order in an original block. Updates of the logical sectors in the group are recorded on an associated update block in the order dictated by a host. The example shows a chaotic update sequence. Concurrently, the logical group LG 11 is also being updated in a similar manner at its update block. After every logical sector write, its position in the update block is kept in controller RAM. After every predetermined triggering event, the current index of the logical sectors in the update block is written in the form of a chaotic index sector to the non-volatile chaotic index sector block. For example, the predetermined triggering events occur after every N writes, where N may be 3.
- While the examples given have been with the logical unit of data being a sector, those skilled in the art will recognize that the logical unit could be some other aggregate, such as a page containing a sector or a group of sectors. Also, the first page in a sequential block does not need to be logical page 0 , since a page tag wrap around could be in place.
- the index pointer is stored in a dedicated CBI sector in the chaotic update block itself after every N writes thereto.
- This scheme is similar to the embodiment described earlier where the index is also stored in a CBI sector. The difference is that in the earlier embodiment, the CBI sector is recorded in a CBI sector block and not in the update block itself.
- FIGS. 38A, 38B and 38 C respectively illustrate the state of an update block also storing CBI sectors at three different stages.
- FIG. 38A illustrates an update block up to the point when a CBI sector is recorded in it after a predetermined number of writes.
- after the host has written logical sectors 0 - 3 sequentially, it then issues a command to write another version of logical sector 1 , thus breaking the contiguous sequence of data writes.
- the update block is then converted to a chaotic update block with the implementation of a chaotic block indexing carried in a CBI sector.
- the CBI is an index that contains indices for all logical sectors of the chaotic block.
- the 0 th entry indicates the offset in the update block for the 0 th logical sector and similarly, the nth entry indicates the offset for the nth logical sector.
- the CBI sector is written to the next available location in the update block. To avoid frequent flash access, the CBI sector is written after every N data sector writes. In this example, N is 4. If the power is lost at this point, the last written sector will be the CBI sector and the block will be regarded as a chaotic update block.
- FIG. 38B illustrates the update block of FIG. 38A to have logical sectors 1 , 2 and 4 further recorded to it after the index sector.
- the newer versions of logical sectors 1 and 2 supersede the older versions previously recorded in the update block.
- the last written sector needs to be found first, and then up to N sectors need to be scanned in order to find the last written index sector and the recently written data sectors.
- FIG. 38C illustrates the update block of FIG. 38B having another logical sector written to trigger the next recording of an index sector.
- after another N (N = 4) sector writes, the same update block records another current version of the CBI sector.
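- The "CBI sector after every N data sector writes" rule of FIGS. 38A-38C can be sketched as follows; the update block is modeled as a growing list, the index as a dictionary, and the record formats are assumptions for illustration only.

```python
# Sketch of writing a CBI sector after every N data sector writes (N = 4 in the
# example of FIGS. 38A-38C). Record formats are illustrative assumptions.

N = 4

def write_sector(update_block, cbi_index, logical_sector, data):
    cbi_index[logical_sector] = len(update_block)
    update_block.append(("data", logical_sector, data))
    # Count the data sectors written since the last CBI sector in the block.
    since_last_cbi = 0
    for entry in reversed(update_block):
        if entry[0] == "cbi":
            break
        since_last_cbi += 1
    if since_last_cbi >= N:
        update_block.append(("cbi", dict(cbi_index)))   # snapshot of the index
```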
- an index of logical units recorded in a block is stored in nonvolatile memory after every N writes, and current information about the logical units of the intermediate writes is stored in an overhead portion of each logical unit written.
- information about the logical units written since last index update can be obtained quickly from the overhead portion of the last written logical unit in the block, without having to scan the block.
- FIG. 39A illustrates an intermediate index for the intermediate writes being stored in a header of each data sector in the chaotic update block
- FIG. 39B illustrates an example of storing an intermediate index for the intermediate writes in the header of each sector written.
- a CBI index is written as the next sector in the block.
- logical sectors LS′ 1 , LS′ 2 and LS 4 are written to the block.
- the header will store an intermediate index for the logical units written since the last CBI index.
- the header in LS′ 2 will have an index giving the offset (i.e., location) of the last CBI index as well as that of LS′ 1 .
- the header in LS 4 will have an index giving the offset of the last CBI index as well as that of LS′ 1 and LS′ 2 .
- the last written data sector always contains information about up to N last written pages (i.e., up to the last written CBI sector). Whenever there is a power restart, the last CBI index provides indexing information for logical units written prior to the CBI index sector, and indexing information for subsequent logical units written is found in the header of the last written data sector. This has the advantage of eliminating, on initialization, the need to scan the block for those subsequently written sectors to determine their locations.
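- The recovery path just described may be sketched as below; the block is modeled as a list of dictionaries and the header field names are assumptions, but the flow mirrors the text: take the last CBI snapshot referenced by the last written data sector's header, then overlay that header's intermediate index.

```python
# Sketch of index recovery after a power restart. The dictionary layout and
# header field names are illustrative assumptions.

def rebuild_index_after_restart(update_block):
    last = update_block[-1]          # last written sector, located by the usual
                                     # search for the last programmed location
    if last["kind"] == "cbi":
        return dict(last["index"])   # the CBI sector itself was the last write
    # Start from the CBI snapshot referenced by the last data sector's header,
    # then overlay the header's intermediate index and the sector itself.
    index = dict(update_block[last["header"]["last_cbi_offset"]]["index"])
    index.update(last["header"]["intermediate_index"])
    index[last["logical_sector"]] = len(update_block) - 1
    return index
```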
- the scheme of storing intermediate index information in the header of the data sector is equally applicable whether the CBI index sector is stored in the update block itself or in a separate CBI sector block as described in an earlier section.
- the entire CBI index is stored in the overhead portion of each data sector in a chaotic update block.
- FIG. 40 illustrates the information in the chaotic index field being stored in a header of each data sector in the chaotic update block.
- the information capacity of a sector header is limited, and so the range of indices provided by any single sector may be designed to be part of a hierarchical indexing scheme. For example, sectors within a specific plane of the memory may provide indexing to only sectors within that plane. Also, the range of logical addresses may be divided into a number of sub-ranges, to allow an indirect indexing scheme to be adopted. For example, if sectors with 64 logical addresses may be stored in a plane, each sector may have 3 fields for sector offset values, each field able to store 4 offset values. The first field defines the physical offset for the last written sector within the logical offset ranges 0-15, 16-31, 32-47, and 48-63.
- the second field defines physical offset values for 4 sub-ranges of 4 sectors each, within the range to which it relates.
- the third field defines physical offset values for 4 sectors within the sub-range to which it relates.
- the physical offset of a logical sector within the chaotic update block may therefore be determined by reading indirect offset values from up to 3 sectors.
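- The indirect lookup can be sketched as follows, under the example figures above (64 logical addresses per plane, 3 header fields of 4 offsets each); all identifiers are hypothetical.

```c
#include <stdint.h>

typedef struct {
    uint8_t range16[4]; /* last-written offsets for logical ranges 0-15, 16-31, 32-47, 48-63 */
    uint8_t range4[4];  /* last-written offsets for the four 4-sector sub-ranges of this sector's range */
    uint8_t direct[4];  /* physical offsets of the four sectors of this sector's sub-range */
} indirect_index_t;

/* Assumed driver primitive that reads the three index fields from a sector header. */
extern int read_index_fields(int block, int phys_offset, indirect_index_t *idx);

/* Resolve a logical offset (0..63) to its physical offset, reading at most 3 headers. */
int lookup_physical(int block, int last_written_phys, uint8_t logical)
{
    indirect_index_t idx;

    if (read_index_fields(block, last_written_phys, &idx) != 0) return -1;
    int p1 = idx.range16[logical / 16];       /* sector that last wrote in this 16-sector range */

    if (read_index_fields(block, p1, &idx) != 0) return -1;
    int p2 = idx.range4[(logical % 16) / 4];  /* sector that last wrote in this 4-sector sub-range */

    if (read_index_fields(block, p2, &idx) != 0) return -1;
    return idx.direct[logical % 4];           /* physical offset of the wanted sector */
}
```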
- the logical range of sectors which can be written non-sequentially is reduced.
- sequential data can be copied from source to destination without data transfers to and from the controller, using the on-chip copy feature. If the source data is scattered, as happens in chaotic blocks, up to one page read per sector may be required in order to collect all the sectors to be written to the destination.
- the Chaotic metablock can be used for writing sectors of up to 4 sub-groups (any of them).
- the logical group is allowed to have up to 4 partially or fully chaotic sub-groups, and a minimum of 4 sub-groups will remain fully sequential.
- if this limit would be exceeded, the chaotic logical group should be consolidated and closed. This is very unlikely to happen, however, since in real applications hosts do not need more than 4 chaotic ranges of 256 sectors (sub-groups) within a range of 2048 sectors (logical group). As a result, garbage collection time is unaffected in the normal case, while the limiting rule guards against the extreme case of an excessively long garbage collection that might trigger a host time-out.
- ECC (error correction code)
- Example of control data are the directory information and block allocation information associated with the memory block management system, such as those described in connection with FIG. 20 .
- the control data is maintained in both high speed RAM and the slower nonvolatile memory blocks. Any frequently changing control data is maintained in RAM with periodic control writes to update equivalent information stored in a nonvolatile metablock. In this way, the control data is stored in nonvolatile, but slower flash memory without the need for frequent access.
- a hierarchy of control data structures such as GAT, CBI, MAP, and MAPA shown in FIG. 20 is maintained in flash memory. Thus, a control write operation causes information from control data structures in RAM to update equivalent control data structures in flash memory.
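- A minimal sketch of this write-through policy is shown below; the period, structure layout and flush_to_flash() hook are illustrative assumptions, not the actual control-write mechanism.

```c
#include <stdbool.h>
#include <string.h>

#define CONTROL_WRITE_PERIOD 8   /* RAM updates allowed between flash control writes (assumed) */

typedef struct {
    unsigned char data[512];     /* e.g., an ABL/CBI-style structure held in controller RAM */
    int  pending_updates;        /* changes made since the last control write               */
    bool dirty;
} ram_control_t;

/* Assumed hook that writes the structure to its nonvolatile control block. */
extern int flush_to_flash(const unsigned char *data, int len);

/* Apply an update in RAM; write through to flash only every few updates. */
int update_control(ram_control_t *ctl, int offset, const void *src, int len)
{
    memcpy(ctl->data + offset, src, len);
    ctl->dirty = true;

    if (++ctl->pending_updates >= CONTROL_WRITE_PERIOD) {
        if (flush_to_flash(ctl->data, sizeof ctl->data) != 0)
            return -1;
        ctl->pending_updates = 0;
        ctl->dirty = false;
    }
    return 0;
}
```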
- critical data such as some or all of the control data is guaranteed an extra level of reliability if it is maintained in duplicates.
- the duplication is performed in a manner such that, for a multi-state memory system employing a two-pass programming technique for successively programming the multi-bits of the same set of memory cells, any programming error in the second pass will not corrupt the data established by the first pass.
- the duplication also helps with detection of write aborts, detection of misdetection (i.e., both copies have good ECC but data are different), and adds an extra level of reliability.
- Several techniques of data duplication are contemplated.
- a subsequent programming pass avoids programming the memory cells storing at least one of the two copies. In this way, at least one of the two copies will be unaffected in the event the subsequent programming pass aborts before completion and corrupts the data of the earlier pass.
- the two copies of a given data are stored in two different blocks where at most one of the two copies has its memory cells programmed in a subsequent programming pass.
- the two copies of a given data are programmed into a multi-state memory in a binary programming mode, so that no further programming of the programmed memory cells will take place.
- a fault-tolerant code is employed to code the multiple memory states such that data established by an earlier programming pass is insensitive to errors in a subsequent programming pass.
- a complication of data duplication arises in multi-state memory in which each memory cell stores more than one bit of data.
- a 4-state memory can be represented by two bits.
- One existing technique is to use a 2-pass programming to program such memory.
- a first bit (lower page bit) is programmed by a first pass.
- the same cell is programmed in a second pass to represent a desired second bit (upper page bit).
- the memory state representation of the first bit is made to depend on the value of the second bit. Therefore, during the programming of the second bit, if an error occurs due to power interruption or other causes and results in an incorrect memory state, the value of the first bit could also be corrupted.
- FIG. 41A illustrates threshold voltage distributions of a 4-state memory array when each memory cell stores two bits of data.
- the four distributions represent the population of the four memory states, “U”, “X”, “Y” and “Z”. Before a memory cell is programmed it is first erased into its “U” or “unwritten” state. The memory states “X”, “Y” and “Z” are progressively reached as the memory cell gets increasingly programmed.
- FIG. 41B illustrates an existing, 2-pass programming scheme using a Gray code.
- the four states can be represented by two bits, a lower page bit and an upper page bit, such as (Upper page bit, Lower page bit).
- a first programming pass only programs the logical lower page.
- a subsequent, second programming pass on the same page of cells will program the logical upper page without resetting the logical lower page.
- a commonly used code is the Gray code where only one bit changes when transiting to an adjacent state. Therefore, this code has the advantage of placing less demand on error correction since only one bit is involved.
- a general scheme in using the Gray code is to let “1” represent a “not program” condition.
- any cell to store the data “0” will therefore have its logical state transit from (x, 1) to (x, 0), where “x” represents the “don't care” value of the upper bit.
- “x” may as well be labeled by “1” for consistency.
- the (1, 0) logical state is represented by programming the cell to the memory state “X”. That is, prior to a second program pass, the lower bit value of “0” is represented by the memory state “X”.
- a second pass programming is performed to store the bits of the logical upper page. Only those cells needing an upper page bit value of “0” will be programmed.
- the cells in the page are either in the logical state (1, 1) or (1, 0).
- in doing so, the lower bit value of “0” or “1” needs to be distinguished.
- if the lower bit is “0” (the cell is already at memory state “X”), the memory cell in question is programmed to the memory state “Y”.
- if the lower bit is “1” (the cell is still at the erased state “U”), the memory cell in question is programmed to the memory state “Z”. In this way, during read, by determining the memory state programmed in a cell, both the lower page bit and the upper page bit can be decoded.
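- The 2-pass Gray-code mapping described above can be modeled as follows; this is an illustrative model of the state assignments, not device firmware.

```c
/* States U, X, Y, Z encode (upper, lower) = (1,1), (1,0), (0,0), (0,1) respectively. */
typedef enum { STATE_U, STATE_X, STATE_Y, STATE_Z } cell_state_t;

/* First pass: program the lower page bit into an erased cell. */
cell_state_t program_lower(int lower_bit)
{
    return lower_bit ? STATE_U : STATE_X;   /* "1" = leave erased, "0" = program to "X" */
}

/* Second pass: program the upper page bit without resetting the lower bit. */
cell_state_t program_upper(cell_state_t current, int upper_bit)
{
    if (upper_bit)                          /* "1" means "do not program" */
        return current;
    return (current == STATE_X) ? STATE_Y   /* lower bit was 0 -> (0,0) */
                                : STATE_Z;  /* lower bit was 1 -> (0,1) */
}

/* Read back both logical bits from the programmed state. */
void decode(cell_state_t s, int *upper_bit, int *lower_bit)
{
    switch (s) {
    case STATE_U: *upper_bit = 1; *lower_bit = 1; break;
    case STATE_X: *upper_bit = 1; *lower_bit = 0; break;
    case STATE_Y: *upper_bit = 0; *lower_bit = 0; break;
    case STATE_Z: *upper_bit = 0; *lower_bit = 1; break;
    }
}
```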
- the Gray-code, 2-pass programming scheme can become a problem when the second-pass programming is erroneous.
- the programming of the upper page bit to “0” while the lower bit is at “1” will cause the transition from (1, 1) to (0, 1).
- the memory cell may end up in one of the intermediate (transit) memory states, say “X”.
- “X” will be decoded as the logical state (1, 0). This gives incorrect results for both the upper and lower bits, since it should have been (0, 1).
- if the programming is interrupted when “Y” is reached, it would correspond to (0, 0). While the upper bit is now correct, the lower bit is still wrong.
- FIG. 42 illustrates a way of safe guarding critical data by saving each sector in duplicates. For example, the sectors A, B, C, and D are saved in duplicate copies. If there is corruption of the data in one sector copy, then the other one can be read instead.
- FIG. 43 illustrates the non-robustness in which duplicate sectors are typically saved in a multi-state memory.
- the multi-state page actually includes a logical lower page and a logical upper page that are respectively programmed in two passes.
- the page is four sectors wide.
- sector A and its duplicate will be concurrently programmed in the logical lower page and similarly for sector B and its duplicate.
- the sectors C, C will be concurrently programmed and likewise for the sectors D, D. If a program abort occurs in the middle of the programming of sectors C, C, the sectors A, A, in the lower page will be corrupted.
- unless the lower page sectors are first read and buffered prior to the upper page programming, they may be unrecoverable if corrupted. Thus, saving two copies of critical data concurrently, such as sectors A, A, cannot prevent both of them from being corrupted by a subsequent problematic saving of sectors C, C in its upper page.
- FIG. 44A illustrates one embodiment of saving staggered duplicate copies of critical data to a multi-state memory.
- the lower page is saved in the same manner as in FIG. 43 , namely, sectors A, A and sectors B, B.
- the sectors C and D are interleaved with their duplicates as C, D, C, D.
- the two copies of sector C can be programmed concurrently and likewise for the two copies of sector D.
- if the programming of, say, the two sectors C gets aborted, it can corrupt the lower page, but only one copy of sector A and one copy of sector B. The other copies remain unaffected.
- if two copies of critical data are stored in a first pass, they should not be subject to a subsequent second-pass programming simultaneously.
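- The staggered layout of FIG. 44A can be sketched as below for a page four sectors wide; the sector labels and layout arrays are illustrative only.

```c
#include <stdio.h>

#define PAGE_WIDTH 4

/* Lower (first-pass) page: duplicates side by side, as in FIG. 43/44A. */
static const char lower_page[PAGE_WIDTH] = { 'A', 'A', 'B', 'B' };

/* Upper (second-pass) page: duplicates interleaved (staggered). */
static const char upper_page[PAGE_WIDTH] = { 'C', 'D', 'C', 'D' };

int main(void)
{
    /* Show which lower-page copy sits under each upper-page sector. */
    for (int i = 0; i < PAGE_WIDTH; i++)
        printf("column %d: upper %c over lower %c\n", i, upper_page[i], lower_page[i]);
    /*
     * If the write of the two 'C' copies (columns 0 and 2) aborts, only one
     * copy of 'A' (column 0) and one copy of 'B' (column 2) can be corrupted;
     * the copies in columns 1 and 3 survive.
     */
    return 0;
}
```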
- FIG. 44B illustrates another embodiment of saving duplicate copies of critical data only to the logical upper page of a multi-state memory.
- the data in the lower page is not used.
- the critical data and its duplicates, such as sectors A, A and sectors B, B are saved to the logical upper page only. In this way, if there is a program abort, the critical data could be rewritten in another logical upper page, while any corruption to the lower page data would be immaterial.
- This solution basically uses half the storage capacity of each multi-state page.
- FIG. 44C illustrates yet another embodiment of saving duplicate copies of critical data in binary mode of a multi-state memory.
- each memory cell is programmed in a binary mode where its threshold range is only divided into two regions.
- This solution also uses half the storage capacity of each multi-state page. Operating a multi-state memory in binary mode is described in U.S. Pat. No. 6,456,528 B1, the entire disclosure of which is incorporated herein by reference.
- FIG. 45 illustrates yet another embodiment of saving duplicate copies of critical data concurrently to two different metablocks. If one of the blocks becomes unavailable, the data can be read from the other one.
- the critical data are contained in sectors A, B, C, D and E, F, G, H, and I, J, K, L. Each sector is saved in duplicates.
- the two copies will be written concurrently to two different blocks, Block 0 and Block 1. If one copy is written to a logical lower page, the other copy will be written to a logical upper page. In this way, there will always be a copy programmed to a logical upper page. If a program abort occurs, it can be reprogrammed to another logical upper page. At the same time, if the lower page is corrupted, there will always be another upper page copy in the other block.
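- A sketch of such a dual-block duplicate write follows; the program_sector() primitive and its parameters are assumed for illustration.

```c
#include <stdint.h>

typedef enum { LOWER_PAGE, UPPER_PAGE } page_type_t;

/* Assumed flash-driver primitive. */
extern int program_sector(int block, int page, page_type_t type, const uint8_t *data);

/*
 * Write the two copies of a critical sector concurrently to two different
 * metablocks, one copy on a logical lower page and the other on a logical
 * upper page, so at least one copy always sits on an upper page.
 */
int write_critical_duplicate(int block0, int block1, int page,
                             const uint8_t *sector, int parity)
{
    page_type_t t0 = parity ? LOWER_PAGE : UPPER_PAGE;
    page_type_t t1 = parity ? UPPER_PAGE : LOWER_PAGE;

    int r0 = program_sector(block0, page, t0, sector);
    int r1 = program_sector(block1, page, t1, sector);
    return (r0 == 0 && r1 == 0) ? 0 : -1;
}
```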
- FIG. 46B illustrates yet another embodiment of saving duplicate copies of critical data concurrently by the use of a fault-tolerant code.
- FIG. 46A is similar to FIG. 41A in illustrating threshold voltage distributions of a 4-state memory array and is shown as a reference for FIG. 46B .
- the fault-tolerant code essentially avoids any upper page programming to transit through any intermediate states.
- in the first-pass lower page programming, the logical state (1, 1) transits to (1, 0), as represented by programming the erased memory state “U” to “Y”.
- in the second-pass programming of the upper page bit to “0”, if the lower page bit is at “1”, the logical state (1, 1) transits to (0, 1), as represented by programming the erased memory state “U” to “X”.
- the duplicate copies of critical data are preferably written concurrently as described above. Another way to avoid corrupting both copies at the same time is to write the copies sequentially. This method is slower, but the copies themselves indicate whether their programming was successful when the controller checks both copies:
- FIG. 47 is a table showing the possible states of the two copies of data, and the validity of the data.
- if the first copy has no ECC error but the second has an ECC error, it could mean that the programming was interrupted in the middle of programming the second copy.
- the first copy contains valid data.
- the second copy data is not trustworthy even if the error is correctable.
- if the first copy has no ECC error and the second is empty (erased), it could mean that the programming was interrupted after the end of the first copy programming but before the commencement of the second copy.
- the first copy contains valid data.
- if the first copy has an ECC error and the second is empty (erased), it could mean that the programming was interrupted in the middle of the first copy programming.
- the first copy may contain invalid data even if error is correctable.
- the copy read preferably alternates between the two copies.
- when the controller reads control data, if it reads, say, Copy 1, then the next control read (any control read) should be from Copy 2, then Copy 1 again, and so on.
- ECC-checked integrity:
- the two copies will be read and checked for integrity (ECC checked) regularly. This reduces the risk of not detecting, in time, errors caused by deteriorating data retention. For example, if only Copy 1 is normally read, then Copy 2 may gradually deteriorate to the level where the error can no longer be salvaged by ECC, and the second copy can no longer be used.
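- The copy-selection rules and the alternating-read policy above can be sketched as follows; the status codes and helper names are assumptions, and combinations not listed in the description are treated conservatively.

```c
typedef enum { ECC_OK, ECC_CORRECTABLE, ECC_UNCORRECTABLE, ERASED } copy_status_t;
typedef enum { USE_COPY1, USE_COPY2, DATA_INVALID } verdict_t;

verdict_t pick_copy(copy_status_t c1, copy_status_t c2)
{
    if (c1 == ECC_OK && c2 == ECC_OK)
        return USE_COPY1;    /* both good; caller may also compare contents for misdetection */
    if (c1 == ECC_OK && (c2 == ECC_CORRECTABLE || c2 == ECC_UNCORRECTABLE))
        return USE_COPY1;    /* write aborted during copy 2: copy 2 untrustworthy            */
    if (c1 == ECC_OK && c2 == ERASED)
        return USE_COPY1;    /* aborted between the two copies: copy 1 is complete           */
    if ((c1 == ECC_CORRECTABLE || c1 == ECC_UNCORRECTABLE) && c2 == ERASED)
        return DATA_INVALID; /* aborted during copy 1: do not trust it even if correctable   */
    return DATA_INVALID;     /* remaining combinations handled conservatively                */
}

/* Alternate which copy services successive control reads so both are exercised. */
int next_copy_to_read(int last_copy_read)   /* returns 1 or 2 */
{
    return (last_copy_read == 1) ? 2 : 1;
}
```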
- the block management system maintains a set of control data in flash memory during its operation.
- This set of control data is stored in the metablocks similar to host data.
- the control data itself will be block managed and will be subject to updates and therefore garbage collection operations.
- Every N CBI updates fill up the CBI block and trigger a CBI block relocation (rewrite) and a MAP update. If the chaotic block gets closed, it also triggers a GAT update. Every GAT update triggers a MAP update. Every N GAT updates fill up the GAT block and trigger a GAT block relocation.
- when a MAP block gets full, it also triggers a MAP block relocation and a MAPA block update (if the MAPA block exists; otherwise, the BOOT block points directly to the MAP block).
- when a MAPA block gets full, it also triggers a MAPA block relocation, a BOOT block update and a MAP update.
- when a BOOT block gets full, it triggers relocation of the active BOOT block to another BOOT block.
- each control data block of the hierarchy has its own periodicity in terms of getting filled and being relocated. If each proceeds normally, there will be times when the phases of a large number of the blocks will line up and trigger a massive relocation or garbage collection involving all those blocks at the same time. Relocation of many control blocks will take a long time and should be avoided as some hosts do not tolerate long delays caused by such massive control operations.
- a “control garbage collection” or preemptive relocation of a memory block is implemented to avoid the situation where a large number of update blocks all happen to need relocation concurrently. For example, this situation can happen when updating control data used for controlling the operation of the block management system.
- a hierarchy of control data type can exist with varying degree of update frequencies, resulting in their associated update blocks requiring garbage collection or relocation at different rates. There will be certain times that the garbage collection operations of more than one control data types coincide. In the extreme situation, the relocation phases of the update blocks for all control data types could line up, resulting in all of the update blocks requiring relocation at the same time.
- the margin in advance of the block absolutely having to be relocated is a predetermined number of unwritten memory units before the block is totally full. Consideration is given to the margin being sufficient to precipitate relocation in advance of the block being totally filled, but without being unduly premature, resulting in wasted resources.
- the predetermined number of unwritten memory units is between one and six memory units.
- FIG. 48 illustrates a flow chart of preemptive relocation of memory block storing control data.
- preemptive relocations are normally done when there is no user data garbage collection taking place.
- preemptive relocation can be performed one control block at a time.
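- A sketch of such a preemptive relocation check is given below, assuming a margin of a few unwritten sectors; the control-block descriptor and relocation hook are hypothetical.

```c
#include <stdbool.h>

#define PREEMPTIVE_MARGIN 3   /* between one and six unwritten memory units (assumed value) */

typedef struct {
    int total_sectors;        /* capacity of the control block */
    int written_sectors;      /* sectors already programmed    */
} control_block_t;

extern void relocate_control_block(control_block_t *blk);
extern bool user_garbage_collection_pending(void);

/* Call once per host write, after any user-data garbage collection decision. */
void maybe_preempt_relocation(control_block_t *blocks, int nblocks)
{
    if (user_garbage_collection_pending())
        return;                                 /* only when no user-data garbage collection */

    for (int i = 0; i < nblocks; i++) {
        int unwritten = blocks[i].total_sectors - blocks[i].written_sectors;
        if (unwritten <= PREEMPTIVE_MARGIN) {
            relocate_control_block(&blocks[i]); /* one control block at a time */
            return;
        }
    }
}
```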
Abstract
Description
- This application is a continuation-in-part of U.S. patent application Ser. No. 10/750,155, filed on Dec. 30, 2003.
- This invention relates generally to non-volatile semiconductor memory and specifically to those having a memory block management system with efficient handling of update data in a block.
- Solid-state memory capable of nonvolatile storage of charge, particularly in the form of EEPROM and flash EEPROM packaged as a small form factor card, has recently become the storage of choice in a variety of mobile and handheld devices, notably information appliances and consumer electronics products. Unlike RAM (random access memory) that is also solid-state memory, flash memory is non-volatile and retains its stored data even after power is turned off. Also, unlike ROM (read only memory), flash memory is rewritable similar to a disk storage device. In spite of the higher cost, flash memory is increasingly being used in mass storage applications. Conventional mass storage, based on rotating magnetic medium such as hard drives and floppy disks, is unsuitable for the mobile and handheld environment. This is because disk drives tend to be bulky, are prone to mechanical failure and have high latency and high power requirements. These undesirable attributes make disk-based storage impractical in most mobile and portable applications. On the other hand, flash memory, both embedded and in the form of a removable card, is ideally suited in the mobile and handheld environment because of its small size, low power consumption, high speed and high reliability features.
- Flash EEPROM is similar to EEPROM (electrically erasable and programmable read-only memory) in that it is a non-volatile memory that can be erased and have new data written or “programmed” into their memory cells. Both utilize a floating (unconnected) conductive gate, in a field effect transistor structure, positioned over a channel region in a semiconductor substrate, between source and drain regions. A control gate is then provided over the floating gate. The threshold voltage characteristic of the transistor is controlled by the amount of charge that is retained on the floating gate. That is, for a given level of charge on the floating gate, there is a corresponding voltage (threshold) that must be applied to the control gate before the transistor is turned “on” to permit conduction between its source and drain regions. In particular, flash memory such as Flash EEPROM allows entire blocks of memory cells to be erased at the same time.
- The floating gate can hold a range of charges and therefore can be programmed to any threshold voltage level within a threshold voltage window. The size of the threshold voltage window is delimited by the minimum and maximum threshold levels of the device, which in turn correspond to the range of the charges that can be programmed onto the floating gate. The threshold window generally depends on the memory device's characteristics, operating conditions and history. Each distinct, resolvable threshold voltage level range within the window may, in principle, be used to designate a definite memory state of the cell.
- The transistor serving as a memory cell is typically programmed to a “programmed” state by one of two mechanisms. In “hot electron injection,” a high voltage applied to the drain accelerates electrons across the substrate channel region. At the same time a high voltage applied to the control gate pulls the hot electrons through a thin gate dielectric onto the floating gate. In “tunneling injection,” a high voltage is applied to the control gate relative to the substrate. In this way, electrons are pulled from the substrate to the intervening floating gate. While the term “program” has been used historically to describe writing to a memory by injecting electrons to an initially erased charge storage unit of the memory cell so as to alter the memory state, it has now been used interchangeably with more common terms such as “write” or “record.”
- The memory device may be erased by a number of mechanisms. For EEPROM, a memory cell is electrically erasable, by applying a high voltage to the substrate relative to the control gate so as to induce electrons in the floating gate to tunnel through a thin oxide to the substrate channel region (i.e., Fowler-Nordheim tunneling.) Typically, the EEPROM is erasable byte by byte. For flash EEPROM, the memory is electrically erasable either all at once or one or more minimum erasable blocks at a time, where a minimum erasable block may consist of one or more sectors and each sector may store 512 bytes or more of data.
- The memory device typically comprises one or more memory chips that may be mounted on a card. Each memory chip comprises an array of memory cells supported by peripheral circuits such as decoders and erase, write and read circuits. The more sophisticated memory devices also come with a controller that performs intelligent and higher level memory operations and interfacing.
- There are many commercially successful non-volatile solid-state memory devices being used today. These memory devices may be flash EEPROM or may employ other types of nonvolatile memory cells. Examples of flash memory and systems and methods of manufacturing them are given in U.S. Pat. Nos. 5,070,032, 5,095,344, 5,315,541, 5,343,063, and 5,661,053, 5,313,421 and 6,222,762. In particular, flash memory devices with NAND string structures are described in U.S. Pat. Nos. 5,570,315, 5,903,495, 6,046,935. Also nonvolatile memory devices are also manufactured from memory cells with a dielectric layer for storing charge. Instead of the conductive floating gate elements described earlier, a dielectric layer is used. Such memory devices utilizing dielectric storage element have been described by Eitan et al., “NROM: A Novel Localized Trapping, 2-Bit Nonvolatile Memory Cell,” IEEE Electron Device Letters, vol. 21, no. 11, November 2000, pp. 543-545. An ONO dielectric layer extends across the channel between source and drain diffusions. The charge for one data bit is localized in the dielectric layer adjacent to the drain, and the charge for the other data bit is localized in the dielectric layer adjacent to the source. For example, U.S. Pat. Nos. 5,768,192 and 6,011,725 disclose a nonvolatile memory cell having a trapping dielectric sandwiched between two silicon dioxide layers. Multi-state data storage is implemented by separately reading the binary states of the spatially separated charge storage regions within the dielectric.
- In order to improve read and program performance, multiple charge storage elements or memory transistors in an array are read or programmed in parallel. Thus, a “page” of memory elements are read or programmed together. In existing memory architectures, a row typically contains several interleaved pages or it may constitute one page. All memory elements of a page will be read or programmed together.
- In flash memory systems, erase operation may take as much as an order of magnitude longer than read and program operations. Thus, it is desirable to have the erase block of substantial size. In this way, the erase time is amortized over a large aggregate of memory cells.
- The nature of flash memory predicates that data must be written to an erased memory location. If data of a certain logical address from a host is to be updated, one way is to rewrite the update data in the same physical memory location. That is, the logical to physical address mapping is unchanged. However, this will mean the entire erase block containing that physical location will have to be first erased and then rewritten with the updated data. This method of update is inefficient, as it requires an entire erase block to be erased and rewritten, especially if the data to be updated only occupies a small portion of the erase block. It will also result in a higher frequency of erase recycling of the memory block, which is undesirable in view of the limited endurance of this type of memory device.
- Another problem with managing a flash memory system has to do with system control and directory data. The data is produced and accessed during the course of various memory operations. Thus, its efficient handling and ready access will directly impact performance. It would be desirable to maintain this type of data in flash memory because flash memory is meant for storage and is nonvolatile. However, with an intervening file management system between the controller and the flash memory, the data cannot be accessed as directly. Also, system control and directory data tends to be active and fragmented, which is not conducive to storing in a system with large size block erase. Conventionally, this type of data is set up in the controller RAM, thereby allowing direct access by the controller. After the memory device is powered up, a process of initialization enables the flash memory to be scanned in order to compile the necessary system control and directory information to be placed in the controller RAM. This process takes time and requires controller RAM capacity, all the more so with ever increasing flash memory capacity.
- U.S. Pat. No. 6,567,307 discloses a method of dealing with sector updates among large erase blocks, including recording the update data in multiple erase blocks acting as scratch pad and eventually consolidating the valid sectors among the various blocks and rewriting the sectors after rearranging them in logically sequential order. In this way, a block need not be erased and rewritten at every slightest update.
- WO 03/027828 and WO 00/49488 both disclose a memory system dealing with updates among large erase blocks, including partitioning the logical sector addresses in zones. A small zone of logical address range is reserved for active system control data separate from another zone for user data. In this way, manipulation of the system control data in its own zone will not interact with the associated user data in another zone. Updates are at the logical sector level and a write pointer points to the corresponding physical sectors in a block to be written. The mapping information is buffered in RAM and eventually stored in a sector allocation table in the main memory. The latest version of a logical sector will obsolete all previous versions among existing blocks, which become partially obsolete. Garbage collection is performed to keep partially obsolete blocks to an acceptable number.
- Prior art systems tend to have the update data distributed over many blocks or the update data may render many existing blocks partially obsolete. The result often is a large amount of garbage collection necessary for the partially obsolete blocks, which is inefficient and causes premature aging of the memory. Also, there is no systematic and efficient way of dealing with sequential update as compared to non-sequential update.
- Therefore there is a general need for high capacity and high performance non-volatile memory. In particular, there is a need to have a high capacity nonvolatile memory able to conduct memory operations in large blocks without the aforementioned problems.
- A non-volatile memory system is organized in physical groups of physical memory locations. Each physical group (metablock) is erasable as a unit and can be used to store a logical group of data. A memory management system allows for update of a logical group of data by allocating a metablock dedicated to recording the update data of the logical group. The update metablock records update data in the order received and has no restriction on whether the recording is in the correct logical order as originally stored (sequential) or not (chaotic). Eventually the update metablock is closed to further recording. One of several processes will take place, but will ultimately end up with a fully filled metablock in the correct order which replaces the original metablock. In the chaotic case, directory data is maintained in the non-volatile memory in a manner that is conducive to frequent updates. The system supports multiple logical groups being updated concurrently.
- One feature of the invention allows data to be updated logical-group by logical-group. Thus, when a logical group is being updated, the distribution of logical units (and also the scatter of memory units that the updates obsolete) are limited in range. This is especially true when the logical group is normally contained within a physical block.
- During updates of the logical group, typically one or two blocks need be assigned to buffer the updated logical units. Thus, garbage collection need only be performed over a relatively fewer number of blocks. Garbage collection of a chaotic block may be performed by either consolidation or compaction.
- The economy of the update process is further evident in the generic treatment of the update blocks so that no additional block need be assigned for chaotic (non-sequential) updates as compared to the sequential ones. All update blocks are allocated as sequential update blocks, and any update block can change to a chaotic update block. Indeed, the change of an update block from sequential to chaotic is discretionary.
- The efficient use of system resource allows multiple logical groups to be updated concurrently. This further increases efficiency and reduces overheads.
- Alignment for Memory Distributed Over Multiple Memory Planes
- According to another aspect of the invention, for a memory array organized into erasable blocks and constituted from multiple memory planes so that logical units can be read or programmed into the multiple planes in parallel, when an original logical unit of a first block stored in a given memory plane is to be updated, provisions are made to keep the updated logical unit in the same plane as the original. This is accomplished by recording the updated logical unit to the next available location of a second block but in the same plane. Preferably, the logical unit is stored with an identical offset position in the plane as its other versions so that all versions of a given logical unit are serviced by an identical set of sensing circuits.
- In a preferred embodiment, any intervening gaps from the last programmed memory unit to the next available plane-aligned memory unit are padded accordingly with the current versions of logical units. The padding is accomplished by filling in a gap with current versions of the logical units that follow logically from the last programmed logical unit, and with current versions of the logical units that precede logically from the logical unit stored in the next available plane-aligned memory unit.
- In this way, all versions of a logical unit are maintained in the same plane with the same offsets as the original so that in a garbage collection operation, the latest version of the logical unit need not be retrieved from a different plane, which would result in reduced performance. In a preferred embodiment, each memory unit across the plane is either updated or padded with the latest versions. Thus a logical unit from each plane could be read out in parallel, which will be in logical sequential order without the need for further rearrangement.
- This scheme reduces the time for consolidation of a chaotic block by allowing on-plane rearrangement of the latest versions of the logical units of a logical group and avoiding having to gather the latest versions from different memory planes. This is of benefit where the performance specification for the host interface defines a maximum latency for completion of a sector write operation by the memory system.
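- For illustration, a plane-aligned write with forward padding might look like the following sketch, assuming one logical unit per memory unit and a plane index equal to the logical offset modulo the number of planes; the padding helper and the handling of logical units that precede the target are simplified, and all names are hypothetical.

```c
#define NUM_PLANES 4

/* Assumed helpers: copy the current version of a logical unit into a gap, and
 * program new data into a memory unit of the update block. */
extern void copy_current_version(int logical_unit, int dest_unit);
extern void program_unit(int dest_unit, const void *new_data);

/*
 * next_free is the next unwritten memory unit of the update block (its plane is
 * next_free % NUM_PLANES); last_logical is the last logical unit programmed.
 * Returns the memory unit actually used for the update.
 */
int write_plane_aligned(int next_free, int last_logical,
                        int logical_unit, const void *new_data)
{
    int target_plane = logical_unit % NUM_PLANES;

    /* Pad the gap with current versions of the logical units that logically
     * follow the last one programmed, so each plane keeps the latest version
     * at its aligned offset. */
    while (next_free % NUM_PLANES != target_plane) {
        last_logical++;
        copy_current_version(last_logical, next_free);
        next_free++;
    }

    program_unit(next_free, new_data);
    return next_free;
}
```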
- Phased Program Error Handling
- According to another aspect of the invention, in a memory with block management system, program failure in a block during a time-critical memory operation is handled by continuing the programming operation in a breakout block. Later, at a less critical time, the data recorded in the failed block prior to the interruption is transferred to another block, which could also be the breakout block. The failed block can then be discarded. In this way, when a defective block is encountered, it can be handled without loss of data and without exceeding a specified time limit by having to transfer the stored data in the defective block on the spot. This error handling is especially critical for a garbage collection operation so that the entire operation need not be repeated on a fresh block during a critical time. Subsequently, at an opportune time, the data from the defective block can be salvaged by relocation to another block.
- The program failure handling is especially critical during a consolidation operation. A normal consolidation operation consolidates into a consolidation block the current versions of all logical units of a logical group residing among an original block and an update block. During the consolidation operation, if a program failure occurs in the consolidation block, another block acting as a breakout consolidation block will be provisioned to receive the consolidation of the remaining logical units. In this way, no logical units need be copied more than once and the operation with exception handling can still be completed within a period specified for a normal consolidation operation. At an opportune time, the consolidation operation can be completed by consolidating all outstanding logical units of the group into the breakout block. The opportune time will be during some other period outside the current host write operation when there is time to perform the consolidation. One such opportune time is during another host write where there is an update but no associated consolidation operation.
- Essentially, the consolidation with the program failure handling can be regarded as being implemented in multiple phases. In a first phase, the logical units are being consolidated into more than one block after a program failure occurs in order to avoid consolidating each logical unit more than once. The final phase is completed at an opportune time, where the logical group is consolidated into one block, preferably by collecting all the logical units into the breakout consolidation block in sequential order.
- Non-Sequential Update Block Indexing
- According to another aspect of the invention, in a nonvolatile memory with block management system that supports update blocks with non-sequential logical units, an index of the logical units in a non-sequential update block is buffered in RAM and stored periodically into the non-volatile memory. In one embodiment, the index is stored in a block dedicated for storing indices. In another embodiment, the index is stored in the update block itself. In yet another embodiment, the index is stored in the header of each logical unit. In another aspect, the logical units written after the last index update but before the next have their indexing information stored in the header of each logical unit. In this way, after a power outage, the location of recently written logical units can be determined without having to perform a scanning during initialization. In yet another aspect, a block is managed as partially sequential and partially non-sequential, directed to more than one logical subgroup.
- Control Data Integrity & Management
- According to another aspect of the invention, critical data such as some or all of the control data is guaranteed an extra level of reliability if it is maintained in duplicates. The duplication is performed in a manner such that, for a multi-state memory system employing a two-pass programming technique for successively programming the multi-bits of the same set of memory cells, any programming error in the second pass will not corrupt the data established by the first pass. The duplication also helps with detection of write aborts, detection of misdetection (i.e., both copies have good ECC but data are different), and adds an extra level of reliability. Several techniques of data duplication are contemplated.
- In one embodiment, after two copies of a given data have been programmed in an earlier programming pass, a subsequent programming pass avoids programming the memory cells storing at least one of the two copies. In this way, at least one of the two copies will be unaffected in the event the subsequent programming pass aborts before completion and corrupts the data of the earlier pass.
- In another embodiment, the two copies of a given data are stored in two different blocks where at most one of the two copies has its memory cells programmed in a subsequent programming pass.
- In yet another embodiment, after two copies of a given data have been stored in a programming pass, no further programming will be performed on the set of memory cells storing the two copies. This is accomplished by programming the two copies in an ultimate programming pass for the set of memory cells.
- In yet another embodiment, the two copies of a given data are programmed into a multi-state memory in a binary programming mode, so that no further programming on the programmed memory cells will take place.
- In yet another embodiment, for a multi-state memory system employing a two-pass programming technique for successively programming the multi-bits of the same set of memory cells, a fault-tolerant code is employed to code the multiple memory states such that data established by an earlier programming pass is insensitive to errors in a subsequent programming pass.
- According to another aspect of the invention, in a nonvolatile memory with a block management system, a “control garbage collection” or preemptive relocation of a memory block is implemented to avoid the situation where a large number of update blocks all happen to need relocation concurrently. For example, this situation can happen when updating control data used for controlling the operation of the block management system. A hierarchy of control data type can exist with varying degree of update frequencies, resulting in their associated update blocks requiring garbage collection or relocation at different rates. There will be certain times that the garbage collection operations of more than one control data types coincide. In the extreme situation, the relocation phases of the update blocks for all control data types could line up, resulting in all of the update blocks requiring relocation at the same time.
- Additional features and advantages of the present invention will be understood from the following description of its preferred embodiments, which description should be taken in conjunction with the accompanying drawings.
-
FIG. 1 illustrates schematically the main hardware components of a memory system suitable for implementing the present invention. -
FIG. 2 illustrates the memory being organized into physical groups of sectors (or metablocks) and managed by a memory manager of the controller, according to a preferred embodiment of the invention. - FIGS. 3A(i)-3A(iii) illustrate schematically the mapping between a logical group and a metablock, according to a preferred embodiment of the present invention.
-
FIG. 3B illustrates schematically the mapping between logical groups and metablocks. -
FIG. 4 illustrates the alignment of a metablock with structures in physical memory. -
FIG. 5A illustrates metablocks being constituted from linking of minimum erase units of different planes. -
FIG. 5B illustrates one embodiment in which one minimum erase unit (MEU) is selected from each plane for linking into a metablock. -
FIG. 5C illustrates another embodiment in which more than one MEU are selected from each plane for linking into a metablock. -
FIG. 6 is a schematic block diagram of the metablock management system as implemented in the controller and flash memory. -
FIG. 7A illustrates an example of sectors in a logical group being written in sequential order to a sequential update block. -
FIG. 7B illustrates an example of sectors in a logical group being written in chaotic order to a chaotic update block. -
FIG. 8 illustrates an example of sectors in a logical group being written in sequential order to a sequential update block as a result of two separate host write operations that have a discontinuity in logical addresses. -
FIG. 9 is a flow diagram illustrating a process by the update block manager to update a logical group of data, according to a general embodiment of the invention. -
FIG. 10 is a flow diagram illustrating a process by the update block manager to update a logical group of data, according to a preferred embodiment of the invention. -
FIG. 11A is a flow diagram illustrating in more detail the consolidation process of closing a chaotic update block shown in FIG. 10. -
FIG. 11B is a flow diagram illustrating in more detail the compaction process for closing a chaotic update block shown in FIG. 10. -
FIG. 12A illustrates all possible states of a Logical Group, and the possible transitions between them under various operations. -
FIG. 12B is a table listing the possible states of a Logical Group. -
FIG. 13A illustrates all possible states of a metablock, and the possible transitions between them under various operations. A metablock is a Physical Group corresponding to a Logical Group. -
FIG. 13B is a table listing the possible states of a metablock. - FIGS. 14(A)-14(J) are state diagrams showing the effect of various operations on the state of the logical group and also on the physical metablock.
-
FIG. 15 illustrates a preferred embodiment of the structure of an allocation block list (ABL) for keeping track of opened and closed update blocks and erased blocks for allocation. -
FIG. 16A illustrates the data fields of a chaotic block index (CBI) sector. -
FIG. 16B illustrates an example of the chaotic block index (CBI) sectors being recorded in a dedicated metablock. -
FIG. 16C is a flow diagram illustrating access to the data of a logical sector of a given logical group undergoing chaotic update. -
FIG. 16D is a flow diagram illustrating access to the data of a logical sector of a given logical group undergoing chaotic update, according to an alternative embodiment in which logical group has been partitioned into subgroups. -
FIG. 16E illustrates examples of Chaotic Block Indexing (CBI) sectors and their functions for the embodiment where each logical group is partitioned into multiple subgroups. -
FIG. 17A illustrates the data fields of a group address table (GAT) sector. -
FIG. 17B illustrates an example of the group address table (GAT) sectors being recorded in a GAT block. -
FIG. 18 is a schematic block diagram illustrating the distribution and flow of the control and directory information for usage and recycling of erased blocks. -
FIG. 19 is a flow chart showing the process of logical to physical address translation. -
FIG. 20 illustrates the hierarchy of the operations performed on control data structures in the course of the operation of the memory management. -
FIG. 21 illustrates a memory array constituted from multiple memory planes. -
FIG. 22A is a flow diagram illustrating a method of update with plane alignment, according to a general implementation of the invention. -
FIG. 22B illustrates a preferred embodiment of the step of storing updates in the flow diagram shown in FIG. 22A. -
FIG. 23A illustrates an example of logical units being written in sequential order to a sequential update block without regard to plane alignment. -
FIG. 23B illustrates an example of logical units being written in non-sequential order to a chaotic update block without regard to plane alignment. -
FIG. 24A illustrates the sequential update example of FIG. 23A with plane alignment and padding, according to a preferred embodiment of the invention. -
FIG. 24B illustrates the chaotic update example of FIG. 23B with plane alignment and with no padding, according to one preferred embodiment of the invention. -
FIG. 24C illustrates the chaotic update example of FIG. 23B with plane alignment and padding, according to another preferred embodiment of the invention. -
FIG. 25 illustrates an example memory organization in which each page contains two memory units for storing two logical units, such as two logical sectors. -
FIG. 26A is similar to the memory structure of FIG. 21 except each page contains two sectors instead of one. -
FIG. 26B illustrates metablocks shown in FIG. 26A having memory units laid out in a schematic linear fashion. -
FIG. 27 illustrates an alternative scheme for plane alignment in an update block without padding logical units to be copied from one location to another. -
FIG. 28 illustrates a scheme in which when a program failure occurs in a defective block during a consolidation operation, the consolidation operation is repeated on another block. -
FIG. 29 illustrates schematically, a host write operation with a timing or write latency that allows enough time to complete a write (update) operation as well as a consolidation operation. -
FIG. 30 illustrates a flow chart of the program failure handling according to a general scheme of the invention. -
FIG. 31A illustrates one embodiment of program failure handling in which the third (final relocation) block is different from the second (breakout) block. -
FIG. 31B illustrates another embodiment of program failure handling in which the third (final relocation) block is the same as the second (breakout) block. -
FIG. 32A illustrates a flow chart of initial update operation that results in a consolidation operation. -
FIG. 32B illustrates a flow chart of the multiple phase consolidation operation, according to a preferred embodiment of the invention. -
FIG. 33 illustrates an example timing of the first and final phases of the multiple phase consolidation operation. -
FIG. 34A illustrates the case where the breakout consolidation block is not used as an update block but more as a consolidation block whose consolidation operation has been interrupted. -
FIG. 34B illustrates the third and final phase of the multiple phase consolidation started in FIG. 34A. -
FIG. 35A illustrates the case where the breakout consolidation block is maintained as an update block to receive host writes rather than as a consolidation block. -
FIG. 35B illustrates the third and final phase of the multiple phase consolidation started in FIG. 35A for the second case. -
FIG. 36A illustrates the phased program error handling method as applied to the scenario when the host write triggers the closure of an update block, and the update block is sequential. -
FIG. 36B illustrates the phased program error handling method as it can be applied to a (Partial Blocks system) in the case of an Update of Update blocks. -
FIG. 36C illustrates the phased program error handling applied to the garbage collection operation, or clean-up, in a memory block management system that does not support a logical group mapped to a metablock. -
FIG. 37 illustrates an example of a schedule of writing of a CBI sector to an associated chaotic index sector block after every N sector writes of the same logical group. -
FIG. 38A illustrates an update block up to the point when a CBI sector is recorded in it after a predetermined number of writes. -
FIG. 38B illustrates the update block of FIG. 38A to have logical sectors 1, 2 and 4 further recorded to it after the index sector. -
FIG. 38C illustrates the update block of FIG. 38B having another logical sector written to trigger the next recording of an index sector. -
FIG. 39A illustrates an intermediate index for the intermediate writes being stored in a header of each data sector in the chaotic update block -
FIG. 39B illustrates an example of storing an intermediate index for the intermediate writes in the header of each sector written. -
FIG. 40 illustrates the information in the chaotic index field being stored in a header of each data sector in the chaotic update block. -
FIG. 41A illustrates threshold voltage distributions of a 4-state memory array when each memory cell stores two bits of data. -
FIG. 41B illustrates an existing, 2-pass programming scheme using a Gray code. -
FIG. 42 illustrates a way of safe guarding critical data by saving each sector in duplicates. For example, the sectors A, B, C, and D are saved in duplicate copies. If there is corruption of the data in one sector copy, then the other one can be read instead. -
FIG. 43 illustrates the non-robustness in which duplicate sectors are typically saved in a multi-state memory. -
FIG. 44A illustrates one embodiment of saving staggered duplicate copies of critical data to a multi-state memory. -
FIG. 44B illustrates another embodiment of saving duplicate copies of critical data only to the logical upper page of a multi-state memory. -
FIG. 44C illustrates yet another embodiment of saving duplicate copies of critical data in binary mode of a multi-state memory. -
FIG. 45 illustrates yet another embodiment of saving duplicate copies of critical data concurrently to two different metablocks. -
FIG. 46A is similar to FIG. 41A in illustrating threshold voltage distributions of a 4-state memory array and is shown as a reference for FIG. 46B. -
FIG. 46B illustrates yet another embodiment of saving duplicate copies of critical data concurrently by the use of a fault-tolerant code. -
FIG. 47 is a table showing the possible states of the two copies of data, and the validity of the data. -
FIG. 48 illustrates a flow chart of preemptive relocation of memory block storing control data. -
FIG. 1 illustrates schematically the main hardware components of a memory system suitable for implementing the present invention. The memory system 20 typically operates with a host 10 through a host interface. The memory system is typically in the form of a memory card or an embedded memory system. The memory system 20 includes a memory 200 whose operations are controlled by a controller 100. The memory 200 comprises one or more arrays of non-volatile memory cells distributed over one or more integrated circuit chips. The controller 100 includes an interface 110, a processor 120, an optional coprocessor 121, ROM 122 (read-only memory), RAM 130 (random access memory) and optionally programmable nonvolatile memory 124. The interface 110 has one component interfacing the controller to a host and another component interfacing to the memory 200. Firmware stored in nonvolatile ROM 122 and/or the optional nonvolatile memory 124 provides codes for the processor 120 to implement the functions of the controller 100. Error correction codes may be processed by the processor 120 or the optional coprocessor 121. In an alternative embodiment, the controller 100 is implemented by a state machine (not shown). In yet another embodiment, the controller 100 is implemented within the host. - Logical and Physical Block Structures
-
FIG. 2 illustrates the memory being organized into physical groups of sectors (or metablocks) and managed by a memory manager of the controller, according to a preferred embodiment of the invention. The memory 200 is organized into metablocks, where each metablock is a group of physical sectors S0, . . . , SN-1 that are erasable together. - The
host 10 accesses the memory 200 when running an application under a file system or operating system. Typically, the host system addresses data in units of logical sectors where, for example, each sector may contain 512 bytes of data. Also, it is usual for the host to read or write to the memory system in units of logical clusters, each consisting of one or more logical sectors. In some host systems, an optional host-side memory manager may exist to perform lower level memory management at the host. In most cases during read or write operations, the host 10 essentially issues a command to the memory system 20 to read or write a segment containing a string of logical sectors of data with contiguous addresses. - A memory-side memory manager is implemented in the
controller 100 of the memory system 20 to manage the storage and retrieval of the data of host logical sectors among metablocks of the flash memory 200. In the preferred embodiment, the memory manager contains a number of software modules for managing erase, read and write operations of the metablocks. The memory manager also maintains system control and directory data associated with its operations among the flash memory 200 and the controller RAM 130. - FIGS. 3A(i)-3A(iii) illustrate schematically the mapping between a logical group and a metablock, according to a preferred embodiment of the present invention. The metablock of the physical memory has N physical sectors for storing N logical sectors of data of a logical group.
FIG. 3A (i) shows the data from a logical group LGi, where the logical sectors are in contiguous logical order 0, 1, . . . , N-1. FIG. 3A (ii) shows the same data being stored in the metablock in the same logical order. The metablock when stored in this manner is said to be “sequential.” In general, the metablock may have data stored in a different order, in which case the metablock is said to be “non-sequential” or “chaotic.”
FIG. 3A (iii), the metablock stores in its first location beginning with the data of logical sector k. When the last logical sector N-1 is reached, it wraps around tosector 0 and finally storing data associated with logical sector k-1 in its last physical sector. In the preferred embodiment, a page tag is used to identify any offset, such as identifying the starting logical sector address of the data stored in the first physical sector of the metablock. Two blocks will be considered to have their logical sectors stored in similar order when they only differ by a page tag. -
FIG. 3B illustrates schematically the mapping between logical groups and metablocks. Each logical group is mapped to a unique metablock, except for a small number of logical groups in which data is currently being updated. After a logical group has been updated, it may be mapped to a different metablock. The mapping information is maintained in a set of logical to physical directories, which will be described in more detail later. - Other types of logical group to metablock mapping are also contemplated. For example, metablocks with variable size are disclosed in co-pending and co-owned United States patent application, entitled, “Adaptive Metablocks,” filed by Alan Sinclair, on the same day as the present application. The entire disclosure of the co-pending application is hereby incorporated herein by reference.
- One feature of the invention is that the system operates with a single logical partition, and groups of logical sectors throughout the logical address range of the memory system are treated identically. For example, sectors containing system data and sectors containing user data can be distributed anywhere among the logical address space.
- Unlike prior art systems, there is no special partitioning or zoning of system sectors (i.e., sectors relating to file allocation tables, directories or sub-directories) in order to localize in logical address space sectors that are likely to contain data with high-frequency and small-size updates. Instead, the present scheme of updating logical groups of sectors will efficiently handle the patterns of access that are typical of system sectors, as well as those typical of file data.
-
FIG. 4 illustrates the alignment of a metablock with structures in physical memory. Flash memory comprises blocks of memory cells which are erasable together as a unit. Such erase blocks are the minimum unit of erasure of flash memory or minimum erasable unit (MEU) of the memory. The minimum erase unit is a hardware design parameter of the memory, although in some memory systems that support erasing multiple MEUs, it is possible to configure a “super MEU” comprising more than one MEU. For flash EEPROM, a MEU may comprise one sector but preferably multiple sectors. In the example shown, it has M sectors. In the preferred embodiment, each sector can store 512 bytes of data and has a user data portion and a header portion for storing system or overhead data. If the metablock is constituted from P MEUs, and each MEU contains M sectors, then each metablock will have N=P*M sectors. - The metablock represents, at the system level, a group of memory locations, e.g., sectors that are erasable together. The physical address space of the flash memory is treated as a set of metablocks, with a metablock being the minimum unit of erasure. Within this specification, the terms “metablock” and “block” are used synonymously to define the minimum unit of erasure at the system level for media management, and the term “minimum erase unit” or MEU is used to denote the minimum unit of erasure of flash memory.
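As a rough illustration of the geometry just described, the sketch below assumes a toy configuration (the values and names are invented for the example, and sectors are laid out MEU by MEU, which is only one possible arrangement) and computes N = P*M together with the MEU that holds a given metablock-relative sector offset.

```python
# Toy metablock geometry (assumed values, for illustration only).
SECTORS_PER_MEU = 4        # M: sectors in one minimum erase unit
MEUS_PER_METABLOCK = 2     # P: MEUs linked into one metablock
SECTORS_PER_METABLOCK = MEUS_PER_METABLOCK * SECTORS_PER_MEU   # N = P * M

def locate_in_metablock(sector_offset):
    """Split a metablock-relative sector offset into (MEU index, offset within the MEU)."""
    return divmod(sector_offset, SECTORS_PER_MEU)

assert SECTORS_PER_METABLOCK == 8
assert locate_in_metablock(5) == (1, 1)   # the sixth sector lives in the second MEU
```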
- Linking of Minimum Erase Units (MEUs) to Form a Metablock
- In order to maximize programming speed and erase speed, parallelism is exploited as much as possible by arranging for multiple pages of information, located in multiple MEUs, to be programmed in parallel, and for multiple MEUs to be erased in parallel.
- In flash memory, a page is a grouping of memory cells that may be programmed together in a single operation. A page may comprise one or more sectors. Also, a memory array may be partitioned into more than one plane, where only one MEU within a plane may be programmed or erased at a time. Finally, the planes may be distributed among one or more memory chips.
- In flash memory, the MEUs may comprise one or more pages. MEUs within a flash memory chip may be organized in planes. Since one MEU from each plane may be programmed or erased concurrently, it is expedient to form a multiple MEU metablock by selecting one MEU from each plane (see
FIG. 5B below.) -
FIG. 5A illustrates metablocks being constituted from linking of minimum erase units of different planes. Each metablock, such as MB0, MB1, . . . , is constituted from MEUs from different planes of the memory system, where the different planes may be distributed among one or more chips. Themetablock link manager 170 shown inFIG. 2 manages the linking of the MEUs for each metablock. Each metablock is configured during an initial formatting process, and retains its constituent MEUs throughout the life of the system, unless there is a failure of one of the MEUs. -
FIG. 5B illustrates one embodiment in which one minimum erase unit (MEU) is selected from each plane for linking into a metablock. -
FIG. 5C illustrates another embodiment in which more than one MEU is selected from each plane for linking into a metablock. In another embodiment, more than one MEU may be selected from each plane to form a super MEU. For example, a super MEU may be formed from two MEUs. In this case, it may take more than one pass for a read or write operation. - The linking and re-linking of MEUs into metablocks is also disclosed in co-pending and co-owned United States patent application, entitled “Adaptive Deterministic Grouping of Blocks into Multi-Block Structures,” filed by Carlos Gonzales et al, on the same day as the present application. The entire disclosure of the co-pending application is hereby incorporated herein by reference.
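A minimal sketch of the linking idea, assuming a simple free list of erased MEUs per plane (the function and variable names are hypothetical): one MEU is taken from each plane so that the resulting metablock can be programmed and erased with full parallelism.

```python
# Illustrative sketch of metablock linking; names are assumptions, not from the patent.
def link_metablock(erased_meus_per_plane):
    """Pick one erased MEU from each plane to form the constituent list of a metablock."""
    return [plane_free_list.pop(0) for plane_free_list in erased_meus_per_plane]

# Four planes, each with its own pool of erased MEUs (identified here by simple numbers).
erased_meus = [[0, 4], [1, 5], [2, 6], [3, 7]]
mb0 = link_metablock(erased_meus)   # -> [0, 1, 2, 3]
mb1 = link_metablock(erased_meus)   # -> [4, 5, 6, 7]
print(mb0, mb1)
```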
- Metablock Management
-
FIG. 6 is a schematic block diagram of the metablock management system as implemented in the controller and flash memory. The metablock management system comprises various functional modules implemented in the controller 100 and maintains various control data (including directory data) in tables and lists hierarchically distributed in the flash memory 200 and the controller RAM 130. The functional modules implemented in the controller 100 include an interface module 110, a logical-to-physical address translation module 140, an update block manager module 150, an erase block manager module 160 and a metablock link manager 170. - The
interface 110 allows the metablock management system to interface with a host system. The logical to physical address translation module 140 maps the logical address from the host to a physical memory location. The update block manager module 150 manages data update operations in memory for a given logical group of data. The erased block manager 160 manages the erase operation of the metablocks and their allocation for storage of new information. A metablock link manager 170 manages the linking of subgroups of minimum erasable blocks of sectors to constitute a given metablock. A detailed description of these modules is given in their respective sections. - During operation the metablock management system generates and works with control data such as addresses, control and status information. Since much of the control data tends to be frequently changing data of small size, it cannot be readily stored and maintained efficiently in a flash memory with a large block structure. A hierarchical and distributed scheme is employed to store the more static control data in the nonvolatile flash memory while locating the smaller amount of the more varying control data in controller RAM for more efficient update and access. In the event of a power shutdown or failure, the scheme allows the control data in the volatile controller RAM to be rebuilt quickly by scanning a small set of control data in the nonvolatile memory. This is possible because the invention restricts the number of blocks associated with the possible activity of a given logical group of data. In this way, the scanning is confined. In addition, some of the control data that requires persistence is stored in a nonvolatile metablock that can be updated sector-by-sector, with each update resulting in a new sector being recorded that supersedes a previous one. A sector indexing scheme is employed for control data to keep track of the sector-by-sector updates in a metablock.
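The sector-by-sector update scheme for persistent control data can be sketched as an append-only log in which only the last written copy of a given control sector is valid. The Python below is a simplified model under that assumption; the class and field names are invented for illustration and do not reflect the actual controller firmware.

```python
# Sketch of sector-by-sector control data updates (hypothetical names).
# Each update appends a new copy of a control sector; only the last written copy is valid,
# so rebuilding the RAM copy after power loss only requires scanning this one block.

class ControlBlock:
    def __init__(self, num_sectors=16):
        self.sectors = []              # physical sectors written so far, in order
        self.capacity = num_sectors

    def write(self, key, payload):
        if len(self.sectors) == self.capacity:
            self.compact()             # model of rewriting only the valid copies to a fresh block
        self.sectors.append((key, payload))

    def read(self, key):
        # The latest written copy supersedes all earlier ones.
        for k, payload in reversed(self.sectors):
            if k == key:
                return payload
        return None

    def compact(self):
        latest = {}
        for k, payload in self.sectors:
            latest[k] = payload
        self.sectors = list(latest.items())

cb = ControlBlock()
cb.write("EBL", ["mb7", "mb9"])
cb.write("EBL", ["mb9"])           # supersedes the first copy
assert cb.read("EBL") == ["mb9"]
```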
- The
non-volatile flash memory 200 stores the bulk of control data that are relatively static. This includes group address tables (GAT) 210, chaotic block indices (CBI) 220, erased block lists (EBL) 230 and MAP 240. The GAT 210 keeps track of the mapping between logical groups of sectors and their corresponding metablocks. The mappings do not change except for those undergoing updates. The CBI 220 keeps track of the mapping of logically non-sequential sectors during an update. The EBL 230 keeps track of the pool of metablocks that have been erased. MAP 240 is a bitmap showing the erase status of all metablocks in the flash memory. - The
volatile controller RAM 130 stores a small portion of control data that are frequently changing and accessed. This includes an allocation block list (ABL) 134 and a cleared block list (CBL) 136. The ABL 134 keeps track of the allocation of metablocks for recording update data while the CBL 136 keeps track of metablocks that have been deallocated and erased. In the preferred embodiment, the RAM 130 acts as a cache for control data stored in flash memory 200. - Update Block Manager
- The update block manager 150 (shown in
FIG. 2 ) handles the update of logical groups. According to one aspect of the invention, each logical group of sectors undergoing an update is allocated a dedicated update metablock for recording the update data. In the preferred embodiment, any segment of one or more sectors of the logical group will be recorded in the update block. An update block can be managed to receive updated data in either sequential order or non-sequential (also known as chaotic) order. A chaotic update block allows sector data to be updated in any order within a logical group, and with any repetition of individual sectors. In particular, a sequential update block can become a chaotic update block, without need for relocation of any data sectors. No predetermined allocation of blocks for chaotic data update is required; a non-sequential write at any logical address is automatically accommodated. Thus, unlike prior art systems, there is no special treatment of whether the various update segments of the logical group are in logically sequential or non-sequential order. The generic update block will simply be used to record the various segments in the order they are requested by the host. For example, even if host system data or system control data tends to be updated in chaotic fashion, regions of logical address space corresponding to host system data do not need to be treated differently from regions with host user data. - Data of a complete logical group of sectors is preferably stored in logically sequential order in a single metablock. In this way, the index to the stored logical sectors is predefined. When the metablock stores all the sectors of a given logical group in a predefined order, it is said to be “intact.” As for an update block, when it eventually fills up with update data in logically sequential order, then the update block will become an updated intact metablock that readily replaces the original metablock. On the other hand, if the update block fills up with update data in a logically different order from that of the intact block, the update block is a non-sequential or chaotic update block and the out-of-order segments must be further processed so that eventually the update data of the logical group is stored in the same order as that of the intact block. In the preferred case, it is in logically sequential order in a single metablock. The further processing involves consolidating the updated sectors in the update block with unchanged sectors in the original block into yet another update metablock. The consolidated update block will then be in logically sequential order and can be used to replace the original block. Under some predetermined condition, the consolidation process is preceded by one or more compaction processes. The compaction process simply re-records the sectors of the chaotic update block into a replacing chaotic update block while eliminating any duplicate logical sector that has been rendered obsolete by a subsequent update of the same logical sector.
- The update scheme allows for multiple update threads running concurrently, up to a predefined maximum. Each thread is a logical group undergoing updates using its dedicated update metablock.
- Sequential Data Update
- When data belonging to a logical group is first updated, a metablock is allocated and dedicated as an update block for the update data of the logical group. The update block is allocated when a command is received from the host to write a segment of one or more sectors of the logical group for which an existing metablock has been storing all its sectors intact. For the first host write operation, a first segment of data is recorded on the update block. Since each host write is a segment of one or more sectors with contiguous logical addresses, it follows that the first update is always sequential in nature. In subsequent host writes, update segments within the same logical group are recorded in the update block in the order received from the host. A block continues to be managed as a sequential update block whilst sectors updated by the host within the associated logical group remain logically sequential. All sectors updated in this logical group are written to this sequential update block, until the block is either closed or converted to a chaotic update block.
-
FIG. 7A illustrates an example of sectors in a logical group being written in sequential order to a sequential update block as a result of two separate host write operations, whilst the corresponding sectors in the original block for the logical group become obsolete. In host write operation #1, the data in the logical sectors LS5-LS8 is updated. The updated data LS5′-LS8′ are recorded in a newly allocated dedicated update block. - For expediency, the first sector to be updated in the logical group is recorded in the dedicated update block starting from the first physical sector location. In general, the first logical sector to be updated is not necessarily the logical first sector of the group, and there may therefore be an offset between the start of the logical group and the start of the update block. This offset is known as the page tag, as described previously in connection with
FIG. 3A . Subsequent sectors are updated in logically sequential order. When the last sector of the logical group is written, group addresses wrap around and the write sequence continues with the first sector of the group. - In host
write operation #2, the segment of data in the logical sectors LS9-LS12 is updated. The updated data LS9′-LS12′ are recorded in the dedicated update block in a location directly following where the last write ends. It can be seen that the two host writes are such that the update data has been recorded in the update block in logically sequential order, namely LS5′-LS12′. The update block is regarded as a sequential update block since it has been filled in logically sequential order. The update data recorded in the update block obsoletes the corresponding sectors in the original block. - Chaotic Data Update
- Chaotic update block management may be initiated for an existing sequential update block when any sector updated by the host within the associated logical group is logically non-sequential. A chaotic update block is a form of data update block in which logical sectors within an associated logical group may be updated in any order and with any amount of repetition. It is created by conversion from a sequential update block when a sector written by a host is logically non-sequential to the previously written sector within the logical group being updated. All sectors subsequently updated in this logical group are written in the next available sector location in the chaotic update block, whatever their logical sector address within the group.
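A toy model of the update block behaviour described above may help; the class below is a sketch only (the names and the capacity value are assumptions). It records sectors in the order received, converts itself from sequential to chaotic on the first non-sequential write, and treats the last written copy of a repeated sector as the only valid one, which is the behaviour traced through in the FIG. 7B example that follows.

```python
# Minimal model of a dedicated update block (hypothetical class; the real controller
# tracks this state in the open update block list described later).

class UpdateBlock:
    def __init__(self, capacity):
        self.capacity = capacity
        self.records = []          # (logical_sector, data) in the order written
        self.chaotic = False

    def write(self, logical_sector, data):
        if len(self.records) >= self.capacity:
            raise RuntimeError("update block full; close it out first")
        if self.records and not self.chaotic:
            last = self.records[-1][0]
            if logical_sector != last + 1:      # any gap, backward jump or repeat
                self.chaotic = True             # sequential block converts to chaotic
        self.records.append((logical_sector, data))

    def latest(self, logical_sector):
        # Only the last written copy of a repeated sector is valid.
        for ls, data in reversed(self.records):
            if ls == logical_sector:
                return data
        return None                              # still valid in the original block

ub = UpdateBlock(capacity=16)
for ls in (10, 11):
    ub.write(ls, f"LS{ls}'")
assert not ub.chaotic
ub.write(5, "LS5'")            # non-sequential write: the block becomes chaotic
ub.write(10, "LS10''")         # repeat: supersedes the earlier LS10'
assert ub.chaotic and ub.latest(10) == "LS10''"
```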
-
FIG. 7B illustrates an example of sectors in a logical group being written in chaotic order to a chaotic update block as a result of five separate host write operations, whilst superseded sectors in the original block for the logical group and duplicated sectors in the chaotic update block become obsolete. In host write operation #1, the logical sectors LS10-LS11 of a given logical group stored in an original metablock are updated. The updated logical sectors LS10′-LS11′ are stored in a newly allocated update block. At this point, the update block is a sequential one. In host write operation #2, the logical sectors LS5-LS6 are updated as LS5′-LS6′ and recorded in the update block in the location immediately following the last write. This converts the update block from a sequential to a chaotic one. In host write operation #3, the logical sector LS10 is updated again and is recorded in the next location of the update block as LS10″. At this point LS10″ in the update block supersedes LS10′ in a previous recording, which in turn supersedes LS10 in the original block. In host write operation #4, the data in the logical sector LS10 is again updated and is recorded in the next location of the update block as LS10′″. Thus, LS10′″ is now the latest and only valid data for the logical sector LS10. In host write operation #5, the data in logical sector LS30 is updated and recorded in the update block as LS30′. Thus, the example illustrates that sectors within a logical group can be written in a chaotic update block in any order and with any repetition. - Forced Sequential Update
-
FIG. 8 illustrates an example of sectors in a logical group being written in sequential order to a sequential update block as a result of two separate host write operations that have a discontinuity in logical addresses. In host write #1, the update data in the logical sectors LS5-LS8 is recorded in a dedicated update block as LS5′-LS8′. In host write #2, the update data in the logical sectors LS14-LS16 is recorded in the update block following the last write as LS14′-LS16′. However, there is an address jump between LS8 and LS14 and host write #2 would normally render the update block non-sequential. Since the address jump is not substantial, one option is to first perform a padding operation (#2A) by copying the data of the intervening sectors from the original block to the update block before executing host write #2. In this way, the sequential nature of the update block is preserved. -
FIG. 9 is a flow diagram illustrating a process by the update block manager to update a logical group of data, according to a general embodiment of the invention. The update process comprises the following steps: - STEP 260: The memory is organized into blocks, each block partitioned into memory units that are erasable together, each memory unit for storing a logical unit of data.
- STEP 262: The data is organized into logical groups, each logical group partitioned into logical units.
- STEP 264: In the standard case, all logical units of a logical group are stored among the memory units of an original block according to a first prescribed order, preferably, in logically sequential order. In this way, the index for accessing the individual logical units in the block is known.
- STEP 270: For a given logical group (e.g., LGX) of data, a request is made to update a logical unit within LGX. (A logical unit update is given as an example. In general the update will be a segment of one or more contiguous logical units within LGX.)
- STEP 272: The requested update logical unit is to be stored in a second block, dedicated to recording the updates of LGX. The recording order is according to a second order, typically, the order the updates are requested. One feature of the invention allows an update block to be set up initially generic to recording data in logically sequential or chaotic order. So depending on the second order, the second block can be a sequential one or a chaotic one.
- STEP 274: The second block continues to have requested logical units recorded as the process loops back to
STEP 270. The second block will be closed to receiving further updates when a predetermined condition for closure materializes. In that case, the process proceeds to STEP 276. - STEP 276: A determination is made whether or not the closed, second block has its update logical units recorded in a similar order as that of the original block. The two blocks are considered to have similar order when their recorded logical units differ only by a page tag, as described in connection with
FIG. 3A . If the two blocks have similar order, the process proceeds to STEP 280; otherwise, some form of garbage collection needs to be performed in STEP 290. - STEP 280: Since the second block has the same order as the first block, it is used to replace the original, first block. The update process then ends at
STEP 299. - STEP 290: The latest version of each logical unit of the given logical group is gathered from among the second block (update block) and the first block (original block). The consolidated logical units of the given logical group are then written to a third block in an order similar to the first block.
- STEP 292: Since the third block (consolidated block) has a similar order to the first block, it is used to replace the original, first block. The update process then ends at
STEP 299. - STEP 299: When a closeout process creates an intact update block, it becomes the new standard block for the given logical group. The update thread for the logical group will be terminated.
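The overall flow of FIG. 9 can be condensed into a short sketch. The function below is illustrative only (blocks are modelled as lists of (logical unit, data) pairs and the names are assumptions): if the closed update block already has a similar order to the original it simply replaces it, otherwise the latest versions are gathered into a third, consolidated block.

```python
# Sketch of the general flow of FIG. 9; function and variable names are assumptions.
def close_out(original, update, similar_order):
    """STEPs 276-292: replace the original directly, or consolidate into a third block."""
    if similar_order:
        return update                       # STEP 280: the update block replaces the original
    # STEP 290: gather the latest version of every logical unit from both blocks,
    # then write them to a third block in the same (sequential) order as the original.
    latest = dict(original)
    latest.update(dict(update))             # duplicates in the update: last written copy wins
    consolidated = [(lu, latest[lu]) for lu in sorted(latest)]
    return consolidated                     # STEP 292: the consolidated block replaces the original

original = [(i, f"LU{i}") for i in range(4)]
chaotic_update = [(2, "LU2'"), (0, "LU0'"), (2, "LU2''")]
print(close_out(original, chaotic_update, similar_order=False))
```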
-
FIG. 10 is a flow diagram illustrating a process by the update block manager to update a logical group of data, according to a preferred embodiment of the invention. The update process comprises the following steps: - STEP 310: For a given logical group (e.g., LGX) of data, a request is made to update a logical sector within LGX. (A sector update is given as an example. In general the update will be a segment of one or more contiguous logical sectors within LGX.)
- STEP 312: If an update block dedicated to LGX does not already exist, proceed to STEP 410 to initiate a new update thread for the logical group. This will be accomplished by allocating an update block dedicated to recording update data of the logical group. If there is already an update block open, proceed to STEP 314 to begin recording the update sector onto the update block.
- STEP 314: If the current update block is already chaotic (i.e., non-sequential) then simply proceed to STEP 510 for recording the requested update sector onto the chaotic update block. If the current update block is sequential, proceed to STEP 316 for processing of a sequential update block.
- STEP 316: One feature of the invention allows an update block to be set up initially generic to recording data in logically sequential or chaotic order. However, since the logical group ultimately has its data stored in a metablock in a logically sequential order, it is desirable to keep the update block sequential as far as possible. Less processing will then be required when an update block is closed to further updates as garbage collection will not be needed.
- Thus determination is made whether the requested update will follow the current sequential order of the update block. If the update follows sequentially, then proceed to STEP 510 to perform a sequential update, and the update block will remain sequential. On the other hand, if the update does not follow sequentially (chaotic update), it will convert the sequential update block to a chaotic one if no other actions are taken.
- In one embodiment, nothing more is done to salvage the situation and the process proceeds directly to STEP 370 where the update is allowed to turn the update block into a chaotic one.
- Optional Forced Sequential Process
- In another embodiment, a forced
sequential process STEP 320 is optionally performed to preserve the sequential update block as far as possible in view of a pending chaotic update. There are two situations, both of which require copying missing sectors from the original block to maintain the sequential order of logical sectors recorded on the update block. The first situation is where the update creates a short address jump. The second situation is to prematurely close out an update block in order to keep it sequential. The forcedsequential process STEP 320 comprises the following substeps: - STEP 330: If the update creates a logical address jump not greater a predetermined amount, CB, the process proceeds to a forced sequential update process in
STEP 350, otherwise the process proceeds to STEP 340 to consider if it qualifies for a forced sequential closeout. - STEP 340: If the number of unfilled physical sectors exceeds a predetermined design parameter, CC, whose typical value is half of the size of the update block, then the update block is relatively unused and will not be prematurely closed. The process proceeds to STEP 370 and the update block will become chaotic. On the other hand, if the update block is substantially filled, it is considered to have been well utilized already and therefore is directed to STEP 360 for forced sequential closeout.
- STEP 350: Forced sequential update allows current sequential update block to remain sequential as long as the address jump does not exceed a predetermined amount, CB. Essentially, sectors from the update block's associated original block are copied to fill the gap spanned by the address jump. Thus, the sequential update block will be padded with data in the intervening addresses before proceeding to STEP 510 to record the current update sequentially.
- STEP 360: Forced sequential closeout allows the currently sequential update block to be closed out if it is already substantially filled rather than converted to a chaotic one by the pending chaotic update. A chaotic or non-sequential update is defined as one with a forward address transition not covered by the address jump exception described above, a backward address transition, or an address repetition. To prevent a sequential update block from being converted by a chaotic update, the unwritten sector locations of the update block are filled by copying sectors from the update block's associated original partly-obsolete block. The original block is then fully obsolete and may be erased. The current update block now has the full set of logical sectors and is then closed out as an intact metablock replacing the original metablock. The process then proceeds to STEP 430 to have a new update block allocated in its place to accept the recording of the pending sector update that was first requested in
STEP 310. - Conversion to Chaotic Update Block
- STEP 370: When the pending update is not in sequential order and optionally, if the forced sequential conditions are not satisfied, the sequential update block is allowed to be converted to a chaotic one by virtue of allowing the pending update sector, with non-sequential address, to be recorded on the update block when the process proceeds to STEP 510. If the maximum number of chaotic update blocks exist, it is necessary to close the least recently accessed chaotic update block before allowing the conversion to proceed; thus preventing the maximum number of chaotic blocks from being exceeded. The identification of the least recently accessed chaotic update block is the same as the general case described in
STEP 420, but is constrained to chaotic update blocks only. Closing a chaotic update block at this time is achieved by consolidation as described inSTEP 550. - Allocation of New Update Block Subject to System Restriction
- STEP 410: The process of allocating an erase metablock as an update block begins with the determination whether a predetermined system limitation is exceeded or not. Due to finite resources, the memory management system typically allows a predetermined maximum number of update blocks, CA, to exist concurrently. This limit is the aggregate of sequential update blocks and chaotic update blocks, and is a design parameter. In a preferred embodiment, the limit is, for example, a maximum of 8 update blocks. Also, due to the higher demand on system resources, there may also be a corresponding predetermined limit on the maximum number of chaotic update blocks that can be open concurrently (e.g., 4.)
- Thus, when CA update blocks have already been allocated, then the next allocation request could only be satisfied after closing one of the existing allocated ones. The process proceeds to STEP 420. When the number of open update blocks is less than CA, the process proceeds directly to
STEP 430. - STEP 420: In the event the maximum number of update blocks, CA, is exceeded, the least-recently accessed update block is closed and garbage collection is performed. The least recently accessed update block is identified as the update block associated with the logical block that has been accessed least recently. For the purpose of determining the least recently accessed blocks, an access includes writes and optionally reads of logical sectors. A list of open update blocks is maintained in order of access; at initialization, no access order is assumed. The closure of an update block follows along the similar process described in connection with
STEP 360 and STEP 530 when the update block is sequential, and in connection with STEP 540 when the update block is chaotic. The closure makes room for the allocation of a new update block in STEP 430.
- Record Update Data onto Update Block
- STEP 510: The requested update sector is recorded onto next available physical location of the update block. The process then proceeds to STEP 520 to determine if the update block is ripe for closeout.
- Update Block Closeout
- STEP 520: If the update block still has room for accepting additional updates, proceed to STEP 570. Otherwise proceed to STEP 522 to closeout the update block. There are two possible implementations of filling up an update block when the current requested write attempts to write more logical sectors than the block has room for. In the first implementation, the write request is split into two portions, with the first portion writing up to the last physical sector of the block. The block is then closed and the second portion of the write will be treated as the next requested write. In the other implementation, the requested write is withheld while the block has it remaining sectors padded and is then closed. The requested write will be treated as the next requested write.
- STEP 522: If the update block is sequential, proceed to STEP 530 for sequential closure. If the update block is chaotic, proceed to STEP 540 for chaotic closure.
- Sequential Update Block Closeout
- STEP 530: Since the update block is sequential and fully filled, the logical group stored in it is intact. The metablock is intact and replaces the original one. At this time, the original block is fully obsolete and may be erased. The process then proceeds to STEP 570 where the update thread for the given logical group ends.
- Chaotic Update Block Closeout
- STEP 540: Since the update block is non-sequentially filled and may contain multiple updates of some logical sectors, garbage collection is performed to salvage the valid data in it. The chaotic update block will either be compacted or consolidated. Which process to perform will be determined in STEP 542.
- STEP 542: To perform compaction or consolidation will depend on the degeneracy of the update block. If a logical sector is updated multiple times, its logical address is highly degenerate. There will be multiple versions of the same logical sector recorded on the update block and only the last recorded version is the valid one for that logical sector. In an update block containing logical sectors with multiple versions, the number of distinct logical sectors will be much less than that of a logical group.
- In the preferred embodiment, when the number of distinct logical sectors in the update block exceeds a predetermined design parameter, CD, whose typical value is half of the size of a logical group, the closeout process will perform a consolidation in
STEP 550, otherwise the process will proceed to compaction inSTEP 560. - STEP 550: If the chaotic update block is to be consolidated, the original block and the update block will be replaced by a new standard metablock containing the consolidated data. After consolidation the update thread will end in
STEP 570. - STEP 560: If the chaotic update block is to be compacted, it will be replaced by a new update block carrying the compacted data. After compaction the processing of the compacted update block will end in
STEP 570. Alternatively, compaction can be delayed until the update block is written to again, thus removing the possibility of compaction being followed by consolidation without intervening updates. The new update block will then be used in further updating of the given logical block when a next request for update in LGX appears in STEP 502. - STEP 570: When a closeout process creates an intact update block, it becomes the new standard block for the given logical group. The update thread for the logical group will be terminated. When a closeout process creates a new update block replacing an existing one, the new update block will be used to record the next update requested for the given logical group. When an update block is not closed out, the processing will continue when a next request for update in LGX appears in
STEP 310. - As can be seen from the process described above, when a chaotic update block is closed, the update data recorded on it is further processed. In particular its valid data is garbage collected either by a process of compaction to another chaotic block, or by a process of consolidation with its associated original block to form a new standard sequential block.
-
FIG. 11A is a flow diagram illustrating in more detail the consolidation process of closing a chaotic update block shown in FIG. 10 . Chaotic update block consolidation is one of two possible processes performed when the update block is being closed out, e.g., when the update block is full with its last physical sector location written. Consolidation is chosen when the number of distinct logical sectors written in the block exceeds a predetermined design parameter, CD. The consolidation process STEP 550 shown in FIG. 10 comprises the following substeps: - STEP 551: When a chaotic update block is being closed, a new metablock replacing it will be allocated.
- STEP 552: Gather the latest version of each logical sector among the chaotic update block and its associated original block, ignoring all the obsolete sectors.
- STEP 554: Record the gathered valid sectors onto the new metablock in logically sequential order to form an intact block, i.e., a block with all the logical sectors of a logical group recorded in sequential order.
- STEP 556: Replace the original block with the new intact block.
- STEP 558: Erase the closed out update block and the original block.
-
FIG. 11B is a flow diagram illustrating in more detail the compaction process for closing a chaotic update block shown in FIG. 10 . Compaction is chosen when the number of distinct logical sectors written in the block is below a predetermined design parameter, CD. The compaction process STEP 560 shown in FIG. 10 comprises the following substeps: - STEP 561: When a chaotic update block is being compacted, a new metablock replacing it will be allocated.
- STEP 562: Gather the latest version of each logical sector among the existing chaotic update block to be compacted.
- STEP 564: Record the gathered sectors onto the new update block to form a new update block having compacted sectors.
- STEP 566: Replace the existing update block with the new update block having compacted sectors.
- STEP 568: Erase the closed out update block.
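Taken together, FIGS. 11A and 11B amount to a choice between two garbage-collection procedures. The sketch below is illustrative only (block contents are modelled as lists of (logical sector, data) pairs and the names are assumptions): consolidation merges the chaotic block with its original into an intact block when many distinct sectors were written, while compaction merely strips obsolete duplicates.

```python
# Sketch of the chaotic close-out choice and of the two procedures (FIGS. 11A/11B).
def close_chaotic_block(chaotic, original, cd_threshold):
    distinct = {ls for ls, _ in chaotic}
    latest_updates = dict(chaotic)                     # last written copy of each sector wins
    if len(distinct) > cd_threshold:
        # Consolidation (STEPs 551-558): merge with the original into an intact block.
        merged = dict(original)
        merged.update(latest_updates)
        intact = [(ls, merged[ls]) for ls in sorted(merged)]
        return "consolidated", intact                  # replaces the original block
    # Compaction (STEPs 561-568): keep only the valid copies, still as an update block.
    compacted = list(latest_updates.items())           # order shown is illustrative only
    return "compacted", compacted

original = [(i, f"LS{i}") for i in range(8)]
chaotic = [(2, "LS2'"), (2, "LS2''"), (5, "LS5'")]
print(close_chaotic_block(chaotic, original, cd_threshold=4))   # few distinct sectors -> compacted
```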
- Logical and Metablock States
-
FIG. 12A illustrates all possible states of a Logical Group, and the possible transitions between them under various operations. -
FIG. 12B is a table listing the possible states of a Logical Group. The Logical Group states are defined as follows: - 1. Intact: All logical sectors in the Logical Group have been written in logically sequential order, possibly using page tag wrap around, in a single metablock.
- 2. Unwritten: No logical sector in the Logical Group has ever been written. The Logical Group is marked as unwritten in a group address table and has no allocated metablock. A predefined data pattern is returned in response to a host read for every sector within this group.
- 3. Sequential Update: Some sectors within the Logical Group have been written in logically sequential order in a metablock, possibly using page tag, so that they supersede the corresponding logical sectors from any previous Intact state of the group.
- 4. Chaotic Update: Some sectors within the Logical Group have been written in logically non-sequential order in a metablock, possibly using page tag, so that they supersede the corresponding logical sectors from any previous Intact state of the group. A sector within the group may be written more than once, with the latest version superseding all previous versions.
-
FIG. 13A illustrates all possible states of a metablock, and the possible transitions between them under various operations. -
FIG. 13B is a table listing the possible states of a metablock. The metablock states are defined as follows: - 1. Erased: All the sectors in the metablock are erased.
- 2. Sequential Update: The metablock is partially written with sectors in logically sequential order, possibly using page tag. All the sectors belong to the same Logical Group.
- 3. Chaotic Update: The metablock is partially or fully written with sectors in logically non-sequential order. Any sector can be written more than once. All sectors belong to the same Logical Group.
- 4: Intact: The metablock is fully written in logically sequential order, possibly using page tag.
- 5: Original: The metablock was previously Intact but at least one sector has been made obsolete by a host data update.
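For reference, the two state sets above can be written down as simple enumerations (a convenience for illustration only; the names mirror the states listed in FIGS. 12B and 13B).

```python
from enum import Enum, auto

class LogicalGroupState(Enum):
    INTACT = auto()
    UNWRITTEN = auto()
    SEQUENTIAL_UPDATE = auto()
    CHAOTIC_UPDATE = auto()

class MetablockState(Enum):
    ERASED = auto()
    SEQUENTIAL_UPDATE = auto()
    CHAOTIC_UPDATE = auto()
    INTACT = auto()
    ORIGINAL = auto()
```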
- FIGS. 14(A)-14(J) are state diagrams showing the effect of various operations on the state of the logical group and also on the physical metablock.
-
FIG. 14 (A) shows state diagrams corresponding to the logical group and the metablock transitions for a first write operation. The host writes one or more sectors of a previously unwritten Logical Group in logically sequential order to a newly allocated Erased metablock. The Logical Group and the metablock go to the Sequential Update state. -
FIG. 14 (B) shows state diagrams corresponding to the logical group and the metablock transitions for a first intact operation. A previously unwritten Sequential Update Logical Group becomes Intact as all the sectors are written sequentially by the host. The transition can also happen if the card fills up the group by filling the remaining unwritten sectors with a predefined data pattern. The metablock becomes Intact. -
FIG. 14 (C) shows state diagrams corresponding to the logical group and the metablock transitions for a first chaotic operation. A previously unwritten Sequential Update Logical Group becomes Chaotic when at least one sector has been written non-sequentially by the host. -
FIG. 14 (D) shows state diagrams corresponding to the logical group and the metablock transitions for a first compaction operation. All valid sectors within a previously unwritten Chaotic Update Logical Group are copied to a new Chaotic metablock from the old block, which is then erased. -
FIG. 14 (E) shows state diagrams corresponding to the logical group and the metablock transitions for a first consolidation operation. All valid sectors within a previously unwritten Chaotic Update Logical Group are moved from the old Chaotic block to fill a newly allocated Erased block in logically sequential order. Sectors unwritten by the host are filled with a predefined data pattern. The old chaotic block is then erased. -
FIG. 14 (F) shows state diagrams corresponding to the logical group and the metablock transitions for a sequential write operation. The host writes one or more sectors of an Intact Logical Group in logically sequential order to a newly allocated Erased metablock. The Logical Group and the metablock go to Sequential Update state. The previously Intact metablock becomes an Original metablock. -
FIG. 14 (G) shows state diagrams corresponding to the logical group and the metablock transitions for a sequential fill operation. A Sequential Update Logical Group becomes Intact when all its sectors are written sequentially by the host. This may also occur during garbage collection when the Sequential Update Logical Group is filled with valid sectors from the original block in order to make it Intact, after which the original block is erased. -
FIG. 14 (H) shows state diagrams corresponding to the logical group and the metablock transitions for a non-sequential write operation. A Sequential Update Logical Group becomes Chaotic when at least one sector is written non-sequentially by the host. The non-sequential sector writes may cause valid sectors in either the Update block or the corresponding Original block to become obsolete. -
FIG. 14 (I) shows state diagrams corresponding to the logical group and the metablock transitions for a compaction operation. All valid sectors within a Chaotic Update Logical Group are copied into a new chaotic metablock from the old block, which is then erased. The Original block is unaffected. -
FIG. 14 (J) shows state diagrams corresponding to the logical group and the metablock transitions for a consolidation operation. All valid sectors within a Chaotic Update Logical Group are copied from the old chaotic block and the Original block to fill a newly allocated Erased block in logically sequential order. The old chaotic block and the Original block are then erased. - Update Block Tracking and Management
-
FIG. 15 illustrates a preferred embodiment of the structure of an allocation block list (ABL) for keeping track of opened and closed update blocks and erased blocks for allocation. The allocation block list (ABL) 610 is held incontroller RAM 130, to allow management of allocation of erased blocks, allocated update blocks, associated blocks and control structures, and to enable correct logical to physical address translation. In the preferred embodiment, the ABL includes a list of erased blocks, an openupdate block list 614 and a closedupdate block list 616. - The open
update block list 614 is the set of block entries in the ABL with the attributes of Open Update Block. The open update block list has one entry for each data update block currently open. Each entry holds the following information. LG is the logical group address the current update metablock is dedicated to. Sequential/Chaotic is a status indicating whether the update block has been filled with sequential or chaotic update data. MB is the metablock address of the update block. Page tag is the starting logical sector recorded at the first physical location of the update block. Number of sectors written indicates the number of sectors currently written onto the update block. MB0 is the metablock address of the associated original block. Page Tag0 is the page tag of the associated original block. - The closed
update block list 616 is a subset of the Allocation Block List (ABL). It is the set of block entries in the ABL with the attributes of Closed Update Block. The closed update block list has one entry for each data update block which has been closed, but whose entry has not been updated in a logical to a main physical directory. Each entry holds the following information. LG is the logical group address the current update block is dedicated to. MB is the metablock address of the update block. Page tag is the starting logical sector recorded at the first physical location of the update block. MB0 is the metablock address of the associated original block. - Chaotic Block Indexing
- A sequential update block has the data stored in logically sequential order, thus any logical sector among the block can be located easily. A chaotic update block has its logical sectors stored out of order and may also store multiple update generations of a logical sector. Additional information must be maintained to keep track of where each valid logical sector is located in the chaotic update block.
- In the preferred embodiment, chaotic block indexing data structures allow tracking and fast access of all valid sectors in a chaotic block. Chaotic block indexing independently manages small regions of logical address space, and efficiently handles system data and hot regions of user data. The indexing data structures essentially allow indexing information to be maintained in flash memory with infrequent update requirement so that performance is not significantly impacted. On the other hand, lists of recently written sectors in chaotic blocks are held in a chaotic sector list in controller RAM. Also, a cache of index information from flash memory is held in controller RAM in order to minimize the number of flash sector accesses for address translation. Indexes for each chaotic block are stored in chaotic block index (CBI) sectors in flash memory.
-
FIG. 16A illustrates the data fields of a chaotic block index (CBI) sector. A Chaotic Block Index Sector (CBI sector) contains an index for each sector in a logical group mapped to a chaotic update block, defining the location of each sector of the logical group within the chaotic update block or its associated original block. A CBI sector includes a chaotic block index field for keeping track of valid sectors within the chaotic block, a chaotic block info field for keeping track of address parameters for the chaotic block, and a sector index field for keeping track of the valid CBI sectors within the metablock (CBI block) storing the CBI sectors. -
FIG. 16B illustrates an example of the chaotic block index (CBI) sectors being recorded in a dedicated metablock. The dedicated metablock will be referred to as aCBI block 620. When a CBI sector is updated, it is written in the next available physical sector location in theCBI block 620. Multiple copies of a CBI sector may therefore exist in the CBI block, with only the last written copy being valid. For example the CBI sector for the logical group LG1 has been updated three times with the latest version being the valid one. The location of each valid sector in the CBI block is identified by a set of indices in the last written CBI sector in the block. In this example, the last written CBI sector in the block is CBI sector for LG136 and its set of indices is the valid one superceding all previous ones. When the CBI block eventually becomes fully filled with CBI sectors, the block is compacted during a control write operation by rewriting all valid sectors to a new block location. The full block is then erased. - The chaotic block index field within a CBI sector contains an index entry for each logical sector within a logical group or sub-group mapped to a chaotic update block. Each index entry signifies an offset within the chaotic update block at which valid data for the corresponding logical sector is located. A reserved index value indicates that no valid data for the logical sector exists in the chaotic update block, and that the corresponding sector in the associated original block is valid. A cache of some chaotic block index field entries is held in controller RAM.
- The chaotic block info field within a CBI sector contains one entry for each chaotic update block that exists in the system, recording address parameter information for the block. Information in this field is only valid in the last written sector in the CBI block. This information is also present in data structures in RAM.
- The entry for each chaotic update block includes three address parameters. The first is the logical address of the logical group (or logical group number) associated with the chaotic update block. The second is the metablock address of the chaotic update block. The third is the physical address offset of the last sector written in the chaotic update block. The offset information sets the start point for scanning of the chaotic update block during initialization, to rebuild data structures in RAM.
- The sector index field contains an entry for each valid CBI sector in the CBI block. It defines the offsets within the CBI block at which the most recently written CBI sectors relating to each permitted chaotic update block are located. A reserved value of an offset in the index indicates that a permitted chaotic update block does not exist.
-
FIG. 16C is a flow diagram illustrating access to the data of a logical sector of a given logical group undergoing chaotic update. During the update process, the update data is recorded in the chaotic update block while the unchanged data remains in the original metablock associated with the logical group. The process of accessing a logical sector of the logical group under chaotic update is as follows: - STEP 650: Begin locating a given logical sector of a given logical group.
- STEP 652: Locate last written CBI sector in the CBI block.
- STEP 654: Locate the chaotic update block or original block associated with the given logical group by looking up the Chaotic Block Info field of the last written CBI sector. This step can be performed any time just before
STEP 662. - STEP 658: If the last written CBI sector is directed to the given logical group, the CBI sector is located. Proceed to STEP 662. Otherwise, proceed to STEP 660.
- STEP 660: Locate the CBI sector for the given logical group by looking up the sector index field of the last written CBI sector.
- STEP 662: Locate the given logical sector among either the chaotic block or the original block by looking up the Chaotic Block Index field of the located CBI sector.
-
FIG. 16D is a flow diagram illustrating access to the data of a logical sector of a given logical group undergoing chaotic update, according to an alternative embodiment in which logical group has been partitioned into subgroups. The finite capacity of a CBI sector can only keep track of a predetermined maximum number of logical sectors. When the logical group has more logical sectors than a single CBI sector can handle, the logical group is partitioned into multiple subgroups with a CBI sector assigned to each subgroup. In one example, each CBI sector has enough capacity for tracking a logical group consisting of 256 sectors and up to 8 chaotic update blocks. If the logical group has a size exceeding 256 sectors, a separate CBI sector exists for each 256-sector sub-group within the logical group. CBI sectors may exist for up to 8 sub-groups within a logical group, giving support for logical groups up to 2048 sectors in size. - In the preferred embodiment, an indirect indexing scheme is employed to facilitate management of the index. Each entry of the sector index has direct and indirect fields.
- The direct sector index defines the offsets within the CBI block at which all possible CBI sectors relating to a specific chaotic update block are located. Information in this field is only valid in the last written CBI sector relating to that specific chaotic update block. A reserved value of an offset in the index indicates that the CBI sector does not exist because the corresponding logical subgroup relating to the chaotic update block either does not exist, or has not been updated since the update block was allocated.
- The indirect sector index defines the offsets within the CBI block at which the most recently written CBI sectors relating to each permitted chaotic update block are located. A reserved value of an offset in the index indicates that a permitted chaotic update block does not exist.
-
FIG. 16D shows the process of accessing a logical sector of the logical group under chaotic update as follows: - STEP 670: Partition each Logical Group into multiple subgroups and assign a CBI sector to each subgroup.
- STEP 680: Begin locating a given logical sector of a given subgroup of a given logical group.
- STEP 682: Locate the last written CBI sector in the CBI block.
- STEP 684: Locate the chaotic update block or original block associated with the given subgroup by looking up the Chaotic Block Info field of the last written CBI sector. This step can be performed any time just before
STEP 696. - STEP 686: If the last written CBI sector is directed to the given logical group, proceed to STEP 691. Otherwise, proceed to STEP 690.
- STEP 690: Locate the last written of the multiple CBI sectors for the given logical group by looking up the Indirect Sector Index field of the last written CBI sector.
- STEP 691: At least a CBI sector associate with one of the subgroups for the given logical group has been located. Continue.
- STEP 692: If the located CBI sector directed to the given subgroup, the CBI sector for the given subgroup is located. Proceed to STEP 696. Otherwise, proceed to STEP 694.
- STEP 694: Locate the CBI sector for the given subgroup by looking up the direct sector index field of the currently located CBI sector.
- STEP 696: Locate the given logical sector among either the chaotic block or the original block by looking up the Chaotic Block Index field of the CBI sector for the given subgroup.
-
FIG. 16E illustrates examples of Chaotic Block Indexing (CBI) sectors and their functions for the embodiment where each logical group is partitioned into multiple subgroups. Alogical group 700 originally has its intact data stored in anoriginal metablock 702. The logical group is then undergoing updates with the allocation of a dedicatedchaotic update block 704. In the present examples, thelogical group 700 is partitioned into subgroups, such subgroups A, B, C, D, each having 256 sectors. - In order to locate the ith sector in the subgroup B, the last written CBI sector in the CBI block 620 is first located. The chaotic block info field of the last written CBI sector provides the address to locate the
chaotic update block 704 for the given logical group. At the same time it provides the location of the last sector written in the chaotic block. This information is useful in the event of scanning and rebuilding indices. - If the last written CBI sector turns out to be one of the four CBI sectors of the given logical group, it will be further determined if it is exactly the CBI sector for the given subgroup B that contains the ith logical sector. If it is, then the CBI sector's chaotic block index will point to the metablock location for storing the data for the ith logical sector. The sector location could be either in the
chaotic update block 704 or theoriginal block 702. - If the last written CBI sector turns out to be one of the four CBI sectors of the given logical group but is not exactly for the subgroup B, then its direct sector index is looked up to locate the CBI sector for the subgroup B. Once this exact CBI sector is located, its chaotic block index is looked up to locate the ith logical sector among the
chaotic update block 704 and theoriginal block 702. - If the last written CBI sector turns out not to be anyone of the four CBI sectors of the given logical group, its indirect sector index is looked up to locate one of the four. In the example shown in
FIG. 16E , the CBI sector for subgroup C is located. Then this CBI sector for subgroup C has its direct sector index looked up to locate the exact CBI sector for the subgroup B. The example shows that when its chaotic block index is looked up, the ith logical sector is found to be unchanged and it valid data will be located in the original block. - Similar consideration applies to locating the jth logical sector in subgroup C of the given logical group. The example shows that the last written CBI sector turns out not to be any one of the four CBI sectors of the given logical group. Its indirect sector index points to one of the four CBI sectors for the given group. The last written of four pointed to also turns out to be exactly the CBI sector for the subgroup C. When its chaotic block index is looked up, the jth logical sector is found to be located at a designated location in the
chaotic update block 704. - A list of chaotic sectors exists in controller RAM for each chaotic update block in the system. Each list contains a record of sectors written in the chaotic update block since a related CBI sector was last updated in flash memory. The number of logical sector addresses for a specific chaotic update block, which can be held in a chaotic sector list, is a design parameter with a typical value of 8 to 16. The optimum size of the list is determined as a tradeoff between its effects on overhead for chaotic data-write operations and sector scanning time during initialization.
- During system initialization, each chaotic update block is scanned as necessary to identify valid sectors written since the previous update of one of its associated CBI sectors. A chaotic sector list in controller RAM for each chaotic update block is constructed. Each block need only be scanned from the last sector address defined in its chaotic block info field in the last written CBI sector.
- When a chaotic update block is allocated, a CBI sector is written to correspond to all updated logical sub-groups. The logical and physical addresses for the chaotic update block are written in an available chaotic block info field in the sector, with null entries in the chaotic block index field. A chaotic sector list is opened in controller RAM.
- When a chaotic update block is closed, a CBI sector is written with the logical and physical addresses of the block removed from the chaotic block info field in the sector. The corresponding chaotic sector list in RAM becomes unused.
- The corresponding chaotic sector list in controller RAM is modified to include records of sectors written to a chaotic update block. When a chaotic sector list in controller RAM has no available space for records of further sector writes to a chaotic update block, updated CBI sectors are written for logical sub-groups relating to sectors in the list, and the list is cleared.
- When the CBI block 620 becomes full, valid CBI sectors are copied to an allocated erased block, and the previous CBI block is erased.
- Address Tables
- The logical to physical
address translation module 140 shown in FIG. 2 is responsible for relating a host's logical address to a corresponding physical address in flash memory. The mapping between logical groups and physical groups (metablocks) is stored in a set of tables and lists distributed among the nonvolatile flash memory 200 and the volatile but more agile RAM 130 (see FIG. 1 ). An address table is maintained in flash memory, containing a metablock address for every logical group in the memory system. In addition, logical to physical address records for recently written sectors are temporarily held in RAM. These volatile records can be reconstructed from block lists and data sector headers in flash memory when the system is initialized after power-up. Thus, the address table in flash memory need be updated only infrequently, leading to a low percentage of overhead write operations for control data. - The hierarchy of address records for logical groups includes the open update block list and the closed update block list in RAM, and the group address table (GAT) maintained in flash memory.
- The open update block list is a list in controller RAM of data update blocks which are currently open for writing updated host sector data. The entry for a block is moved to the closed update block list when the block is closed. The closed update block list is a list in controller RAM of data update blocks which have been closed. A subset of the entries in the list is moved to a sector in the Group Address Table during a control write operation.
- The Group Address Table (GAT) is a list of metablock addresses for all logical groups of host data in the memory system. The GAT contains one entry for each logical group, ordered sequentially according to logical address. The nth entry in the GAT contains the metablock address for the logical group with address n. In the preferred embodiment, it is a table in flash memory, comprising a set of sectors (referred to as GAT sectors) with entries defining metablock addresses for every logical group in the memory system. The GAT sectors are located in one or more dedicated control blocks (referred to as GAT blocks) in flash memory.
-
FIG. 17A illustrates the data fields of a group address table (GAT) sector. A GAT sector may for example have sufficient capacity to contain GAT entries for a set of 128 contiguous logical groups. Each GAT sector includes two components, namely a set of GAT entries for the metablock address of each logical group within a range, and a GAT sector index. The first component contains information for locating the metablock associated with the logical address. The second component contains information for locating all valid GAT sectors within the GAT block. Each GAT entry has three fields, namely, the metablock number, the page tag as defined earlier in connection with FIG. 3A (iii), and a flag indicating whether the metablock has been relinked. The GAT sector index lists the positions of valid GAT sectors in a GAT block. This index is present in every GAT sector but is superseded by the version in the next written GAT sector in the GAT block. Thus only the version in the last written GAT sector is valid. -
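- For illustration, the two components of a GAT sector described above can be sketched as a simple data structure; the field and type names are assumptions introduced here, not terms from the specification.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class GATEntry:
    metablock_num: int   # physical metablock currently holding the logical group
    page_tag: int        # starting-sector offset, as defined in connection with FIG. 3A(iii)
    relinked: bool       # set if the metablock has been re-linked

@dataclass
class GATSector:
    first_group: int                                            # first of 128 contiguous logical groups covered
    entries: List[GATEntry] = field(default_factory=list)       # one entry per logical group in the range
    sector_index: Dict[int, int] = field(default_factory=dict)  # GAT sector number -> position in the GAT block
```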
FIG. 17B illustrates an example of the group address table (GAT) sectors being recorded in one or more GAT blocks. A GAT block is a metablock dedicated to recording GAT sectors. When a GAT sector is updated, it is written in the next available physical sector location in the GAT block 720. Multiple copies of a GAT sector may therefore exist in the GAT block, with only the last written copy being valid. For example, the GAT sector 255 (containing pointers for the logical groups LG3968-LG4098) has been updated at least twice, with the latest version being the valid one. The location of each valid sector in the GAT block is identified by a set of indices in the last written GAT sector in the block. In this example, the last written GAT sector in the block is GAT sector 236 and its set of indices is the valid one, superseding all previous ones. When the GAT block eventually becomes fully filled with GAT sectors, the block is compacted during a control write operation by rewriting all valid sectors to a new block location. The full block is then erased. - As described earlier, a GAT block contains entries for a logically contiguous set of groups in a region of logical address space. GAT sectors within a GAT block each contain logical to physical mapping information for 128 contiguous logical groups. The GAT sectors required to store entries for all logical groups within the address range spanned by a GAT block occupy only a fraction of the total sector positions in the block. A GAT sector may therefore be updated by writing it at the next available sector position in the block. An index of all valid GAT sectors and their position in the GAT block is maintained in an index field in the most recently written GAT sector. The fraction of the total sectors in a GAT block occupied by valid GAT sectors is a system design parameter, which is typically 25%. However, there is a maximum of 64 valid GAT sectors per GAT block. In systems with large logical capacity, it may be necessary to store GAT sectors in more than one GAT block. In this case, each GAT block is associated with a fixed range of logical groups.
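- A minimal sketch of this append-and-compact behavior, assuming GAT sector objects like the GATSector sketch above; the method names and the 64-sector capacity default are illustrative.

```python
class GATBlock:
    """Append-only log of GAT sectors; only the last written copy of each GAT sector is valid."""
    def __init__(self, capacity_sectors=64):
        self.slots = []                   # GAT sector copies in the order they were written
        self.capacity = capacity_sectors

    def valid_index(self):
        """The index carried by the last written GAT sector supersedes all earlier versions."""
        return dict(self.slots[-1].sector_index) if self.slots else {}

    def write_sector(self, gat_sector):
        if len(self.slots) == self.capacity:                     # block full: rewrite valid sectors elsewhere
            self.compact()
        index = self.valid_index()
        index[gat_sector.first_group // 128] = len(self.slots)   # position of this new copy
        gat_sector.sector_index = index
        self.slots.append(gat_sector)

    def compact(self):
        """Keep only the last written copy of each GAT sector, as in a GAT rewrite to a fresh block."""
        keep = [self.slots[pos] for pos in sorted(set(self.valid_index().values()))]
        self.slots = keep
        if keep:  # refresh the index in the most recent copy to reflect the new positions
            keep[-1].sector_index = {s.first_group // 128: i for i, s in enumerate(keep)}
```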
- A GAT update is performed as part of a control write operation, which is triggered when the ABL runs out of blocks for allocation (see
FIG. 18 .) It is performed concurrently with ABL fill and CBL empty operations. During a GAT update operation, one GAT sector has entries updated with information from corresponding entries in the closed update block list. When a GAT entry is updated, any corresponding entries are removed from the closed update block list (CUBL). For example, the GAT sector to be updated is selected on the basis of the first entry in the closed update block list. The updated sector is written to the next available sector location in the GAT block. - A GAT rewrite operation occurs during a control write operation when no sector location is available for an updated GAT sector. A new GAT block is allocated, and valid GAT sectors as defined by the GAT index are copied in sequential order from the full GAT block. The full GAT block is then erased.
- A GAT cache is a copy in
controller RAM 130 of entries in a subdivision of the 128 entries in a GAT sector. The number of GAT cache entries is a system design parameter, with typical value 32. A GAT cache for the relevant sector subdivision is created each time an entry is read from a GAT sector. Multiple GAT caches are maintained. The number is a design parameter with a typical value of 4. A GAT cache is overwritten with entries for a different sector subdivision on a least-recently-used basis. - Erased Metablock Management
- The erase
block manager 160 shown inFIG. 2 manages erase blocks using a set of lists for maintaining directory and system control information. These lists are distributed among thecontroller RAM 130 andflash memory 200. When an erased metablock must be allocated for storage of user data, or for storage of system control data structures, the next available metablock number in the allocation block list (ABL) (seeFIG. 15 ) held in controller RAM is selected. Similarly, when a metablock is erased after it has been retired, its number is added to a cleared block list (CBL) also held in controller RAM. Relatively static directory and system control data are stored in flash memory. These include erased block lists and a bitmap (MAP) listing the erased status of all metablocks in the flash memory. The erased block lists and MAP are stored in individual sectors and are recorded to a dedicated metablock, known as a MAP block. These lists, distributed among the controller RAM and flash memory, provide a hierarchy of erased block records to efficiently manage erased metablock usage. -
FIG. 18 is a schematic block diagram illustrating the distribution and flow of the control and directory information for usage and recycling of erased blocks. The control and directory data are maintained in lists which are held either incontroller RAM 130 or in aMAP block 750 residing inflash memory 200. - In the preferred embodiment, the
controller RAM 130 holds the allocation block list (ABL) 610 and a cleared block list (CBL) 740. As described earlier in connection with FIG. 15 , the allocation block list (ABL) keeps track of which metablocks have recently been allocated for storage of user data, or for storage of system control data structures. When a new erased metablock need be allocated, the next available metablock number in the allocation block list (ABL) is selected. Similarly, the cleared block list (CBL) is used to keep track of update metablocks that have been de-allocated and erased. The ABL and CBL are held in controller RAM 130 (see FIG. 1 ) for speedy access and easy manipulation when tracking the relatively active update blocks. - The allocation block list (ABL) keeps track of a pool of erased metablocks and the allocation of erased metablocks to serve as update blocks. Thus, each of these metablocks may be described by an attribute designating whether it is an erased block in the ABL pending allocation, an open update block, or a closed update block.
FIG. 18 shows the ABL containing an erased ABL list 612, the open update block list 614 and the closed update block list 616. In addition, associated with the open update block list 614 is the associated original block list 615. Similarly, associated with the closed update block list is the associated erased original block list 617. As shown previously in FIG. 15 , these associated lists are subsets of the open update block list 614 and the closed update block list 616 respectively. The erased ABL block list 612, the open update block list 614, and the closed update block list 616 are all subsets of the allocation block list (ABL) 610, the entries in each having respectively the corresponding attribute.
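- As an illustrative sketch only, the attribute-based view of the ABL can be expressed as follows; the attribute names and example block numbers are hypothetical.

```python
from enum import Enum

class ABLAttribute(Enum):
    ERASED_ABL = "erased block in the ABL pending allocation"
    OPEN_UPDATE = "open update block"
    CLOSED_UPDATE = "closed update block"

# The ABL as a whole, with each sub-list derived by filtering on the entry attribute.
abl = [(0x120, ABLAttribute.ERASED_ABL),
       (0x087, ABLAttribute.OPEN_UPDATE),
       (0x042, ABLAttribute.CLOSED_UPDATE)]
erased_abl_list          = [blk for blk, attr in abl if attr is ABLAttribute.ERASED_ABL]
open_update_block_list   = [blk for blk, attr in abl if attr is ABLAttribute.OPEN_UPDATE]
closed_update_block_list = [blk for blk, attr in abl if attr is ABLAttribute.CLOSED_UPDATE]
```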
- The MAP block 750 is a metablock dedicated to storing erase management records in flash memory 200. The MAP block stores a time series of MAP block sectors, with each MAP sector being either an erase block management (EBM) sector 760 or a MAP sector 780. As erased blocks are used up in allocation and recycled when a metablock is retired, the associated control and directory data is preferably contained in a logical sector which may be updated in the MAP block, with each instance of update data being recorded to a new block sector. Multiple copies of EBM sectors 760 and MAP sectors 780 may exist in the MAP block 750, with only the latest version being valid. An index to the positions of valid MAP sectors is contained in a field in the EBM sector. A valid EBM sector is always written last in the MAP block during a control write operation. When the MAP block 750 is full, it is compacted during a control write operation by rewriting all valid sectors to a new block location. The full block is then erased. - Each
EBM sector 760 contains erased block lists (EBL) 770, which are lists of addresses of a subset of the population of erased blocks. The erased block lists (EBL) 770 act as a buffer containing erased metablock numbers, from which metablock numbers are periodically taken to re-fill the ABL, and to which metablock numbers are periodically added to re-empty the CBL. The EBL 770 comprises the available block buffer (ABB) 772, the erased block buffer (EBB) 774 and the cleared block buffer (CBB) 776. - The available block buffer (ABB) 772 contains a copy of the entries in the
ABL 610 immediately following the previous ABL fill operation. It is in effect a backup copy of the ABL just after an ABL fill operation. - The erased block buffer (EBB) 774 contains erased block addresses which have been previously transferred either from
MAP sectors 780 or from the CBB list 776 (described below), and which are available for transfer to theABL 610 during an ABL fill operation. - The cleared block buffer (CBB) 776 contains addresses of erased blocks which have been transferred from the
CBL 740 during a CBL empty operation and which will be subsequently transferred toMAP sectors 780 or to theEBB list 774. - Each of the
MAP sectors 780 contains a bitmap structure referred to as MAP. The MAP uses one bit for each metablock in flash memory, which is used to indicate the erase status of each block. Bits corresponding to block addresses listed in the ABL, CBL, or erased block lists in the EBM sector are not set to the erased state in the MAP. - Any block which does not contain valid data structures and which is not designated as an erased block within the MAP, erased block lists, ABL or CBL is never used by the block allocation algorithm and is therefore inaccessible for storage of host or control data structures. This provides a simple mechanism for excluding blocks with defective locations from the accessible flash memory address space.
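- As a simple illustration of the MAP bitmap and the exclusion rule described above (the class, helper names and argument layout are assumptions, not the specification's data format):

```python
class MapBitmap:
    """One bit per metablock; a set bit marks the block as erased in a MAP sector."""
    def __init__(self, num_metablocks):
        self.bits = bytearray((num_metablocks + 7) // 8)

    def set_erased(self, block, erased=True):
        byte, mask = block // 8, 1 << (block % 8)
        if erased:
            self.bits[byte] |= mask
        else:
            self.bits[byte] &= ~mask

    def is_erased(self, block):
        return bool(self.bits[block // 8] & (1 << (block % 8)))

def mark_erased_blocks(bitmap, erased_blocks, abl, cbl, ebl):
    """Blocks already listed in the ABL, CBL or erased block lists are not flagged in the MAP."""
    tracked = set(abl) | set(cbl) | set(ebl)
    for block in erased_blocks:
        bitmap.set_erased(block, erased=(block not in tracked))
```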
- The hierarchy shown in
FIG. 18 allows erased block records to be managed efficiently and provides full security of the block address lists stored in the controller's RAM. Erased block entries are exchanged between these block address lists and one or more MAP sectors 780, on an infrequent basis. These lists may be reconstructed during system initialization after a power-down, via information in the erased block lists and address translation tables stored in sectors in flash memory, and limited scanning of a small number of referenced data blocks in flash memory. - The algorithms adopted for updating the hierarchy of erased metablock records result in erased blocks being allocated for use in an order which interleaves bursts of blocks in address order from the MAP block 750 with bursts of block addresses from the
CBL 740 which reflect the order blocks were updated by the host. For most metablock sizes and system memory capacities, a single MAP sector can provide a bitmap for all metablocks in the system. In this case, erased blocks are always allocated for use in address order as recorded in this MAP sector. - Erase Block Management Operations
- As described earlier, the
ABL 610 is a list with address entries for erased metablocks which may be allocated for use, and metablocks which have recently been allocated as data update blocks. The actual number of block addresses in the ABL lies between maximum and minimum limits, which are system design variables. The number of ABL entries formatted during manufacturing is a function of the card type and capacity. In addition, the number of entries in the ABL may be reduced near the end of life of the system, as the number of available erased blocks is reduced by failure of blocks during life. For example, after a fill operation, entries in the ABL may designate blocks available for the following purposes: entries for partially written data update blocks, with one entry per block, not exceeding a system limit for the maximum number of concurrently opened update blocks; between one and twenty entries for erased blocks for allocation as data update blocks; and four entries for erased blocks for allocation as control blocks. - ABL Fill Operation
- As the
ABL 610 becomes depleted through allocations, it will need to be refilled. An operation to fill the ABL occurs during a control write operation. This is triggered when a block must be allocated, but the ABL contains insufficient erased block entries available for allocation as a data update block, or for some other control data update block. During a control write, the ABL fill operation is concurrent with a GAT update operation. - The following actions occur during an ABL fill operation.
- 1. ABL entries with attributes of current data update blocks are retained.
- 2. ABL entries with attributes of closed data update blocks are retained, unless an entry for the block is being written in the concurrent GAT update operation, in which case the entry is removed from the ABL.
- 3. ABL entries for unallocated erase blocks are retained.
- 4. The ABL is compacted to remove gaps created by removal of entries, maintaining the order of entries.
- 5. The ABL is completely filled by appending the next available entries from the EBB list.
- 6. The ABB list is over-written with the current entries in the ABL.
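- Taken together, the six actions above can be sketched as follows; the list representations, the attribute tags and the ABL size limit are illustrative assumptions, not the specification's data structures.

```python
ABL_MAX_ENTRIES = 24   # hypothetical system limit on ABL size

def abl_fill(abl, ebb, abb, gat_updated_blocks):
    """Sketch of the ABL fill actions 1-6; 'abl' holds (block_address, attribute) tuples."""
    kept = []
    for addr, attr in abl:
        if attr == "open_update":                       # 1. current data update blocks retained
            kept.append((addr, attr))
        elif attr == "closed_update":                   # 2. retained unless written to the GAT concurrently
            if addr not in gat_updated_blocks:
                kept.append((addr, attr))
        elif attr == "erased_abl":                      # 3. unallocated erased blocks retained
            kept.append((addr, attr))
    # 4. compaction is implicit: 'kept' contains no gaps and preserves the original order
    while len(kept) < ABL_MAX_ENTRIES and ebb:          # 5. fill up from the next available EBB entries
        kept.append((ebb.pop(0), "erased_abl"))
    abl[:] = kept
    abb[:] = [addr for addr, _ in abl]                  # 6. ABB overwritten with the current ABL entries
    return abl
```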
- CBL Empty Operation
- The CBL is a list of erased block addresses in controller RAM with the same limitation on the number of erased block entries as the ABL. An operation to empty the CBL occurs during a control write operation. It is therefore concurrent with an ABL fill/GAT update operations, or CBI block write operations. In a CBL empty operation, entries are removed from the
CBL 740 and written to theCBB list 776. - MAP Exchange Operation
- A MAP exchange operation between the erase block information in the
MAP sectors 780 and theEBM sectors 760 may occur periodically during a control write operation, when theEBB list 774 is empty. If all erased metablocks in the system are recorded in theEBM sector 760, noMAP sector 780 exists and no MAP exchange is performed. During a MAP exchange operation, a MAP sector feeding theEBB 774 with erased blocks is regarded as asource MAP sector 782. Conversely, a MAP sector receiving erased blocks from theCBB 776 is regarded as adestination MAP sector 784. If only one MAP sector exists, it acts as both source and destination MAP sector, as defined below. - The following actions are performed during a MAP exchange.
- 1. A source MAP sector is selected, on the basis of an incremental pointer.
- 2. A destination MAP sector is selected, on the basis of the block address in the first CBB entry that is not in the source MAP sector.
- 3. The destination MAP sector is updated, as defined by relevant entries in the CBB, and the entries are removed from the CBB.
- 4. The updated destination MAP sector is written in the MAP block, unless no separate source MAP sector exists.
- 5. The source MAP sector is updated, as defined by relevant entries in the CBB, and the entries are removed from the CBB.
- 6. Remaining entries in the CBB are appended to the EBB.
- 7. The EBB is filled to the extent possible with erased block addresses defined from the source MAP sector.
- 8. The updated source MAP sector is written in the MAP block.
- 9. An updated EBM sector is written in the MAP block.
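- The nine actions above can be modeled roughly as follows. This is an illustrative-only sketch: each MAP sector is assumed to expose a block-address range and a set of erased blocks, and the selection and write-back details are simplified.

```python
def map_exchange(map_sectors, ebb, cbb, src_index, ebb_capacity=64):
    """Rough model of a MAP exchange between the CBB/EBB lists and MAP sectors."""
    src = map_sectors[src_index]                                # 1. source chosen by incremental pointer
    first = next((b for b in cbb if b not in src.range), None)
    dst = next((s for s in map_sectors if first is not None and first in s.range), src)  # 2. destination
    for block in list(cbb):
        if block in dst.range:                                  # 3./4. destination updated from the CBB
            dst.erased.add(block)
            cbb.remove(block)
    for block in list(cbb):
        if block in src.range:                                  # 5. source updated from the CBB
            src.erased.add(block)
            cbb.remove(block)
    ebb.extend(cbb)                                             # 6. remaining CBB entries appended to the EBB
    cbb.clear()
    while len(ebb) < ebb_capacity and src.erased:               # 7. EBB filled from the source sector
        ebb.append(src.erased.pop())
    # 8./9. the updated source MAP sector and an updated EBM sector are then written to the MAP block
```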
- List Management
-
FIG. 18 shows the distribution and flow of the control and directory information between the various lists. For expediency, operations to move entries between elements of the lists or to change the attributes of entries, identified inFIG. 18 as [A] to [O], are as follows. - [A] When an erased block is allocated as an update block for host data, the attributes of its entry in the ABL are changed from Erased ABL Block to Open Update Block.
- [B] When an erased block is allocated as a control block, its entry in the ABL is removed.
- [C] When an ABL entry is created with Open Update Block attributes, an Associated Original Block field is added to the entry to record the original metablock address for the logical group being updated. This information is obtained from the GAT.
- [D] When an update block is closed, the attributes of its entry in the ABL are changed from Open Update Block to Closed Update Block.
- [E] When an update block is closed, its associated original block is erased and the attributes of the Associated Original Block field in its entry in the ABL are changed to Erased Original Block.
- [F] During an ABL fill operation, any closed update block whose address is updated in the GAT during the same control write operation has its entry removed from the ABL.
- [G] During an ABL fill operation, when an entry for a closed update block is removed from the ABL, an entry for its associated erased original block is moved to the CBL.
- [H] When a control block is erased, an entry for it is added to the CBL.
- [I] During an ABL fill operation, erased block entries are moved to the ABL from the EBB list, and are given attributes of Erased ABL Blocks.
- [J] After modification of all relevant ABL entries during an ABL fill operation, the block addresses in the ABL replace the block addresses in the ABB list.
- [K] Concurrently with an ABL fill operation during a control write, entries for erased blocks in the CBL are moved to the CBB list.
- [L] During a MAP exchange operation, all relevant entries are moved from the CBB list to the MAP destination sector.
- [M] During a MAP exchange operation, all relevant entries are moved from the CBB list to the MAP source sector.
- [N] Subsequent to [L] and [M] during a MAP exchange operation, all remaining entries are moved from the CBB list to the EBB list.
- [O] Subsequent to [N] during a MAP exchange operation, entries other than those moved in [M] are moved from the MAP source sector to fill the EBB list, if possible.
Logical to Physical Address Translation - To locate a logical sector's physical location in flash memory, the logical to physical
address translation module 140 shown inFIG. 2 performs a logical to physical address translation. Except for those logical groups that have recently been updated, the bulk of the translation could be performed using the group address table (GAT) residing in theflash memory 200 or the GAT cache incontroller RAM 130. Address translations for the recently updated logical groups will require looking up address lists for update blocks which reside mainly incontroller RAM 130. The process for logical to physical address translation for a logical sector address is therefore dependent on the type of block associated with the logical group within which the sector is located. The types of blocks are: intact block, sequential data update block, chaotic data update block, closed data update block. -
FIG. 19 is a flow chart showing the process of logical to physical address translation. Essentially, the corresponding metablock and physical sector are located by first using the logical sector address to look up the various update directories, such as the open update block list and the closed update block list. If the associated metablock is not part of an update process, then directory information is provided by the GAT. The logical to physical address translation includes the following steps (an illustrative code sketch of the overall dispatch follows STEP 890 below): - STEP 800: A logical sector address is given.
- STEP 810: Look up given logical address in the open update blocks list 614 (see
FIGS. 15 and 18 ) in controller RAM. If lookup fails, proceed to STEP 820, otherwise proceed to STEP 830. - STEP 820: Look up given logical address in the closed
update block list 616. If lookup fails, the given logical address is not part of any update process; proceed to STEP 870 for GAT address translation. Otherwise proceed to STEP 860 for closed update block address translation. - STEP 830: If the update block containing the given logical address is sequential, proceed to STEP 840 for sequential update block address translation. Otherwise proceed to STEP 850 for chaotic update block address translation.
- STEP 840: Obtain the metablock address using sequential update block address translation. Proceed to STEP 880.
- STEP 850: Obtain the metablock address using chaotic update block address translation. Proceed to STEP 880.
- STEP 860: Obtain the metablock address using closed update block address translation. Proceed to STEP 880.
- STEP 870: Obtain the metablock address using group address table (GAT) translation. Proceed to STEP 880.
- STEP 880: Convert the Metablock Address to a physical address. The translation method depends on whether the metablock has been relinked.
- STEP 890: Physical sector address obtained.
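- The flow above amounts to a dispatch on which directory, if any, currently covers the target logical group. A minimal sketch is given below, assuming a hypothetical controller context object that exposes the RAM lists and one method per translation routine described in the following subsections.

```python
def translate(logical_sector, ctx):
    """Dispatch mirroring STEPs 800-890; all attribute and method names on 'ctx' are illustrative."""
    group = logical_sector // ctx.sectors_per_group
    entry = ctx.open_update_blocks.get(group)                              # STEP 810
    if entry is not None:
        if entry.is_sequential:                                            # STEP 830
            meta = ctx.sequential_update_translation(logical_sector, entry)   # STEP 840
        else:
            meta = ctx.chaotic_update_translation(logical_sector, entry)      # STEP 850
    elif group in ctx.closed_update_blocks:                                # STEP 820
        meta = ctx.closed_update_translation(logical_sector)               # STEP 860
    else:
        meta = ctx.gat_translation(logical_sector)                         # STEP 870
    return ctx.metablock_to_physical(meta)                                 # STEP 880 -> STEP 890
```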
- The various address translation processes are described in more detail as follows:
- Sequential Update Block Address Translation (STEP 840)
- Address translation for a target logical sector address in a logical group associated with a sequential update block can be accomplished directly from information in the open update block list 614 (
FIGS. 15 and 18 ), as follows. - 1. It is determined from the “page tag” and “number of sectors written” fields in the list whether the target logical sector is located in the update block or its associated original block.
- 2. The metablock address appropriate to the target logical sector is read from the list.
- 3. The sector address within the metablock is determined from the appropriate “page tag” field.
Chaotic Update Block Address Translation (STEP 850) - The address translation sequence for a target logical sector address in a logical group associated with a chaotic update block is as follows.
- 1. If it is determined from the chaotic sector list in RAM that the sector is a recently written sector, address translation may be accomplished directly from its position in this list.
- 2. The most recently written sector in the CBI block contains, within its chaotic block data field, the physical address of the chaotic update block relevant to the target logical sector address. It also contains, within its indirect sector index field, the offset within the CBI block of the last written CBI sector relating to this chaotic update block (see
FIGS. 16A-16E ). - 3. The information in these fields is cached in RAM, eliminating the need to read the sector during subsequent address translation.
- 4. The CBI sector identified by the indirect sector index field at
step 3 is read. - 5. The direct sector index field for the most recently accessed chaotic update sub-group is cached in RAM, eliminating the need to perform the read at
step 4 for repeated accesses to the same chaotic update block. - 6. The direct sector index field read at
step 4 orstep 5 identifies in turn the CBI sector relating to the logical sub-group containing the target logical sector address. - 7. The chaotic block index entry for the target logical sector address is read from the CBI sector identified in
step 6. - 8. The most recently read chaotic block index field may be cached in controller RAM, eliminating the need to perform the reads at
step 4 andstep 7 for repeated accesses to the same logical sub-group. - 9. The chaotic block index entry defines the location of the target logical sector either in the chaotic update block or in the associated original block. If the valid copy of the target logical sector is in the original block, it is located by use of the original metablock and page tag information.
- Closed Update Block Address Translation (STEP 860)
- Address translation for a target logical sector address in a logical group associated with a closed update block can be accomplished directly from information in the closed block update list (see
FIG. 18 ), as follows. - 1. The metablock address assigned to the target logical group is read from the list.
- 2. The sector address within the metablock is determined from the “page tag” field in the list.
- GAT Address Translation (STEP 870)
- If a logical group is not referenced by either the open or closed block update lists, its entry in the GAT is valid. The address translation sequence for a target logical sector address in a logical group referenced by the GAT is as follows.
- 1. The ranges of the available GAT caches in RAM are evaluated to determine if an entry for the target logical group is contained in a GAT cache.
- 2. If the target logical group is found in
step 1, the GAT cache contains full group address information, including both metablock address and page tag, allowing translation of the target logical sector address. - 3. If the target address is not in a GAT cache, the GAT index must be read for the target GAT block, to identify the location of the GAT sector relating to the target logical group address.
- 4. The GAT index for the last accessed GAT block is held in controller RAM, and may be accessed without need to read a sector from flash memory.
- 5. A list of metablock addresses for every GAT block, and the number of sectors written in each GAT block, is held in controller RAM. If the required GAT index is not available at
step 4, it may therefore be read immediately from flash memory. - 6. The GAT sector relating to the target logical group address is read from the sector location in the GAT block defined by the GAT index obtained at
step 4 orstep 6. A GAT cache is updated with the subdivision of the sector containing the target entry. - 7. The target sector address is obtained from the metablock address and “page tag” fields within the target GAT entry.
- Metablock to Physical Address Translation (STEP 880)
- If a flag associated with the metablock address indicates that the metablock has been re-linked, the relevant LT sector is read from the BLM block, to determine the erase block address for the target sector address. Otherwise, the erase block address is determined directly from the metablock address.
- Control Data Management
-
FIG. 20 illustrates the hierarchy of the operations performed on control data structures in the course of the operation of the memory management. Data Update Management Operations act on the various lists that reside in RAM. Control write operations act on the various control data sectors and dedicated blocks in flash memory and also exchange data with the lists in RAM. - Data update management operations are performed in RAM on the ABL, the CBL and the chaotic sector list. The ABL is updated when an erased block is allocated as an update block or a control block, or when an update block is closed. The CBL is updated when a control block is erased or when an entry for a closed update block is written to the GAT. The update chaotic sector list is updated when a sector is written to a chaotic update block.
- A control write operation causes information from control data structures in RAM to be written to control data structures in flash memory, with consequent update of other supporting control data structures in flash memory and RAM, if necessary. It is triggered either when the ABL contains no further entries for erased blocks to be allocated as update blocks, or when the CBI block is rewritten.
- In the preferred embodiment, the ABL fill operation, the CBL empty operation and the EBM sector update operation are performed during every control write operation. When the MAP block containing the EBM sector becomes full, valid EBM and MAP sectors are copied to an allocated erased block, and the previous MAP block is erased.
- One GAT sector is written, and the Closed Update Block List is modified accordingly, during every control write operation. When a GAT block becomes full, a GAT rewrite operation is performed.
- A CBI sector is written, as described earlier, after certain chaotic sector write operations. When the CBI block becomes full, valid CBI sectors are copied to an allocated erased block, and the previous CBI block is erased.
- A MAP exchange operation, as described earlier, is performed when there are no further erased block entries in the EBB list in the EBM sector.
- A MAP Address (MAPA) sector, which records the current address of the MAP block, is written in a dedicated MAPA block on each occasion the MAP block is rewritten. When the MAPA block becomes full, the valid MAPA sector is copied to an allocated erased block, and the previous MAPA block is erased.
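- As an orienting sketch of how these pieces fit together during a single control write (the method names on the hypothetical controller object are illustrative; the actual triggers and ordering are as described in the surrounding paragraphs):

```python
def control_write(ctx):
    """Approximate outline of one control write operation."""
    ctx.abl_fill()                       # refill the ABL from the EBB
    ctx.cbl_empty()                      # move CBL entries into the CBB
    ctx.write_ebm_sector()               # EBM sector updated on every control write
    if ctx.map_block_full():
        ctx.rewrite_map_block()          # copy valid EBM/MAP sectors to an erased block, erase the old one
        ctx.write_mapa_sector()          # MAPA sector records the new MAP block address
        if ctx.mapa_block_full():
            ctx.rewrite_mapa_block()
            ctx.write_boot_sector()      # Boot sector rewritten whenever the MAPA block is rewritten
    ctx.write_gat_sector()               # one GAT sector written per control write
    if ctx.gat_block_full():
        ctx.gat_rewrite()
    if ctx.ebb_empty():
        ctx.map_exchange()
```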
- A Boot sector is written in a current Boot block on each occasion the MAPA block is rewritten. When the boot block becomes full, the valid Boot sector is copied from the current version of the Boot block to the backup version, which then becomes the current version. The previous current version is erased and becomes the backup version, and the valid Boot sector is written back to it.
- Alignment for Memory Distributed Over Multiple Memory Planes
- As described earlier in connection with
FIG. 4 and FIGS. 5A-5C , in order to increase performance, multiple memory planes are operated in parallel. Basically each plane has its own set of sense amplifiers as part of read and program circuits to service in parallel a corresponding page of memory cells that spans the plane. When multiple planes are combined, multiple pages may be operated on in parallel, resulting in even greater performance. - According to another aspect of the invention, for a memory array organized into erasable blocks and constituted from multiple memory planes so that logical units can be read or programmed into the multiple planes in parallel, when an original logical unit of a first block stored in a given memory plane is to be updated, provisions are made to keep the updated logical unit in the same plane as the original. This is accomplished by recording the updated logical unit to the next available location of a second block but in the same plane. Preferably, the logical unit is stored with an identical offset position in the plane as its other versions so that all versions of a given logical unit are serviced by an identical set of sensing circuits.
- In a preferred embodiment, any intervening gaps from the last programmed memory unit to the next available plane-aligned memory unit are padded accordingly with the current versions of logical units. The padding is accomplished by filling in a gap with current versions of the logical units that follow logically from the last programmed logical unit, and with current versions of the logical units that precede logically from the logical unit stored in the next available plane-aligned memory unit.
- In this way, all versions of a logical unit are maintained in the same plane with the same offsets as the original so that, in a garbage collection operation, the latest version of the logical unit need not be retrieved from a different plane, which would reduce performance. In a preferred embodiment, each memory unit across each plane is either updated or padded with the latest versions. Thus in a parallel operation across the multiple planes, the logical units will be in logical sequential order without the need for further rearrangement.
- This scheme reduces the time for consolidation of a chaotic block by allowing on-plane rearrangement of the latest versions of the logical units of a logical group and avoiding having to gather the latest versions from different memory planes. This is of benefit where the performance specification for the host interface defines a maximum latency for completion of a sector write operation by the memory system.
-
FIG. 21 illustrates a memory array constituted from multiple memory planes. The memory planes could be from the same memory chip or from multiple memory chips. Eachplane 910 has its own set of read andprogram circuits 912 to service apage 914 of memory cells in parallel. Without loss of generality, in the example shown, the memory array has four planes operating in parallel. - Generally, a logical unit is a minimum unit of access by a host system. Typically a logical unit is a sector of size 512 bytes. A page is a maximum unit of parallel read or program in a plane. Typically a logical page contains one or more logical units. Therefore, when combining multiple planes, a maximum aggregated unit of parallel read or program could be regarded as a metapage of memory cells, where the metapage is constituted by a page from each of the multiple planes. For example, a metapage such as MP0 has four pages, one from each of the planes, P0, P1, P2 and P3, storing in parallel logical pages LP0, LP1, LP2, LP3. Thus, the read and write performance of the memory is increased four fold as compared to operating only in one plane.
- The memory array is further organized into metablocks such as MB0, . . . , MBj where all memory cells within each metablock are erasable together as a unit. A metablock such as MB0 is constituted from multiple memory locations, for storing
logical pages 914 of data, such as LP0-LPN-1. The logical pages in a metablock are distributed over the four planes P0, P1, P2 and P3 in a predetermined sequence according to the order they are filled in the metablock. For example, when the logical pages are filled in logically sequential order, the planes are visited in cyclic order with the first page in the first plane, the second page in the second plane, etc. After the last plane is reached, the filling returns cyclically to start from the first plane again in the next metapage. In this way, a contiguous run of logical pages can be accessed in parallel when all planes are operated in parallel. - In general, if there are W planes operating in parallel and the metablock is filled in logically sequential order, the kth logical page in a metablock would reside in plane x where x=k MOD W. For example, with four planes, W=4, and when filling the block in logical sequential order, the 5th logical page LP5 would reside in a plane given by 5
MOD 4, which isplane 1, as can be seen inFIG. 21 . - The memory operations in each memory plane are performed by the set of read/
write circuits 912. Data in and out of each of the read/write circuits is transferred through adata bus 930 under the control of acontroller 920. Abuffer 922 in thecontroller 920 helps to buffer the transfer of data via thedata bus 930. In particular when operations in a first plane require access to data in a second plane, a two-step process is required. The data is first read out from the second plane by the controller and then transferred to the first plane via the data bus and the buffer. In fact, in most memory architectures, transfer of data between two different bit lines also requires the data to be exchanged through thedata bus 920. - At the very least, this involved transfer out of one set of read/write circuits in one plane and into another set of read/write circuits in another plane. In the case where the planes are from different chips, it will require transfer between chips. The present invention provides structures and schemes for memory block management such that access of data by one plane from another is avoided in order to maximize performance.
- As shown in
FIG. 21 , a metapage is formed by multiple logical pages, one in each plane. Each logical page may consist of one or more logical units. As data are being recorded logical unit by logical unit into a block across the planes, each logical unit will fall in one of the four memory planes. - The issue of plane alignment arises when a logical unit is updated. In the current example, for the purpose of illustration, a logical unit is taken as a logical sector of 512 byte and a logical page is also a logical unit wide. Since flash memory does not allow rewrite of a portion of a block without first erasing the whole block, updates to a logical page is not written over the existing location but recorded in an unused location of the block. The previous version(s) of the logical unit is then deemed obsolete. After a number of updates, a block may contain a number of logical units that have since been updated and thus become obsolete. The block is then said to be “dirty” and a garbage collection operation will ignore the dirty logical units but collect the latest version of each individual logical units and re-record them in logically sequential order in one or more new blocks. The dirty block is then erased and recycled.
- When the updated logical unit is recorded in the next unused location in a block, it will generally not be recorded in the same memory plane as its previous versions. When it comes to a garbage collection operation, such as a consolidation or compaction, the latest version of a logical unit would be re-recorded in the same plane as the original to maintain the original order. However, performance would degrade if the latest version has to be retrieved from another plane.
- Thus, according to another aspect of the invention, when an original logical unit of a first block stored in a given plane is to be updated, provisions are made to keep the updated logical unit in the same plane as the original. This is accomplished by recording the updated logical unit to the next available location of a second block but in the same plane. In a preferred embodiment, any intervening gaps from the last programmed memory unit to the next available plane-aligned memory unit are padded (i.e., filled in by copying) with the current versions of logical units in the same relative location as the original logical units in the original block.)
-
FIG. 22A is a flow diagram illustrating a method of update with plane alignment, according to a general implementation of the invention. - STEP 950: In a nonvolatile memory organized into blocks, each block partitioned into memory units that are erasable together, each memory unit for storing a logical unit of data.
- STEP 952: Constituting the memory from multiple memory planes, each plane having a set of sensing circuits for servicing a memory page in parallel, said memory page containing one or more memory unit.
- STEP 954: Storing a first version of logical units among memory units of a first block according to a first order, each first version logical unit being stored within one of the memory planes.
- STEP 956: Storing subsequent versions of logical units to a second block according to a second order different from the first, each subsequent version being stored in a next available memory unit in the same plane as the first version, such that all versions of a logical unit are accessible from the same plane by the same set of sensing circuits.
-
FIG. 22B illustrates a preferred embodiment of the step of storing updates in the flow diagram shown inFIG. 22A . -
STEP 956′ includesSTEP 957,STEP 958 andSTEP 959. - STEP 957: Partitioning each block into metapages, each metapage being constituted from a page of each plane. This step can be performed prior to any of the storing steps.
- STEP 958: Storing subsequent versions of logical units to a second block according to a second order different from the first, each subsequent version being storing in a next available memory unit having identical offset in a metapage as that of the first version
- STEP 959: Concurrently with storing subsequent versions of logical units, padding metapage by metapage any unused memory units preceding said next available memory unit by copying current versions of logical units according to the first order.
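- A toy model of this plane-aligned update-with-padding rule is sketched below (anticipating the sequential example of FIG. 24A). It assumes a logical sector's home plane is its original offset modulo the plane count and only pads positions preceding the update within a metapage; the chaotic cases of FIG. 24C involve additional trailing-gap padding not modeled here.

```python
def plane_aligned_appends(host_writes, num_planes=4):
    """Append each updated sector at its home plane, padding skipped plane positions."""
    update_block = []                                        # (sector, tag) tuples in program order
    for sector in host_writes:
        home_plane = sector % num_planes                     # plane of the sector's original version
        while len(update_block) % num_planes != home_plane:
            gap_plane = len(update_block) % num_planes
            pad_sector = sector - (home_plane - gap_plane)   # preceding sector of the same metapage
            update_block.append((pad_sector, "pad"))         # copy of the current version (padding)
        update_block.append((sector, "update"))              # plane-aligned programming of the update
    return update_block

# Host writes LS5'-LS8' then LS9'-LS12': LS4 is padded ahead of LS5', and every update lands
# in the same plane (position mod 4) as its original copy.
print(plane_aligned_appends([5, 6, 7, 8, 9, 10, 11, 12]))
```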
-
FIG. 23A illustrates an example of logical units being written in sequential order to a sequential update block without regard to plane alignment. The example shows each logical page being the size of a logical sector, such as LS0, LS1, . . . . In the four-plane example, each block, such as MB0 can be regarded as partitioned into metapages MP0, MP1, . . . , with each metapage, such as MP0 containing four sectors such as LS0, LS1, LS2 and LS3, respectively from the planes P0, P1, P2 and P3. Therefore the block is filled in logical units of sector by sector in cyclic order in planes P0, P1, P2 and P3. - In host
write operation # 1, the data in the logical sectors LS5-LS8 are being updated. The updated data as LS5′-LS8′ are recorded in a newly allocated update block starting from the first available location. - In host
write operation # 2, the segment of data in the logical sectors LS9-LS12 are being updated. The updated data as LS9′-LS12′ are recorded in the update block in a location directly following where the last write ends. It can be seen that the two host writes are such that the update data has been recorded in the update block in logically sequential order, namely LS5′-LS12′. The update block is regarded as a sequential update block since it has been filled in logically sequential order. The update data recorded in the update block obsoletes the corresponding ones in the original block. - However, the update logical sectors are being recorded in the update block according to the next available location but without regard to plane alignment. For example, the sector LS5 is originally recorded in plane P1, but the updated LS5′ is now recorded in P0. Similarly the other update sectors are all misaligned.
-
FIG. 23B illustrates an example of logical units being written in non-sequential order to a chaotic update block without regard to plane alignment. - In host
write operation # 1, the logical sectors LS10-LS11 of a given logical group stored in an original metablock are updated. The updated logical sectors LS10′-LS11′ are stored in a newly allocated update block. At this point, the update block is a sequential one. In hostwrite operation # 2, the logical sectors LS5-LS6 are updated as LS5′-LS6′ and recorded in the update block in the location immediately following the last write. This converts the update block from a sequential to a chaotic one. In hostwrite operation # 3, the logical sector LS10′ is being updated again and is recorded in the next location of the update block as LS10″. At this point LS10″ in the update block supersedes LS10′ in a previous recording which in turn supercedes LS10 in the original block. In hostwrite operation # 4, the data in the logical sector LS10″ is again updated and is recorded in the next location of the update block as LS10′″. Thus, LS10′″ is now the latest and only valid version for the logical sector LS10. All previous versions of LS10 are now obsolete. In hostwrite operation # 5, the data in logical sector LS30 is being updated and recorded in the update block as LS30′. In this example, logical units within a logical group can be written in a chaotic update block in any order and with any repetition. - Again, the update logical sectors are being recorded in the update block according to the next available location but without regard to plane alignment. For example, the sector LS10 is originally recorded in plane P2 (i.e., MP2, 3rd plane), but the updated LS10′ is now recorded in P0 (i.e., MP0′, 1st plane). Similarly, in
host write # 3, the logical sector LS10′ is updated again as LS10″ and is placed in the next available location which turns out to be also in plane P0 (1st plane in MP1′). Thus, it can be seen that in general, recording an update sector into the next available location of a block could result in the updated sector being stored in a plane different from its earlier versions. - Plane-Aligned, Sequential Update Block with Intervening Gaps Filled in by Padding
-
FIG. 24A illustrates the sequential update example ofFIG. 23A with plane alignment and padding, according to a preferred embodiment of the invention. - In host
write operation # 1, the updated data as LS5′-LS8′ are recorded in a newly allocated update block starting from the first available plane-aligned location. In this case LS5 is originally in P1, which is the 2nd plane of a metapage. Therefore, LS5′-LS7′ will be programmed in corresponding planes in the first available metapage, MP0, of the update block. At the same time, the gap of the unused 1st plane in MP0′ is being padded with a current version of the logical sector LS4 that precedes LS5 in the metapage of the original block. The original LS4 is then treated as obsolete data. The left over LS8′ is then recorded in the 1st plane of the next metapage, MP1′, and is plane-aligned. - In host
write operation # 2, the updated data as LS9′-LS12′ are recorded in the update block in the next available plane-aligned location. Therefore, LS9′ is recorded in the next available plane-aligned memory unit, which is the 2nd plane of MP1′. In this case, no gaps result and no padding is necessary. The update block is regarded as a sequential update block since it has been filled in logically sequential order. Furthermore, it is plane-aligned since each update logical unit is in the same plane as its original. - Plane-Aligned, Chaotic Update Block with Intervening Gaps
-
FIG. 24B illustrates the chaotic update example ofFIG. 23B with plane alignment and with no padding, according to one preferred embodiment of the invention. - In host
write operation # 1, the updated logical sectors LS10′-LS11′ are stored in a newly allocated update block. Instead of storing them in the next available memory units, they are stored in the next available plane-aligned memory units. Since LS10′ and LS11′ are originally stored in planes P2 and P3 (3rd and 4th planes of MP2 of the original block) respectively, the next available plane-aligned memory units would be that in the 3rd and 4th planes of MP0′ of the update block. At this point, the update block is non-sequential, with the pages of metapage MP0′ filled in the order UNFILLED, UNFILLED, LS10′ and LS11′. - In host
write operation # 2, the logical sectors LS5-LS6 are updated as LS5′-LS6′ and recorded in the update block in next available plane-aligned location. Thus, LS5′ and LS6′, being in the 2nd (P1) and 3rd (P2) planes or memory unit of MP1 of the original block, would be programmed into corresponding planes in the next available metapage MP1′ in the update block. This leaves a preceding, unused 1st plane in MP1′. - In host
write operation # 3, the logical sector LS10′ is being updated again and is recorded in the next plane-aligned location of the update block as LS10″. Thus, it will be written to the next available 3rd plane, which will be in MP2′. This leaves a preceding gap of the last plane in MP1′ and the first two planes of MP2′. This will obsolete LS10′ in MP0′. - In host
write operation # 4, the data in the logical sector LS10″ is again updated and is recorded in the next available 3rd plane of the metapage MP2′ in the update block as LS10′″. Thus, LS10′″ is now the latest and only valid version for the logical sector LS10. This leaves a gap consisting of the last plane in MP2′ and the first two planes in MP3′. - In host
write operation # 5, the data in logical sector LS30 is being updated and recorded in the update block as LS30′. Since the original LS30 resides in P2 or the 3rd plane of a metapage, it will be written to the next available 3rd plane in the update block. In this case, it will be the 3rd plane of MP4′. A gap is created from the last plane of MP3′ to the first two planes of MP4′. Thus, the example illustrates that logical sectors within a logical group can be written in a chaotic update block in any order and with any repetition in a plane-aligned manner. In a subsequent garbage collection operation, all versions, especially the latest version of a given logical sector will be conveniently serviced by an identical set of sensing circuits. - Plane-Aligned, Chaotic Update Block with Intervening Gaps Filled in by Padding
-
FIG. 24C illustrates the chaotic update example ofFIG. 23B with plane alignment and padding, according to another preferred embodiment of the invention. - The operation is similar to that shown in
FIG. 24B , except the intervening gaps are first filled in with padding. In hostwrite operation # 1, the gap created by the 1st and 2nd unused planes of the metapage MP0′ are first padded with current versions of LS8 and LS9, which reside in the original block. This renders the LS8 and LS9 in the original block obsolete. At this point, the update block is a sequential one, with the metapage MP0′ filled in the order LS8, LS9, LS10′ and LS11′. - In host
write operation # 2, a gap is created by a preceding, unused 1st plane in MP1′ and it is first padded with LS4. This will render the LS4 in the original block obsolete. As before, the second write converts the update block from a sequential to a chaotic one. - In host
write operation # 3, a gap is created by the unused last plane in MP1′ and the first two planes of MP2′. The last plane of MP1′ is first padded with LS7, which follows the last programmed LS6′, and the first two planes of MP2′ are padded with the logical units preceding LS10, namely LS8 and LS9. This obsoletes LS10′ in MP0′ and LS7-LS9 in the original block. - In host
write operation # 4, a gap is created consisting of the last plane in MP2′ and the first two planes in MP3′. The last plane of MP2′ is padded by LS11′ which is the current version of the logical unit following the last written LS10″ in the metapage MP2′. The first two planes of MP3′ are padded by LS8 and LS9 respectively, as the logical units preceding LS10′″ in the metapage MP3′. - In host
write operation # 5, a gap from the last plane of MP3′ to the first two planes of MP4′ will be padded accordingly, namely with LS11′, LS28 and LS29 respectively. Thus, the example illustrates that logical sectors within a logical group can be written in a chaotic update block in any order and with any repetition in a plane-aligned manner. - In the preferred embodiment, a metapage contains a cyclic run of pages from the individual planes. Since a metapage can be read or programmed in parallel, it is expedient to implement each host update with the granularity of a metapage. The padding, if any, is recorded together with the updated logical units, metapage by metapage.
- In the embodiment illustrated by the examples of
FIG. 24A and FIG. 24C , during each host write, padding is performed on unused memory units preceding the plane-aligned memory unit to which the update is to be programmed. The handling of any unused memory units following the last programmed memory unit is deferred until the next host write. Generally, any preceding unused memory units are padded within each metapage's boundary. In other words, if a preceding gap straddles two metapages, padding will be performed on each metapage in a logically sequential order appropriate to each metapage, but without regard to continuity across the boundary. In the event of a consolidation of the block, the last written metapage, if partially written, will be completely filled by padding.
- Memory Unit Granularity
- Variations to the unit of read or program are possible, depending on the flexibility supported by the individual memory architectures. The independent nature of the individual planes allows each page from individual planes in a metapage to be read and programmed independently. The examples given earlier have the maximum unit of program to be a page in each plane. Within a metapage, a partial metapage programming of less than all the pages is possible. For example, it is possible to program the first three pages of a metapage and then program the fourth page subsequently.
- Also, at the plane level, a physical page may contain one or more memory units. If each memory unit can store a sector of data, then a physical page may store one or more sectors. Some memory architecture supports partial-page programming, in which by inhibiting programming of selected memory units within a page, selected logical units may be programmed individually at different times over multiple programming passes.
- Logical Unit Alignment within a Memory Plane for Chaotic Update of a Logical Group
- In a block memory management system, a logical group of logical units is stored in logically sequential order in an original block. When the logical group is updated, subsequent versions of the logical units are stored in an update block. If the logical units are stored chaotically (i.e., non-sequentially) in the update block, a garbage collection is eventually performed to collect the latest versions of the logical units among the original block and the update block and to consolidate them sequentially into a new original block. The garbage collection operation will be more efficient if the updated versions of a given logical unit are all stored in the update block in alignment with its original version in the original block such that all versions are accessible by an identical set of sensing circuits.
- According to another aspect of the invention, in the above-described block memory management system, when the memory is organized into a series of memory pages, where each page of memory units are serviced in parallel by a set of sensing circuits, all versions of a given logical unit are aligned if they all have identical offset location in the page they are stored.
-
FIG. 25 illustrates an example memory organization in which each page contains two memory units for storing two logical units, such as two logical sectors. In an original block, since the logical sectors are stored in logically sequential order, the logical sectors LS0 and LS1 are stored in the page P0, the logical sectors LS2 and LS3 are stored in the page P1, and the logical sectors LS4 and LS5 are stored in the page P2, etc. It will be seen that in this two-sector page, the first sector from the left has a page offset of "0", and the second sector has a page offset of "1".
- The updating sequence described above assumes that it is possible to program individual sectors within a page. For memory architectures where partial-page programming is not supported, all sectors within a page must be programmed together. In that case, in the first write, LS2′ and LS3 are programmed together into P0′. In the second write, LS4 and LS5′ are programmed together into P1′. In the third write, LS2″ and LS3 are programmed together into P2′, etc.
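- As a rough illustration of the page-offset alignment just described, the following Python sketch (not taken from the patent; the helper latest() and the list-based page layout are assumptions for illustration) replays the placement and padding rules for a two-sector page. Feeding it the four writes of the FIG. 25 example (LS2′, LS5′, LS2″, then LS22′ and LS23′) reproduces the layout described above, including the padding with LS3 and LS4.

```python
K = 2  # sectors per page, as in the FIG. 25 example

def write_update(pages, ls, data, latest):
    # `pages` is the update block: a list of pages, each a list of K slots
    # holding (logical_sector, data) or None.  `latest(n)` is a hypothetical
    # helper returning the current version of logical sector n for padding.
    group, offset = divmod(ls, K)

    def pad(page, base, upto):
        # fill unwritten slots so the page stays logically sequential
        for off in range(upto):
            if page[off] is None:
                page[off] = (base + off, latest(base + off))

    if pages and pages[-1][offset] is None and pages[-1][0] is not None \
            and pages[-1][0][0] // K == group:
        page = pages[-1]                      # same K-aligned group, slot free
    else:
        if pages:                             # close the previous page first
            prev = pages[-1]
            base = next(s for s in prev if s)[0] // K * K
            pad(prev, base, K)
        page = [None] * K
        pages.append(page)
    pad(page, group * K, offset)              # pad slots before the new sector
    page[offset] = (ls, data)                 # new version keeps its page offset
```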
- Plane Alignment within Metapage
- Alternatively, the unit of program may have the granularity of a metapage. If the granularity of writing to a chaotic update block becomes a metapage, the entries in the CBI block described in connection with
FIGS. 16A and 16B , may relate to metapages instead of sectors. The increased granularity reduces the number of entries that must be recorded for a chaotic update block, and allows direct indices to be eliminated and a single CBI sector to be used per metablock. -
FIG. 26A is similar to the memory structure ofFIG. 21 except each page contains two sectors instead of one. Thus, it can be seen that a metapage MP0 now has each of its pages capable of storing two logical units of data. If each logical unit is a sector then logical sectors are stored sequentially in MP0 with LS0 and LS1 in plane P0, LS2 and LS3 in plane P1, etc. -
FIG. 26B illustrates the metablocks shown in FIG. 26A having memory units laid out in a schematic linear fashion. As compared to the single-sector page of FIG. 21, the logical sectors are stored cyclically among the four pages with two sectors in each page. - In general, if there are W planes operating in parallel, there are K memory units per page, and the metablock is filled in logically sequential order, the kth logical unit in a metablock will reside in plane x, where x = k′ MOD W and k′ = INT(k/K). For example, with four planes, W=4, and with 2 sectors per page, K=2, then for k=5, which refers to logical sector LS5, it will reside in a plane given by 2 MOD 4, which is plane 2, as can be seen in FIG. 24A. Generally, the same principles apply for implementing the plane alignment described earlier. - The examples given earlier are for page alignment with planes in a multi-plane architecture. In the case of a page with multiple sectors, it is advantageous to also maintain sector alignment within a page. In this way, the same set of sense circuits can be used expediently for different versions of the same logical sector. Operations such as relocation and “read-modify-write” of a sector are then performed efficiently. When aligning sector order within a page, the same techniques as in aligning pages to planes can be employed. Also, depending on the embodiment, any intervening gaps may or may not be padded.
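- The plane-assignment formula can be captured in a one-line helper; this is only a restatement of x = k′ MOD W with k′ = INT(k/K), not code from the patent.

```python
def plane_of_logical_unit(k, K, W):
    """Plane index of the k-th logical unit when a metablock is filled in
    logically sequential order: x = (k // K) mod W, i.e. x = k' MOD W
    with k' = INT(k/K)."""
    return (k // K) % W

# The example in the text: four planes (W=4), two sectors per page (K=2);
# logical sector LS5 (k=5) falls in plane (5 // 2) % 4 == 2.
assert plane_of_logical_unit(5, K=2, W=4) == 2
```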
- Logical Unit or Plane Alignment without Padding
-
FIG. 27 illustrates an alternative scheme for plane alignment in an update block without padding logical units to be copied from one location to another. The portions of the four planes that intersect with the update block may be regarded as four buffers that collect plane-aligned updated logical units received from a host. Each logical unit received from the host is programmed without padding in the next available memory unit of the appropriate buffer. A different number of logical units are likely to have been programmed in each plane, according to the sequence of logical unit addresses received from the host. - The chaotic update block MB′1 may contain updated versions of all logical units of a logical metapage, such as for MP′0. It may also contain less than all logical units of a metapage, such as for MP′1. In the case of MP′1, the missing logical unit, LS4 can be obtained from the corresponding original block MB0.
- This alternative scheme is particularly efficient if the memory architecture supports parallel read of arbitrary logical page from each plane. In this way, all logical pages of a metapage can be read in a single parallel read operation even if the individual logical pages are not from the same row.
- Phased Program Error Handling
- When there is a program failure in a block, all the data destined for that block is typically moved to another block and the failed block is marked as bad. Depending on the timing specification for the operation in which the failure is encountered, there may not be enough time to additionally move the stored data to another block. The worst-case scenario is a program failure during a normal garbage collection operation, where another, similar garbage collection operation would be required to relocate all the data to yet another block. In this case, the specified write latency time limit of a given host/memory device can be violated, since it is typically designed to accommodate one but not two garbage collection operations.
-
FIG. 28 illustrates a scheme in which when a program failure occurs in a defective block during a consolidation operation, the consolidation operation is repeated on another block. In the example,Block 1 is an original block storing the complete logical units of a logical group in logically sequential order. For illustrative purposes, the original block contains sections A, B, C, and D, each storing a subgroup of logical units. When a host updates certain logical units of the group, the newer versions of the logical units are recorded in an update block, namely,Block 2. As described earlier in connection with update blocks, depending on the host, the updates could record the logical units in sequential or in non-sequential (chaotic) order. Eventually, the update block is closed to receiving further updates either because it is full or for some other reasons. When the update block (Block 2) is closed, the current versions of the logical units that reside either on the update block or the original block (Block 1) are consolidated onto a new block (Block 3) to form a new original block for the logical group. The example shows the update block containing the newer versions of the logical units in sections B and D. For expediency, the sections B and D are illustrated schematically inBlock 2 not necessarily in the locations they are recorded but rather aligned to their original locations inBlock 1. - In a consolidation operation, the current versions of all the logical units of the logical group, that originally reside in
Block 1, are recorded in sequential order into a consolidation block (Block 3). Thus, the logical units of section A will first be copied fromBlock 1 toBlock 3, followed by those of section B fromBlock 2 toBlock 3. In the example, when the logical units of section C are to be copied fromBlock 1 toBlock 3, a defect inBlock 3 results in a program failure. - One way of handling the program failure is to restart the consolidation process on a fresh block (Block 4). Thus, the sections A, B, C, D are copied onto
Block 4, and the defective Block 3 is then discarded. However, this would mean performing two consolidation operations in tandem, which may result in copying as much as two blocks full of logical units. - Memory devices have a specific time allowance to complete a certain operation. For example, when a host writes to a memory device, it expects the write operation to be completed within a specified time, known as the “write latency”. While the memory device, such as a memory card, is busy writing the data from the host, it signals a “BUSY” state to the host. If the “BUSY” state lasts longer than the write latency period, the host will time out the write operation and register an exception or error against it.
-
FIG. 29 illustrates schematically a host write operation with a timing or write latency that allows enough time to complete a write (update) operation as well as a consolidation operation. The host write operation has a write latency, TW, that provides sufficient time for the completion of an update operation 972 of writing host data to an update block (FIG. 29(A)). As described earlier for the block management system, a host write to an update block may trigger a consolidation operation. Thus, the timing also allows for a consolidation operation 974 (FIG. 29(B)) in addition to the update operation 972. However, having to restart a consolidation operation anew in response to a failed one may take too much time and exceed the specified write latency. - According to another aspect of the invention, in a memory with a block management system, a program failure in a block during a time-critical memory operation is handled by continuing the programming operation in a breakout block. Later, at a less critical time, the data recorded in the failed block prior to the interruption is transferred to another block, which could also be the breakout block. The failed block can then be discarded. In this way, when a defective block is encountered, it can be handled without loss of data and without exceeding the specified time limit that would otherwise be violated by having to transfer the stored data from the defective block on the spot. This error handling is especially critical for a garbage collection operation, so that the entire operation need not be repeated on a fresh block during a critical time. Subsequently, at an opportune time, the data from the defective block can be salvaged by relocation to another block.
-
FIG. 30 illustrates a flow chart of the program failure handling according to a general scheme of the invention. - STEP 1002: Organizing a nonvolatile memory organized into blocks, each block partitioned into memory units that are erasable together, each memory unit for storing a logical unit of data.
Program Failure Handling (First Phase) - STEP 1012: Storing a sequence of logical units of data in a first block.
- STEP 1014: In response to a storing failure at the first block after storing a number of logical units, storing subsequent logical units in a second block serving as a breakout block for the first block.
Program Failure Handling (Final Phase) - STEP 1020: In response to a predefined event, transferring the logical units stored in the first block to a third block, wherein the third block may or may not be identical to the second block.
- STEP 1022: Discarding the first block.
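- The following sketch outlines the two phases of STEPs 1012 through 1022 in Python. It is illustrative only: ProgramError and the block objects with program(), valid_units(), can_accept_relocation() and mark_bad_and_discard() methods are hypothetical stand-ins for whatever structures a real controller uses.

```python
class ProgramError(Exception):
    """Raised by a (hypothetical) block when a page fails to program."""

def store_sequence(units, first_block, allocate_block):
    """First phase: write into first_block; on a program failure, continue in
    a breakout block instead of redoing the work within the host time limit."""
    breakout = None
    for unit in units:
        target = breakout or first_block
        try:
            target.program(unit)
        except ProgramError:
            breakout = allocate_block()   # STEP 1014: provision a breakout block
            breakout.program(unit)        # failed unit and later units go here
    return breakout

def finish_later(first_block, breakout, allocate_block):
    """Final phase (STEPs 1020 and 1022), run at a non-critical time: move the
    data left in the failed block to a third block, which may or may not be
    the breakout block itself, then discard the failed block."""
    third = breakout if breakout.can_accept_relocation() else allocate_block()
    for unit in first_block.valid_units():
        third.program(unit)
    first_block.mark_bad_and_discard()
    return third
```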
-
FIG. 31A illustrates one embodiment of program failure handling in which the third (final relocation) block is different from the second (breakout) block. During Phase I, a sequence of logical units is being recorded on a first block. If the logical units are from a host write, the first block may be regarded as an update block. If the logical units are from a consolidation or compaction operation, the first block may be regarded as a relocation block. If at some point a program failure is encountered in block 1, a second block acting as a breakout block is provisioned. The logical unit that failed to be recorded in block 1 and any subsequent logical units are recorded on the breakout block instead. In this way, no additional time is required to replace the failed block 1 and the data residing on it. - In an intermediate Phase II, all the recorded logical units of the sequence are obtainable between
block 1 and block 2. - In a final Phase III, the failed
block 1 and the data residing on it are replaced by relocating the logical units to block 3, which serves as a relocation block. Thus, the data in the failed block is salvaged and the failed block can then be discarded. The final phase is timed so that it does not conflict with the timing of any contemporary memory operations. - In this embodiment the
relocation block 3 is distinct from thebreakout block 2. This is expedient when the breakout block has been recorded with additional logical units during the intermediate phase. Thus, the breakout block has turned into an update block and may not be suitable for relocating the logical units from thedefective block 1 thereto. -
FIG. 31B illustrates another embodiment of program failure handling in which the third (final relocation) block is the same as the second (breakout) block. Phases I and II are similar to the first embodiment shown in FIG. 31A. However, in Phase III, the logical units from the defective block 1 are relocated to the breakout block 2. This is expedient when the breakout block 2 has not been recorded with additional logical units other than the original sequence of the previous write operation. In this way, a minimum number of blocks is needed to store the logical units in question. - Embodiment for Program Failure Handling During Consolidation
- The program failure handling is especially critical during a consolidation operation. A normal consolidation operation consolidates into a consolidation block the current versions of all logical units of a logical group residing among an original block and an update block. During the consolidation operation, if a program failure occurs in the consolidation block, another block acting as a breakout consolidation block will be provisioned to receive the consolidation of the remaining logical units. In this way, no logical units need be copied more than once and the operation with exception handling can still be completed within a period specified for a normal consolidation operation. At an opportune time, the consolidation operation can be completed by consolidating all outstanding logical units of the group into the breakout block. The opportune time will be during some other period outside the current host write operation when there is time to perform the consolidation. One such opportune time is during another host write where there is an update but no associated consolidation operation.
- Essentially, the consolidation with the program failure handling can be regarded as being implemented in multiple phases. In a first phase, the logical units are being consolidated into more than one block after a program failure occurs in order to avoid consolidating each logical unit more than once. The final phase is completed at an opportune time, where the logical group is consolidated into one block, preferably by collecting all the logical units into the breakout consolidation block in sequential order.
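- A small scheduling sketch of the deferred final phase is given below; the names write_slot and pending_phase3 are hypothetical and only illustrate the piggy-backing idea described above.

```python
def on_host_write_slot(write_slot, pending_phase3):
    """Complete an outstanding breakout consolidation only in a write slot that
    carries a plain update with no garbage collection of its own, so the extra
    copying fits in the surplus of the host write-latency period."""
    write_slot.do_update()
    if write_slot.triggered_garbage_collection():
        return pending_phase3              # slot fully used; defer again
    if pending_phase3 is not None:
        pending_phase3.consolidate_outstanding_units_into_breakout()
        pending_phase3 = None              # logical group now whole in one block
    return pending_phase3
```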
-
FIG. 32A illustrates a flow chart of initial update operation that results in a consolidation operation. - STEP 1102: Organizing a nonvolatile memory organized into blocks, each block partitioned into memory units that are erasable together, each memory unit for storing a logical unit of data.
- STEP 1104: Organizing data into a plurality of logical groups, each logical group being a group of logical units storable in a block.
- STEP 1112: Receiving host data packaged in logical units.
- STEP 1114: Creating an original block of the logical group by storing in a first block a first version of logical units of the logical group according to a first order.
- STEP 1116: Creating an update block of the logical group by storing in a second block including subsequent versions of logical units of the logical group according to a second order.
- STEP 1119: At some predefined event as described in earlier sections, a garbage collection is performed to collect the current versions of the logical units among the various blocks and re-record into a new block.
-
FIG. 32B illustrates a flow chart of the multiple phase consolidation operation, according to a preferred embodiment of the invention. - Consolidation Failure Handling (Phase I)
- The consolidation with error handling,
Phase I operation 1120, comprises STEP 1122 and STEP 1124. - STEP 1122: Creating a consolidation block for the logical group by storing in a third block current versions of logical units of the logical group similar to the first order.
- STEP 1124: In response to a storing failure at the consolidation block, providing a breakout consolidation block by storing in a fourth block similar to the first order those logical units of the logical group absent from said third block.
- Since the data in
Blocks 1 and 2 are now also stored in Blocks 3 and 4, Blocks 1 and 2 can be erased. Block 2 can be immediately released to the EBL (Erased Block List, see FIG. 18) and reused. Block 1 can only be released if this is a Closed Update block and there is another block that the corresponding GAT entry points to. - Essentially,
Block 3 becomes the original block for the logical group andBlock 4 becomes a replacement sequential update block forBlock 3. - After Phase I consolidation has completed, the memory device so signals the host by releasing the BUSY signal.
- Intermediate Operation (Phase II)
- Phase II, the
intermediate operation 1130, may take place before the Phase III consolidation operation 1140. A number of scenarios are possible, as given by any one of STEPs 1132, 1134 and 1136. - STEP 1132: Either in a write operation of the logical group, writing to the fourth block (breakout consolidation block) as the update block.
- If the host writes to the logical group in question,
Block 4, which is the breakout consolidation block and which has up to now assumed the role of a replacement sequential update block, will be used as a normal update block. Depending on the host writes, it may remain sequential or be turned into a chaotic state. As an update block it will at some point trigger closure of another chaotic block as described in an earlier preferred embodiment. If the host writes to another logical group, proceed directly to Phase III operation. - STEP 1134: Or in a read operation, reading the memory with the third block as the original block for the logical group and the fourth block as the update block.
- In this case, logical units from sections A and B will be read from
Block 3 as the logical group's original block, and logical units from sections C and D will be read fromBlock 4 as the group's update block. As only sections A and B can be read fromBlock 3, there will be no access to the page where programming failed, and there will be no access to the unwritten portion thereafter. Although the GAT directory in flash memory has not yet been updated and still points to Block 1 as the original block, no data will be read from it, and the block itself has been erased earlier. - Another possibility is a host read of the logical units in the logical group. In this case, logical units from the sections A and B will be read from
Block 3 as the logical group original block, and logical units from the sections C and D will be read fromBlock 4 as the group's sequential update block. - STEP 1136: Or in a power-up initialization, re-identifying any of the first to fourth blocks by scanning their contents.
- Another possibility for the intermediate phase is for the memory device to be powered off and then re-started. As described earlier, during power-up initialization, the blocks in the Allocation Block List (erase pool's blocks to be used, see
FIG. 15 andFIG. 18 ) are scanned to identify the defective consolidation block that has become a special status original block (Block 3) and the associated sequential update block (Block 4) for the logical group. The flag in the first logical unit of the breakout block (Block 4) will indicate that the associated block is an original block (Block 3) that has suffered a program error. By consulting the block directory (GAT),Block 3 can then be located. - In one embodiment, a flag is programmed into the first logical unit in the breakout consolidation block (Block 4). This helps to indicate the special status of the logical group: i.e., it has been consolidated into two blocks, i.e.,
Block 3 andBlock 4. - An alternative to using a flag to identify the logical group with a defective block is to detect the block as defective during scanning by the property that it is not full as an original block should be (unless the error happened in the last page, and the last page does not have ECC error). Also, depending on implementation, there could be an information record about the failed group/block stored in a control data structure in flash memory, not just a flag in the header area of the first sector written to the breakout consolidation block (Block 4).
- Consolidation Completion (Phase III)
-
- STEP 1142: In response to a predefined event, and for a first case when the fourth block has not been further recorded since Phase I, storing thereto current versions of all outstanding logical units of the logical group similar to the first order, and for a second case when the fourth block has been further recorded since Phase I, consolidating the third and fourth blocks into a fifth block.
- STEP 1144: Thereafter, for the first case, operating the memory with the consolidated fourth block as the original block for the logical group, and for the second case, operating the memory with the fifth block as the original block for the logical group.
- The final consolidation in Phase III is executed whenever there is opportunity without violating any specified time limits. A preferred occasion is to “piggy-back” on the next host write slot when there is an update operation to another logical group with no accompanying consolidation operation. If the host write to another logical group triggers a garbage collection itself, then the Phase III consolidation will be deferred.
-
FIG. 33 illustrates an example timing of the first and final phases of the multiple phase consolidation operation. The host write latency is the width of each host write time slot with a period TW. Host Write 1 is a simple update, and the current versions of a first set of logical units in the logical group LG1 are recorded on an associated update block. - In
Host Write 2 another update on logical group LG1 happens to result in the update block being closed (e.g., full). A new update block will be provisioned for recording the rest of the updates. The provisioning of a new update block may trigger a garbage collection that results in a consolidation operation on LG4 in order to recycle a block for reuse. The current logical units of the LG4 group are being recorded on a consolidation block in sequential order. The consolidation operation proceeds until a defect is encountered in the consolidation block. Phase I consolidation is then invoked where the consolidation operation continues on a breakout consolidation block. In the meantime, the final consolidation of LG4 (Phase III) awaits the next opportunity. - In
Host Write 3, a write of logical units of the logical group LG2 also happens to trigger a consolidation for LG2. This means the time slot is already fully utilized. - In
Host Write 4, the operation is merely recording some logical units of LG2 to its update block. The surplus time in the time slot provides the opportunity to execute the final consolidation of LG4. - Embodiment where the Breakout Consolidation Block is not Converted to an Update Block
-
FIG. 34A andFIG. 34B illustrate respectively a first case for Phase I and Phase III operations of the multiple phase consolidation as applied to the examples inFIG. 28 andFIG. 31 . -
FIG. 34A illustrates the case where the breakout consolidation block is not used as an update block but more as a consolidation block whose consolidation operation has been interrupted. In particular,FIG. 34A refers to theHost Write # 2 shown inFIG. 33 in which the host writes updates of logical units belonging to the logical group LG1 and during which the operation also triggers a consolidation of blocks associated with another logical group LG4. - The formation of the original block (Block 1) and the update block (Block 2) is in the same manner as the example in
FIG. 28 . Similarly during the consolidation operation, the consolidation block (Block 3) is found to have a defect at the point of consolidating the logical units in section C. However, unlike the re-consolidation scheme illustrated inFIG. 28 , the present multiple phase scheme continues the consolidation operation onto a newly provisioned block (Block 4) that serves as a breakout consolidation block. Thus, in a Phase I consolidation operation, logical units in sections A and B have been consolidated in the consolidation block (Block 3). When a program failure occurs in the consolidation block, the remaining logical units in sections C and D are copied sequentially into the breakout consolidation block (Block 4). - If originally the host writes an update in a first logical group that triggers a consolidation operation of blocks associated with a second logical group, the updates of the first logical group are recorded into an update block (typically a new update block) for the first logical group. In this case, the breakout consolidation block (Block 4) is not used to record any update data outside the consolidation operation and remains a breakout consolidation block that has yet to be completed.
- Since the data in
Blocks Blocks 3 and 4), they can be erased for recycling. The address table (GAT) is updated to point to Block 3 as an original block for the logical group. The directory information (in the ACL, seeFIG. 15 andFIG. 18 ) for the update block is also updated to point toBlock 4, which has become the sequential update block for the logical group (e.g., LG4). - The result is that the consolidated logical group is not localized in one block, but distributed over the defective consolidation block (Block 3) and the breakout consolidation block (Block 4). The important feature of this scheme is that logical units in the group are only consolidated once during this phase at the expense of spreading the consolidation over more than one block. In this way, the consolidation operation can be completed within the normal specified time.
-
FIG. 34B illustrates the third and final phase of the multiple phase consolidation started inFIG. 34A . As described in connection withFIG. 33 , Phase III consolidation is executed at an opportune time after the first phase, such as during a subsequent host write that does not trigger an accompanying consolidation operation. In particular,FIG. 34B refers to the time slot in whichHost Write # 4 shown inFIG. 33 takes place. During that period the host write updates logical units belonging to the logical group LG2 without triggering another additional consolidation operation. Thus, the surplus time in the time slot can expediently be utilized for Phase III operation for completion of the consolidation of logical group LG4. - The operation consolidates all outstanding logical units of LG4 not already in the breakout block into the breakout block. In the example, this means sections A and B will be copied from
Block 3 to the breakout block (Block 4) in logically sequential order. In view of the wrap-around scheme of logical units in a block and the use of a page tag (seeFIG. 3A ), even though the example shows that inBlock 4, sections A and B are being recorded after section C and D, the recorded sequence will considered to be equivalent to the sequential order of A, B, C, D. Depending on implementation, the current versions of the outstanding logical units to be copied are preferable obtained fromBlock 3, since it is already in consolidated form, but may also be collected fromBlock 1 andBlock 2 if they have not been erased. - After the final consolidation has completed on the breakout block (Block 4), it will be designated as the original block for the logical group and the appropriate directory (e.g., GAT, see
FIG. 17A ) will be updated accordingly. Similarly, the failed physical block (Block 3) is marked as bad and mapped out. The other blocks,Block 1 andBlock 2 are erased and recycled. In the meantime, the updates to LG2 are recorded in the update block associated with LG2. - Embodiment where the Breakout Consolidation Block is Turned into an Update Block
-
FIG. 35A andFIG. 35B illustrate respectively a second case for Phase I and Phase III operations of the multiple phase consolidation as applied to the examples inFIG. 28 andFIG. 33 . -
FIG. 35A illustrates the case where the breakout consolidation block is maintained as an update block to receive host writes rather than as a consolidation block. This is applicable to a host write that for example updates the logical group LG4, and in the process also triggers a consolidation in the same logical group. - As in the case for
FIG. 34A , the consolidation ofBlock 1 andBlock 2 ontoBlock 3 proceeds until it encounters a program failure while processing section C. The consolidation then continues on a breakout consolidation block (Block 4). After the outstanding logical units (e.g. in sections C and D) have been consolidated in the breakout block (Block 4), instead of waiting in Phase III to complete the consolidation of the logical group thereto, the breakout block is maintained as an update block. This case is particular applicable to a scenario where the host write updates a logical group and triggers a consolidation of the same logical group. In the example, this enables recording of the host updates for the logical group LG4 to be recorded in the breakout consolidation block (Block 4) instead of recorded to a new update block. The update block (previously the breakout consolidation block (Block 4)) can be sequential or become chaotic depending on the host data recorded to it. In the example shown,Block 4 has turned chaotic since a subsequent newer version of logical units in section C renders the earlier ones inBlock 4 obsolete. - During the intermediate phase,
Block 3 will be regarded as the original block for LG4, andBlock 4 will be the associated update block. -
FIG. 35B illustrates the third and final phase of the multiple phase consolidation started inFIG. 35A for the second case. As described in connection withFIG. 33 , Phase III consolidation is executed at an opportune time after the first phase, such as during a subsequent host write that does not trigger an accompanying consolidation operation. During that period the host write updates logical units belonging to a logical group without triggering a consolidation operation. Thus, the surplus time in the time slot can expediently be utilized for Phase III operation for the completion of the consolidation of logical group LG4. - The logical group LG4 is then garbage collected from
Blocks Block 3 then gets marked as bad,Block 4 is recycled and the new consolidation block (Block 5) will become the new original block for the logical group LG4. The other blocks,Block 1 andBlock 2 are also erased and recycled. - Other Embodiments of Phased Program Failure Handling
- The examples described in
FIGS. 31A, 31B , 34A, 34B, 35A and 35B apply to a preferred block management system with each physical block (metablock) storing only logical units belonging to the same logical group. The invention is equally applicable to other block management systems where there is no logical group to physical block alignment, such as those disclosed in WO 03/027828 and WO 00/49488. Some examples of implementing the phased program failure handling method to these other system are illustrated inFIGS. 36A, 36B and 36C. -
FIG. 36A illustrates the phased program error handling method as applied to the scenario when the host write triggers the closure of an update block, and the update block is sequential. The closure in this case is done by copying the remaining valid data (B and C) from theoriginal Block 2 to thesequential update Block 3. In the case of program failure at the start of the data portion C programming, the portion C will be programmed to thereserved Block 4. The new host data can then be written to a new update Block 5 (not shown). The phases II and III of the method are the same as for the case of chaotic block closure. -
FIG. 36B illustrates the phased program error handling method as it can be applied to a (Partial Blocks system) in the case of an Update of Update blocks. In this case the logical group is stored inoriginal Block 1, and other update blocks. Consolidation operation includes copying of the data to one of the update blocks (to be chosen according to some rule,Block 3 on the diagram) from theoriginal Block 1 and theother update Block 2. The difference from the main scenario, which is already described, is that theBlock 3 was already partially written. -
FIG. 36C illustrates the phased program error handling the garbage collection operation, or clean-up in a memory block management system that does not support a logical group mapped to a metablock. Such a memory block management (Cyclic Storage) system is described in WO 03/027828 A1. The distinctive feature of the Cyclic Storage system is that blocks were not allocated for a single logical group. There is support of multiple logical grouping for control data in a metablock. The garbage collection involved taking the valid data sectors, which may have no relationship (random logical block addresses), from a partially obsolete block to the Relocation block, which can have some data there already. If the Relocation block gets full during the operation, another one would be opened. - Non-Sequential Update Block Indexing
- In the earlier section on chaotic block indexing and in association with
FIGS. 16A-16E , a CBI sector is used to store an index that keeps track of the locations of the logical sectors stored randomly in a chaotic or non-sequential update block. - According to another aspect of the invention, in a nonvolatile memory with block management system that supports update blocks with non-sequential logical units, an index of the logical units in an update block buffered in RAM is periodically stored in the non-volatile memory. In one embodiment, the index is stored in a block dedicated for storing indices. In another the index is stored in the update block. In yet another embodiment, the index is stored in the header of each logical unit. In another aspect, the logical units written after the last index update but before the next have their indexing information stored in the header of each logical unit. In this way, after a power interruption, the location of recently written logical units can be determined without have to perform a scanning during initialization. In yet another aspect, a block is managed as partially sequential and partially non-sequential, directed to more than one logical subgroup.
- Index Pointer Stored in a CBI Sector in a CBI Block after a Predetermined Triggering Event
- According to the scheme described in association with
FIGS. 16A-16E , lists of recently written sectors in chaotic blocks are held in controller RAM. A CBI sector containing the most current indexing information is written to flash memory (a CBI block 620) only after a predetermined number of writes in the logical group associated with a given chaotic block. In this way, the number of updates to the CBI block is reduced. - Before the next update of the CBI sector for a logical group, the list of recently written sectors for the logical group is held in controller RAM. The list is lost if the memory device suffers a power shut-down, but can be rebuilt by scanning the updated blocks on an initialization after a power turn-on.
-
FIG. 37 illustrates an example of a schedule of writing of a CBI sector to an associated chaotic index sector block after every N sector writes of the same logical group. The example shows two logical groups LG3 and LG11 undergoing concurrent updates. Initially, the logical sectors of LG3 were stored in sequential order in an original block. Updates of the logical sectors in the group are recorded on an associated update block in the order dictated by a host. The example shows a chaotic update sequence. Concurrently, the logical group LG11 is also being updated in similar manner at its update block. After every logical sector write, its position in the update block is kept in controller RAM. After every predetermined triggering events, the current index of the logical sectors in the update block is written in the form of a chaotic index sector to the non-volatile chaotic index sector block. For example, the predetermined triggering events occur after every N writes, where N may be 3. - Although the examples given have been with the logical unit of data being a sector, those skilled in the art will recognize that the logical unit could be some other aggregate, such as a page containing a sector or a group of sectors. Also, the first page in a sequential block does not need to be
logical page 0, since a page tag wrap around could be in place. - Index Pointer Stored in a CBI sector in Chaotic Update Block after a Predetermined Triggering Event
- In another embodiment, the index pointer is stored in a dedicated CBI sector in the chaotic update block itself after every N writes thereto. This scheme is similar to the embodiment described earlier where the index is also stored in a CBI sector. The difference is in that earlier embodiment, the CBI sector is recorded in a CBI sector block and not in the update block itself.
- The method is based on keeping all the chaotic block indexing information in the Chaotic Update block itself.
FIGS. 37A, 37B and 37C respectively illustrate the state of an update block also storing CBI sectors at three different stages. -
FIG. 38A illustrates an update block up to the point when a CBI sector is recorded in it after a predetermined number of writes. In this example, after the host has written logical sectors 0-3 sequentially, it then issues a command to write another version oflogical sector 1 again, thus breaking the contiguous sequence of data writes. The update block is then converted to a chaotic update block with the implementation of a chaotic block indexing carried in a CBI sector. As described before, the CBI is an index that contains indices for all logical sectors of the chaotic block. For example, the 0th entry indicates the offset in the update block for the 0th logical sector and similarly, the nth entry indicates the offset for the nth logical sector. The CBI sector is written to the next available location in the update block. To avoid frequent flash access, the CBI sector is written after every N data sector writes. In this example, N is 4. If the power is lost at this point, the last written sector will be the CBI sector and the block will be regarded as a chaotic update block. -
FIG. 38B illustrates the update block ofFIG. 38A to havelogical sectors logical sectors -
FIG. 38C illustrates the update block ofFIG. 38B having another logical sector written to trigger the next recording of an index sector. The same update block after another N(N=4) sector writes records another current version of the CBI sector. - The advantage of this scheme is that it eliminates the need for a separate CBI block. At the same time it does not have to worry about whether the overhead data area of a physical flash sector is large enough to accommodate the number of entries required for an index to valid sectors in the chaotic update block. The chaotic update block then contains all the information and no external data is needed for address translation. This results in a simpler algorithm with reduced number of control updates related to CBI block compaction and shorter cascade control update. (See earlier sections on CBI block management.)
- Information about Recently Written Sectors Stored in Data Sector Header in Chaotic Update Block
- According to another aspect of the invention, an index of logical units recorded in a block is stored in nonvolatile memory after every N writes, and current information about the logical units of the intermediate writes is stored in an overhead portion of each logical unit written. In this way, after a power restart, information about the logical units written since last index update can be obtained quickly from the overhead portion of the last written logical unit in the block, without having to scan the block.
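- The following sketch, with a hypothetical header layout and read helpers, shows how an index could be rebuilt after power-up from the last written CBI sector plus the intermediate index carried in the header of the last written data sector, without scanning the block.

```python
def rebuild_index_after_power_up(block):
    """Locate the last written data sector, then combine the last CBI sector
    with the header's record of the writes made since that CBI sector."""
    last = block.last_written_sector()           # hypothetical helper
    index = dict(block.read_cbi_sector(last.header.last_cbi_offset))
    index.update(last.header.writes_since_cbi)   # up to N-1 intermediate entries
    return index
```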
-
FIG. 39A illustrates an intermediate index for the intermediate writes being stored in a header of each data sector in the chaotic update block -
FIG. 39B illustrates an example of storing an intermediate index for the intermediate writes in the header of each sector written. In the example, after four sectors, LS0-LS3 have been written, a CBI index is written as the next sector in the block. Thereafter, logical sectors LS′1, LS′2 and LS4 are written to the block. Each time, the header will store an intermediate index for the logical units written since the last CBI index. Thus, the header in LS′2 will have an index giving the offset (i.e., location) of the last CBI index as well as that of LS′1. Similarly, the header in LS4 will have an index giving the offset of the last CBI index as well as that of LS′1 and LS′2. - The last written data sector always contains information about up to N last written pages (i.e., up to the last written CBI sector). Whenever, there was a power restart, the last CBI index provides indexing information for logical units written prior to the CBI index sector, and indexing information for subsequent logical units written is found in the header of the last written data sector. This has the advantage of eliminating, on initialization, the need to scan the block for those subsequently written sectors to determine their locations.
- The scheme of storing intermediate index information in the header of the data sector is equally applicable whether the CBI index sector is stored in the update block itself or in a separate CBI sector block as described in an earlier section.
- Index Pointer Stored in Data Sector Header in Chaotic Update Block
- In another embodiment, the entire CBI index is stored in the overhead portion of each data sector in a chaotic update block.
-
FIG. 40 illustrates the information in the chaotic index field being stored in a header of each data sector in the chaotic update block. - The information capacity of a sector header is limited, and so the range of indices provided by any single sector may be designed to be part of a hierarchical indexing scheme. For example, sectors within a specific plane of the memory may provide indexing to only sectors within that plane. Also, the range of logical addresses may be divided into a number of sub-ranges, to allow an indirect indexing scheme to be adopted. For example, if sectors with 64 logical addresses may be stored in a plane, each sector may have 3 fields for sector offset values, each field able to store 4 offset values. The first field defines the physical offset for the last written sector within the logical offset ranges 0-15, 15-31, 32-47, and 48-63. The second field defines physical offset values for 4 sub-ranges of 4 sectors each, within the range to which it relates. The third field defines physical offset values for 4 sectors within the sub-range to which it relates. The physical offset of a logical sector within the chaotic update block may therefore be determined by reading indirect offset values from up to 3 sectors.
- The advantage of this scheme is that it also eliminates the need for a separate CBI block or a CBI sector. However, it may only be applied if the overhead data area of a physical flash sector is large enough to accommodate the number of entries required for an index to valid sectors in the chaotic update block.
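- The indirect, three-field lookup described above might be sketched as follows; read_header() and the field names are assumptions, since the exact header format is not spelled out here.

```python
def find_physical_offset(logical, newest_offset, read_header):
    """Resolve a logical sector (0-63 within a plane) to its physical offset
    by following at most three header reads, per the hierarchical scheme."""
    h = read_header(newest_offset)              # header of the most recent write
    p = h.range_field[logical // 16]            # last write in the 16-sector range
    h = read_header(p)
    p = h.subrange_field[(logical % 16) // 4]   # last write in the 4-sector sub-range
    h = read_header(p)
    return h.sector_field[logical % 4]          # exact physical offset
```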
- Limited Logical Range within Logical Group for Chaotic Update Block
- Within a logical group, the logical range of sectors which can be written non-sequentially is reduced. The main advantage of this technique is that the sectors outside the range stay written sequentially in the original block, and the garbage collection operation can be done in a much shorter time, as copying of sequentially written data can be done faster: only one multi-sector page (or a parallel read of pages in the multi-chip case) need be read to get all the data for the destination page (provided the source and the destination are aligned; if not, another read may be required). Also, sequential data can be copied from source to destination without data transfers to and from the controller, using an on-chip copy feature. If the source data is scattered, as happens in chaotic blocks, up to one page read per sector may be required in order to collect all the sectors to be written to the destination.
- In one embodiment, instead of literally limiting the logical range to some number of sectors, this is done by limiting the number of CBIs (it makes sense to limit the chaotic range only for large groups/metablocks, which require multiple Chaotic Block Indices to cover the entire logical group's range). For example, if a metablock/group has 2048 sectors, then it may require up to 8 CBI sectors, each covering a contiguous logical range of one sub-group of 256 sectors. If the number of CBIs is limited to 4, then the chaotic metablock can be used for writing sectors of up to 4 sub-groups (any of them). Thus, the logical group is allowed to have up to 4 partially or fully chaotic sub-groups, and a minimum of 4 sub-groups will remain fully sequential. If a chaotic block has 4 valid CBI sectors associated with it, and a host writes a sector outside the ranges of these CBI sectors (chaotic sub-groups), then the chaotic logical group should be consolidated and closed. But this is very unlikely to happen, as in real applications hosts do not need more than 4 chaotic ranges of 256 sectors (sub-groups) within a range of 2048 sectors (logical group). As a result, in the normal case the garbage collection time is unaffected, but the limiting rule guards against an extreme case of too long a garbage collection, which might trigger a host time-out.
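- The rule of limiting the number of CBI sectors rather than the literal logical range can be expressed compactly; the constants below reflect the 2048-sector example with 256-sector sub-groups and a limit of four chaotic sub-groups, and the function names are illustrative only.

```python
SECTORS_PER_SUBGROUP = 256
MAX_CHAOTIC_SUBGROUPS = 4      # i.e. at most 4 CBI sectors per chaotic block

def must_close_chaotic_block(chaotic_subgroups, logical_sector):
    """Return True if writing this sector non-sequentially would need a fifth
    CBI sector, forcing the chaotic logical group to be consolidated and closed."""
    subgroup = logical_sector // SECTORS_PER_SUBGROUP
    if subgroup in chaotic_subgroups:
        return False                       # already covered by an existing CBI
    return len(chaotic_subgroups) >= MAX_CHAOTIC_SUBGROUPS
```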
- Indexing of Partially Sequential Chaotic Update Block
- When a sequential update block has been partially written before the block is converted to chaotic management mode, all or part of the sequentially updated section of the logical group may continue to be treated as having been sequentially updated, and the chaotic update management may be applied to only a subset of the address range of the logical group.
- Control Data Integrity & Management
- Data stored in a memory device may become corrupted due to a power interruption or a certain memory location becoming defective. If a defect in a memory block is encountered, the data is relocated to a different block and the defective block is discarded. If the error is not extensive, it can be corrected on-the-fly by an error correction code (ECC) that is saved with the data. However, there are times when the ECC is unable to correct the corrupted data, for example when the number of error bits exceeds the capacity of the ECC. This is unacceptable for critical data such as the control data associated with the memory block management system.
- Examples of control data are the directory information and block allocation information associated with the memory block management system, such as those described in connection with
FIG. 20 . As described earlier, the control data is maintained in both high speed RAM and the slower nonvolatile memory blocks. Any frequently changing control data is maintained in RAM with periodic control writes to update equivalent information stored in a nonvolatile metablock. In this way, the control data is stored in nonvolatile, but slower flash memory without the need for frequent access. A hierarchy of control data structures such as GAT, CBI, MAP, and MAPA shown inFIG. 20 is maintained in flash memory. Thus, a control write operation causes information from control data structures in RAM to update equivalent control data structures in flash memory. - Critical Data Duplication
- According to another aspect of the invention, critical data such as some or all of the control data is guaranteed an extra level of reliability if it is maintained in duplicates. The duplication is performed in a manner that for multi-state memory system employing a two-pass programming technique for successively programming the multi-bits of the same set of memory cells, any programming error in the second pass will not corrupt the data established by the first pass. The duplication also helps with detection of write aborts, detection of misdetection (i.e., both copies have good ECC but data are different), and adds an extra level of reliability. Several techniques of data duplication are contemplated.
- In one embodiment, after two copies of a given data have been programmed in an earlier programming pass, a subsequent programming pass avoids programming the memory cells storing at least one of the two copies. In this way, at least one of the two copies will be unaffected in the event the subsequent programming pass aborts before completion and corrupts the data of the earlier pass.
- In another embodiment, the two copies of a given data are stored in two different blocks where at most one of the two copies have its memory cells programmed in a subsequent programming pass.
- In yet another embodiment, after two copies of a given data have been stored in a programming pass, no further programming will be performed on the set of memory cells storing the two copies. This is accomplished by programming the two copies in an ultimate programming pass for the set of memory cells.
- In yet another embodiment, the two copies of a given data is programmed into a multi-state memory in a binary programming mode, so that no further programming on the programmed memory cells will take place.
- In yet another embodiment, for a multi-state memory system employing a two-pass programming technique for successively programming the multi-bits of the same set of memory cells, a fault-tolerant code is employed to encode the multiple memory states such that data established by an earlier programming pass is insensitive to errors in a subsequent programming pass.
- A complication of data duplication arises in multi-state memory in which each memory cell stores more than one bit of data. For example, a 4-state memory can be represented by two bits. One existing technique is to use a 2-pass programming to program such memory. A first bit (lower page bit) is programmed by a first pass. Subsequently, the same cell is programmed in a second pass to represent a desired second bit (upper page bit). In order not to change the value of the first bit in the second pass, the memory state representation of the first bit is made to depend on the value of the second bit. Therefore, during the programming of the second bit, if an error occurs due to a power interruption or other causes and results in an incorrect memory state, the value of the first bit could also be corrupted.
-
FIG. 41A illustrates threshold voltage distributions of a 4-state memory array when each memory cell stores two bits of data. The four distributions represent the population of the four memory states, “U”, “X”, “Y” and “Z”. Before a memory cell is programmed it is first erased into its “U” or “unwritten” state. The memory states “X”, “Y” and “Z” are progressively reached as the memory cell get increasingly programmed. -
FIG. 41B illustrates an existing, 2-pass programming scheme using a Gray code. The four states can be represented by two bits, a lower page bit and an upper page bit, such as (Upper page bit, Lower page bit). For a page of cells to be programmed in parallel, there are really two logical pages: a logical lower page and a logical upper page. A first programming pass only programs the logical lower page. By appropriate coding, a subsequent, second programming pass on the same page of cells will program the logical upper page without resetting the logical lower page. A commonly used code is the Gray code where only one bit changes when transiting to an adjacent state. Therefore, this code has the advantage of placing less demand on error correction since only one bit is involved. - A general scheme in using the Gray code is to let “1” represent a “not program” condition. Thus, the erased memory state “U” is represented by (Upper page bit, Lower page bit)=(1, 1). In a first pass to program the logical lower page, any cell to store the data “0” will therefore have its logical state transit from (x, 1) to (x, 0), where “x” represent the “don't care” value of the upper bit. However, since the upper bit has not been programmed yet, “x” may as well be labeled by “1” for consistency. The (1, 0) logical state is represented by programming the cell to the memory state “X”. That is, prior to a second program pass, the lower bit value of “0” is represented by the memory state “X”.
- A second pass programming is performed to store the bits of the logical upper page. Only those cells needing an upper page bit value of “0” will be programmed. After the first pass, the cells in the page are either in the logical state (1, 1) or (1, 0). In order to preserve the values of the lower page in the second pass, the lower bit value of “0” or “1” need be distinguished. For the transition from (1, 0) to (0, 0) the memory cell in question is programmed to the memory state “Y”. For the transition from (1, 1) to (0, 1) the memory cell in question is programmed to the memory state “Z”. In this way, during read, by determining the memory state programmed in a cell, both the lower page bit and the upper page bit can be decoded.
- However, the Gray-code, 2-pass programming scheme can become a problem when the second-pass programming is erroneous. For example, the programming of the upper page bit to “0” while the lower bit is at “1” will cause the transition from (1, 1) to (0, 1). This requires the memory cell to be progressively programmed from “U” through “X” and “Y” to “Z”. If there is a power outage before the completion of the programming, the memory cell may end up in one of the transition memory state, say “X”. When the memory cell is read, “X” will be decoded as the logical state (1, 0). This gives incorrect results for both the upper and lower bits, since it should have been (0, 1). Similarly, if the programming is interrupted when “Y” is reached, it would correspond to (0, 0). While the upper bit is now correct, the lower bit is still wrong.
- Thus, it can be seen that a problem in the upper page programming can corrupt data already in the lower page. In particular when a second pass programming involves passing over an intermediate memory state, a program abort could have the programming ending up in that memory state, resulting in an incorrect lower page bit being decoded.
-
FIG. 42 illustrates a way of safe guarding critical data by saving each sector in duplicates. For example, the sectors A, B, C, and D are saved in duplicate copies. If there is corruption of the data in one sector copy, then the other one can be read instead. -
FIG. 43 illustrates the non-robustness in which duplicate sectors are typically saved in a multi-state memory. As described earlier, in an example 4-state memory, the multi-state page actually includes a logical lower page and a logical upper page that are respectively programmed in two passes. In the example shown, the page is four sector wide. Thus, sector A and its duplicate will be concurrently programmed in the logical lower page and similarly for sector B and its duplicate. Then in a subsequent second pass of programming in the logical upper page, the sectors C, C will be concurrently programmed and likewise for the sectors D, D. If a program abort occurs in the middle of the programming of sectors C, C, the sectors A, A, in the lower page will be corrupted. Unless, the lower page sectors are first read and buffered prior to the upper page programming, they may be unrecoverable if corrupted. Thus, saving two copies of critical data concurrently such as sectors A, A can not prevent both of them being corrupted by a subsequent problematic saving of sectors C, C in its upper page. -
FIG. 44A illustrates one embodiment of saving staggered duplicate copies of critical data to a multi-state memory. Basically the lower page is saved in the same manner as inFIG. 43 , namely, sectors A, A and sectors B, B. However, in the upper page programming, the sectors C and D are interleaved with their duplicates as C, D, C, D. If partial page programming is supported, the two copies of sector C can be programmed concurrently and likewise for the two copies of sector D. If the program of say, the two sectors C gets aborted, it can corrupt the lower page only in terms of one copy of sector A and one copy of sector B. The other copies remain unaffected. Thus, if there are two copies of critical data stored in a first pass, they should not be subject to a subsequent second pass programming simultaneously. -
FIG. 44B illustrates another embodiment of saving duplicate copies of critical data only to the logical upper page of a multi-state memory. In this case, the data in the lower page is not used. The critical data and its duplicates, such as sectors A, A and sectors B, B are saved to the logical upper page only. In this way, if there is a program abort, the critical data could be rewritten in another logical upper page, while any corruption to the lower page data would be immaterial. This solution basically uses half the storage capacity of each multi-state page. -
- FIG. 44C illustrates yet another embodiment of saving duplicate copies of critical data in a binary mode of a multi-state memory. In this case, each memory cell is programmed in a binary mode where its threshold range is only divided into two regions. Thus, there is only one-pass programming, and the programming can be restarted in a different location if a program abort occurs. This solution also uses half the storage capacity of each multi-state page. Operating a multi-state memory in binary mode is described in U.S. Pat. No. 6,456,528 B1, the entire disclosure of which is incorporated herein by reference.
- FIG. 45 illustrates yet another embodiment of saving duplicate copies of critical data concurrently to two different metablocks. If one of the blocks becomes unavailable, the data can be read from the other one. For example, the critical data are contained in sectors A, B, C, D and E, F, G, H, and I, J, K, L. Each sector is saved in duplicate. The two copies are written concurrently to two different blocks, Block 0 and Block 1. If one copy is written to a logical lower page, the other copy is written to a logical upper page. In this way, there will always be a copy programmed to a logical upper page. If a program abort occurs, it can be reprogrammed to another logical upper page. At the same time, if the lower page is corrupted, there will always be another upper page copy in the other block.
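A minimal sketch of this dual-metablock scheme; the program_page() routine is a hypothetical stand-in for the controller's page-program primitive and is not part of the specification:

```c
#include <stdio.h>

typedef enum { LOWER_PAGE, UPPER_PAGE } page_type_t;

/* Hypothetical stand-in for the controller's page-program primitive. */
static void program_page(int block, page_type_t type, char sector)
{
    printf("block %d: program sector %c to %s page\n",
           block, sector, type == LOWER_PAGE ? "lower" : "upper");
}

/* Write the two copies of one critical sector concurrently to Block 0 and
 * Block 1, always on complementary page types, so that one copy always sits
 * in a logical upper page; if the lower-page copy is later corrupted by an
 * aborted upper-page program, the upper-page copy in the other block survives. */
static void write_critical_duplicate(char sector, page_type_t block0_page)
{
    page_type_t block1_page = (block0_page == LOWER_PAGE) ? UPPER_PAGE : LOWER_PAGE;
    program_page(0, block0_page, sector);
    program_page(1, block1_page, sector);
}

int main(void)
{
    write_critical_duplicate('A', LOWER_PAGE);  /* Block 0 lower page, Block 1 upper page */
    write_critical_duplicate('B', UPPER_PAGE);  /* Block 0 upper page, Block 1 lower page */
    return 0;
}
```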
- FIG. 46B illustrates yet another embodiment of saving duplicate copies of critical data concurrently by the use of a fault-tolerant code. FIG. 46A is similar to FIG. 41A in illustrating the threshold voltage distributions of a 4-state memory array and is shown as a reference for FIG. 46B. The fault-tolerant code essentially avoids any upper page programming that transits through an intermediate memory state. Thus, in the first-pass lower page programming, the logical state (1, 1) transits to (1, 0), as represented by programming the erased memory state “U” to “Y”. In the second-pass programming of the upper page bit to “0”, if the lower page bit is at “1”, the logical state (1, 1) transits to (0, 1), as represented by programming the erased memory state “U” to “X”. If the lower page bit is at “0”, the logical state (1, 0) transits to (0, 0), as represented by programming the memory state “Y” to “Z”. Since the upper page programming only involves programming to the next adjacent memory state, a program abort does not change the lower page bit.
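A minimal sketch of this fault-tolerant assignment, assuming the state-to-bit mapping just described; the enum and table names are illustrative only:

```c
#include <stdio.h>

/* Fault-tolerant (FIG. 46B-style) assignment: states U, X, Y, Z hold the
 * (upper, lower) pairs (1,1), (0,1), (1,0), (0,0).  Names are illustrative. */
typedef enum { ST_U, ST_X, ST_Y, ST_Z } mstate_t;

static const struct { int upper, lower; } ft_decode[] = {
    [ST_U] = { 1, 1 }, [ST_X] = { 0, 1 }, [ST_Y] = { 1, 0 }, [ST_Z] = { 0, 0 },
};

int main(void)
{
    /* Upper-page programming only steps to the next adjacent state:
     * U -> X when the lower bit is 1, or Y -> Z when the lower bit is 0.
     * Whether the step completes or aborts, the lower bit decodes the same. */
    printf("U->X abort: lower bit is %d before, %d after\n",
           ft_decode[ST_U].lower, ft_decode[ST_X].lower);   /* 1, 1 */
    printf("Y->Z abort: lower bit is %d before, %d after\n",
           ft_decode[ST_Y].lower, ft_decode[ST_Z].lower);   /* 0, 0 */
    return 0;
}
```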
- Serial Write
- The duplicate copies of critical data are preferably written concurrently, as described above. Another way to avoid corrupting both copies at the same time is to write the copies sequentially. This method is slower, but the copies themselves indicate whether their programming was successful when the controller checks both copies:
- FIG. 47 is a table showing the possible states of the two copies of data and the validity of the data; a sketch of the corresponding decision logic follows the cases below.
- If both the first and second copies have no ECC error, then the programming of the data is believed to have been completely successful. Valid data can be obtained from either copy.
- If the first copy has no ECC error but the second has an ECC error, it could mean that the programming was interrupted in the middle of the second copy's programming. The first copy contains valid data. The second copy's data is not trustworthy even if the error is correctable.
- If the first copy has no ECC error and the second is empty (erased), it could mean that the programming was interrupted after the end of the first copy's programming but before the commencement of the second copy's. The first copy contains valid data.
- If the first copy has an ECC error and the second is empty (erased), it could mean that the programming was interrupted in the middle of the first copy's programming. The first copy may contain invalid data even if the error is correctable.
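A minimal sketch of the decision logic for the cases above; the per-copy status enum and the selection function are illustrative assumptions, not part of the specification:

```c
#include <stdbool.h>
#include <stdio.h>

/* Per-copy status as reported by the read path; names are illustrative. */
typedef enum { COPY_GOOD, COPY_ECC_ERROR, COPY_ERASED } copy_status_t;

/* Decide which copy (if any) to trust after a serial write, following the
 * cases above: only a clean first copy is treated as valid, and a second
 * copy showing an ECC error is distrusted even if the error is correctable. */
static bool select_valid_copy(copy_status_t first, copy_status_t second,
                              int *copy_to_use)
{
    if (first == COPY_GOOD && second == COPY_GOOD)      { *copy_to_use = 1; return true; } /* either copy */
    if (first == COPY_GOOD && second == COPY_ECC_ERROR) { *copy_to_use = 1; return true; }
    if (first == COPY_GOOD && second == COPY_ERASED)    { *copy_to_use = 1; return true; }
    /* First copy erroneous or erased: the write never completed reliably.  */
    return false;
}

int main(void)
{
    int use;
    if (select_valid_copy(COPY_GOOD, COPY_ECC_ERROR, &use))
        printf("use copy %d\n", use);   /* -> use copy 1                      */
    if (!select_valid_copy(COPY_ECC_ERROR, COPY_ERASED, &use))
        printf("no valid copy\n");      /* interrupted during the first copy  */
    return 0;
}
```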
- For reading of data that is maintained in duplicate, the following techniques are preferred, as they take advantage of the existence of duplicate copies. Read and compare both copies. In this case, the states of both copies as shown in FIG. 47 can be used to ensure that there is no error misdetection.
- In another embodiment, where the controller reads only one copy for speed and simplicity, the copy that is read preferably alternates between the two copies. For example, when the controller reads control data, it reads, say, Copy 1; the next control read (any control read) should then be from Copy 2, then Copy 1 again, and so on. In this way, the two copies are read and checked for integrity (ECC checked) regularly, which reduces the risk of not detecting in time errors caused by deteriorating data retention. For example, if only Copy 1 were normally read, Copy 2 could gradually deteriorate to the level where its error can no longer be salvaged by ECC, and the second copy could no longer be used.
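A minimal sketch of the alternating read, assuming a hypothetical read_copy() primitive standing in for the controller's own read path:

```c
#include <stdio.h>

/* Hypothetical read primitive for one copy of a control sector. */
static int read_copy(int copy) { printf("read copy %d\n", copy); return 0; }

/* Alternate which duplicate is read on every control read so that both
 * copies are regularly exercised and ECC-checked, catching retention
 * degradation in either copy before it becomes uncorrectable. */
static int read_control_data(void)
{
    static int next_copy = 1;               /* persists across control reads */
    int copy = next_copy;
    next_copy = (next_copy == 1) ? 2 : 1;   /* toggle for the following read */
    return read_copy(copy);
}

int main(void)
{
    read_control_data();   /* reads copy 1       */
    read_control_data();   /* reads copy 2       */
    read_control_data();   /* reads copy 1 again */
    return 0;
}
```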
- Pre-Emptive Data Relocation
- As described in connection with FIG. 20, the block management system maintains a set of control data in flash memory during its operation. This set of control data is stored in metablocks in a manner similar to host data. As such, the control data itself is block managed and is subject to updates and therefore to garbage collection operations.
- It has also been described that a hierarchy of control data exists, with the ones lower in the hierarchy being updated more often than those higher up. For example, assuming that every control block has N control sectors to write, the following sequence of control updates and control block relocations normally happens. Referring to FIG. 20 again, every N CBI updates fill up the CBI block and trigger a CBI relocation (rewrite) and a MAP update. If a Chaotic block gets closed, it also triggers a GAT update. Every GAT update triggers a MAP update. Every N GAT updates fill up the block and trigger a GAT block relocation. In addition, when a MAP block gets full, it triggers a MAP block relocation and a MAPA block update (if the MAPA block exists; otherwise the BOOT block points directly to MAP). In addition, when a MAPA block gets full, it triggers a MAPA block relocation, a BOOT block update and a MAP update. In addition, when a BOOT block gets full, it triggers an active BOOT block relocation to another BOOT block.
- Since the hierarchy is formed by the BOOT control data at the top, followed by MAPA, MAP and then GAT, in every N³ GAT updates there will be a “cascade control update”, where all of the GAT, MAP, MAPA and BOOT blocks would be relocated. When the GAT update is caused by a Chaotic or Sequential Update block closure as a result of a host write, there will also be a garbage collection operation (i.e., relocation or rewrite). In the case of a Chaotic Update Block garbage collection, a CBI would be updated, and that can also trigger a CBI block relocation. Thus, in this extreme situation, a large number of metablocks need to be garbage collected at the same time.
- It can be seen that each control data block of the hierarchy has its own periodicity in terms of getting filled and being relocated. If each proceeds normally, there will be times when the phases of a large number of the blocks line up, triggering a massive relocation or garbage collection involving all of those blocks at the same time. Relocation of many control blocks takes a long time and should be avoided, as some hosts do not tolerate the long delays caused by such massive control operations.
- According to another aspect of the invention, in a nonvolatile memory with a block management system, a “control garbage collection” or preemptive relocation of a memory block is implemented to avoid the situation where a large number of update blocks all happen to need relocation concurrently. For example, this situation can happen when updating the control data used for controlling the operation of the block management system. A hierarchy of control data types can exist with varying degrees of update frequency, resulting in their associated update blocks requiring garbage collection or relocation at different rates. There will be certain times when the garbage collection operations of more than one control data type coincide. In the extreme situation, the relocation phases of the update blocks for all control data types could line up, resulting in all of the update blocks requiring relocation at the same time.
- This undesirable situation is avoided by the present invention: whenever a current memory operation can accommodate a voluntary garbage collection operation, a preemptive relocation of an update block takes place in advance of the block being totally filled. In particular, priority is given to the block holding the data type highest up in the hierarchy, which has the slowest update rate. In this way, once the slowest-rate blocks are relocated, they will not need another garbage collection for a relatively long time. Also, the slower-rate blocks, being higher up in the hierarchy, do not have much of a cascade of relocations to trigger. The inventive method can be regarded as introducing a form of dithering into the overall mix of operations in order to avoid alignment of the phases of the various blocks in question. Thus, whenever an opportunity arises, a slow-filling block that has a slight margin from being totally filled is relocated preemptively.
- In a system with a hierarchy of control data such that data lower in the hierarchy changes faster than data higher up due to a cascading effect, priority is given to a block of control data higher up in the hierarchy. One example of an opportunity for performing a voluntary, preemptive relocation is when a host write does not itself trigger a relocation, so any surplus in its latency period can be utilized for the preemptive relocation operation. Generally, the margin in advance of the block absolutely having to be relocated is a predetermined number of unwritten memory units before the block is totally full. Consideration is given to the margin being sufficient to precipitate relocation in advance of the block being totally filled, but without being so premature as to waste resources. In the preferred embodiment, the predetermined number of unwritten memory units is between one and six memory units.
- FIG. 48 illustrates a flow chart of preemptive relocation of a memory block storing control data.
- STEP 1202: Organizing a nonvolatile memory into blocks, each block partitioned into memory units that are erasable together.
- STEP 1204: Maintaining different types of data.
- STEP 1206: Assigning a ranking to the different types of data.
- STEP 1208: Storing updates of said different types of data among a plurality of blocks so that each block is storing essentially data of the same type.
- STEP 1210: In response to a block having less than a predetermined number of empty memory units and having the data type of highest rank among said plurality of blocks, relocating current updates of data of said block to another block. Go to STEP 1208 unless interrupted.
- An example algorithm for implementing preemptive relocation for the control data shown in FIG. 20 is as follows:
- If ((there is no garbage collection due to user data) OR (MAP has 6 or fewer unwritten sectors left) OR (GAT has 3 or fewer unwritten sectors left))
Then
    If (BOOT has 1 unwritten sector left)
        Then relocate BOOT (i.e., relocate to a block)
    Else If (MAPA has 1 unwritten sector left)
        Then relocate MAPA and update MAP
    Else If (MAP has 1 unwritten sector left)
        Then relocate MAP
    Else If (the last updated, or the biggest, GAT has 1 unwritten sector left)
        Then relocate GAT
    Else If (CBI has 1 unwritten sector left)
        Then relocate CBI
    Else Exit
- Thus, the preemptive relocations are normally done when there is no user data garbage collection taking place. In the worst case, when every host write triggers user data garbage collection but there is enough time for voluntary relocation of one block, the preemptive relocation can be performed one control block at a time.
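A minimal sketch in C mirroring the listing above; the block handles, the per-block unwritten-sector counts and the relocate()/update() stubs are illustrative stand-ins for the controller's own structures, not part of the specification:

```c
#include <stdbool.h>
#include <stdio.h>

/* Control blocks of FIG. 20, ordered from highest to lowest rank. */
typedef enum { BOOT, MAPA, MAP, GAT, CBI, NUM_CTRL } ctrl_block_t;

static const char *name[NUM_CTRL] = { "BOOT", "MAPA", "MAP", "GAT", "CBI" };

/* Illustrative state: sectors still unwritten in each control block. */
static int unwritten[NUM_CTRL] = { 4, 1, 6, 3, 2 };

static void relocate(ctrl_block_t b) { printf("relocate %s\n", name[b]); }
static void update(ctrl_block_t b)   { printf("update %s\n", name[b]); }

/* Mirror of the listing above: relocate at most one control block per call,
 * giving priority to the block highest in the hierarchy (slowest-changing). */
static void preemptive_relocation(bool user_gc_pending)
{
    if (!(!user_gc_pending || unwritten[MAP] <= 6 || unwritten[GAT] <= 3))
        return;                                    /* no opportunity this time */

    if      (unwritten[BOOT] <= 1) relocate(BOOT);
    else if (unwritten[MAPA] <= 1) { relocate(MAPA); update(MAP); }
    else if (unwritten[MAP]  <= 1) relocate(MAP);
    else if (unwritten[GAT]  <= 1) relocate(GAT);  /* last-updated/biggest GAT */
    else if (unwritten[CBI]  <= 1) relocate(CBI);
    /* else: nothing is close enough to full; exit without relocating */
}

int main(void)
{
    preemptive_relocation(false);   /* -> relocate MAPA, update MAP */
    return 0;
}
```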
- As the user data garbage collection operations and control updates may coincide with physical errors, it is better to have larger safety margins by performing the preemptive relocation, or control garbage collection, earlier, say, at the point when the block still has two or more unwritten memory units (e.g., sectors).
- Although the various aspects of the present invention have been described with respect to certain embodiments, it is understood that the invention is entitled to protection within the full scope of the appended claims.
Claims (37)
Priority Applications (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/917,867 US20050141312A1 (en) | 2003-12-30 | 2004-08-13 | Non-volatile memory and method with non-sequential update block management |
KR1020067013317A KR20070007264A (en) | 2003-12-30 | 2004-12-21 | Non-volatile memory and method with non-sequential update block management |
EP10009157.8A EP2293196B1 (en) | 2003-12-30 | 2004-12-21 | Non-volatile memory and method with non-sequential update block management |
PCT/US2004/043762 WO2005066793A2 (en) | 2003-12-30 | 2004-12-21 | Non-volatile memory and method with non-sequential update block management |
JP2006547516A JP4851344B2 (en) | 2003-12-30 | 2004-12-21 | Non-volatile memory and method with nonsequential update block management |
EP04815767A EP1704484A2 (en) | 2003-12-30 | 2004-12-21 | Non-volatile memory and method with non-sequential update block management |
TW093141426A TWI288328B (en) | 2003-12-30 | 2004-12-30 | Non-volatile memory and method with non-sequential update block management |
US12/239,489 US8103841B2 (en) | 2003-12-30 | 2008-09-26 | Non-volatile memory and method with non-sequential update block management |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/750,155 US7139864B2 (en) | 2003-12-30 | 2003-12-30 | Non-volatile memory and method with block management system |
US10/917,867 US20050141312A1 (en) | 2003-12-30 | 2004-08-13 | Non-volatile memory and method with non-sequential update block management |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/750,155 Continuation-In-Part US7139864B2 (en) | 2003-12-30 | 2003-12-30 | Non-volatile memory and method with block management system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/239,489 Continuation US8103841B2 (en) | 2003-12-30 | 2008-09-26 | Non-volatile memory and method with non-sequential update block management |
Publications (1)
Publication Number | Publication Date |
---|---|
US20050141312A1 true US20050141312A1 (en) | 2005-06-30 |
Family
ID=34701161
Family Applications (10)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/750,155 Active 2025-06-21 US7139864B2 (en) | 2003-12-30 | 2003-12-30 | Non-volatile memory and method with block management system |
US10/917,889 Abandoned US20050166087A1 (en) | 2003-12-30 | 2004-08-13 | Non-volatile memory and method with phased program failure handling |
US10/917,867 Abandoned US20050141312A1 (en) | 2003-12-30 | 2004-08-13 | Non-volatile memory and method with non-sequential update block management |
US10/917,725 Active 2025-06-26 US8051257B2 (en) | 2003-12-30 | 2004-08-13 | Non-volatile memory and method with control data management |
US10/917,888 Abandoned US20050141313A1 (en) | 2003-12-30 | 2004-08-13 | Non-volatile memory and method with memory planes alignment |
US12/239,489 Expired - Fee Related US8103841B2 (en) | 2003-12-30 | 2008-09-26 | Non-volatile memory and method with non-sequential update block management |
US12/239,351 Expired - Lifetime US7913061B2 (en) | 2003-12-30 | 2008-09-26 | Non-volatile memory and method with memory planes alignment |
US12/248,160 Expired - Lifetime US7945759B2 (en) | 2003-12-30 | 2008-10-09 | Non-volatile memory and method with phased program failure handling |
US13/102,261 Expired - Lifetime US8621177B2 (en) | 2003-12-30 | 2011-05-06 | Non-volatile memory and method with phased program failure handling |
US13/244,178 Expired - Fee Related US8239643B2 (en) | 2003-12-30 | 2011-09-23 | Non-volatile memory and method with control data management |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/750,155 Active 2025-06-21 US7139864B2 (en) | 2003-12-30 | 2003-12-30 | Non-volatile memory and method with block management system |
US10/917,889 Abandoned US20050166087A1 (en) | 2003-12-30 | 2004-08-13 | Non-volatile memory and method with phased program failure handling |
Family Applications After (7)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/917,725 Active 2025-06-26 US8051257B2 (en) | 2003-12-30 | 2004-08-13 | Non-volatile memory and method with control data management |
US10/917,888 Abandoned US20050141313A1 (en) | 2003-12-30 | 2004-08-13 | Non-volatile memory and method with memory planes alignment |
US12/239,489 Expired - Fee Related US8103841B2 (en) | 2003-12-30 | 2008-09-26 | Non-volatile memory and method with non-sequential update block management |
US12/239,351 Expired - Lifetime US7913061B2 (en) | 2003-12-30 | 2008-09-26 | Non-volatile memory and method with memory planes alignment |
US12/248,160 Expired - Lifetime US7945759B2 (en) | 2003-12-30 | 2008-10-09 | Non-volatile memory and method with phased program failure handling |
US13/102,261 Expired - Lifetime US8621177B2 (en) | 2003-12-30 | 2011-05-06 | Non-volatile memory and method with phased program failure handling |
US13/244,178 Expired - Fee Related US8239643B2 (en) | 2003-12-30 | 2011-09-23 | Non-volatile memory and method with control data management |
Country Status (9)
Country | Link |
---|---|
US (10) | US7139864B2 (en) |
EP (2) | EP1704571B1 (en) |
JP (5) | JP4938460B2 (en) |
KR (1) | KR101012274B1 (en) |
CN (5) | CN100435115C (en) |
AT (3) | ATE498893T1 (en) |
DE (3) | DE602004031461D1 (en) |
TW (1) | TWI272483B (en) |
WO (1) | WO2005066972A1 (en) |
Cited By (132)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050144396A1 (en) * | 2003-12-31 | 2005-06-30 | Eschmann Michael K. | Coalescing disk write back requests |
US20060067181A1 (en) * | 1999-05-08 | 2006-03-30 | Samsung Electronics Co., Ltd | Recording medium for storing linking type information and method of processing defective area using the same |
US20060184722A1 (en) * | 2005-02-16 | 2006-08-17 | Sinclair Alan W | Direct data file storage implementation techniques in flash memories |
US20060184723A1 (en) * | 2005-02-16 | 2006-08-17 | Sinclair Alan W | Direct file data programming and deletion in flash memories |
US7139864B2 (en) | 2003-12-30 | 2006-11-21 | Sandisk Corporation | Non-volatile memory and method with block management system |
US20070033325A1 (en) * | 2005-08-03 | 2007-02-08 | Sinclair Alan W | Non-volatile memory with scheduled reclaim operations |
US20070033373A1 (en) * | 2005-08-03 | 2007-02-08 | Sinclair Alan W | Method and system for dual mode access for storage devices |
US20070033374A1 (en) * | 2005-08-03 | 2007-02-08 | Sinclair Alan W | Reprogrammable Non-Volatile Memory Systems With Indexing of Directly Stored Data Files |
US20070033375A1 (en) * | 2005-08-03 | 2007-02-08 | Sinclair Alan W | Indexing of File Data in Reprogrammable Non-Volatile Memories That Directly Store Data Files |
US20070086260A1 (en) * | 2005-10-13 | 2007-04-19 | Sinclair Alan W | Method of storing transformed units of data in a memory system having fixed sized storage blocks |
US20070084375A1 (en) * | 2005-08-10 | 2007-04-19 | Smith Kyle S | High density cartridge and method for reloading |
US20070101095A1 (en) * | 2005-10-27 | 2007-05-03 | Sandisk Corporation | Methods for adaptively handling data writes in non-volatile memories |
US20070136555A1 (en) * | 2005-12-13 | 2007-06-14 | Sinclair Alan W | Logically-addressed file storage methods |
US20070143570A1 (en) * | 2005-12-21 | 2007-06-21 | Gorobets Sergey A | Method and system for accessing non-volatile storage devices |
US20070143532A1 (en) * | 2005-12-21 | 2007-06-21 | Gorobets Sergey A | Method and system for accessing non-volatile storage devices |
US20070143378A1 (en) * | 2005-12-21 | 2007-06-21 | Gorobets Sergey A | Non-volatile memories with adaptive file handling in a directly mapped file storage system |
US20070143561A1 (en) * | 2005-12-21 | 2007-06-21 | Gorobets Sergey A | Methods for adaptive file data handling in non-volatile memories with a directly mapped file storage system |
US20070140020A1 (en) * | 2005-12-15 | 2007-06-21 | Simpletech, Inc. | Parallel data storage system |
WO2007073536A2 (en) * | 2005-12-21 | 2007-06-28 | Sandisk Corporation | Non-volatile memories and methods with memory allocation for a directly mapped file storage system |
US20070156998A1 (en) * | 2005-12-21 | 2007-07-05 | Gorobets Sergey A | Methods for memory allocation in non-volatile memories with a directly mapped file storage system |
US20080034174A1 (en) * | 2006-08-04 | 2008-02-07 | Shai Traister | Non-volatile memory storage systems for phased garbage collection |
US20080034175A1 (en) * | 2006-08-04 | 2008-02-07 | Shai Traister | Methods for phased garbage collection |
EP1895418A1 (en) * | 2005-12-09 | 2008-03-05 | Matsushita Electric Industrial Co., Ltd. | Nonvolatile memory device, method of writing data, and method of reading out data |
US20080071969A1 (en) * | 2006-09-15 | 2008-03-20 | Lin Jason T | Method for Class-Based Update Block Replacement Rules in Non-Volatile Memory |
US20080071970A1 (en) * | 2006-09-15 | 2008-03-20 | Lin Jason T | Non-Volatile Memory With Class-Based Update Block Replacement Rules |
US20080082775A1 (en) * | 2006-09-29 | 2008-04-03 | Sergey Anatolievich Gorobets | System for phased garbage collection |
US20080082596A1 (en) * | 2006-09-29 | 2008-04-03 | Sergey Anatolievich Gorobets | Method for phased garbage collection |
WO2008042592A2 (en) * | 2006-09-29 | 2008-04-10 | Sandisk Corporation | Phased garbage collection |
US20080086619A1 (en) * | 2006-09-28 | 2008-04-10 | Shai Traister | Methods for phased garbage collection using phased garbage collection block or scratch pad block as a buffer |
WO2008042596A2 (en) * | 2006-09-28 | 2008-04-10 | Sandisk Corporation | Memory systems for phased garbage collection using phased garbage collection block or scratch pad block as a buffer |
WO2008045839A1 (en) * | 2006-10-12 | 2008-04-17 | Sandisk Corporation | Non-volatile memory with worst-case control data management and methods therefor |
US20080091872A1 (en) * | 2005-01-20 | 2008-04-17 | Bennett Alan D | Scheduling of Housekeeping Operations in Flash Memory Systems |
US20080091871A1 (en) * | 2006-10-12 | 2008-04-17 | Alan David Bennett | Non-volatile memory with worst-case control data management |
EP1923792A1 (en) | 2006-11-20 | 2008-05-21 | Samsung Electronics Co., Ltd | Apparatus and method of managing nonvolatile memory |
US7383375B2 (en) | 2003-12-30 | 2008-06-03 | Sandisk Corporation | Data run programming |
US20080155175A1 (en) * | 2006-12-26 | 2008-06-26 | Sinclair Alan W | Host System That Manages a LBA Interface With Flash Memory |
US20080155227A1 (en) * | 2006-12-26 | 2008-06-26 | Sinclair Alan W | Managing a LBA Interface in a Direct Data File Memory System |
US20080155178A1 (en) * | 2006-12-26 | 2008-06-26 | Sinclair Alan W | Use of a Direct Data File System With a Continuous Logical Address Space Interface |
US20080162612A1 (en) * | 2006-12-28 | 2008-07-03 | Andrew Tomlin | Method for block relinking |
US20080162787A1 (en) * | 2006-12-28 | 2008-07-03 | Andrew Tomlin | System for block relinking |
WO2008082950A1 (en) * | 2006-12-28 | 2008-07-10 | Sandisk Corporation | System for block relinking |
US20080183953A1 (en) * | 2006-12-06 | 2008-07-31 | David Flynn | Apparatus, system, and method for storage space recovery in solid-state storage |
US20080222350A1 (en) * | 2007-03-09 | 2008-09-11 | Shih-Chieh Chang | Flash memory device for storing data and method thereof |
US7433993B2 (en) | 2003-12-30 | 2008-10-07 | San Disk Corportion | Adaptive metablocks |
US7441071B2 (en) | 2006-09-28 | 2008-10-21 | Sandisk Corporation | Memory systems for phased garbage collection using phased garbage collection block or scratch pad block as a buffer |
US20080294814A1 (en) * | 2007-05-24 | 2008-11-27 | Sergey Anatolievich Gorobets | Flash Memory System with Management of Housekeeping Operations |
WO2008147752A1 (en) * | 2007-05-24 | 2008-12-04 | Sandisk Corporation | Managing housekeeping operations in flash memory |
US20080307156A1 (en) * | 2007-06-08 | 2008-12-11 | Sinclair Alan W | System For Interfacing A Host Operating Through A Logical Address Space With A Direct File Storage Medium |
US20080307158A1 (en) * | 2007-06-08 | 2008-12-11 | Sinclair Alan W | Method and apparatus for providing data type and host file information to a mass storage system |
US20080307155A1 (en) * | 2007-06-08 | 2008-12-11 | Sinclair Alan W | Method of Interfacing A Host Operating Through A Logical Address Space With A Direct File STorage Medium |
US7480766B2 (en) | 2005-08-03 | 2009-01-20 | Sandisk Corporation | Interfacing systems operating through a logical address space and on a direct data file basis |
US20090070518A1 (en) * | 2007-09-07 | 2009-03-12 | Shai Traister | Adaptive Block List Management |
US20090070520A1 (en) * | 2007-09-06 | 2009-03-12 | Nagamasa Mizushima | Semiconductor storage device and method of controlling semiconductor storage device |
US20090089482A1 (en) * | 2007-09-28 | 2009-04-02 | Shai Traister | Dynamic metablocks |
US20090094389A1 (en) * | 2007-10-09 | 2009-04-09 | Seagate Technology, Llc | System and method of matching data rates |
US20090125671A1 (en) * | 2006-12-06 | 2009-05-14 | David Flynn | Apparatus, system, and method for storage space recovery after reaching a read count limit |
US20090150641A1 (en) * | 2007-12-06 | 2009-06-11 | David Flynn | Apparatus, system, and method for efficient mapping of virtual and physical addresses |
US7552271B2 (en) | 2005-08-03 | 2009-06-23 | Sandisk Corporation | Nonvolatile memory with block management |
US20090164705A1 (en) * | 2007-12-21 | 2009-06-25 | Gorobets Sergey A | System and Method for Implementing Extensions to Intelligently Manage Resources of a Mass Storage System |
US7558906B2 (en) | 2005-08-03 | 2009-07-07 | Sandisk Corporation | Methods of managing blocks in nonvolatile memory |
US20090198895A1 (en) * | 2008-02-05 | 2009-08-06 | Via Technologies, Inc. | Control method, memory, and processing system utilizing the same |
US20090210611A1 (en) * | 2008-02-20 | 2009-08-20 | Nagamasa Mizushima | Storage system and data write method |
US20090210614A1 (en) * | 2005-12-21 | 2009-08-20 | Sergey Anatolievich Gorobets | Non-Volatile Memories With Versions of File Data Identified By Identical File ID and File Offset Stored in Identical Location Within a Memory Page |
US7631162B2 (en) | 2005-10-27 | 2009-12-08 | Sandisck Corporation | Non-volatile memory with adaptive handling of data writes |
US20100115194A1 (en) * | 2007-06-13 | 2010-05-06 | Toshio Suzuki | Semiconductor memory information storage apparatus and method of controlling writing |
US7739444B2 (en) | 2006-12-26 | 2010-06-15 | Sandisk Corporation | System using a direct data file system with a continuous logical address space interface |
US20100223421A1 (en) * | 2009-02-27 | 2010-09-02 | Samsung Electronics Co., Ltd. | User device including flash memory storing index and index accessing method thereof |
US7793068B2 (en) | 2005-12-21 | 2010-09-07 | Sandisk Corporation | Dual mode access for non-volatile storage devices |
US20100257308A1 (en) * | 2009-04-07 | 2010-10-07 | Sandisk Corporation | Host stop-transmission handling |
US7814262B2 (en) | 2005-10-13 | 2010-10-12 | Sandisk Corporation | Memory system storing transformed units of data in fixed sized storage blocks |
EP2260391A1 (en) * | 2008-03-01 | 2010-12-15 | Kabushiki Kaisha Toshiba | Memory system |
US20100318721A1 (en) * | 2009-06-16 | 2010-12-16 | Sandisk Corporation | Program failure handling in nonvolatile memory |
US20100318839A1 (en) * | 2009-06-16 | 2010-12-16 | Sandisk Corporation | Data recovery in multi-level cell nonvolatile memory |
US7877539B2 (en) | 2005-02-16 | 2011-01-25 | Sandisk Corporation | Direct data file storage in flash memories |
US7917686B2 (en) | 2006-12-26 | 2011-03-29 | Sandisk Corporation | Host system with direct data file interface configurability |
EP2359252A1 (en) * | 2008-12-17 | 2011-08-24 | LSI Corporation | Storage system snapshot assisted by ssd technology |
US20110238886A1 (en) * | 2010-03-23 | 2011-09-29 | Apple Inc. | Garbage collection schemes for index block |
US20120005451A1 (en) * | 2010-07-02 | 2012-01-05 | Samsung Electronics Co., Ltd | Data storage device and bad block managing method thereof |
CN102508788A (en) * | 2011-09-28 | 2012-06-20 | 成都市华为赛门铁克科技有限公司 | SSD (solid state drive) and SSD garbage collection method and device |
US8209461B2 (en) | 2006-12-26 | 2012-06-26 | Sandisk Technologies Inc. | Configuration of host LBA interface with flash memory |
US20120166718A1 (en) * | 2010-12-28 | 2012-06-28 | Silicon Motion, Inc. | Flash Storage Device and Data Writing Method Thereof |
US20120284469A1 (en) * | 2011-05-03 | 2012-11-08 | Seo Dong-Young | Memory system and bad block management method |
US20130041927A1 (en) * | 2011-08-10 | 2013-02-14 | Alibaba Group Holding Limited | Shrinking Virtual Hard Disk Image |
FR2982406A1 (en) * | 2011-11-07 | 2013-05-10 | St Microelectronics Rousset | SECURE MEMORY THAT PREVENTS DEGRADATION OF DATA |
TWI417884B (en) * | 2009-04-23 | 2013-12-01 | Phison Electronics Corp | Data accessing method for flash memory and storage system and controller using the same |
US20140059273A1 (en) * | 2012-08-24 | 2014-02-27 | Kabushiki Kaisha Toshiba | Host apparatus and memory device |
US20140089566A1 (en) * | 2012-09-25 | 2014-03-27 | Phison Electronics Corp. | Data storing method, and memory controller and memory storage apparatus using the same |
US8762703B2 (en) | 2010-04-16 | 2014-06-24 | Micron Technology, Inc. | Boot partitions in memory devices and systems |
CN103946805A (en) * | 2011-11-21 | 2014-07-23 | 桑迪士克科技股份有限公司 | Scrub techniques for use with dynamic read |
US8874872B2 (en) | 2011-01-21 | 2014-10-28 | Seagate Technology Llc | Garbage collection management in memories |
WO2014209984A1 (en) * | 2013-06-25 | 2014-12-31 | Marvell World Trade Ltd. | Adaptive cache memory controller |
US20150052300A1 (en) * | 2013-08-16 | 2015-02-19 | Micron Technology, Inc. | Data storage management |
US9021215B2 (en) | 2011-03-21 | 2015-04-28 | Apple Inc. | Storage system exporting internal storage rules |
US9037902B2 (en) | 2013-03-15 | 2015-05-19 | Sandisk Technologies Inc. | Flash memory techniques for recovering from write interrupt resulting from voltage fault |
US20150143029A1 (en) * | 2013-11-21 | 2015-05-21 | Sandisk Technologies Inc. | Dynamic logical groups for mapping flash memory |
WO2015099922A1 (en) * | 2013-12-26 | 2015-07-02 | Intel Corporation | Managing a transfer buffer for a non-volatile memory |
US9104315B2 (en) | 2005-02-04 | 2015-08-11 | Sandisk Technologies Inc. | Systems and methods for a mass data storage system having a file-based interface to a host and a non-file-based interface to secondary storage |
US9116823B2 (en) | 2006-12-06 | 2015-08-25 | Intelligent Intellectual Property Holdings 2 Llc | Systems and methods for adaptive error-correction coding |
US9170754B2 (en) | 2007-12-06 | 2015-10-27 | Intelligent Intellectual Property Holdings 2 Llc | Apparatus, system, and method for coordinating storage requests in a multi-processor/multi-thread environment |
CN105302501A (en) * | 2015-11-27 | 2016-02-03 | 浙江宇视科技有限公司 | Control method and device for disk sectors |
US20160134683A1 (en) * | 2013-05-31 | 2016-05-12 | Koninklijke Philips N.V. | System and method for automatically uploading, downloading, and updating data such as sleep study data |
US9361236B2 (en) | 2013-06-18 | 2016-06-07 | Arm Limited | Handling write requests for a data array |
US9455048B2 (en) | 2013-06-28 | 2016-09-27 | Sandisk Technologies Llc | NAND flash word line management using multiple fragment pools |
US9495241B2 (en) | 2006-12-06 | 2016-11-15 | Longitude Enterprise Flash S.A.R.L. | Systems and methods for adaptive data storage |
CN106610900A (en) * | 2015-10-22 | 2017-05-03 | 蜂巢数据有限公司 | Eliminating garbage collection in NAND flash devices |
USRE46446E1 (en) * | 2005-05-09 | 2017-06-20 | Sandisk Technologies Llc | Method and system for facilitating fast wake-up of a flash memory system |
EP3092570A4 (en) * | 2014-01-10 | 2017-09-13 | Samsung Electronics Co., Ltd. | Method for processing data in storage device and storage device |
US20170286477A1 (en) * | 2013-09-04 | 2017-10-05 | Amazon Technologies, Inc. | Database system recovery using non-volatile system memory |
US9817593B1 (en) | 2016-07-11 | 2017-11-14 | Sandisk Technologies Llc | Block management in non-volatile memory system with non-blocking control sync system |
US9983829B2 (en) * | 2016-01-13 | 2018-05-29 | Sandisk Technologies Llc | Physical addressing schemes for non-volatile memory systems employing multi-die interleave schemes |
US10049040B2 (en) | 2011-01-21 | 2018-08-14 | Seagate Technology Llc | Just in time garbage collection |
US10079612B1 (en) | 2017-03-15 | 2018-09-18 | Alibaba Group Holding Limited | Distributed erasure coding pool deployed in hyperscale infrastructure |
US20180275875A1 (en) * | 2017-03-23 | 2018-09-27 | Toshiba Memory Corporation | Memory system |
US20180293016A1 (en) * | 2016-09-07 | 2018-10-11 | Boe Technology Group Co., Ltd. | Method and apparatus for updating data in a memory for electrical compensation |
US10114562B2 (en) | 2014-09-16 | 2018-10-30 | Sandisk Technologies Llc | Adaptive block allocation in nonvolatile memory |
US20180341411A1 (en) * | 2017-05-24 | 2018-11-29 | Cisco Technology, Inc. | Intelligent layout of composite data structures in tiered storage |
EP2063428B1 (en) * | 2007-11-26 | 2018-12-26 | Kabushiki Kaisha Toshiba | Content data recording apparatus and method |
US20190005079A1 (en) * | 2017-06-30 | 2019-01-03 | Intel Corporation | Logical band-based key-value storage structure |
TWI651650B (en) * | 2018-02-05 | 2019-02-21 | 大陸商深圳大心電子科技有限公司 | Memory management method and storage controller using the same |
US10255178B2 (en) * | 2016-09-06 | 2019-04-09 | Toshiba Memory Corporation | Storage device that maintains a plurality of layers of address mapping |
CN110326045A (en) * | 2017-02-22 | 2019-10-11 | 美光科技公司 | Device and method for being operated in memory |
US10482009B1 (en) * | 2013-03-15 | 2019-11-19 | Google Llc | Use of a logical-to-logical translation map and a logical-to-physical translation map to access a data storage device |
US20200133526A1 (en) * | 2018-10-30 | 2020-04-30 | EMC IP Holding Company LLC | Write folding mechanism using reusable shared striping in a storage system |
US10854311B1 (en) * | 2019-08-28 | 2020-12-01 | Micron Technology, Inc. | Data redirection upon failure of a program operation |
US10978156B2 (en) * | 2018-06-29 | 2021-04-13 | Sandisk Technologies Llc | Concurrent programming of multiple cells for non-volatile memory devices |
US11152071B1 (en) * | 2020-05-27 | 2021-10-19 | Western Digital Technologies, Inc. | Erase operation reattempt to recover misidentified bad blocks resulting from consecutive erase failures |
US11436153B2 (en) * | 2020-05-26 | 2022-09-06 | Western Digital Technologies, Inc. | Moving change log tables to align to zones |
US11467744B2 (en) | 2020-05-27 | 2022-10-11 | Western Digital Technologies, Inc. | System to identify aggressor blocks causing back to back erase failure |
US11545221B2 (en) | 2018-06-29 | 2023-01-03 | Sandisk Technologies Llc | Concurrent programming of multiple cells for non-volatile memory devices |
WO2023034535A1 (en) * | 2021-09-01 | 2023-03-09 | Micron Technology, Inc. | Virtual management unit scheme for two-pass programming in a memory sub-system |
FR3129763A1 (en) * | 2021-12-01 | 2023-06-02 | Stmicroelectronics (Rousset) Sas | SYSTEM-ON-CHIP WITH NON-VOLATILE MEMORY |
US20240241657A1 (en) * | 2023-01-12 | 2024-07-18 | SK Hynix Inc. | Storage device and method of operating the same |
Families Citing this family (452)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8027194B2 (en) | 1988-06-13 | 2011-09-27 | Samsung Electronics Co., Ltd. | Memory system and method of accessing a semiconductor memory device |
US7953931B2 (en) * | 1999-08-04 | 2011-05-31 | Super Talent Electronics, Inc. | High endurance non-volatile memory devices |
US7299316B2 (en) * | 2004-02-26 | 2007-11-20 | Super Talent Electronics, Inc. | Memory flash card reader employing an indexing scheme |
US8019943B2 (en) * | 2000-01-06 | 2011-09-13 | Super Talent Electronics, Inc. | High endurance non-volatile memory devices |
US6426893B1 (en) * | 2000-02-17 | 2002-07-30 | Sandisk Corporation | Flash eeprom system with simultaneous multiple data sector programming and storage of physical block characteristics in other designated blocks |
EP1402356A2 (en) * | 2001-07-03 | 2004-03-31 | Research In Motion Limited | System and method of object-oriented persistence |
CA2515968C (en) * | 2003-02-12 | 2009-04-28 | Research In Motion Limited | An apparatus and methods for managing data used by a mobile device |
US20040192132A1 (en) * | 2003-03-20 | 2004-09-30 | Fay Ralph Michael | Fungi resistant asphalt and asphalt sheet materials |
US7173852B2 (en) * | 2003-10-03 | 2007-02-06 | Sandisk Corporation | Corrected data storage and handling methods |
DE10349595B3 (en) * | 2003-10-24 | 2004-12-09 | Hyperstone Ag | Writing sectors of block-deletable memory, writes to alternative memory block sectors in sequence, storing their positions in sector table |
TWI232466B (en) * | 2003-10-28 | 2005-05-11 | Prolific Technology Inc | Method for updating data of non-volatile memory |
US7173863B2 (en) * | 2004-03-08 | 2007-02-06 | Sandisk Corporation | Flash controller cache architecture |
JP3892851B2 (en) * | 2004-02-04 | 2007-03-14 | 株式会社東芝 | Memory card and semiconductor device |
KR100541642B1 (en) * | 2004-03-11 | 2006-01-10 | 삼성전자주식회사 | Data managing device and method thereof |
JP2005301591A (en) * | 2004-04-09 | 2005-10-27 | Toshiba Corp | Device with nonvolatile memory, and memory controller |
US7225291B2 (en) * | 2004-04-19 | 2007-05-29 | Phison Electronics Corp. | Storage controlling and judging method of flash memory |
HUP0400808A2 (en) * | 2004-04-19 | 2005-11-28 | Dr.Kozéky László Géza | Plasmatorch and its application in the metallurgy, in the pyrolisis with plasma energy, in the vitrification and in other material modification processes |
US8607016B2 (en) * | 2004-07-21 | 2013-12-10 | Sandisk Technologies Inc. | FAT analysis for optimized sequential cluster management |
US7441067B2 (en) | 2004-11-15 | 2008-10-21 | Sandisk Corporation | Cyclic flash memory wear leveling |
US7366826B2 (en) * | 2004-12-16 | 2008-04-29 | Sandisk Corporation | Non-volatile memory and method with multi-stream update tracking |
US7386655B2 (en) | 2004-12-16 | 2008-06-10 | Sandisk Corporation | Non-volatile memory and method with improved indexing for scratch pad and update blocks |
US7315916B2 (en) * | 2004-12-16 | 2008-01-01 | Sandisk Corporation | Scratch pad block |
US7412560B2 (en) * | 2004-12-16 | 2008-08-12 | Sandisk Corporation | Non-volatile memory and method with multi-stream updating |
US7882299B2 (en) * | 2004-12-21 | 2011-02-01 | Sandisk Corporation | System and method for use of on-chip non-volatile memory write cache |
JP4817836B2 (en) | 2004-12-27 | 2011-11-16 | 株式会社東芝 | Card and host equipment |
JP4815825B2 (en) * | 2005-03-10 | 2011-11-16 | 日本電気株式会社 | Disk array device and method for reconstructing the same |
JP4738038B2 (en) * | 2005-03-25 | 2011-08-03 | 株式会社東芝 | Memory card |
JP2006285669A (en) * | 2005-03-31 | 2006-10-19 | Toshiba Corp | Host apparatus |
US7849253B2 (en) * | 2005-04-04 | 2010-12-07 | Standard Microsystems Corporation | Method for fast access to flash-memory media |
DE102005022019A1 (en) * | 2005-05-12 | 2007-02-01 | Giesecke & Devrient Gmbh | Secure processing of data |
US8051270B2 (en) * | 2005-05-23 | 2011-11-01 | Panasonic Corporation | Memory controller, nonvolatile storage device, nonvolatile storage system, and memory control method |
US7873683B2 (en) * | 2005-07-01 | 2011-01-18 | Qnx Software Systems Gmbh & Co. Kg | File system having transaction record coalescing |
US7970803B2 (en) | 2005-07-01 | 2011-06-28 | Qnx Software Systems Gmbh & Co. Kg | Optimized startup verification of file system integrity |
US7809777B2 (en) * | 2005-07-01 | 2010-10-05 | Qnx Software Systems Gmbh & Co. Kg | File system having deferred verification of data integrity |
US8959125B2 (en) | 2005-07-01 | 2015-02-17 | 226008 Ontario Inc. | File system having inverted hierarchical structure |
US20070055793A1 (en) * | 2005-08-03 | 2007-03-08 | Wellsyn Technology, Inc. | System of managing peripheral interfaces in IPMI architecture and method thereof |
JP4533956B2 (en) * | 2005-08-03 | 2010-09-01 | サンディスク コーポレイション | Free up data storage capacity of flash memory system |
DE602006019263D1 (en) * | 2005-08-03 | 2011-02-10 | Sandisk Corp | NON-VOLATILE MEMORY WITH BLOCK ADMINISTRATION |
JP2007052717A (en) * | 2005-08-19 | 2007-03-01 | Fujitsu Ltd | Data transfer device and method |
US7558804B1 (en) * | 2005-08-26 | 2009-07-07 | American Megatrends, Inc. | Method, apparatus, and computer-readable medium for space-efficient storage of variables in a non-volatile computer memory |
JP4751163B2 (en) * | 2005-09-29 | 2011-08-17 | 株式会社東芝 | Memory system |
US20070106842A1 (en) * | 2005-11-04 | 2007-05-10 | Conley Kevin M | Enhanced first level storage caching methods using nonvolatile memory |
US7634585B2 (en) * | 2005-11-04 | 2009-12-15 | Sandisk Corporation | In-line cache using nonvolatile memory between host and disk device |
US7747927B2 (en) | 2005-11-22 | 2010-06-29 | Sandisk Corporation | Method for adapting a memory system to operate with a legacy host originally designed to operate with a different memory system |
US7739472B2 (en) | 2005-11-22 | 2010-06-15 | Sandisk Corporation | Memory system for legacy hosts |
US7644251B2 (en) | 2005-12-19 | 2010-01-05 | Sigmatel, Inc. | Non-volatile solid-state memory controller |
WO2007073538A2 (en) * | 2005-12-21 | 2007-06-28 | Sandisk Corporation | Non-volatile memories and methods with data alignment in a directly mapped file storage system |
US7245556B1 (en) * | 2005-12-28 | 2007-07-17 | Sandisk Corporation | Methods for writing non-volatile memories for increased endurance |
US20070150644A1 (en) * | 2005-12-28 | 2007-06-28 | Yosi Pinto | System for writing non-volatile memories for increased endurance |
US7609564B2 (en) * | 2005-12-31 | 2009-10-27 | Intel Corporation | Systems and techniques for non-volatile memory buffering |
KR100703807B1 (en) * | 2006-02-17 | 2007-04-09 | 삼성전자주식회사 | Method and apparatus for managing block by update type of data in block type memory |
JP2007233838A (en) * | 2006-03-02 | 2007-09-13 | Toshiba Corp | Control method for memory system |
JP2007241539A (en) * | 2006-03-07 | 2007-09-20 | Hitachi Systems & Services Ltd | Data management and control system for semiconductor flash memory, and semiconductor flash memory housing device |
JP4171749B2 (en) * | 2006-04-17 | 2008-10-29 | Tdk株式会社 | Memory controller and flash memory system |
US20080005719A1 (en) * | 2006-06-30 | 2008-01-03 | Morris Robert P | Methods, systems, and computer program products for providing a program execution environment |
US20080005728A1 (en) * | 2006-06-30 | 2008-01-03 | Robert Paul Morris | Methods, systems, and computer program products for enabling cross language access to an addressable entity in an execution environment |
US20080005752A1 (en) * | 2006-06-30 | 2008-01-03 | Robert Paul Morris | Methods, systems, and computer program products for generating application processes by linking applications |
US20080127220A1 (en) * | 2006-06-30 | 2008-05-29 | Robert Paul Morris | Methods, systems, and computer program products for creating an input-value-specific loadable instance of an application |
US20080005528A1 (en) * | 2006-06-30 | 2008-01-03 | Morris Robert P | Methods, Systems, and Computer Program Products for Using a Structured Data Storage System to Provide Access to Addressable Entities in Virtual Address Space |
US20080022265A1 (en) * | 2006-06-30 | 2008-01-24 | Morris Robert P | Methods, systems, and computer program products for generating and using object modules |
US20080005529A1 (en) * | 2006-06-30 | 2008-01-03 | Morris Robert P | Methods, Systems, and Computer Program Products for Providing Access to Addressable Entities Using a Non-Sequential Virtual Address Space |
US20080005727A1 (en) * | 2006-06-30 | 2008-01-03 | Robert Paul Morris | Methods, systems, and computer program products for enabling cross language access to an addressable entity |
JP4956068B2 (en) * | 2006-06-30 | 2012-06-20 | 株式会社東芝 | Semiconductor memory device and control method thereof |
US7783956B2 (en) * | 2006-07-12 | 2010-08-24 | Cronera Systems Incorporated | Data recorder |
US20080022079A1 (en) * | 2006-07-24 | 2008-01-24 | Archer Charles J | Executing an allgather operation with an alltoallv operation in a parallel computer |
US7466600B2 (en) | 2006-08-03 | 2008-12-16 | Micron Technology, Inc. | System and method for initiating a bad block disable process in a non-volatile memory |
US20080040544A1 (en) * | 2006-08-08 | 2008-02-14 | Jason Caulkins | Computer System For Reading And Writing Data |
US7366017B2 (en) | 2006-08-22 | 2008-04-29 | Micron Technology, Inc. | Method for modifying data more than once in a multi-level cell memory location within a memory array |
US7908276B2 (en) * | 2006-08-25 | 2011-03-15 | Qnx Software Systems Gmbh & Co. Kg | Filesystem having a filename cache |
US20080059510A1 (en) * | 2006-08-31 | 2008-03-06 | Daniel Cardamore | Multimedia system framework having layer consolidating access to multiple media devices |
US7280398B1 (en) | 2006-08-31 | 2007-10-09 | Micron Technology, Inc. | System and memory for sequential multi-plane page memory operations |
JP4682261B2 (en) * | 2006-09-15 | 2011-05-11 | サンディスク コーポレイション | Method for non-volatile memory and class-based update block replacement rules |
US7734890B2 (en) * | 2006-10-06 | 2010-06-08 | Okralabs Llc | Method and system for using a distributable virtual address space |
JP4609406B2 (en) * | 2006-10-12 | 2011-01-12 | Tdk株式会社 | MEMORY CONTROLLER, FLASH MEMORY SYSTEM PROVIDED WITH MEMORY CONTROLLER, AND FLASH MEMORY CONTROL METHOD |
US20080091901A1 (en) * | 2006-10-12 | 2008-04-17 | Alan David Bennett | Method for non-volatile memory with worst-case control data management |
KR100806343B1 (en) * | 2006-10-19 | 2008-02-27 | 삼성전자주식회사 | Memory system including flash memory and mapping table management method thereof |
US20080120604A1 (en) * | 2006-11-20 | 2008-05-22 | Morris Robert P | Methods, Systems, And Computer Program Products For Providing Program Runtime Data Validation |
US8489817B2 (en) | 2007-12-06 | 2013-07-16 | Fusion-Io, Inc. | Apparatus, system, and method for caching data |
US8935302B2 (en) | 2006-12-06 | 2015-01-13 | Intelligent Intellectual Property Holdings 2 Llc | Apparatus, system, and method for data block usage information synchronization for a non-volatile storage volume |
US20080147747A1 (en) * | 2006-12-14 | 2008-06-19 | Dan Cardamore | Media system having synchronization with preemptive prioritization of synchronization order |
DE112006004185T5 (en) | 2006-12-27 | 2009-11-19 | Intel Corporation, Santa Clara | Method for managing data in nonvolatile memory |
KR100823171B1 (en) * | 2007-02-01 | 2008-04-18 | 삼성전자주식회사 | Computer system having a partitioned flash translation layer and flash translation layer partition method thereof |
KR100885181B1 (en) * | 2007-02-06 | 2009-02-23 | 삼성전자주식회사 | Memory system performing group mapping operation and address mapping method thereof |
US20080235465A1 (en) * | 2007-03-19 | 2008-09-25 | Bum Suck So | Systems for conversion of update blocks based on association with host file management data structures |
US7904670B2 (en) * | 2007-03-19 | 2011-03-08 | Sandisk Corporation | Methods for conversion of update blocks based on comparison with a threshold size |
US8275953B2 (en) * | 2007-03-19 | 2012-09-25 | Sandisk Technologies Inc. | Methods for forcing an update block to remain sequential |
US20080235464A1 (en) * | 2007-03-19 | 2008-09-25 | Shai Traister | System for conversion of update blocks based on comparison with a threshold size |
US8341375B2 (en) * | 2007-03-19 | 2012-12-25 | Sandisk Technologies Inc. | Methods for conversion of update blocks based on association with host file management data structures |
US20080235489A1 (en) * | 2007-03-19 | 2008-09-25 | Sergey Anatolievich Gorobets | Systems for forcing an update block to remain sequential |
KR20080085574A (en) * | 2007-03-20 | 2008-09-24 | 삼성전자주식회사 | Apparatus and method for garbage collection of non-volatile memory |
US7991942B2 (en) | 2007-05-09 | 2011-08-02 | Stmicroelectronics S.R.L. | Memory block compaction method, circuit, and system in storage devices based on flash memories |
US8041883B2 (en) * | 2007-05-09 | 2011-10-18 | Stmicroelectronics S.R.L. | Restoring storage devices based on flash memories and related circuit, system, and method |
US20080294813A1 (en) * | 2007-05-24 | 2008-11-27 | Sergey Anatolievich Gorobets | Managing Housekeeping Operations in Flash Memory |
US8161480B2 (en) | 2007-05-29 | 2012-04-17 | International Business Machines Corporation | Performing an allreduce operation using shared memory |
US20080320459A1 (en) * | 2007-06-22 | 2008-12-25 | Morris Robert P | Method And Systems For Providing Concurrency Control For Addressable Entities |
US20080320282A1 (en) * | 2007-06-22 | 2008-12-25 | Morris Robert P | Method And Systems For Providing Transaction Support For Executable Program Components |
US20090006663A1 (en) * | 2007-06-27 | 2009-01-01 | Archer Charles J | Direct Memory Access ('DMA') Engine Assisted Local Reduction |
US8504784B2 (en) * | 2007-06-27 | 2013-08-06 | Sandisk Technologies Inc. | Scheduling methods of phased garbage collection and housekeeping operations in a flash memory system |
KR100922389B1 (en) * | 2007-07-04 | 2009-10-19 | 삼성전자주식회사 | Indexing scheme for flash memory |
KR101472797B1 (en) * | 2007-07-16 | 2014-12-15 | 삼성전자주식회사 | Method and apparatus for reading or writing data |
KR101348364B1 (en) | 2007-08-23 | 2014-01-07 | 삼성전자주식회사 | Method for determining binary signal of memory cell and apparatus thereof |
US20090055574A1 (en) * | 2007-08-25 | 2009-02-26 | Bei-Chuan Chen | NAND Flash Memory Device And Related Method Thereof |
KR20090026941A (en) * | 2007-09-11 | 2009-03-16 | 삼성전자주식회사 | Method for address mapping in virtual file system embodied in storage device having a plurality of non-volatile data storage and apparatus thereof |
US7719890B2 (en) | 2007-09-12 | 2010-05-18 | Sandisk Corporation | Data protection for write abort |
US7873803B2 (en) * | 2007-09-25 | 2011-01-18 | Sandisk Corporation | Nonvolatile memory with self recovery |
KR101464338B1 (en) * | 2007-10-25 | 2014-11-25 | 삼성전자주식회사 | Data storage device, memory system, and computing system using nonvolatile memory device |
JP4829202B2 (en) * | 2007-11-02 | 2011-12-07 | 株式会社日立製作所 | Storage device and memory control method |
US8296498B2 (en) * | 2007-11-13 | 2012-10-23 | Sandisk Technologies Inc. | Method and system for virtual fast access non-volatile RAM |
US9519540B2 (en) | 2007-12-06 | 2016-12-13 | Sandisk Technologies Llc | Apparatus, system, and method for destaging cached data |
US7953919B2 (en) * | 2007-12-21 | 2011-05-31 | Spansion Llc | Physical block addressing of electronic memory devices |
JP2009157515A (en) * | 2007-12-25 | 2009-07-16 | Toshiba Corp | Semiconductor memory controller and semiconductor memory |
US8301912B2 (en) * | 2007-12-31 | 2012-10-30 | Sandisk Technologies Inc. | System, method and memory device providing data scrambling compatible with on-chip copy operation |
JP4471007B2 (en) * | 2008-02-05 | 2010-06-02 | ソニー株式会社 | RECORDING DEVICE, RECORDING DEVICE CONTROL METHOD, RECORDING DEVICE CONTROL METHOD PROGRAM AND RECORDING DEVICE CONTROL METHOD PROGRAM RECORDING MEDIUM |
JP4653817B2 (en) * | 2008-03-01 | 2011-03-16 | 株式会社東芝 | Memory system |
KR101067457B1 (en) * | 2008-03-01 | 2011-09-27 | 가부시끼가이샤 도시바 | Memory system |
JP2009211234A (en) * | 2008-03-01 | 2009-09-17 | Toshiba Corp | Memory system |
CN101251788A (en) * | 2008-03-07 | 2008-08-27 | 威盛电子股份有限公司 | Storage unit management method and system |
JP4164118B1 (en) * | 2008-03-26 | 2008-10-08 | 眞澄 鈴木 | Storage device using flash memory |
US20090249021A1 (en) * | 2008-03-26 | 2009-10-01 | Morris Robert P | Method And Systems For Invoking An Advice Operation Associated With A Joinpoint |
US8422402B2 (en) | 2008-04-01 | 2013-04-16 | International Business Machines Corporation | Broadcasting a message in a parallel computer |
TWI377580B (en) * | 2008-04-03 | 2012-11-21 | Wistron Corp | Method and related device for data storage in an electronic device |
US8266366B2 (en) | 2008-04-11 | 2012-09-11 | SanDisk Technologies, Inc. | Memory device operable in read-only and write-once, read-many (WORM) modes of operation |
US8566505B2 (en) * | 2008-04-15 | 2013-10-22 | SMART Storage Systems, Inc. | Flash management using sequential techniques |
US8180954B2 (en) * | 2008-04-15 | 2012-05-15 | SMART Storage Systems, Inc. | Flash management using logical page size |
US8185778B2 (en) * | 2008-04-15 | 2012-05-22 | SMART Storage Systems, Inc. | Flash management using separate metadata storage |
WO2009129340A2 (en) * | 2008-04-15 | 2009-10-22 | Adtron, Inc. | Flash memory management |
US20090259806A1 (en) * | 2008-04-15 | 2009-10-15 | Adtron, Inc. | Flash management using bad page tracking and high defect flash memory |
US8028123B2 (en) * | 2008-04-15 | 2011-09-27 | SMART Modular Technologies (AZ) , Inc. | Circular wear leveling |
JP4439569B2 (en) * | 2008-04-24 | 2010-03-24 | 株式会社東芝 | Memory system |
US20090282267A1 (en) * | 2008-05-09 | 2009-11-12 | Ori Stern | Partial scrambling to reduce correlation |
US8484440B2 (en) | 2008-05-21 | 2013-07-09 | International Business Machines Corporation | Performing an allreduce operation on a plurality of compute nodes of a parallel computer |
US20090320012A1 (en) * | 2008-06-04 | 2009-12-24 | Mediatek Inc. | Secure booting for updating firmware over the air |
US8880775B2 (en) * | 2008-06-20 | 2014-11-04 | Seagate Technology Llc | System and method of garbage collection in a memory device |
US8843691B2 (en) * | 2008-06-25 | 2014-09-23 | Stec, Inc. | Prioritized erasure of data blocks in a flash storage device |
TWI370969B (en) | 2008-07-09 | 2012-08-21 | Phison Electronics Corp | Data accessing method, and storage system and controller using the same |
US8281053B2 (en) | 2008-07-21 | 2012-10-02 | International Business Machines Corporation | Performing an all-to-all data exchange on a plurality of data buffers by performing swap operations |
US8516203B2 (en) | 2008-08-13 | 2013-08-20 | Sandisk Technologies Inc. | Methods and apparatus for passing information to a host system to suggest logical locations to allocate to a file |
TWI473100B (en) * | 2008-09-05 | 2015-02-11 | A Data Technology Co Ltd | Flash memory system and its operation method |
US8103631B2 (en) * | 2008-11-19 | 2012-01-24 | Sandisk Il Ltd | Merging files on storage and retrieve |
KR101469771B1 (en) * | 2008-12-03 | 2014-12-08 | 삼성전자주식회사 | Semiconductor device comprising flash memory and address mapping method thereof |
US8205063B2 (en) * | 2008-12-30 | 2012-06-19 | Sandisk Technologies Inc. | Dynamic mapping of logical ranges to write blocks |
US8452940B2 (en) * | 2008-12-30 | 2013-05-28 | Sandisk Technologies Inc. | Optimized memory management for random and sequential data writing |
KR101028929B1 (en) * | 2008-12-31 | 2011-04-12 | 성균관대학교산학협력단 | Methods of Distributing Log Block Associativity in Real-time System And Flash Memory Device Performing the Same |
US8094500B2 (en) * | 2009-01-05 | 2012-01-10 | Sandisk Technologies Inc. | Non-volatile memory and method with write cache partitioning |
US8700840B2 (en) * | 2009-01-05 | 2014-04-15 | SanDisk Technologies, Inc. | Nonvolatile memory with write cache having flush/eviction methods |
EP2374063B1 (en) | 2009-01-05 | 2017-11-22 | SanDisk Technologies LLC | Non-volatile memory and method with write cache partitioning |
US8250333B2 (en) * | 2009-01-05 | 2012-08-21 | Sandisk Technologies Inc. | Mapping address table maintenance in a memory device |
US8244960B2 (en) | 2009-01-05 | 2012-08-14 | Sandisk Technologies Inc. | Non-volatile memory and method with write cache partition management methods |
US8040744B2 (en) | 2009-01-05 | 2011-10-18 | Sandisk Technologies Inc. | Spare block management of non-volatile memories |
US20100174845A1 (en) * | 2009-01-05 | 2010-07-08 | Sergey Anatolievich Gorobets | Wear Leveling for Non-Volatile Memories: Maintenance of Experience Count and Passive Techniques |
US8645749B2 (en) * | 2009-02-04 | 2014-02-04 | Micron Technology, Inc. | Systems and methods for storing and recovering controller data in non-volatile memory devices |
US8219776B2 (en) * | 2009-09-23 | 2012-07-10 | Lsi Corporation | Logical-to-physical address translation for solid state disks |
US8296503B2 (en) * | 2009-05-26 | 2012-10-23 | Mediatek Inc. | Data updating and recovering methods for a non-volatile memory array |
US8102705B2 (en) * | 2009-06-05 | 2012-01-24 | Sandisk Technologies Inc. | Structure and method for shuffling data within non-volatile memory devices |
US8027195B2 (en) * | 2009-06-05 | 2011-09-27 | SanDisk Technologies, Inc. | Folding data stored in binary format into multi-state format within non-volatile memory devices |
KR101324688B1 (en) * | 2009-06-12 | 2013-11-04 | Violin Memory Inc. | Memory system having persistent garbage collection |
US20100318584A1 (en) * | 2009-06-13 | 2010-12-16 | Microsoft Corporation | Distributed Cache Availability During Garbage Collection |
US8874825B2 (en) * | 2009-06-30 | 2014-10-28 | Sandisk Technologies Inc. | Storage device and method using parameters based on physical memory block location |
TWI386802B (en) * | 2009-07-03 | 2013-02-21 | Phison Electronics Corp | Data writing method for flash memory and control circuit and storage system using the same |
US20110002169A1 (en) * | 2009-07-06 | 2011-01-06 | Yan Li | Bad Column Management with Bit Information in Non-Volatile Memory Systems |
JP5347779B2 (en) * | 2009-07-07 | 2013-11-20 | Sony Corporation | MEMORY DEVICE, MEMORY CONTROL METHOD, AND PROGRAM |
US9104629B2 (en) * | 2009-07-09 | 2015-08-11 | International Business Machines Corporation | Autonomic reclamation processing on sequential storage media |
JP2011048725A (en) * | 2009-08-28 | 2011-03-10 | Panasonic Corp | Nonvolatile memory device and nonvolatile memory controller |
WO2011031796A2 (en) | 2009-09-08 | 2011-03-17 | Fusion-Io, Inc. | Apparatus, system, and method for caching data on a solid-state storage device |
US9223514B2 (en) | 2009-09-09 | 2015-12-29 | SanDisk Technologies, Inc. | Erase suspend/resume for memory |
US9122579B2 (en) | 2010-01-06 | 2015-09-01 | Intelligent Intellectual Property Holdings 2 Llc | Apparatus, system, and method for a storage layer |
KR101769883B1 (en) | 2009-09-09 | 2017-08-21 | SanDisk Technologies LLC | Apparatus, system, and method for allocating storage |
US8429436B2 (en) | 2009-09-09 | 2013-04-23 | Fusion-Io, Inc. | Apparatus, system, and method for power reduction in a storage device |
US8255655B2 (en) | 2009-10-02 | 2012-08-28 | Sandisk Technologies Inc. | Authentication and securing of write-once, read-many (WORM) memory devices |
US8195909B2 (en) * | 2009-10-05 | 2012-06-05 | Seagate Technology Llc | Data management in a data storage system |
US9753847B2 (en) * | 2009-10-27 | 2017-09-05 | Western Digital Technologies, Inc. | Non-volatile semiconductor memory segregating sequential, random, and system data to reduce garbage collection for page based mapping |
US9110594B2 (en) * | 2009-11-04 | 2015-08-18 | Seagate Technology Llc | File management system for devices containing solid-state media |
US20110119462A1 (en) * | 2009-11-19 | 2011-05-19 | Ocz Technology Group, Inc. | Method for restoring and maintaining solid-state drive performance |
CN101794254B (en) * | 2009-11-25 | 2012-07-04 | Shenzhen SiliconGo Semiconductor Co., Ltd. | NAND-FLASH data processing method |
US8489803B2 (en) * | 2009-12-14 | 2013-07-16 | Smsc Holdings S.A.R.L. | Efficient use of flash memory in flash drives |
US8443167B1 (en) | 2009-12-16 | 2013-05-14 | Western Digital Technologies, Inc. | Data storage device employing a run-length mapping table and a single address mapping table |
US8144512B2 (en) | 2009-12-18 | 2012-03-27 | Sandisk Technologies Inc. | Data transfer flows for on-chip folding |
US8725935B2 (en) | 2009-12-18 | 2014-05-13 | Sandisk Technologies Inc. | Balanced performance for on-chip folding of non-volatile memories |
US20110153912A1 (en) | 2009-12-18 | 2011-06-23 | Sergey Anatolievich Gorobets | Maintaining Updates of Multi-Level Non-Volatile Memory in Binary Non-Volatile Memory |
US8468294B2 (en) * | 2009-12-18 | 2013-06-18 | Sandisk Technologies Inc. | Non-volatile memory with multi-gear control using on-chip folding of data |
FR2954572B1 (en) * | 2009-12-23 | 2020-04-17 | Thales | METHOD FOR MANAGING ROTATING DATA. |
TWI399644B (en) * | 2009-12-24 | 2013-06-21 | Univ Nat Taiwan | Block management method for a non-volatile memory |
TWI414940B (en) | 2009-12-30 | 2013-11-11 | Phison Electronics Corp | Block management and data writing method, flash memory storage system and controller |
CN102122233B (en) * | 2010-01-08 | 2013-04-10 | Phison Electronics Corp. | Block management and data writing method, flash memory storage system and controller |
US8316176B1 (en) | 2010-02-17 | 2012-11-20 | Western Digital Technologies, Inc. | Non-volatile semiconductor memory segregating sequential data during garbage collection to reduce write amplification |
US8407449B1 (en) | 2010-02-26 | 2013-03-26 | Western Digital Technologies, Inc. | Non-volatile semiconductor memory storing an inverse map for rebuilding a translation table |
CN102193869B (en) * | 2010-03-01 | 2015-05-20 | Phison Electronics Corp. | Memory management and writing method, memory controller and storage system |
US8194340B1 (en) | 2010-03-18 | 2012-06-05 | Western Digital Technologies, Inc. | Disk drive framing write data with in-line mapping data during write operations |
US8699185B1 (en) | 2012-12-10 | 2014-04-15 | Western Digital Technologies, Inc. | Disk drive defining guard bands to support zone sequentiality when butterfly writing shingled data tracks |
US8693133B1 (en) | 2010-03-22 | 2014-04-08 | Western Digital Technologies, Inc. | Systems and methods for improving sequential data rate performance using sorted data zones for butterfly format |
US9330715B1 (en) | 2010-03-22 | 2016-05-03 | Western Digital Technologies, Inc. | Mapping of shingled magnetic recording media |
US8687306B1 (en) | 2010-03-22 | 2014-04-01 | Western Digital Technologies, Inc. | Systems and methods for improving sequential data rate performance using sorted data zones |
US8856438B1 (en) | 2011-12-09 | 2014-10-07 | Western Digital Technologies, Inc. | Disk drive with reduced-size translation table |
CN101799788B (en) * | 2010-03-23 | 2014-06-11 | ZTE Corporation | Hierarchical management method and system for storage resources |
US8332460B2 (en) | 2010-04-14 | 2012-12-11 | International Business Machines Corporation | Performing a local reduction operation on a parallel computer |
US9424087B2 (en) | 2010-04-29 | 2016-08-23 | International Business Machines Corporation | Optimizing collective operations |
IT1399916B1 (en) * | 2010-04-30 | 2013-05-09 | Balluchi | MEMORY DEVICE FOR LOGGED REGISTER ACCESS |
US20110268265A1 (en) * | 2010-04-30 | 2011-11-03 | Lathrop Alexander M | Disk media security system and method |
US9563397B1 (en) | 2010-05-05 | 2017-02-07 | Western Digital Technologies, Inc. | Disk drive using non-volatile cache when garbage collecting log structured writes |
CN102243611B (en) * | 2010-05-11 | 2015-06-17 | Netac Technology Co., Ltd. | Data storage method and system |
WO2011143628A2 (en) | 2010-05-13 | 2011-11-17 | Fusion-Io, Inc. | Apparatus, system, and method for conditional and atomic storage operations |
US8381018B2 (en) | 2010-05-21 | 2013-02-19 | Mediatek Inc. | Method for data recovery for flash devices |
US8949577B2 (en) | 2010-05-28 | 2015-02-03 | International Business Machines Corporation | Performing a deterministic reduction operation in a parallel computer |
US8341339B1 (en) | 2010-06-14 | 2012-12-25 | Western Digital Technologies, Inc. | Hybrid drive garbage collecting a non-volatile semiconductor memory by migrating valid data to a disk |
US8554741B1 (en) | 2010-06-16 | 2013-10-08 | Western Digital Technologies, Inc. | Timeline application for log structured storage devices |
US8543757B2 (en) | 2010-06-23 | 2013-09-24 | Sandisk Technologies Inc. | Techniques of maintaining logical to physical mapping information in non-volatile memory systems |
US8417876B2 (en) | 2010-06-23 | 2013-04-09 | Sandisk Technologies Inc. | Use of guard bands and phased maintenance operations to avoid exceeding maximum latency requirements in non-volatile memory systems |
US8959284B1 (en) | 2010-06-28 | 2015-02-17 | Western Digital Technologies, Inc. | Disk drive steering write data to write cache based on workload |
US8639692B2 (en) * | 2010-06-28 | 2014-01-28 | Alcatel Lucent | High-dimensional stratified sampling |
US20110320910A1 (en) * | 2010-06-29 | 2011-12-29 | Yi-Chun Liu | Storage management method and storage system |
KR101077901B1 (en) * | 2010-07-12 | 2011-10-31 | EWBM Korea Co., Ltd. | Apparatus and method for managing flash memory using log block level mapping algorithm |
WO2012016089A2 (en) | 2010-07-28 | 2012-02-02 | Fusion-Io, Inc. | Apparatus, system, and method for conditional and atomic storage operations |
US8725934B2 (en) | 2011-12-22 | 2014-05-13 | Fusion-Io, Inc. | Methods and appratuses for atomic storage operations |
JP2012033002A (en) * | 2010-07-30 | 2012-02-16 | Toshiba Corp | Memory management device and memory management method |
US9146875B1 (en) | 2010-08-09 | 2015-09-29 | Western Digital Technologies, Inc. | Hybrid drive converting non-volatile semiconductor memory to read only based on life remaining |
US9058280B1 (en) | 2010-08-13 | 2015-06-16 | Western Digital Technologies, Inc. | Hybrid drive migrating data from disk to non-volatile semiconductor memory based on accumulated access time |
US9268499B1 (en) | 2010-08-13 | 2016-02-23 | Western Digital Technologies, Inc. | Hybrid drive migrating high workload data from disk to non-volatile semiconductor memory |
US8639872B1 (en) | 2010-08-13 | 2014-01-28 | Western Digital Technologies, Inc. | Hybrid drive comprising write cache spanning non-volatile semiconductor memory and disk |
US8601310B2 (en) * | 2010-08-26 | 2013-12-03 | Cisco Technology, Inc. | Partial memory mirroring and error containment |
US8683295B1 (en) | 2010-08-31 | 2014-03-25 | Western Digital Technologies, Inc. | Hybrid drive writing extended error correction code symbols to disk for data sectors stored in non-volatile semiconductor memory |
US8775720B1 (en) | 2010-08-31 | 2014-07-08 | Western Digital Technologies, Inc. | Hybrid drive balancing execution times for non-volatile semiconductor memory and disk |
US8984216B2 (en) | 2010-09-09 | 2015-03-17 | Fusion-Io, Llc | Apparatus, system, and method for managing lifetime of a storage device |
US8782334B1 (en) | 2010-09-10 | 2014-07-15 | Western Digital Technologies, Inc. | Hybrid drive copying disk cache to non-volatile semiconductor memory |
US8776081B2 (en) | 2010-09-14 | 2014-07-08 | International Business Machines Corporation | Send-side matching of data communications messages |
KR20120029239A (en) * | 2010-09-16 | 2012-03-26 | Samsung Electronics Co., Ltd. | System and method for writing data using PRAM |
US8578340B1 (en) * | 2010-09-24 | 2013-11-05 | Ca, Inc. | Recording and replaying computer program execution with recorded execution event breakpoints |
TWI435216B (en) | 2010-09-27 | 2014-04-21 | Silicon Motion Inc | Method for performing meta block management, and associated memory device and controller thereof |
US8825977B1 (en) | 2010-09-28 | 2014-09-02 | Western Digital Technologies, Inc. | Hybrid drive writing copy of data to disk when non-volatile semiconductor memory nears end of life |
US8825976B1 (en) | 2010-09-28 | 2014-09-02 | Western Digital Technologies, Inc. | Hybrid drive executing biased migration policy during host boot to migrate data to a non-volatile semiconductor memory |
US8670205B1 (en) | 2010-09-29 | 2014-03-11 | Western Digital Technologies, Inc. | Hybrid drive changing power mode of disk channel when frequency of write data exceeds a threshold |
US8699171B1 (en) | 2010-09-30 | 2014-04-15 | Western Digital Technologies, Inc. | Disk drive selecting head for write operation based on environmental condition |
US8452911B2 (en) | 2010-09-30 | 2013-05-28 | Sandisk Technologies Inc. | Synchronized maintenance operations in a multi-bank storage system |
US8756361B1 (en) | 2010-10-01 | 2014-06-17 | Western Digital Technologies, Inc. | Disk drive modifying metadata cached in a circular buffer when a write operation is aborted |
US8954664B1 (en) | 2010-10-01 | 2015-02-10 | Western Digital Technologies, Inc. | Writing metadata files on a disk |
US8850161B2 (en) * | 2010-10-13 | 2014-09-30 | Riverbed Technology, Inc. | Method of improving performance of a data storage device |
US8612798B1 (en) | 2010-10-21 | 2013-12-17 | Western Digital Technologies, Inc. | Hybrid drive storing write data in non-volatile semiconductor memory if write verify of disk fails |
US8427771B1 (en) | 2010-10-21 | 2013-04-23 | Western Digital Technologies, Inc. | Hybrid drive storing copy of data in non-volatile semiconductor memory for suspect disk data sectors |
US8429343B1 (en) | 2010-10-21 | 2013-04-23 | Western Digital Technologies, Inc. | Hybrid drive employing non-volatile semiconductor memory to facilitate refreshing disk |
US8560759B1 (en) | 2010-10-25 | 2013-10-15 | Western Digital Technologies, Inc. | Hybrid drive storing redundant copies of data on disk and in non-volatile semiconductor memory based on read frequency |
US9069475B1 (en) | 2010-10-26 | 2015-06-30 | Western Digital Technologies, Inc. | Hybrid drive selectively spinning up disk when powered on |
US8566841B2 (en) | 2010-11-10 | 2013-10-22 | International Business Machines Corporation | Processing communications events in parallel active messaging interface by awakening thread from wait state |
TWI463495B (en) * | 2010-12-10 | 2014-12-01 | Phison Electronics Corp | Data writing method, memory controller and memory storage apparatus |
US9218278B2 (en) | 2010-12-13 | 2015-12-22 | SanDisk Technologies, Inc. | Auto-commit memory |
US9208071B2 (en) | 2010-12-13 | 2015-12-08 | SanDisk Technologies, Inc. | Apparatus, system, and method for accessing memory |
EP2652623B1 (en) | 2010-12-13 | 2018-08-01 | SanDisk Technologies LLC | Apparatus, system, and method for auto-commit memory |
US9047178B2 (en) | 2010-12-13 | 2015-06-02 | SanDisk Technologies, Inc. | Auto-commit memory synchronization |
US10817502B2 (en) | 2010-12-13 | 2020-10-27 | Sandisk Technologies Llc | Persistent memory management |
US10817421B2 (en) | 2010-12-13 | 2020-10-27 | Sandisk Technologies Llc | Persistent data structures |
US20120239860A1 (en) | 2010-12-17 | 2012-09-20 | Fusion-Io, Inc. | Apparatus, system, and method for persistent data management on a non-volatile storage media |
US8458435B1 (en) | 2010-12-20 | 2013-06-04 | Western Digital Technologies, Inc. | Sequential write thread detection |
US9213594B2 (en) | 2011-01-19 | 2015-12-15 | Intelligent Intellectual Property Holdings 2 Llc | Apparatus, system, and method for managing out-of-service conditions |
US8626989B2 (en) * | 2011-02-02 | 2014-01-07 | Micron Technology, Inc. | Control arrangements and methods for accessing block oriented nonvolatile memory |
US8909851B2 (en) | 2011-02-08 | 2014-12-09 | SMART Storage Systems, Inc. | Storage control system with change logging mechanism and method of operation thereof |
US9003104B2 (en) | 2011-02-15 | 2015-04-07 | Intelligent Intellectual Property Holdings 2 Llc | Systems and methods for a file-level cache |
US9201677B2 (en) | 2011-05-23 | 2015-12-01 | Intelligent Intellectual Property Holdings 2 Llc | Managing data input/output operations |
US8874823B2 (en) | 2011-02-15 | 2014-10-28 | Intelligent Intellectual Property Holdings 2 Llc | Systems and methods for managing data input/output operations |
WO2012116369A2 (en) | 2011-02-25 | 2012-08-30 | Fusion-Io, Inc. | Apparatus, system, and method for managing contents of a cache |
US9563555B2 (en) | 2011-03-18 | 2017-02-07 | Sandisk Technologies Llc | Systems and methods for storage allocation |
US8966191B2 (en) | 2011-03-18 | 2015-02-24 | Fusion-Io, Inc. | Logical interface for contextual storage |
US9361044B2 (en) | 2011-03-28 | 2016-06-07 | Western Digital Technologies, Inc. | Power-safe data management system |
US8935466B2 (en) | 2011-03-28 | 2015-01-13 | SMART Storage Systems, Inc. | Data storage system with non-volatile memory and method of operation thereof |
US9342446B2 (en) | 2011-03-29 | 2016-05-17 | SanDisk Technologies, Inc. | Non-volatile memory system allowing reverse eviction of data updates to non-volatile binary cache |
US9141528B2 (en) | 2011-05-17 | 2015-09-22 | Sandisk Technologies Inc. | Tracking and handling of super-hot data in non-volatile memory systems |
US20120297121A1 (en) * | 2011-05-17 | 2012-11-22 | Sergey Anatolievich Gorobets | Non-Volatile Memory and Method with Small Logical Groups Distributed Among Active SLC and MLC Memory Partitions |
KR20140040137A (en) | 2011-05-17 | 2014-04-02 | SanDisk Technologies, Inc. | Non-volatile memory and method with small logical groups distributed among active SLC and MLC memory partitions |
US9176864B2 (en) | 2011-05-17 | 2015-11-03 | SanDisk Technologies, Inc. | Non-volatile memory and method having block management with hot/cold data sorting |
TWI447579B (en) | 2011-05-18 | 2014-08-01 | Phison Electronics Corp | Program code loading and accessing method, memory controller and memory storage apparatus |
US8825944B2 (en) | 2011-05-23 | 2014-09-02 | International Business Machines Corporation | Populating strides of tracks to demote from a first cache to a second cache |
JP2012248109A (en) * | 2011-05-30 | 2012-12-13 | Toshiba Corp | Memory unit having multiple channels and read command group generating method for compaction in the memory unit |
US8793429B1 (en) | 2011-06-03 | 2014-07-29 | Western Digital Technologies, Inc. | Solid-state drive with reduced power up time |
US8756382B1 (en) | 2011-06-30 | 2014-06-17 | Western Digital Technologies, Inc. | Method for file based shingled data storage utilizing multiple media types |
US9898402B2 (en) * | 2011-07-01 | 2018-02-20 | Micron Technology, Inc. | Unaligned data coalescing |
US8589762B2 (en) * | 2011-07-05 | 2013-11-19 | International Business Machines Corporation | Adaptive multi-bit error correction in endurance limited memories |
US8893083B2 (en) | 2011-08-09 | 2014-11-18 | International Business Machines Corporation | Collective operation protocol selection in a parallel computer |
US8667501B2 (en) | 2011-08-10 | 2014-03-04 | International Business Machines Corporation | Performing a local barrier operation |
US8910178B2 (en) | 2011-08-10 | 2014-12-09 | International Business Machines Corporation | Performing a global barrier operation in a parallel computer |
US9098399B2 (en) | 2011-08-31 | 2015-08-04 | SMART Storage Systems, Inc. | Electronic system with storage management mechanism and method of operation thereof |
US9021319B2 (en) | 2011-09-02 | 2015-04-28 | SMART Storage Systems, Inc. | Non-volatile memory management system with load leveling and method of operation thereof |
US9063844B2 (en) | 2011-09-02 | 2015-06-23 | SMART Storage Systems, Inc. | Non-volatile memory management system with time measure mechanism and method of operation thereof |
US9021231B2 (en) | 2011-09-02 | 2015-04-28 | SMART Storage Systems, Inc. | Storage control system with write amplification control mechanism and method of operation thereof |
US8630056B1 (en) | 2011-09-12 | 2014-01-14 | Western Digital Technologies, Inc. | Hybrid drive adjusting spin-up profile based on cache status of non-volatile semiconductor memory |
US8909889B1 (en) | 2011-10-10 | 2014-12-09 | Western Digital Technologies, Inc. | Method and apparatus for servicing host commands by a disk drive |
US8977803B2 (en) | 2011-11-21 | 2015-03-10 | Western Digital Technologies, Inc. | Disk drive data caching using a multi-tiered memory |
US9268701B1 (en) | 2011-11-21 | 2016-02-23 | Western Digital Technologies, Inc. | Caching of data in data storage systems by managing the size of read and write cache based on a measurement of cache reliability |
US8977804B1 (en) | 2011-11-21 | 2015-03-10 | Western Digital Technologies, Inc. | Varying data redundancy in storage systems |
US9213493B1 (en) | 2011-12-16 | 2015-12-15 | Western Digital Technologies, Inc. | Sorted serpentine mapping for storage drives |
US8819367B1 (en) | 2011-12-19 | 2014-08-26 | Western Digital Technologies, Inc. | Accelerated translation power recovery |
US8762627B2 (en) | 2011-12-21 | 2014-06-24 | Sandisk Technologies Inc. | Memory logical defragmentation during garbage collection |
US8612706B1 (en) | 2011-12-21 | 2013-12-17 | Western Digital Technologies, Inc. | Metadata recovery in a disk drive |
US8904091B1 (en) | 2011-12-22 | 2014-12-02 | Western Digital Technologies, Inc. | High performance media transport manager architecture for data storage systems |
US9274937B2 (en) | 2011-12-22 | 2016-03-01 | Longitude Enterprise Flash S.A.R.L. | Systems, methods, and interfaces for vector input/output operations |
US8966178B2 (en) * | 2012-01-17 | 2015-02-24 | International Business Machines Corporation | Populating a first stride of tracks from a first cache to write to a second stride in a second cache |
US8825953B2 (en) | 2012-01-17 | 2014-09-02 | International Business Machines Corporation | Demoting tracks from a first cache to a second cache by using a stride number ordering of strides in the second cache to consolidate strides in the second cache |
US8825957B2 (en) | 2012-01-17 | 2014-09-02 | International Business Machines Corporation | Demoting tracks from a first cache to a second cache by using an occupancy of valid tracks in strides in the second cache to consolidate strides in the second cache |
US9021201B2 (en) | 2012-01-17 | 2015-04-28 | International Business Machines Corporation | Demoting partial tracks from a first cache to a second cache |
US9251086B2 (en) | 2012-01-24 | 2016-02-02 | SanDisk Technologies, Inc. | Apparatus, system, and method for managing a cache |
US10359972B2 (en) | 2012-08-31 | 2019-07-23 | Sandisk Technologies Llc | Systems, methods, and interfaces for adaptive persistence |
US9116812B2 (en) | 2012-01-27 | 2015-08-25 | Intelligent Intellectual Property Holdings 2 Llc | Systems and methods for a de-duplication cache |
US20130205066A1 (en) * | 2012-02-03 | 2013-08-08 | Sandisk Technologies Inc. | Enhanced write abort management in flash memory |
US9239781B2 (en) | 2012-02-07 | 2016-01-19 | SMART Storage Systems, Inc. | Storage control system with erase block mechanism and method of operation thereof |
US8706847B2 (en) | 2012-02-09 | 2014-04-22 | International Business Machines Corporation | Initiating a collective operation in a parallel computer |
US9495135B2 (en) | 2012-02-09 | 2016-11-15 | International Business Machines Corporation | Developing collective operations for a parallel computer |
US8799598B2 (en) * | 2012-02-17 | 2014-08-05 | Spansion Llc | Redundancy loading efficiency |
US9213632B1 (en) | 2012-02-29 | 2015-12-15 | Marvell International Ltd. | Systems and methods for data storage devices to use external resources |
US8842473B2 (en) | 2012-03-15 | 2014-09-23 | Sandisk Technologies Inc. | Techniques for accessing column selecting shift register with skipped entries in non-volatile memories |
US9298252B2 (en) | 2012-04-17 | 2016-03-29 | SMART Storage Systems, Inc. | Storage control system with power down mechanism and method of operation thereof |
TWI605458B (en) | 2012-04-25 | 2017-11-11 | Sony Corp | Non-volatile memory devices, non-volatile memory control devices, and non-volatile memory control methods |
US8681548B2 (en) | 2012-05-03 | 2014-03-25 | Sandisk Technologies Inc. | Column redundancy circuitry for non-volatile memory |
US8966205B1 (en) | 2012-05-10 | 2015-02-24 | Western Digital Technologies, Inc. | System data management using garbage collection and hybrid self mapping |
US8984247B1 (en) * | 2012-05-10 | 2015-03-17 | Western Digital Technologies, Inc. | Storing and reconstructing mapping table data in a data storage system |
US9977612B1 (en) | 2012-05-11 | 2018-05-22 | Western Digital Technologies, Inc. | System data management using garbage collection and logs |
US9170932B1 (en) | 2012-05-22 | 2015-10-27 | Western Digital Technologies, Inc. | System data storage mechanism providing coherency and segmented data loading |
US8949689B2 (en) | 2012-06-11 | 2015-02-03 | SMART Storage Systems, Inc. | Storage control system with data management mechanism and method of operation thereof |
US10339056B2 (en) | 2012-07-03 | 2019-07-02 | Sandisk Technologies Llc | Systems, methods and apparatus for cache transfers |
US9612966B2 (en) | 2012-07-03 | 2017-04-04 | Sandisk Technologies Llc | Systems, methods and apparatus for a virtual machine cache |
US8750045B2 (en) | 2012-07-27 | 2014-06-10 | Sandisk Technologies Inc. | Experience count dependent program algorithm for flash memory |
US8966343B2 (en) | 2012-08-21 | 2015-02-24 | Western Digital Technologies, Inc. | Solid-state drive retention monitor using reference blocks |
US8898548B1 (en) | 2012-08-24 | 2014-11-25 | Western Digital Technologies, Inc. | Methods, data storage devices and systems having variable size ECC page size |
CN104854554A (en) * | 2012-09-06 | 2015-08-19 | 百科容(科技)公司 | Storage translation layer |
CN102883207B (en) * | 2012-09-13 | 2016-11-23 | Shenzhen Skyworth Digital Technology Co., Ltd. | Data processing method and digital TV terminal |
US10318495B2 (en) | 2012-09-24 | 2019-06-11 | Sandisk Technologies Llc | Snapshots for a non-volatile device |
US10509776B2 (en) | 2012-09-24 | 2019-12-17 | Sandisk Technologies Llc | Time sequence data management |
US8897080B2 (en) | 2012-09-28 | 2014-11-25 | Sandisk Technologies Inc. | Variable rate serial to parallel shift register |
US9490035B2 (en) | 2012-09-28 | 2016-11-08 | SanDisk Technologies, Inc. | Centralized variable rate serializer and deserializer for bad column management |
US9076506B2 (en) | 2012-09-28 | 2015-07-07 | Sandisk Technologies Inc. | Variable rate parallel to serial shift register |
US9268682B2 (en) | 2012-10-05 | 2016-02-23 | Skyera, Llc | Methods, devices and systems for physical-to-logical mapping in solid state drives |
US9507523B1 (en) | 2012-10-12 | 2016-11-29 | Western Digital Technologies, Inc. | Methods, devices and systems for variable size logical page management in a solid state drive |
CN102929579A (en) * | 2012-10-16 | 2013-02-13 | Guodian Nanjing Automation Co., Ltd. | Internal memory copying method with improved efficiency |
US9489296B1 (en) * | 2012-10-17 | 2016-11-08 | Western Digital Technologies, Inc. | Methods, devices and systems for hardware-based garbage collection in solid state drives |
US8959281B1 (en) | 2012-11-09 | 2015-02-17 | Western Digital Technologies, Inc. | Data management for a storage device |
TWI479492B (en) * | 2012-11-20 | 2015-04-01 | Phison Electronics Corp | Memory storage device, memory controller thereof, and method for programming data thereof |
CN103839578B (en) * | 2012-11-27 | 2017-02-08 | Guangdong Huasheng Data Solid State Storage Co., Ltd. | Method for prolonging data retention time of NAND-based solid-state memory |
US9671962B2 (en) | 2012-11-30 | 2017-06-06 | Sandisk Technologies Llc | Storage control system with data management mechanism of parity and method of operation thereof |
KR101993626B1 (en) * | 2012-12-11 | 2019-06-28 | Samsung Electronics Co., Ltd. | SoC comprising special function register and operating method thereof |
US9430376B2 (en) | 2012-12-26 | 2016-08-30 | Western Digital Technologies, Inc. | Priority-based garbage collection for data storage systems |
US9842660B1 (en) | 2012-12-28 | 2017-12-12 | Virident Systems, Llc | System and method to improve enterprise reliability through tracking I/O performance metrics in non-volatile random access memory |
US9286002B1 (en) | 2012-12-28 | 2016-03-15 | Virident Systems Inc. | Dynamic restriping in nonvolatile memory systems |
US9223693B2 (en) | 2012-12-31 | 2015-12-29 | Sandisk Technologies Inc. | Memory system having an unequal number of memory die on different control channels |
US9336133B2 (en) | 2012-12-31 | 2016-05-10 | Sandisk Technologies Inc. | Method and system for managing program cycles including maintenance programming operations in a multi-layer memory |
US9734050B2 (en) | 2012-12-31 | 2017-08-15 | Sandisk Technologies Llc | Method and system for managing background operations in a multi-layer memory |
US9348746B2 (en) * | 2012-12-31 | 2016-05-24 | Sandisk Technologies | Method and system for managing block reclaim operations in a multi-layer memory |
US9465731B2 (en) | 2012-12-31 | 2016-10-11 | Sandisk Technologies Llc | Multi-layer non-volatile memory system having multiple partitions in a layer |
US9734911B2 (en) | 2012-12-31 | 2017-08-15 | Sandisk Technologies Llc | Method and system for asynchronous die operations in a non-volatile memory |
US9123445B2 (en) | 2013-01-22 | 2015-09-01 | SMART Storage Systems, Inc. | Storage control system with data management mechanism and method of operation thereof |
US10445229B1 (en) * | 2013-01-28 | 2019-10-15 | Radian Memory Systems, Inc. | Memory controller with at least one address segment defined for which data is striped across flash memory dies, with a common address offset being used to obtain physical addresses for the data in each of the dies |
US9214965B2 (en) | 2013-02-20 | 2015-12-15 | Sandisk Enterprise Ip Llc | Method and system for improving data integrity in non-volatile storage |
US9329928B2 (en) | 2013-02-20 | 2016-05-03 | Sandisk Enterprise IP LLC. | Bandwidth optimization in a non-volatile memory system |
US9183137B2 (en) | 2013-02-27 | 2015-11-10 | SMART Storage Systems, Inc. | Storage control system with data management mechanism and method of operation thereof |
US8972776B2 (en) | 2013-03-06 | 2015-03-03 | Seagate Technology, Llc | Partial R-block recycling |
US9384839B2 (en) | 2013-03-07 | 2016-07-05 | Sandisk Technologies Llc | Write sequence providing write abort protection |
US9470720B2 (en) | 2013-03-08 | 2016-10-18 | Sandisk Technologies Llc | Test system with localized heating and method of manufacture thereof |
US9189389B2 (en) | 2013-03-11 | 2015-11-17 | Kabushiki Kaisha Toshiba | Memory controller and memory system |
US9478271B2 (en) * | 2013-03-14 | 2016-10-25 | Seagate Technology Llc | Nonvolatile memory data recovery after power failure |
US10073626B2 (en) | 2013-03-15 | 2018-09-11 | Virident Systems, Llc | Managing the write performance of an asymmetric memory system |
US9135164B2 (en) | 2013-03-15 | 2015-09-15 | Virident Systems Inc. | Synchronous mirroring in non-volatile memory systems |
US8996796B1 (en) | 2013-03-15 | 2015-03-31 | Virident Systems Inc. | Small block write operations in non-volatile memory systems |
US9842053B2 (en) | 2013-03-15 | 2017-12-12 | Sandisk Technologies Llc | Systems and methods for persistent cache logging |
US9043780B2 (en) | 2013-03-27 | 2015-05-26 | SMART Storage Systems, Inc. | Electronic system with system modification control mechanism and method of operation thereof |
CN105190761A (en) | 2013-03-27 | 2015-12-23 | Hewlett-Packard Development Company, L.P. | Non-volatile memory based synchronous logic |
US10049037B2 (en) | 2013-04-05 | 2018-08-14 | Sandisk Enterprise Ip Llc | Data management in a storage system |
US9170941B2 (en) | 2013-04-05 | 2015-10-27 | Sandisk Enterprise IP LLC | Data hardening in a storage system |
US9543025B2 (en) | 2013-04-11 | 2017-01-10 | Sandisk Technologies Llc | Storage control system with power-off time estimation mechanism and method of operation thereof |
US10546648B2 (en) | 2013-04-12 | 2020-01-28 | Sandisk Technologies Llc | Storage control system with data management mechanism and method of operation thereof |
US10558561B2 (en) | 2013-04-16 | 2020-02-11 | Sandisk Technologies Llc | Systems and methods for storage metadata management |
US10102144B2 (en) * | 2013-04-16 | 2018-10-16 | Sandisk Technologies Llc | Systems, methods and interfaces for data virtualization |
KR20140142035A (en) * | 2013-06-03 | 2014-12-11 | Samsung Electronics Co., Ltd. | Memory controller and method of operation of the memory controller |
US9898056B2 (en) | 2013-06-19 | 2018-02-20 | Sandisk Technologies Llc | Electronic assembly with thermal channel and method of manufacture thereof |
US9313874B2 (en) | 2013-06-19 | 2016-04-12 | SMART Storage Systems, Inc. | Electronic system with heat extraction and method of manufacture thereof |
US9367353B1 (en) | 2013-06-25 | 2016-06-14 | Sandisk Technologies Inc. | Storage control system with power throttling mechanism and method of operation thereof |
US9244519B1 (en) | 2013-06-25 | 2016-01-26 | Smart Storage Systems, Inc. | Storage system with data transfer rate adjustment for power throttling |
US9141176B1 (en) | 2013-07-29 | 2015-09-22 | Western Digital Technologies, Inc. | Power management for data storage device |
US9842128B2 (en) | 2013-08-01 | 2017-12-12 | Sandisk Technologies Llc | Systems and methods for atomic storage operations |
US9146850B2 (en) | 2013-08-01 | 2015-09-29 | SMART Storage Systems, Inc. | Data storage system with dynamic read threshold mechanism and method of operation thereof |
US9361222B2 (en) | 2013-08-07 | 2016-06-07 | SMART Storage Systems, Inc. | Electronic system with storage drive life estimation mechanism and method of operation thereof |
US9448946B2 (en) | 2013-08-07 | 2016-09-20 | Sandisk Technologies Llc | Data storage system with stale data mechanism and method of operation thereof |
US9431113B2 (en) | 2013-08-07 | 2016-08-30 | Sandisk Technologies Llc | Data storage system with dynamic erase block grouping mechanism and method of operation thereof |
US9070379B2 (en) | 2013-08-28 | 2015-06-30 | Western Digital Technologies, Inc. | Data migration for data storage device |
US20150092488A1 (en) * | 2013-09-27 | 2015-04-02 | Yogesh Wakchaure | Flash memory system endurance improvement using temperature based nand settings |
US10019320B2 (en) | 2013-10-18 | 2018-07-10 | Sandisk Technologies Llc | Systems and methods for distributed atomic storage operations |
US8917471B1 (en) | 2013-10-29 | 2014-12-23 | Western Digital Technologies, Inc. | Power management for data storage device |
US10073630B2 (en) | 2013-11-08 | 2018-09-11 | Sandisk Technologies Llc | Systems and methods for log coordination |
US9152555B2 (en) | 2013-11-15 | 2015-10-06 | Sandisk Enterprise IP LLC. | Data management with modular erase in a data storage system |
US9466383B2 (en) * | 2013-12-30 | 2016-10-11 | Sandisk Technologies Llc | Non-volatile memory and method with adaptive logical groups |
WO2015118623A1 (en) * | 2014-02-05 | 2015-08-13 | Hitachi, Ltd. | Information processing device |
US9927998B2 (en) * | 2014-02-05 | 2018-03-27 | Tidal Systems, Inc. | Flash memory compression |
KR102225989B1 (en) * | 2014-03-04 | 2021-03-10 | Samsung Electronics Co., Ltd. | Nonvolatile memory system and operation method thereof |
US9354955B1 (en) | 2014-03-19 | 2016-05-31 | Western Digital Technologies, Inc. | Partial garbage collection for fast error handling and optimized garbage collection for the invisible band |
US9653184B2 (en) | 2014-06-16 | 2017-05-16 | Sandisk Technologies Llc | Non-volatile memory module with physical-to-physical address remapping |
US8976609B1 (en) | 2014-06-16 | 2015-03-10 | Sandisk Enterprise Ip Llc | Low-test memory stack for non-volatile storage |
US9613715B2 (en) | 2014-06-16 | 2017-04-04 | Sandisk Technologies Llc | Low-test memory stack for non-volatile storage |
US8953269B1 (en) | 2014-07-18 | 2015-02-10 | Western Digital Technologies, Inc. | Management of data objects in a data object zone |
US9875055B1 (en) | 2014-08-04 | 2018-01-23 | Western Digital Technologies, Inc. | Check-pointing of metadata |
US9575661B2 (en) | 2014-08-19 | 2017-02-21 | Samsung Electronics Co., Ltd. | Nonvolatile memory systems configured to use deduplication and methods of controlling the same |
US9626289B2 (en) * | 2014-08-28 | 2017-04-18 | Sandisk Technologies Llc | Metalblock relinking to physical blocks of semiconductor memory in adaptive wear leveling based on health |
US10175964B2 (en) * | 2014-09-26 | 2019-01-08 | Microsoft Technology Licensing, Llc | Compiler caching for runtime routine redundancy tracking |
US9542286B2 (en) * | 2014-09-30 | 2017-01-10 | Sandisk Technologies Llc | Failure logging mechanism to reduce garbage collection time in partially reused bad blocks |
US9934872B2 (en) | 2014-10-30 | 2018-04-03 | Sandisk Technologies Llc | Erase stress and delta erase loop count methods for various fail modes in non-volatile memory |
TWI604307B (en) | 2014-10-31 | 2017-11-01 | Silicon Motion, Inc. | Data storage device and flash memory control method |
KR20160072706A (en) * | 2014-12-15 | 2016-06-23 | SK Hynix Inc. | Semiconductor device and operating method thereof |
US9224502B1 (en) | 2015-01-14 | 2015-12-29 | Sandisk Technologies Inc. | Techniques for detection and treating memory hole to local interconnect marginality defects |
US10482008B2 (en) * | 2015-01-23 | 2019-11-19 | Hewlett Packard Enterprise Development Lp | Aligned variable reclamation |
KR102275710B1 (en) * | 2015-02-02 | 2021-07-09 | Samsung Electronics Co., Ltd. | Memory Device and Memory System capable of over-writing and Operating Method thereof |
US10032524B2 (en) | 2015-02-09 | 2018-07-24 | Sandisk Technologies Llc | Techniques for determining local interconnect defects |
US9946607B2 (en) | 2015-03-04 | 2018-04-17 | Sandisk Technologies Llc | Systems and methods for storage error management |
US9269446B1 (en) | 2015-04-08 | 2016-02-23 | Sandisk Technologies Inc. | Methods to improve programming of slow cells |
US9564219B2 (en) | 2015-04-08 | 2017-02-07 | Sandisk Technologies Llc | Current based detection and recording of memory hole-interconnect spacing defects |
US10009438B2 (en) | 2015-05-20 | 2018-06-26 | Sandisk Technologies Llc | Transaction log acceleration |
KR102513491B1 (en) * | 2015-07-15 | 2023-03-27 | SK Hynix Inc. | Memory system and operating method of memory system |
CN105183545A (en) * | 2015-07-23 | 2015-12-23 | Liuzhou Yiwang Technology Co., Ltd. | Big data information compression method |
JP2017045288A (en) * | 2015-08-27 | 2017-03-02 | Toshiba Corporation | Memory system |
US10133490B2 (en) | 2015-10-30 | 2018-11-20 | Sandisk Technologies Llc | System and method for managing extended maintenance scheduling in a non-volatile memory |
US10042553B2 (en) | 2015-10-30 | 2018-08-07 | Sandisk Technologies Llc | Method and system for programming a multi-layer non-volatile memory having a single fold data path |
US10120613B2 (en) | 2015-10-30 | 2018-11-06 | Sandisk Technologies Llc | System and method for rescheduling host and maintenance operations in a non-volatile memory |
US9778855B2 (en) | 2015-10-30 | 2017-10-03 | Sandisk Technologies Llc | System and method for precision interleaving of data writes in a non-volatile memory |
US9837161B2 (en) | 2016-03-09 | 2017-12-05 | Nxp Usa, Inc. | Split-gate memory having sector retirement with reduced current and method therefor |
US10031845B2 (en) * | 2016-04-01 | 2018-07-24 | Intel Corporation | Method and apparatus for processing sequential writes to a block group of physical blocks in a memory device |
US10223004B2 (en) | 2016-04-07 | 2019-03-05 | International Business Machines Corporation | Parallel read and writes in 3D flash memory |
US9875034B2 (en) | 2016-04-07 | 2018-01-23 | International Business Machines Corporation | Delaying programming requests in flash memory |
WO2017183153A1 (en) * | 2016-04-21 | 2017-10-26 | Fujitsu Frontech Limited | Write control program, write control device, and write control method |
FR3055992A1 (en) * | 2016-09-09 | 2018-03-16 | Proton World International N.V. | INDEX MANAGEMENT IN A FLASH MEMORY |
CN107870727B (en) * | 2016-09-23 | 2021-01-01 | EMC IP Holding Company LLC | Method and apparatus for storing data |
KR20180091296A (en) | 2017-02-06 | 2018-08-16 | Samsung Electronics Co., Ltd. | Storage device for processing corrupted meta data and Method of operating the storage device |
IT201700020134A1 (en) * | 2017-02-22 | 2018-08-22 | STMicroelectronics S.r.l. | PROCEDURE FOR MANAGING SEMICONDUCTOR MEMORIES, INTERFACE, MEMORY AND CORRESPONDING DEVICE |
US10388379B2 (en) * | 2017-03-21 | 2019-08-20 | Micron Technology, Inc. | Apparatuses and methods for automated dynamic word line start voltage |
TWI629590B (en) * | 2017-04-14 | 2018-07-11 | Phison Electronics Corp. | Memory management method, memory control circuit unit and memory storage device |
US10476879B2 (en) * | 2017-07-26 | 2019-11-12 | International Business Machines Corporation | Blockchain authentication via hard/soft token verification |
US11194524B2 (en) | 2017-09-15 | 2021-12-07 | Qualcomm Incorporated | Apparatus and method for performing persistent write operations using a persistent write command |
FR3072476A1 (en) | 2017-10-13 | 2019-04-19 | Proton World International N.V. | MEMORY LOGIC UNIT FOR FLASH MEMORY |
US10445230B2 (en) * | 2017-12-08 | 2019-10-15 | Macronix International Co., Ltd. | Managing block arrangement of super blocks |
CN109947660B (en) * | 2017-12-21 | 2023-03-14 | Lite-On Storage Technology (Guangzhou) Co., Ltd. | Solid state storage device and related computer system |
CN108763101B (en) * | 2018-05-30 | 2021-11-09 | Zhengzhou Yunhai Information Technology Co., Ltd. | Data carrying method and system |
US10628076B1 (en) * | 2018-10-01 | 2020-04-21 | Micron Technology, Inc. | Data erasure in memory sub-systems |
CN110022299A (en) * | 2019-03-06 | 2019-07-16 | Zhejiang Tianmai Lingyu Technology Co., Ltd. | Method for ultra-large-scale distributed network computing |
CN109995610A (en) * | 2019-03-06 | 2019-07-09 | Zhejiang Tianmai Lingyu Technology Co., Ltd. | Peer-to-peer network underlying communication system for simulating biological neural forms |
KR20210118453A (en) | 2019-04-30 | 2021-09-30 | Yangtze Memory Technologies Co., Ltd. | Electronic device and method for managing read levels of flash memory |
US11281578B2 (en) | 2019-08-20 | 2022-03-22 | Micron Technology, Inc. | Garbage collection in a memory sub-system during a low battery state |
US11282567B2 (en) | 2019-08-20 | 2022-03-22 | Micron Technology, Inc. | Sequential SLC read optimization |
US20210055878A1 (en) * | 2019-08-20 | 2021-02-25 | Micron Technology, Inc. | Data compaction within the same plane of a memory component |
US11726869B2 (en) | 2019-08-20 | 2023-08-15 | Micron Technology, Inc. | Performing error control operation on memory component for garbage collection |
US11281392B2 (en) | 2019-08-28 | 2022-03-22 | Micron Technology, Inc. | Garbage collection in a memory component using an adjusted parameter |
US11294827B2 (en) * | 2019-09-12 | 2022-04-05 | Western Digital Technologies, Inc. | Non-sequential zoned namespaces |
US11762569B2 (en) * | 2019-10-29 | 2023-09-19 | International Business Machines Corporation | Workload based relief valve activation for hybrid controller architectures |
US11216364B2 (en) * | 2020-02-18 | 2022-01-04 | Micron Technology, Inc. | Sequential read optimization in a memory sub-system that programs sequentially |
EP3899711B1 (en) * | 2020-02-20 | 2024-02-07 | Yangtze Memory Technologies Co., Ltd. | Method of programming multi-plane memory device |
US11287989B2 (en) | 2020-03-24 | 2022-03-29 | Western Digital Technologies, Inc. | Dynamic allocation of sub blocks |
US11137920B1 (en) * | 2020-04-30 | 2021-10-05 | Micron Technology, Inc. | Storing zones in a zone namespace on separate planes of a multi-plane memory device |
US11663068B2 (en) * | 2020-06-29 | 2023-05-30 | Western Digital Technologies, Inc. | Write abort error detection in multi-pass programming |
TWI747349B (en) * | 2020-06-30 | 2021-11-21 | 大陸商合肥沛睿微電子股份有限公司 | Low-level formatting method of storage device |
TWI775268B (en) | 2021-01-07 | 2022-08-21 | Silicon Motion, Inc. | Storage device, flash memory controller and control method thereof |
JP7574118B2 (en) | 2021-03-19 | 2024-10-28 | PFU Limited | Computer equipment |
US11842062B2 (en) | 2022-02-09 | 2023-12-12 | Western Digital Technologies, Inc. | Method of handling irregular MetaBlock wear leveling and UGSD boot time improvement |
US20240061786A1 (en) * | 2022-08-22 | 2024-02-22 | Samsung Electronics Co., Ltd. | Systems, methods, and apparatus for accessing data in versions of memory pages |
Citations (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5070032A (en) * | 1989-03-15 | 1991-12-03 | Sundisk Corporation | Method of making dense flash eeprom semiconductor memory structures |
US5095344A (en) * | 1988-06-08 | 1992-03-10 | Eliyahou Harari | Highly compact eprom and flash eeprom devices |
US5313421A (en) * | 1992-01-14 | 1994-05-17 | Sundisk Corporation | EEPROM with split gate source side injection |
US5315541A (en) * | 1992-07-24 | 1994-05-24 | Sundisk Corporation | Segmented column memory array |
US5343063A (en) * | 1990-12-18 | 1994-08-30 | Sundisk Corporation | Dense vertical programmable read only memory cell structure and processes for making them |
US5367484A (en) * | 1993-04-01 | 1994-11-22 | Microchip Technology Incorporated | Programmable high endurance block for EEPROM device |
US5404485A (en) * | 1993-03-08 | 1995-04-04 | M-Systems Flash Disk Pioneers Ltd. | Flash file system |
US5473765A (en) * | 1994-01-24 | 1995-12-05 | 3Com Corporation | Apparatus for using flash memory as a floppy disk emulator in a computer system |
US5532962A (en) * | 1992-05-20 | 1996-07-02 | Sandisk Corporation | Soft errors handling in EEPROM devices |
US5570315A (en) * | 1993-09-21 | 1996-10-29 | Kabushiki Kaisha Toshiba | Multi-state EEPROM having write-verify control circuit |
US5661053A (en) * | 1994-05-25 | 1997-08-26 | Sandisk Corporation | Method of making dense flash EEPROM cell array and peripheral supporting circuits formed in deposited field oxide with the use of spacers |
US5742934A (en) * | 1995-09-13 | 1998-04-21 | Mitsubishi Denki Kabushiki Kaisha | Flash solid state disk card with selective use of an address conversion table depending on logical and physical sector numbers |
US5751634A (en) * | 1995-05-16 | 1998-05-12 | Kabushiki Kaisha Toshiba | Non-volatile semiconductor memory device for storing multivalue data and readout/write-in method therefor |
US5768192A (en) * | 1996-07-23 | 1998-06-16 | Saifun Semiconductors, Ltd. | Non-volatile semiconductor memory cell utilizing asymmetrical charge trapping |
US5774397A (en) * | 1993-06-29 | 1998-06-30 | Kabushiki Kaisha Toshiba | Non-volatile semiconductor memory device and method of programming a non-volatile memory cell to a predetermined state |
US5798968A (en) * | 1996-09-24 | 1998-08-25 | Sandisk Corporation | Plane decode/virtual sector architecture |
US5860124A (en) * | 1996-09-30 | 1999-01-12 | Intel Corporation | Method for performing a continuous over-write of a file in nonvolatile memory |
US5890192A (en) * | 1996-11-05 | 1999-03-30 | Sandisk Corporation | Concurrent write of multiple chunks of data into multiple subarrays of flash EEPROM |
US5903495A (en) * | 1996-03-18 | 1999-05-11 | Kabushiki Kaisha Toshiba | Semiconductor device and memory system |
US5907856A (en) * | 1995-07-31 | 1999-05-25 | Lexar Media, Inc. | Moving sectors within a block of information in a flash memory mass storage architecture |
US5909449A (en) * | 1997-09-08 | 1999-06-01 | Invox Technology | Multibit-per-cell non-volatile memory with error detection and correction |
US5930167A (en) * | 1997-07-30 | 1999-07-27 | Sandisk Corporation | Multi-state non-volatile flash memory capable of being its own two state write cache |
US5933368A (en) * | 1996-11-25 | 1999-08-03 | Macronix International Co., Ltd. | Flash memory mass storage system |
US5937425A (en) * | 1997-10-16 | 1999-08-10 | M-Systems Flash Disk Pioneers Ltd. | Flash file system optimized for page-mode flash technologies |
US6011725A (en) * | 1997-08-01 | 2000-01-04 | Saifun Semiconductors, Ltd. | Two bit non-volatile electrically erasable and programmable semiconductor memory cell utilizing asymmetrical charge trapping |
US6034897A (en) * | 1999-04-01 | 2000-03-07 | Lexar Media, Inc. | Space management for managing high capacity nonvolatile memory |
US6125435A (en) * | 1995-09-13 | 2000-09-26 | Lexar Media, Inc. | Alignment of cluster address to block addresses within a semiconductor non-volatile mass storage memory |
US6141249A (en) * | 1999-04-01 | 2000-10-31 | Lexar Media, Inc. | Organization of blocks within a nonvolatile memory unit to effectively decrease sector write operation time |
US6222762B1 (en) * | 1992-01-14 | 2001-04-24 | Sandisk Corporation | Multi-state memory |
US6226728B1 (en) * | 1998-04-21 | 2001-05-01 | Intel Corporation | Dynamic allocation for efficient management of variable sized data within a nonvolatile memory |
US6272610B1 (en) * | 1993-03-11 | 2001-08-07 | Hitachi, Ltd. | File memory device using flash memories, and an information processing system using the same |
US6304980B1 (en) * | 1996-03-13 | 2001-10-16 | International Business Machines Corporation | Peer-to-peer backup system with failure-triggered device switching honoring reservation of primary device |
US20010042882A1 (en) * | 1999-10-08 | 2001-11-22 | Yun Chang | Method for forming a flash memory cell having contoured floating gate surface |
US6377500B1 (en) * | 1999-11-11 | 2002-04-23 | Kabushiki Kaisha Toshiba | Memory system with a non-volatile memory, having address translating function |
US6401160B1 (en) * | 1999-03-31 | 2002-06-04 | Intel Corporation | Method and apparatus to permit adjustable code/data boundary in a nonvolatile memory |
US6421279B1 (en) * | 1993-04-08 | 2002-07-16 | Hitachi, Ltd. | Flash memory control method and apparatus processing system therewith |
US6446249B1 (en) * | 2000-05-11 | 2002-09-03 | Quickturn Design Systems, Inc. | Emulation circuit with a hold time algorithm, logic and analyzer and shadow memory |
US6456528B1 (en) * | 2001-09-17 | 2002-09-24 | Sandisk Corporation | Selective operation of a multi-state non-volatile memory system in a binary mode |
US20020136054A1 (en) * | 2001-01-11 | 2002-09-26 | Blodgett Greg A. | Memory device with multi-level storage cells and apparatuses, systems and methods including same |
US20030076709A1 (en) * | 2001-10-24 | 2003-04-24 | Jen-Ren Huang | Method for operating a multi-level memory cell |
US6567307B1 (en) * | 2000-07-21 | 2003-05-20 | Lexar Media, Inc. | Block management for mass storage |
US6571261B1 (en) * | 2000-07-13 | 2003-05-27 | International Business Machines Corporation | Defragmentation utility for a shared disk parallel file system across a storage area network |
US20030110343A1 (en) * | 2001-12-11 | 2003-06-12 | Mitsubishi Denki Kabushiki Kaisha | File system including non-volatile semiconductor memory device having a plurality of banks |
US6591328B1 (en) * | 1998-07-28 | 2003-07-08 | Sony Corporation | Non-volatile memory storing address control table data formed of logical addresses and physical addresses |
US6826651B2 (en) * | 1998-05-29 | 2004-11-30 | International Business Machines Corporation | State-based allocation and replacement for improved hit ratio in directory caches |
US6895464B2 (en) * | 2002-06-03 | 2005-05-17 | Honeywell International Inc. | Flash memory management system and method utilizing multiple block list windows |
US6898662B2 (en) * | 2001-09-28 | 2005-05-24 | Lexar Media, Inc. | Memory system sectors |
US7139864B2 (en) * | 2003-12-30 | 2006-11-21 | Sandisk Corporation | Non-volatile memory and method with block management system |
Family Cites Families (58)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5043940A (en) | 1988-06-08 | 1991-08-27 | Eliyahou Harari | Flash EEPROM memory systems having multistate storage cells |
US5172338B1 (en) | 1989-04-13 | 1997-07-08 | Sandisk Corp | Multi-state eeprom read and write circuits and techniques |
JPH0546494A (en) | 1991-08-19 | 1993-02-26 | Toshiba Corp | Memory check system |
JP3122222B2 (en) | 1992-05-08 | 2001-01-09 | Toshiba Corporation | Memory card device |
JPH08212019A (en) * | 1995-01-31 | 1996-08-20 | Mitsubishi Electric Corp | Semiconductor disk device |
US5797033A (en) | 1995-03-31 | 1998-08-18 | Cirrus Logic, Inc. | Direct memory access for storing and retrieving data based on packet size |
JP3707854B2 (en) | 1996-03-01 | 2005-10-19 | Toshiba Corporation | Computer having cache function and cache memory control method |
JP3197815B2 (en) * | 1996-04-15 | 2001-08-13 | International Business Machines Corporation | Semiconductor memory device and control method thereof |
US6900193B1 (en) | 1996-05-01 | 2005-05-31 | The United States Of America As Represented By The Department Of Health And Human Services | Structural modification of 19-norprogesterone I: 17-α-substituted-11-β-substituted-4-aryl and 21-substituted 19-norpregnadienedione as new antiprogestational agents |
US6477627B1 (en) * | 1996-05-31 | 2002-11-05 | Emc Corporation | Method and apparatus for mirroring data in a remote data storage system |
JPH1027444A (en) * | 1996-07-10 | 1998-01-27 | Fujitsu Ltd | Position sensitivity adjusting method of disk device |
JP3919261B2 (en) * | 1996-07-31 | 2007-05-23 | Canon Inc. | Memory control device and memory access method |
JP3072722B2 (en) | 1997-06-20 | 2000-08-07 | Sony Corporation | Data management device and data management method using flash memory and storage medium using flash memory |
US6157991A (en) | 1998-04-01 | 2000-12-05 | Emc Corporation | Method and apparatus for asynchronously updating a mirror of a source device |
US6185663B1 (en) | 1998-06-15 | 2001-02-06 | Compaq Computer Corporation | Computer method and apparatus for file system block allocation with multiple redo |
JP4689137B2 (en) * | 2001-08-08 | 2011-05-25 | Hitachi, Ltd. | Remote copy control method and storage system |
US6490649B2 (en) | 1998-11-10 | 2002-12-03 | Lexar Media, Inc. | Memory device |
GB9903490D0 (en) | 1999-02-17 | 1999-04-07 | Memory Corp Plc | Memory system |
US6426893B1 (en) | 2000-02-17 | 2002-07-30 | Sandisk Corporation | Flash eeprom system with simultaneous multiple data sector programming and storage of physical block characteristics in other designated blocks |
US6598174B1 (en) | 2000-04-26 | 2003-07-22 | Dell Products L.P. | Method and apparatus for storage unit replacement in non-redundant array |
DE10196175T1 (en) | 2000-05-11 | 2003-04-17 | Quickturn Design Systems Inc | Emulation circuit with a hold time algorithm, logic analyzer and shadow memory |
JP3699890B2 (en) | 2000-08-30 | 2005-09-28 | Sharp Corporation | Nonvolatile semiconductor memory device |
JP4184586B2 (en) | 2000-09-28 | 2008-11-19 | Toshiba Corporation | Semiconductor memory device |
JP2002133878A (en) | 2000-10-23 | 2002-05-10 | Hitachi Ltd | Non-volatile semiconductor memory circuit and semiconductor integrated circuit |
DE10165027C5 (en) * | 2000-10-27 | 2019-10-02 | Nsk Ltd. | Rolling bearing and spindle device for machine tool |
US6763424B2 (en) | 2001-01-19 | 2004-07-13 | Sandisk Corporation | Partial block data programming and reading operations in a non-volatile memory |
US6728849B2 (en) | 2001-12-14 | 2004-04-27 | Hitachi, Ltd. | Remote storage system and method |
JP3828376B2 (en) | 2001-05-07 | 2006-10-04 | Toshiba Corporation | Storage system |
KR100389867B1 (en) | 2001-06-04 | 2003-07-04 | Samsung Electronics Co., Ltd. | Flash memory management method |
US6591330B2 (en) | 2001-06-18 | 2003-07-08 | M-Systems Flash Disk Pioneers Ltd. | System and method for flexible flash file |
JP4256600B2 (en) * | 2001-06-19 | 2009-04-22 | TDK Corporation | MEMORY CONTROLLER, FLASH MEMORY SYSTEM PROVIDED WITH MEMORY CONTROLLER, AND FLASH MEMORY CONTROL METHOD |
US6522580B2 (en) | 2001-06-27 | 2003-02-18 | Sandisk Corporation | Operating techniques for reducing effects of coupling between storage elements of a non-volatile memory operated in multiple data states |
US7356641B2 (en) | 2001-08-28 | 2008-04-08 | International Business Machines Corporation | Data management in flash memory |
US6717847B2 (en) | 2001-09-17 | 2004-04-06 | Sandisk Corporation | Selective operation of a multi-state non-volatile memory system in a binary mode |
GB0123416D0 (en) | 2001-09-28 | 2001-11-21 | Memquest Ltd | Non-volatile memory control |
GB0123415D0 (en) | 2001-09-28 | 2001-11-21 | Memquest Ltd | Method of writing data to non-volatile memory |
US6925007B2 (en) | 2001-10-31 | 2005-08-02 | Sandisk Corporation | Multi-state non-volatile integrated circuit memory systems that employ dielectric storage elements |
US7199905B2 (en) * | 2001-12-04 | 2007-04-03 | Xerox Corporation | Systems and methods for halftoning multiple color separation layers by interlayer error diffusion |
US6820180B2 (en) | 2002-04-04 | 2004-11-16 | International Business Machines Corporation | Apparatus and method of cascading backup logical volume mirrors |
US7047377B2 (en) | 2002-08-20 | 2006-05-16 | Gruintine Pueche, Inc. | System and method for conducting an auction-based ranking of search results on a computer network |
US8412879B2 (en) | 2002-10-28 | 2013-04-02 | Sandisk Technologies Inc. | Hybrid implementation for error correction codes within a non-volatile memory system |
US7096313B1 (en) | 2002-10-28 | 2006-08-22 | Sandisk Corporation | Tracking the least frequently erased blocks in non-volatile memory systems |
US7103732B1 (en) | 2002-10-28 | 2006-09-05 | Sandisk Corporation | Method and apparatus for managing an erase count block |
US7526599B2 (en) * | 2002-10-28 | 2009-04-28 | Sandisk Corporation | Method and apparatus for effectively enabling an out of sequence write process within a non-volatile memory system |
US6831865B2 (en) | 2002-10-28 | 2004-12-14 | Sandisk Corporation | Maintaining erase counts in non-volatile storage systems |
US6985992B1 (en) | 2002-10-28 | 2006-01-10 | Sandisk Corporation | Wear-leveling in non-volatile storage systems |
US7346740B2 (en) | 2002-11-01 | 2008-03-18 | Seagate Technology Llc | Transferring speculative data in lieu of requested data in a data transfer operation |
JP4200362B2 (en) | 2003-02-20 | 2008-12-24 | Sony Corporation | Recording medium recording control method, recording control apparatus, and electronic apparatus |
DE10349595B3 (en) | 2003-10-24 | 2004-12-09 | Hyperstone Ag | Writing sectors of block-deletable memory, writes to alternative memory block sectors in sequence, storing their positions in sector table |
US8504798B2 (en) | 2003-12-30 | 2013-08-06 | Sandisk Technologies Inc. | Management of non-volatile memory systems having large erase blocks |
KR20060134011A (en) | 2003-12-30 | 2006-12-27 | Sandisk Corporation | Non-volatile memory and method with memory planes alignment |
US7383375B2 (en) | 2003-12-30 | 2008-06-03 | Sandisk Corporation | Data run programming |
EP1704479B1 (en) * | 2003-12-30 | 2014-03-26 | SanDisk Technologies Inc. | Non-volatile memory and method with phased program failure handling |
KR20070007264A (en) | 2003-12-30 | 2007-01-15 | Sandisk Corporation | Non-volatile memory and method with non-sequential update block management |
US20050144363A1 (en) | 2003-12-30 | 2005-06-30 | Sinclair Alan W. | Data boundary management |
US7433993B2 (en) | 2003-12-30 | 2008-10-07 | Sandisk Corporation | Adaptive metablocks |
EP1758027B1 (en) | 2003-12-30 | 2010-07-14 | SanDisk Corporation | Non-volatile memory and method with control data management |
US7315916B2 (en) * | 2004-12-16 | 2008-01-01 | Sandisk Corporation | Scratch pad block |
- 2003
- 2003-12-30 US US10/750,155 patent/US7139864B2/en active Active
- 2004
- 2004-08-13 US US10/917,889 patent/US20050166087A1/en not_active Abandoned
- 2004-08-13 US US10/917,867 patent/US20050141312A1/en not_active Abandoned
- 2004-08-13 US US10/917,725 patent/US8051257B2/en active Active
- 2004-08-13 US US10/917,888 patent/US20050141313A1/en not_active Abandoned
- 2004-12-21 AT AT04815706T patent/ATE498893T1/en not_active IP Right Cessation
- 2004-12-21 EP EP04815706A patent/EP1704571B1/en not_active Not-in-force
- 2004-12-21 CN CNB2004800421293A patent/CN100435115C/en active Active
- 2004-12-21 EP EP10009157.8A patent/EP2293196B1/en not_active Not-in-force
- 2004-12-21 DE DE602004031461T patent/DE602004031461D1/en active Active
- 2004-12-21 WO PCT/US2004/043692 patent/WO2005066972A1/en active Application Filing
- 2004-12-21 JP JP2006547493A patent/JP4938460B2/en not_active Expired - Fee Related
- 2004-12-21 JP JP2006547386A patent/JP4933269B2/en active Active
- 2004-12-21 KR KR1020067013310A patent/KR101012274B1/en not_active IP Right Cessation
- 2004-12-21 CN CNB2004800421382A patent/CN100435116C/en active Active
- 2004-12-21 JP JP2006547516A patent/JP4851344B2/en not_active Expired - Fee Related
- 2004-12-21 CN CNA2004800415625A patent/CN1914689A/en active Pending
- 2004-12-22 CN CNA2004800419683A patent/CN1973334A/en active Pending
- 2004-12-22 DE DE602004028174T patent/DE602004028174D1/en active Active
- 2004-12-22 AT AT04815695T patent/ATE423380T1/en not_active IP Right Cessation
- 2004-12-22 CN CNA2004800421363A patent/CN1922580A/en active Pending
- 2004-12-22 JP JP2006547487A patent/JP4898457B2/en active Active
- 2004-12-22 AT AT06076755T patent/ATE474269T1/en not_active IP Right Cessation
- 2004-12-22 DE DE602004019571T patent/DE602004019571D1/en active Active
- 2004-12-30 TW TW093141373A patent/TWI272483B/en not_active IP Right Cessation
- 2008
- 2008-09-26 US US12/239,489 patent/US8103841B2/en not_active Expired - Fee Related
- 2008-09-26 US US12/239,351 patent/US7913061B2/en not_active Expired - Lifetime
- 2008-10-09 US US12/248,160 patent/US7945759B2/en not_active Expired - Lifetime
- 2011
- 2011-05-06 US US13/102,261 patent/US8621177B2/en not_active Expired - Lifetime
- 2011-08-05 JP JP2011171531A patent/JP5617079B2/en not_active Expired - Fee Related
- 2011-09-23 US US13/244,178 patent/US8239643B2/en not_active Expired - Fee Related
Patent Citations (53)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5095344A (en) * | 1988-06-08 | 1992-03-10 | Eliyahou Harari | Highly compact eprom and flash eeprom devices |
US5070032A (en) * | 1989-03-15 | 1991-12-03 | Sundisk Corporation | Method of making dense flash eeprom semiconductor memory structures |
US5343063A (en) * | 1990-12-18 | 1994-08-30 | Sundisk Corporation | Dense vertical programmable read only memory cell structure and processes for making them |
US5313421A (en) * | 1992-01-14 | 1994-05-17 | Sundisk Corporation | EEPROM with split gate source side injection |
US6222762B1 (en) * | 1992-01-14 | 2001-04-24 | Sandisk Corporation | Multi-state memory |
US5532962A (en) * | 1992-05-20 | 1996-07-02 | Sandisk Corporation | Soft errors handling in EEPROM devices |
US5315541A (en) * | 1992-07-24 | 1994-05-24 | Sundisk Corporation | Segmented column memory array |
US5404485A (en) * | 1993-03-08 | 1995-04-04 | M-Systems Flash Disk Pioneers Ltd. | Flash file system |
US6272610B1 (en) * | 1993-03-11 | 2001-08-07 | Hitachi, Ltd. | File memory device using flash memories, and an information processing system using the same |
US5367484A (en) * | 1993-04-01 | 1994-11-22 | Microchip Technology Incorporated | Programmable high endurance block for EEPROM device |
US6421279B1 (en) * | 1993-04-08 | 2002-07-16 | Hitachi, Ltd. | Flash memory control method and apparatus processing system therewith |
US5774397A (en) * | 1993-06-29 | 1998-06-30 | Kabushiki Kaisha Toshiba | Non-volatile semiconductor memory device and method of programming a non-volatile memory cell to a predetermined state |
US5570315A (en) * | 1993-09-21 | 1996-10-29 | Kabushiki Kaisha Toshiba | Multi-state EEPROM having write-verify control circuit |
US5473765A (en) * | 1994-01-24 | 1995-12-05 | 3Com Corporation | Apparatus for using flash memory as a floppy disk emulator in a computer system |
US5661053A (en) * | 1994-05-25 | 1997-08-26 | Sandisk Corporation | Method of making dense flash EEPROM cell array and peripheral supporting circuits formed in deposited field oxide with the use of spacers |
US5751634A (en) * | 1995-05-16 | 1998-05-12 | Kabushiki Kaisha Toshiba | Non-volatile semiconductor memory device for storing multivalue data and readout/write-in method therefor |
US5907856A (en) * | 1995-07-31 | 1999-05-25 | Lexar Media, Inc. | Moving sectors within a block of information in a flash memory mass storage architecture |
US5742934A (en) * | 1995-09-13 | 1998-04-21 | Mitsubishi Denki Kabushiki Kaisha | Flash solid state disk card with selective use of an address conversion table depending on logical and physical sector numbers |
US6125435A (en) * | 1995-09-13 | 2000-09-26 | Lexar Media, Inc. | Alignment of cluster address to block addresses within a semiconductor non-volatile mass storage memory |
US6304980B1 (en) * | 1996-03-13 | 2001-10-16 | International Business Machines Corporation | Peer-to-peer backup system with failure-triggered device switching honoring reservation of primary device |
US6046935A (en) * | 1996-03-18 | 2000-04-04 | Kabushiki Kaisha Toshiba | Semiconductor device and memory system |
US5903495A (en) * | 1996-03-18 | 1999-05-11 | Kabushiki Kaisha Toshiba | Semiconductor device and memory system |
US5768192A (en) * | 1996-07-23 | 1998-06-16 | Saifun Semiconductors, Ltd. | Non-volatile semiconductor memory cell utilizing asymmetrical charge trapping |
US5798968A (en) * | 1996-09-24 | 1998-08-25 | Sandisk Corporation | Plane decode/virtual sector architecture |
US5860124A (en) * | 1996-09-30 | 1999-01-12 | Intel Corporation | Method for performing a continuous over-write of a file in nonvolatile memory |
US5890192A (en) * | 1996-11-05 | 1999-03-30 | Sandisk Corporation | Concurrent write of multiple chunks of data into multiple subarrays of flash EEPROM |
US5933368A (en) * | 1996-11-25 | 1999-08-03 | Macronix International Co., Ltd. | Flash memory mass storage system |
US5930167A (en) * | 1997-07-30 | 1999-07-27 | Sandisk Corporation | Multi-state non-volatile flash memory capable of being its own two state write cache |
US6011725A (en) * | 1997-08-01 | 2000-01-04 | Saifun Semiconductors, Ltd. | Two bit non-volatile electrically erasable and programmable semiconductor memory cell utilizing asymmetrical charge trapping |
US5909449A (en) * | 1997-09-08 | 1999-06-01 | Invox Technology | Multibit-per-cell non-volatile memory with error detection and correction |
US5937425A (en) * | 1997-10-16 | 1999-08-10 | M-Systems Flash Disk Pioneers Ltd. | Flash file system optimized for page-mode flash technologies |
US6226728B1 (en) * | 1998-04-21 | 2001-05-01 | Intel Corporation | Dynamic allocation for efficient management of variable sized data within a nonvolatile memory |
US6826651B2 (en) * | 1998-05-29 | 2004-11-30 | International Business Machines Corporation | State-based allocation and replacement for improved hit ratio in directory caches |
US6591328B1 (en) * | 1998-07-28 | 2003-07-08 | Sony Corporation | Non-volatile memory storing address control table data formed of logical addresses and physical addresses |
US6401160B1 (en) * | 1999-03-31 | 2002-06-04 | Intel Corporation | Method and apparatus to permit adjustable code/data boundary in a nonvolatile memory |
US6141249A (en) * | 1999-04-01 | 2000-10-31 | Lexar Media, Inc. | Organization of blocks within a nonvolatile memory unit to effectively decrease sector write operation time |
US6034897A (en) * | 1999-04-01 | 2000-03-07 | Lexar Media, Inc. | Space management for managing high capacity nonvolatile memory |
US20010042882A1 (en) * | 1999-10-08 | 2001-11-22 | Yun Chang | Method for forming a flash memory cell having contoured floating gate surface |
US6544844B2 (en) * | 1999-10-08 | 2003-04-08 | Macronix International Co., Ltd. | Method for forming a flash memory cell having contoured floating gate surface |
US6377500B1 (en) * | 1999-11-11 | 2002-04-23 | Kabushiki Kaisha Toshiba | Memory system with a non-volatile memory, having address translating function |
US6446249B1 (en) * | 2000-05-11 | 2002-09-03 | Quickturn Design Systems, Inc. | Emulation circuit with a hold time algorithm, logic analyzer and shadow memory |
US6697957B1 (en) * | 2000-05-11 | 2004-02-24 | Quickturn Design Systems, Inc. | Emulation circuit with a hold time algorithm, logic analyzer and shadow memory |
US6571261B1 (en) * | 2000-07-13 | 2003-05-27 | International Business Machines Corporation | Defragmentation utility for a shared disk parallel file system across a storage area network |
US6567307B1 (en) * | 2000-07-21 | 2003-05-20 | Lexar Media, Inc. | Block management for mass storage |
US20020136054A1 (en) * | 2001-01-11 | 2002-09-26 | Blodgett Greg A. | Memory device with multi-level storage cells and apparatuses, systems and methods including same |
US6456528B1 (en) * | 2001-09-17 | 2002-09-24 | Sandisk Corporation | Selective operation of a multi-state non-volatile memory system in a binary mode |
US6898662B2 (en) * | 2001-09-28 | 2005-05-24 | Lexar Media, Inc. | Memory system sectors |
US6643170B2 (en) * | 2001-10-24 | 2003-11-04 | Macronix International Co., Ltd. | Method for operating a multi-level memory cell |
US20030076709A1 (en) * | 2001-10-24 | 2003-04-24 | Jen-Ren Huang | Method for operating a multi-level memory cell |
US20030110343A1 (en) * | 2001-12-11 | 2003-06-12 | Mitsubishi Denki Kabushiki Kaisha | File system including non-volatile semiconductor memory device having a plurality of banks |
US6871259B2 (en) * | 2001-12-11 | 2005-03-22 | Renesas Technology Corp. | File system including non-volatile semiconductor memory device having a plurality of banks |
US6895464B2 (en) * | 2002-06-03 | 2005-05-17 | Honeywell International Inc. | Flash memory management system and method utilizing multiple block list windows |
US7139864B2 (en) * | 2003-12-30 | 2006-11-21 | Sandisk Corporation | Non-volatile memory and method with block management system |
Cited By (254)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060067181A1 (en) * | 1999-05-08 | 2006-03-30 | Samsung Electronics Co., Ltd | Recording medium for storing linking type information and method of processing defective area using the same |
US7433993B2 (en) | 2003-12-30 | 2008-10-07 | Sandisk Corporation | Adaptive metablocks |
US20090019218A1 (en) * | 2003-12-30 | 2009-01-15 | Alan Welsh Sinclair | Non-Volatile Memory And Method With Non-Sequential Update Block Management |
US7913061B2 (en) | 2003-12-30 | 2011-03-22 | Sandisk Corporation | Non-volatile memory and method with memory planes alignment |
US8621177B2 (en) | 2003-12-30 | 2013-12-31 | Sandisk Corporation | Non-volatile memory and method with phased program failure handling |
US8051257B2 (en) | 2003-12-30 | 2011-11-01 | Sandisk Technologies Inc. | Non-volatile memory and method with control data management |
US7139864B2 (en) | 2003-12-30 | 2006-11-21 | Sandisk Corporation | Non-volatile memory and method with block management system |
US7945759B2 (en) | 2003-12-30 | 2011-05-17 | Sandisk Corporation | Non-volatile memory and method with phased program failure handling |
US20090037651A1 (en) * | 2003-12-30 | 2009-02-05 | Sergey Anatolievich Gorobets | Non-Volatile Memory and Method with Phased Program Failure Handling |
US20090019217A1 (en) * | 2003-12-30 | 2009-01-15 | Sergey Anatolievich Gorobets | Non-Volatile Memory And Method With Memory Planes Alignment |
US20110219174A1 (en) * | 2003-12-30 | 2011-09-08 | Sergey Anatolievich Gorobets | Non-Volatile Memory and Method with Phased Program Failure Handling |
US8103841B2 (en) | 2003-12-30 | 2012-01-24 | Sandisk Technologies Inc. | Non-volatile memory and method with non-sequential update block management |
US8239643B2 (en) | 2003-12-30 | 2012-08-07 | Sandisk Technologies Inc. | Non-volatile memory and method with control data management |
US7383375B2 (en) | 2003-12-30 | 2008-06-03 | Sandisk Corporation | Data run programming |
US20050144396A1 (en) * | 2003-12-31 | 2005-06-30 | Eschmann Michael K. | Coalescing disk write back requests |
US20080091872A1 (en) * | 2005-01-20 | 2008-04-17 | Bennett Alan D | Scheduling of Housekeeping Operations in Flash Memory Systems |
US7565478B2 (en) | 2005-01-20 | 2009-07-21 | Sandisk Corporation | Scheduling of housekeeping operations in flash memory systems |
US20090265508A1 (en) * | 2005-01-20 | 2009-10-22 | Alan David Bennett | Scheduling of Housekeeping Operations in Flash Memory Systems |
US8364883B2 (en) | 2005-01-20 | 2013-01-29 | Sandisk Technologies Inc. | Scheduling of housekeeping operations in flash memory systems |
US10055147B2 (en) | 2005-02-04 | 2018-08-21 | Sandisk Technologies Llc | Systems and methods for a mass data storage system having a file-based interface to a host and a non-file-based interface to secondary storage |
US9104315B2 (en) | 2005-02-04 | 2015-08-11 | Sandisk Technologies Inc. | Systems and methods for a mass data storage system having a file-based interface to a host and a non-file-based interface to secondary storage |
US10126959B2 (en) | 2005-02-04 | 2018-11-13 | Sandisk Technologies Llc | Systems and methods for a mass data storage system having a file-based interface to a host and a non-file-based interface to secondary storage |
US7984233B2 (en) | 2005-02-16 | 2011-07-19 | Sandisk Corporation | Direct data file storage implementation techniques in flash memories |
US20060184722A1 (en) * | 2005-02-16 | 2006-08-17 | Sinclair Alan W | Direct data file storage implementation techniques in flash memories |
US20060184723A1 (en) * | 2005-02-16 | 2006-08-17 | Sinclair Alan W | Direct file data programming and deletion in flash memories |
US8214583B2 (en) | 2005-02-16 | 2012-07-03 | Sandisk Technologies Inc. | Direct file data programming and deletion in flash memories |
US7877539B2 (en) | 2005-02-16 | 2011-01-25 | Sandisk Corporation | Direct data file storage in flash memories |
US20100223423A1 (en) * | 2005-02-16 | 2010-09-02 | Sinclair Alan W | Direct File Data Programming and Deletion in Flash Memories |
US20100217926A1 (en) * | 2005-02-16 | 2010-08-26 | Sinclair Alan W | Direct Data File Storage Implementation Techniques in Flash Memories |
US20060184718A1 (en) * | 2005-02-16 | 2006-08-17 | Sinclair Alan W | Direct file data programming and deletion in flash memories |
US20060184719A1 (en) * | 2005-02-16 | 2006-08-17 | Sinclair Alan W | Direct data file storage implementation techniques in flash memories |
USRE46446E1 (en) * | 2005-05-09 | 2017-06-20 | Sandisk Technologies Llc | Method and system for facilitating fast wake-up of a flash memory system |
US8055832B2 (en) | 2005-08-03 | 2011-11-08 | SanDisk Technologies, Inc. | Management of memory blocks that directly store data files |
US20070033327A1 (en) * | 2005-08-03 | 2007-02-08 | Sinclair Alan W | Enhanced host interface |
US7581057B2 (en) | 2005-08-03 | 2009-08-25 | Sandisk Corporation | Memory system with management of memory blocks that directly store data files |
US20070033324A1 (en) * | 2005-08-03 | 2007-02-08 | Sinclair Alan W | Scheduling of reclaim operations in non-volatile memory |
US20070033374A1 (en) * | 2005-08-03 | 2007-02-08 | Sinclair Alan W | Reprogrammable Non-Volatile Memory Systems With Indexing of Directly Stored Data Files |
US20070033326A1 (en) * | 2005-08-03 | 2007-02-08 | Sinclair Alan W | Enhanced host interfacing methods |
US20070186032A1 (en) * | 2005-08-03 | 2007-08-09 | Sinclair Alan W | Flash Memory Systems With Direct Data File Storage Utilizing Data Consolidation and Garbage Collection |
US20070033325A1 (en) * | 2005-08-03 | 2007-02-08 | Sinclair Alan W | Non-volatile memory with scheduled reclaim operations |
US7558905B2 (en) | 2005-08-03 | 2009-07-07 | Sandisk Corporation | Reclaiming data storage capacity in flash memory systems |
US7480766B2 (en) | 2005-08-03 | 2009-01-20 | Sandisk Corporation | Interfacing systems operating through a logical address space and on a direct data file basis |
US20070030734A1 (en) * | 2005-08-03 | 2007-02-08 | Sinclair Alan W | Reclaiming Data Storage Capacity in Flash Memories |
US8291151B2 (en) | 2005-08-03 | 2012-10-16 | Sandisk Technologies Inc. | Enhanced host interface |
US20070033373A1 (en) * | 2005-08-03 | 2007-02-08 | Sinclair Alan W | Method and system for dual mode access for storage devices |
US7558906B2 (en) | 2005-08-03 | 2009-07-07 | Sandisk Corporation | Methods of managing blocks in nonvolatile memory |
US7450420B2 (en) | 2005-08-03 | 2008-11-11 | Sandisk Corporation | Reclaiming data storage capacity in flash memories |
US7552271B2 (en) | 2005-08-03 | 2009-06-23 | Sandisk Corporation | Nonvolatile memory with block management |
US7669003B2 (en) | 2005-08-03 | 2010-02-23 | Sandisk Corporation | Reprogrammable non-volatile memory systems with indexing of directly stored data files |
US20070033376A1 (en) * | 2005-08-03 | 2007-02-08 | Sinclair Alan W | Data Consolidation and Garbage Collection in Direct Data File Storage Memories |
US20070033330A1 (en) * | 2005-08-03 | 2007-02-08 | Sinclair Alan W | Reclaiming Data Storage Capacity in Flash Memory Systems |
US20070033375A1 (en) * | 2005-08-03 | 2007-02-08 | Sinclair Alan W | Indexing of File Data in Reprogrammable Non-Volatile Memories That Directly Store Data Files |
US7984084B2 (en) | 2005-08-03 | 2011-07-19 | SanDisk Technologies, Inc. | Non-volatile memory with scheduled reclaim operations |
US7409489B2 (en) | 2005-08-03 | 2008-08-05 | Sandisk Corporation | Scheduling of reclaim operations in non-volatile memory |
US7949845B2 (en) | 2005-08-03 | 2011-05-24 | Sandisk Corporation | Indexing of file data in reprogrammable non-volatile memories that directly store data files |
US20070084375A1 (en) * | 2005-08-10 | 2007-04-19 | Smith Kyle S | High density cartridge and method for reloading |
US7814262B2 (en) | 2005-10-13 | 2010-10-12 | Sandisk Corporation | Memory system storing transformed units of data in fixed sized storage blocks |
US7529905B2 (en) | 2005-10-13 | 2009-05-05 | Sandisk Corporation | Method of storing transformed units of data in a memory system having fixed sized storage blocks |
US20070086260A1 (en) * | 2005-10-13 | 2007-04-19 | Sinclair Alan W | Method of storing transformed units of data in a memory system having fixed sized storage blocks |
US20070101095A1 (en) * | 2005-10-27 | 2007-05-03 | Sandisk Corporation | Methods for adaptively handling data writes in non-volatile memories |
US7509471B2 (en) | 2005-10-27 | 2009-03-24 | Sandisk Corporation | Methods for adaptively handling data writes in non-volatile memories |
US7631162B2 (en) | 2005-10-27 | 2009-12-08 | Sandisk Corporation | Non-volatile memory with adaptive handling of data writes |
EP1895418A4 (en) * | 2005-12-09 | 2008-08-27 | Matsushita Electric Ind Co Ltd | Nonvolatile memory device, method of writing data, and method of reading out data |
US20090049229A1 (en) * | 2005-12-09 | 2009-02-19 | Matsushita Electric Industrial Co., Ltd. | Nonvolatile memory device, method of writing data, and method of reading out data |
US8307149B2 (en) | 2005-12-09 | 2012-11-06 | Panasonic Corporation | Nonvolatile memory device including a logical-to-physical address conversion table, a temporary block and a temporary table |
EP1895418A1 (en) * | 2005-12-09 | 2008-03-05 | Matsushita Electric Industrial Co., Ltd. | Nonvolatile memory device, method of writing data, and method of reading out data |
US7877540B2 (en) | 2005-12-13 | 2011-01-25 | Sandisk Corporation | Logically-addressed file storage methods |
US20070136555A1 (en) * | 2005-12-13 | 2007-06-14 | Sinclair Alan W | Logically-addressed file storage methods |
US20080253199A1 (en) * | 2005-12-15 | 2008-10-16 | Stec, Inc. | Parallel data storage system |
US7894288B2 (en) | 2005-12-15 | 2011-02-22 | Stec, Inc. | Parallel data storage system |
US20100023714A1 (en) * | 2005-12-15 | 2010-01-28 | Stec, Inc. | Parallel data storage system |
US20070140020A1 (en) * | 2005-12-15 | 2007-06-21 | Simpletech, Inc. | Parallel data storage system |
US7376034B2 (en) * | 2005-12-15 | 2008-05-20 | Stec, Inc. | Parallel data storage system |
US7619941B2 (en) | 2005-12-15 | 2009-11-17 | Stec, Inc. | Parallel data storage system |
US20070143532A1 (en) * | 2005-12-21 | 2007-06-21 | Gorobets Sergey A | Method and system for accessing non-volatile storage devices |
US20090182791A1 (en) * | 2005-12-21 | 2009-07-16 | Sergey Anatolievich Gorobets | Non-Volatile Memories And Method With Adaptive File Handling In A Directly Mapped File Storage System |
US20090210614A1 (en) * | 2005-12-21 | 2009-08-20 | Sergey Anatolievich Gorobets | Non-Volatile Memories With Versions of File Data Identified By Identical File ID and File Offset Stored in Identical Location Within a Memory Page |
US20070156998A1 (en) * | 2005-12-21 | 2007-07-05 | Gorobets Sergey A | Methods for memory allocation in non-volatile memories with a directly mapped file storage system |
US7793068B2 (en) | 2005-12-21 | 2010-09-07 | Sandisk Corporation | Dual mode access for non-volatile storage devices |
WO2007073536A2 (en) * | 2005-12-21 | 2007-06-28 | Sandisk Corporation | Non-volatile memories and methods with memory allocation for a directly mapped file storage system |
WO2007073536A3 (en) * | 2005-12-21 | 2008-07-17 | Sandisk Corp | Non-volatile memories and methods with memory allocation for a directly mapped file storage system |
US7769978B2 (en) | 2005-12-21 | 2010-08-03 | Sandisk Corporation | Method and system for accessing non-volatile storage devices |
US7747837B2 (en) | 2005-12-21 | 2010-06-29 | Sandisk Corporation | Method and system for accessing non-volatile storage devices |
US20070143570A1 (en) * | 2005-12-21 | 2007-06-21 | Gorobets Sergey A | Method and system for accessing non-volatile storage devices |
US20100146197A1 (en) * | 2005-12-21 | 2010-06-10 | Sergey Anatolievich Gorobets | Non-Volatile Memory And Method With Memory Allocation For A Directly Mapped File Storage System |
US20070143561A1 (en) * | 2005-12-21 | 2007-06-21 | Gorobets Sergey A | Methods for adaptive file data handling in non-volatile memories with a directly mapped file storage system |
US8209516B2 (en) | 2005-12-21 | 2012-06-26 | Sandisk Technologies Inc. | Method and system for dual mode access for storage devices |
US20070143378A1 (en) * | 2005-12-21 | 2007-06-21 | Gorobets Sergey A | Non-volatile memories with adaptive file handling in a directly mapped file storage system |
US7444461B2 (en) | 2006-08-04 | 2008-10-28 | Sandisk Corporation | Methods for phased garbage collection |
US20080034174A1 (en) * | 2006-08-04 | 2008-02-07 | Shai Traister | Non-volatile memory storage systems for phased garbage collection |
US20080034175A1 (en) * | 2006-08-04 | 2008-02-07 | Shai Traister | Methods for phased garbage collection |
US7451265B2 (en) | 2006-08-04 | 2008-11-11 | Sandisk Corporation | Non-volatile memory storage systems for phased garbage collection |
US20080071970A1 (en) * | 2006-09-15 | 2008-03-20 | Lin Jason T | Non-Volatile Memory With Class-Based Update Block Replacement Rules |
US7774392B2 (en) * | 2006-09-15 | 2010-08-10 | Sandisk Corporation | Non-volatile memory with management of a pool of update memory blocks based on each block's activity and data order |
US7779056B2 (en) * | 2006-09-15 | 2010-08-17 | Sandisk Corporation | Managing a pool of update memory blocks based on each block's activity and data order |
US20080071969A1 (en) * | 2006-09-15 | 2008-03-20 | Lin Jason T | Method for Class-Based Update Block Replacement Rules in Non-Volatile Memory |
US7444462B2 (en) | 2006-09-28 | 2008-10-28 | Sandisk Corporation | Methods for phased garbage collection using phased garbage collection block or scratch pad block as a buffer |
WO2008042596A2 (en) * | 2006-09-28 | 2008-04-10 | Sandisk Corporation | Memory systems for phased garbage collection using phased garbage collection block or scratch pad block as a buffer |
US7441071B2 (en) | 2006-09-28 | 2008-10-21 | Sandisk Corporation | Memory systems for phased garbage collection using phased garbage collection block or scratch pad block as a buffer |
WO2008042596A3 (en) * | 2006-09-28 | 2008-07-31 | Sandisk Corp | Memory systems for phased garbage collection using phased garbage collection block or scratch pad block as a buffer |
US20080086619A1 (en) * | 2006-09-28 | 2008-04-10 | Shai Traister | Methods for phased garbage collection using phased garbage collection block or scratch pad block as a buffer |
WO2008042592A3 (en) * | 2006-09-29 | 2008-06-12 | Sandisk Corp | Phased garbage collection |
US20080082596A1 (en) * | 2006-09-29 | 2008-04-03 | Sergey Anatolievich Gorobets | Method for phased garbage collection |
US7444463B2 (en) | 2006-09-29 | 2008-10-28 | Sandisk Corporation | System for phased garbage collection with state indicators |
WO2008042592A2 (en) * | 2006-09-29 | 2008-04-10 | Sandisk Corporation | Phased garbage collection |
US7464216B2 (en) | 2006-09-29 | 2008-12-09 | Sandisk Corporation | Method for phased garbage collection with state indicators |
US20080082775A1 (en) * | 2006-09-29 | 2008-04-03 | Sergey Anatolievich Gorobets | System for phased garbage collection |
JP2010507147A (en) * | 2006-10-12 | 2010-03-04 | Sandisk Corporation | Nonvolatile memory with data management in the worst case and method therefor |
US20080091871A1 (en) * | 2006-10-12 | 2008-04-17 | Alan David Bennett | Non-volatile memory with worst-case control data management |
WO2008045839A1 (en) * | 2006-10-12 | 2008-04-17 | Sandisk Corporation | Non-volatile memory with worst-case control data management and methods therefor |
US20080120488A1 (en) * | 2006-11-20 | 2008-05-22 | Samsung Electronics Co., Ltd. | Apparatus and method of managing nonvolatile memory |
EP1923792A1 (en) | 2006-11-20 | 2008-05-21 | Samsung Electronics Co., Ltd | Apparatus and method of managing nonvolatile memory |
US20080183953A1 (en) * | 2006-12-06 | 2008-07-31 | David Flynn | Apparatus, system, and method for storage space recovery in solid-state storage |
US8402201B2 (en) | 2006-12-06 | 2013-03-19 | Fusion-Io, Inc. | Apparatus, system, and method for storage space recovery in solid-state storage |
US11960412B2 (en) | 2006-12-06 | 2024-04-16 | Unification Technologies Llc | Systems and methods for identifying storage resources that are not in use |
US11847066B2 (en) | 2006-12-06 | 2023-12-19 | Unification Technologies Llc | Apparatus, system, and method for managing commands of solid-state storage using bank interleave |
US11640359B2 (en) | 2006-12-06 | 2023-05-02 | Unification Technologies Llc | Systems and methods for identifying storage resources that are not in use |
US9734086B2 (en) | 2006-12-06 | 2017-08-15 | Sandisk Technologies Llc | Apparatus, system, and method for a device shared between multiple independent hosts |
US20090125671A1 (en) * | 2006-12-06 | 2009-05-14 | David Flynn | Apparatus, system, and method for storage space recovery after reaching a read count limit |
US9116823B2 (en) | 2006-12-06 | 2015-08-25 | Intelligent Intellectual Property Holdings 2 Llc | Systems and methods for adaptive error-correction coding |
US9495241B2 (en) | 2006-12-06 | 2016-11-15 | Longitude Enterprise Flash S.A.R.L. | Systems and methods for adaptive data storage |
US8074011B2 (en) | 2006-12-06 | 2011-12-06 | Fusion-Io, Inc. | Apparatus, system, and method for storage space recovery after reaching a read count limit |
US11573909B2 (en) | 2006-12-06 | 2023-02-07 | Unification Technologies Llc | Apparatus, system, and method for managing commands of solid-state storage using bank interleave |
US8209461B2 (en) | 2006-12-26 | 2012-06-26 | Sandisk Technologies Inc. | Configuration of host LBA interface with flash memory |
US20080155227A1 (en) * | 2006-12-26 | 2008-06-26 | Sinclair Alan W | Managing a LBA Interface in a Direct Data File Memory System |
US7917686B2 (en) | 2006-12-26 | 2011-03-29 | Sandisk Corporation | Host system with direct data file interface configurability |
US7739444B2 (en) | 2006-12-26 | 2010-06-15 | Sandisk Corporation | System using a direct data file system with a continuous logical address space interface |
US8166267B2 (en) | 2006-12-26 | 2012-04-24 | Sandisk Technologies Inc. | Managing a LBA interface in a direct data file memory system |
US20080155175A1 (en) * | 2006-12-26 | 2008-06-26 | Sinclair Alan W | Host System That Manages a LBA Interface With Flash Memory |
US8046522B2 (en) | 2006-12-26 | 2011-10-25 | SanDisk Technologies, Inc. | Use of a direct data file system with a continuous logical address space interface and control of file address storage in logical blocks |
US20080155178A1 (en) * | 2006-12-26 | 2008-06-26 | Sinclair Alan W | Use of a Direct Data File System With a Continuous Logical Address Space Interface |
WO2008082950A1 (en) * | 2006-12-28 | 2008-07-10 | Sandisk Corporation | System for block relinking |
US20080162787A1 (en) * | 2006-12-28 | 2008-07-03 | Andrew Tomlin | System for block relinking |
US20080162612A1 (en) * | 2006-12-28 | 2008-07-03 | Andrew Tomlin | Method for block relinking |
US20080222350A1 (en) * | 2007-03-09 | 2008-09-11 | Shih-Chieh Chang | Flash memory device for storing data and method thereof |
US20080294814A1 (en) * | 2007-05-24 | 2008-11-27 | Sergey Anatolievich Gorobets | Flash Memory System with Management of Housekeeping Operations |
WO2008147752A1 (en) * | 2007-05-24 | 2008-12-04 | Sandisk Corporation | Managing housekeeping operations in flash memory |
US8239639B2 (en) | 2007-06-08 | 2012-08-07 | Sandisk Technologies Inc. | Method and apparatus for providing data type and host file information to a mass storage system |
US20080307158A1 (en) * | 2007-06-08 | 2008-12-11 | Sinclair Alan W | Method and apparatus for providing data type and host file information to a mass storage system |
US20080307156A1 (en) * | 2007-06-08 | 2008-12-11 | Sinclair Alan W | System For Interfacing A Host Operating Through A Logical Address Space With A Direct File Storage Medium |
US8713283B2 (en) * | 2007-06-08 | 2014-04-29 | Sandisk Technologies Inc. | Method of interfacing a host operating through a logical address space with a direct file storage medium |
US20080307155A1 (en) * | 2007-06-08 | 2008-12-11 | Sinclair Alan W | Method of Interfacing A Host Operating Through A Logical Address Space With A Direct File Storage Medium |
US8352674B2 (en) * | 2007-06-13 | 2013-01-08 | Kabushiki Kaisha Toshiba | Semiconductor memory information storage apparatus and method of controlling writing |
US20100115194A1 (en) * | 2007-06-13 | 2010-05-06 | Toshio Suzuki | Semiconductor memory information storage apparatus and method of controlling writing |
US20090070520A1 (en) * | 2007-09-06 | 2009-03-12 | Nagamasa Mizushima | Semiconductor storage device and method of controlling semiconductor storage device |
US8312203B2 (en) * | 2007-09-06 | 2012-11-13 | Hitachi, Ltd. | Semiconductor storage device and method of controlling a semiconductor storage device by allocating a physical block composed of plural pages to a group of logical addresses |
US7818493B2 (en) | 2007-09-07 | 2010-10-19 | Sandisk Corporation | Adaptive block list management |
US20090070518A1 (en) * | 2007-09-07 | 2009-03-12 | Shai Traister | Adaptive Block List Management |
US20090089482A1 (en) * | 2007-09-28 | 2009-04-02 | Shai Traister | Dynamic metablocks |
US8566504B2 (en) | 2007-09-28 | 2013-10-22 | Sandisk Technologies Inc. | Dynamic metablocks |
US9201790B2 (en) * | 2007-10-09 | 2015-12-01 | Seagate Technology Llc | System and method of matching data rates |
US20090094389A1 (en) * | 2007-10-09 | 2009-04-09 | Seagate Technology, Llc | System and method of matching data rates |
EP2063428B1 (en) * | 2007-11-26 | 2018-12-26 | Kabushiki Kaisha Toshiba | Content data recording apparatus and method |
US9170754B2 (en) | 2007-12-06 | 2015-10-27 | Intelligent Intellectual Property Holdings 2 Llc | Apparatus, system, and method for coordinating storage requests in a multi-processor/multi-thread environment |
US8195912B2 (en) | 2007-12-06 | 2012-06-05 | Fusion-io, Inc | Apparatus, system, and method for efficient mapping of virtual and physical addresses |
US20090150641A1 (en) * | 2007-12-06 | 2009-06-11 | David Flynn | Apparatus, system, and method for efficient mapping of virtual and physical addresses |
US9600184B2 (en) | 2007-12-06 | 2017-03-21 | Sandisk Technologies Llc | Apparatus, system, and method for coordinating storage requests in a multi-processor/multi-thread environment |
US20090164705A1 (en) * | 2007-12-21 | 2009-06-25 | Gorobets Sergey A | System and Method for Implementing Extensions to Intelligently Manage Resources of a Mass Storage System |
US20090198895A1 (en) * | 2008-02-05 | 2009-08-06 | Via Technologies, Inc. | Control method, memory, and processing system utilizing the same |
US8700844B2 (en) | 2008-02-05 | 2014-04-15 | Via Technologies, Inc. | Control method, memory, and processing system utilizing the same |
US8499115B2 (en) * | 2008-02-05 | 2013-07-30 | Via Technologies, Inc. | Control method, memory, and processing system utilizing the same |
US20090210611A1 (en) * | 2008-02-20 | 2009-08-20 | Nagamasa Mizushima | Storage system and data write method |
US8661191B2 (en) | 2008-03-01 | 2014-02-25 | Kabushiki Kaisha Toshiba | Memory system |
EP2260391A1 (en) * | 2008-03-01 | 2010-12-15 | Kabushiki Kaisha Toshiba | Memory system |
EP2260391A4 (en) * | 2008-03-01 | 2012-11-14 | Toshiba Kk | Memory system |
EP2359252A4 (en) * | 2008-12-17 | 2013-02-27 | Netapp Inc | Storage system snapshot assisted by ssd technology |
EP2359252A1 (en) * | 2008-12-17 | 2011-08-24 | LSI Corporation | Storage system snapshot assisted by ssd technology |
US8656084B2 (en) | 2009-02-27 | 2014-02-18 | Samsung Electronics Co., Ltd. | User device including flash memory storing index and index accessing method thereof |
US20100223421A1 (en) * | 2009-02-27 | 2010-09-02 | Samsung Electronics Co., Ltd. | User device including flash memory storing index and index accessing method thereof |
US8832353B2 (en) | 2009-04-07 | 2014-09-09 | Sandisk Technologies Inc. | Host stop-transmission handling |
US20100257308A1 (en) * | 2009-04-07 | 2010-10-07 | Sandisk Corporation | Host stop-transmission handling |
TWI417884B (en) * | 2009-04-23 | 2013-12-01 | Phison Electronics Corp | Data accessing method for flash memory and storage system and controller using the same |
US8307241B2 (en) | 2009-06-16 | 2012-11-06 | Sandisk Technologies Inc. | Data recovery in multi-level cell nonvolatile memory |
US20100318839A1 (en) * | 2009-06-16 | 2010-12-16 | Sandisk Corporation | Data recovery in multi-level cell nonvolatile memory |
US20100318721A1 (en) * | 2009-06-16 | 2010-12-16 | Sandisk Corporation | Program failure handling in nonvolatile memory |
US8132045B2 (en) | 2009-06-16 | 2012-03-06 | SanDisk Technologies, Inc. | Program failure handling in nonvolatile memory |
US20110238886A1 (en) * | 2010-03-23 | 2011-09-29 | Apple Inc. | Garbage collection schemes for index block |
US8812816B2 (en) * | 2010-03-23 | 2014-08-19 | Apple Inc. | Garbage collection schemes for index block |
US8762703B2 (en) | 2010-04-16 | 2014-06-24 | Micron Technology, Inc. | Boot partitions in memory devices and systems |
US9342371B2 (en) | 2010-04-16 | 2016-05-17 | Micron Technology, Inc. | Boot partitions in memory devices and systems |
US20120005451A1 (en) * | 2010-07-02 | 2012-01-05 | Samsung Electronics Co., Ltd | Data storage device and bad block managing method thereof |
US20120166718A1 (en) * | 2010-12-28 | 2012-06-28 | Silicon Motion, Inc. | Flash Storage Device and Data Writing Method Thereof |
US8966159B2 (en) * | 2010-12-28 | 2015-02-24 | Silicon Motion, Inc. | Flash storage device and data writing method thereof |
US10049040B2 (en) | 2011-01-21 | 2018-08-14 | Seagate Technology Llc | Just in time garbage collection |
US9817755B2 (en) | 2011-01-21 | 2017-11-14 | Seagate Technology Llc | Garbage collection management in memories |
US8874872B2 (en) | 2011-01-21 | 2014-10-28 | Seagate Technology Llc | Garbage collection management in memories |
US9021215B2 (en) | 2011-03-21 | 2015-04-28 | Apple Inc. | Storage system exporting internal storage rules |
US20120284469A1 (en) * | 2011-05-03 | 2012-11-08 | Seo Dong-Young | Memory system and bad block management method |
US9501225B2 (en) * | 2011-08-10 | 2016-11-22 | Alibaba Group Holding Limited | Shrinking virtual hard disk image |
US10331349B2 (en) | 2011-08-10 | 2019-06-25 | Alibaba Group Holding Limited | Shrinking virtual hard disk image |
US20130041927A1 (en) * | 2011-08-10 | 2013-02-14 | Alibaba Group Holding Limited | Shrinking Virtual Hard Disk Image |
CN102508788A (en) * | 2011-09-28 | 2012-06-20 | Chengdu Huawei Symantec Technologies Co., Ltd. | SSD (solid state drive) and SSD garbage collection method and device |
US9390801B2 (en) | 2011-11-07 | 2016-07-12 | Stmicroelectronics (Rousset) Sas | Secure memory which reduces degradation of data |
US8964471B2 (en) | 2011-11-07 | 2015-02-24 | Stmicroelectronics (Rousset) Sas | Secure memory which reduces degradation of data |
FR2982406A1 (en) * | 2011-11-07 | 2013-05-10 | St Microelectronics Rousset | Secure memory that prevents degradation of data |
CN103946805A (en) * | 2011-11-21 | 2014-07-23 | Sandisk Technologies Inc. | Scrub techniques for use with dynamic read |
US20140059273A1 (en) * | 2012-08-24 | 2014-02-27 | Kabushiki Kaisha Toshiba | Host apparatus and memory device |
US20140089566A1 (en) * | 2012-09-25 | 2014-03-27 | Phison Electronics Corp. | Data storing method, and memory controller and memory storage apparatus using the same |
US10482009B1 (en) * | 2013-03-15 | 2019-11-19 | Google Llc | Use of a logical-to-logical translation map and a logical-to-physical translation map to access a data storage device |
US10942849B2 (en) | 2013-03-15 | 2021-03-09 | Google Llc | Use of a logical-to-logical translation map and a logical-to-physical translation map to access a data storage device |
US9037902B2 (en) | 2013-03-15 | 2015-05-19 | Sandisk Technologies Inc. | Flash memory techniques for recovering from write interrupt resulting from voltage fault |
US20160134683A1 (en) * | 2013-05-31 | 2016-05-12 | Koninklijke Philips N.V. | System and method for automatically uploading, downloading, and updating data such as sleep study data |
US9906583B2 (en) * | 2013-05-31 | 2018-02-27 | Koninklijke Philips N.V. | System and method for automatically uploading, downloading, and updating data such as sleep study data |
US9361236B2 (en) | 2013-06-18 | 2016-06-07 | Arm Limited | Handling write requests for a data array |
WO2014209984A1 (en) * | 2013-06-25 | 2014-12-31 | Marvell World Trade Ltd. | Adaptive cache memory controller |
US9645920B2 (en) | 2013-06-25 | 2017-05-09 | Marvell World Trade Ltd. | Adaptive cache memory controller |
US9455048B2 (en) | 2013-06-28 | 2016-09-27 | Sandisk Technologies Llc | NAND flash word line management using multiple fragment pools |
US10156990B2 (en) | 2013-08-16 | 2018-12-18 | Micron Technology, Inc. | Data storage management |
US10007428B2 (en) * | 2013-08-16 | 2018-06-26 | Micron Technology, Inc. | Data storage management |
US10387039B2 (en) | 2013-08-16 | 2019-08-20 | Micron Technology, Inc. | Data storage management |
US20150052300A1 (en) * | 2013-08-16 | 2015-02-19 | Micron Technology, Inc. | Data storage management |
US10853193B2 (en) * | 2013-09-04 | 2020-12-01 | Amazon Technologies, Inc. | Database system recovery using non-volatile system memory |
US20170286477A1 (en) * | 2013-09-04 | 2017-10-05 | Amazon Technologies, Inc. | Database system recovery using non-volatile system memory |
US9971514B2 (en) * | 2013-11-21 | 2018-05-15 | Sandisk Technologies Llc | Dynamic logical groups for mapping flash memory |
US20150143029A1 (en) * | 2013-11-21 | 2015-05-21 | Sandisk Technologies Inc. | Dynamic logical groups for mapping flash memory |
JP2017502376A (en) * | 2013-12-26 | 2017-01-19 | インテル・コーポレーション | Management of transfer buffer for non-volatile memory |
WO2015099922A1 (en) * | 2013-12-26 | 2015-07-02 | Intel Corporation | Managing a transfer buffer for a non-volatile memory |
EP3092570A4 (en) * | 2014-01-10 | 2017-09-13 | Samsung Electronics Co., Ltd. | Method for processing data in storage device and storage device |
US9965183B2 (en) | 2014-01-10 | 2018-05-08 | Samsung Electronics Co., Ltd. | Method for processing data in storage device and storage device |
US10114562B2 (en) | 2014-09-16 | 2018-10-30 | Sandisk Technologies Llc | Adaptive block allocation in nonvolatile memory |
US9891833B2 (en) * | 2015-10-22 | 2018-02-13 | HoneycombData Inc. | Eliminating garbage collection in nand flash devices |
CN106610900A (en) * | 2015-10-22 | 2017-05-03 | HoneycombData Inc. | Eliminating garbage collection in NAND flash devices |
CN105302501A (en) * | 2015-11-27 | 2016-02-03 | Zhejiang Uniview Technologies Co., Ltd. | Control method and device for disk sectors |
US9983829B2 (en) * | 2016-01-13 | 2018-05-29 | Sandisk Technologies Llc | Physical addressing schemes for non-volatile memory systems employing multi-die interleave schemes |
US9817593B1 (en) | 2016-07-11 | 2017-11-14 | Sandisk Technologies Llc | Block management in non-volatile memory system with non-blocking control sync system |
US10255178B2 (en) * | 2016-09-06 | 2019-04-09 | Toshiba Memory Corporation | Storage device that maintains a plurality of layers of address mapping |
US10628303B2 (en) | 2016-09-06 | 2020-04-21 | Toshiba Memory Corporation | Storage device that maintains a plurality of layers of address mapping |
US10642523B2 (en) * | 2016-09-07 | 2020-05-05 | Boe Technology Group Co., Ltd. | Method and apparatus for updating data in a memory for electrical compensation |
US20180293016A1 (en) * | 2016-09-07 | 2018-10-11 | Boe Technology Group Co., Ltd. | Method and apparatus for updating data in a memory for electrical compensation |
CN110326045A (en) * | 2017-02-22 | 2019-10-11 | Micron Technology, Inc. | Device and method for operating in memory |
US10079612B1 (en) | 2017-03-15 | 2018-09-18 | Alibaba Group Holding Limited | Distributed erasure coding pool deployed in hyperscale infrastructure |
US20180275875A1 (en) * | 2017-03-23 | 2018-09-27 | Toshiba Memory Corporation | Memory system |
US10949090B2 (en) * | 2017-03-23 | 2021-03-16 | Toshiba Memory Corporation | Memory system that stores data designated by a deletion request in nonvolatile memory |
US20180341411A1 (en) * | 2017-05-24 | 2018-11-29 | Cisco Technology, Inc. | Intelligent layout of composite data structures in tiered storage |
US10922287B2 (en) * | 2017-05-24 | 2021-02-16 | Cisco Technology, Inc. | Intelligent layout of composite data structures in tiered storage |
US10678768B2 (en) * | 2017-06-30 | 2020-06-09 | Intel Corporation | Logical band-based key-value storage structure |
US20190005079A1 (en) * | 2017-06-30 | 2019-01-03 | Intel Corporation | Logical band-based key-value storage structure |
TWI651650B (en) * | 2018-02-05 | 2019-02-21 | Shenzhen EpoStar Electronics Ltd. | Memory management method and storage controller using the same |
US11342028B2 (en) | 2018-06-29 | 2022-05-24 | Sandisk Technologies Llc | Concurrent programming of multiple cells for non-volatile memory devices |
US11545221B2 (en) | 2018-06-29 | 2023-01-03 | Sandisk Technologies Llc | Concurrent programming of multiple cells for non-volatile memory devices |
US10978156B2 (en) * | 2018-06-29 | 2021-04-13 | Sandisk Technologies Llc | Concurrent programming of multiple cells for non-volatile memory devices |
US10852965B2 (en) * | 2018-10-30 | 2020-12-01 | EMC IP Holding Company LLC | Write folding mechanism using reusable shared striping in a storage system |
US20200133526A1 (en) * | 2018-10-30 | 2020-04-30 | EMC IP Holding Company LLC | Write folding mechanism using reusable shared striping in a storage system |
US11705216B2 (en) | 2019-08-28 | 2023-07-18 | Micron Technology, Inc. | Data redirection upon failure of a program operation |
US11158396B2 (en) | 2019-08-28 | 2021-10-26 | Micron Technology, Inc. | Data redirection upon failure of a program operation |
US10854311B1 (en) * | 2019-08-28 | 2020-12-01 | Micron Technology, Inc. | Data redirection upon failure of a program operation |
US11436153B2 (en) * | 2020-05-26 | 2022-09-06 | Western Digital Technologies, Inc. | Moving change log tables to align to zones |
US11467744B2 (en) | 2020-05-27 | 2022-10-11 | Western Digital Technologies, Inc. | System to identify aggressor blocks causing back to back erase failure |
US11152071B1 (en) * | 2020-05-27 | 2021-10-19 | Western Digital Technologies, Inc. | Erase operation reattempt to recover misidentified bad blocks resulting from consecutive erase failures |
WO2023034535A1 (en) * | 2021-09-01 | 2023-03-09 | Micron Technology, Inc. | Virtual management unit scheme for two-pass programming in a memory sub-system |
US11922011B2 (en) | 2021-09-01 | 2024-03-05 | Micron Technology, Inc. | Virtual management unit scheme for two-pass programming in a memory sub-system |
FR3129763A1 (en) * | 2021-12-01 | 2023-06-02 | Stmicroelectronics (Rousset) Sas | System-on-chip with non-volatile memory |
US12124713B2 (en) | 2021-12-01 | 2024-10-22 | Stmicroelectronics S.R.L. | System-on-chip comprising a non-volatile memory |
US20240241657A1 (en) * | 2023-01-12 | 2024-07-18 | SK Hynix Inc. | Storage device and method of operating the same |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7945759B2 (en) | | Non-volatile memory and method with phased program failure handling |
EP1758027B1 (en) | | Non-volatile memory and method with control data management |
US20080091871A1 (en) | | Non-volatile memory with worst-case control data management |
US7779056B2 (en) | | Managing a pool of update memory blocks based on each block's activity and data order |
US20080091901A1 (en) | | Method for non-volatile memory with worst-case control data management |
WO2005066793A2 (en) | | Non-volatile memory and method with non-sequential update block management |
WO2005066792A2 (en) | | Non-volatile memory and method with memory planes alignment |
WO2008045839A1 (en) | | Non-volatile memory with worst-case control data management and methods therefor |
EP1704479B1 (en) | | Non-volatile memory and method with phased program failure handling |
JP2007519996A6 (en) | | Nonvolatile memory and method with phased program fault handling |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | Owner name: SANDISK CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SINCLAIR, ALAN WELSH;GOROBETS, SERGEY ANATOLIEVICH;BENNETT, ALAN DAVID;AND OTHERS;REEL/FRAME:015468/0832 Effective date: 20041210 |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
AS | Assignment | Owner name: SANDISK TECHNOLOGIES INC., TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SANDISK CORPORATION;REEL/FRAME:038438/0904 Effective date: 20160324 |
AS | Assignment | Owner name: SANDISK TECHNOLOGIES LLC, TEXAS Free format text: CHANGE OF NAME;ASSIGNOR:SANDISK TECHNOLOGIES INC;REEL/FRAME:038807/0980 Effective date: 20160516 |