
CN106776361B - Caching method and system for large-scale nonvolatile storage medium - Google Patents


Info

Publication number
CN106776361B
CN106776361B
Authority
CN
China
Prior art keywords
block
mapping item
item
logical address
mapping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710141173.9A
Other languages
Chinese (zh)
Other versions
CN106776361A (en)
Inventor
孙辉
陈国栋
施巍松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University
Original Assignee
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN201710141173.9A priority Critical patent/CN106776361B/en
Publication of CN106776361A publication Critical patent/CN106776361A/en
Application granted granted Critical
Publication of CN106776361B publication Critical patent/CN106776361B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0893Caches characterised by their organisation or structure
    • G06F12/0897Caches characterised by their organisation or structure with two or more cache hierarchy levels

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a caching method and system for a large-scale nonvolatile storage medium, comprising the following steps: a command request is obtained; when the command request is a write command request, the target mapping item is stored into the H-Cache according to its logical-address prefix; when the remaining space in the H-Cache is smaller than A, the mapping item that has not been accessed for the longest time in the H-Cache is migrated to the C-Cache; when the remaining space in the C-Cache is smaller than B, the Blocks in the C-Cache are sorted by access count and the mapping-item flag bits in the Block with the fewest accesses are checked: if a flag bit is 0, the mapping item is written into the Mapping area and its flag bit is set to 1; if the flag bit is 1, the mapping item is deleted. When the command request is a read command request, the logical address of the target mapping item is obtained and the target mapping item corresponding to that logical address is retrieved from the H-Cache; if it exists, its physical address is output; if not, the C-Cache is searched for the target mapping item corresponding to the logical address; if it is found, its physical address is output; otherwise, the physical address corresponding to the logical address is retrieved from the Mapping area and output.

Description

Caching method and system for large-scale nonvolatile storage medium
Technical Field
The invention relates to the technical field of data storage and retrieval, in particular to a caching method and a caching system for a large-scale nonvolatile storage medium.
Background
Compared with traditional magnetic disks, solid-state disks based on Non-Volatile Memory (NVM) offer many advantages, such as low power consumption, fast read/write speed, and good shock resistance. In settings with high storage-performance requirements, such as data centers, NVM-based devices are already used more widely than magnetic disks.
In the big-data era, International Data Corporation forecasts that the total amount of data generated globally per year will grow by 40% annually in the five years from 2016 and will reach 44 ZB by 2020. In solid-state disk systems based on NVM media, the disadvantages of traditional forms of data organization and management begin to emerge. First, as the amount of stored data increases, the volume of metadata for managing the mapping between logical and physical data grows, and so do the cache space and storage space that this metadata requires. Second, cache and index management based on this metadata becomes complex to implement. In particular, as large-capacity three-dimensional NVM media enter solid-state disk systems, conventional cache and index management methods gradually fail to meet the requirements of high-performance storage access, and this becomes one of the bottlenecks of data-storage access performance.
In the existing solid-state disk architecture, in order to remain compatible with the way a conventional host-side file system accesses a block device, an NVM Translation Layer (NVMTL) is usually designed into the solid-state disk software stack. It mainly comprises modules for address mapping, wear leveling, garbage collection, and so on. Address mapping is the core function of the NVMTL: it is responsible for translating logical addresses from the file system into physical addresses in the NVM device.
In most NVMTL address-mapping schemes, the entries relating logical addresses to physical addresses are organized and managed as a two-dimensional table. In a high-capacity storage system based on NVM media, data-retrieval performance is low when the data volume is large. In addition, some existing NVM solutions adopt a two-level mapping method that caches a frequently accessed subset of the full set of mapping entries; retrieval performance on a cache miss is then poor, and storage-access management is difficult to implement.
Disclosure of Invention
In view of the technical problems in the background art, the invention provides a caching method and a caching system for a large-scale nonvolatile storage medium.
The invention provides a caching method for a large-scale nonvolatile storage medium, comprising the following steps:
s1, judging that the command request is a target mapping item write command request or a target mapping item read command request according to the user command request, and executing S2 when the command request is judged to be the target mapping item write command request; executing S3 when the command request is judged to be a target mapping item read command request;
s2, setting the tail flag position of the target mapping item to 0, and storing the target mapping item into the H-Cache area according to the logical address prefix;
s3, acquiring a logical address in a target mapping item read command request, judging whether a target mapping item corresponding to the logical address exists in the H-Cache, and outputting a physical address corresponding to the target mapping item when the judgment result is yes; if not, searching whether a target mapping item corresponding to the logical address exists in the C-Cache,
and when the retrieval result is yes, outputting the physical address corresponding to the target Mapping item, and when the retrieval result is no, retrieving the target Mapping item corresponding to the logical address in the Mapping area and outputting the physical address corresponding to the target Mapping item.
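The dispatch in S1–S3 can be sketched as a write path plus a three-level lookup. This is only an illustrative sketch, not the patented implementation: the three areas are stood in for by plain dicts, and all names (`handle_request`, `h_cache`, and so on) are invented for the example.

```python
def handle_request(request, h_cache, c_cache, mapping):
    """Route a command request to the write path (S2) or read path (S3)."""
    if request["type"] == "write":
        # S2: clear the tail flag bit and store by logical address.
        item = request["item"]
        item["flag"] = 0
        h_cache[item["logical"]] = item
        return None
    # S3: probe the H-Cache, then the C-Cache, then the Mapping area.
    logical = request["logical"]
    for area in (h_cache, c_cache, mapping):
        if logical in area:
            return area[logical]["physical"]
    return None  # the logical address is unknown
```

A real implementation would index each area with the prefix trees described below; the dicts here only make the lookup order visible.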
Step S2 specifically includes:
S20: set the flag bit at the tail of the target mapping item to 0;
S21: store the target mapping item into the non-compressed prefix complete binary tree of the H-Cache area according to its logical-address prefix, and judge whether the remaining space in the H-Cache area is smaller than a preset threshold A; if so, migrate mapping items out of the H-Cache area and execute S22; if not, do nothing;
S22: migrate the mapping item that has not been accessed for the longest time in the H-Cache area into the compressed prefix complete binary tree of the C-Cache area, and judge whether the remaining space in the C-Cache area is smaller than a preset threshold B. If so, sort the Blocks in the C-Cache area by access count and check, item by item, whether each mapping-item flag bit in the Block with the fewest accesses is 0 or 1: if the flag bit is 0, migrate the mapping item to the Mapping area and set its flag bit to 1; if the flag bit is 1, delete the mapping item. If not, do nothing.
In step S21, before storing the target mapping item into the non-compressed prefix complete binary tree of the H-Cache area according to its logical-address prefix, the method further includes building the non-compressed prefix complete binary tree of the H-Cache area by the non-compressed prefix complete binary tree construction method, which specifically includes:
S200: create a Block as the root node and set its prefix to 0; the number of mapping items stored per Block is 1;
S201: set the first bit of the logical address of each mapping item to be stored in the tree of the H-Cache area to 0, and store the first mapping item into the Block;
S202: when a new mapping item is to be stored into a Block that is already full, split the Block into two child nodes whose prefixes are set to 00 and 01 respectively; then check, item by item, the first two bits of the logical address of each mapping item in the node whose original prefix was 0, moving items whose first two bits are 00 into the child node with prefix 00 and items whose first two bits are 01 into the child node with prefix 01; finally, store the new mapping item into the corresponding child node according to the first two bits of its logical address;
S203: repeat step S202 until all mapping items to be stored in the tree of the H-Cache area have been stored, then complete the tree with virtual nodes to obtain the non-compressed prefix complete binary tree.
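The construction in S200–S203 can be sketched as a split-on-overflow prefix tree. Logical addresses are modeled as bit strings whose first bit is 0 (matching the root prefix, per S201); the class and function names are invented for the sketch, and the virtual-node completion step is omitted.

```python
class Block:
    """Node of the prefix complete binary tree (S200-S203)."""
    def __init__(self, prefix, capacity=1):
        self.prefix = prefix      # bit-string prefix: "0", "00", "01", ...
        self.capacity = capacity  # 1 item per Block for the H-Cache tree
        self.items = {}           # logical bit string -> physical address
        self.children = None      # (child "...0", child "...1") after a split

def insert(block, logical, physical):
    """Store a mapping item by logical-address prefix, splitting full Blocks."""
    while block.children is not None:        # descend to the leaf Block
        block = block.children[int(logical[len(block.prefix)])]
    block.items[logical] = physical
    if len(block.items) > block.capacity:    # S202: the Block overflowed
        bit = len(block.prefix)
        block.children = (Block(block.prefix + "0", block.capacity),
                          Block(block.prefix + "1", block.capacity))
        for lg, ph in block.items.items():   # redistribute by the next bit
            block.children[int(lg[bit])].items[lg] = ph
        block.items = {}
```

Inserting "000" and then "011" splits the root into Blocks with prefixes "00" and "01", exactly the redistribution S202 describes.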
In step S3, before judging whether a target mapping item corresponding to the logical address exists in the H-Cache, the method includes building the non-compressed prefix complete binary tree of the H-Cache area by the non-compressed prefix complete binary tree construction method, which specifically includes:
S210: create a Block as the root node and set its prefix to 0; the number of mapping items stored per Block is 1;
S211: set the first bit of the logical address of each mapping item to be stored in the tree of the H-Cache area to 0, and store the first mapping item into the Block;
S212: when a new mapping item is to be stored into a Block that is already full, split the Block into two child nodes whose prefixes are set to 00 and 01 respectively; check, item by item, the first two bits of the logical address of each mapping item in the node whose original prefix was 0, moving items whose first two bits are 00 into the child node with prefix 00 and items whose first two bits are 01 into the child node with prefix 01; finally, store the new mapping item into the corresponding child node according to the first two bits of its logical address;
S213: repeat step S212 until all mapping items to be stored in the tree of the H-Cache area have been stored, then complete the tree with virtual nodes to obtain the non-compressed prefix complete binary tree.
in step S3, it is determined whether there is a target mapping item corresponding to the logical address in the H-Cache by using a non-compressed prefix complete binary tree search algorithm, where the non-compressed prefix complete binary tree search algorithm specifically includes:
s300, receiving a logical address of a target mapping item;
s301, according to the height k of a complete binary tree of the prefix of the H-Cache area, taking the k bits before the logical address of the target mapping item, and calculating to obtain a Block number possibly existing in the target mapping item by using a binary tree Block sequence number rapid calculation algorithm;
s302, searching whether the logical address of the target mapping item is stored in the Block Block, and returning a physical address corresponding to the logical address of the target mapping item when the search result is yes; when the retrieval result is no, executing S303;
s303, let K equal to K-1, perform S301 and S302 until K equals 1, perform S304;
s304, calculating to obtain a Block number possibly existing in the target mapping item according to the first 1 bit of the logical address of the target mapping item and a binary tree Block sequence number rapid calculation algorithm, searching whether the logical address of the target mapping item is stored in the Block, and returning a physical address corresponding to the logical address of the target mapping item when the search result is yes; when the retrieval result is negative, the logical address representing the target mapping item does not exist in the non-compressed prefix complete binary tree;
in step S3, before the retrieving whether there is a target mapping entry corresponding to the logical address in the C-Cache, the method further includes: establishing a compressed prefix complete binary tree of the C-Cache area by a compressed prefix complete binary tree construction method; before retrieving the target Mapping entry corresponding to the logical address in the Mapping region and outputting the physical address corresponding to the target Mapping entry, the method further includes: establishing a compressed prefix complete binary tree of a Mapping region by a compressed prefix complete binary tree construction method; the method for constructing the compressed prefix complete binary tree specifically comprises the following steps:
s310, establishing a Block Block as a root node, and setting the prefix of the Block Block to be 0; the number of mapping items stored in the Block Block is M, wherein 1 is less than M;
s311, setting the first bit of the logical address of the mapping item to be stored in the tree to be 0, storing the first mapping item into the Block Block and performing data compression;
s312, decompressing the Block Block, storing new mapping items into the Block Block item by item, splitting the Block Block into two sub-nodes when the number of the mapping items in the Block Block reaches a set threshold value M, setting the prefixes of the two sub-Block nodes to 00 and 01 respectively, checking the first two bits of the logical address of the mapping items in the Block node with the original prefix of 0 item by item, transferring the mapping items with the first two bits of the logical address of 00 into the sub-Block node with the prefix of 00, transferring the mapping items with the first two bits of the logical address of 01 into the sub-Block node with the prefix of 01, and finally storing the newly stored mapping items into the corresponding sub-Block nodes according to the first two bits of the logical address;
s313, repeating the operation of the step S312 until all mapping items needing to be stored in the tree are stored in the tree, and completing the tree through the virtual nodes to obtain a compressed prefix complete binary tree;
in step S3, a compressed prefix complete binary tree retrieval algorithm is used to retrieve whether a target Mapping item corresponding to the logical address exists in the C-Cache, and a compressed prefix complete binary tree retrieval algorithm is used to retrieve whether a target Mapping item corresponding to the logical address exists in the Mapping area, where the compressed prefix complete binary tree retrieval algorithm specifically includes:
s320, receiving a logical address of the target mapping item;
s321, according to the height k of a complete binary tree of a compressed prefix in the C-Cache area or the Mapping area, taking the front k bits of a logical address of a target Mapping item, and calculating by using a binary tree Block sequence number rapid calculation algorithm to obtain a Block Block number possibly existing in the target Mapping item;
s322, searching whether the Block filter of the Block Block contains a storage mark of a target mapping item, when the search result is yes, decompressing the Block Block, returning a physical address corresponding to the logical address of the target mapping item, and then compressing the Block Block; when the retrieval result is no, executing S323;
s323, let K equal K-1, execute S321 and S322 operations, until K equals 1, execute S324;
s324, calculating to obtain a Block number possibly existing in the target mapping item according to the first 1 bit of the logical address of the target mapping item and a binary tree Block sequence number rapid calculation algorithm, searching whether a Block filter of the Block contains a storage mark of the target mapping item, and returning a physical address corresponding to the logical address of the target mapping item when the search result is yes; and when the retrieval result is negative, the logical address representing the target mapping item does not exist in the compressed prefix complete binary tree.
In step S3, the process of searching the C-Cache for a target mapping item corresponding to the logical address and, on a hit, outputting the corresponding physical address specifically includes: recording the retrieval time and, when the target mapping item corresponding to the logical address is retrieved again within a preset time, migrating the target mapping item to the H-Cache area.
In step S3, the process of retrieving the target mapping item corresponding to the logical address in the Mapping area and outputting its physical address further includes: locating the Block in the Mapping area in which the target mapping item resides, backing up the target mapping item in that Block to the H-Cache area, and backing up the other mapping items in that Block to the C-Cache area.
A caching system for a large-scale nonvolatile storage medium comprises:
a judging module, configured to judge, according to the user command request, whether the command request is a target-mapping-item write command request or a target-mapping-item read command request;
a write command module, configured to, when the judging module determines that the command request is a target-mapping-item write command request, set the flag bit at the tail of the target mapping item to 0 and then store the target mapping item into the H-Cache area according to its logical-address prefix;
a read command module, configured to, when the judging module determines that the command request is a target-mapping-item read command request, obtain the logical address in the read command request and judge whether a target mapping item corresponding to the logical address exists in the H-Cache; if so, output the physical address corresponding to the target mapping item; if not, search the C-Cache for a target mapping item corresponding to the logical address; if the search succeeds, output the corresponding physical address; if it fails, retrieve the target mapping item corresponding to the logical address in the Mapping area and output its physical address.
The write command module includes a marking submodule, a judging submodule, a processing submodule, and a writing submodule.
The marking submodule is configured to set the flag bit at the tail of the target mapping item to 0.
The judging submodule is configured to judge whether the remaining space in the H-Cache area is smaller than a preset threshold A.
The processing submodule is configured to, when the judging submodule's result is yes, store the target mapping item into the non-compressed prefix complete binary tree of the H-Cache area according to its logical-address prefix, migrate the mapping item that has not been accessed for the longest time into the C-Cache area, and judge whether the remaining space in the C-Cache area is smaller than a preset threshold B; if so, sort the Blocks in the C-Cache area by access count and check whether each mapping-item flag bit in the Block with the fewest accesses is 0 or 1: if the flag bit is 0, write the mapping item into the Mapping area and set its flag bit to 1; if the flag bit is 1, delete the mapping item; if the remaining space is not smaller than B, do nothing.
The writing submodule is configured to store the target mapping item into the non-compressed prefix complete binary tree of the H-Cache area according to its logical-address prefix when the judging submodule's result is no.
The write command module is further configured to build, before judging whether a target mapping item corresponding to the logical address exists in the H-Cache, the non-compressed prefix complete binary tree of the H-Cache area by the non-compressed prefix complete binary tree construction method, which specifically includes:
creating a Block as the root node and setting its prefix to 0, the number of mapping items stored per Block being 1;
setting the first bit of the logical address of each mapping item to be stored in the tree of the H-Cache area to 0, and storing the first mapping item into the Block;
when a new mapping item is to be stored into a Block that is already full, splitting the Block into two child nodes whose prefixes are set to 00 and 01 respectively, checking item by item the first two bits of the logical address of each mapping item in the node whose original prefix was 0, moving items whose first two bits are 00 into the child node with prefix 00 and items whose first two bits are 01 into the child node with prefix 01, and finally storing the new mapping item into the corresponding child node according to the first two bits of its logical address;
and, after all mapping items to be stored in the tree of the H-Cache area have been stored, completing the tree with virtual nodes to obtain the non-compressed prefix complete binary tree.
The read command module is specifically configured to build, before judging whether a target mapping item corresponding to the logical address exists in the H-Cache, the non-compressed prefix complete binary tree of the H-Cache area by the non-compressed prefix complete binary tree construction method, which specifically includes:
creating a Block as the root node and setting its prefix to 0, the number of mapping items stored per Block being 1;
setting the first bit of the logical address of each mapping item to be stored in the tree of the H-Cache area to 0, and storing the first mapping item into the Block;
when a new mapping item is to be stored into a Block that is already full, splitting the Block into two child nodes whose prefixes are set to 00 and 01 respectively, checking item by item the first two bits of the logical address of each mapping item in the node whose original prefix was 0, moving items whose first two bits are 00 into the child node with prefix 00 and items whose first two bits are 01 into the child node with prefix 01, and finally storing the new mapping item into the corresponding child node according to the first two bits of its logical address;
and, after all mapping items to be stored in the tree of the H-Cache area have been stored, completing the tree with virtual nodes to obtain the non-compressed prefix complete binary tree;
judging whether a target mapping item corresponding to the logical address exists in the H-Cache by a non-compressed prefix complete binary tree retrieval algorithm, which specifically includes:
S400: receiving the logical address of the target mapping item;
S401: according to the height k of the non-compressed prefix complete binary tree of the H-Cache area, taking the first k bits of the logical address of the target mapping item and computing, with the binary-tree Block sequence-number rapid calculation algorithm, the number of the Block in which the target mapping item may reside;
S402: searching that Block for the logical address of the target mapping item; if it is found, returning the physical address corresponding to the logical address; if not, executing S403;
S403: letting k = k - 1 and executing S401 and S402; when k reaches 1, executing S404;
S404: computing the number of the Block in which the target mapping item may reside from the first bit of its logical address and the binary-tree Block sequence-number rapid calculation algorithm, and searching that Block for the logical address of the target mapping item; if it is found, returning the corresponding physical address; if not, the logical address of the target mapping item does not exist in the non-compressed prefix complete binary tree;
building, before searching the C-Cache for a target mapping item corresponding to the logical address, the compressed prefix complete binary tree of the C-Cache area by the compressed prefix complete binary tree construction method, and building, before retrieving the target mapping item corresponding to the logical address in the Mapping area and outputting its physical address, the compressed prefix complete binary tree of the Mapping area by the same construction method, which includes:
creating a Block as the root node and setting its prefix to 0, the number of mapping items stored per Block being M, where M > 1;
setting the first bit of the logical address of each mapping item to be stored in the tree to 0, storing the first mapping item into the Block, and compressing the data;
decompressing the Block and storing new mapping items into it item by item; when the number of mapping items in the Block reaches the threshold M, splitting the Block into two child nodes whose prefixes are set to 00 and 01 respectively, checking item by item the first two bits of the logical address of each mapping item in the node whose original prefix was 0, moving items whose first two bits are 00 into the child node with prefix 00 and items whose first two bits are 01 into the child node with prefix 01, and finally storing the new mapping item into the corresponding child node according to the first two bits of its logical address;
and, after all mapping items to be stored in the tree have been stored, completing the tree with virtual nodes to obtain the compressed prefix complete binary tree;
searching the C-Cache, and likewise the Mapping area, for a target mapping item corresponding to the logical address by a compressed prefix complete binary tree retrieval algorithm, which specifically includes:
S410: receiving the logical address of the target mapping item;
S411: according to the height k of the compressed prefix complete binary tree of the C-Cache area or the Mapping area, taking the first k bits of the logical address of the target mapping item and computing, with the binary-tree Block sequence-number rapid calculation algorithm, the number of the Block in which the target mapping item may reside;
S412: checking whether the Block filter of that Block contains a storage mark for the target mapping item; if it does, decompressing the Block, returning the physical address corresponding to the logical address of the target mapping item, and then recompressing the Block; if it does not, executing S413;
S413: letting k = k - 1 and executing S411 and S412; when k reaches 1, executing S414;
S414: computing the number of the Block in which the target mapping item may reside from the first bit of its logical address and the binary-tree Block sequence-number rapid calculation algorithm, and checking whether the Block filter of that Block contains a storage mark for the target mapping item; if it does, returning the physical address corresponding to the logical address; if it does not, the logical address of the target mapping item does not exist in the compressed prefix complete binary tree.
The read command module is further configured to: retrieve whether a target mapping item corresponding to the logical address exists in the C-Cache; when the retrieval result is yes, record the retrieval time while outputting the physical address corresponding to the target mapping item, and migrate the target mapping item to the H-Cache area if the target mapping item corresponding to the logical address is retrieved again within a preset time;
when retrieving the target Mapping item corresponding to the logical address in the Mapping area and outputting its physical address, locate the Block in which that target Mapping item resides in the Mapping area, back up the target Mapping item in the Block to the H-Cache area, and back up the other Mapping items in the Block to the C-Cache area.
The invention divides the Cache into two parts, the C-Cache and the H-Cache. The H-Cache part guarantees the high-speed access performance of the system; the C-Cache part compresses Mapping item data so that a limited storage space holds more Mapping items, which raises the Mapping item hit rate and improves system efficiency. Compared with conventional flash memory devices, the write amplification caused by out-of-place updating of Mapping items is eliminated, the time overhead of copying and transferring Mapping items during out-of-place updates is reduced, and the read/write speed of the Mapping area stored on the NVM medium is markedly improved. Because DRAM is faster than the other storage devices, frequent data access operations are completed in DRAM, improving system operation efficiency. The principle of data access locality is fully considered: Mapping items are fetched and backed up to the Cache in units of Blocks, which avoids repeatedly searching the Mapping area for Mapping items and improves system efficiency. Through a partial write-back strategy, when a Block in the C-Cache must be migrated out of the Cache, only newly created and updated Mapping items are written back to the Mapping area, which reduces the volume of data written back to the NVM device while guaranteeing data consistency and improves write-back time efficiency.
Drawings
FIG. 1 is a schematic flow chart of a large-scale nonvolatile storage medium-oriented caching method according to the present invention;
FIG. 2 is a schematic diagram of a cache management execution logic;
FIG. 3 is a diagram illustrating a structure of a prefix complete binary tree;
FIG. 4 is a diagram illustrating a mapping item structure;
FIG. 5 is a schematic flow chart of a prefix complete binary tree search algorithm;
FIG. 6 is a block diagram of address mapping;
FIG. 7 is a system logic hierarchy diagram;
FIG. 8 is a diagram of a system hardware architecture;
FIG. 9 is a schematic view of an address mapping process;
fig. 10 is a block diagram of a cache system for large-scale non-volatile storage media according to the present invention.
Detailed Description
Referring to fig. 1, the present invention provides a caching method for a large-scale nonvolatile storage medium, including the following steps:
step S1, according to a user command request, determining whether the command request is a target mapping item write command request or a target mapping item read command request; executing S2 when it is a target mapping item write command request, and executing S3 when it is a target mapping item read command request;
when a user sends a command request, whether the command request is a write command request or a read command request is judged, and corresponding operation is carried out according to different command requests.
Step S2, setting the flag bit at the tail of the target mapping item to 0, and storing the target mapping item into the H-Cache area according to its logical address prefix;
step S2, specifically including:
s20, setting the flag bit at the tail of the target mapping item to 0;
s21, storing the target mapping item into the non-compressed prefix complete binary tree of the H-Cache area according to its logical address prefix, and judging whether the remaining space in the H-Cache area is smaller than a preset threshold A; when the judgment result is yes, migrating original mapping items out of the H-Cache area and executing S22; when the judgment result is no, performing no operation;
s22, migrating the Mapping item that has not been accessed for the longest time from the H-Cache area into the compressed prefix complete binary tree of the C-Cache area, and judging whether the remaining space in the C-Cache area is smaller than a preset threshold B. When the judgment result is yes, sorting the Blocks in the C-Cache area by access count and checking, item by item, whether the flag bit of each Mapping item in the Block with the fewest accesses is 0 or 1: if the flag bit is 0, migrating the Mapping item to the Mapping area and setting its flag bit to 1; if the flag bit is 1, deleting the Mapping item. When the judgment result is no, performing no operation;
referring to fig. 2, a newly created or updated Mapping item is first stored in the H-Cache. When the Mapping item enters the H-Cache, as shown in fig. 2, the system sets its tail flag to 0, so that after the item is later migrated to the C-Cache it can be identified when migrated again into the Mapping area, reducing the amount of data rewritten into the Mapping area. The system then judges whether the remaining space in the H-Cache area is smaller than the preset threshold. Keeping space below the threshold reserves room for newly written Mapping items and acts as a buffer, so that writing and data migration run concurrently and time efficiency improves: when data is to be transferred into the H-Cache, migration of H-Cache Mapping item data must start while the new Mapping item data is written, and the Mapping item that has not been accessed for the longest time is migrated to the appropriate position in the C-Cache area, as shown in fig. 2. Likewise, the system judges whether the remaining space in the C-Cache area is smaller than its preset threshold; when it is, room is reserved for newly written Mapping items as a buffer, so writing and data migration run concurrently. If data is to be transferred into the C-Cache, migration of C-Cache Mapping item data must start while new Mapping item data is written: the Blocks in the C-Cache area are sorted by their access counts over a recent period, the Block with the fewest accesses is chosen as the victim, and the flag bits of all Mapping items in that Block are checked item by item. If a flag bit is 0, the Mapping item is written back to the appropriate position in the Mapping area, shown as 3 in fig. 2, and its flag bit is then set to 1; if the flag bit is 1, the Mapping item is discarded directly. This reduces the volume of Mapping item data written to the Mapping area, shortens write-back time, and improves system efficiency.
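The partial write-back policy described above can be sketched as follows. This is a simplified model, not the patent's implementation: the C-Cache is modeled as a dictionary of Blocks with access counters, and each Mapping item carries the flag bit of fig. 4 (0 = newly created/updated, 1 = already present in the Mapping area).

```python
def evict_victim_block(c_cache, mapping_area):
    """Evict the least-accessed Block from the C-Cache.

    c_cache:      dict block_id -> {"hits": int, "items": [item dicts]}
    mapping_area: dict logical address -> physical address

    Only items whose flag is 0 (dirty: newly created or updated) are
    written back; flag-1 items already exist in the Mapping area and
    are simply dropped, which is what shrinks the write-back volume.
    """
    victim_id = min(c_cache, key=lambda b: c_cache[b]["hits"])  # fewest accesses
    victim = c_cache.pop(victim_id)
    written_back = 0
    for item in victim["items"]:
        if item["flag"] == 0:                       # dirty item: write back
            mapping_area[item["logical"]] = item["physical"]
            item["flag"] = 1                        # now clean
            written_back += 1
        # flag == 1: an unchanged copy is already in the Mapping area
    return victim_id, written_back
```

In the worst case every item in the victim Block is dirty and the write-back equals a full flush; in the common case many items are clean backups and only a fraction of the Block touches the NVM device.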
Referring to fig. 3, in this step, before storing the target mapping item into the uncompressed prefix complete binary tree of the H-Cache area according to the logical address prefix, the method further includes: establishing a non-compressed prefix complete binary tree of an H-Cache area by a non-compressed prefix complete binary tree construction method, wherein the non-compressed prefix complete binary tree construction method specifically comprises the following steps:
s200, establishing a Block as the root node, with the prefix of the Block set to 0 and the number of mapping items stored in the Block set to 1;
s201, setting the first bit of the logical address of each mapping item to be stored in the tree in the H-Cache area to 0, and storing the first mapping item into the Block;
s202, when a new mapping item is stored into a Block whose mapping item count is already full, splitting the Block into two child nodes whose prefixes are set to 00 and 01 respectively; then checking, item by item, the first two bits of the logical addresses of the mapping items in the Block node whose original prefix was 0, transferring the mapping items whose first two address bits are 00 into the child Block node with prefix 00 and those whose first two bits are 01 into the child Block node with prefix 01; and finally storing the newly arrived mapping item into the corresponding child Block node according to the first two bits of its logical address;
s203, repeating the operation of step S202 until all mapping items to be stored in the tree in the H-Cache area have been stored, and completing the tree with virtual nodes to obtain a non-compressed prefix complete binary tree;
the binary tree in the H-Cache is a non-compressed prefix complete binary tree, which improves retrieval efficiency and eliminates compression and decompression overhead; each node stores one logical address-physical address mapping item.
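Steps S200–S203 amount to a prefix tree whose leaf Blocks split in two whenever they overflow. The sketch below illustrates that splitting rule; it is an assumption-laden toy (logical addresses as bit strings, a generic per-Block capacity that would be 1 in the H-Cache), not the patent's data structure.

```python
class Block:
    """One node of the prefix complete binary tree."""
    def __init__(self, prefix, capacity):
        self.prefix = prefix        # binary-string prefix, e.g. "01"
        self.capacity = capacity    # items per Block (1 in the H-Cache tree)
        self.items = {}             # logical-address bits -> physical address
        self.children = None        # [left, right] once the Block has split

def insert(block, laddr, paddr):
    """Steps S200-S202: store an item, splitting a full leaf Block in two."""
    if block.children is not None:                 # descend by the next bit
        insert(block.children[int(laddr[len(block.prefix)])], laddr, paddr)
        return
    if laddr in block.items or len(block.items) < block.capacity:
        block.items[laddr] = paddr
        return
    # Full leaf: split into children with prefixes prefix+"0" / prefix+"1",
    # redistribute the existing items by their next address bit, then retry.
    block.children = [Block(block.prefix + "0", block.capacity),
                      Block(block.prefix + "1", block.capacity)]
    moved, block.items = block.items, {}
    for la, pa in moved.items():
        insert(block, la, pa)
    insert(block, laddr, paddr)

def find(block, laddr):
    """Walk down the prefix path to the leaf that could hold the address."""
    while block.children is not None:
        block = block.children[int(laddr[len(block.prefix)])]
    return block.items.get(laddr)
```

With capacity 1 (the H-Cache case), every collision splits the leaf, so each surviving node ends up holding exactly one mapping item, as the description requires.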
Referring to fig. 4, the flag bit in the mapping item structure has two states: 0 represents a newly created or updated mapping item, and 1 represents a mapping item that has not been updated.
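As a minimal illustration of the mapping item structure in fig. 4, the record can be modeled as below. Field names and address widths are assumptions for the sketch; the patent only specifies a logical address, a physical address, and a tail flag bit.

```python
from dataclasses import dataclass

@dataclass
class MappingItem:
    logical: int   # logical address
    physical: int  # physical address
    flag: int = 0  # 0 = newly created or updated; 1 = not updated
                   # (already written back to the Mapping area)
```

A newly formed item therefore defaults to flag 0, matching step S20, and the eviction path flips the flag to 1 once the item has been written back.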
Step S3, acquiring the logical address in the target mapping item read command request, judging whether a target mapping item corresponding to the logical address exists in the H-Cache, and outputting the physical address corresponding to the target mapping item when the judgment result is yes; when the judgment result is no, retrieving whether a target mapping item corresponding to the logical address exists in the C-Cache;
when the retrieval result is yes, outputting the physical address corresponding to the target Mapping item; when the retrieval result is no, retrieving the target Mapping item corresponding to the logical address in the Mapping area and outputting its physical address;
in this step, when a target mapping item corresponding to the logical address is found in the C-Cache and its physical address is output, the method further includes: recording the retrieval time, and migrating the target mapping item to the H-Cache area if the target mapping item corresponding to the logical address is retrieved again within a preset time. When retrieving the target Mapping item corresponding to the logical address in the Mapping area and outputting its physical address, the method further includes: locating the Block in which that target Mapping item resides in the Mapping area, backing up the target Mapping item in the Block to the H-Cache area, and backing up the other Mapping items in the Block to the C-Cache area.
When the logical address in a target mapping item read command request is obtained and the corresponding logical address-physical address mapping item is looked up, the H-Cache is searched first. If the mapping item is found in the H-Cache, the corresponding physical address is returned directly. If not, the C-Cache is searched next; on a C-Cache hit, the required physical address is returned to the upper-layer system and the access time is recorded in the C-Cache. When the mapping item corresponding to the logical address hits in the C-Cache again after some time, the interval between the two accesses is computed; if the interval is smaller than a specific threshold, the mapping item is migrated to the corresponding position in the H-Cache area. If the item is not found in the C-Cache either, as in 4 in fig. 2, the Mapping area is searched, as in 5 in fig. 2: the Block containing the Mapping item is located in the Mapping area, the Mapping item is backed up to the appropriate position in the H-Cache tree, and its physical address is returned, while the other Mapping items in the Block are backed up item by item to appropriate positions in the C-Cache tree, as in 6 in fig. 2. According to the principle of locality, the Mapping items around the hit item are the most likely to be accessed again, so they are backed up to the Cache together.
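The three-level read path with promotion can be sketched as follows. This is a simplified model under stated assumptions: caches are plain dictionaries, the C-Cache records a monotonic timestamp per item, and the promotion threshold is an arbitrary `promote_window` of one second; none of these specifics come from the patent.

```python
import time

def lookup(h_cache, c_cache, mapping_area, laddr, promote_window=1.0):
    """H-Cache, then C-Cache, then Mapping area (steps 4-6 in fig. 2).

    c_cache maps laddr -> (paddr, last_hit_time or None). A second
    C-Cache hit within promote_window seconds promotes the item.
    """
    if laddr in h_cache:                      # fastest path: "active" items
        return h_cache[laddr]
    if laddr in c_cache:
        paddr, last_hit = c_cache[laddr]
        now = time.monotonic()
        if last_hit is not None and now - last_hit < promote_window:
            del c_cache[laddr]                # two hits close together:
            h_cache[laddr] = paddr            # promote into the H-Cache
        else:
            c_cache[laddr] = (paddr, now)     # remember this access time
        return paddr
    if laddr in mapping_area:                 # miss in both caches
        paddr = mapping_area[laddr]
        h_cache[laddr] = paddr                # back the hit item up to H-Cache
        # (the other items of the same Block would be backed up to the C-Cache)
        return paddr
    return None
```

The Block-granularity backup on a Mapping-area hit is only indicated by a comment here; a fuller model would copy the hit item's neighbours into the C-Cache to exploit locality, as the text describes.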
Referring to fig. 3, in this step, before determining whether there is a target mapping entry corresponding to the logical address in the H-Cache, the method includes: establishing a non-compressed prefix complete binary tree of an H-Cache area by a non-compressed prefix complete binary tree construction method, wherein the non-compressed prefix complete binary tree construction method specifically comprises the following steps:
s210, establishing a Block as the root node, with the prefix of the Block set to 0 and the number of mapping items stored in the Block set to 1;
s211, setting the first bit of the logical address of each mapping item to be stored in the tree in the H-Cache area to 0, and storing the first mapping item into the Block;
s212, when a new mapping item is stored into a Block whose mapping item count is already full, splitting the Block into two child nodes whose prefixes are set to 00 and 01 respectively; checking, item by item, the first two bits of the logical addresses of the mapping items in the Block node whose original prefix was 0, transferring the mapping items whose first two address bits are 00 into the child Block node with prefix 00 and those whose first two bits are 01 into the child Block node with prefix 01; and finally storing the newly arrived mapping item into the corresponding child Block node according to the first two bits of its logical address;
s213, repeating the operation of step S212 until all mapping items to be stored in the tree in the H-Cache area have been stored, and completing the tree with virtual nodes to obtain a non-compressed prefix complete binary tree;
the binary tree in the H-Cache is a non-compressed prefix complete binary tree, which improves retrieval efficiency and eliminates compression and decompression overhead; each node stores one logical address-physical address mapping item.
Referring to fig. 5, in this step, it is determined whether there is a target mapping item corresponding to the logical address in the H-Cache by using a non-compressed prefix complete binary tree search algorithm, where the non-compressed prefix complete binary tree search algorithm specifically includes:
s300, receiving a logical address of a target mapping item;
s301, according to the height k of the prefix complete binary tree of the H-Cache area, taking the first k bits of the logical address of the target mapping item and computing, with the binary tree Block sequence number rapid calculation algorithm, the number of the Block in which the target mapping item may reside;
s302, retrieving whether the logical address of the target mapping item is stored in that Block; when the retrieval result is yes, returning the physical address corresponding to the logical address of the target mapping item; when the retrieval result is no, executing S303;
s303, letting k = k-1 and repeating S301 and S302; when k reaches 1, executing S304;
s304, computing the number of the Block in which the target mapping item may reside from the first bit of its logical address using the binary tree Block sequence number rapid calculation algorithm, and retrieving whether the logical address of the target mapping item is stored in that Block; when the retrieval result is yes, returning the physical address corresponding to the logical address of the target mapping item; when the retrieval result is no, the logical address of the target mapping item does not exist in the non-compressed prefix complete binary tree.
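The patent does not spell out the "binary tree Block sequence number rapid calculation algorithm"; a plausible reading, given that the tree is complete and prefixes follow root 0 → children 00/01, is heap-style node numbering, sketched below as an assumption.

```python
def block_index(prefix_bits: str) -> int:
    """Map a logical-address prefix to a node number in the complete
    binary tree, assuming heap numbering: root = 1, children of node i
    are 2i and 2i+1.

    The first bit of every address is forced to 0 (the root's prefix),
    so only the remaining bits choose left (0) or right (1) children.
    """
    idx = 1  # root node, prefix "0"
    for bit in prefix_bits[1:]:
        idx = 2 * idx + int(bit)  # 0 -> left child, 1 -> right child
    return idx
```

Because the tree is completed with virtual nodes, every prefix of length k maps to a valid index computable in O(k) bit operations, with no pointer chasing — which is presumably what makes the calculation "rapid".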
In this step, before the retrieving whether there is a target mapping item corresponding to the logical address in the C-Cache, the method further includes: establishing a compressed prefix complete binary tree of the C-Cache area by a compressed prefix complete binary tree construction method; before retrieving the target Mapping entry corresponding to the logical address in the Mapping region and outputting the physical address corresponding to the target Mapping entry, the method further includes: establishing a compressed prefix complete binary tree of a Mapping region by a compressed prefix complete binary tree construction method; the method for constructing the compressed prefix complete binary tree specifically comprises the following steps:
s310, establishing a Block as the root node, with the prefix of the Block set to 0 and the number of mapping items stored in the Block set to M, where M > 1;
s311, setting the first bit of the logical address of each mapping item to be stored in the tree to 0, storing the first mapping item into the Block, and compressing the data;
s312, decompressing the Block and storing new mapping items into it one by one; when the number of mapping items in the Block reaches the set threshold M, splitting the Block into two child nodes whose prefixes are set to 00 and 01 respectively, checking item by item the first two bits of the logical addresses of the mapping items in the Block node whose original prefix was 0, transferring the mapping items whose first two address bits are 00 into the child Block node with prefix 00 and those whose first two bits are 01 into the child Block node with prefix 01, and finally storing the newly arrived mapping item into the corresponding child Block node according to the first two bits of its logical address;
s313, repeating the operation of step S312 until all mapping items to be stored in the tree have been stored, and completing the tree with virtual nodes to obtain a compressed prefix complete binary tree;
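The decompress-insert-recompress cycle of step S312 can be illustrated as below. The patent does not name a compression scheme, so zlib over a JSON-encoded item dictionary is assumed purely for the sketch, and the split-at-M case is omitted (it follows the same redistribution rule as the non-compressed tree).

```python
import json
import zlib

def compress_block(items: dict) -> bytes:
    """Pack a Block's mapping items (logical bits -> physical) for the C-Cache."""
    return zlib.compress(json.dumps(items, sort_keys=True).encode())

def decompress_block(blob: bytes) -> dict:
    return json.loads(zlib.decompress(blob))

def add_to_block(blob: bytes, laddr: str, paddr: int) -> bytes:
    """Step S312 without the split: decompress, insert the item, recompress."""
    items = decompress_block(blob)
    items[laddr] = paddr
    return compress_block(items)
```

Storing M > 1 same-prefix items per Block is what makes the compression pay off: addresses sharing a prefix compress well together, so the C-Cache holds more items per byte than the uncompressed H-Cache.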
in this step, a compressed prefix complete binary tree retrieval algorithm is used to retrieve whether a target Mapping item corresponding to the logical address exists in the C-Cache, and a compressed prefix complete binary tree retrieval algorithm is used to retrieve whether a target Mapping item corresponding to the logical address exists in a Mapping area, where the compressed prefix complete binary tree retrieval algorithm specifically includes:
s320, receiving a logical address of the target mapping item;
s321, according to the height k of the compressed prefix complete binary tree in the C-Cache area or the Mapping area, taking the first k bits of the logical address of the target Mapping item and computing, with the binary tree Block sequence number rapid calculation algorithm, the number of the Block in which the target Mapping item may reside;
s322, retrieving whether the Block filter of that Block contains a storage mark of the target mapping item; when the retrieval result is yes, decompressing the Block, returning the physical address corresponding to the logical address of the target mapping item, and then recompressing the Block; when the retrieval result is no, executing S323;
s323, letting k = k-1 and repeating S321 and S322; when k reaches 1, executing S324;
s324, computing the number of the Block in which the target mapping item may reside from the first bit of its logical address using the binary tree Block sequence number rapid calculation algorithm, and retrieving whether the Block filter of that Block contains a storage mark of the target mapping item; when the retrieval result is yes, returning the physical address corresponding to the logical address of the target mapping item; when the retrieval result is no, the logical address of the target mapping item does not exist in the compressed prefix complete binary tree;
the binary trees in the C-Cache and the Mapping area are compressed prefix complete binary trees; each node is a compressed Block, and each Block stores a plurality of Mapping items whose logical address prefixes match the Block prefix.
Referring to FIG. 6, the Cache is implemented on DRAM and divided into two parts, the H-Cache and the C-Cache. The H-Cache mainly stores frequently accessed "active" mapping items. The C-Cache mainly stores the relatively inactive mapping items migrated out of the H-Cache; a plurality of mapping items are stored, according to their logical address prefixes, into the appropriate Block and compressed. Through this compression technique, the limited Cache space can store more mapping items, which raises the mapping item hit rate and improves system operation efficiency;
the Mapping area is implemented on an NVM device that supports in-place updating; its goal is to store the complete Mapping item data, which mainly comes from data migrated out of the C-Cache, and a plurality of Mapping items are stored into the appropriate Block according to their logical address prefixes.
Referring to FIG. 7, the D-Cache part is implemented on DRAM media and temporarily stores data written or read by the user; the NVM Data Storage part is implemented on NVM media and stores only data information, not address mapping or other information.
Referring to fig. 8 and 9, when the system writes data into the storage system, the data and the logical address, offset, and size transmitted from the upper layer of the host operating system are temporarily stored in the D-Cache, and the controller allocates a physical space of an appropriate size to the data storage device according to the size of the data, and returns the physical address of the first address of the space to form a logical address-physical address mapping item, and stores the logical address-physical address mapping item into the address mapping module.
When the system reads data from the storage system, the upper layer of the host operating system transmits the logical address of the system into the address mapping module, obtains the physical address corresponding to the logical address by using the data caching and retrieval method, then reads the data from the storage space corresponding to the physical address, and returns the data to the upper layer system through the D-Cache.
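The write and read flows of figs. 8 and 9 reduce to: allocate physical space sized to the data, record a logical-to-physical mapping item, and translate on read. The sketch below is a toy controller under obvious assumptions (a dictionary as the storage device, a bump allocator for physical space); it is not the patent's hardware design.

```python
class StorageController:
    """Minimal model of the write/read flow in figs. 8 and 9."""
    def __init__(self):
        self.storage = {}      # physical address -> stored data bytes
        self.mapping = {}      # logical address -> physical address
        self._next_free = 0    # bump allocator for physical space

    def write(self, laddr, data):
        paddr = self._next_free           # first address of the allocated space
        self._next_free += len(data)      # space sized to the data
        self.storage[paddr] = data
        self.mapping[laddr] = paddr       # form the logical-physical mapping item
        return paddr

    def read(self, laddr):
        paddr = self.mapping[laddr]       # address translation via the mapping
        return self.storage[paddr]
```

In the full system, `self.mapping` is of course not a flat dictionary but the layered H-Cache/C-Cache/Mapping-area structure described above; the controller's role stays the same.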
Referring to fig. 10, the present invention provides a large-scale non-volatile storage medium-oriented cache system, which includes:
the judging module is used for acquiring a user command request and determining whether the command request is a target mapping item write command request or a target mapping item read command request;
when a user sends a command request, whether the command request is a write command request or a read command request is judged, and corresponding operation is carried out according to different command requests.
The write command module is used for, when the judging module determines that the command request is a target mapping item write command request, setting the flag bit at the tail of the target mapping item to 0 and then storing the target mapping item into the H-Cache area according to its logical address prefix;
the write command module includes: the system comprises a marking submodule, a judging submodule, a processing submodule and a writing submodule;
the mark submodule is used for setting the flag bit at the tail of the target mapping item to 0;
the judgment submodule is used for judging whether the residual space in the H-Cache area is smaller than a preset threshold A or not;
the processing submodule is used for, when the judgment result of the judgment submodule is yes, storing the target Mapping item into the non-compressed prefix complete binary tree of the H-Cache area according to its logical address prefix and migrating original Mapping items out of the H-Cache area: the original Mapping item that has not been accessed for the longest time is migrated into the C-Cache area, and whether the remaining space in the C-Cache area is smaller than a preset threshold B is judged; when that judgment result is yes, the Blocks in the C-Cache area are sorted by access count and the flag bit of each Mapping item in the Block with the fewest accesses is checked item by item: if the flag bit is 0, the Mapping item is written into the Mapping area and its flag bit is set to 1; if the flag bit is 1, the Mapping item is deleted; when the judgment result is no, no operation is performed;
and the writing sub-module is used for storing the target mapping item into the non-compressed prefix complete binary tree of the H-Cache area according to the logical address prefix when the judgment result of the judgment sub-module is negative.
Referring to fig. 2, a newly created or updated Mapping item is first stored in the H-Cache. When the Mapping item enters the H-Cache, shown as 1 in fig. 2, the system sets its tail flag to 0, so that after the item is later migrated to the C-Cache it can be identified when migrated again into the Mapping area, reducing the amount of data rewritten into the Mapping area. A threshold is then assigned to the H-Cache; when the remaining space in the H-Cache area falls below the threshold, room is reserved for newly written Mapping items, acting as a buffer so that writing and data migration run concurrently and time efficiency improves. If data is to be transferred into the H-Cache, migration of H-Cache Mapping item data must start while the new Mapping item data is written, and the Mapping item that has not been accessed for the longest time is migrated to the appropriate position in the C-Cache area, as shown in fig. 2. Likewise, a threshold is assigned to the C-Cache; when the remaining space in the C-Cache falls below the threshold, room is reserved for newly written Mapping items as a buffer, so writing and data migration run concurrently. If data is to be transferred into the C-Cache, migration of C-Cache Mapping item data must start while new Mapping item data is written: the Blocks in the C-Cache are sorted by their access counts over a recent period, the Block with the fewest accesses is chosen as the victim, and the flag bits of all Mapping items in that Block are checked item by item. If a flag bit is 0, the Mapping item is written back to the appropriate position in the Mapping area, shown as 3 in fig. 2, and its flag bit is then set to 1; if the flag bit is 1, the Mapping item is discarded directly. This reduces the volume of Mapping item data written to the Mapping area, shortens write-back time, and improves system efficiency.
Referring to FIG. 3, the write command module is further to: before storing a target mapping item into a non-compressed prefix complete binary tree of an H-Cache area according to a logical address prefix of the target mapping item, establishing the non-compressed prefix complete binary tree of the H-Cache area by a non-compressed prefix complete binary tree construction method, wherein the non-compressed prefix complete binary tree construction method specifically comprises the following steps:
establishing a Block as the root node, with the prefix of the Block set to 0 and the number of mapping items stored in the Block set to 1;
setting the first bit of the logical address of each mapping item to be stored in the tree in the H-Cache area to 0, and storing the first mapping item into the Block;
when a new mapping item is stored into a Block whose mapping item count is already full, splitting the Block into two child nodes whose prefixes are set to 00 and 01 respectively, checking item by item the first two bits of the logical addresses of the mapping items in the Block node whose original prefix was 0, transferring the mapping items whose first two address bits are 00 into the child Block node with prefix 00 and those whose first two bits are 01 into the child Block node with prefix 01, and finally storing the newly arrived mapping item into the corresponding child Block node according to the first two bits of its logical address;
after all mapping items to be stored in the tree in the H-Cache area have been stored, completing the tree with virtual nodes to obtain a non-compressed prefix complete binary tree;
the binary tree in the H-Cache is a non-compressed prefix complete binary tree, which improves retrieval efficiency and eliminates compression and decompression overhead; each node stores one logical address-physical address mapping item.
The read command module is used for acquiring a logical address in the target mapping item read command request when the judging module judges that the command request is the target mapping item read command request, judging whether a target mapping item corresponding to the logical address exists in the H-Cache or not, and outputting a physical address corresponding to the target mapping item when the judging result is yes; when the judgment result is negative, searching whether a target Mapping item corresponding to the logical address exists in the C-Cache, when the search result is positive, outputting a physical address corresponding to the target Mapping item, and when the search result is negative, searching the target Mapping item corresponding to the logical address in a Mapping area and outputting the physical address corresponding to the target Mapping item;
the read command module is further configured to: when a target mapping item corresponding to the logical address is found in the C-Cache, record the retrieval time while outputting the physical address corresponding to the target mapping item, and migrate the target mapping item to the H-Cache area if the target mapping item corresponding to the logical address is retrieved again within a preset time;
in the process of retrieving a target Mapping item corresponding to the logical address in a Mapping area and outputting a physical address corresponding to the target Mapping item, retrieving a Block in which the target Mapping item corresponding to the logical address is located in the Mapping area, backing up the target Mapping item in the Block to an H-Cache area, and backing up other Mapping items in the Block to a C-Cache area.
When the logical address in the target mapping item read command request is obtained and the corresponding logical address-physical address mapping item is looked up, the mapping item is first searched for in the H-Cache; if it is found there, the corresponding physical address result is returned directly. If it is not found, the search continues in the C-Cache. When the mapping item is found in the C-Cache, the required physical address result is returned to the upper-layer system and the access time is recorded in the C-Cache; when the mapping item corresponding to the logical address hits in the C-Cache again after a period of time, the time interval between the two accesses is calculated, and if the interval is less than a specific threshold, the mapping item is migrated to the corresponding position in the H-Cache area; if not, as at 4 in FIG. 2, the search proceeds to the Mapping area, as at 5 in FIG. 2. The Block in which the mapping item is located in the Mapping area is found, the mapping item is backed up to the appropriate position in the tree of the H-Cache, and the physical address result of the mapping item is returned, while the other mapping items in the Block are backed up item by item to appropriate positions in the tree of the C-Cache. According to the principle of locality, the mapping items around the accessed mapping item are the most likely to be accessed again, so they are backed up to the Cache together in order to exploit the locality of access, as at 6 in FIG. 2.
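For illustration only (not part of the claimed embodiment), the three-level lookup just described can be sketched in Python. The Block size, the promotion threshold, the use of a monotonic-clock interval, and the dictionary data layout are all assumptions made for the example:

```python
import time

BLOCK_SIZE = 4          # assumed number of mapping items per Block
PROMOTE_INTERVAL = 5.0  # assumed re-access threshold, in seconds

def read_mapping(lpn, h_cache, c_cache, mapping_area):
    """Three-level lookup: H-Cache, then C-Cache, then Mapping area."""
    # 1. H-Cache hit: return the physical address directly.
    if lpn in h_cache:
        return h_cache[lpn]
    # 2. C-Cache hit: answer the request, record the access time, and
    #    promote the item to the H-Cache if it is re-accessed soon.
    if lpn in c_cache:
        ppn, last_hit = c_cache[lpn]
        now = time.monotonic()
        if last_hit is not None and now - last_hit < PROMOTE_INTERVAL:
            del c_cache[lpn]
            h_cache[lpn] = ppn            # hot item moves to the H-Cache
        else:
            c_cache[lpn] = (ppn, now)     # remember this access time
        return ppn
    # 3. Mapping-area fallback: copy the whole Block for locality --
    #    the target item to the H-Cache, its neighbours to the C-Cache.
    block = mapping_area[lpn // BLOCK_SIZE]
    for item_lpn, item_ppn in block.items():
        if item_lpn == lpn:
            h_cache[item_lpn] = item_ppn
        else:
            c_cache[item_lpn] = (item_ppn, None)
    return block[lpn]
```

In this sketch a C-Cache entry carries its last access time, so the "time interval of two accesses" test of the description reduces to one subtraction on the second hit.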
The read command module is specifically configured to: before judging whether a target mapping item corresponding to the logical address exists in the H-Cache, establishing a non-compressed prefix complete binary tree of the H-Cache area by a non-compressed prefix complete binary tree construction method, wherein the non-compressed prefix complete binary tree construction method specifically comprises the following steps:
establishing a Block Block as a root node, and setting the prefix of the Block Block to be 0; the number of mapping items stored in the Block Block is 1;
setting the first bit of a logical address of a mapping item needing to be stored in a tree in an H-Cache area to be 0, and storing a first mapping item into the Block Block;
when a new mapping item is to be stored into the Block Block and the Block Block is already full, splitting the Block Block into two sub-nodes, setting the prefixes of the two sub-Block nodes to 00 and 01 respectively, then checking item by item the first two bits of the logical address of each mapping item in the Block node whose original prefix is 0, transferring the mapping items whose first two logical address bits are 00 into the sub-Block node with prefix 00, transferring the mapping items whose first two logical address bits are 01 into the sub-Block node with prefix 01, and finally storing the new mapping item into the corresponding sub-Block node according to the first two bits of its logical address;
after all mapping items needing to be stored in the tree in the H-Cache area are stored in the tree, the tree is completed through virtual nodes to obtain a non-compressed prefix complete binary tree;
the binary tree in the H-Cache is an uncompressed prefix complete binary tree, so that the retrieval efficiency can be improved, the compression and decompression overhead is eliminated, and each node stores a logical address-physical address mapping item.
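Purely as an illustrative sketch, the split-on-full construction described above can be expressed as follows. The capacity of one mapping item per node comes from the description; the bit-string address encoding and the `Node` class are assumptions of the example:

```python
CAPACITY = 1  # the H-Cache tree stores one mapping item per node

class Node:
    """One node of the non-compressed prefix complete binary tree."""
    def __init__(self, prefix):
        self.prefix = prefix   # bit-string prefix, e.g. "0", "00", "01"
        self.entries = {}      # logical address bits -> physical address
        self.children = None   # None for a leaf, else (prefix+"0", prefix+"1")

def insert(node, lpn_bits, ppn):
    """Insert a mapping item; split a full leaf into two sub-nodes."""
    if node.children is not None:
        # Internal node: descend by the next bit after this node's prefix.
        bit = int(lpn_bits[len(node.prefix)])
        return insert(node.children[bit], lpn_bits, ppn)
    if len(node.entries) < CAPACITY:
        node.entries[lpn_bits] = ppn
        return
    # The leaf is full: create sub-nodes with prefixes prefix+"0"/"1",
    # redistribute the resident items, then retry the insert.
    node.children = (Node(node.prefix + "0"), Node(node.prefix + "1"))
    old_entries, node.entries = node.entries, {}
    for bits, p in old_entries.items():
        insert(node, bits, p)
    insert(node, lpn_bits, ppn)
```

The root is created with prefix "0", matching the convention above that the first bit of every stored logical address is 0.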
Referring to FIG. 5, the read command module is further configured to: judge whether a target mapping item corresponding to the logical address exists in the H-Cache through a non-compressed prefix complete binary tree retrieval algorithm, wherein the non-compressed prefix complete binary tree retrieval algorithm specifically comprises the following steps:
s400, receiving a logical address of a target mapping item;
s401, according to the height k of the non-compressed prefix complete binary tree of the H-Cache area, taking the first k bits of the logical address of the target mapping item, and calculating the number of the Block Block in which the target mapping item may exist by using a binary tree Block sequence number rapid calculation algorithm;
s402, searching whether the logical address of the target mapping item is stored in the Block Block, and returning a physical address corresponding to the logical address of the target mapping item when the search result is yes; when the retrieval result is no, S403 is executed;
s403, setting k to k-1 and executing S401 and S402 until k equals 1, then executing S404;
s404, calculating the number of the Block Block in which the target mapping item may exist according to the first 1 bit of the logical address of the target mapping item and the binary tree Block sequence number rapid calculation algorithm, searching whether the logical address of the target mapping item is stored in the Block Block, and returning the physical address corresponding to the logical address of the target mapping item when the search result is yes; when the search result is no, it indicates that the mapping item corresponding to the target logical address does not exist in the non-compressed prefix complete binary tree.
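Steps S400-S404 can be sketched as below. The patent does not spell out the "binary tree Block sequence number rapid calculation algorithm"; the level-order formula used here (1-based index = 2^(k-1) plus the value of the k-bit prefix, which holds for a complete binary tree padded with virtual nodes) is one plausible reading, and the dictionary-of-nodes layout is an assumption of the example:

```python
def node_index(prefix_bits):
    """Assumed form of the 'rapid Block sequence number calculation':
    the 1-based level-order index of the node with the given k-bit prefix."""
    k = len(prefix_bits)
    return (1 << (k - 1)) + int(prefix_bits, 2)

def h_search(tree, lpn_bits, height):
    """S400-S404: probe from the deepest level up toward the root."""
    for k in range(height, 0, -1):
        node = tree.get(node_index(lpn_bits[:k]))  # virtual nodes are absent
        if node is not None and lpn_bits in node:
            return node[lpn_bits]                  # physical address found
    return None                                    # not present in the tree
```

Because the tree is complete, no pointers are chased: each probe is a direct arithmetic jump to a candidate node, which is what makes the sequence-number calculation "rapid".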
The read command module is further configured to: before searching whether a target mapping item corresponding to the logical address exists in the C-Cache, establish a compressed prefix complete binary tree of the C-Cache area by a compressed prefix complete binary tree construction method; and before retrieving the target Mapping item corresponding to the logical address in the Mapping area and outputting the physical address corresponding to the target Mapping item, establish a compressed prefix complete binary tree of the Mapping area by the compressed prefix complete binary tree construction method; the method for constructing the compressed prefix complete binary tree comprises the following steps:
establishing a Block Block as a root node, and setting the prefix of the Block Block to be 0; the number of mapping items stored in the Block Block is M, wherein M > 1;
setting the first bit of a logical address of a mapping item to be stored in a tree to be 0, storing a first mapping item into the Block Block and performing data compression;
decompressing the Block, storing new mapping items into the Block item by item, splitting the Block into two sub-nodes when the number of the mapping items in the Block reaches a set threshold value M, setting the prefixes of the two sub-Block nodes to 00 and 01 respectively, checking the first two bits of the logical address of the mapping items in the Block node with the original prefix of 0 item by item, transferring the mapping items with the first two bits of the logical address being 00 into the sub-Block nodes with the prefix of 00, transferring the mapping items with the first two bits of the logical address being 01 into the sub-Block nodes with the prefix of 01, and finally storing the newly stored mapping items into the corresponding sub-Block nodes according to the first two bits of the logical address;
after all mapping items needing to be stored in the tree are stored in the tree, the tree is completed through virtual nodes to obtain a compressed prefix complete binary tree;
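The compressed-tree construction can be sketched as follows. The zlib/JSON encoding and the capacity M = 4 are stand-in assumptions made for the example; the description only requires that each Block hold M > 1 mapping items in compressed form and be decompressed before modification:

```python
import json
import zlib

M = 4  # assumed node capacity (the description only requires M > 1)

def pack(entries):
    """Compress a node's mapping items for storage at rest."""
    return zlib.compress(json.dumps(entries).encode())

def unpack(blob):
    return json.loads(zlib.decompress(blob).decode())

class CNode:
    """One compressed Block node of the C-Cache / Mapping-area tree."""
    def __init__(self, prefix):
        self.prefix = prefix
        self.blob = pack({})     # items are kept compressed at rest
        self.children = None

def c_insert(node, lpn_bits, ppn):
    if node.children is not None:
        bit = int(lpn_bits[len(node.prefix)])
        return c_insert(node.children[bit], lpn_bits, ppn)
    entries = unpack(node.blob)  # decompress, modify, recompress
    if len(entries) < M:
        entries[lpn_bits] = ppn
        node.blob = pack(entries)
        return
    # Threshold M reached: split into sub-nodes and redistribute.
    node.children = (CNode(node.prefix + "0"), CNode(node.prefix + "1"))
    node.blob = pack({})
    for bits, p in entries.items():
        c_insert(node, bits, p)
    c_insert(node, lpn_bits, ppn)
```

The only structural difference from the H-Cache sketch is that each leaf is decompressed on entry and recompressed on exit, trading CPU time for a higher number of resident mapping items.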
The read command module is further configured to: search whether a target Mapping item corresponding to the logical address exists in the C-Cache through a compressed prefix complete binary tree retrieval algorithm, and search whether a target Mapping item corresponding to the logical address exists in the Mapping area through the compressed prefix complete binary tree retrieval algorithm, wherein the compressed prefix complete binary tree retrieval algorithm specifically comprises the following steps:
s410, receiving a logical address of a target mapping item;
s411, according to the height k of the compressed prefix complete binary tree in the C-Cache area or the Mapping area, taking the first k bits of the logical address of the target Mapping item, and calculating the number of the Block Block in which the target Mapping item may exist by using a binary tree Block sequence number rapid calculation algorithm;
s412, retrieving whether the Block filter of the Block Block contains a storage mark of a target mapping item, when the retrieval result is yes, decompressing the Block Block, returning a physical address corresponding to a logical address of the target mapping item, and then compressing the Block Block; when the retrieval result is no, S413 is executed;
s413, setting k to k-1 and executing S411 and S412 until k equals 1, then executing S414;
s414, calculating the number of the Block Block in which the target mapping item may exist according to the first 1 bit of the logical address of the target mapping item and the binary tree Block sequence number rapid calculation algorithm, searching whether the Block filter of the Block Block contains a storage mark of the target mapping item, and returning the physical address corresponding to the logical address of the target mapping item when the search result is yes; when the search result is no, it indicates that the mapping item corresponding to the target logical address does not exist in the compressed prefix complete binary tree;
the binary tree in the C-Cache and Mapping area is a compressed prefix complete binary tree, each node is a compressed Block, and a plurality of Mapping items with the same prefix as the Block prefix are stored in each Block.
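Steps S410-S414 can be sketched as below. The "Block filter" is interpreted here as a per-Block Bloom-style membership filter; the filter size, hash construction, and node layout are assumptions of the example. The point of the sketch is that a Block is only decompressed after its filter reports a probable hit:

```python
import hashlib
import json
import zlib

FILTER_BITS = 64  # assumed size of each per-Block membership filter

def filter_positions(key, hashes=3):
    """Derive a few bit positions for the key (assumed hash scheme)."""
    digest = hashlib.sha256(key.encode()).digest()
    return [digest[i] % FILTER_BITS for i in range(hashes)]

def filter_add(filt, key):
    for pos in filter_positions(key):
        filt |= 1 << pos
    return filt

def filter_maybe_contains(filt, key):
    """False means definitely absent; True means probably present."""
    return all(filt & (1 << pos) for pos in filter_positions(key))

def c_search(tree, lpn_bits, height):
    """S410-S414: consult the per-Block filter before any decompression."""
    for k in range(height, 0, -1):
        idx = (1 << (k - 1)) + int(lpn_bits[:k], 2)  # rapid index calculation
        node = tree.get(idx)
        if node is None or not filter_maybe_contains(node["filter"], lpn_bits):
            continue                                 # skip without decompressing
        entries = json.loads(zlib.decompress(node["blob"]).decode())
        if lpn_bits in entries:
            return entries[lpn_bits]
    return None
```

A filter of this kind can report false positives (a wasted decompression) but never false negatives, so the search result stays correct.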
In this embodiment, the Cache is divided into two parts, the C-Cache and the H-Cache. The H-Cache part guarantees the high-speed access performance of the system, while the C-Cache part compresses mapping item data so that a limited storage space can hold more mapping items, which raises the mapping item hit rate and improves the operating efficiency of the system. Compared with storing mapping items in a flash memory medium, storing the Mapping area space in an NVM medium with a local (in-place) update characteristic avoids the write amplification caused by out-of-place updates of mapping items and reduces the time overhead of copying and relocating mapping items during such updates; at the same time, the read and write speed of a Mapping area stored in an NVM medium is significantly higher than that of traditional flash memory devices. Because the access speed of a DRAM device is higher than that of other storage devices, frequent data access operations are completed in DRAM, improving system efficiency. The principle of data access locality is fully considered: mapping items are fetched in units of Blocks and backed up to the Cache, which avoids repeatedly searching the Mapping area. Finally, through the partial write-back strategy, when a Block in the C-Cache needs to be migrated out of the Cache, only newly created and updated mapping items are written back to the Mapping area, which reduces the amount of data written back to the NVM device while guaranteeing data consistency and improves the time efficiency of write-back.
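The partial write-back strategy summarized above can be sketched as follows; the representation of a cached Block as a dictionary of (physical address, flag bit) pairs is an assumption of the example:

```python
def evict_block(block, mapping_area):
    """Partial write-back on eviction from the C-Cache: only items whose
    flag bit is 0 (created or updated while cached) are written to the
    NVM Mapping area; items whose flag bit is 1 already have an
    up-to-date copy there and are simply discarded."""
    written = 0
    for lpn, (ppn, flag) in block.items():
        if flag == 0:
            mapping_area[lpn] = ppn  # persist the dirty mapping item
            written += 1
        # flag == 1: clean copy, dropped with the Block below
    block.clear()
    return written
```

Only the dirty subset of the Block reaches the NVM device, which is the source of the reduced write-back volume claimed above.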
The above description is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto; any equivalent substitution or modification made by a person skilled in the art, within the technical scope disclosed by the present invention and according to the technical solutions and inventive concept of the present invention, shall fall within the scope of protection of the present invention.

Claims (8)

1. A caching method for large-scale nonvolatile storage media is characterized by comprising the following steps:
s1, judging that the command request is a target mapping item write command request or a target mapping item read command request according to the user command request, and executing S2 when the command request is judged to be the target mapping item write command request; executing S3 when the command request is judged to be a target mapping item read command request;
s2, setting the tail flag position of the target mapping item to 0, and storing the target mapping item into the H-Cache area according to the logical address prefix;
s3, acquiring a logical address in a target mapping item read command request, retrieving whether a target mapping item corresponding to the logical address exists in the H-Cache, and outputting a physical address corresponding to the target mapping item when the judgment result is yes; if not, searching whether a target mapping item corresponding to the logical address exists in the C-Cache,
when the retrieval result is yes, outputting a physical address corresponding to the target Mapping item, and when the retrieval result is no, retrieving the target Mapping item corresponding to the logical address in the Mapping area and outputting the physical address corresponding to the target Mapping item;
wherein, the Cache is divided into a C-Cache part and an H-Cache part; the Mapping area space is stored by adopting an NVM medium with local updating characteristics;
step S2, specifically including:
s20, setting the tail flag position of the target mapping item to 0;
s21, storing the target mapping item into a non-compressed prefix complete binary tree of the H-Cache area according to the logical address prefix of the target mapping item, judging whether the residual space in the H-Cache area is smaller than a preset threshold value A, and when the judgment result is yes, migrating the original mapping item in the H-Cache area and executing S22; when the judgment result is negative, the operation is not carried out;
s22, migrating the Mapping item that has not been accessed for the longest time in the H-Cache area to the compressed prefix complete binary tree of the C-Cache area, and judging whether the remaining space in the C-Cache area is smaller than a preset threshold value B; when the judgment result is yes, sorting the Block blocks in the C-Cache area according to the number of accesses, and checking item by item whether the flag bit of each Mapping item in the Block Block with the fewest accesses is 0 or 1: if the flag bit is 0, migrating the Mapping item to the Mapping area and setting its flag bit to 1, and if the flag bit is 1, deleting the Mapping item; and when the judgment result is negative, not performing the operation.
2. The mass-nonvolatile-storage-medium-oriented caching method according to claim 1, wherein in step S21, before storing the target mapping item into the non-compressed prefix complete binary tree of the H-Cache area according to the logical address prefix thereof, the method further comprises: establishing a non-compressed prefix complete binary tree of an H-Cache area by a non-compressed prefix complete binary tree construction method, wherein the non-compressed prefix complete binary tree construction method specifically comprises the following steps:
s200, establishing a Block Block as a root node, and setting the prefix of the Block Block to be 0; the number of mapping items stored in the Block Block is 1;
s201, setting the first bit of a logical address of a mapping item needing to be stored in a tree in an H-Cache area to be 0, and storing a first mapping item into the Block Block;
s202, when a new mapping item is to be stored into the Block Block and the Block Block is already full, splitting the Block Block into two sub-nodes, setting the prefixes of the two sub-Block nodes to 00 and 01 respectively, then checking item by item the first two bits of the logical address of each mapping item in the Block node whose original prefix is 0, transferring the mapping items whose first two logical address bits are 00 into the sub-Block node with prefix 00, transferring the mapping items whose first two logical address bits are 01 into the sub-Block node with prefix 01, and finally storing the new mapping item into the corresponding sub-Block node according to the first two bits of its logical address;
and S203, repeating the operation of the step S202 until all mapping items needing to be stored in the tree in the H-Cache area are stored in the tree, and completing the tree through virtual nodes to obtain the non-compressed prefix complete binary tree.
3. The large-scale nonvolatile storage medium-oriented caching method according to claim 1, wherein in step S3, before retrieving whether there is a target mapping entry corresponding to the logical address in the H-Cache, the method includes: establishing a non-compressed prefix complete binary tree of an H-Cache area by a non-compressed prefix complete binary tree construction method, wherein the non-compressed prefix complete binary tree construction method specifically comprises the following steps:
s210, establishing a Block Block as a root node, and setting the prefix of the Block Block to be 0; the number of mapping items stored in the Block Block is 1;
s211, setting the first bit of a logical address of a mapping item needing to be stored in a tree in the H-Cache area to be 0, and storing a first mapping item into the Block Block;
s212, when a new mapping item is to be stored into the Block Block and the Block Block is already full, splitting the Block Block into two sub-nodes, setting the prefixes of the two sub-Block nodes to 00 and 01 respectively, then checking item by item the first two bits of the logical address of each mapping item in the Block node whose original prefix is 0, transferring the mapping items whose first two logical address bits are 00 into the sub-Block node with prefix 00, transferring the mapping items whose first two logical address bits are 01 into the sub-Block node with prefix 01, and finally storing the new mapping item into the corresponding sub-Block node according to the first two bits of its logical address;
s213, repeating the operation of the step S212 until all mapping items needing to be stored in the tree in the H-Cache area are stored in the tree, and completing the tree through virtual nodes to obtain a non-compressed prefix complete binary tree;
in step S3, a non-compressed prefix complete binary tree retrieval algorithm is used to retrieve whether there is a target mapping item corresponding to the logical address in the H-Cache, where the non-compressed prefix complete binary tree retrieval algorithm specifically includes:
s300, receiving a logical address of a target mapping item;
s301, according to the height k of the non-compressed prefix complete binary tree of the H-Cache area, taking the first k bits of the logical address of the target mapping item, and calculating the number of the Block Block in which the target mapping item may exist by using a binary tree Block sequence number rapid calculation algorithm;
s302, searching whether the logical address of the target mapping item is stored in the Block Block, and returning a physical address corresponding to the logical address of the target mapping item when the search result is yes; when the retrieval result is no, executing S303;
s303, setting k to k-1 and executing S301 and S302 until k equals 1, then executing S304;
s304, calculating the number of the Block Block in which the target mapping item may exist according to the first 1 bit of the logical address of the target mapping item and the binary tree Block sequence number rapid calculation algorithm, searching whether the logical address of the target mapping item is stored in the Block Block, and returning the physical address corresponding to the logical address of the target mapping item when the search result is yes; when the search result is no, it indicates that the mapping item corresponding to the target logical address does not exist in the non-compressed prefix complete binary tree;
in step S3, before the retrieving whether there is a target mapping entry corresponding to the logical address in the C-Cache, the method further includes: establishing a compressed prefix complete binary tree of the C-Cache area by a compressed prefix complete binary tree construction method; before retrieving the target Mapping entry corresponding to the logical address in the Mapping region and outputting the physical address corresponding to the target Mapping entry, the method further includes: establishing a compressed prefix complete binary tree of a Mapping region by a compressed prefix complete binary tree construction method; the method for constructing the compressed prefix complete binary tree specifically comprises the following steps:
s310, establishing a Block Block as a root node, and setting the prefix of the Block Block to be 0; the number of mapping items stored in the Block Block is M, wherein M > 1;
s311, setting the first bit of the logical address of the mapping item to be stored in the tree to be 0, storing the first mapping item into the Block Block and performing data compression;
s312, decompressing the Block Block, storing new mapping items into the Block Block item by item, splitting the Block Block into two sub-nodes when the number of the mapping items in the Block Block reaches a set threshold value M, setting the prefixes of the two sub-Block nodes to 00 and 01 respectively, checking the first two bits of the logical address of the mapping items in the Block node with the original prefix of 0 item by item, transferring the mapping items with the first two bits of the logical address of 00 into the sub-Block node with the prefix of 00, transferring the mapping items with the first two bits of the logical address of 01 into the sub-Block node with the prefix of 01, and finally storing the newly stored mapping items into the corresponding sub-Block nodes according to the first two bits of the logical address;
s313, repeating the operation of the step S312 until all mapping items needing to be stored in the tree are stored in the tree, and completing the tree through the virtual nodes to obtain a compressed prefix complete binary tree;
in step S3, a compressed prefix complete binary tree retrieval algorithm is used to retrieve whether a target Mapping item corresponding to the logical address exists in the C-Cache, and a compressed prefix complete binary tree retrieval algorithm is used to retrieve whether a target Mapping item corresponding to the logical address exists in the Mapping area, where the compressed prefix complete binary tree retrieval algorithm specifically includes:
s320, receiving a logical address of the target mapping item;
s321, according to the height k of the compressed prefix complete binary tree in the C-Cache area or the Mapping area, taking the first k bits of the logical address of the target Mapping item, and calculating the number of the Block Block in which the target Mapping item may exist by using a binary tree Block sequence number rapid calculation algorithm;
s322, searching whether the Block filter of the Block Block contains a storage mark of a target mapping item, when the search result is yes, decompressing the Block Block, returning a physical address corresponding to the logical address of the target mapping item, and then compressing the Block Block; when the retrieval result is no, executing S323;
s323, setting k to k-1 and executing S321 and S322 until k equals 1, then executing S324;
s324, calculating the number of the Block Block in which the target mapping item may exist according to the first 1 bit of the logical address of the target mapping item and the binary tree Block sequence number rapid calculation algorithm, searching whether the Block filter of the Block Block contains a storage mark of the target mapping item, and returning the physical address corresponding to the logical address of the target mapping item when the search result is yes; when the search result is no, it indicates that the mapping item corresponding to the target logical address does not exist in the compressed prefix complete binary tree.
4. The caching method for the large-scale nonvolatile storage medium according to claim 1, wherein in step S3, the retrieving whether a target mapping item corresponding to the logical address exists in the C-Cache, and when the retrieval result is yes, outputting a physical address corresponding to the target mapping item specifically includes: recording the retrieval time, and transferring the target mapping item to an H-Cache area when the target mapping item corresponding to the logical address is retrieved again within the preset time;
in step S3, in the process of retrieving the target Mapping entry corresponding to the logical address in the Mapping area and outputting the physical address corresponding to the target Mapping entry, the method further includes: and searching a Block Block where a target Mapping item corresponding to the logical address is located in a Mapping area, backing up the target Mapping item in the Block Block to an H-Cache area, and backing up other Mapping items in the Block Block to a C-Cache area.
5. A mass non-volatile storage media oriented caching system, comprising:
the judging module is used for judging that the user command request is a target mapping item writing command request or a target mapping item reading command request according to the user command request;
the write command module is used for setting the tail flag bit of the target mapping item to be 0 and then storing the target mapping item into the H-Cache area according to the logical address prefix when the judging module judges that the command request is the target mapping item write command request;
the read command module is used for acquiring a logical address in the target mapping item read command request when the judging module judges that the command request is the target mapping item read command request, retrieving whether a target mapping item corresponding to the logical address exists in the H-Cache, and outputting a physical address corresponding to the target mapping item when the judging result is yes; if not, searching whether a target mapping item corresponding to the logical address exists in the C-Cache,
when the retrieval result is yes, outputting a physical address corresponding to the target Mapping item, and when the retrieval result is no, retrieving the target Mapping item corresponding to the logical address in the Mapping area and outputting the physical address corresponding to the target Mapping item;
the write command module includes: a marking submodule, a judging submodule, a processing submodule and a writing submodule;
the marking submodule is used for setting the tail flag bit of the target mapping item to 0;
the judgment submodule is used for judging whether the residual space in the H-Cache area is smaller than a preset threshold A or not;
the processing submodule is used for, when the judgment result of the judgment submodule is yes, storing the target Mapping item into the non-compressed prefix complete binary tree of the H-Cache area according to its logical address prefix, migrating the original Mapping items in the H-Cache area by migrating the original Mapping item that has not been accessed for the longest time into the C-Cache area, and judging whether the remaining space in the C-Cache area is smaller than a preset threshold value B; when the judgment result is yes, sorting the Block blocks in the C-Cache area according to the number of accesses and checking item by item whether the flag bit of each Mapping item in the Block with the fewest accesses is 0 or 1: if the flag bit is 0, writing the Mapping item into the Mapping area and setting its flag bit to 1, and if the flag bit is 1, deleting the Mapping item; and when the judgment result is negative, performing no operation;
and the writing sub-module is used for storing the target mapping item into the non-compressed prefix complete binary tree of the H-Cache area according to the logical address prefix when the judgment result of the judgment sub-module is negative.
6. The mass-nonvolatile-storage-medium-oriented caching system of claim 5, wherein the write command module is further configured to: before retrieving whether a target mapping item corresponding to the logical address exists in the H-Cache, establishing a non-compressed prefix complete binary tree of the H-Cache area by a non-compressed prefix complete binary tree construction method, wherein the non-compressed prefix complete binary tree construction method specifically comprises the following steps:
establishing a Block Block as a root node, and setting the prefix of the Block Block to be 0; the number of mapping items stored in the Block Block is 1;
setting the first bit of a logical address of a mapping item needing to be stored in a tree in an H-Cache area to be 0, and storing a first mapping item into the Block Block;
when a new mapping item is to be stored into the Block Block and the Block Block is already full, splitting the Block Block into two sub-nodes, setting the prefixes of the two sub-Block nodes to 00 and 01 respectively, then checking item by item the first two bits of the logical address of each mapping item in the Block node whose original prefix is 0, transferring the mapping items whose first two logical address bits are 00 into the sub-Block node with prefix 00, transferring the mapping items whose first two logical address bits are 01 into the sub-Block node with prefix 01, and finally storing the new mapping item into the corresponding sub-Block node according to the first two bits of its logical address;
and after all mapping items needing to be stored in the tree in the H-Cache area are stored in the tree, completing the tree through virtual nodes to obtain the non-compressed prefix complete binary tree.
7. The mass-nonvolatile-storage-medium-oriented cache system of claim 5, wherein the read command module is specifically configured to: before retrieving whether a target mapping item corresponding to the logical address exists in the H-Cache, establishing a non-compressed prefix complete binary tree of the H-Cache area by a non-compressed prefix complete binary tree construction method, wherein the non-compressed prefix complete binary tree construction method specifically comprises the following steps:
establishing a Block Block as a root node, and setting the prefix of the Block Block to be 0; the number of mapping items stored in the Block Block is 1;
setting the first bit of a logical address of a mapping item needing to be stored in a tree in an H-Cache area to be 0, and storing a first mapping item into the Block Block;
when a new mapping item is to be stored into the Block Block and the Block Block is already full, splitting the Block Block into two sub-nodes, setting the prefixes of the two sub-Block nodes to 00 and 01 respectively, then checking item by item the first two bits of the logical address of each mapping item in the Block node whose original prefix is 0, transferring the mapping items whose first two logical address bits are 00 into the sub-Block node with prefix 00, transferring the mapping items whose first two logical address bits are 01 into the sub-Block node with prefix 01, and finally storing the new mapping item into the corresponding sub-Block node according to the first two bits of its logical address;
after all mapping items needing to be stored in the tree in the H-Cache area are stored in the tree, the tree is completed through virtual nodes to obtain a non-compressed prefix complete binary tree;
and searching whether a target mapping item corresponding to the logical address exists in the H-Cache through a non-compressed prefix complete binary tree searching algorithm, wherein the non-compressed prefix complete binary tree searching algorithm specifically comprises the following steps:
S400, receiving the logical address of the target mapping item;
S401, according to the height k of the non-compressed prefix complete binary tree of the H-Cache area, taking the first k bits of the logical address of the target mapping item, and calculating the number of the Block in which the target mapping item may exist by using a binary tree Block sequence number rapid calculation algorithm;
S402, retrieving whether the logical address of the target mapping item is stored in the Block; when the retrieval result is yes, returning the physical address corresponding to the logical address of the target mapping item; when the retrieval result is no, executing S403;
S403, setting k to k-1 and repeating S401 and S402 until k equals 1, then executing S404;
S404, calculating the number of the Block in which the target mapping item may exist according to the first bit of the logical address of the target mapping item and the binary tree Block sequence number rapid calculation algorithm, and retrieving whether the logical address of the target mapping item is stored in the Block; when the retrieval result is yes, returning the physical address corresponding to the logical address of the target mapping item; when the retrieval result is no, the logical address of the target mapping item does not exist in the non-compressed prefix complete binary tree;
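Steps S400-S404 amount to trying progressively shorter prefixes of the logical address, from the tree height down to one bit. A plausible reading of the "rapid calculation" is the level-order (heap) numbering of a complete binary tree: level k holds 2^(k-1) nodes, so a k-bit prefix maps directly to a Block number. The sketch below assumes that numbering and illustrative names (`blocks`, `search`); it is not the patented algorithm itself.

```python
# Hedged sketch of the S400-S404 search loop. `blocks` maps each Block's
# bit-string prefix to its {logical: physical} contents.

def block_number(prefix_bits):
    """Level-order index of the Block with prefix `prefix_bits`.

    Level k holds 2**(k-1) nodes; because every prefix starts with '0',
    the prefix's integer value indexes the node within its level.
    """
    k = len(prefix_bits)
    return (1 << (k - 1)) + int(prefix_bits, 2)

def search(blocks, height, logical_bits):
    for k in range(height, 0, -1):            # S403: k = k - 1 each round
        prefix = logical_bits[:k]             # S401: first k bits
        blk = blocks.get(prefix)              # candidate Block at level k
        if blk is not None and logical_bits in blk:
            return blk[logical_bits]          # S402: hit, return physical
    return None                               # S404 miss: not in the tree
```

For example, `block_number('01')` is 3 (second node of level 2), consistent with a root numbered 1 and children numbered left to right.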
before searching whether a target mapping item corresponding to the logical address exists in the C-Cache, establishing a compressed prefix complete binary tree of the C-Cache area by a compressed prefix complete binary tree construction method; before retrieving a target Mapping item corresponding to the logical address in a Mapping area and outputting a physical address corresponding to the target Mapping item, establishing a compressed prefix complete binary tree of the Mapping area by a compressed prefix complete binary tree construction method; the method for constructing the compressed prefix complete binary tree comprises the following steps:
establishing a Block as the root node, and setting the prefix of the Block to 0, wherein the maximum number of mapping items stored in each Block is M, where M > 1;
setting the first bit of the logical address of each mapping item to be stored in the tree to 0, storing the first mapping item into the root Block, and compressing the data;
decompressing a Block before storing new mapping items into it item by item; when the number of mapping items in the Block reaches the set threshold M, splitting the Block into two child nodes; for the Block node whose original prefix is 0, setting the prefixes of its two child Block nodes to 00 and 01 respectively, checking the first two bits of the logical address of each mapping item in the Block node item by item, transferring the mapping items whose first two bits are 00 into the child Block node with prefix 00 and the mapping items whose first two bits are 01 into the child Block node with prefix 01, and finally storing the new mapping item into the corresponding child Block node according to the first two bits of its logical address;
after all mapping items to be stored in the tree have been stored, completing the tree with virtual nodes to obtain the compressed prefix complete binary tree;
searching whether a target Mapping item corresponding to the logical address exists in the C-Cache through a compressed prefix complete binary tree search algorithm, and searching whether a target Mapping item corresponding to the logical address exists in a Mapping area through the compressed prefix complete binary tree search algorithm, wherein the compressed prefix complete binary tree search algorithm specifically comprises the following steps:
S410, receiving the logical address of the target mapping item;
S411, according to the height k of the compressed prefix complete binary tree of the C-Cache area or the Mapping area, taking the first k bits of the logical address of the target mapping item, and calculating the number of the Block in which the target mapping item may exist by using a binary tree Block sequence number rapid calculation algorithm;
S412, retrieving whether the Block filter of the Block contains a storage mark of the target mapping item; when the retrieval result is yes, decompressing the Block, returning the physical address corresponding to the logical address of the target mapping item, and then re-compressing the Block; when the retrieval result is no, executing S413;
S413, setting k to k-1 and repeating S411 and S412 until k equals 1, then executing S414;
S414, calculating the number of the Block in which the target mapping item may exist according to the first bit of the logical address of the target mapping item and the binary tree Block sequence number rapid calculation algorithm, and retrieving whether the Block filter of the Block contains a storage mark of the target mapping item; when the retrieval result is yes, returning the physical address corresponding to the logical address of the target mapping item; when the retrieval result is no, the logical address of the target mapping item does not exist in the compressed prefix complete binary tree.
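The point of the Block filter in S412 is that a definite miss can be rejected without paying the decompression cost. The claim does not specify the filter or compressor; the sketch below assumes a small Bloom-style bit array and zlib-compressed JSON purely for illustration, with all class and method names (`FilterBits`, `CompressedBlock`, `lookup`) invented here.

```python
import hashlib
import json
import zlib

# Sketch of a compressed Block with a membership filter: the filter is
# consulted first (S412), and the payload is decompressed only on a
# possible hit. Filter and compressor choices are assumptions.

class FilterBits:
    """Tiny Bloom-style filter: no false negatives, rare false positives."""
    def __init__(self, nbits=64, nhash=2):
        self.bits, self.nbits, self.nhash = 0, nbits, nhash

    def _positions(self, key):
        for i in range(self.nhash):
            h = hashlib.blake2b(f"{i}:{key}".encode(), digest_size=4)
            yield int.from_bytes(h.digest(), "big") % self.nbits

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        return all(self.bits >> p & 1 for p in self._positions(key))

class CompressedBlock:
    def __init__(self, items):
        # items: {logical_address: physical_address}
        self.payload = zlib.compress(json.dumps(items).encode())
        self.filter = FilterBits()
        for lba in items:
            self.filter.add(lba)   # record a storage mark per mapping item

    def lookup(self, lba):
        if not self.filter.might_contain(lba):
            return None                                  # miss, no decompress
        items = json.loads(zlib.decompress(self.payload))  # decompress Block
        return items.get(lba)      # physical address, or None on false positive
```

A filter false positive costs one needless decompression but never returns a wrong address, since the decompressed contents are checked before answering.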
8. The mass-nonvolatile-storage-medium-oriented cache system of claim 5, wherein the read command module is further configured to: while retrieving whether a target mapping item corresponding to the logical address exists in the C-Cache and, when the retrieval result is yes, outputting the physical address corresponding to the target mapping item, recording the retrieval time; and when the target mapping item corresponding to the logical address is retrieved again within a preset time, transferring the target mapping item to the H-Cache area;
in the process of retrieving the target mapping item corresponding to the logical address in the Mapping area and outputting the physical address corresponding to the target mapping item, retrieving the Block in which the target mapping item is located in the Mapping area, backing up the target mapping item in the Block to the H-Cache area, and backing up the other mapping items in the Block to the C-Cache area.
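The promotion rule of claim 8 (a repeated C-Cache hit within a preset time moves the item to the H-Cache) can be modeled with a per-item timestamp. The function name, dictionary layout, and the window value below are all assumptions for illustration; the claim leaves the preset time unspecified.

```python
import time

# Illustrative model of claim 8's hot-promotion rule: a C-Cache hit is
# timestamped, and a second hit on the same item within PROMOTE_WINDOW
# seconds transfers it to the H-Cache. All names/values are assumed.

PROMOTE_WINDOW = 5.0  # hypothetical "preset time"

def c_cache_hit(item, h_cache, last_hit, now=None):
    """Record a C-Cache hit; promote the item on a timely repeat hit.

    item:     {'lba': logical address, 'pba': physical address}
    h_cache:  H-Cache area, logical -> physical
    last_hit: logical -> timestamp of the previous C-Cache hit
    """
    now = time.monotonic() if now is None else now
    lba = item['lba']
    if lba in last_hit and now - last_hit[lba] <= PROMOTE_WINDOW:
        h_cache[lba] = item['pba']   # transfer to the H-Cache area
    last_hit[lba] = now
```

A first hit only records the time; only the second hit inside the window performs the transfer, keeping one-off reads out of the uncompressed (and thus more expensive) H-Cache.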
CN201710141173.9A 2017-03-10 2017-03-10 Caching method and system for large-scale nonvolatile storage medium Active CN106776361B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710141173.9A CN106776361B (en) 2017-03-10 2017-03-10 Caching method and system for large-scale nonvolatile storage medium


Publications (2)

Publication Number Publication Date
CN106776361A CN106776361A (en) 2017-05-31
CN106776361B true CN106776361B (en) 2020-07-10

Family

ID=58961930

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710141173.9A Active CN106776361B (en) 2017-03-10 2017-03-10 Caching method and system for large-scale nonvolatile storage medium

Country Status (1)

Country Link
CN (1) CN106776361B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110019985B (en) * 2017-12-29 2021-09-24 阿里巴巴(中国)有限公司 Index file establishing and inquiring methods and devices
CN110377233B (en) * 2019-07-22 2022-03-29 深圳忆联信息系统有限公司 SSD (solid State disk) reading performance optimization method and device, computer equipment and storage medium
CN113792171B (en) * 2021-11-15 2022-02-18 西安热工研究院有限公司 Image retrieval method, system, equipment and storage medium based on memory management
CN116886719B (en) * 2023-09-05 2024-01-23 苏州浪潮智能科技有限公司 Data processing method and device of storage system, equipment and medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7069390B2 (en) * 2003-09-04 2006-06-27 International Business Machines Corporation Implementation of a pseudo-LRU algorithm in a partitioned cache
CN103019963A (en) * 2012-12-31 2013-04-03 华为技术有限公司 Cache mapping method and storage device
CN103514249A (en) * 2013-06-20 2014-01-15 易乐天 Method and system for automatic data reduction and storage device
WO2016065610A1 (en) * 2014-10-31 2016-05-06 华为技术有限公司 Method for accessing files, distributed storage system and storage node
CN105786721A (en) * 2014-12-25 2016-07-20 研祥智能科技股份有限公司 Memory address mapping management method and processor
CN105786717A (en) * 2016-03-22 2016-07-20 华中科技大学 DRAM (dynamic random access memory)-NVM (non-volatile memory) hierarchical heterogeneous memory access method and system adopting software and hardware collaborative management
CN105930356A (en) * 2016-04-08 2016-09-07 上海交通大学 Method for implementing log type heterogeneous hybrid memory file system
CN105975215A (en) * 2016-05-25 2016-09-28 深圳大学 STL mapping table management method based on Ondemand algorithm
CN106055679A (en) * 2016-06-02 2016-10-26 南京航空航天大学 Multi-level cache sensitive indexing method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9128845B2 (en) * 2012-07-30 2015-09-08 Hewlett-Packard Development Company, L.P. Dynamically partition a volatile memory for a cache and a memory partition
GB2507758A (en) * 2012-11-08 2014-05-14 Ibm Cache hierarchy with first and second level instruction and data caches and a third level unified cache
US20150205724A1 (en) * 2014-01-20 2015-07-23 Honeywell International Inc. System and method of cache partitioning for processors with limited cached memory pools


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An Optimized Flash Address Mapping Method; Zhang Qi et al.; Journal of Software; 2014-02-28; Vol. 25, No. 2; pp. 315-324 *
Research on Allocation and Management of Shared Memory Resources in Multi-core Systems; Gao Ke et al.; Chinese Journal of Computers; 2015-05-31; Vol. 38, No. 5; pp. 1020-1031 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant