CN101788887A - System and method of I/O cache stream based on database in disk array
- Publication number
- CN101788887A (application CN201010108204A)
- Authority: CN (China)
- Prior art keywords
- read
- data
- request
- management module
- cache management
- Prior art date
- 2010-02-05
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The invention provides a method and a system for a database-based I/O cache stream in a disk array. In the method, a cache management module receives the I/O request generated by a database operation and searches its cache for the corresponding data. If the data is found, it is returned to the database; otherwise the cache management module directs an underlying redundant array of independent disks (RAID) module to read the data according to the I/O request, caches the data read by the underlying RAID module, and returns the read data to the database. The cache management module also sends the descriptor of the received I/O request to a read-ahead module; after receiving the descriptor, the read-ahead module prefetches data from the underlying RAID module according to a preset read-ahead strategy and caches the prefetched data in the cache management module. The method improves the query response speed of the database server.
Description
Technical field
The present invention relates to an I/O cache stream technique in a disk array, generally applied to database applications built on disk arrays.
Background art
With the continuous development of network applications and e-commerce, the traffic of every website keeps increasing and database sizes grow accordingly, so the performance problems of database systems become more and more prominent: if user commands are processed too slowly, normal use is severely affected.
Typical enterprise database applications are built on disk arrays. Providing a scheme within the disk array that, when applied to a database application system, improves the performance of the database system is therefore a challenge posed by the sharp growth in workload.
Summary of the invention
The technical problem to be solved by the present invention is to propose a method and a system for a database-based I/O cache stream in a disk array that can increase the query response speed of a database server and improve the service response time.
To solve the above technical problem, the invention provides a system for a database-based I/O cache stream in a disk array, comprising an interface management module, a cache management module, a read-ahead module and an underlying redundant array of independent disks (RAID) module. The interface management module is connected to the cache management module and to the read-ahead module; the cache management module is connected to the database, to the read-ahead module and to the underlying RAID module; the read-ahead module is also connected to the underlying RAID module. In this system:
The interface management module is used to configure one or more of the cache capacity of the cache management module, the size of the read-ahead window of the read-ahead module, and the maximum read-ahead size;
The cache management module is used to receive the I/O request generated by a database operation and search the cache management module for the data corresponding to the I/O request; if the data is found, it is returned to the database; if not, the underlying RAID module is directed to read the data according to the I/O request, the data read by the underlying RAID module is cached, and the read data is returned to the database. The cache management module also sends the descriptor of the received I/O request to the read-ahead module, and receives and caches the prefetched data sent by the read-ahead module;
The read-ahead module is used, after receiving the descriptor of the I/O request from the cache management module, to prefetch data from the underlying RAID module according to a preset read-ahead strategy and send the prefetched data to the cache management module;
The underlying RAID module is used to process the database's I/O request under the control of the cache management module and return the read data to the cache management module.
Further, the above system may also have the following feature:
The preset read-ahead strategy comprises: the read-ahead module maintains a history; after receiving the descriptor of an I/O request from the cache management module, it saves the descriptor into the history and performs pattern recognition on the descriptor of the I/O request together with at least its preceding history record; if the data of the I/O request and of at least its preceding history record can form a sequential data stream or a backward data stream, data is prefetched from the underlying RAID module according to the identified pattern.
Further, the above system may also have the following feature:
When prefetching, the read-ahead module first identifies the start position and offset of the data corresponding to the I/O request from the descriptor of the received I/O request, then derives the start position of the prefetched data from the identified pattern, and finally prefetches from that start position according to the identified pattern.
Further, the above system may also have the following feature:
When searching for data according to an I/O request, the cache management module does so after the read-ahead module has identified the start position and offset of the data corresponding to the I/O request, searching the cache management module by that start position and offset.
Further, the above system may also have the following feature:
The interface management module belongs to the application layer, while the cache management module, the read-ahead module and the underlying RAID module belong to the kernel layer.
To solve the above technical problem, the invention also proposes a method for a database-based I/O cache stream in a disk array, comprising:
a cache management module receives the I/O request generated by a database operation and searches the cache management module according to the received I/O request; if the corresponding data is found, it is returned to the database; otherwise an underlying redundant array of independent disks (RAID) module is directed to read the data according to the I/O request, the data read by the underlying RAID module is cached, and the read data is returned to the database; and
the cache management module sends the descriptor of the received I/O request to a read-ahead module; after receiving the descriptor of the I/O request, the read-ahead module prefetches data from the underlying RAID module according to a preset read-ahead strategy and caches the prefetched data in the cache management module.
Further, the above method may also have the following feature:
The preset read-ahead strategy comprises: the read-ahead module maintains a history; after receiving the descriptor of an I/O request from the cache management module, it saves the descriptor into the history and performs pattern recognition on the descriptor of the I/O request together with at least its preceding history record; if the data of the I/O request and of at least its preceding history record can form a sequential data stream or a backward data stream, data is prefetched from the underlying RAID module according to the identified pattern.
Further, the above method may also have the following feature:
When prefetching, the read-ahead module first identifies the start position and offset of the data corresponding to the I/O request from the descriptor of the received I/O request, then derives the start position of the prefetched data from the identified pattern, and finally prefetches from that start position according to the identified pattern.
Further, the above method may also have the following feature:
When searching for data according to an I/O request, the cache management module does so after the read-ahead module has identified the start position and offset of the data corresponding to the I/O request, searching the cache management module by that start position and offset.
Further, the above method may also have the following feature:
One or more of the cache capacity of the cache management module, the size of the read-ahead window of the read-ahead module, and the maximum read-ahead size are adjusted according to the user's configuration.
The method and system for a database-based I/O cache stream in a disk array proposed by the present invention can increase the query response speed of a database server, improve the service response time, and significantly improve system performance, thereby meeting the response-time challenge posed by an ever-growing number of users.
Description of drawings
Fig. 1 is a block diagram of a system for a database-based I/O cache stream in a disk array according to an embodiment of the invention;
Fig. 2 is a flowchart of a method for a database-based I/O cache stream in a disk array according to an embodiment of the invention.
Embodiment
When a database stores and manages data and performs data operations such as queries and modifications, it generates corresponding I/O requests. The database may typically be Oracle, DB2, SQL Server or the like. To speed up the response to the I/O requests generated by database operations, the embodiments of the invention provide a database-based I/O cache stream system and method, whose basic idea is: using an effective read-ahead strategy, prefetch and cache the data that is likely to be read, so that when an I/O request arrives it can be answered quickly from the cache and fed back to the application layer.
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, the figure shows a system for a database-based I/O cache stream in a disk array according to an embodiment of the invention, comprising an interface management module, a cache management module, a read-ahead module, and an underlying redundant array of independent disks (RAID) module, wherein:
The interface management module is connected to the cache management module and to the read-ahead module, and is used to configure one or more of the cache capacity of the cache management module, the size of the read-ahead window of the read-ahead module, and the maximum read-ahead size.
The cache management module is connected to the read-ahead module and to the underlying RAID module. It receives the I/O requests generated by database operations and searches the cache management module for the data corresponding to each I/O request; if the data is found, it is returned to the database; if not, the underlying RAID module is directed to read the data corresponding to the I/O request, the data read by the underlying RAID module is cached, and that data is returned to the database. The cache management module also sends the descriptor of each received I/O request to the read-ahead module, and receives and caches the prefetched data sent by the read-ahead module.
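The hit-or-miss path just described can be summarized in a short sketch. This is a minimal illustration, not the patented implementation: the names (CacheManager, handle_read, the raid and prefetcher objects), the exact-match (start, length) cache keying and the FIFO eviction are all assumptions made here.

```python
class CacheManager:
    """Illustrative cache management module (hypothetical, assumed API)."""

    def __init__(self, raid, prefetcher, capacity=1024):
        self.raid = raid              # underlying RAID module
        self.prefetcher = prefetcher  # read-ahead module
        self.capacity = capacity      # cache capacity, user-configurable
        self.cache = {}               # (start, length) -> data

    def handle_read(self, start, length):
        # Forward the request descriptor to the read-ahead module in all cases.
        self.prefetcher.observe(start, length)
        data = self.cache.get((start, length))
        if data is not None:                  # cache hit: answer from memory
            return data
        data = self.raid.read(start, length)  # cache miss: read from the RAID layer
        self.store(start, length, data)       # cache what was read
        return data

    def store(self, start, length, data):
        if len(self.cache) >= self.capacity:  # naive FIFO eviction for the sketch;
            self.cache.pop(next(iter(self.cache)))  # the patent fixes no policy
        self.cache[(start, length)] = data
```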
The read-ahead module is connected to the cache management module and to the underlying RAID module. After receiving the descriptor of an I/O request from the cache management module, it prefetches data according to a preset read-ahead strategy and sends the prefetched data to the cache management module.
The preset read-ahead strategy comprises: the read-ahead module maintains a history; after receiving the descriptor of an I/O request from the cache management module, it saves the descriptor into the history and performs pattern recognition on the descriptor of the I/O request together with at least its preceding history record; if the data of the I/O request and of at least its preceding history record can form a sequential data stream or a backward data stream, data is prefetched from the underlying RAID module according to the identified pattern. For example, if the current I/O request reads data 40-49 and its preceding history record reads data 30-39, the two reads are 30-39 and then 40-49, which form a sequential data stream; the next read is then likely to be a sequential segment starting at 50, so such a segment can be prefetched and saved in the cache management module. If the next I/O request does read data 50-59, it can be found directly in the cache management module without any underlying I/O operation, which speeds up the data lookup. As another example, if the current I/O request reads data 39-30 and its preceding history record reads data 49-40, the two reads are 49-40 and then 39-30, which form a backward data stream; the next read is then likely to be a segment running downward from 29, so such a segment can be prefetched and saved in the cache management module. If the next I/O request does read data 29-20, it can be found directly in the cache management module without any underlying I/O operation, which speeds up the data lookup.
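A minimal sketch of this strategy follows, assuming each descriptor is a (start, length) pair and comparing only the newest request with its one preceding record; the patent allows pattern recognition over at least the preceding record, so a longer history would work the same way.

```python
def detect_stream(history):
    """Classify the newest request against the previous one.
    history: list of (start, length) descriptors, oldest first."""
    if len(history) < 2:
        return None
    (prev_start, prev_len), (cur_start, cur_len) = history[-2], history[-1]
    if cur_start == prev_start + prev_len:
        return "sequential"   # e.g. 30-39 followed by 40-49
    if cur_start + cur_len == prev_start:
        return "backward"     # e.g. 49-40 followed by 39-30
    return None

def prefetch_range(history, window):
    """Return the (start, length) to prefetch, or None if no stream is found."""
    pattern = detect_stream(history)
    start, length = history[-1]
    if pattern == "sequential":
        return (start + length, window)   # continue forward, e.g. from 50
    if pattern == "backward":
        lo = max(0, start - window)
        return (lo, start - lo)           # continue downward, e.g. toward 20
    return None
```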
When prefetching, the read-ahead module reads a data length equal to the size of the read-ahead window configured by the interface management module, and the size of the read-ahead window does not exceed the maximum read-ahead size configured by the interface management module.
When prefetching, the read-ahead module first identifies the start position and offset of the data corresponding to the I/O request from the descriptor of the received I/O request, then derives the start position of the prefetched data from the identified pattern, and prefetches from that start position according to the identified pattern. When the cache management module searches for data according to an I/O request, it can do so after the read-ahead module has identified the start position and offset of the data corresponding to the I/O request, searching the cache management module by that start position and offset.
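As a sketch of this lookup path (the descriptor field names "start" and "offset" are assumed for illustration; the patent only says these two values are identified from the descriptor):

```python
def parse_descriptor(desc):
    """Identify the start position and offset of the requested data."""
    return desc["start"], desc["offset"]

def cache_lookup(cache, desc):
    """Search the cache by the (start, offset) pair identified above."""
    start, offset = parse_descriptor(desc)
    return cache.get((start, offset))   # data on a hit, None on a miss
```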
The underlying RAID module processes the database's I/O requests under the control of the cache management module and returns the read data to the cache management module.
The interface management module and the database belong to the application layer; the cache management module, the read-ahead module and the underlying RAID module belong to the kernel layer.
Referring to Fig. 2, the figure shows a method for a database-based I/O cache stream in a disk array according to an embodiment of the invention, comprising the following steps:
Step S201: the database performs an operation and generates a corresponding I/O request;
The I/O request is produced by the block device layer of the operating system when the database performs an operation such as a query or a modification;
Step S202: a cache management module receives the I/O request generated by the database operation and first searches the cache management module according to the I/O request; if the corresponding data is found, it is returned to the database; otherwise the underlying RAID module is directed to read the data according to the I/O request, the data read by the underlying RAID module is cached, and the read data is returned to the database;
Step S203: the cache management module sends the descriptor of the received I/O request to a read-ahead module; after receiving the descriptor of the I/O request, the read-ahead module prefetches data from the underlying RAID module according to a preset read-ahead strategy and caches the prefetched data in the cache management module.
The preset read-ahead strategy comprises: the read-ahead module maintains a history; after receiving the descriptor of an I/O request from the cache management module, it saves the descriptor into the history and performs pattern recognition on the descriptor of the I/O request together with at least its preceding history record; if the data of the I/O request and of at least its preceding history record can form a sequential data stream or a backward data stream, data is prefetched from the underlying RAID module according to the identified pattern.
For example, if the current I/O request reads data 40-49 and its preceding history record reads data 30-39, the two reads are 30-39 and then 40-49, which form a sequential data stream; the next read is then likely to be a sequential segment starting at 50, so such a segment can be prefetched and saved in the cache management module. If the next I/O request looks up data 50-59, it can be found directly in the cache management module without any underlying operation, which speeds up the data lookup. When prefetching, the data length read is the size of the read-ahead window configured by the interface management module, and this window does not exceed the configured maximum read-ahead size.
As another example, if the current I/O request reads data 39-30 and its preceding history record reads data 49-40, the two reads are 49-40 and then 39-30, which form a backward data stream; the next read is then likely to be a segment running downward from 29, so such a segment can be prefetched and saved in the cache management module. If the next I/O request reads data 29-20, it can be found directly in the cache management module without any underlying I/O operation, which speeds up the data lookup.
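Running the two worked examples through the prefetch_range sketch given earlier (request sizes of 10 are assumed throughout):

```python
# Sequential stream: 30-39 then 40-49 -> prefetch 50-59.
print(prefetch_range([(30, 10), (40, 10)], 10))   # (50, 10)
# Backward stream: 49-40 then 39-30 -> prefetch 29-20.
print(prefetch_range([(40, 10), (30, 10)], 10))   # (20, 10)
```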
One or more of the cache capacity of the cache management module, the size of the read-ahead window of the read-ahead module, and the maximum read-ahead size can be adjusted according to the user's configuration. For example, the read-ahead window can be enlarged when prefetching (the default setting is 4,000,000), so that the data a database application will need in the following period is already buffered; the application then obtains its data directly from the buffer without waiting on disk I/O, which can greatly improve the performance of the database system.
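The configurable parameters described here could be grouped as below. Only the 4,000,000 default for the read-ahead window comes from the description above; the other defaults, and all names, are assumptions of this sketch.

```python
from dataclasses import dataclass

@dataclass
class InterfaceConfig:
    cache_capacity: int = 1024         # cache entries; assumed default
    max_readahead: int = 8_000_000     # maximum read-ahead size; assumed default
    readahead_window: int = 4_000_000  # read-ahead window; default cited above

    def set_window(self, size: int) -> None:
        # The window must never exceed the configured maximum read-ahead size.
        self.readahead_window = min(size, self.max_readahead)
```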
When searching for data according to an I/O request, the cache management module may first call the read-ahead module to identify the start position and offset of the data corresponding to the I/O request, and then search the cache management module by that start position and offset.
The above are only preferred embodiments of the present invention and do not limit it; for those skilled in the art, the present invention may have various changes and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.
Claims (10)
1. A system for a database-based I/O cache stream in a disk array, characterized by comprising an interface management module, a cache management module, a read-ahead module and an underlying redundant array of independent disks (RAID) module, wherein the interface management module is connected to the cache management module and to the read-ahead module, the cache management module is connected to the database, to the read-ahead module and to the underlying RAID module, and the read-ahead module is also connected to the underlying RAID module, and wherein:
the interface management module is used to configure one or more of the cache capacity of the cache management module, the size of the read-ahead window of the read-ahead module, and the maximum read-ahead size;
the cache management module is used to receive the I/O request generated by a database operation and search the cache management module for the data corresponding to the I/O request; if the data is found, it is returned to the database; if not, the underlying RAID module is directed to read the data according to the I/O request, the data read by the underlying RAID module is cached, and the read data is returned to the database; the cache management module also sends the descriptor of the received I/O request to the read-ahead module, and receives and caches the prefetched data sent by the read-ahead module;
the read-ahead module is used, after receiving the descriptor of the I/O request from the cache management module, to prefetch data from the underlying RAID module according to a preset read-ahead strategy and send the prefetched data to the cache management module;
the underlying RAID module is used to process the database's I/O request under the control of the cache management module and return the read data to the cache management module.
2. The system as claimed in claim 1, characterized in that:
the preset read-ahead strategy comprises: the read-ahead module maintains a history; after receiving the descriptor of an I/O request from the cache management module, it saves the descriptor into the history and performs pattern recognition on the descriptor of the I/O request together with at least its preceding history record; if the data of the I/O request and of at least its preceding history record can form a sequential data stream or a backward data stream, data is prefetched from the underlying RAID module according to the identified pattern.
3. The system as claimed in claim 1 or 2, characterized in that:
when prefetching, the read-ahead module first identifies the start position and offset of the data corresponding to the I/O request from the descriptor of the received I/O request, then derives the start position of the prefetched data from the identified pattern, and finally prefetches from that start position according to the identified pattern.
4. The system as claimed in claim 3, characterized in that:
when searching for data according to an I/O request, the cache management module does so after the read-ahead module has identified the start position and offset of the data corresponding to the I/O request, searching the cache management module by that start position and offset.
5. The system as claimed in claim 1, characterized in that:
the interface management module belongs to the application layer, while the cache management module, the read-ahead module and the underlying RAID module belong to the kernel layer.
6. A method for a database-based I/O cache stream in a disk array, characterized by comprising:
a cache management module receives the I/O request generated by a database operation and searches the cache management module according to the received I/O request; if the corresponding data is found, it is returned to the database; otherwise an underlying redundant array of independent disks (RAID) module is directed to read the data according to the I/O request, the data read by the underlying RAID module is cached, and the read data is returned to the database; and
the cache management module sends the descriptor of the received I/O request to a read-ahead module; after receiving the descriptor of the I/O request, the read-ahead module prefetches data from the underlying RAID module according to a preset read-ahead strategy and caches the prefetched data in the cache management module.
7. The method as claimed in claim 6, characterized in that:
the preset read-ahead strategy comprises: the read-ahead module maintains a history; after receiving the descriptor of an I/O request from the cache management module, it saves the descriptor into the history and performs pattern recognition on the descriptor of the I/O request together with at least its preceding history record; if the data of the I/O request and of at least its preceding history record can form a sequential data stream or a backward data stream, data is prefetched from the underlying RAID module according to the identified pattern.
8. The method as claimed in claim 6 or 7, characterized in that:
when prefetching, the read-ahead module first identifies the start position and offset of the data corresponding to the I/O request from the descriptor of the received I/O request, then derives the start position of the prefetched data from the identified pattern, and finally prefetches from that start position according to the identified pattern.
9. The method as claimed in claim 8, characterized in that:
when searching for data according to an I/O request, the cache management module does so after the read-ahead module has identified the start position and offset of the data corresponding to the I/O request, searching the cache management module by that start position and offset.
10. The method as claimed in claim 6, characterized in that:
one or more of the cache capacity of the cache management module, the size of the read-ahead window of the read-ahead module, and the maximum read-ahead size are adjusted according to the user's configuration.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201010108204A CN101788887A (en) | 2010-02-05 | 2010-02-05 | System and method of I/O cache stream based on database in disk array |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201010108204A CN101788887A (en) | 2010-02-05 | 2010-02-05 | System and method of I/O cache stream based on database in disk array |
Publications (1)
Publication Number | Publication Date |
---|---|
CN101788887A true CN101788887A (en) | 2010-07-28 |
Family
ID=42532116
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201010108204A Pending CN101788887A (en) | 2010-02-05 | 2010-02-05 | System and method of I/O cache stream based on database in disk array |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101788887A (en) |
- 2010-02-05: CN application CN201010108204A filed; published as CN101788887A (status: pending)
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101930472A (en) * | 2010-09-09 | 2010-12-29 | 南京中兴特种软件有限责任公司 | Parallel query method for distributed database |
CN102073463A (en) * | 2010-12-28 | 2011-05-25 | 创新科存储技术有限公司 | Flow prediction method and device, and prereading control method and device |
CN102073463B (en) * | 2010-12-28 | 2012-08-22 | 创新科存储技术有限公司 | Flow prediction method and device, and prereading control method and device |
CN102904923A (en) * | 2012-06-21 | 2013-01-30 | 华数传媒网络有限公司 | Data reading method and data reading system capable of relieving disk reading bottleneck |
CN102904923B (en) * | 2012-06-21 | 2016-01-06 | 华数传媒网络有限公司 | A kind of method and system alleviating the digital independent of disk reading bottleneck |
CN105487987B (en) * | 2015-11-20 | 2018-09-11 | 深圳市迪菲特科技股份有限公司 | A kind of concurrent sequence of processing reads the method and device of IO |
CN105487987A (en) * | 2015-11-20 | 2016-04-13 | 深圳市迪菲特科技股份有限公司 | Method and device for processing concurrent sequential reading IO (Input/Output) |
CN106681939A (en) * | 2017-01-03 | 2017-05-17 | 北京华胜信泰数据技术有限公司 | Reading method and device for disk page |
CN106681939B (en) * | 2017-01-03 | 2019-08-23 | 北京华胜信泰数据技术有限公司 | Reading method and device for disk page |
CN107273053A (en) * | 2017-06-22 | 2017-10-20 | 郑州云海信息技术有限公司 | A kind of method and apparatus of digital independent |
CN113609093A (en) * | 2021-06-30 | 2021-11-05 | 济南浪潮数据技术有限公司 | Reverse order reading method, system and related device of distributed file system |
CN113609093B (en) * | 2021-06-30 | 2023-12-22 | 济南浪潮数据技术有限公司 | Reverse order reading method, system and related device of distributed file system |
CN114442948A (en) * | 2022-01-14 | 2022-05-06 | 济南浪潮数据技术有限公司 | Method, device and equipment for pre-reading storage system and storage medium |
CN114442948B (en) * | 2022-01-14 | 2024-07-26 | 济南浪潮数据技术有限公司 | Storage system pre-reading method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C12 | Rejection of a patent application after its publication | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20100728 |