CN118276785A - Input/output processing method, system, device, equipment, storage medium and product - Google Patents
- Publication number: CN118276785A
- Application number: CN202410691960.0A
- Authority: CN (China)
- Prior art keywords: write, data, task, host, chip cache
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F3/061—Improving I/O performance
- G06F3/0647—Migration mechanisms
- G06F3/0656—Data buffering arrangements
- G06F3/0658—Controller construction arrangements
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
An embodiment of the invention provides an input/output processing method, system, apparatus, device, storage medium and product, relating to the technical field of communications, applied to a storage control chip in an IO processing system, where an on-chip cache pool is mounted on the storage control chip. The method includes the following steps: determining the working mode corresponding to the IO processing system; if the working mode is a write-back mode, storing first data to be written corresponding to a first write IO instruction into the on-chip cache pool according to the first write IO instruction issued by the host; and, while the host receives a first write IO completion response signal, storing the first data to be written from the on-chip cache pool to the target disk side device. In this embodiment, the storage control chip carries an on-chip cache pool, so data can be cached in multiple MB-sized on-chip SRAMs (Static Random Access Memory), which provides larger data access bandwidth while sparing the storage control chip from integrating a high-bandwidth DDR (Double Data Rate) controller.
Description
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an input/output processing method, system, device, equipment, storage medium, and product.
Background
A typical implementation of the data cache (Data Cache) in a RAID (Redundant Array of Independent Disks) storage control card is driven by host write operations: in "write-back" mode, data is first written into the data cache rather than directly to disk. This reduces the number of frequent disk writes to some extent and thus lowers the response latency of host-side write IO (Input/Output). However, although applying the write-back mode yields better system performance and lower latency, the data is only temporarily held in off-chip volatile DRAM (Dynamic Random Access Memory) and has not actually been persisted to disk, while from the host's perspective receiving a write IO completion response means the data has been durably stored and must not be lost. If the system suffers an unexpected power failure at this point, two potential risks arise: first, write IO data that has already been acknowledged to the host but is still staged in DRAM and not yet on disk may be lost; second, the system may at that moment be performing a read-modify-write on a RAID stripe on the physical hard disks, so stripe consistency is broken, creating a potential RAID write-hole risk (an unexpected power failure compounded by a failed disk).
In this context, to achieve high performance while avoiding the above risks and ensuring data safety and consistency, the related art integrates a corresponding high-speed DDR (Double Data Rate) interface controller in the storage control chip, attaches external DDR memory granules, and applies a cache management policy to ensure that data is flushed from the cache to disk promptly and efficiently. However, this involves complex software/hardware design with additional algorithmic processing overhead, which increases the design complexity of the storage control chip, raises the overall cost of the storage control card, and reduces system reliability.
Disclosure of Invention
An embodiment of the invention aims to provide an input/output processing method, system, apparatus, device, storage medium and product. The specific technical solution is as follows:
In a first aspect of the present invention, there is provided an input/output processing method, applied to a storage control chip in an IO processing system, where an on-chip cache pool is mounted on the storage control chip, the method including:
Determining a corresponding working mode of the IO processing system;
If the working mode is a write-back mode, storing first data to be written corresponding to a first write IO instruction into the on-chip cache pool according to the first write IO instruction issued by a host;
And while the host receives a first write IO completion response signal, storing the first data to be written from the on-chip cache pool to the target disk side device.
Optionally, storing, according to a first write IO instruction issued by the host, first data to be written corresponding to the first write IO instruction in the on-chip cache pool includes:
Receiving a first write IO instruction issued by a host;
Creating a first full hardware task processing chain according to the first write IO instruction, wherein the first full hardware task processing chain comprises a first write IO task;
Locking a first stripe corresponding to the first write IO task, and setting a verification update log state corresponding to the first write IO task as a preparation state;
Distributing target on-chip cache data pages corresponding to the first write IO task from the on-chip cache pool, wherein the target on-chip cache data pages comprise a first target on-chip cache data page, a second target on-chip cache data page and a third target on-chip cache data page;
and storing the data to be written corresponding to the first write IO task to the first target on-chip cache data page.
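For illustration, the intake steps above (task creation, stripe locking, log preparation, cache-page allocation and data staging) can be sketched as a minimal software model. This is not the claimed hardware design; all class and function names are invented for the sketch, and the three pages stand for the new-data, old-data and old-parity pages named in the claim.

```python
from dataclasses import dataclass, field

@dataclass
class WriteTask:
    stripe_id: int
    data: bytes
    log_state: str = "invalid"       # verification-update-log state
    pages: list = field(default_factory=list)

class OnChipCachePool:
    """Models the on-chip SRAM cache pool as a set of free page indices."""
    def __init__(self, num_pages):
        self.free = list(range(num_pages))
    def alloc(self, n):
        pages, self.free = self.free[:n], self.free[n:]
        return pages
    def release(self, pages):
        self.free.extend(pages)

def accept_write_io(pool, locked_stripes, stripe_id, data):
    """Create the task, lock the stripe, set the log state to 'prepare',
    allocate three on-chip cache pages and stage the host data."""
    task = WriteTask(stripe_id, data)
    locked_stripes.add(stripe_id)    # stripe lock
    task.log_state = "prepare"       # verification update log -> preparation state
    task.pages = pool.alloc(3)       # pages for new data / old data / old parity
    staged = {task.pages[0]: data}   # host data lands in the first target page
    return task, staged
```

In the patented design these steps are carried out by a full-hardware task processing chain rather than software; the sketch only fixes the ordering of the operations.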
Optionally, the storing the first data to be written from the on-chip cache pool to the target disk side device while the host receives the first write IO completion response signal includes:
sending a first write IO completion response signal to the host;
Updating the verification update log state corresponding to the first write IO task from the preparation state to an early response state;
Storing first history write data corresponding to the target disk side device to the second target on-chip cache data page, and storing first history check data corresponding to the target disk side device to the third target on-chip cache data page;
generating first check data corresponding to the first data to be written according to the first data to be written, the first historical write data and the first historical check data;
And updating the verification update log state from the early response state to a stripe modification state in a write-back mode, and writing the first data to be written and the first verification data into the target disk side device.
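The parity-generation step above is the classic RAID-5 read-modify-write update: the new check data equals the XOR of the new data, the old data and the old parity. A minimal byte-wise sketch (in the chip this would be done by the RAID calculation acceleration module over full cache pages, not in software):

```python
def rmw_parity(new_data: bytes, old_data: bytes, old_parity: bytes) -> bytes:
    """Read-modify-write parity: new_parity = new_data XOR old_data XOR old_parity."""
    assert len(new_data) == len(old_data) == len(old_parity)
    return bytes(n ^ o ^ p for n, o, p in zip(new_data, old_data, old_parity))
```

A quick consistency check: updating one data strip this way yields the same parity as recomputing it from the full stripe, which is why only the modified strip and the parity strip need to be read and rewritten.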
Optionally, after the step of updating the verification update log state from the early response state to a stripe modification state in a write-back mode and writing the first data to be written and the first verification data to the target disk side device, the method includes:
releasing the target on-chip cache data page;
unlocking the first stripe, and updating the verification update log state to an invalid state.
Optionally, the locking the first stripe corresponding to the first write IO task and setting the check update log state corresponding to the first write IO task to the ready state includes:
locking a first stripe corresponding to the first write IO task;
And allocating a verification update log record slot to the first write IO task, and initializing the verification update log state to be a preparation state.
Optionally, the creating a first full-hardware task processing chain according to the first write IO instruction, where the first full-hardware task processing chain includes a first write IO task, includes:
Creating a first full-hardware task processing chain according to the first write IO instruction, where the full-hardware task processing chain includes a first write IO task, and submitting the first write IO task to a task queue management engine for scheduling.
Optionally, the verification update log state is generated based on a verification update log, and a record unit format corresponding to the verification update log includes the verification update log state corresponding to the first write IO task, a logical volume ID, and an IO size.
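The record unit above can be pictured as a small structure. The description only names the three fields (log state, logical volume ID, IO size); the concrete state names and plain integer fields below are illustrative assumptions, not the patent's encoding.

```python
from dataclasses import dataclass
from enum import Enum

class LogState(Enum):
    INVALID = 0
    PREPARE = 1             # preparation state
    EARLY_ANSWER = 2        # write-back: host answered before the stripe was modified
    STRIPE_MODIFY_WB = 3    # stripe-modification state, write-back mode
    STRIPE_MODIFY_WT = 4    # stripe-modification state, write-through mode

@dataclass
class ParityUpdateLogRecord:
    """One record slot of the verification update log (fields per the description)."""
    state: LogState
    logical_volume_id: int
    io_size: int            # illustrative unit, e.g. sectors
```

After an unexpected power failure, a persisted record in a non-invalid state would tell recovery code which stripe operations were in flight, which is what makes the log useful against the write hole.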
Optionally, after the step of determining the corresponding operation mode of the IO processing system, the method includes:
if the working mode is a write-back mode, initiating a data reading request to the target disk side equipment according to a read IO instruction issued by the host;
And replacing the host side address corresponding to the read IO instruction with a storage control chip side address according to the data read request, so that the target disk side device directly sends the data to be read corresponding to the data read request to the host.
Optionally, if the working mode is a write-back mode, initiating a data read request to the target disk side device according to a read IO instruction issued by the host includes:
If the working mode is a write-back mode, receiving a read IO instruction issued by the host;
Creating a second full-hardware task processing chain according to the read IO instruction, wherein the second full-hardware task processing chain comprises a read IO task, and submitting the read IO task to a task queue management engine for scheduling;
And locking a second stripe corresponding to the read IO task, and initiating a data reading request to the target disk side device according to the read IO task.
Optionally, after the step of replacing the host side address corresponding to the read IO instruction with the storage control chip side address according to the data read request, so that the target disk side device directly sends the data to be read corresponding to the data read request to the host, the method includes:
And unlocking the second stripe corresponding to the read IO task, and sending a read IO completion response signal to the host.
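The address substitution in the read path above can be modeled as a simple remapping table: the chip hands the disk device a chip-side address in place of the host buffer address and records the pairing, so that on completion the data can be forwarded directly to the host. The base address, page granularity and function names below are illustrative assumptions only.

```python
CHIP_WINDOW_BASE = 0x8000_0000   # hypothetical chip-side address aperture

def remap_read_request(host_addr: int, length: int, window: dict) -> int:
    """Replace the host-side address with a chip-side address and record
    the mapping; the disk device is given the returned chip-side address."""
    chip_addr = CHIP_WINDOW_BASE + len(window) * 0x1000
    window[chip_addr] = (host_addr, length)
    return chip_addr

def complete_read(chip_addr: int, window: dict):
    """On read completion, look up and clear the mapping so the data
    can be delivered to the original host-side address."""
    return window.pop(chip_addr)
```

The point of the substitution is that the read data never needs to be parked in the on-chip cache: the mapping alone lets the device's transfer be steered to the host.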
Optionally, after the step of determining the corresponding operation mode of the IO processing system, the method includes:
If the working mode is a write-through mode, storing second data to be written corresponding to a second write IO instruction into the on-chip cache pool according to the second write IO instruction issued by the host;
and sending a second write IO completion response signal to the host under the condition that the second data to be written stored on the on-chip cache pool is detected to be stored in the target disk side device.
Optionally, storing, according to a second write IO instruction issued by the host, second data to be written corresponding to the second write IO instruction in the on-chip cache pool includes:
receiving a second write IO instruction issued by the host;
creating a second full hardware task processing chain according to the second write IO instruction, wherein the second full hardware task processing chain comprises a second write IO task;
Locking a second stripe corresponding to the second write IO task, and setting a verification update log state corresponding to the second write IO task as a preparation state;
Distributing target on-chip cache data pages corresponding to the second write IO task from the on-chip cache pool, wherein the target on-chip cache data pages comprise a fourth target on-chip cache data page, a fifth target on-chip cache data page and a sixth target on-chip cache data page;
And storing the data to be written corresponding to the second write IO task to the fourth target on-chip cache data page.
Optionally, the sending, when detecting that the second data to be written stored on the on-chip cache pool is stored to the target disk side device, a second write IO completion response signal to the host includes:
Storing second historical write data corresponding to the target disk side device into the fifth target on-chip cache data page, and storing second historical check data corresponding to the target disk side device into the sixth target on-chip cache data page;
Generating second verification data corresponding to the second data to be written according to the second data to be written, the second historical write data and the second historical verification data;
Updating the verification update log state from the preparation state to a stripe modification state in a write-through mode, and writing the second data to be written and the second verification data into the target disk side device;
and sending a second write IO completion response signal to the host.
Optionally, after the step of writing the second data to be written and the second verification data to the target disk side device, before the step of sending a second write IO completion response signal to the host, the method includes:
releasing the target on-chip cache data page;
unlocking the second stripe, and updating the verification update log state to an invalid state.
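Comparing the two flows, the verification update log walks a different state sequence in each mode: write-back inserts an early-response state between preparation and stripe modification (the host is answered before the disk write), while write-through answers the host only after the stripe has been modified. A small checker for the implied transitions; the state names are paraphrased from the description, not taken verbatim.

```python
# Allowed verification-update-log transitions implied by the two write flows.
ALLOWED = {
    ("invalid", "prepare"),
    ("prepare", "early_response"),            # write-back only
    ("early_response", "stripe_modify_wb"),
    ("prepare", "stripe_modify_wt"),          # write-through skips early response
    ("stripe_modify_wb", "invalid"),
    ("stripe_modify_wt", "invalid"),
}

def run(sequence):
    """Return True if every consecutive transition in the sequence is allowed."""
    return all(t in ALLOWED for t in zip(sequence, sequence[1:]))
```

This mirrors the trade-off stated earlier: the early-response state is exactly the window in which acknowledged data exists only in the on-chip cache, which is why that state must be logged.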
Optionally, after the step of determining the corresponding operation mode of the IO processing system, the method includes:
And if the working mode is a suspension mode, completing the IO task corresponding to the IO instruction currently issued by the host, and then ceasing to respond to further IO instructions issued by the host.
Optionally, after the step of determining the corresponding operation mode of the IO processing system, the method includes:
And if the working mode is a freezing mode, stopping responding to a fourth write IO instruction issued by the host.
In a second aspect of the present invention, there is also provided an IO processing system, including: the system comprises a host, a storage control card and target disk side equipment, wherein the storage control card comprises a storage control chip, and an on-chip cache pool is mounted on the storage control chip;
The storage control chip is used for determining a working mode corresponding to the IO processing system; if the working mode is a write-back mode, storing first data to be written corresponding to a first write IO instruction into the on-chip cache pool according to the first write IO instruction issued by a host; and while the host receives a first write IO completion response signal, storing the first data to be written from the on-chip cache pool to the target disk side device.
Optionally, the IO processing system further includes: an on-board integrated standby power module is integrated on the storage control card;
The on-board integrated standby power module is used for supplying power to the IO processing system when the IO processing system is detected to be in a power-down state.
Optionally, the IO processing system further includes: the storage control chip is connected with external nonvolatile storage through a low-speed IO interface.
In a third aspect of the present invention, there is also provided an IO processing apparatus, applied to a storage control chip in an IO processing system, on which an on-chip cache pool is mounted, the apparatus including:
the determining module is used for determining a working mode corresponding to the IO processing system;
The storage module is used for storing first data to be written corresponding to a first write IO instruction to the on-chip cache pool according to the first write IO instruction issued by the host if the working mode is a write-back mode;
And the parallel processing module is used for storing the first data to be written from the on-chip cache pool to the target disk side device while the host receives the first write IO completion response signal.
In a fourth aspect of the present invention, there is also provided a communication device comprising: a transceiver, a memory, a processor, and a program stored on the memory and executable on the processor;
The processor is configured to read a program in a memory to implement the input/output processing method according to any one of the first aspect.
In a fifth aspect of the present invention, there is also provided a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to implement an input/output processing method as described in any of the first aspects.
In a sixth aspect of the invention, there is also provided a computer program product comprising a computer program/instructions which, when executed by a processor, implements the input/output processing method according to any of the first aspects.
The input/output processing method provided by the embodiment of the invention is applied to a storage control chip in an IO processing system, where an on-chip cache pool is mounted on the storage control chip. The method includes the following steps: determining the working mode corresponding to the IO processing system; if the working mode is a write-back mode, storing first data to be written corresponding to a first write IO instruction into the on-chip cache pool according to the first write IO instruction issued by a host; and, while the host receives a first write IO completion response signal, storing the first data to be written from the on-chip cache pool to the target disk side device. In the embodiment of the invention, the storage control chip is applied to an IO processing system and carries an on-chip cache pool, so data can be cached in multiple MB-sized on-chip SRAMs (Static Random Access Memory). The on-chip SRAM scheme not only provides larger data access bandwidth (hundreds of GB/s) but also avoids integrating a high-bandwidth (tens of GB/s) DDR controller in the storage control chip, reducing design complexity and chip development costs (IP procurement, chip area and integration verification). Through the system's write IO processing flow in write-back mode, write IO data need only be staged in local storage before parity update and disk-flush processing proceed immediately, further releasing performance and reducing design complexity.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a flowchart illustrating steps of a method for processing input and output according to an embodiment of the present invention;
FIG. 2 is a flowchart of step 102 of the input/output processing method provided by an embodiment of the present invention;
FIG. 3 is a flowchart of step 103 of the input/output processing method provided by an embodiment of the present invention;
FIG. 4 is a second flowchart illustrating steps of an input/output processing method according to an embodiment of the present invention;
FIG. 5 is a third flowchart illustrating steps of an input/output processing method according to an embodiment of the present invention;
FIG. 6 is a fourth flowchart illustrating steps of an input/output processing method according to an embodiment of the present invention;
FIG. 7 is a block diagram of an IO processing device provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of a communication device according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of an IO processing system architecture provided by an embodiment of the present invention;
FIG. 10 is a flowchart of a write-back mode write input/output processing method according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a verification update log according to an embodiment of the present invention;
FIG. 12 is a state transition diagram provided by an embodiment of the present invention;
FIG. 13 is a schematic diagram of an address mapping relationship according to an embodiment of the present invention;
FIG. 14 is a flowchart of a read I/O processing method in a write-back mode according to an embodiment of the present invention;
FIG. 15 is a schematic diagram of migration relationships of working modes under three failure scenarios according to an embodiment of the present invention;
FIG. 16 is a flowchart of a write input/output processing method in write-through mode according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application clearer, the embodiments of the present application are described in detail below with reference to the accompanying drawings. However, those of ordinary skill in the art will understand that numerous technical details are set forth in the various embodiments in order to provide a better understanding of the present application; the claimed application may nevertheless be practiced without these specific details, and with various changes and modifications based on the following embodiments. The division into embodiments below is for convenience of description and should not be construed as limiting specific implementations; the embodiments can be combined and cross-referenced where not contradictory.
It should be noted that, in the embodiment of the present application, referring to fig. 9, fig. 9 is a schematic diagram of the IO processing system of the present application. The IO processing system includes: a host, a storage control card and a target disk side device, where the storage control card includes a storage control chip, and an on-chip cache pool is mounted on the storage control chip.
In the embodiment of the present application, the target disk side device is, for example, an SSD. The storage control chip caches data in multiple MB-sized on-chip SRAMs forming the on-chip cache pool, instead of the external GB-scale DDR scheme of conventional designs. The on-chip SRAM scheme can provide larger data access bandwidth (hundreds of GB/s) while avoiding integrating a high-bandwidth (tens of GB/s) DDR controller in the storage control chip, reducing design complexity and chip development costs (IP procurement, chip area and integration verification).
The storage control chip is used for determining a working mode corresponding to the IO processing system; if the working mode is a write-back mode, storing first data to be written corresponding to a first write IO instruction into the on-chip cache pool according to the first write IO instruction issued by a host; and while the host receives a first write IO completion response signal, storing the first data to be written from the on-chip cache pool to the target disk side device.
Further, the IO processing system further includes: an on-board integrated standby power module is integrated on the storage control card; the on-board integrated standby power module is used for supplying power to the IO processing system when the IO processing system is detected to be in a power-down state.
Further, the IO processing system further includes: the storage control chip is connected with external nonvolatile storage through a low-speed IO interface.
It should be noted that the application can separately extend the IO processing system with the on-board integrated standby power module and/or the nonvolatile storage. In an unexpected power-failure situation, the smaller data cache (tens of MB) is only a few percent of the amount of data that a conventional RAID card must back up, so the application has the following two potential optimization choices:
Keep the bandwidth of the nonvolatile storage used for backup unchanged (e.g., 1 GB/s over an ONFI interface). A conventional storage card with W GB of cached data then needs at least W seconds to complete its backup, whereas at the same bandwidth the present application's tens of MB of data complete their backup in only tens of milliseconds. Assuming the chip consumes the same power during the power-down backup, the capacitance of a supercapacitor-based standby power module can then be reduced from tens of farads to thousands of microfarads; such a miniaturized standby capacitor makes integration on the storage card possible; or alternatively
Keep the conventional standby capacitor unchanged (or rely on a system-supplied hold-up time of several seconds), which relaxes the bandwidth requirement on the nonvolatile data cache, so that more mature, low-cost options such as QSPI-interface NAND or MRAM can be selected and a complex wear-leveling algorithm avoided.
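The backup-time comparison above can be checked with a short back-of-envelope sketch; the cache sizes chosen below (a hypothetical 4 GB conventional cache and a 32 MB on-chip cache) are illustrative assumptions, only the 1 GB/s ONFI bandwidth figure comes from the text.

```python
# Back-of-envelope power-fail backup times; the cache sizes are assumed
# example values, not figures from the application itself.

def backup_time_s(data_bytes: float, bandwidth_bps: float) -> float:
    """Time to flush a data cache to nonvolatile storage at a given bandwidth."""
    return data_bytes / bandwidth_bps

GB = 1024 ** 3
MB = 1024 ** 2

onfi_bw = 1 * GB            # 1 GB/s backup bandwidth over the ONFI interface
traditional_cache = 4 * GB  # hypothetical W = 4 GB DDR cache on a RAID card
onchip_cache = 32 * MB      # hypothetical tens-of-MB on-chip SRAM cache

t_traditional = backup_time_s(traditional_cache, onfi_bw)  # seconds
t_onchip = backup_time_s(onchip_cache, onfi_bw)            # tens of milliseconds

print(f"traditional: {t_traditional:.2f} s, on-chip: {t_onchip * 1000:.1f} ms")
```

The roughly hundredfold reduction in backup time is what allows the standby capacitance to shrink proportionally, as the paragraph above argues.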
Further, the storage control chip may include a host interface module, a control page creation module, a stripe lock management module, a dynamic memory allocation module, a host data DMA module, a response ending module, an IO acceleration module, a RAID calculation acceleration module, and a dynamic memory reclamation module.
Referring to fig. 1, a first step flowchart of an input/output processing method provided by an embodiment of the present invention is shown, where the method may include:
Step 101, determining a corresponding working mode of an IO processing system;
It should be noted that, in the embodiment of the present application, the IO processing system may have multiple working modes; the default is the write-back mode, and the other working modes include a write-through mode, a suspend mode, and a freeze mode.
Accordingly, the IO processing flow of the IO processing system differs for the different working modes.
Step 102, if the working mode is a write-back mode, storing first data to be written corresponding to a first write IO instruction to the on-chip cache pool according to the first write IO instruction issued by a host;
It should be noted that, when the working mode is the write-back mode, the design targets the characteristics of high-speed SSD storage: after the host has been answered, the data continues to be flushed to disk. This avoids the design and runtime cost of a complex data-cache management algorithm and greatly reduces the demand on data cache space; the data cache only needs to be large enough to hold all currently concurrent IOs.
In a specific write IO processing flow, taking a 4KB random write IO processing procedure in a RAID5 mode as an example, a storage control chip processes in a read-modify-write manner according to characteristics of the IO, and a processing flow chart is shown in fig. 10.
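For a sub-stripe write, the read-modify-write scheme mentioned above computes the new parity from the old data block, the new data block, and the old parity via XOR. A minimal sketch (the function names are illustrative; RAID5 parity itself is standard XOR parity):

```python
# Read-modify-write parity update for a partial-stripe RAID5 write:
# new parity P' = P xor D xor D', where D is the old data block,
# D' the new data block, and P the old parity block.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def rmw_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """Compute the new parity without reading the rest of the stripe."""
    return xor_blocks(xor_blocks(old_parity, old_data), new_data)
```

This is why the flow only needs three cache pages (new data, old data, old parity) rather than the whole stripe: the untouched data blocks cancel out of the XOR.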
Further, referring to fig. 2, step 102 includes:
Step 1021, receiving a first write IO instruction issued by a host;
Step 1022, creating a first all-hardware task processing chain according to the first write IO instruction, where the first all-hardware task processing chain includes a first write IO task;
Further, step 1022 includes: creating a first all-hardware task processing chain according to the first write IO instruction, where the first all-hardware task processing chain includes a first write IO task, and submitting the first write IO task to a task queue management engine for scheduling.
Step 1023, locking a first stripe corresponding to the first write IO task, and setting a verification update log state corresponding to the first write IO task as a preparation state;
Further, step 1023 may include: locking a first stripe corresponding to the first write IO task; and allocating a verification update log record slot to the first write IO task, and initializing a verification update log state to be a preparation state.
Further, the verification update log state is generated based on a verification update log, and a record unit format corresponding to the verification update log includes the verification update log state corresponding to the first write IO task, a logical volume ID, and an IO size.
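The record unit described above can be sketched as a small structure; the state names follow the abbreviations used later in the text (INV, STB, ER, SIT_WB, SIT_WT), while the field widths and defaults are illustrative assumptions, not the actual hardware layout of fig. 11.

```python
# Hypothetical layout of one PUL (Parity Update Log) record slot, following
# the fields named in the text: log state, logical volume ID, and IO size.

from dataclasses import dataclass
from enum import Enum

class PulState(Enum):
    INV = 0      # invalid: slot free
    STB = 1      # preparation state
    ER = 2       # early-response state (host answered, flush still pending)
    SIT_WB = 3   # stripe-modification state, write-back mode
    SIT_WT = 4   # stripe-modification state, write-through mode

@dataclass
class PulRecord:
    state: PulState = PulState.INV
    logical_volume_id: int = 0
    io_size: int = 0     # bytes
```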
Step 1024, allocating a target on-chip cache data page corresponding to the first write IO task from the on-chip cache pool, where the target on-chip cache data page includes a first target on-chip cache data page, a second target on-chip cache data page, and a third target on-chip cache data page;
Step 1025, storing the data to be written corresponding to the first write IO task to the first target on-chip cache data page.
Specifically, steps 1021-1025 are described below taking a 4KB write IO instruction issued by the host as an example.
Firstly, a host interface module receives a first write IO instruction issued by a host, namely an IO task command in FIG. 10, and forwards the first write IO instruction to a control page creation module;
Secondly, the control page creation module creates a full hardware task processing chain according to the first write IO instruction, wherein the full hardware task processing chain comprises a first write IO task corresponding to the first write IO instruction, and submits the first write IO task to a task queue management engine for scheduling;
Secondly, the stripe lock management module locks the first stripe corresponding to the first write IO task, allocates a check update log record slot and initializes the check update log state to the preparation state; that is, in fig. 10, it locks the stripe covered by the IO, allocates a PUL log record slot and initializes the PUL log state to STB.
Next, the dynamic memory allocation module allocates, from the on-chip cache pool, the target on-chip cache data pages corresponding to the first write IO task, where the target on-chip cache data pages include a first, a second, and a third target on-chip cache data page. Referring to fig. 10, taking the 4KB random write IO processing procedure in RAID5 mode as an example, three 4 KB on-chip cache data pages X, Y and Z (12 KB in total) are allocated to the IO task, namely the first, second, and third target on-chip cache data pages.
The host data DMA module reads the data to be written corresponding to the first write IO task (i.e., the data to be written D' in fig. 10) from the host into the local first target on-chip cache data page X.
And step 103, storing the first data to be written from the on-chip cache pool to the target disk side device while the host receives the first write IO completion response signal.
Further, referring to fig. 3, step 103 includes:
step 1031, sending a first write IO completion response signal to the host;
Step 1032, updating the verification update log state corresponding to the first write IO task from the preparation state to an early response state;
step 1033, storing the first history write data corresponding to the target disk side device to the second target on-chip cache data page, and storing the first history check data corresponding to the target disk side device to the third target on-chip cache data page;
Step 1034, generating first verification data corresponding to the first data to be written according to the first data to be written, the first historical write data and the first historical verification data;
Step 1035, updating the verification update log status from the early response status to a stripe modification status in a write-back mode, and writing the first data to be written and the first verification data into the target disk side device.
Further, after step 1035, the method further includes:
Releasing the target on-chip cache data pages; unlocking the first stripe, and updating the verification update log state to an invalid state; and sending the first write IO completion response signal to the host.
In the foregoing steps 1031 to 1035, the response ending module first updates the PUL state to ER and then sends a completion response to the host, that is, it sends the first write IO completion response signal to the host. After the host has been answered, the data continues to be flushed to disk. Specifically, the IO acceleration module initiates reads from the disk of the historical 4KB data (D) into the local second target on-chip cache data page Y and of the historical 4KB check data (P) into the local third target on-chip cache data page Z; that is, the first historical write data corresponding to the target disk side device is stored into the second target on-chip cache data page, and the first historical check data corresponding to the target disk side device is stored into the third target on-chip cache data page.
Further, the data in X (D'), Y (D) and Z (P) are read in, new check data (P') is calculated and stored over the third target on-chip cache data page Z; that is, the first check data corresponding to the first data to be written is generated from the first data to be written, the first historical write data and the first historical check data.
Further, the PUL state is updated to SIT_WB, and then storage of the new data to be written (D') and the new check data (P') to the disk is initiated; that is, the verification update log state corresponding to the first write IO task is updated from the early response state to the stripe modification state in write-back mode.
After the data has been stored to the target disk side device, the first data to be written (D') and the first check data (P') corresponding to the first write IO task have been completely flushed to disk. The storage space, namely the target on-chip cache data pages X, Y and Z, can therefore be released through the dynamic memory reclamation module, the stripe can be unlocked, the PUL log record slot reclaimed, and the PUL state refreshed to INV, completing the reclamation of the control resources of the first write IO task; that is, the target on-chip cache data pages are released; the first stripe is unlocked and the verification update log state is updated to the invalid state; and the first write IO completion response signal is sent to the host.
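The write-back sequence narrated above can be condensed into a short sketch over an in-memory stand-in for the disk; the PUL state names follow the text, while the function, the callback, and the data layout are illustrative assumptions.

```python
# Condensed write-back flow for one 4KB RAID5 sub-stripe write (steps
# 1021-1035): answer the host early at ER, then continue the flush.

def writeback_4k_write(disk, stripe, new_data, respond):
    pul = "STB"                        # PUL slot allocated, preparation state
    page_x = new_data                  # DMA of new data D' from host into page X
    pul = "ER"                         # early-response state
    respond("write IO complete")       # host is answered before the flush
    page_y = disk[(stripe, "D")]       # read historical data D into page Y
    page_z = disk[(stripe, "P")]       # read historical parity P into page Z
    new_p = bytes(p ^ d ^ x for p, d, x in zip(page_z, page_y, page_x))
    pul = "SIT_WB"                     # stripe-modification state, write-back
    disk[(stripe, "D")] = page_x       # flush new data D'
    disk[(stripe, "P")] = new_p        # flush new parity P'
    pul = "INV"                        # pages released, stripe unlocked, slot freed
    return pul
```

The point of the sketch is the ordering: the completion response fires between ER and the reads of D and P, which is exactly the latency advantage the text claims over a write-through flow.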
In the above operation, after the response ending engine completes the host response, a conventional "write back" flow would end; the present solution, however, continues to complete the subsequent flush to disk while preserving the same low-latency behavior for the host.
In particular, as can be seen in fig. 10, the verification update log (PUL: Parity Update Log) needs to be updated at some of the processing steps: the stripe lock management module updates the PUL log state once, the response ending module updates it once, and two of the IO acceleration module steps update it, while the remaining steps do not. One record unit of the PUL contains several parameters of the IO being processed, of which the most important variable is the "PUL state" flag; for the PUL unit format, refer to fig. 11.
A number of PUL record entry resources are reserved in the system; they are dynamically allocated and reclaimed during each write IO, so that the key steps of the IO are faithfully recorded throughout IO processing. The state transition diagram covers the two scenarios, "write back (WB)" and "write through (WT)", both starting from the INV state during a write IO; for the state transition diagram, refer to fig. 12.
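The two transition paths can be written out explicitly; this is a reconstruction from the flows described in the text, so the exact diagram of fig. 12 may contain transitions not listed here.

```python
# PUL state transitions reconstructed from the two write paths in the text:
#   write-back:    INV -> STB -> ER -> SIT_WB -> INV
#   write-through: INV -> STB -> SIT_WT -> INV

ALLOWED = {
    ("INV", "STB"),      # slot allocated, log initialized to preparation
    ("STB", "ER"),       # write-back: host answered early
    ("ER", "SIT_WB"),    # write-back: stripe modification begins
    ("STB", "SIT_WT"),   # write-through: stripe modification begins
    ("SIT_WB", "INV"),   # flush complete, slot reclaimed
    ("SIT_WT", "INV"),
}

def advance(state: str, nxt: str) -> str:
    """Apply one PUL transition, rejecting anything outside the two paths."""
    if (state, nxt) not in ALLOWED:
        raise ValueError(f"illegal PUL transition {state} -> {nxt}")
    return nxt
```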
The PUL status is an important basis for all services to develop, and its specific status definition and power down recovery policy are shown in table 1 below.
Table 1 PUL states and power down recovery strategy
From Table 1 it can be seen that, if a power failure occurs during write IO processing, the power-down recovery strategy to apply can be determined from the PUL state recorded during operation.
Specifically, when the IO processing system is detected to be in an unexpected power-failure condition, the on-board integrated standby power module on the storage control card supplies power to the IO processing system. At this time, all PUL entries (refer to fig. 11) are written, together with the data in the data cache, into the nonvolatile storage (NAND/MRAM) externally connected to the storage control chip for solidified storage, and service and data recovery processing is performed after power is restored.
Thus, when the IO processing system is detected to be in an unexpected power failure, and while the on-board integrated standby power module supplies power to the IO processing system, the PUL state corresponding to the current IO processing system is obtained.
When the PUL state is INV or STB, the host is in a response-incomplete state, that is, the host has not received the IO completion response signal, so neither the IO service nor its data needs to be processed.
When the PUL state is ER, the host is in a response-complete state and the stripe consistency of the corresponding stripe has not been destroyed; the original host IO is therefore parsed, the new data D' is found from the data list, and the flush to disk of the new data D' is completed.
When the PUL state is SIT_WB, the host is in a response-complete state but the stripe consistency of the corresponding stripe may have been destroyed; the original host IO is therefore parsed, the new data D' is found from the data list, the data not covered by the IO in the same stripe is read from the storage device and assembled into a full stripe, a new check P is calculated, and finally all data and check are flushed to disk.
When the PUL state is SIT_WT, the host is in a response-incomplete state and the stripe consistency of the corresponding stripe may have been destroyed; the data of the same stripe is therefore read from the storage device, and the check P is recalculated and flushed to disk.
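The four recovery cases above reduce to a small dispatch on the solidified PUL state; the action names below are descriptive placeholders standing in for the recovery procedures of Table 1.

```python
# Power-down recovery policy keyed on the PUL state found in nonvolatile
# storage after re-power-up; action strings are illustrative labels.

def recovery_action(pul_state: str) -> str:
    if pul_state in ("INV", "STB"):
        # host never saw a completion response: nothing to redo
        return "discard"
    if pul_state == "ER":
        # host answered, stripe consistency intact: just flush new data D'
        return "replay-flush-new-data"
    if pul_state == "SIT_WB":
        # host answered, stripe may be inconsistent: rebuild the full
        # stripe, recompute parity, flush data and parity
        return "rebuild-stripe-and-flush"
    if pul_state == "SIT_WT":
        # host not yet answered, stripe may be inconsistent: recompute
        # and flush parity only
        return "recompute-parity"
    raise ValueError(f"unknown PUL state {pul_state!r}")
```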
The input/output processing method provided by the embodiment of the invention is applied to a storage control chip in an IO processing system, where an on-chip cache pool is mounted on the storage control chip, and the method includes: determining the working mode corresponding to the IO processing system; if the working mode is a write-back mode, storing first data to be written corresponding to a first write IO instruction into the on-chip cache pool according to the first write IO instruction issued by a host; and, while the host receives a first write IO completion response signal, storing the first data to be written from the on-chip cache pool to the target disk side device. In the embodiment of the invention, the storage control chip is applied to an IO processing system and carries an on-chip cache pool, so that data can be cached in several MB-sized on-chip SRAMs (Static Random Access Memory); the on-chip SRAM scheme not only provides a larger data access bandwidth (hundreds of GB/s) but also avoids integrating a high-bandwidth (tens of GB/s) DDR controller into the storage control chip, reducing design complexity and chip development cost (IP procurement, chip area, and integration verification). Through the write IO processing flow of the system in write-back mode, the write IO data is only temporarily stored locally and then immediately subjected to check update and flushing to disk, further releasing performance and reducing design complexity.
Referring to fig. 4, a second step flowchart of an input/output processing method provided by an embodiment of the present invention is shown, where the method may include:
Step 401, determining a corresponding working mode of an IO processing system;
step 402, if the working mode is a write-back mode, initiating a data reading request to the target disk side device according to a read IO instruction issued by the host;
Further, step 402 includes:
If the working mode is a write-back mode, receiving a read IO instruction issued by the host;
Creating a second full-hardware task processing chain according to the read IO instruction, wherein the second full-hardware task processing chain comprises a read IO task, and submitting the read IO task to a task queue management engine for scheduling;
And locking a second stripe corresponding to the read IO task, and initiating a data read request to the target disk side device according to the read IO task.
Step 403, replacing the host side address corresponding to the read IO instruction with a storage control chip side address according to the data read request, so that the target disk side device directly sends the data to be read corresponding to the data read request to the host.
Further, after step 403, the method may include: unlocking the second stripe corresponding to the read IO task, and sending a read IO completion response signal to the host.
It should be noted that, in the embodiment of the present application, because data cache management is skipped, data does not reside in the storage control card; correspondingly, the handling of a read IO issued by the host is simplified, and read IO processing does not need to query whether the currently required IO data is in a data cache. Meanwhile, a data page read from the storage device is not cached in the storage card before being uploaded to the host's storage space: when the IO instruction is forwarded to the disk, the host's HPRP4 (Host-PRP-4KB) addresses must be translated into one-to-one corresponding CPRP4 (Client-PRP-4KB) addresses, and after the disk side DMAs the data toward the storage card according to the CPRP4 address, the data is directly routed to the host-side storage space pointed to by HPRP4 according to a preset address mapping relation, thereby avoiding caching the data inside the storage card. Referring to fig. 13, fig. 13 is a translation example of HPRP4 to CPRP4; in the embodiment of the present application, the translation is implemented by the IO acceleration module.
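The one-to-one mapping can be illustrated with a small software stand-in; in the actual design the mapping is held in hardware by the IO acceleration module, so the class, the address-space base, and the dict representation below are all assumptions made for illustration.

```python
# Illustrative one-to-one HPRP4 -> CPRP4 translation: the chip hands the
# disk a chip-side 4 KB page pointer, then routes the disk's DMA straight
# to the original host page, so no data page is cached on the card.

class Prp4Translator:
    PAGE = 4096

    def __init__(self):
        self._c2h = {}          # CPRP4 -> HPRP4 reverse map
        self._next = 0x1000     # arbitrary chip-side address-space base

    def map_host_page(self, hprp4: int) -> int:
        """Allocate a chip-side CPRP4 for a 4 KB-aligned host page."""
        assert hprp4 % self.PAGE == 0, "HPRP4 must be 4 KB aligned"
        cprp4 = self._next
        self._next += self.PAGE
        self._c2h[cprp4] = hprp4
        return cprp4

    def route_to_host(self, cprp4: int) -> int:
        """Resolve a disk-side DMA target back to the host address."""
        return self._c2h[cprp4]
```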
Further, referring to fig. 14, fig. 14 shows the read IO instruction processing procedure. It can be seen that, first, the host interface module receives the IO task instruction issued by the host and forwards it to the control page creation module for processing; the control page creation module creates a full-hardware task processing chain according to the IO request corresponding to the IO instruction and submits the task to the task queue management engine for scheduling.
Furthermore, the stripe lock management module locks the second stripe; after locking, write updates to the corresponding second stripe are not allowed, and a read instruction does not perform PUL slot initialization.
The IO acceleration module initiates a data read request to the disk, replacing the host address (HPRP4) with a storage control chip address (CPRP4) in a one-to-one mapping; the disk-side data is then routed by the storage control chip directly to the host storage space, and the read IO data is not cached in the storage control chip.
When the host has read the required data, the stripe lock management module unlocks the second stripe, the response ending module sends a read IO completion response signal to the host, and the host interface module replies to the host completion queue.
In summary, for read IO, the present application adds only a very limited IO processing delay (on the order of ten microseconds) on top of the disk-side read IO delay.
In particular, when concurrent read and write IOs overlap on the same data: if the host has not yet received the completion response of a write IO and issues a read IO instruction to the related data area, then, according to the protocol, the storage control card does not guarantee data integrity, and the read data may differ from the previously issued write IO data;
However, if the completion response of the write IO has been received (but the storage card is still performing the subsequent flush for that IO) and the host then issues a read IO instruction to the related data area, the storage control card must guarantee data integrity. Inside the storage card this is realized by read-write mutual exclusion through the stripe lock management module: if the predecessor write IO has not finished flushing to disk, its stripe lock is not released, and a subsequently issued read IO that attempts to lock the same stripe is refused until the predecessor write IO completes.
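The mutual-exclusion rule just described can be sketched as a minimal lock table; the retry policy of the real hardware (how a refused read is re-queued) is not specified in the text, so only the lock/refuse/unlock behavior is shown.

```python
# Minimal stripe-lock table: a read overlapping a stripe still held by an
# in-flight (acknowledged but not yet flushed) write is refused until the
# write unlocks the stripe.

class StripeLocks:
    def __init__(self):
        self._locked = set()

    def lock(self, stripe: int) -> bool:
        """Try to lock a stripe; False means the caller must retry later."""
        if stripe in self._locked:
            return False
        self._locked.add(stripe)
        return True

    def unlock(self, stripe: int) -> None:
        self._locked.discard(stripe)
```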
In the embodiment of the invention, the storage control chip is applied to an IO processing system and carries an on-chip cache pool, so that data can be cached in several MB-sized on-chip SRAMs (Static Random Access Memory); the on-chip SRAM scheme not only provides a larger data access bandwidth (hundreds of GB/s) but also avoids integrating a high-bandwidth (tens of GB/s) DDR controller into the storage control chip, reducing design complexity and chip development cost (IP procurement, chip area, and integration verification).
In addition, the complex data cache management module of a conventional storage control card is avoided: write IO data is only temporarily stored locally and then immediately subjected to check update and flushing to disk, and read IO data is not cached locally at all, further releasing performance and reducing design complexity.
Referring to fig. 5, a step flowchart three of an input/output processing method provided by an embodiment of the present invention is shown, where the method may include:
step 501, determining a corresponding working mode of an IO processing system;
Step 502, if the working mode is a write-through mode, storing second data to be written corresponding to a second write IO instruction to the on-chip cache pool according to the second write IO instruction issued by the host;
further, step 502 includes:
receiving a second write IO instruction issued by the host;
creating a second full hardware task processing chain according to the second write IO instruction, wherein the second full hardware task processing chain comprises a second write IO task;
Locking a second stripe corresponding to the second write IO task, and setting a verification update log state corresponding to the second write IO task as a preparation state;
Distributing target on-chip cache data pages corresponding to the second write IO task from the on-chip cache pool, wherein the target on-chip cache data pages comprise a fourth target on-chip cache data page, a fifth target on-chip cache data page and a sixth target on-chip cache data page;
And storing the data to be written corresponding to the second write IO task to the fourth target on-chip cache data page.
Step 503, when it is detected that the second data to be written stored in the on-chip cache pool is stored in the target disk side device, sending a second write IO completion response signal to the host.
Further, step 503 includes:
Storing second historical write data corresponding to the target disk side device into the fifth target on-chip cache data page, and storing second historical check data corresponding to the target disk side device into the sixth target on-chip cache data page;
Generating second verification data corresponding to the second data to be written according to the second data to be written, the second historical write data and the second historical verification data;
Updating the verification update log state from the preparation state to a stripe modification state in a write-through mode, and writing the second data to be written and the second verification data into the target disk side device;
releasing the target on-chip cache data page;
unlocking the second stripe, and updating the verification update log state to an invalid state;
and sending a second write IO completion response signal to the host.
It should be noted that, in the embodiment of the present application, the system works in WB mode by default; it is nevertheless unavoidable that the system's standby power is at risk of accidental removal or failure, and in that scenario the system must respond appropriately to ensure data safety and integrity.
Therefore, the present application defines three possible operation modes for this failure scenario, whose transition relationship is shown in fig. 15. As can be seen from fig. 15, WT (Write Through) is a write-through mode: the host is not answered until the write IO data has been flushed to disk. When an unexpected power failure occurs, only the PUL log information is backed up; because the PUL information is an incomplete record, there is some probability of a write-hole risk, so after an unexpected power failure and re-power-up, the entire RAID group must undergo a stripe consistency check, and the check data of inconsistent stripes is recalculated and refreshed.
In the write-through mode, the IO processing flow diagram is shown in fig. 16; the PUL still needs to be recorded and updated, but the updated portions differ somewhat from those in the write-back mode.
Thus, steps 502-503 are described below taking a 4KB write IO instruction issued by the host in write-through mode as an example.
Firstly, the host interface module receives the second write IO instruction issued by the host, namely the IO task command in fig. 16, and forwards the second write IO instruction to the control page creation module;
Secondly, the control page creation module creates a full hardware task processing chain according to the second write IO instruction, wherein the full hardware task processing chain comprises a second write IO task corresponding to the second write IO instruction, and submits the second write IO task to the task queue management engine for scheduling;
Secondly, the stripe lock management module locks a second stripe corresponding to the second write IO task, allocates a check update log record slot and initializes the check update log state to a ready state, namely, locks the stripe in fig. 16, allocates a PUL log record slot and initializes the PUL log state to the STB.
Next, the dynamic memory allocation module allocates the corresponding target on-chip cache data pages from the on-chip cache pool based on the second write IO task; for clarity, in the write-through mode the target on-chip cache data pages are designated the fourth, fifth, and sixth target on-chip cache data pages.
Therefore, referring to fig. 16, taking the 4KB random write IO processing procedure in RAID5 mode as an example, three 4 KB target on-chip cache data pages X, Y and Z (12 KB in total) are allocated to the IO task, namely the fourth, fifth, and sixth target on-chip cache data pages.
The host data DMA module reads the second data to be written (i.e., the data to be written D' in fig. 16) corresponding to the second write IO task from the host into the local fourth target on-chip cache data page X.
Reads from the disk of the historical 4KB data (D) into the local fifth target on-chip cache data page Y and of the historical 4KB check data (P) into the local sixth target on-chip cache data page Z are initiated through the IO acceleration module; that is, the second historical write data corresponding to the target disk side device is stored into the fifth target on-chip cache data page, and the second historical check data is stored into the sixth target on-chip cache data page.
Further, the data in X (D'), Y (D) and Z (P) are read in, new check data (P') is calculated and stored over the sixth target on-chip cache data page Z; that is, the second check data corresponding to the second data to be written is generated from the second data to be written, the second historical write data and the second historical check data.
Further, the PUL state is first updated to SIT_WT, and then storage of the new data to be written (D') and the new check data (P') to the disk is initiated; that is, the verification update log state is updated from the preparation state to the stripe modification state in write-through mode, and the second data to be written and the second check data are written to the target disk side device.
After the data has been stored to the target disk side device, the second data to be written (D') and the second check data (P') corresponding to the second write IO task are already on disk, so the storage space, namely the target on-chip cache data pages X, Y and Z, can be released through the dynamic memory reclamation module.
The stripe lock management module then acts again: the stripe can be unlocked, the PUL log record slot reclaimed, and the PUL state refreshed to INV; that is, the second stripe is unlocked and the verification update log state is updated to the invalid state.
The response ending module then reclaims the IO control resources and sends a completion response to the host, that is, it sends the second write IO completion response signal to the host.
The host interface module replies to the host completion queue, i.e., the host knows that IO processing has been completed at this time.
In the embodiment of the invention, the storage control chip is applied to an IO processing system and carries an on-chip cache pool, so that data can be cached in several MB-sized on-chip SRAMs (Static Random Access Memory); the on-chip SRAM scheme not only provides a larger data access bandwidth (hundreds of GB/s) but also avoids integrating a high-bandwidth (tens of GB/s) DDR controller into the storage control chip, reducing design complexity and chip development cost (IP procurement, chip area, and integration verification). Through the write IO processing flow of the system, the write IO data is only temporarily stored locally and then immediately subjected to check update and flushing to disk, further releasing performance and reducing design complexity.
In addition, the system works in WB mode by default, but it is unavoidable that the system's standby power may be accidentally removed or fail; in that scenario the system must respond appropriately to ensure data safety and integrity, and in write-through mode the host is not answered until the write IO data has been flushed to disk. If an unexpected power failure occurs, only the PUL log information is backed up; because the PUL record is incomplete, there is some probability of a write-hole risk, so after the unexpected power failure and re-power-up, the entire RAID group must undergo a stripe consistency check, and the check data of inconsistent stripes is recalculated and refreshed.
Referring to fig. 6, a step flowchart of an input/output processing method provided by an embodiment of the present invention is shown, where the method may include:
Step 601, determining a working mode corresponding to the IO processing system;
Step 602, if the working mode is a suspension mode, completing the IO task corresponding to the current IO instruction issued by the host, and stopping responding to IO instructions issued by the host;
Step 603, if the working mode is a freezing mode, stopping responding to a fourth write IO instruction issued by the host.
It should be noted that, in the embodiment of the present application, if the working mode is a suspension mode (HM, Hang Mode), new IO requests from the host are no longer responded to after the in-flight IO processing is completed.
Optionally, after receiving the error reported by the storage control card, the host may trigger a "continue" command to make the storage control card work in WT mode.
If the working mode is a freeze mode (FM, Freeze Mode), write IOs are no longer processed and only read IOs issued by the host are responded to.
Optionally, after receiving the error reported by the storage control card, the host may trigger a "continue" command to make the storage control card work in WT mode.
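The four working modes discussed here (write-back, write-through, suspension/hang, freeze) can be summarized as a small dispatch sketch. The mode abbreviations follow the text; the function name and return conventions are assumptions introduced for illustration.

```python
def handle_io(mode: str, io_kind: str) -> str:
    """Illustrative admission decision for a new host IO under each working mode."""
    if mode in ("WB", "WT"):
        return "accept"            # normal write-back / write-through processing
    if mode == "HM":
        # Hang mode: in-flight IOs are finished, but no new host IO is answered.
        return "reject"
    if mode == "FM":
        # Freeze mode: write IOs are no longer processed; reads are still served.
        return "accept" if io_kind == "read" else "reject"
    raise ValueError(f"unknown mode: {mode}")
```

Usage: a host-interface front end would call this before enqueuing each new request, rejecting those the current mode forbids.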
In the embodiment of the invention, the storage control chip is applied to an IO processing system, and an on-chip cache pool is mounted on the chip, so that data can be cached in several MB-sized on-chip SRAMs (static random access memories). The on-chip SRAM scheme not only provides larger data access bandwidth (hundreds of GB/s), but also avoids integrating a high-bandwidth (tens of GB/s) DDR controller in the storage control chip, which reduces design complexity and chip development cost (IP purchasing, chip area, and integration verification). Through the write IO processing flow of the system in write-back mode, the write IO data is only temporarily stored in local storage and then immediately subjected to verification update and destaging to disk, which further releases performance and reduces design complexity.
Referring to fig. 7, a schematic structural diagram of an IO processing device according to an embodiment of the present invention is shown. The device is applied to a storage control chip in an IO processing system, an on-chip cache pool is mounted on the storage control chip, and the device includes:
A determining module 701, configured to determine a working mode corresponding to the IO processing system;
the storage module 702 is configured to store, if the working mode is a write-back mode, first data to be written corresponding to a first write IO instruction issued by a host to the on-chip cache pool according to the first write IO instruction;
And the parallel processing module 703 is configured to store the first data to be written from the on-chip cache pool to a target disk side device while the host receives a first write IO completion response signal.
In the embodiment of the invention, the storage control chip is applied to an IO processing system, and an on-chip cache pool is mounted on the chip, so that data can be cached in several MB-sized on-chip SRAMs (static random access memories). The on-chip SRAM scheme not only provides larger data access bandwidth (hundreds of GB/s), but also avoids integrating a high-bandwidth (tens of GB/s) DDR controller in the storage control chip, which reduces design complexity and chip development cost (IP purchasing, chip area, and integration verification). Through the write IO processing flow of the system in write-back mode, the write IO data is only temporarily stored in local storage and then immediately subjected to verification update and destaging to disk, which further releases performance and reduces design complexity.
The embodiment of the present invention also provides a communication device, as shown in fig. 8, including a processor 801, a communication interface 802, a memory 803, and a communication bus 804, where the processor 801, the communication interface 802, and the memory 803 communicate with each other through the communication bus 804.
A memory 803 for storing a computer program;
the processor 801, when executing the program stored in the memory 803, may implement the following steps:
Determining a corresponding working mode of the IO processing system;
If the working mode is a write-back mode, storing first data to be written corresponding to a first write IO instruction into the on-chip cache pool according to the first write IO instruction issued by a host;
And when the host receives a first write IO completion response signal, storing the first data to be written into the on-chip cache pool to the target disk side device.
Where the memory and the processor are connected by a bus, the bus may comprise any number of interconnected buses and bridges that link the various circuits of the one or more processors and the memory together. The bus may also connect various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. The bus interface provides an interface between the bus and the transceiver. The transceiver may be a single element or a plurality of elements, such as multiple receivers and transmitters, providing a unit for communicating with various other apparatus over a transmission medium. Data processed by the processor may be transmitted over a wired medium or, via an antenna, over a wireless medium; the antenna also receives data and forwards it to the processor. The processor is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory may be used to store data used by the processor in performing operations.
The communication bus mentioned for the above terminal may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is used in the figures, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the terminal and other devices.
The memory may include random access memory (Random Access Memory, RAM) or may include non-volatile memory (non-volatile memory), such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment of the present invention, a computer readable storage medium is provided, where instructions are stored, which when executed on a computer, cause the computer to perform the input/output processing method according to any one of the above embodiments.
In yet another embodiment of the present invention, a computer program product containing instructions that, when run on a computer, cause the computer to perform the input-output processing method of any of the above embodiments is also provided.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In this specification, the embodiments are described in a related manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, see the corresponding parts of the description of the method embodiments.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.
Claims (23)
1. The input/output processing method is characterized by being applied to a storage control chip in an IO processing system, wherein an on-chip cache pool is mounted on the storage control chip, and the method comprises the following steps of:
Determining a corresponding working mode of the IO processing system;
If the working mode is a write-back mode, storing first data to be written corresponding to a first write IO instruction into the on-chip cache pool according to the first write IO instruction issued by a host;
And when the host receives a first write IO completion response signal, storing the first data to be written into the on-chip cache pool to the target disk side device.
2. The method of claim 1, wherein storing the first data to be written corresponding to the first write IO instruction into the on-chip cache pool according to the first write IO instruction issued by the host includes:
Receiving a first write IO instruction issued by a host;
Creating a first full hardware task processing chain according to the first write IO instruction, wherein the first full hardware task processing chain comprises a first write IO task;
Locking a first stripe corresponding to the first write IO task, and setting a verification update log state corresponding to the first write IO task as a preparation state;
Distributing target on-chip cache data pages corresponding to the first write IO task from the on-chip cache pool, wherein the target on-chip cache data pages comprise a first target on-chip cache data page, a second target on-chip cache data page and a third target on-chip cache data page;
and storing the data to be written corresponding to the first write IO task to the first target on-chip cache data page.
3. The method of claim 1, wherein storing the first data to be written from the on-chip cache pool to a target disk side device while the host receives a first write IO completion response signal comprises:
sending a first write IO completion response signal to the host;
updating the verification update log state corresponding to the first write IO task from the preparation state to an early response state;
Storing first historical write data corresponding to the target disk side device to a second target on-chip cache data page, and storing first historical check data corresponding to the target disk side device to a third target on-chip cache data page;
generating first check data corresponding to the first data to be written according to the first data to be written, the first historical write data and the first historical check data;
And updating the verification update log state from the early response state to a stripe modification state in a write-back mode, and writing the first data to be written and the first check data into the target disk side device.
4. The method according to claim 3, wherein after the step of updating the verification update log state from the early response state to a stripe modification state in write-back mode and writing the first data to be written and the first check data to the target disk side device, the method comprises:
releasing the target on-chip cache data page;
Unlocking the first stripe, and updating the verification update log state to an invalid state.
5. The method of claim 2, wherein locking the first stripe corresponding to the first write IO task and setting the check update log state corresponding to the first write IO task to a ready state comprises:
locking a first stripe corresponding to the first write IO task;
And allocating a verification update log record slot to the first write IO task, and initializing the verification update log state to be a preparation state.
6. The method of claim 2, wherein the creating a first full hardware task processing chain from the first write IO instruction, the first full hardware task processing chain comprising a first write IO task comprises:
And creating a first full-hardware task processing chain according to the first write IO instruction, wherein the first full-hardware task processing chain comprises a first write IO task, and submitting the first write IO task to a task queue management engine for scheduling.
7. The method of claim 2, wherein the verification update log state is generated based on a verification update log, wherein a record unit format corresponding to the verification update log includes the verification update log state, a logical volume ID, and an IO size corresponding to the first write IO task.
8. The method of claim 1, wherein after the step of determining the corresponding operating mode of the IO processing system, the method comprises:
if the working mode is a write-back mode, initiating a data read request to the target disk side device according to a read IO instruction issued by the host;
And replacing the host side address corresponding to the read IO instruction with a storage control chip side address according to the data read request, so that the target disk side device directly sends the data to be read corresponding to the data read request to the host.
9. The method of claim 8, wherein if the working mode is a write-back mode, the initiating a data read request to the target disk side device according to a read IO instruction issued by the host comprises:
If the working mode is a write-back mode, receiving a read IO instruction issued by the host;
Creating a second full-hardware task processing chain according to the read IO instruction, wherein the second full-hardware task processing chain comprises a read IO task, and submitting the read IO task to a task queue management engine for scheduling;
And locking a second stripe corresponding to the read IO task, and initiating a data read request to the target disk side device according to the read IO task.
10. The method according to claim 9, wherein after the step of replacing the host side address corresponding to the read IO instruction with a storage control chip side address according to the data read request, so that the target disk side device directly transmits the data to be read corresponding to the data read request to the host, the method includes:
And unlocking a second stripe corresponding to the read IO task, and sending a read IO completion response signal to the host.
11. The method of claim 1, wherein after the step of determining the corresponding operating mode of the IO processing system, the method comprises:
If the working mode is a write-through mode, storing second data to be written corresponding to a second write IO instruction into the on-chip cache pool according to the second write IO instruction issued by the host;
and sending a second write IO completion response signal to the host when it is detected that the second data to be written stored in the on-chip cache pool has been stored to the target disk side device.
12. The method of claim 11, wherein storing the second data to be written corresponding to the second write IO instruction to the on-chip cache pool according to the second write IO instruction issued by the host includes:
receiving a second write IO instruction issued by the host;
creating a second full hardware task processing chain according to the second write IO instruction, wherein the second full hardware task processing chain comprises a second write IO task;
Locking a second stripe corresponding to the second write IO task, and setting a verification update log state corresponding to the second write IO task as a preparation state;
Distributing target on-chip cache data pages corresponding to the second write IO task from the on-chip cache pool, wherein the target on-chip cache data pages comprise a fourth target on-chip cache data page, a fifth target on-chip cache data page and a sixth target on-chip cache data page;
And storing the data to be written corresponding to the second write IO task to the fourth target on-chip cache data page.
13. The method of claim 11, wherein the sending a second write IO completion response signal to the host when it is detected that the second data to be written stored in the on-chip cache pool has been stored to the target disk side device comprises:
Storing second historical write data corresponding to the target disk side equipment to a fifth target on-chip cache data page, and storing second historical check data corresponding to the target disk side equipment to a sixth target on-chip cache data page;
Generating second check data corresponding to the second data to be written according to the second data to be written, the second historical write data and the second historical check data;
Updating the verification update log state from the preparation state to a stripe modification state in a write-through mode, and writing the second data to be written and the second check data into the target disk side device;
and sending a second write IO completion response signal to the host.
14. The method according to claim 13, wherein after the step of writing the second data to be written and the second check data to the target disk side device and sending a second write IO completion response signal to the host, the method includes:
releasing the target on-chip cache data page;
unlocking the second stripe, and updating the verification update log state to an invalid state.
15. The method of claim 1, wherein after the step of determining the corresponding operating mode of the IO processing system, the method comprises:
And if the working mode is a suspension mode, finishing the IO task corresponding to the current IO instruction issued by the host, and stopping responding to the IO instruction issued by the host.
16. The method of claim 1, wherein after the step of determining the corresponding operating mode of the IO processing system, the method comprises:
And if the working mode is a freezing mode, stopping responding to a fourth write IO instruction issued by the host.
17. An IO processing system, the IO processing system comprising: the system comprises a host, a storage control card and target disk side equipment, wherein the storage control card comprises a storage control chip, and an on-chip cache pool is mounted on the storage control chip;
The storage control chip is used for determining a working mode corresponding to the IO processing system; if the working mode is a write-back mode, storing first data to be written corresponding to a first write IO instruction into the on-chip cache pool according to the first write IO instruction issued by a host; and when the host receives a first write IO completion response signal, storing the first data to be written into the on-chip cache pool to the target disk side device.
18. The system of claim 17, wherein the IO processing system further comprises: an on-board integrated standby power module is integrated on the storage control card;
The on-board integrated standby power module is used for supplying power to the IO processing system when the IO processing system is detected to be in a power-down state.
19. The system of claim 17, wherein the IO processing system further comprises: the storage control chip is connected with external nonvolatile storage through a low-speed IO interface.
20. An IO processing apparatus, which is applied to a storage control chip in an IO processing system, wherein an on-chip cache pool is mounted on the storage control chip, the apparatus comprising:
the determining module is used for determining a working mode corresponding to the IO processing system;
The storage module is used for storing first data to be written corresponding to a first write IO instruction to the on-chip cache pool according to the first write IO instruction issued by the host if the working mode is a write-back mode;
And the parallel processing module is used for storing the first data to be written into the on-chip cache pool to the target disk side device when the host receives the first write IO completion response signal.
21. A communication device, comprising: a transceiver, a memory, a processor, and a program stored on the memory and executable on the processor;
The processor is configured to read a program in a memory to implement the input/output processing method according to any one of claims 1 to 16.
22. A readable storage medium storing a program, wherein the program, when executed by a processor, implements the input-output processing method according to any one of claims 1 to 16.
23. A computer program product comprising computer programs/instructions which when executed by a processor implement the input-output processing method of any of claims 1-16.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410691960.0A CN118276785B (en) | 2024-05-31 | 2024-05-31 | Input/output processing method, system, device, equipment, storage medium and product |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410691960.0A CN118276785B (en) | 2024-05-31 | 2024-05-31 | Input/output processing method, system, device, equipment, storage medium and product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118276785A true CN118276785A (en) | 2024-07-02 |
CN118276785B CN118276785B (en) | 2024-09-13 |
Family
ID=91634122
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410691960.0A Active CN118276785B (en) | 2024-05-31 | 2024-05-31 | Input/output processing method, system, device, equipment, storage medium and product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118276785B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105335247A (en) * | 2015-09-24 | 2016-02-17 | 中国航天科技集团公司第九研究院第七七一研究所 | Fault-tolerant structure and fault-tolerant method for Cache in high-reliability system chip |
CN105528180A (en) * | 2015-12-03 | 2016-04-27 | 浙江宇视科技有限公司 | Data storage method, apparatus and device |
EP3352023A1 (en) * | 2016-09-23 | 2018-07-25 | Apex Microelectronics Co., Ltd | Storage medium, data processing method and cartridge chip using this method |
CN108334401A (en) * | 2018-01-31 | 2018-07-27 | 武汉噢易云计算股份有限公司 | Realize that logical volume dynamically distributes and supports the system and method for dynamic migration of virtual machine |
US20190332325A1 (en) * | 2018-04-28 | 2019-10-31 | EMC IP Holding Company LLC | Method, device and computer readable medium of i/o management |
US20220114111A1 (en) * | 2019-06-21 | 2022-04-14 | Huawei Technologies Co.,Ltd. | Integrated chip and data processing method |
US20220253252A1 (en) * | 2019-10-31 | 2022-08-11 | Huawei Technologies Co., Ltd. | Data processing method and apparatus |
US20220350578A1 (en) * | 2021-04-29 | 2022-11-03 | Sap Se | Custom integration flow step for integration service |
CN115617742A (en) * | 2022-12-19 | 2023-01-17 | 苏州浪潮智能科技有限公司 | Data caching method, system, equipment and storage medium |
CN115793985A (en) * | 2023-01-09 | 2023-03-14 | 苏州浪潮智能科技有限公司 | Safe storage method, device, equipment and storage medium |
-
2024
- 2024-05-31 CN CN202410691960.0A patent/CN118276785B/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105335247A (en) * | 2015-09-24 | 2016-02-17 | 中国航天科技集团公司第九研究院第七七一研究所 | Fault-tolerant structure and fault-tolerant method for Cache in high-reliability system chip |
CN105528180A (en) * | 2015-12-03 | 2016-04-27 | 浙江宇视科技有限公司 | Data storage method, apparatus and device |
EP3352023A1 (en) * | 2016-09-23 | 2018-07-25 | Apex Microelectronics Co., Ltd | Storage medium, data processing method and cartridge chip using this method |
CN108334401A (en) * | 2018-01-31 | 2018-07-27 | 武汉噢易云计算股份有限公司 | Realize that logical volume dynamically distributes and supports the system and method for dynamic migration of virtual machine |
US20190332325A1 (en) * | 2018-04-28 | 2019-10-31 | EMC IP Holding Company LLC | Method, device and computer readable medium of i/o management |
US20220114111A1 (en) * | 2019-06-21 | 2022-04-14 | Huawei Technologies Co.,Ltd. | Integrated chip and data processing method |
US20220253252A1 (en) * | 2019-10-31 | 2022-08-11 | Huawei Technologies Co., Ltd. | Data processing method and apparatus |
US20220350578A1 (en) * | 2021-04-29 | 2022-11-03 | Sap Se | Custom integration flow step for integration service |
CN115617742A (en) * | 2022-12-19 | 2023-01-17 | 苏州浪潮智能科技有限公司 | Data caching method, system, equipment and storage medium |
CN115793985A (en) * | 2023-01-09 | 2023-03-14 | 苏州浪潮智能科技有限公司 | Safe storage method, device, equipment and storage medium |
Non-Patent Citations (2)
Title |
---|
ZHANG, TIANSHENG et al.: "Dynamic Cache Pooling in 3D Multicore Processors", ACM Journal on Emerging Technologies in Computing Systems, vol. 12, no. 2, 2 September 2015 (2015-09-02), XP058070726, DOI: 10.1145/2700247 *
YIN YANG; LIU ZHENJUN; XU LU: "A Network Storage System Cache Based on Disk Media", Journal of Software, no. 10, 22 December 2009 (2009-12-22), pages 2752 - 2765 *
Also Published As
Publication number | Publication date |
---|---|
CN118276785B (en) | 2024-09-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101630583B1 (en) | Smart memory buffers | |
US10891078B2 (en) | Storage device with a callback response | |
CN110998562B (en) | Spacing nodes in a distributed cluster system | |
US11809707B2 (en) | File operations in a distributed storage system | |
US10303560B2 (en) | Systems and methods for eliminating write-hole problems on parity-based storage resources during an unexpected power loss | |
CN112912851B (en) | System and method for addressing, and media controller | |
US20210303181A1 (en) | Data processing method and apparatus | |
US10565108B2 (en) | Write-back cache for storage controller using persistent system memory | |
US10416895B2 (en) | Storage devices managing duplicated data based on the number of operations | |
CN111857540A (en) | Data access method, device and computer program product | |
US20060224639A1 (en) | Backup system, program and backup method | |
US9298636B1 (en) | Managing data storage | |
CN111580757B (en) | Data writing method and system and solid state disk | |
CN118276785B (en) | Input/output processing method, system, device, equipment, storage medium and product | |
CN115904795A (en) | Data storage method and device in storage system | |
US11630734B2 (en) | Scale-out storage system and storage control method | |
CN107562654B (en) | IO command processing method and device | |
CN107562639B (en) | Erase block read request processing method and device | |
CN105068896A (en) | Data processing method and device based on RAID backup | |
TW202401232A (en) | Storage system and method of operating storage system | |
CN115826882A (en) | Storage method, device, equipment and storage medium | |
CN114661230A (en) | RAID storage system and SSD RAID acceleration command design method | |
US20190332533A1 (en) | Maintaining multiple cache areas | |
US11449261B2 (en) | Low latency data mirroring in a large scale storage system | |
EP4273703A1 (en) | Computing system generating map data, and method of operating the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||