KR101462053B1 - Apparatus For File Input/Output - Google Patents
- Publication number
- KR101462053B1
- Authority
- KR
- South Korea
- Prior art keywords
- file
- input
- disk
- data
- output
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0629—Configuration or reconfiguration of storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/0643—Management of files
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
- G06F3/0674—Disk device
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
The present invention relates to a file input/output device.
It should be noted that the following description merely provides background information related to the present embodiment and does not constitute prior art.
When a plurality of processes or threads request input/output (I/O) to the same disk at the same time, a bottleneck occurs while data is read from and written to the disk, which degrades system performance. The operating system must process the multiple input/output requests sequentially, so the more I/O is pending on a disk, the longer each process or thread waits for its I/O to complete.
Therefore, there is a need to improve the throughput of the overall system and of individual processes or threads by reducing the amount of I/O to the disk and reducing I/O latency.
The main object of the present embodiment is to provide a file input/output device that improves the performance of input/output on a disk.
According to an aspect of the present invention, there is provided a file input/output device comprising: a file input/output request unit for receiving a file input/output request including a path of a file, a start offset of the requested data in the file, and a length of the requested data; at least one disk storing a file; an input/output request distribution unit for distributing the input/output request of the file to the disk in which the requested file is stored; an input/output storage structure set unit including at least one disk input/output storage structure corresponding to each disk for sequentially processing the input/output requests distributed for each disk; a disk management unit including at least one disk manager for extracting the input/output request of the file from the disk I/O storage structure and reading data from the disk on which the file is stored; and a file data manager for receiving the data corresponding to the input/output request of the file from the disk manager.
According to another aspect of the present invention, there is provided a file input/output method comprising: receiving an input/output request of a file; distributing the input/output request of the file to a disk input/output storage structure corresponding to the disk storing the file; extracting the input/output request of the file from the disk input/output storage structure, reading data corresponding to the input/output request from the disk, and storing the read data in a buffer of a disk manager corresponding to the disk; storing the data from the buffer of the disk manager in a block data buffer in a file data manager corresponding to the input/output request of the file; and transferring the data corresponding to the input/output request of the file stored in the block data buffer.
As described above, according to the present embodiment, even if a plurality of processes or threads request input/output of files stored on the same disk, buffered data can be served from memory without accessing the disk, which relieves the bottleneck caused by disk input/output. Even when multiple processes or threads request I/O to the same disk at the same time, the requests are processed sequentially through the disk I/O storage structure, improving the performance of the entire system as well as of each process or thread.
FIG. 1A is an example of a state in which a plurality of processes perform a plurality of input / output operations on a plurality of disks.
FIG. 1B is an exemplary diagram illustrating a state in which a plurality of processes according to the present embodiment perform a plurality of input/output operations on a plurality of disks.
FIG. 2 is a configuration diagram of a file input/output device according to the present embodiment.
FIGS. 3A and 3B are flowcharts of a file input/output process in the file input/output device according to the present embodiment.
Hereinafter, the present embodiment will be described in detail with reference to the accompanying drawings.
Hereinafter, a queue is used as the data structure for storing input/output requests to a file. However, the structure for storing input/output requests is not limited to a queue; various data structures may be used. The data structure described here is a queue with first-in-first-out (FIFO) behavior, but strict FIFO ordering is not required: an input/output request inserted later may be performed before earlier requests, for example according to priority. In addition, the disk described below may be any of various kinds of storage devices, including a hard disk.
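The storage structure just described is a FIFO queue by default, with the option of letting a higher-priority request overtake earlier ones. A minimal sketch of such a structure in Python (the patent prescribes no implementation; the class and method names are illustrative) could look like:

```python
import heapq
import itertools

class DiskIOQueue:
    """Per-disk I/O request store: FIFO by default, but a lower
    priority value lets a later request be served first."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # insertion order breaks priority ties -> FIFO

    def insert(self, request, priority=0):
        heapq.heappush(self._heap, (priority, next(self._seq), request))

    def extract(self):
        _, _, request = heapq.heappop(self._heap)
        return request

q = DiskIOQueue()
q.insert("read a.dat @0")
q.insert("read b.dat @0")
q.insert("urgent read c.dat @0", priority=-1)  # inserted last, served first
```

With equal priorities the counter preserves arrival order, giving plain FIFO behavior; the priority argument models the "inserted later but performed earlier" case mentioned in the text.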
FIG. 1A is an example of a state in which a plurality of processes perform a plurality of input / output operations on a plurality of disks.
FIG. 1A illustrates a system in which a plurality of processes or threads access the disks on which files are located and generate a plurality of input/output (I/O) requests at the same time. In this case, each input/output on a disk incurs a time delay, and a process or thread must wait for the other operations in the system to complete their input/output. If a large amount of input/output is waiting in the system, a bottleneck occurs.
FIG. 1B is an exemplary diagram illustrating a state in which a plurality of processes according to the present embodiment perform a plurality of input/output operations on a plurality of disks.
In FIG. 1B, each I/O request is inserted into the disk I/O queue corresponding to its disk, and file input/output is performed sequentially from that queue. When the file input/output device according to the present embodiment includes a plurality of disks, a disk input/output queue may be provided for each disk. The disk I/O queue ensures that I/O is performed sequentially on each disk, and buffering the disk's input/output data prevents a large number of conflicting accesses, thereby reducing system delay and improving the performance of each process.
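The per-disk queue arrangement of FIG. 1B can be sketched as follows. This is a simplified illustration, not the patented implementation; the `FILE_LOCATION` table and all names are hypothetical stand-ins for however files are mapped to disks:

```python
from collections import defaultdict

# Hypothetical mapping from file path to the disk storing it.
FILE_LOCATION = {"/data/a.dat": "disk0", "/data/b.dat": "disk1", "/data/c.dat": "disk0"}

class IORequestDistributor:
    """Routes each request to the I/O queue of the disk holding the
    file, so each disk serves its requests sequentially."""

    def __init__(self):
        self.disk_queues = defaultdict(list)  # disk id -> FIFO list of requests

    def distribute(self, path, offset, length):
        disk = FILE_LOCATION[path]
        self.disk_queues[disk].append((path, offset, length))
        return disk

d = IORequestDistributor()
d.distribute("/data/a.dat", 0, 4096)
d.distribute("/data/b.dat", 0, 4096)
d.distribute("/data/c.dat", 8192, 4096)
# disk0's queue now holds the a.dat and c.dat requests, in arrival order
```

A worker per disk would then drain its own queue, which is what serializes access to each physical device.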
FIG. 2 is a configuration diagram of a file input/output device according to the present embodiment.
Referring to FIG. 2, the file input/output device includes a file input/output request unit, at least one disk, an input/output request distribution unit, an input/output storage structure set unit, a disk management unit, and a file data manager.
The file input/output request unit receives a file input/output request including the path of a file, the start offset of the requested data in the file, and the length of the requested data.
Meanwhile, the input/output request distribution unit distributes each input/output request to the disk on which the requested file is stored.
The input/output storage structure set unit includes a disk input/output storage structure corresponding to each disk, in which the distributed input/output requests are processed sequentially.
The disk management unit includes at least one disk manager, each of which extracts input/output requests of the file from the corresponding disk input/output storage structure and reads the requested data from the disk.
The file data manager receives the data corresponding to the input/output request of the file from the disk manager.
The operating system generally performs input/output in units of blocks of a predetermined size; for the description of this embodiment, input/output in units of 128 Kbytes is used as an example. Even if a process requests less than 128 Kbytes, the operating system performs a 128-Kbyte input/output at the disk, and for larger requests it performs the 128-Kbyte block input/output several times. When a process requests input/output at a specific location of a specific file, it is very likely to request the next block adjacent to that location soon afterwards. By buffering the data of the next block in memory, this characteristic can be exploited to reduce the number of actual disk accesses, and serving input/output from memory reduces the system delay caused by input/output.
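The read-ahead idea described above, serving a block from memory and prefetching the adjacent next block, can be sketched as follows. The 128-Kbyte unit follows the example in the text, while the class and callback names are illustrative assumptions:

```python
BLOCK = 128 * 1024  # the 128-Kbyte I/O unit used in the description

class ReadAheadBuffer:
    """Serves block reads from memory and prefetches the next adjacent
    block, exploiting the sequential-access pattern described above."""

    def __init__(self, read_block):
        self._read_block = read_block  # callback: block index -> bytes (hits the disk)
        self._cache = {}               # block index -> cached data
        self.disk_reads = 0            # count of actual disk accesses

    def _fetch(self, idx):
        if idx not in self._cache:
            self._cache[idx] = self._read_block(idx)
            self.disk_reads += 1

    def read(self, idx):
        self._fetch(idx)        # the requested block (from cache if possible)
        data = self._cache[idx]
        self._fetch(idx + 1)    # prefetch the adjacent next block
        return data

buf = ReadAheadBuffer(lambda i: b"x" * BLOCK)
buf.read(0)  # hits the disk for block 0 and prefetches block 1
buf.read(1)  # block 1 is already in memory; only block 2 is prefetched
```

After the two reads above, only three disk accesses have occurred even though each read touches two blocks, which is the reduction in actual disk accesses the paragraph describes.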
According to an experiment conducted for this embodiment, in which the size of the data file was 724 Mbytes and 10 processes simultaneously requested input/output on the same disk, the conventional method achieved an input/output speed of 27 Mbytes/sec.
The disk manager reads data from the disk in units of a predetermined input/output data size, even if the request is smaller than that size; when the size of the requested data is larger than the predetermined input/output data size, the disk manager performs the input/output multiple times. In that case, the disk manager inserts the input/output request to be performed next into the disk I/O queue corresponding to the disk manager and waits to perform the remaining input/output.
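The chunking behavior described here, reading a full predetermined unit even for small requests and issuing several unit-sized reads for large ones, can be sketched as follows; the chunk size and function name are illustrative assumptions, not taken from the patent:

```python
CHUNK = 128 * 1024  # illustrative predetermined input/output data size

def split_request(offset, length, chunk=CHUNK):
    """Break one file I/O request into chunk-aligned disk reads.
    A request smaller than one chunk still costs a full-chunk read;
    a larger request becomes several sequential chunk reads."""
    reads = []
    start = (offset // chunk) * chunk  # align down to a chunk boundary
    end = offset + length
    while start < end:
        reads.append((start, chunk))   # each disk read is one full chunk
        start += chunk
    return reads

# A 300-Kbyte request starting at offset 100 Kbytes touches four
# 128-Kbyte chunks, so it becomes four unit-sized disk reads.
split_request(100 * 1024, 300 * 1024)
```

In the scheme described in the text, each of these unit-sized reads after the first would be re-inserted into the disk I/O queue rather than issued in one burst, so other requests can interleave between them.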
When the disk manager completes the input/output, it transfers the data to the file data manager.
The file data manager stores the received data in its block data buffer in units of a predetermined block size.
The file input/output request unit transfers the data corresponding to the input/output request of the file from the block data buffer to the requesting process or thread.
FIGS. 3A and 3B are flowcharts of a file input/output process in the file input/output device according to the present embodiment.
Referring to FIGS. 3A and 3B, the file input/output request unit receives a file input/output request including the path of the file, the start offset of the requested data, and the length of the requested data.
In step S330, it is determined whether additional data is requested for a file that is already open or whether a file is newly opened to request data. If the file is newly opened, the file input/output request unit opens the file, generates a file data manager, and transfers the file input/output request to it.
If the requested data is stored in the block data buffer of the file data manager, the data is transferred from the block data buffer. Otherwise, it is checked whether the data is cached in the buffer of the disk manager corresponding to the disk on which the file is located.
If the data is not cached in the disk manager, the input/output request of the file is inserted into the disk input/output storage structure corresponding to the disk storing the file.
The disk manager corresponding to the disk on which the file is stored extracts the input/output request from the disk input/output storage structure and reads the requested data from the disk.
If the size of the requested data exceeds the predetermined buffer size of the disk manager, the disk manager performs several input/output operations of the predetermined size, inserting the request for the portion exceeding the buffer size back into the disk input/output storage structure.
When the disk manager has read all the data from the disk and stored it in its buffer in step S368, it transfers the data to the file data manager, which stores it in the block data buffer and transfers the data corresponding to the request to the file input/output request unit.
The foregoing description merely illustrates the technical idea of the present embodiment, and various modifications and changes may be made by those skilled in the art without departing from the essential characteristics of the embodiments. Therefore, the present embodiments are to be construed as illustrative rather than restrictive, and the scope of the technical idea of the present embodiment is not limited by these embodiments. The scope of protection of the present embodiment should be construed according to the following claims, and all technical ideas within the scope of equivalents thereof should be construed as being included in the scope of the present invention.
200 File I/O device
220 I/O request distribution unit
230 I/O storage structure set unit
240 Disk management unit
260 File data manager
Claims (11)
At least one disk storing a file;
An input / output request distribution unit for distributing an input / output request of the file for each disk in which an input / output requested file is stored;
An input / output storage structure set including at least one disk input / output storage structure corresponding to each disk for sequentially processing input / output requests distributed for each disk;
A disk management unit including at least one disk manager for extracting an input/output request of the file from the disk I/O storage structure and reading data from the disk on which the file is stored; And
A file data manager for receiving data corresponding to an input / output request of the file from the disk manager;
The file input / output device comprising:
Wherein the disk management unit includes a disk manager corresponding to each disk, and the input / output storage structure aggregation unit includes a disk input / output storage structure corresponding to each disk manager.
Wherein the file input/output request unit
opens the file requested to be input/output,
generates the file data manager,
and transfers the file input/output request to the file data manager.
Wherein the input/output request distribution unit identifies the disk in which the file requested to be input/output is stored,
and inserts the input/output request of the file into the disk input/output storage structure, in the input/output storage structure set unit, corresponding to the disk storing the file.
The disk manager allocates a buffer of a predetermined buffer size of a disk manager for storing data,
Wherein the disk manager reads data from the disk corresponding to the disk manager with the predetermined buffer size and stores the read data in the buffer.
Wherein the disk manager inserts a request for the portion exceeding the predetermined buffer size into the disk I/O storage structure corresponding to the disk manager when the length of the requested data exceeds the predetermined buffer size.
Wherein the file data manager includes a block data buffer for storing data corresponding to an input / output request of the file,
Wherein input/output to the block data buffer is performed at a predetermined block size of the block data buffer.
The file data manager,
Receiving data corresponding to an input / output request of the file from the disk manager by the predetermined block size,
When the length of the requested data is larger than the predetermined block size, repeatedly receives data at the predetermined block size until the size of the data received from the disk manager is equal to or greater than the length of the requested data.
The file data manager,
When the data corresponding to the input/output request of the file is stored in the block data buffer, transfers the data corresponding to the input/output request of the file from the block data buffer.
The file data manager,
When data corresponding to an input/output request of the file is not stored in the block data buffer but is present in a buffer in the disk manager corresponding to the disk where the file is located, receives the data from the disk manager, stores it in the block data buffer, and transfers the data to the file input/output request unit.
Distributing an input / output request of the file to a disk input / output storage structure corresponding to a disk storing the file;
Extracting an input / output request of the file from the disk input / output storage structure, reading data corresponding to an input / output request of the file from the disk, and storing the read data in a buffer of a disk manager corresponding to the disk;
Storing the data from the buffer of the disk manager in a block data buffer in a file data manager corresponding to the input/output request of the file; And
A step of transmitting data corresponding to an input / output request of the file stored in the block data buffer
And outputting the file.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020130131787A KR101462053B1 (en) | 2013-10-31 | 2013-10-31 | Apparatus For File Input/Output |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020130131787A KR101462053B1 (en) | 2013-10-31 | 2013-10-31 | Apparatus For File Input/Output |
Publications (1)
Publication Number | Publication Date |
---|---|
KR101462053B1 true KR101462053B1 (en) | 2014-11-17 |
Family
ID=52290643
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020130131787A KR101462053B1 (en) | 2013-10-31 | 2013-10-31 | Apparatus For File Input/Output |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101462053B1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07248883A (en) * | 1994-03-11 | 1995-09-26 | Nec Corp | Prior buffer flash system |
KR20110088287A (en) * | 2010-01-28 | 2011-08-03 | 주식회사 우리씨에스티 | The high performance video on demand server using storage access scheduling technology |
-
2013
- 2013-10-31 KR KR1020130131787A patent/KR101462053B1/en active IP Right Grant
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07248883A (en) * | 1994-03-11 | 1995-09-26 | Nec Corp | Prior buffer flash system |
KR20110088287A (en) * | 2010-01-28 | 2011-08-03 | 주식회사 우리씨에스티 | The high performance video on demand server using storage access scheduling technology |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10649969B2 (en) | Memory efficient persistent key-value store for non-volatile memories | |
CN112527730A (en) | System, apparatus and method for processing remote direct memory access operations with device attached memory | |
WO2016011894A1 (en) | Message processing method and apparatus | |
US8255593B2 (en) | Direct memory access with striding across memory | |
US20180157418A1 (en) | Solid state drive (ssd) memory cache occupancy prediction | |
US8429315B1 (en) | Stashing system and method for the prevention of cache thrashing | |
US10657087B2 (en) | Method of out of order processing of scatter gather lists | |
US9336153B2 (en) | Computer system, cache management method, and computer | |
US10805392B2 (en) | Distributed gather/scatter operations across a network of memory nodes | |
CN107154013A (en) | Additional card, content delivery network server and execution method for image procossing | |
WO2019061270A1 (en) | Data caching device and control method therefor, data processing chip, and data processing system | |
CN102314400B (en) | Method and device for dispersing converged DMA (Direct Memory Access) | |
US20190146935A1 (en) | Data transfer device, arithmetic processing device, and data transfer method | |
US9311044B2 (en) | System and method for supporting efficient buffer usage with a single external memory interface | |
WO2016019554A1 (en) | Queue management method and apparatus | |
CN105589664A (en) | Virtual storage high-speed transmission method | |
WO2017210015A1 (en) | Improving throughput in openfabrics environments | |
KR102523418B1 (en) | Processor and method for processing data thereof | |
US9781225B1 (en) | Systems and methods for cache streams | |
US9137167B2 (en) | Host ethernet adapter frame forwarding | |
CN106201918A (en) | A kind of method and system quickly discharged based on big data quantity and extensive caching | |
US10061513B2 (en) | Packet processing system, method and device utilizing memory sharing | |
KR102334473B1 (en) | Adaptive Deep Learning Accelerator and Method thereof | |
KR101462053B1 (en) | Apparatus For File Input/Output | |
CN110399219B (en) | Memory access method, DMC and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant | ||
FPAY | Annual fee payment |
Payment date: 20171102 Year of fee payment: 4 |
|
FPAY | Annual fee payment |
Payment date: 20181105 Year of fee payment: 5 |