
WO2017045545A1 - Multi-storage-disk load management method and device, file system, and storage network system - Google Patents


Info

Publication number
WO2017045545A1
WO2017045545A1 (PCT application PCT/CN2016/098071)
Authority
WO
WIPO (PCT)
Prior art keywords
storage
storage disk
disk
file
file access
Prior art date
Application number
PCT/CN2016/098071
Other languages
English (en)
French (fr)
Inventor
张斌 (Zhang Bin)
陈颖川 (Chen Yingchuan)
张宇 (Zhang Yu)
王井贵 (Wang Jinggui)
Original Assignee
中兴通讯股份有限公司 (ZTE Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2017045545A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers

Definitions

  • the embodiment of the invention relates to the field of communications, and in particular, to a multi-storage disk load management method, device, file system and storage network system.
  • multi-disk: mechanical hard disks and/or solid-state disks, hereinafter collectively referred to as "multi-disk".
  • a great deal of design and implementation work has gone into building multi-disk, load-balanced, high-concurrency, high-throughput storage service systems.
  • most traditional approaches provide a metadata area (i.e., metadata) and achieve balanced access to files through it: the locations of files across the multiple disks are evenly mapped in the metadata area, and every file path lookup must go through it.
  • the metadata controller consumes a large amount of CPU resources when the storage system is busy (raising CPU performance requirements and cost); at the same time, as the number of files grows sharply, the metadata area consumes a great deal of valuable physical memory (causing memory expansion and increased cost). Even with the most streamlined and efficient data structures, the memory overhead of the metadata area cannot be ignored. On the other hand, if the metadata area is corrupted, or the metadata controller crashes, the entire system is paralyzed.
  • in short, implementing multi-disk load balancing through a metadata area suffers from high overhead, high cost, and whole-system failure when the metadata area fails.
  • the main technical problem to be solved by the embodiments of the present invention is to provide a multi-storage-disk load management method and apparatus that avoid the high overhead, high cost, and metadata-area single point of failure of the related metadata-based approach to multi-disk load balancing.
  • an embodiment of the present invention provides a multi-storage disk load management method, including:
  • selecting, from the storage disks by using a hash algorithm according to the file full path information and the identification identifier of each storage disk, a target storage disk to be accessed by the file access request includes:
  • the identification identifiers of the storage disks are processed by a hash algorithm to obtain a storage medium factor of each storage disk;
  • selecting, from the storage disks according to the integration factor corresponding to each storage disk, a target storage disk to be accessed by the file access request includes:
  • the storage disk corresponding to the selection factor having the largest value is selected as the target storage disk.
  • integrating the file full path factor with the storage medium factor of each storage disk includes: performing XOR processing on the file full path factor and the storage medium factor of each storage disk, respectively, to obtain an integration factor corresponding to each storage disk.
  • the method further includes: monitoring an operating state of each storage disk, and replacing an abnormal storage disk according to the monitoring result.
  • the identification identifier is a physical location identification identifier of each storage disk.
  • the physical location identification identifier includes a frame number of a frame in which the storage disk is located and a slot number of a slot in which the storage disk is located.
  • the embodiment of the present invention further provides a multi-storage disk load management device, including:
  • a multi-disk location management module configured to obtain a storage disk list, where the storage disk list includes an identification identifier of each storage disk;
  • a request receiving module configured to receive a file access request including file full path information;
  • a multi-disk load storage management module configured to select, by using a hash algorithm according to the file full path information in the file access request and the identification identifier of each storage disk, one of the storage disks as the target storage disk to be accessed by the file access request.
  • the multi-disk load storage management module includes a calculation sub-module, an integration sub-module, and a selection sub-module;
  • the calculation sub-module is configured to process the identification identifier of each storage disk with a hash algorithm to obtain the storage medium factor of each storage disk, and to process the file full path information with a hash algorithm to obtain the file full path factor;
  • the integration sub-module is configured to integrate the file full path factor with the storage medium factor of each storage disk to obtain an integration factor corresponding to each storage disk;
  • the selection submodule is configured to select one of the storage disks as the target storage disk accessed by the file access request according to an integration factor corresponding to each storage disk.
  • the selecting sub-module selecting, from the storage disks according to the integration factor corresponding to each storage disk, a target storage disk to be accessed by the file access request includes:
  • the integration factor corresponding to each storage disk is processed by a hash algorithm to obtain a selection factor corresponding to each storage disk;
  • the storage disk corresponding to the selection factor having the largest value is selected as the target storage disk.
  • a status monitoring module is further included to monitor the working status of each storage disk.
  • the identification identifier is a physical location identification identifier of each storage disk.
  • an embodiment of the present invention further provides a distributed file system, including a file access client, a file access interface, a plurality of storage disks, and a multiple storage disk load management device as described above;
  • the multiple storage disk load management device receives the file access request and selects one of the plurality of storage disks as a target storage disk accessed by the file access request.
  • an embodiment of the present invention further provides a distributed storage network system, including a file access client, a file access interface, a plurality of storage nodes, and a multiple storage disk load management device as described above; each storage node contains multiple storage disks;
  • the file access client sends a file access request to the multi-storage-disk load management device through the file access interface;
  • the multi-storage-disk load management device receives the file access request, selects one of the plurality of storage nodes as a target storage node according to the file access request, and selects, from the plurality of storage disks of the target storage node, a target storage disk to be accessed by the file access request.
  • a computer storage medium is further provided, and the computer storage medium may store an execution instruction for executing the multiple storage disk load management method in the foregoing embodiment.
  • the multi-storage-disk load management method, apparatus, file system and storage network system provided by the embodiments of the present invention first obtain a storage disk list including the identification identifier of each storage disk; then, after a file access request is received, the file full path information in the request is extracted;
  • a target storage disk to be accessed by the file access request is then selected from the storage disks by a hash algorithm according to the file full path information and the identification identifier of each storage disk.
  • in other words, the embodiments of the present invention implement a multi-disk load balancing mechanism using a hash algorithm, which can distribute massive numbers of files very evenly across multiple disks without any metadata. The system structure becomes very simple and efficient, places low demands on hardware (mainly memory), and, having no metadata, has no single point of failure caused by metadata corruption, which improves the security of system storage.
  • the embodiments of the present invention can also monitor the status of each storage disk and replace failed storage disks, ensuring that files are stored normally; in terms of elastic expansion, one only needs to add storage disks, and both the capacity and the throughput of the entire system improve.
  • FIG. 1 is a schematic flowchart of a multi-storage disk load management method according to Embodiment 1 of the present invention
  • FIG. 2 is a schematic diagram of a process of selecting a target storage disk by using a hash algorithm according to Embodiment 1 of the present invention
  • FIG. 3 is a schematic diagram of a process of selecting a target storage disk according to an integration factor according to Embodiment 1 of the present invention
  • FIG. 4 is a schematic structural diagram 1 of a multi-storage disk load management device according to Embodiment 2 of the present invention.
  • FIG. 5 is a schematic structural diagram 2 of a multi-storage disk load management device according to Embodiment 2 of the present invention.
  • FIG. 6 is a schematic structural diagram 3 of a multi-storage disk load management device according to Embodiment 2 of the present invention.
  • FIG. 7 is a schematic structural diagram of a distributed file system according to Embodiment 3 of the present invention;
  • FIG. 8 is a schematic diagram of the mapping between storage disks and mount points according to Embodiment 3 of the present invention;
  • FIG. 9 is a schematic flowchart of a multi-storage disk load management method according to Embodiment 3 of the present invention;
  • FIG. 10 is a schematic structural diagram of a distributed storage network system according to Embodiment 4 of the present invention.
  • Embodiment 1:
  • the hash algorithm is used to manage the load of the multi-storage disk.
  • the system architecture is very simple, and no additional metadata area is needed.
  • only servers and storage disks (i.e., mechanical hard disks and/or solid-state disks) are needed.
  • the access performance is high: the original metadata retrieval operation evolves into a hash computation. Whether there are hundreds of millions or billions of files, a single fast hash computation yields the physical storage location of a file.
  • the multi-storage disk load management method provided in this embodiment is described below by taking a file storage process as an example. Referring to FIG. 1, the method includes:
  • Step 101 Acquire a storage disk list, where the storage disk list includes an identification identifier of each storage disk;
  • Step 102 Receive a file access request, where the file access request includes file full path information
  • Step 103 Select, from the storage disks by using a hash algorithm according to the file full path information and the identification identifier of each storage disk, a target storage disk to be accessed by the file access request;
  • Step 104 Perform a corresponding file access operation on the target storage disk.
  • the file access request in this embodiment may be a file storage request or a file read request; for a file storage request, the corresponding file is written to the target storage disk; for a file read request, the corresponding file is read from the target storage disk.
  • a hash algorithm is used to select, from the storage disks, a target storage disk to be accessed by the file access request. Referring to FIG. 2, the process includes:
  • Step 201 The identification identifier of each storage disk is processed by a hash algorithm to obtain the storage medium factor of each storage disk; here, the identification identifier may be mapped to a positive integer by the hash algorithm, though mapping to other forms is not excluded, as long as the even-distribution property of the hash algorithm can be exploited;
  • Step 202 The file full path information is processed by the hash algorithm to obtain the file full path factor; here, too, the information may be mapped to a positive integer, and other forms are not excluded, as long as the even-distribution property of the hash algorithm can be exploited; the specific hash algorithm may be chosen flexibly, as long as the above purpose is achieved;
  • Step 203 Integrate the obtained file full path factor with the storage medium factor of each storage disk to obtain an integration factor corresponding to each storage disk; that is, there are as many integration factors as there are storage disks;
  • Step 204 Select, from the storage disks according to the integration factor corresponding to each storage disk, a target storage disk to be accessed by the file access request.
  • the hash algorithm in the above steps 201 and 202 can use the same algorithm.
  • the integration process in the above step 203 may specifically perform an exclusive OR process on the obtained file full path factor and the storage medium factor of each storage disk to obtain an integration factor corresponding to each storage disk.
  • Step 301 The integration factor corresponding to each storage disk is processed by a hash algorithm to obtain a selection factor corresponding to each storage disk; the algorithm used in this step may be the same as that used in steps 201 and 202;
  • Step 302 Select a storage disk corresponding to the selection factor with the largest value as the target storage disk.
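Taken together, steps 201-204 and 301-302 describe a highest-value hash selection (closely resembling rendezvous, or highest-random-weight, hashing). A minimal Python sketch follows; the patent does not name a concrete hash function, so MD5 is used here purely as a stand-in with good uniformity, and the frame-slot disk identifiers are hypothetical:

```python
import hashlib

def stable_hash(s: str) -> int:
    """Map a string to a discrete, evenly distributed positive integer.
    Any uniform hash would do; MD5 is a stand-in, not the patent's choice."""
    return int(hashlib.md5(s.encode("utf-8")).hexdigest(), 16)

def select_target_disk(file_path: str, disk_ids: list) -> str:
    path_factor = stable_hash(file_path)               # step 202: file full path factor
    best_disk, best_selection = None, -1
    for disk_id in disk_ids:
        medium_factor = stable_hash(disk_id)           # step 201: storage medium factor
        integration = path_factor ^ medium_factor      # step 203: XOR integration factor
        selection = stable_hash(str(integration))      # step 301: selection factor
        if selection > best_selection:                 # step 302: largest value wins
            best_disk, best_selection = disk_id, selection
    return best_disk

# Same path always resolves to the same disk, so reads find what writes stored.
disks = ["0-1", "0-2", "1-1", "1-2"]
target = select_target_disk("/video/movies/a.mp4", disks)
assert target == select_target_disk("/video/movies/a.mp4", disks)
```

Because each disk's selection factor is computed independently of the other disks, adding or removing a disk changes the outcome only for files whose winning disk changed, which is what makes the rebalancing described later cheap.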
  • the identification identifier in the storage disk list is the physical location identification identifier of each storage disk. The storage device may specifically include a storage server and/or a disk cluster (JBOD: Just a Bunch Of Disks); both the storage server and the disk cluster contain multiple storage disks, and each storage disk may be a solid-state disk or a mechanical hard disk.
  • an application for file access, that is, a file access client, is further provided on the storage device.
  • the storage server and the JBODs may be numbered; for example, the storage server is frame number 0, the first disk cluster is frame number 1, the second disk cluster is frame number 2, and so on, with the Nth disk cluster being frame number N;
  • the distributed file system daemon numbers the slot of each storage disk (mechanical hard disk or solid-state disk) in the storage server, and likewise numbers the slot of each storage disk in each disk cluster;
  • thus each storage disk on the storage device has a unified and unique physical location number, "frame number + slot number", called the physical location identification identifier of the storage disk. All storage disks on the storage server and the disk clusters are sorted first by the frame number in which the disk is located, and then by the slot number within the frame, forming a one-dimensional list of storage disk physical location identification identifiers, that is, the storage disk list.
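The list construction above can be sketched as follows; the frame/slot layout is a hypothetical example, and the identifiers use a "frame-slot" string form purely for illustration:

```python
def build_disk_list(layout: dict) -> list:
    """layout maps frame number -> slot numbers that currently hold a disk.
    Returns the one-dimensional storage disk list, sorted by frame number
    and then by slot number within each frame."""
    disk_list = []
    for frame in sorted(layout):
        for slot in sorted(layout[frame]):
            disk_list.append(f"{frame}-{slot}")  # physical location identifier
    return disk_list

# Frame 0 is the storage server, frames 1..N are JBODs (see the numbering above).
layout = {0: [2, 1], 1: [1, 3], 2: [1]}
assert build_disk_list(layout) == ["0-1", "0-2", "1-1", "1-3", "2-1"]
```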
  • the physical location identification identifier of each storage disk is put through a hash (i.e., HASH) calculation, mapping the identifiers into a set of discrete, evenly distributed positive integers called "storage medium factors".
  • regardless of which storage medium is inserted at a physical location, the "storage medium factor" calculated from the physical location string is the same; that is, in this embodiment the "storage medium factor" is related only to the physical location and is independent of the particular storage disk, which further improves reliability.
  • optionally, a storage disk number may be added, uniquely numbering each storage disk, for example disk0001, disk0002, disk0003, ..., disk000N.
  • in that case the physical location identification identifier is represented as frame number + slot number + storage disk number.
  • the drive letter corresponding to the storage disk (i.e., the corresponding block device file on Linux or another Unix-like system, such as /dev/sda) may further be mounted in correspondence with the physical location identification identifier of the storage disk, for example by using that identifier as the mount directory name.
  • the file full path information in this embodiment may include file type information + several storage directory paths + the file name; a hash algorithm may be used to map the file full path information to a positive integer.
  • after a target storage disk has been selected and the file stored using the balanced hash algorithm of this embodiment, when a user needs to read the file, the storage disk with the largest "selection factor" value is found in exactly the same way; that storage disk is necessarily the target storage disk used at storage time.
  • the working state of each storage disk can be monitored, and the abnormal storage disk is removed according to the monitoring result, and then replaced.
  • the files on the storage disk can be balancedly transferred to other storage disks, or completely transferred to the replaced new storage disk.
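A useful property of the selection scheme, implied by the replacement behavior above though not stated explicitly in the text: because each disk's selection factor does not depend on which other disks are present, removing a failed disk relocates only the files that lived on it. A sketch under the same stand-in MD5 hash and hypothetical disk identifiers as before:

```python
import hashlib

def stable_hash(s):
    return int(hashlib.md5(s.encode("utf-8")).hexdigest(), 16)

def select_target_disk(path, disks):
    path_factor = stable_hash(path)
    return max(disks, key=lambda d: stable_hash(str(path_factor ^ stable_hash(d))))

disks = [f"0-{slot}" for slot in range(1, 9)]                    # 8 hypothetical disks
files = [f"/data/file_{i}.bin" for i in range(500)]
placement = {p: select_target_disk(p, disks) for p in files}

survivors = [d for d in disks if d != "0-3"]                     # disk 0-3 fails
moved = {p for p in files if select_target_disk(p, survivors) != placement[p]}

# Exactly the files that were on the failed disk are relocated; all others stay put.
assert moved == {p for p in files if placement[p] == "0-3"}
```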
  • SSDs (solid-state drives) and traditional mechanical hard disks can also be grouped independently: the SSDs form a solid-state disk storage sub-list that includes the identification identifier of each SSD, such as ssd_0001, ssd_0002, ..., ssd_000N, and the mechanical hard disks form another sub-list;
  • the two sub-lists can be monitored in real time;
  • when the user wants to store frequently accessed (i.e., "hot") files on SSDs, only the identification identifiers of the SSDs in the solid-state storage sub-list are hashed, so the hot files are mapped onto disks in the SSD storage sub-list.
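The hot/cold split above only changes which sub-list is fed into the hash selection. A sketch; the sub-list contents and the `hot` flag are hypothetical, and the hash is the same MD5 stand-in used earlier:

```python
import hashlib

def stable_hash(s):
    return int(hashlib.md5(s.encode("utf-8")).hexdigest(), 16)

def select_target_disk(path, disks):
    path_factor = stable_hash(path)
    return max(disks, key=lambda d: stable_hash(str(path_factor ^ stable_hash(d))))

# Identifiers follow the ssd_000N / disk000N patterns used in the text.
ssd_sublist = ["ssd_0001", "ssd_0002", "ssd_0003"]
hdd_sublist = ["disk0001", "disk0002", "disk0003", "disk0004", "disk0005"]

def place_file(path, hot):
    # Hot files hash only over the SSD sub-list; everything else over the HDDs.
    return select_target_disk(path, ssd_sublist if hot else hdd_sublist)

assert place_file("/video/trending.mp4", hot=True) in ssd_sublist
assert place_file("/archive/2009.tar", hot=False) in hdd_sublist
```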
  • Embodiment 2:
  • This embodiment provides a multi-storage disk load management device, as shown in FIG. 4, including:
  • the multi-disk location management module 1 is configured to obtain a storage disk list, where the storage disk list includes an identification identifier of each storage disk;
  • the request receiving module 2 is configured to receive a file access request including file full path information
  • the multi-disk load storage management module 3 is configured to select, from the storage disks by using a hash algorithm according to the file full path information in the file access request and the identification identifier of each storage disk, a target storage disk to be accessed by the file access request.
  • the multi-disk load storage management module 3 in this embodiment includes a calculation sub-module 31, an integration sub-module 32 and a selection sub-module 33;
  • the calculation sub-module 31 is configured to process the identification identifier of each storage disk with a hash algorithm to obtain the storage medium factor of each storage disk, and to process the file full path information with a hash algorithm to obtain the file full path factor;
  • the calculation sub-module 31 may specifically map the identification identifier and the file full path information to positive integers by the hash algorithm, though mapping to other forms is not excluded, as long as the even-distribution property of the hash algorithm can be exploited.
  • the integration sub-module 32 is configured to integrate the file full path factor with the storage medium factor of each storage disk to obtain an integration factor corresponding to each storage disk;
  • the selection sub-module 33 is configured to select, from the storage disks according to the integration factor corresponding to each storage disk, a target storage disk to be accessed by the file access request; the specific process includes:
  • the integration factor corresponding to each storage disk is processed by a hash algorithm to obtain a selection factor corresponding to each storage disk;
  • the storage disk corresponding to the selection factor having the largest value is selected as the target storage disk.
  • the three hash calculations in this embodiment may use the same algorithm.
  • the identification identifier in the storage disk list is a physical location identifier of each storage disk.
  • the storage device may specifically include a storage server and/or a cluster of disks (JBOD: Just a Bunch Of Disks).
  • the storage server and the disk cluster each include a plurality of storage disks, and the storage disk may be a solid state hard disk or a mechanical hard disk.
  • an application for file access, that is, a file access client, is further provided on the storage device.
  • the storage server and the JBODs may be numbered; for example, the storage server is frame number 0, the first disk cluster is frame number 1, the second disk cluster is frame number 2, and so on, with the Nth disk cluster being frame number N;
  • the distributed file system daemon numbers the slot of each storage disk (mechanical hard disk or solid-state disk) in the storage server, and likewise numbers the slot of each storage disk in each disk cluster;
  • thus each storage disk on the storage device has a unified and unique physical location number, "frame number + slot number", called the physical location identification identifier of the storage disk. All storage disks on the storage server and the disk clusters are sorted first by frame number and then by slot number within the frame, forming a one-dimensional list of physical location identification identifiers, that is, the storage disk list. The calculation sub-module 31 then performs a hash (i.e., HASH) calculation on the physical location identification identifier of each storage disk, mapping the identifiers into a set of discrete, evenly distributed positive integers called "storage medium factors".
  • regardless of which storage medium is inserted at a physical location, the "storage medium factor" calculated from the physical location string is the same; that is, in this embodiment the "storage medium factor" is related only to the physical location and is independent of the particular storage disk, which further improves reliability.
  • optionally, a storage disk number may be added, uniquely numbering each storage disk.
  • in that case the physical location identification identifier is frame number + slot number + storage disk number.
  • the drive letter corresponding to the storage disk (that is, the corresponding block device file on Linux or another Unix-like system, such as /dev/sda) can further be mounted in correspondence with the physical location identification identifier of the storage disk.
  • the file full path information in this embodiment may include file type information + several storage directory paths + file names; the calculation sub-module 31 may use a hash algorithm to map the file full path information into a positive integer.
  • after the target storage disk has been selected and the file stored using the balanced hash algorithm of this embodiment, when a user needs to read the file, the storage disk with the largest selection factor value is found in the same way; that storage disk is necessarily the target storage disk used at storage time.
  • the working state of each storage disk can be monitored, and the abnormal storage disk is removed according to the monitoring result, and then replaced.
  • the files on the storage disk can be balancedly transferred to other storage disks, or completely transferred to the replaced new storage disk.
  • the multiple storage disk load management device in this embodiment may further include a state monitoring module 4 configured to monitor the working state of each storage disk.
  • the abnormal storage disk can be removed according to the monitoring result, and then replaced.
  • the files on the storage disk can be balancedly transferred to other storage disks, or completely transferred to the replaced new storage disk.
  • the multi-storage disk load management device in this embodiment further includes a category management module 5 configured to independently group the SSDs and the traditional mechanical hard disks, that is, the SSD forms a set of solid state hard disk storage sub-lists.
  • the SSD storage sub-list includes identification identifiers of the SSDs, such as ssd_0001, ssd_0002...ssd_000N;
  • the two sub-lists can be monitored in real time.
  • when the user wants to store frequently accessed (i.e., "hot") files on SSDs, only the identification identifiers of the SSDs in the solid-state storage sub-list are hashed, so the hot files are mapped onto disks in the SSD storage sub-list.
  • the hash scheme used in this embodiment supports hot plugging of disks, with the storage disk lists updated in real time.
  • Embodiment 3:
  • This embodiment provides a distributed file system, as shown in FIG. 7, including a file access client 71, a file access interface 72, a plurality of storage disks 73, and the multi-storage-disk load management device 74 of the second embodiment; the file access client 71 can be implemented by various user programs, and the file access interface 72 can be implemented by a universal interface dynamic link library.
  • this embodiment shows the mapping relationship between the plurality of storage disks 73 and their mount points in the distributed file system, involving one storage server and several JBODs; the storage server holds several storage disks, and each JBOD also holds several storage disks.
  • the storage server is connected to the JBODs using SAS (Serial Attached SCSI).
  • Each storage disk has a unique physical location identification, that is, using the "frame number - slot number" identifier.
  • the physical location identifier of each storage disk is used as its mount directory; FIG. 8 shows the resulting one-to-one mapping of all storage disks to mount points in the operating system. At the same time, each storage disk has a unique storage medium factor; for its calculation process, see the second embodiment.
  • the file access client 71 transmits a file access request to the multi-storage-disk load management device 74 through the file access interface 72; the multi-storage-disk load management device 74 receives the file access request and selects one of the plurality of storage disks as the target storage disk to be accessed by the file access request.
  • A specific example of file storage follows, as shown in FIG. 9, including:
  • Step 901 The file access client 71 invokes the file access interface 72 to initiate a file access request, and provides a "full path name of the file";
  • Step 902 The multi-storage disk load management device 74 maps the "full path name of the file” into a positive integer, which is called "file full path factor";
  • Step 903 The multi-storage disk load management device 74 takes the list of physical location identifiers of the available storage disks and obtains the "storage medium factor" of each storage disk;
  • Step 904 The multi-storage disk load management device 74 merges each "storage medium factor" with the "file full path factor" into an "integration factor" (there are as many "integration factors" as "storage medium factors");
  • Step 905 The multi-storage disk load management device 74 computes a "selection factor" from each "integration factor" (as many "selection factors" as "integration factors", and hence as "storage medium factors");
  • Step 906 The multi-storage disk load management device 74 selects the largest "selection factor" and maps the file to the storage disk with that largest value;
  • Step 907 The multi-storage disk load management device 74 completes the read and write operations of the file on the selected storage disk.
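Steps 901-907 can be exercised end to end. The sketch below (same stand-in MD5 hash, hypothetical frame-slot disk layout) also checks the even-distribution claim by placing 12,000 files across 12 disks:

```python
import hashlib
from collections import Counter

def stable_hash(s):
    return int(hashlib.md5(s.encode("utf-8")).hexdigest(), 16)

def select_target_disk(path, disks):
    path_factor = stable_hash(path)                        # step 902
    selections = {}
    for disk in disks:
        medium_factor = stable_hash(disk)                  # step 903
        integration = path_factor ^ medium_factor          # step 904
        selections[disk] = stable_hash(str(integration))   # step 905
    return max(selections, key=selections.get)             # step 906

disks = [f"{frame}-{slot}" for frame in range(3) for slot in range(1, 5)]
counts = Counter(select_target_disk(f"/data/file_{i}.bin", disks)
                 for i in range(12000))

# Every disk is used, and each receives roughly 12000/12 = 1000 files.
assert len(counts) == len(disks)
assert all(600 < c < 1400 for c in counts.values())
```

The wide tolerance in the final assertion reflects ordinary statistical scatter; with a uniform hash the per-disk counts cluster tightly around the mean.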
  • Embodiment 4:
  • This embodiment provides a distributed storage network system, as shown in FIG. 10, including a file access client 01, a file access interface 02, a plurality of storage nodes 03, and the multi-storage-disk load management device 04 of the second embodiment; each storage node 03 contains a plurality of storage disks, and the combination of multiple storage nodes constitutes the storage network system.
  • each storage node in the storage network may be numbered, for example in the form node1, node2, ..., nodeN; the numbering and management of the multiple storage disks within each storage node are as described in the above embodiments.
  • the specific control process is as follows:
  • the file access client 01 sends a file access request to the multi-storage disk load management device 04 through the file access interface 02;
  • the multi-storage disk load management device 04 receives the file access request and selects one of the plurality of storage nodes as the target storage node nodeX according to the file access request; the selection may use the same method as selecting the target storage disk in the above embodiments, or may be performed in other ways. Then, one of the plurality of storage disks of the target storage node nodeX is selected as the target storage disk to be accessed by the file access request.
  • the mechanism for selecting a storage node in the storage network is completed by the multi-storage disk load management device 04, and further, the selection operation of the plurality of disks is completed inside the storage node.
  • This embodiment supports elastic expansion: by adding storage nodes, a large-scale storage network can be constructed. The storage load of the entire network is balanced across the storage nodes, and within each storage node the load is balanced across its multiple disks.
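If the node-selection step reuses the same hash selection, routing becomes a two-level pick: first a node, then a disk within it. A sketch with a hypothetical three-node topology; the patent leaves the node-selection method open, so applying the same scheme at both levels is one possible choice, not the only one:

```python
import hashlib

def stable_hash(s):
    return int(hashlib.md5(s.encode("utf-8")).hexdigest(), 16)

def hash_pick(key, candidates):
    key_factor = stable_hash(key)
    return max(candidates, key=lambda c: stable_hash(str(key_factor ^ stable_hash(c))))

# Hypothetical topology: node name -> physical location identifiers of its disks.
cluster = {
    "node1": ["node1:0-1", "node1:0-2"],
    "node2": ["node2:0-1", "node2:0-2", "node2:1-1"],
    "node3": ["node3:0-1", "node3:0-2"],
}

def route(path):
    node = hash_pick(path, sorted(cluster))   # level 1: target storage node
    disk = hash_pick(path, cluster[node])     # level 2: target disk inside the node
    return node, disk

node, disk = route("/logs/2016-09-14.log")
assert node in cluster and disk in cluster[node]
assert route("/logs/2016-09-14.log") == (node, disk)   # deterministic for reads
```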
  • Compared with related technical solutions, the embodiments of the present invention have at least the following advantages:
  • (1) The system architecture is very simple: no additional metadata controller is needed. As long as servers and storage media (mechanical hard disks or solid state disks) are available, massive file access services can be provided, which makes the system very easy to deploy and implement.
  • (2) Performance is high: the original metadata lookup is replaced by a hash computation, so whether there are hundreds of millions or billions of files, a single fast hash computation yields the physical storage location of a file.
  • (3) The system is easy to scale: simply adding storage media (mechanical hard disks or solid state disks) linearly increases system capacity and throughput.
  • Embodiments of the present invention also provide a storage medium.
  • The foregoing storage medium may be configured to store program code for performing the following steps:
  • S1: Obtain a storage disk list, where the storage disk list contains an identification of each storage disk;
  • S2: Receive a file access request, and obtain the full file path information in the file access request;
  • S3: According to the full file path information and the identifications of the storage disks, use a hash algorithm to select one of the storage disks as the target storage disk accessed by the file access request.
  • Optionally, the storage medium is further arranged to store program code for performing the following steps:
  • S1: Process the identification of each storage disk with a hash algorithm to obtain a storage medium factor of each storage disk;
  • S2: Process the full file path information with a hash algorithm to obtain a full file path factor;
  • S3: Integrate the full file path factor with the storage medium factor of each storage disk to obtain an integration factor corresponding to each storage disk;
  • S4: Select, according to the integration factor corresponding to each storage disk, one of the storage disks as the target storage disk accessed by the file access request.
  • The foregoing storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • In summary, the storage disk list containing the identification of each storage disk is obtained first; after a file access request is received, the full file path information is extracted from the request; then, according to the full file path information and the identifications of the storage disks, a hash algorithm is used to select one of the storage disks as the target storage disk accessed by the file access request. That is, the embodiments of the present invention implement a multi-disk load balancing mechanism with a hash algorithm, which can distribute massive numbers of files very evenly across multiple disks without any metadata. The system structure becomes very simple and efficient, the requirements on hardware (mainly memory) are low, and since there is no metadata there is no single point of failure caused by metadata corruption, which improves the security of system storage.
  • The embodiments of the present invention can also monitor the status of each storage disk and replace a failed disk, ensuring that files remain properly stored. As for elastic expansion, it is only necessary to add storage disks for the capacity and throughput of the entire system to improve.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A multi-storage-disk load management method and device, a file system, and a storage network system. A storage disk list containing the identification of each storage disk is obtained first; after a file access request is received, the full file path information in the request is extracted; then, according to the full file path information and the identifications of the storage disks, a hash algorithm is used to select one of the storage disks as the target storage disk accessed by the file access request. In other words, a hash algorithm implements the multi-disk load balancing mechanism: massive numbers of files can be distributed very evenly across multiple disks without any metadata, the system structure becomes very simple and efficient, the requirements on hardware (mainly memory) are low, and since there is no metadata there is no single point of failure caused by metadata corruption, which improves the security of system storage.

Description

Multi-storage-disk load management method and device, file system, and storage network system

Technical Field

Embodiments of the present invention relate to the field of communications, and in particular to a multi-storage-disk load management method and device, a file system, and a storage network system.

Background

With improvements in hardware design and manufacturing processes, today's server products can often be extended with more storage disks (mechanical hard disks or solid state disks, collectively referred to below as "multiple disks"). Much design and implementation work has gone into efficiently using multiple disks to form a storage service system that is load-balanced across the disks, highly concurrent, and high-throughput. Currently, from the perspective of multi-disk load balancing, most traditional approaches provide a metadata area (i.e., metadata) in which balanced access to the files on the multiple disks is arranged: the locations of the files across the disks are mapped evenly into the metadata area, every file path lookup must go through the metadata area, and only after the physical location of a file has been found there can the actual access operation proceed. Maintaining this metadata requires an additional metadata controller, which consumes a large amount of CPU resources when the storage system is busy (requiring more CPU performance and increasing cost). At the same time, as the number of files grows sharply, the metadata area consumes a large amount of precious physical memory (requiring memory expansion and again increasing cost); even with the leanest and most efficient data structures, the memory overhead of the metadata area cannot be ignored. Moreover, if the metadata area is corrupted, or the metadata controller crashes, the system is effectively paralyzed.

It can be seen that the related approach of achieving multi-disk load balancing through a metadata area suffers from high overhead, high cost, and system paralysis when the metadata area fails.
Summary of the Invention

The main technical problem to be solved by the embodiments of the present invention is to provide a multi-storage-disk load management method and device that address the high overhead, high cost, and metadata-failure-induced system paralysis of the related metadata-area approach to multi-disk load balancing.

To solve the above technical problem, an embodiment of the present invention provides a multi-storage-disk load management method, including:

obtaining a storage disk list, where the storage disk list contains an identification of each storage disk;

receiving a file access request, and obtaining full file path information in the file access request;

according to the full file path information and the identifications of the storage disks, using a hash algorithm to select one of the storage disks as the target storage disk accessed by the file access request.
In an embodiment of the present invention, selecting, according to the full file path information and the identifications of the storage disks, one of the storage disks as the target storage disk accessed by the file access request using a hash algorithm includes:

processing the identification of each storage disk with a hash algorithm to obtain a storage medium factor of each storage disk;

processing the full file path information with a hash algorithm to obtain a full file path factor;

integrating the full file path factor with the storage medium factor of each storage disk to obtain an integration factor corresponding to each storage disk;

selecting, according to the integration factor corresponding to each storage disk, one of the storage disks as the target storage disk accessed by the file access request.

In an embodiment of the present invention, selecting, according to the integration factor corresponding to each storage disk, one of the storage disks as the target storage disk accessed by the file access request includes:

processing the integration factor corresponding to each storage disk with a hash algorithm to obtain a selection factor corresponding to each storage disk;

taking the storage disk corresponding to the selection factor with the largest value as the target storage disk.

In an embodiment of the present invention, integrating the full file path factor with the storage medium factor of each storage disk includes: XORing the full file path factor with the storage medium factor of each storage disk, respectively, to obtain the integration factor corresponding to each storage disk.

In an embodiment of the present invention, the method further includes: monitoring the working status of each storage disk, and replacing a storage disk that becomes abnormal according to the monitoring result.

In an embodiment of the present invention, the identification is a physical location identification of each storage disk.

In an embodiment of the present invention, the physical location identification includes the frame number of the frame in which the storage disk is located and the slot number of the slot in which the storage disk is located.
To solve the above problem, an embodiment of the present invention further provides a multi-storage-disk load management device, including:

a multi-disk location management module, arranged to obtain a storage disk list, where the storage disk list contains an identification of each storage disk;

a request receiving module, arranged to receive a file access request containing full file path information;

a multi-disk load storage management module, arranged to select, according to the full file path information in the file access request and the identifications of the storage disks, one of the storage disks as the target storage disk accessed by the file access request using a hash algorithm.

In an embodiment of the present invention, the multi-disk load storage management module includes a computation submodule, an integration submodule, and a selection submodule;

the computation submodule is arranged to process the identification of each storage disk with a hash algorithm to obtain a storage medium factor of each storage disk, and to process the full file path information with a hash algorithm to obtain a full file path factor;

the integration submodule is arranged to integrate the full file path factor with the storage medium factor of each storage disk to obtain an integration factor corresponding to each storage disk;

the selection submodule is arranged to select, according to the integration factor corresponding to each storage disk, one of the storage disks as the target storage disk accessed by the file access request.

In an embodiment of the present invention, the selection submodule selecting, according to the integration factor corresponding to each storage disk, one of the storage disks as the target storage disk accessed by the file access request includes:

processing the integration factor corresponding to each storage disk with a hash algorithm to obtain a selection factor corresponding to each storage disk;

taking the storage disk corresponding to the selection factor with the largest value as the target storage disk.

In an embodiment of the present invention, the device further includes a status monitoring module that monitors the working status of each storage disk.

In an embodiment of the present invention, the identification is a physical location identification of each storage disk.
To solve the above problem, an embodiment of the present invention further provides a distributed file system, including a file access client, a file access interface, a plurality of storage disks, and the multi-storage-disk load management device described above;

the file access client sends a file access request to the multi-storage-disk load management device through the file access interface;

the multi-storage-disk load management device receives the file access request and selects one of the plurality of storage disks as the target storage disk accessed by the file access request.

To solve the above problem, an embodiment of the present invention further provides a distributed storage network system, including a file access client, a file access interface, a plurality of storage nodes, and the multi-storage-disk load management device described above; each storage node contains a plurality of storage disks;

the file access client sends a file access request to the multi-storage-disk load management device through the file access interface;

the multi-storage-disk load management device receives the file access request, selects one of the plurality of storage nodes as the target storage node according to the file access request, and selects one of the plurality of storage disks of the target storage node as the target storage disk accessed by the file access request.

An embodiment of the present invention further provides a computer storage medium that stores executable instructions for performing the multi-storage-disk load management method of the above embodiments.
The beneficial effects of the embodiments of the present invention are as follows:

In the multi-storage-disk load management method and device, file system, and storage network system provided by the embodiments of the present invention, a storage disk list containing the identification of each storage disk is obtained first; after a file access request is received, the full file path information in the request is extracted; then, according to the full file path information and the identifications of the storage disks, a hash algorithm is used to select one of the storage disks as the target storage disk accessed by the file access request. That is, the embodiments of the present invention implement a multi-disk load balancing mechanism with a hash algorithm, which can distribute massive numbers of files very evenly across multiple disks without any metadata. The system structure becomes very simple and efficient, the requirements on hardware (mainly memory) are low, and since there is no metadata there is no single point of failure caused by metadata corruption, which improves the security of system storage.

In addition, the embodiments of the present invention can monitor the status of each storage disk and replace failed disks, ensuring that files remain properly stored; as for elastic expansion, it is only necessary to add storage disks for the capacity and throughput of the entire system to improve.
Brief Description of the Drawings

FIG. 1 is a schematic flowchart of the multi-storage-disk load management method provided in Embodiment 1 of the present invention;

FIG. 2 is a schematic diagram of the process of selecting a target storage disk using a hash algorithm, provided in Embodiment 1 of the present invention;

FIG. 3 is a schematic diagram of the process of selecting a target storage disk according to the integration factors, provided in Embodiment 1 of the present invention;

FIG. 4 is a first schematic structural diagram of the multi-storage-disk load management device provided in Embodiment 2 of the present invention;

FIG. 5 is a second schematic structural diagram of the multi-storage-disk load management device provided in Embodiment 2 of the present invention;

FIG. 6 is a third schematic structural diagram of the multi-storage-disk load management device provided in Embodiment 2 of the present invention;

FIG. 7 is a schematic structural diagram of the distributed file system provided in Embodiment 3 of the present invention;

FIG. 8 shows the mapping between storage disks and mount points provided in Embodiment 3 of the present invention;

FIG. 9 is a schematic flowchart of the multi-storage-disk load management method provided in Embodiment 3 of the present invention;

FIG. 10 is a schematic structural diagram of the distributed storage network system provided in Embodiment 4 of the present invention.
Detailed Description

The present invention is described in further detail below through specific embodiments with reference to the accompanying drawings.

Embodiment 1:

This embodiment uses a hash algorithm to manage the load across multiple storage disks. The system architecture is very simple: no additional metadata area is needed, and as long as servers and storage disks (i.e., storage disks, including mechanical hard disks and/or solid state disks) are available, massive file access services can be provided, making the system very easy to deploy and implement. Access performance is high, since the original metadata lookup evolves into a hash computation: whether there are hundreds of millions or billions of files, a single fast hash computation yields the physical storage location of a file. The status of each storage disk can also be monitored so that failed disks are replaced and files remain properly stored. The system is also easy to scale: simply adding storage disks (mechanical hard disks or solid state disks) linearly increases system capacity and throughput. The present invention is described in further detail below with specific examples:
The multi-storage-disk load management method provided in this embodiment is described below taking the file storage process as an example. As shown in FIG. 1, the method includes:

Step 101: obtain a storage disk list, where the storage disk list contains an identification of each storage disk;

Step 102: receive a file access request, where the file access request contains full file path information;

Step 103: according to the full file path information and the identifications of the storage disks, use a hash algorithm to select one of the storage disks as the target storage disk accessed by the file access request;

Step 104: perform the corresponding file access operation on the target storage disk. The file access request in this embodiment may be a file storage request or a file read request; for a file storage request, the corresponding file write operation is performed on the target storage disk, and for a file read request, the corresponding file read operation is performed on it.
In step 103 above, selecting one of the storage disks as the target storage disk accessed by the file access request, using a hash algorithm according to the full file path information and the identifications of the storage disks, is shown in FIG. 2 and includes:

Step 201: process the identification of each storage disk with a hash algorithm to obtain the storage medium factor of each storage disk. Specifically, the identification may be mapped to a positive integer by the hash algorithm, though mapping to other forms is not excluded, as long as the uniform-distribution property of the hash algorithm can be exploited;

Step 202: process the full file path information with a hash algorithm to obtain the full file path factor. Here, too, the information may specifically be mapped to a positive integer by the hash algorithm, though mapping to other forms is not excluded, as long as the uniform-distribution property of the hash algorithm can be exploited; correspondingly, the specific hash algorithm in this embodiment can be chosen flexibly, as long as the above goal is achieved;

Step 203: integrate the obtained full file path factor with the storage medium factor of each storage disk to obtain the integration factor corresponding to each storage disk; that is, there are as many integration factors as there are storage disks;

Step 204: select, according to the integration factor corresponding to each storage disk, one of the storage disks as the target storage disk accessed by the file access request.

The hash algorithms in steps 201 and 202 above may be the same algorithm.

The integration in step 203 above may specifically be XORing the full file path factor with the storage medium factor of each storage disk, respectively, to obtain the integration factor corresponding to each storage disk.
The specific process of step 204 above is shown in FIG. 3 and includes:

Step 301: process the integration factor corresponding to each storage disk with a hash algorithm to obtain the selection factor corresponding to each storage disk; the algorithm used in this step is the same as in steps 201 and 202;

Step 302: take the storage disk corresponding to the selection factor with the largest value as the target storage disk.
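Steps 201 to 204 and 301 to 302 amount to a highest-score hash selection. The following is a minimal illustrative sketch, not code from the patent: the use of Python, the SHA-256-based `_hash` helper, and the example disk identifiers and file path are all assumptions, since the embodiments deliberately leave the specific hash algorithm open.

```python
import hashlib

def _hash(data: str) -> int:
    # Map a string to a uniformly distributed positive integer. SHA-256 is an
    # assumption; the embodiments only require a hash with uniform distribution.
    return int.from_bytes(hashlib.sha256(data.encode()).digest()[:8], "big")

def select_target_disk(file_path: str, disk_ids: list) -> str:
    """Steps 201-204 and 301-302: pick the disk whose selection factor is largest."""
    path_factor = _hash(file_path)                          # step 202: full file path factor
    best_disk, best_score = None, -1
    for disk_id in disk_ids:
        medium_factor = _hash(disk_id)                      # step 201: storage medium factor
        integration_factor = path_factor ^ medium_factor    # step 203: XOR integration
        selection_factor = _hash(str(integration_factor))   # step 301: selection factor
        if selection_factor > best_score:                   # step 302: keep the largest
            best_disk, best_score = disk_id, selection_factor
    return best_disk

disks = ["frame0_slot0", "frame0_slot1", "frame1_slot0"]
target = select_target_disk("/video/2016/clip001.mp4", disks)
assert target in disks
# The mapping is deterministic: a later read recomputes the same target disk.
assert select_target_disk("/video/2016/clip001.mp4", disks) == target
```

Because the selection depends only on the file path and the disk identifications, no metadata lookup is needed: any node that knows the disk list can recompute the location.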
In this embodiment, the identifications in the storage disk list are the physical location identifications of the storage disks. The storage device may specifically include a storage server and/or disk clusters (JBOD: Just a Bunch Of Disks); both the storage server and the JBODs contain multiple storage disks, which may be solid state disks or mechanical hard disks. In this embodiment, the storage device also hosts an application program for file access, i.e., the file access client.

In this embodiment, the storage server and the JBODs may be numbered: for example, the storage server is numbered frame 0, the first JBOD frame 1, the second JBOD frame 2, and so on, with the N-th JBOD numbered frame N;

Further, the distributed file system daemon numbers the slot of each storage disk (mechanical hard disk or solid state disk) in the storage server, and numbers the slot of each storage disk in the JBODs;

Thus, in this embodiment, each storage disk on the storage device has a unified and unique physical location number, i.e., "frame number + slot number", called the physical location identification of the storage disk. At startup, all storage disks on the storage server and the JBODs are obtained and sorted first by the frame number of the frame the disk is in, and then by the slot number within the frame, forming a one-dimensional list of storage disk physical location identifications, i.e., the storage disk list:
frame0_slot0

frame0_slot1

frame0_slot2

frame0_slotN'

frame1_slot0

frame1_slot1

frame1_slot2

frame1_slotN''

frame2_slot0

frame2_slot1

frame2_slot2

frame2_slotN'''

...

frameN_slot0

frameN_slot1

frameN_slot2

frameN_slotN''''
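The formation of this one-dimensional list can be sketched as follows; this is an illustrative reconstruction, and the `frames` mapping and the `frameX_slotY` string form are assumptions about how the identifiers might be encoded.

```python
def build_disk_list(frames: dict) -> list:
    # frames maps frame number -> number of occupied slots in that frame
    # (frame 0 is the storage server, frames 1..N are the JBODs).
    # Sort by frame number first, then by slot number within the frame.
    return [f"frame{f}_slot{s}" for f in sorted(frames) for s in range(frames[f])]

assert build_disk_list({0: 2, 1: 3}) == [
    "frame0_slot0", "frame0_slot1",
    "frame1_slot0", "frame1_slot1", "frame1_slot2",
]
```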
The physical location identification of each storage disk is then hashed (i.e., a HASH computation), mapping the identifications into a set of discrete, uniformly distributed positive integers called "storage medium factors". Because this embodiment uses the physical location identification of each disk, whatever storage medium is inserted at a physical location, the "storage medium factor" computed from the physical location string is the same; that is, the "storage medium factor" in this embodiment depends only on the physical location and not on the disk itself, which further improves reliability. In this embodiment, a storage disk number may additionally be appended, uniquely numbering each storage disk, for example disk0001, disk0002, disk0003, ..., disk000N.

In that case the physical location identification consists of frame number + slot number + storage disk number.

After the "storage medium factor" of each storage disk is obtained, the device node of the disk (i.e., the block device file under Linux or other Unix-like systems, such as /dev/sda) can further be mounted in correspondence with the disk's physical location identification, for example:
Figure PCTCN2016098071-appb-000001
The full file path information in this embodiment may include the file type information + several storage directory paths + the file name; a hash algorithm can map the full file path information to a positive integer.

After the target storage disk has been selected for storage with the balanced hash algorithm of this embodiment, when a user needs to read the file, the storage disk with the largest "selection factor" is found by the same method, and that disk is necessarily the target storage disk used at storage time.
In this embodiment, during the above process, the working status of each storage disk can be monitored; according to the monitoring result, an abnormal storage disk is removed and then replaced. On removal, the files on that disk can be evenly migrated to the other storage disks, or transferred entirely to the new replacement disk.
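One property implied by this replacement scheme, though not stated explicitly above, is that the max-of-hashes selection only remaps the files that resided on the removed disk; every other file keeps its target disk, so only the removed disk's files need to be migrated. A sketch, with an assumed SHA-256-based hash and illustrative disk identifiers:

```python
import hashlib

def _hash(s: str) -> int:
    # Assumed hash; any uniformly distributing hash works.
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

def pick_disk(path: str, disks: list) -> str:
    # Same max-selection-factor rule as in steps 201-204 / 301-302.
    return max(disks, key=lambda d: _hash(str(_hash(path) ^ _hash(d))))

disks = ["frame0_slot0", "frame0_slot1", "frame1_slot0", "frame1_slot1"]
files = [f"/data/file_{i}" for i in range(200)]
before = {f: pick_disk(f, disks) for f in files}

# Remove one failed disk from the list, as the monitoring step would.
survivors = [d for d in disks if d != "frame1_slot0"]
after = {f: pick_disk(f, survivors) for f in files}

# Removing a non-maximal candidate cannot change the argmax, so only files
# that resided on the removed disk change location.
assert all(after[f] == before[f] for f in files if before[f] != "frame1_slot0")
```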
In the storage industry today, solid state disks (SSDs) are increasingly mainstream. This embodiment can group SSDs and traditional mechanical hard disks independently: the SSDs form a solid-state storage sublist containing the identification of each SSD, such as ssd_0001, ssd_0002, ..., ssd_000N;

the traditional mechanical hard disks form a mechanical-disk storage sublist, such as disk_0001, disk_0002, ..., disk_000N.

During status monitoring, the two sublists can be monitored separately in real time.

In load management, then, the behavior requested by the user can be taken into account. For example, if the user wants to store frequently accessed files (i.e., "hot" files) on the SSDs of the solid-state storage sublist, the hash computation is performed only over the identifications of the SSDs in that sublist, mapping the frequently accessed ("hot") files into the solid-state storage sublist.

If the user wants to store rarely accessed files (i.e., "cold" files) on the traditional mechanical disks of the mechanical-disk storage sublist, the hash computation is performed only over the identifications of the mechanical disks in that sublist, mapping the rarely accessed ("cold") files into the mechanical-disk storage sublist. This can further improve user satisfaction.
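The hot/cold routing can be sketched by restricting the hash computation to the relevant sublist. The sublist contents and the `hot` flag are illustrative assumptions; how a file's temperature is determined is left to the user's request, as described above.

```python
import hashlib

def _hash(s: str) -> int:
    # Assumed SHA-256-based hash; the embodiments leave the algorithm open.
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

def pick_disk(path: str, disks: list) -> str:
    # Same max-selection-factor rule as in Embodiment 1.
    return max(disks, key=lambda d: _hash(str(_hash(path) ^ _hash(d))))

SSD_SUBLIST = ["ssd_0001", "ssd_0002"]                 # for "hot" files
HDD_SUBLIST = ["disk_0001", "disk_0002", "disk_0003"]  # for "cold" files

def place_file(path: str, hot: bool) -> str:
    # Hash only over the sublist matching the file's access temperature.
    return pick_disk(path, SSD_SUBLIST if hot else HDD_SUBLIST)

assert place_file("/video/trending.mp4", hot=True) in SSD_SUBLIST
assert place_file("/logs/archive_2014.tar", hot=False) in HDD_SUBLIST
```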
Embodiment 2:

This embodiment provides a multi-storage-disk load management device. As shown in FIG. 4, it includes:

a multi-disk location management module 1, arranged to obtain a storage disk list containing an identification of each storage disk;

a request receiving module 2, arranged to receive a file access request containing full file path information;

a multi-disk load storage management module 3, arranged to select, according to the full file path information in the file access request and the identifications of the storage disks, one of the storage disks as the target storage disk accessed by the file access request using a hash algorithm.

In this embodiment, the multi-disk load storage management module 3 includes a computation submodule 31, an integration submodule 32, and a selection submodule 33;

the computation submodule 31 is arranged to process the identification of each storage disk with a hash algorithm to obtain the storage medium factor of each storage disk, and to process the full file path information with a hash algorithm to obtain the full file path factor. Specifically, the computation submodule 31 may map the identifications and the full file path information to positive integers with the hash algorithm, though mapping to other forms is not excluded, as long as the uniform-distribution property of the hash algorithm can be exploited.

the integration submodule 32 is arranged to integrate the full file path factor with the storage medium factor of each storage disk to obtain the integration factor corresponding to each storage disk;

the selection submodule 33 is arranged to select, according to the integration factor corresponding to each storage disk, one of the storage disks as the target storage disk accessed by the file access request; the specific process includes:

processing the integration factor corresponding to each storage disk with a hash algorithm to obtain the selection factor corresponding to each storage disk;

taking the storage disk corresponding to the selection factor with the largest value as the target storage disk.

In this embodiment, the three hash computations may use the same algorithm.
In this embodiment, the identifications in the storage disk list are the physical location identifications of the storage disks. The storage device may specifically include a storage server and/or disk clusters (JBOD: Just a Bunch Of Disks); both the storage server and the JBODs contain multiple storage disks, which may be solid state disks or mechanical hard disks. In this embodiment, the storage device also hosts an application program for file access, i.e., the file access client.

In this embodiment, the storage server and the JBODs may be numbered: for example, the storage server is numbered frame 0, the first JBOD frame 1, the second JBOD frame 2, and so on, with the N-th JBOD numbered frame N;

Further, the distributed file system daemon numbers the slot of each storage disk (mechanical hard disk or solid state disk) in the storage server, and numbers the slot of each storage disk in the JBODs;

Thus, in this embodiment, each storage disk on the storage device has a unified and unique physical location number, i.e., "frame number + slot number", called the physical location identification of the storage disk. At startup, all storage disks on the storage server and the JBODs are obtained and sorted first by frame number and then by slot number within the frame, forming a one-dimensional list of storage disk physical location identifications, i.e., the storage disk list. The computation submodule 31 then hashes (i.e., a HASH computation) the physical location identification of each storage disk, mapping the identifications into a set of discrete, uniformly distributed positive integers called "storage medium factors". Because the physical location identification of each disk is used, whatever storage medium is inserted at a physical location, the "storage medium factor" computed from the physical location string is the same; that is, the "storage medium factor" in this embodiment depends only on the physical location and not on the disk itself, which further improves reliability. In this embodiment, a storage disk number may additionally be appended, uniquely numbering each storage disk, in which case the physical location identification consists of frame number + slot number + storage disk number.

After the "storage medium factor" of each storage disk is obtained, the device node of the disk (i.e., the block device file under Linux or other Unix-like systems, such as /dev/sda) can further be mounted in correspondence with the disk's physical location identification.

The full file path information in this embodiment may include the file type information + several storage directory paths + the file name; the computation submodule 31 can map the full file path information to a positive integer with a hash algorithm.
After the target storage disk has been selected for storage with the balanced hash algorithm of this embodiment, when a user needs to read the file, the storage disk with the largest "selection factor" is found by the same method, and that disk is necessarily the target storage disk used at storage time.

In this embodiment, during the above process, the working status of each storage disk can be monitored; according to the monitoring result, an abnormal storage disk is removed and then replaced. On removal, the files on that disk can be evenly migrated to the other storage disks, or transferred entirely to the new replacement disk.

As shown in FIG. 5, the multi-storage-disk load management device in this embodiment may further include a status monitoring module 4, arranged to monitor the working status of each storage disk, so that an abnormal disk can be removed according to the monitoring result and then replaced. On removal, the files on that disk can be evenly migrated to the other storage disks, or transferred entirely to the new replacement disk.

As shown in FIG. 6, the multi-storage-disk load management device in this embodiment further includes a classification management module 5, arranged to group SSDs and traditional mechanical hard disks independently: the SSDs form a solid-state storage sublist containing the identification of each SSD, such as ssd_0001, ssd_0002, ..., ssd_000N;

the traditional mechanical hard disks form a mechanical-disk storage sublist, such as disk_0001, disk_0002, ..., disk_000N.

During status monitoring, the two sublists can be monitored separately in real time.

In load management, then, the behavior requested by the user can be taken into account. For example, if the user wants to store frequently accessed files (i.e., "hot" files) on the SSDs of the solid-state storage sublist, the hash computation is performed only over the identifications of the SSDs in that sublist, mapping the frequently accessed ("hot") files into the solid-state storage sublist.

If the user wants to store rarely accessed files (i.e., "cold" files) on the traditional mechanical disks of the mechanical-disk storage sublist, the hash computation is performed only over the identifications of the mechanical disks in that sublist, mapping the rarely accessed ("cold") files into the mechanical-disk storage sublist. This can further improve user satisfaction.

The hash algorithm used in this embodiment can support real-time update in a hot-swappable manner.
Embodiment 3:

This embodiment provides a distributed file system. As shown in FIG. 7, it includes a file access client 71, a file access interface 72, a plurality of storage disks 73, and the multi-storage-disk load management device 74 of Embodiment 2. The file access client 71 can be implemented by various user programs, and the file access interface 72 can be implemented as a generic-interface dynamic link library.

FIG. 8 shows the mapping between the multiple storage disks 73 of the "distributed file system" of this embodiment and their mount points, involving a storage server and several JBODs. The storage server carries several storage disks, the JBODs also carry some storage disks, and the storage server and the JBODs are connected by SAS (Serial Attached SCSI) cables. Each storage disk has a unique physical location identification, i.e., the "frame number-slot number" identifier, and in the operating system the physical location identification of a disk is used as its mount directory. FIG. 8 shows the one-to-one mapping between all storage disks and the mount points in the operating system; meanwhile, each storage disk has a unique "storage medium factor", computed as described in Embodiment 2.
The file access client 71 sends a file access request to the multi-storage-disk load management device 74 through the file access interface 72; the multi-storage-disk load management device 74 receives the file access request and selects one of the multiple storage disks as the target storage disk accessed by the request. A specific file storage example is described below; as shown in FIG. 9, it includes:

Step 901: the file access client 71 calls the file access interface 72 to initiate a file access request and provides the "full path name of the file";

Step 902: the multi-storage-disk load management device 74 maps the "full path name of the file" to a positive integer called the "full file path factor";

Step 903: the multi-storage-disk load management device 74 provides the physical locations and list of the available storage disks and obtains the "storage medium factor" of each storage disk;

Step 904: the multi-storage-disk load management device 74 merges each "storage medium factor" with the "full file path factor" into an "integration factor" (there are as many "integration factors" as there are "storage medium factors");

Step 905: the multi-storage-disk load management device 74 computes on each "integration factor" to obtain multiple "selection factors" (there are as many "integration factors", and thus as many "selection factors", as there are "storage medium factors");

Step 906: the multi-storage-disk load management device 74 selects the "selection factor" with the largest value, finally mapping the file onto the storage disk whose "selection factor" is largest;

Step 907: the multi-storage-disk load management device 74 completes the file read/write operation on the selected storage disk.
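Steps 901 to 907 can be sketched end to end as follows. This is an illustrative reconstruction: the SHA-256-based hash, the mount layout under a temporary directory, and the path-flattening scheme are all assumptions (FIG. 8 only fixes the disk-to-mount-point mapping, not the on-disk file layout).

```python
import hashlib
import os
import tempfile

def _hash(s: str) -> int:
    # Assumed hash; the embodiments leave the algorithm open.
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

def pick_disk(path: str, disks: list) -> str:
    # Steps 902-906: map path and disk IDs to factors, keep the largest score.
    return max(disks, key=lambda d: _hash(str(_hash(path) ^ _hash(d))))

root = tempfile.mkdtemp()  # stand-in for the real mount root
disks = ["frame0_slot0", "frame0_slot1", "frame1_slot0"]
for d in disks:
    os.makedirs(os.path.join(root, d))  # one mount directory per disk, as in FIG. 8

file_path = "/video/2016/clip001.mp4"
target = pick_disk(file_path, disks)  # steps 902-906
physical = os.path.join(root, target, file_path.strip("/").replace("/", "_"))
with open(physical, "wb") as f:       # step 907: write on the chosen disk
    f.write(b"payload")

# A later read recomputes the same target and finds the file.
assert pick_disk(file_path, disks) == target
assert os.path.exists(physical)
```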
Embodiment 4:

This embodiment provides a distributed storage network system. As shown in FIG. 10, it includes a file access client 01, a file access interface 02, a plurality of storage nodes 03, and the multi-storage-disk load management device 04 of Embodiment 2; each storage node 03 contains a plurality of storage disks. That is, the multiple storage disks of Embodiment 3 are treated as one storage node, and the combination of multiple storage nodes constitutes the storage network system. In this embodiment, the storage nodes in the storage network may be numbered, for example in the form node1, node2, ..., nodeN; the multiple storage disks within each storage node are numbered and managed in the manner of the above embodiments. The specific control process is as follows:

The file access client 01 sends a file access request to the multi-storage-disk load management device 04 through the file access interface 02;

The multi-storage-disk load management device 04 receives the file access request and, according to the request, selects one of the plurality of storage nodes as the target storage node nodeX. The selection may use the target-storage-disk selection manner of the above embodiments, or may be made in another way. One of the plurality of storage disks of the target storage node nodeX is then selected as the target storage disk accessed by the file access request.

In this embodiment, the multi-storage-disk load management device 04 carries out the mechanism for selecting a storage node in the storage network; further, the selection among the multiple disks is completed inside the storage node. This embodiment supports elastic expansion: by adding storage nodes, a large-scale storage network can be constructed. The storage load of the entire storage network is balanced across the storage nodes, and within each storage node the load is in turn balanced across each of that node's multiple disks.
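The two-level selection (node first, then a disk inside the node) can be sketched by applying the same highest-score hash at both levels. The node names, the per-node disk lists, and the reuse of the same hash for the node level are assumptions, since this embodiment allows the node to be chosen in other ways as well.

```python
import hashlib

def _hash(s: str) -> int:
    # Assumed hash; any uniformly distributing hash works.
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

def hrw_pick(key: str, candidates: list) -> str:
    # Highest selection factor wins, as in steps 201-204 / 301-302.
    return max(candidates, key=lambda c: _hash(str(_hash(key) ^ _hash(c))))

# Illustrative cluster layout.
cluster = {
    "node1": ["frame0_slot0", "frame0_slot1"],
    "node2": ["frame0_slot0", "frame0_slot1", "frame1_slot0"],
}

def locate(path: str):
    node = hrw_pick(path, sorted(cluster))  # first level: target storage node nodeX
    disk = hrw_pick(path, cluster[node])    # second level: target disk inside nodeX
    return node, disk

node, disk = locate("/photos/2016/img_0001.jpg")
assert node in cluster and disk in cluster[node]
```

Adding a storage node only changes the first-level candidate list, which is how the elastic expansion described above fits this selection mechanism.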
Compared with related technical solutions, the embodiments of the present invention have at least the following advantages:

(1) The system architecture is very simple: no additional metadata controller is needed. As long as servers and storage media (mechanical hard disks or solid state disks) are available, massive file access services can be provided, which makes the system very easy to deploy and implement.

(2) Performance is high: the original metadata lookup evolves into a hash computation, so whether there are hundreds of millions or billions of files, a single fast hash computation yields the physical storage location of a file.

(3) The system is easy to scale: simply adding storage media (mechanical hard disks or solid state disks) linearly increases system capacity and throughput.
An embodiment of the present invention further provides a storage medium. Optionally, in this embodiment, the storage medium may be arranged to store program code for performing the following steps:

S1: obtain a storage disk list, where the storage disk list contains an identification of each storage disk;

S2: receive a file access request, and obtain the full file path information in the file access request;

S3: according to the full file path information and the identifications of the storage disks, use a hash algorithm to select one of the storage disks as the target storage disk accessed by the file access request.

Optionally, the storage medium is further arranged to store program code for performing the following steps:

S1: process the identification of each storage disk with a hash algorithm to obtain the storage medium factor of each storage disk;

S2: process the full file path information with a hash algorithm to obtain the full file path factor;

S3: integrate the full file path factor with the storage medium factor of each storage disk to obtain the integration factor corresponding to each storage disk;

S4: select, according to the integration factor corresponding to each storage disk, one of the storage disks as the target storage disk accessed by the file access request.

Optionally, in this embodiment, the storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc. The above is a further detailed description of the present invention in combination with specific implementations, and the specific implementation of the present invention cannot be regarded as limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions can be made without departing from the concept of the present invention, all of which shall be regarded as falling within the protection scope of the present invention.
Industrial Applicability

In the embodiments of the present invention, a storage disk list containing the identification of each storage disk is obtained first; after a file access request is received, the full file path information in the request is extracted; then, according to the full file path information and the identifications of the storage disks, a hash algorithm is used to select one of the storage disks as the target storage disk accessed by the file access request. That is, the embodiments of the present invention implement a multi-disk load balancing mechanism with a hash algorithm, which can distribute massive numbers of files very evenly across multiple disks without any metadata. The system structure becomes very simple and efficient, the requirements on hardware (mainly memory) are low, and since there is no metadata there is no single point of failure caused by metadata corruption, which improves the security of system storage. In addition, the embodiments of the present invention can monitor the status of each storage disk and replace failed disks, ensuring that files remain properly stored; as for elastic expansion, it is only necessary to add storage disks for the capacity and throughput of the entire system to improve.

Claims (14)

  1. A multi-storage-disk load management method, comprising:
    obtaining a storage disk list, wherein the storage disk list contains an identification of each storage disk;
    receiving a file access request, and obtaining full file path information in the file access request;
    according to the full file path information and the identifications of the storage disks, using a hash algorithm to select one of the storage disks as a target storage disk accessed by the file access request.
  2. The multi-storage-disk load management method according to claim 1, wherein selecting, according to the full file path information and the identifications of the storage disks, one of the storage disks as the target storage disk accessed by the file access request using a hash algorithm comprises:
    processing the identification of each storage disk with a hash algorithm to obtain a storage medium factor of each storage disk;
    processing the full file path information with a hash algorithm to obtain a full file path factor;
    integrating the full file path factor with the storage medium factor of each storage disk to obtain an integration factor corresponding to each storage disk;
    selecting, according to the integration factor corresponding to each storage disk, one of the storage disks as the target storage disk accessed by the file access request.
  3. The multi-storage-disk load management method according to claim 2, wherein selecting, according to the integration factor corresponding to each storage disk, one of the storage disks as the target storage disk accessed by the file access request comprises:
    processing the integration factor corresponding to each storage disk with a hash algorithm to obtain a selection factor corresponding to each storage disk;
    taking the storage disk corresponding to the selection factor with the largest value as the target storage disk.
  4. The multi-storage-disk load management method according to claim 3, wherein integrating the full file path factor with the storage medium factor of each storage disk comprises: XORing the full file path factor with the storage medium factor of each storage disk, respectively, to obtain the integration factor corresponding to each storage disk.
  5. The multi-storage-disk load management method according to any one of claims 1 to 4, further comprising: monitoring the working status of each storage disk, and replacing a storage disk that becomes abnormal according to the monitoring result.
  6. The multi-storage-disk load management method according to any one of claims 1 to 4, wherein the identification is a physical location identification of each storage disk.
  7. The multi-storage-disk load management method according to claim 6, wherein the physical location identification comprises a frame number of the frame in which the storage disk is located and a slot number of the slot in which the storage disk is located.
  8. A multi-storage-disk load management device, comprising:
    a multi-disk location management module, arranged to obtain a storage disk list, wherein the storage disk list contains an identification of each storage disk;
    a request receiving module, arranged to receive a file access request containing full file path information;
    a multi-disk load storage management module, arranged to select, according to the full file path information in the file access request and the identifications of the storage disks, one of the storage disks as a target storage disk accessed by the file access request using a hash algorithm.
  9. The multi-storage-disk load management device according to claim 8, wherein the multi-disk load storage management module comprises a computation submodule, an integration submodule, and a selection submodule;
    the computation submodule is arranged to process the identification of each storage disk with a hash algorithm to obtain a storage medium factor of each storage disk, and to process the full file path information with a hash algorithm to obtain a full file path factor;
    the integration submodule is arranged to integrate the full file path factor with the storage medium factor of each storage disk to obtain an integration factor corresponding to each storage disk;
    the selection submodule is arranged to select, according to the integration factor corresponding to each storage disk, one of the storage disks as the target storage disk accessed by the file access request.
  10. The multi-storage-disk load management device according to claim 8, wherein the selection submodule selecting, according to the integration factor corresponding to each storage disk, one of the storage disks as the target storage disk accessed by the file access request comprises:
    processing the integration factor corresponding to each storage disk with a hash algorithm to obtain a selection factor corresponding to each storage disk;
    taking the storage disk corresponding to the selection factor with the largest value as the target storage disk.
  11. The multi-storage-disk load management device according to any one of claims 8 to 10, further comprising a status monitoring module that monitors the working status of each storage disk.
  12. The multi-storage-disk load management device according to any one of claims 8 to 11, wherein the identification is a physical location identification of each storage disk.
  13. A distributed file system, comprising a file access client, a file access interface, a plurality of storage disks, and the multi-storage-disk load management device according to any one of claims 8 to 12;
    wherein the file access client sends a file access request to the multi-storage-disk load management device through the file access interface;
    and the multi-storage-disk load management device receives the file access request and selects one of the plurality of storage disks as a target storage disk accessed by the file access request.
  14. A distributed storage network system, comprising a file access client, a file access interface, a plurality of storage nodes, and the multi-storage-disk load management device according to any one of claims 8 to 12, wherein each storage node contains a plurality of storage disks;
    the file access client sends a file access request to the multi-storage-disk load management device through the file access interface;
    and the multi-storage-disk load management device receives the file access request, selects one of the plurality of storage nodes as a target storage node according to the file access request, and selects one of the plurality of storage disks of the target storage node as a target storage disk accessed by the file access request.
PCT/CN2016/098071 2015-09-14 2016-09-05 Multi-storage-disk load management method and device, file system and storage network system WO2017045545A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510582124.XA CN106527960B (zh) 2015-09-14 2015-09-14 Multi-storage-disk load management method and device, file system and storage network system
CN201510582124.X 2015-09-14

Publications (1)

Publication Number Publication Date
WO2017045545A1 true WO2017045545A1 (zh) 2017-03-23

Family

ID=58288162

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/098071 WO2017045545A1 (zh) 2015-09-14 2016-09-05 Multi-storage-disk load management method and device, file system and storage network system

Country Status (2)

Country Link
CN (1) CN106527960B (zh)
WO (1) WO2017045545A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112988065A (zh) * 2021-02-08 2021-06-18 Beijing Star-Net Ruijie Networks Technology Co., Ltd. Data migration method, apparatus, device and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111488127B (zh) * 2020-04-16 2023-01-10 Suzhou Inspur Intelligent Technology Co., Ltd. JBOD-based parallel data storage method and apparatus, and data reading method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1641610A (zh) * 2004-01-08 2005-07-20 Inventec Corporation Hard disk replacement control and management method for a network storage system
US20140181119A1 (en) * 2012-12-26 2014-06-26 Industrial Technology Research Institute Method and system for accessing files on a storage system
CN104123359A (zh) * 2014-07-17 2014-10-29 Jiangsu Posts & Telecommunications Planning and Design Institute Co., Ltd. Resource management method for a distributed object storage system
CN104375781A (zh) * 2013-08-16 2015-02-25 Shenzhen Tencent Computer Systems Co., Ltd. Data access method and device
CN104660643A (zh) * 2013-11-25 2015-05-27 Nanjing ZTE New Software Co., Ltd. Request response method and device, and distributed file system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8095509B2 (en) * 2007-08-11 2012-01-10 Novell, Inc. Techniques for retaining security restrictions with file versioning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1641610A (zh) * 2004-01-08 2005-07-20 Inventec Corporation Hard disk replacement control and management method for a network storage system
US20140181119A1 (en) * 2012-12-26 2014-06-26 Industrial Technology Research Institute Method and system for accessing files on a storage system
CN104375781A (zh) * 2013-08-16 2015-02-25 Shenzhen Tencent Computer Systems Co., Ltd. Data access method and device
CN104660643A (zh) * 2013-11-25 2015-05-27 Nanjing ZTE New Software Co., Ltd. Request response method and device, and distributed file system
CN104123359A (zh) * 2014-07-17 2014-10-29 Jiangsu Posts & Telecommunications Planning and Design Institute Co., Ltd. Resource management method for a distributed object storage system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112988065A (zh) * 2021-02-08 2021-06-18 Beijing Star-Net Ruijie Networks Technology Co., Ltd. Data migration method, apparatus, device and storage medium
CN112988065B (zh) * 2021-02-08 2023-11-17 Beijing Star-Net Ruijie Networks Technology Co., Ltd. Data migration method, apparatus, device and storage medium

Also Published As

Publication number Publication date
CN106527960A (zh) 2017-03-22
CN106527960B (zh) 2021-04-02

Similar Documents

Publication Publication Date Title
US11645183B1 (en) User interface for correlation of virtual machine information and storage information
US9582297B2 (en) Policy-based data placement in a virtualized computing environment
CN102541990B (zh) 利用虚拟分区的数据库重新分布方法和系统
US8954545B2 (en) Fast determination of compatibility of virtual machines and hosts
US9213489B1 (en) Data storage architecture and system for high performance computing incorporating a distributed hash table and using a hash on metadata of data items to obtain storage locations
US10216450B2 (en) Mirror vote synchronization
JP2020525906A (ja) データベーステナントマイグレーションのシステム及び方法
US10908834B2 (en) Load balancing for scalable storage system
US11880578B2 (en) Composite aggregate architecture
US9984139B1 (en) Publish session framework for datastore operation records
US20140337457A1 (en) Using network addressable non-volatile memory for high-performance node-local input/output
US9110820B1 (en) Hybrid data storage system in an HPC exascale environment
US9525729B2 (en) Remote monitoring pool management
CN107948229B (zh) 分布式存储的方法、装置及系统
US11079960B2 (en) Object storage system with priority meta object replication
WO2017045545A1 (zh) Multi-storage-disk load management method and device, file system and storage network system
US9053100B1 (en) Systems and methods for compressing database objects
US11074002B2 (en) Object storage system with meta object replication
US11093465B2 (en) Object storage system with versioned meta objects
US9436697B1 (en) Techniques for managing deduplication of data
US9794326B1 (en) Log information transmission integrity
US20240111414A1 (en) Systems and methods for establishing scalable storage targets
Xu et al. Online Encoding for Erasure-Coded Distributed Storage Systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16845659

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16845659

Country of ref document: EP

Kind code of ref document: A1