WO2017071563A1 - 一种存储数据的方法及集群管理节点 - Google Patents
- Publication number
- WO2017071563A1 (PCT/CN2016/103267)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- node
- data
- hard disk
- cluster
- disk group
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 39
- 238000007726 management method Methods 0.000 title abstract description 45
- 238000013500 data storage Methods 0.000 title abstract description 7
- 238000010586 diagram Methods 0.000 description 5
- 230000003993 interaction Effects 0.000 description 3
- 230000006870 function Effects 0.000 description 2
- 230000002159 abnormal effect Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000004590 computer program Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000000750 progressive effect Effects 0.000 description 1
- 238000011084 recovery Methods 0.000 description 1
- 238000000638 solvent extraction Methods 0.000 description 1
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/40—Support for services or applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/104—Peer-to-peer [P2P] networks
- H04L67/1074—Peer-to-peer [P2P] networks for supporting data block transmission mechanisms
- H04L67/1078—Resource delivery mechanisms
Definitions
- the present invention relates to the field of storage technologies, and in particular, to a method for storing data and a cluster management node.
- Scale-out NAS
- the cluster size can be expanded from several nodes to hundreds of nodes.
- the hard disks are grouped first.
- a hash algorithm is used to select the hard disk group a file is written to. Taking a redundancy ratio of 2+1 as an example, each hard disk group includes 3 hard disks.
- when data is written, two hard disks store the original data, and one hard disk stores the checksum.
- when a hard disk in the group fails, a new hard disk is selected for the group, and the data of the failed hard disk is then recovered from the contents of the remaining two hard disks using the Erasure Code algorithm.
- a hard disk group selects only one hard disk in one node.
- if any node fails, data read and write services are not affected: for a read service, if the original data is on the faulty node, it can be restored from the checksum; for a write service, the data is simply written to the normal nodes.
- when the faulty node returns to normal, the data missed during the fault is computed with the Erasure Code algorithm and written to the recovered node.
- a hard disk generally belongs to multiple hard disk groups.
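The 2+1 grouping and recovery described above can be sketched with the simplest erasure code, XOR parity. This is an illustrative sketch only — the patent does not specify which Erasure Code algorithm is used, and the function names here are hypothetical:

```python
# Hypothetical 2+1 erasure coding using XOR parity: two data blocks,
# one parity block, any single lost block is recoverable.
def encode(d1: bytes, d2: bytes) -> bytes:
    """Compute the parity block for two equal-length data blocks."""
    return bytes(a ^ b for a, b in zip(d1, d2))

def recover(survivor: bytes, parity: bytes) -> bytes:
    """Recover the lost data block from the surviving block and parity."""
    return bytes(a ^ b for a, b in zip(survivor, parity))

d1, d2 = b"hello!", b"world."
parity = encode(d1, d2)           # written to the third disk in the group
assert recover(d2, parity) == d1  # disk holding d1 fails; d1 is rebuilt
assert recover(d1, parity) == d2  # disk holding d2 fails; d2 is rebuilt
```

Production systems use Reed-Solomon-style codes that tolerate M failures rather than one, but the read/repair flow is the same shape.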
- although the above prior art can ensure that data read and write services proceed normally when a node fails (for example, when the redundancy ratio is 4+2, the failure of any two nodes does not affect the service), for a cross-region cluster, that is, a cluster distributed over multiple regions (usually one region corresponds to one equipment room), data read and write services cannot proceed normally when all the nodes in any one region fail, and thus region-level reliability cannot be achieved.
- the embodiment of the invention provides a method for storing data and a cluster management node, so as to solve the problem that the data reading and writing service cannot be performed normally when all the nodes in a certain area of the cluster in a region are faulty.
- a first aspect of the present invention provides a method of storing data, including:
- the cluster management node receives the node information reported by each storage node and stores it in the node information table, where the node information includes a node identifier, a hard disk list of the node, and an area to which the node belongs;
- the cluster management node divides hard disk groups according to the node information in the node information table, where for a cluster with a redundancy ratio of N+M, the number of hard disks selected for each region is less than M, N is the number of hard disks used to store the original data, M is the number of hard disks used to store the checksum, and N and M are integers greater than one;
- if a region fails, the cluster management node updates the states of the storage nodes in the failed region in the node information table to a fault state;
- the cluster management node synchronizes the contents of the updated node information table and the information of the hard disk groups to the normal storage nodes and the client proxy node, so that when the client proxy node receives a data read/write service request, it completes the data read/write service by interacting with the normal storage nodes in the hard disk group.
- dividing, by the cluster management node, the hard disk groups according to the node information in the node information table, such that for a cluster with a redundancy ratio of N+M the number of hard disks selected for each region is less than M, further includes:
- the number of hard disks is selected evenly for the regions in the cluster according to the redundancy ratio and the number of regions in the cluster.
- the node information reported by each storage node is reported by the heartbeat information.
- if the client proxy node receives a write-data service request from the client, it selects a hard disk group, sends a write-data message to the normal storage nodes in the group, writes the data, and then writes metadata to the metadata management node.
- if the client proxy node receives a read-data service request from the client, it reads metadata from the metadata management node, obtains from the metadata the hard disk group where the corresponding file resides, sends a read-data message to the normal storage nodes in the group to read the data, recovers the original data from the redundant data read from the normal storage nodes, and returns the original data to the client.
- a second aspect of the present invention provides a cluster management node, including:
- the node information includes a node identifier, a hard disk list of the node, and an area to which the node belongs;
- a grouping unit configured to divide hard disk groups according to the node information in the node information table, where for a cluster with a redundancy ratio of N+M, the number of hard disks selected for each region is less than M, N is the number of hard disks used to store the original data, M is the number of hard disks used to store the checksum, and N and M are integers greater than one;
- An update unit configured to update a state of the storage node in the fault area in the node information table to a fault state if a certain area fails;
- a sending unit configured to synchronize the contents of the updated node information table and the information of the hard disk groups to the normal storage nodes and the client proxy node, so that when the client proxy node receives a data read/write service request, it completes the data read/write service by interacting with the normal storage nodes in the selected hard disk group.
- the grouping unit is further configured to:
- the number of hard disks is selected evenly for the regions in the cluster according to the redundancy ratio and the number of regions in the cluster.
- the node information reported by each storage node is reported by the heartbeat information.
- if the client proxy node receives a write-data service request from the client, it selects a hard disk group, sends a write-data message to the normal storage nodes in the group, writes the data, and then writes metadata to the metadata management node.
- if the client proxy node receives a read-data service request from the client, it reads metadata from the metadata management node, obtains from the metadata the hard disk group where the corresponding file resides, sends a read-data message to the normal storage nodes in the group to read the data, recovers the original data from the redundant data read from the normal storage nodes, and returns the original data to the client.
- by making the number of hard disks selected for each region less than M, when a region failure causes all the storage nodes it contains to fail, the states of the faulty storage nodes can be updated, and data read and write services can still be guaranteed to proceed normally through the interaction between the CA node and the normal storage nodes alone. This improves the reliability of data storage and of data read/write services and extends that reliability to the region level, which helps the storage system operate normally after capacity expansion, providing larger capacity and stable storage performance.
- FIG. 1 is a schematic flow chart of a first embodiment of a method for storing data according to the present invention
- FIG. 2 is a schematic flow chart of a second embodiment of a method for storing data according to the present invention.
- FIG. 3 is a schematic flowchart of a third embodiment of a method for storing data according to the present invention.
- FIG. 4 is a schematic structural diagram of a first embodiment of a cluster management node according to the present invention.
- FIG. 5 is a schematic structural diagram of a second embodiment of a cluster management node according to the present invention.
- FIG. 1 is a schematic flowchart of a first embodiment of a method for storing data according to the present invention.
- the method includes:
- the cluster management node receives the node information reported by each storage node and stores the node information in the node information table.
- the node information includes a node identifier, a hard disk list of the node, and an area to which the node belongs.
- the cluster management node divides the hard disk group according to the node information in the node information table. For a cluster with a redundancy ratio of N+M, the number of hard disks selected for each area is less than M.
- N is the number of hard disks used to store the original data, M is the number of hard disks used to store the checksum, and N and M are integers greater than one.
- the number of hard disks may be selected evenly across the regions in the cluster according to the redundancy ratio and the number of regions in the cluster.
- taking three regions as an example: if the redundancy ratio is 5+4, the number of hard disks in each region can be 3; if the redundancy ratio is 7+5, the number of hard disks in each region can be 4; if the disks cannot be divided completely evenly, for example with a redundancy ratio of 6+5, the numbers of hard disks in the regions can be 4, 4, and 3; if the redundancy ratio is 9+6, the number of hard disks in each region can be 5;
- taking four regions as an example: if the redundancy ratio is 8+4, the number of hard disks in each region can be 3; if the redundancy ratio is 11+5, the number of hard disks in each region can be 4.
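The even-selection rule with its "fewer than M disks per region" constraint can be sketched as a short allocation routine. The function name and the assertion style are illustrative assumptions, not from the patent text, but the example ratios below are the ones the description lists:

```python
def disks_per_region(n: int, m: int, regions: int) -> list[int]:
    """Spread the N+M disks of one hard disk group as evenly as possible
    across regions, enforcing the patent's constraint that each region
    holds fewer than M disks (so one failed region leaves >= N disks)."""
    total = n + m
    base, rem = divmod(total, regions)
    # `rem` regions get one extra disk when the split is not exact.
    counts = [base + 1] * rem + [base] * (regions - rem)
    assert max(counts) < m, "redundancy ratio too low for this region count"
    return counts

# Three regions, as in the examples above:
assert disks_per_region(5, 4, 3) == [3, 3, 3]
assert disks_per_region(6, 5, 3) == [4, 4, 3]   # not completely even
assert disks_per_region(9, 6, 3) == [5, 5, 5]
# Four regions:
assert disks_per_region(8, 4, 4) == [3, 3, 3, 3]
assert disks_per_region(11, 5, 4) == [4, 4, 4, 4]
```

Because every region holds at most M-1 disks, losing one whole region destroys at most M-1 blocks of a group, which an N+M erasure code tolerates.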
- the cluster management node updates the status of the storage node in the fault area in the node information table to a fault status.
- the cluster management node synchronizes the contents of the updated node information table and the information of the hard disk groups to the normal storage nodes and the client proxy node, so that when the client proxy node receives a data read/write service request, it completes the data read/write service by interacting with the normal storage nodes in the hard disk group.
- by making the number of hard disks selected for each region less than M, when a region failure causes all the storage nodes it contains to fail, the states of the faulty storage nodes can be updated, and data read and write services can still be guaranteed to proceed normally through the interaction between the CA node and the normal storage nodes alone. This improves the reliability of data storage and of data read/write services and extends that reliability to the region level, which helps the storage system operate normally after capacity expansion, providing larger capacity and stable storage performance.
- FIG. 2 is a schematic flowchart of a second embodiment of a method for storing data according to the present invention.
- a redundancy ratio of 5+4 is assumed, and there are three regions in the cluster, that is, three equipment rooms; all regions are in a normal state, and each storage node in a region can send heartbeat messages normally.
- the method includes a system power-on initialization process, a write data process, and a read data process, detailed as follows:
- the storage node in the area 1-3 reports the heartbeat message to the cluster management node, and carries the storage node identifier, the hard disk list, and the area to which the node belongs.
- the cluster management node adds the received node information to the node information table.
- the hard disk groups are divided according to the redundancy ratio and the number of regions, ensuring that the number of hard disks selected for each region is less than M (here, 4). For example, for a group with a redundancy ratio of 5+4 and a cluster divided into three regions, the hard disks can be selected evenly by region, three hard disks per region.
- the cluster management node synchronizes the node information and the group information to the storage node.
- the cluster management node synchronizes the node information and the group information to the client agent (CA) node.
- the CA node receives a write data service request from the client.
- the CA node selects the hard disk grouping.
- the CA node sends a write-data message to each storage node in the hard disk group and writes the data.
- the CA node writes metadata to the metadata management node.
- the CA node receives the read data service request from the client.
- the CA node sends a read-metadata message to the metadata management node and reads the metadata.
- the CA node obtains the hard disk group corresponding to the file according to the metadata.
- the CA node sends a read-data message to each storage node in the hard disk group and reads the data.
- the CA node returns the read data to the client.
- this embodiment describes the hard disk grouping and the data read/write flow when all regions are normal.
- because the relationship between region information and the redundancy ratio is fully considered when dividing the hard disk groups, the requirements of data read/write services can be met both when the regions are normal and when a region fails. For details, refer to the embodiment shown in FIG. 3.
- FIG. 3 is a schematic flowchart of a third embodiment of a method for storing data according to the present invention.
- a redundancy ratio of 5+4 is assumed, and there are three regions in the cluster, that is, three equipment rooms.
- region 1 is in an abnormal state, and regions 2 and 3 are both in a normal state.
- the method includes a region failure process, a data writing process, and a data reading process, detailed as follows:
- the area 1 is faulty, and the storage nodes 1, 2, and 3 included therein no longer report the heartbeat message.
- the cluster management node times out without receiving heartbeat messages from the storage nodes in region 1, and updates the states of storage nodes 1, 2, and 3 in the node information table to a fault state.
- the cluster management node synchronizes the updated node information to the normal storage nodes 4-9 in the area 2 and the area 3.
- the cluster management node synchronizes the updated node information to the CA node.
- it is assumed by default that the storage nodes in regions 2 and 3 have already reported their own node information through heartbeat messages in the steps of the embodiment shown in FIG. 2, so that step is omitted here. If the nodes in regions 2 and 3 have not reported their node information, a step in which storage nodes 4-9 report their own node information through heartbeat messages may be added here.
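The heartbeat-timeout bookkeeping in steps 1)-2) can be sketched as follows. The timeout value, table layout, and function names are assumptions for illustration; the patent only says nodes report heartbeats and the management node marks them faulty after a timeout:

```python
import time

HEARTBEAT_TIMEOUT = 30.0  # seconds; an assumed value, not from the patent

# node_id -> {"region": ..., "disks": [...], "last_seen": ..., "state": ...}
node_table: dict[str, dict] = {}

def on_heartbeat(node_id: str, region: str, disks: list[str]) -> None:
    """Record a heartbeat carrying the node id, disk list, and region."""
    node_table[node_id] = {"region": region, "disks": disks,
                           "last_seen": time.monotonic(), "state": "normal"}

def sweep(now: float) -> list[str]:
    """Mark nodes whose heartbeat has timed out as faulty, as the
    cluster management node does when a whole region goes silent."""
    faulted = []
    for node_id, info in node_table.items():
        if info["state"] == "normal" and now - info["last_seen"] > HEARTBEAT_TIMEOUT:
            info["state"] = "fault"
            faulted.append(node_id)
    return faulted
```

After a sweep, the updated table would be synchronized to the surviving storage nodes and the CA node, matching steps 3)-4).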
- the CA node receives a write data service request from the client.
- the CA node selects the hard disk grouping.
- the CA node sends a write-data message to the normal storage nodes in the hard disk group and writes the data.
- for the 5+4 redundancy ratio, the CA node judges that if the number of normal storage nodes in the hard disk group is greater than or equal to 5, the write is considered successful.
- the CA node writes metadata to the metadata management node.
- the CA node receives a read data service request from the client.
- the CA node sends a read-metadata message to the metadata management center and reads the metadata.
- the CA node obtains the hard disk group corresponding to the file according to the metadata.
- the CA node sends a read-data message to the normal storage nodes in the hard disk group and reads the data.
- the CA node recovers the original data according to the read redundant data, and returns the original data to the client.
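The reason the CA node's "at least 5 normal nodes" judgment always succeeds after a single region failure follows directly from the grouping rule, and can be checked numerically. This is an illustrative sketch with hypothetical names, using the embodiment's 5+4 layout of nine disks over three regions:

```python
# One hard disk group of 9 disks (5+4), three disks per region,
# mirroring the embodiment's layout.
group = {"region1": ["d1", "d2", "d3"],
         "region2": ["d4", "d5", "d6"],
         "region3": ["d7", "d8", "d9"]}
N = 5  # disks holding original data; also the write-success threshold

def healthy_disks(group: dict, failed_region: str) -> int:
    """Count disks that survive when one whole region fails."""
    return sum(len(disks) for r, disks in group.items() if r != failed_region)

# Whichever single region fails, 6 disks survive, which is >= N = 5,
# so the CA node still judges the write successful and reads can
# reconstruct the original data from the surviving redundant blocks.
for region in group:
    assert healthy_disks(group, region) >= N
```

Had any region held M = 4 or more disks, its failure could leave fewer than N survivors, which is exactly what the "fewer than M per region" rule rules out.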
- through this division of the hard disk groups, even when a region failure causes all the storage nodes it contains to fail, data read and write services still proceed normally, which improves the reliability of data storage and of data read/write services. Extending this reliability to the region level helps the storage system work properly after capacity expansion, providing larger capacity and stable storage performance.
- FIG. 4 is a schematic structural diagram of a first embodiment of a cluster management node according to the present invention.
- the cluster management node includes:
- the receiving unit 100 is configured to receive the node information reported by each storage node and store the information to the node information table, where the node information includes a node identifier, a hard disk list of the node, and an area to which the node belongs;
- the grouping unit 200 is configured to divide hard disk groups according to the node information in the node information table, where for a cluster with a redundancy ratio of N+M, the number of hard disks selected for each region is less than M, N is the number of hard disks used to store the original data, M is the number of hard disks used to store the checksum, and N and M are integers greater than one;
- the updating unit 300 is configured to update a state of the storage node in the fault area in the node information table to a fault state if a certain area fails.
- the sending unit 400 is configured to synchronize the contents of the updated node information table and the information of the hard disk groups to the normal storage nodes and the client proxy node, so that when the client proxy node receives a data read/write service request, it completes the data read/write service by interacting with the normal storage nodes in the selected hard disk group.
- the grouping unit 200 is further configured to:
- the number of hard disks is selected evenly for the regions in the cluster according to the redundancy ratio and the number of regions in the cluster.
- the node information reported by each storage node is reported by the heartbeat information.
- if the client proxy node receives a write-data service request from the client, it selects a hard disk group, sends a write-data message to the normal storage nodes in the group, writes the data, and then writes metadata to the metadata management node.
- if the client proxy node receives a read-data service request from the client, it reads metadata from the metadata management node, obtains from the metadata the hard disk group where the corresponding file resides, sends a read-data message to the normal storage nodes in the group to read the data, recovers the original data from the redundant data read from the normal storage nodes, and returns the original data to the client.
- the foregoing receiving unit 100, grouping unit 200, updating unit 300, and sending unit 400 may exist independently or may be integrated. In the above cluster management node embodiment, the receiving unit 100, grouping unit 200, updating unit 300, or sending unit 400 may be set separately in hardware form, independent of the processor of the cluster management node, for example in the form of a microprocessor; or may be embedded in hardware form in the processor of the cluster management node; or may be stored in software form in the memory of the cluster management node, so that the processor of the cluster management node invokes the operations corresponding to the receiving unit 100, grouping unit 200, updating unit 300, and sending unit 400.
- for example, the grouping unit 200 may be the processor of the cluster management node, and the functions of the receiving unit 100, updating unit 300, and sending unit 400 may be embedded in that processor, set separately independent of it, or stored in software form in the memory and invoked by the processor to implement their functions.
- the sending unit 400 can be integrated with the processor or set independently, or can also serve as an interface circuit of the cluster management node, set independently or integrated.
- the embodiments of the present invention impose no limitation on this.
- the above processor may be a central processing unit (CPU), a microprocessor, a single chip microcomputer, or the like.
- the cluster management node includes:
- an input device 10, an output device 20, a memory 30, and a processor 40.
- the memory 30 is configured to store a set of program codes, and the processor 40 is configured to invoke the program code stored in the memory 30 to perform the operations of any one of the first to third embodiments of the method for storing data according to the present invention.
- the present invention has the following advantages:
- by making the number of hard disks selected for each region less than M, when a region failure causes all the storage nodes it contains to fail, the states of the faulty storage nodes can be updated, and data read and write services can still be guaranteed to proceed normally through the interaction between the CA node and the normal storage nodes alone. This improves the reliability of data storage and of data read/write services and extends that reliability to the region level, which helps the storage system operate normally after capacity expansion, providing larger capacity and stable storage performance.
- the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Embodiments of the present invention disclose a method for storing data and a cluster management node. The method includes: a cluster management node receives node information reported by each storage node and stores it in a node information table; hard disk groups are divided according to the node information in the node information table, and for a cluster with a redundancy ratio of N+M, the number of hard disks selected for each region is less than M; if a region fails, the states of the storage nodes in the failed region are updated to a fault state in the node information table; and the contents of the updated node information table and the hard disk group information are synchronized to the normal storage nodes and the client agent node, so that when the client agent node receives a data read/write service request, it completes the data read/write service by interacting with the normal storage nodes in the hard disk group. The present invention improves the reliability of data storage and of data read/write services.
Description
The present invention relates to the field of storage technologies, and in particular, to a method for storing data and a cluster management node.
A scale-out NAS storage system expands flexibly: as users' capacity and performance requirements grow, the cluster can expand from a few nodes to several hundred nodes. After the cluster is deployed, the hard disks are grouped first; when a file is written, a hash algorithm selects the hard disk group the file is written to. Taking a redundancy ratio of 2+1 as an example, each hard disk group includes three hard disks; when data is written, two hard disks store the original data and one hard disk stores the checksum. When one hard disk in a group fails, a new hard disk is selected for the group, and the data of the failed disk is recovered from the contents of the remaining two disks using an erasure code algorithm. Usually a hard disk group selects only one hard disk per node, so the failure of any single node does not affect data read/write services: for a read service, if the original data is on the faulty node, it can be restored from the checksum; for a write service, the data is simply written to the normal nodes. After the faulty node returns to normal, the data missed during the fault is computed with the erasure code algorithm and written back to the recovered node. To keep the utilization of the hard disks balanced, one hard disk generally belongs to multiple hard disk groups.
Although the prior art above can ensure that data read/write services proceed normally when nodes fail (for example, with a redundancy ratio of 4+2, the failure of any two nodes does not affect the service), for a cross-region cluster, that is, a cluster distributed over multiple regions (usually one region corresponds to one equipment room), data read/write services cannot proceed normally when all the nodes in any one region fail, and thus region-level reliability cannot be achieved.
SUMMARY
Embodiments of the present invention provide a method for storing data and a cluster management node, to solve the problem that data read/write services cannot proceed normally when all the nodes in one region of a cross-region cluster fail.
A first aspect of the present invention provides a method for storing data, including:
a cluster management node receives node information reported by each storage node and stores it in a node information table, where the node information includes a node identifier, the node's hard disk list, and the region the node belongs to;
the cluster management node divides hard disk groups according to the node information in the node information table, where for a cluster with a redundancy ratio of N+M, the number of hard disks selected for each region is less than M, N is the number of hard disks used to store original data, M is the number of hard disks used to store the checksum, and N and M are both integers greater than 1;
if a region fails, the cluster management node updates the states of the storage nodes in the failed region to a fault state in the node information table;
the cluster management node synchronizes the contents of the updated node information table and the hard disk group information to the normal storage nodes and the client agent node, so that when the client agent node receives a data read/write service request, it completes the data read/write service by interacting with the normal storage nodes in the hard disk group.
With reference to the first aspect, in a first possible implementation of the first aspect, dividing, by the cluster management node, the hard disk groups according to the node information in the node information table, such that for a cluster with a redundancy ratio of N+M the number of hard disks selected for each region is less than M, further includes:
selecting the number of hard disks evenly for the regions in the cluster according to the redundancy ratio and the number of regions in the cluster.
With reference to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the node information reported by each storage node is reported by heartbeat messages.
With reference to the first aspect, or either the first or the second possible implementation of the first aspect, in a third possible implementation of the first aspect, if the client agent node receives a write-data service request from a client, it selects a hard disk group, sends a write-data message to the normal storage nodes in the group, writes the data, and then writes metadata to a metadata management node.
With reference to the first aspect, or either the first or the second possible implementation of the first aspect, in a fourth possible implementation of the first aspect, if the client agent node receives a read-data service request from a client, it reads metadata from the metadata management node, obtains from the metadata the hard disk group where the corresponding file resides, sends a read-data message to the normal storage nodes in the group to read the data, recovers the original data from the redundant data read from the normal storage nodes, and returns the original data to the client.
A second aspect of the present invention provides a cluster management node, including:
a receiving unit, configured to receive node information reported by each storage node and store it in a node information table,
where the node information includes a node identifier, the node's hard disk list, and the region the node belongs to;
a grouping unit, configured to divide hard disk groups according to the node information in the node information table, where for a cluster with a redundancy ratio of N+M, the number of hard disks selected for each region is less than M, N is the number of hard disks used to store original data, M is the number of hard disks used to store the checksum, and N and M are both integers greater than 1;
an updating unit, configured to, if a region fails, update the states of the storage nodes in the failed region to a fault state in the node information table;
a sending unit, configured to synchronize the contents of the updated node information table and the hard disk group information to the normal storage nodes and the client agent node, so that when the client agent node receives a data read/write service request, it completes the data read/write service by interacting with the normal storage nodes in the selected hard disk group.
With reference to the second aspect, in a first possible implementation of the second aspect, the grouping unit is further configured to:
select the number of hard disks evenly for the regions in the cluster according to the redundancy ratio and the number of regions in the cluster.
With reference to the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the node information reported by each storage node is reported by heartbeat messages.
With reference to the second aspect, or either the first or the second possible implementation of the second aspect, in a third possible implementation of the second aspect, if the client agent node receives a write-data service request from a client, it selects a hard disk group, sends a write-data message to the normal storage nodes in the group, writes the data, and then writes metadata to a metadata management node.
With reference to the second aspect, or either the first or the second possible implementation of the second aspect, in a fourth possible implementation of the second aspect, if the client agent node receives a read-data service request from a client, it reads metadata from the metadata management node, obtains from the metadata the hard disk group where the corresponding file resides, sends a read-data message to the normal storage nodes in the group to read the data, recovers the original data from the redundant data read from the normal storage nodes, and returns the original data to the client.
Implementing the embodiments of the present invention has the following beneficial effects:
By making the number of hard disks selected for each region less than M when dividing the hard disk groups for a cluster with a redundancy ratio of N+M, when a region failure causes all the storage nodes it contains to fail,
the states of the faulty storage nodes can be updated, and data read/write services can still be guaranteed to proceed normally through interaction between the CA node and the normal storage nodes alone. This improves the reliability of data storage and of data read/write services, extends that reliability to the region level, and helps the storage system work normally after capacity expansion, providing larger capacity and stable storage performance.
To describe the technical solutions in the embodiments of the present invention more clearly, the following briefly introduces the accompanying drawings needed for describing the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
FIG. 1 is a schematic flowchart of a first embodiment of the method for storing data according to the present invention;
FIG. 2 is a schematic flowchart of a second embodiment of the method for storing data according to the present invention;
FIG. 3 is a schematic flowchart of a third embodiment of the method for storing data according to the present invention;
FIG. 4 is a schematic structural diagram of a first embodiment of the cluster management node according to the present invention;
FIG. 5 is a schematic structural diagram of a second embodiment of the cluster management node according to the present invention.
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to FIG. 1, a schematic flowchart of a first embodiment of the method for storing data according to the present invention, in this embodiment the method includes:
S101. A cluster management node receives node information reported by each storage node and stores it in a node information table.
The node information includes a node identifier, the node's hard disk list, and the region the node belongs to.
S102. The cluster management node divides hard disk groups according to the node information in the node information table; for a cluster with a redundancy ratio of N+M, the number of hard disks selected for each region is less than M.
Here, N is the number of hard disks used to store original data, M is the number of hard disks used to store the checksum, and N and M are both integers greater than 1.
Further, the number of hard disks may be selected evenly for the regions in the cluster according to the redundancy ratio and the number of regions in the cluster.
Taking three regions as an example: with a redundancy ratio of 5+4, each region can have 3 hard disks; with 7+5, each region can have 4; if the disks cannot be divided completely evenly, for example with 6+5, the regions can have 4, 4, and 3 hard disks; with 9+6, each region can have 5.
Taking four regions as an example: with a redundancy ratio of 8+4, each region can have 3 hard disks; with 11+5, each region can have 4.
S103. If a region fails, the cluster management node updates the states of the storage nodes in the failed region to a fault state in the node information table.
S104. The cluster management node synchronizes the contents of the updated node information table and the hard disk group information to the normal storage nodes and the client agent node, so that when the client agent node receives a data read/write service request, it completes the data read/write service by interacting with the normal storage nodes in the hard disk group.
By making the number of hard disks selected for each region less than M when dividing the hard disk groups for a cluster with a redundancy ratio of N+M, when a region failure causes all the storage nodes it contains to fail, the states of the faulty storage nodes can be updated, and data read/write services can still be guaranteed to proceed normally through interaction between the CA node and the normal storage nodes alone. This improves the reliability of data storage and of data read/write services, extends that reliability to the region level, and helps the storage system work normally after capacity expansion, providing larger capacity and stable storage performance.
Referring to FIG. 2, a schematic flowchart of a second embodiment of the method for storing data according to the present invention: in this embodiment, a redundancy ratio of 5+4 is assumed, there are three regions in the cluster, that is, three equipment rooms, all regions are in a normal state, and each storage node in a region can send heartbeat messages normally. The method includes a system power-on initialization process, a write data process, and a read data process, detailed as follows:
System power-on initialization process
1) The system starts; the storage nodes power on and start.
2) The storage nodes in regions 1-3 report heartbeat messages to the cluster management node, carrying the storage node identifier, hard disk list, and the region the node belongs to.
3) The cluster management node adds the received node information to the node information table.
4) Hard disk groups are divided according to the redundancy ratio and the number of regions, ensuring that the number of hard disks selected for each region is less than 4. For example, for a group with a redundancy ratio of 5+4 and a cluster divided into three regions, the hard disks can be selected evenly by region, three hard disks per region.
5) The cluster management node synchronizes the node information and the group information to the storage nodes.
6) The cluster management node synchronizes the node information and the group information to the client agent (CA) node.
Write data process
7) The CA node receives a write-data service request from a client.
8) The CA node selects a hard disk group.
9) The CA node sends a write-data message to each storage node in the hard disk group and writes the data.
10) The CA node writes metadata to the metadata management node.
Read data process
11) The CA node receives a read-data service request from a client.
12) The CA node sends a read-metadata message to the metadata management node and reads the metadata.
13) The CA node obtains from the metadata the hard disk group where the corresponding file resides.
14) The CA node sends a read-data message to each storage node in the hard disk group and reads the data.
15) The CA node returns the read data to the client.
This embodiment describes the hard disk grouping and the data read/write flow when all regions are normal. Because the relationship between region information and the redundancy ratio is fully considered when dividing the hard disk groups, the requirements of data read/write services can be met both when the regions are normal and when a region fails; for details, refer to the embodiment shown in FIG. 3.
Referring to FIG. 3, a schematic flowchart of a third embodiment of the method for storing data according to the present invention: in this embodiment, a redundancy ratio of 5+4 is assumed, and there are three regions in the cluster, that is, three equipment rooms; region 1 is in an abnormal state, and regions 2 and 3 are both in a normal state. The method includes a region failure process, a write data process, and a read data process, detailed as follows:
Region failure process
1) Region 1 fails, and the storage nodes 1, 2, and 3 it contains no longer report heartbeat messages.
2) The cluster management node times out without receiving heartbeat messages from the storage nodes in region 1, and updates the states of storage nodes 1, 2, and 3 in the node information table to a fault state.
3) The cluster management node synchronizes the updated node information to the normal storage nodes 4-9 in regions 2 and 3.
4) The cluster management node synchronizes the updated node information to the CA node.
It should be noted that in this embodiment of the present invention, it is assumed by default that the storage nodes in regions 2 and 3 have already reported their own node information through heartbeat messages in the steps of the embodiment shown in FIG. 2, so that step is omitted here; if the nodes in regions 2 and 3 have not reported their node information, a step in which storage nodes 4-9 report their own node information through heartbeat messages may be added here.
Write data process
5) The CA node receives a write-data service request from a client.
6) The CA node selects a hard disk group.
7) The CA node sends a write-data message to the normal storage nodes in the hard disk group and writes the data. For the 5+4 redundancy ratio, the CA node judges the write successful if the number of normal storage nodes in the hard disk group is greater than or equal to 5.
8) The CA node writes metadata to the metadata management node.
Read data process
9) The CA node receives a read-data service request from a client.
10) The CA node sends a read-metadata message to the metadata management center and reads the metadata.
11) The CA node obtains from the metadata the hard disk group where the corresponding file resides.
12) The CA node sends a read-data message to the normal storage nodes in the hard disk group and reads the data.
13) The CA node recovers the original data from the redundant data read and returns the original data to the client.
Through this division of the hard disk groups, even when a region failure causes all the storage nodes it contains to fail, data read/write services still proceed normally, which improves the reliability of data storage and of data read/write services. Extending this reliability to the region level helps the storage system work properly after capacity expansion, providing larger capacity and stable storage performance.
Referring to FIG. 4, a schematic structural diagram of a first embodiment of the cluster management node according to the present invention, in this embodiment the cluster management node includes:
a receiving unit 100, configured to receive node information reported by each storage node and store it in a node information table, where the node information includes a node identifier, the node's hard disk list, and the region the node belongs to;
a grouping unit 200, configured to divide hard disk groups according to the node information in the node information table, where for a cluster with a redundancy ratio of N+M, the number of hard disks selected for each region is less than M, N is the number of hard disks used to store original data, M is the number of hard disks used to store the checksum, and N and M are both integers greater than 1;
an updating unit 300, configured to, if a region fails, update the states of the storage nodes in the failed region to a fault state in the node information table;
a sending unit 400, configured to synchronize the contents of the updated node information table and the hard disk group information to the normal storage nodes and the client agent node, so that when the client agent node receives a data read/write service request, it completes the data read/write service by interacting with the normal storage nodes in the selected hard disk group.
Optionally, the grouping unit 200 is further configured to:
select the number of hard disks evenly for the regions in the cluster according to the redundancy ratio and the number of regions in the cluster.
The node information reported by each storage node is reported by heartbeat messages.
If the client agent node receives a write-data service request from a client, it selects a hard disk group, sends a write-data message to the normal storage nodes in the group, writes the data, and then writes metadata to the metadata management node.
If the client agent node receives a read-data service request from a client, it reads metadata from the metadata management node, obtains from the metadata the hard disk group where the corresponding file resides, sends a read-data message to the normal storage nodes in the group to read the data, recovers the original data from the redundant data read from the normal storage nodes, and returns the original data to the client.
It should be noted that the receiving unit 100, grouping unit 200, updating unit 300, and sending unit 400 above may exist independently or may be integrated. In the above cluster management node embodiment, the receiving unit 100, grouping unit 200, updating unit 300, or sending unit 400 may be set separately in hardware form, independent of the processor of the cluster management node, for example in the form of a microprocessor; or may be embedded in hardware form in the processor of the cluster management node; or may be stored in software form in the memory of the cluster management node, so that the processor of the cluster management node invokes and executes the operations corresponding to the receiving unit 100, grouping unit 200, updating unit 300, and sending unit 400.
For example, in the first embodiment of the cluster management node of the present invention (the embodiment shown in FIG. 4), the grouping unit 200 may be the processor of the cluster management node, and the functions of the receiving unit 100, updating unit 300, and sending unit 400 may be embedded in that processor, set separately independent of it, or stored in software form in the memory and invoked by the processor to implement their functions. Of course, the sending unit 400 may be integrated with the processor or set independently, or may also serve as an interface circuit of the cluster management node, set independently or integrated. The embodiments of the present invention impose no limitation on this. The processor above may be a central processing unit (CPU), a microprocessor, a single-chip microcomputer, or the like.
请参见图5,为本发明集群管理节点的第二实施例的组成示意图,在本实施例中,所述集群管理节点包括:
输入装置10、输出装置20、存储器30及处理器40。其中,所述存储器30用于存储一组程序代码,所述处理器40用于调用所述存储器30中存储的程序代码,执行本发明存储数据的方法第一至第三实施例中的任一操作。
需要说明的是,本说明书中的各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其它实施例的不同之处,各个实施例之间相同相似的部分互相参见即可。对于装置实施例而言,由于其与方法实施例基本相似,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
通过上述实施例的描述,本发明具有以下优点:
通过在划分硬盘分组时,对于冗余配比为N+M的集群,为每个区域选择的硬盘数量小于M,使得某个区域故障导致其包含的存储节点全部故障时,可以更新故障存储节点的状态,并在进行数据读写的业务时,仅通过CA节点和正常存储节点的交互便能够确保数据读写的业务正常进行,从而提成了数据存储的可靠性以及数据读写业务的可靠性,并将这种可靠性扩大至区域级别,利于存储系统在容量扩大后的正常工作,提供更加大容量且稳定的存储性能。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的程序可存储于一计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)或随机存储记忆体(Random Access Memory,RAM)等。
What is disclosed above is merely a preferred embodiment of the present invention and certainly cannot be used to limit the scope of the claims of the present invention; therefore, equivalent variations made according to the claims of the present invention still fall within the scope covered by the present invention.
Claims (10)
- A method for storing data, comprising: receiving, by a cluster management node, node information reported by each storage node and storing it in a node information table, the node information including a node identifier, the node's hard disk list, and the zone to which the node belongs; dividing, by the cluster management node, hard disks into groups according to the node information in the node information table, wherein, for a cluster with a redundancy ratio of N+M, the number of hard disks selected for each zone is less than M, N being the number of hard disks used to store original data, M being the number of hard disks used to store checksums, and both N and M being integers greater than 1; if a zone fails, updating, by the cluster management node, the status of the storage nodes in the failed zone to a failed state in the node information table; and synchronizing, by the cluster management node, the content of the updated node information table and the hard disk group information to the healthy storage nodes and the client agent nodes, so that when a client agent node receives a data read/write service request, it completes the read/write service by interacting with the healthy storage nodes in the hard disk group.
- The method according to claim 1, wherein dividing, by the cluster management node, hard disks into groups according to the node information in the node information table, with the number of hard disks selected for each zone being less than M for a cluster with a redundancy ratio of N+M, further comprises: selecting hard disk counts evenly for the zones in the cluster according to the redundancy ratio and the number of zones in the cluster.
- The method according to claim 2, wherein the node information reported by each storage node is reported via heartbeat messages.
- The method according to any one of claims 1 to 3, wherein, if the client agent node receives a write-data service request from a client, it selects a hard disk group, sends write-data messages to the healthy storage nodes in that group and writes the data, and then writes metadata to a metadata management node.
- The method according to any one of claims 1 to 3, wherein, if the client agent node receives a read-data service request from a client, it reads metadata from a metadata management node, determines from the metadata the hard disk group holding the corresponding file, sends read-data messages to the healthy storage nodes in that group to read the data, reconstructs the original data from the redundant data read from the healthy storage nodes, and returns the original data to the client.
- A cluster management node, comprising: a receiving unit, configured to receive node information reported by each storage node and store it in a node information table, the node information including a node identifier, the node's hard disk list, and the zone to which the node belongs; a grouping unit, configured to divide hard disks into groups according to the node information in the node information table, wherein, for a cluster with a redundancy ratio of N+M, the number of hard disks selected for each zone is less than M, N being the number of hard disks used to store original data, M being the number of hard disks used to store checksums, and both N and M being integers greater than 1; an updating unit, configured to, if a zone fails, update the status of the storage nodes in the failed zone to a failed state in the node information table; and a sending unit, configured to synchronize the content of the updated node information table and the hard disk group information to the healthy storage nodes and the client agent nodes, so that when a client agent node receives a data read/write service request, it completes the read/write service by interacting with the healthy storage nodes in the selected hard disk group.
- The cluster management node according to claim 6, wherein the grouping unit is further configured to: select hard disk counts evenly for the zones in the cluster according to the redundancy ratio and the number of zones in the cluster.
- The cluster management node according to claim 7, wherein the node information reported by each storage node is reported via heartbeat messages.
- The cluster management node according to any one of claims 6 to 8, wherein, if the client agent node receives a write-data service request from a client, it selects a hard disk group, sends write-data messages to the healthy storage nodes in that group and writes the data, and then writes metadata to a metadata management node.
- The cluster management node according to any one of claims 6 to 8, wherein, if the client agent node receives a read-data service request from a client, it reads metadata from a metadata management node, determines from the metadata the hard disk group holding the corresponding file, sends read-data messages to the healthy storage nodes in that group to read the data, reconstructs the original data from the redundant data read from the healthy storage nodes, and returns the original data to the client.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510727893.4A CN105357294B (zh) | 2015-10-31 | 2015-10-31 | Method for storing data and cluster management node |
CN201510727893.4 | 2015-10-31 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017071563A1 true WO2017071563A1 (zh) | 2017-05-04 |
Family
ID=55333153
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/103267 WO2017071563A1 (zh) | 2015-10-31 | 2016-10-25 | Method for storing data and cluster management node |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN105357294B (zh) |
WO (1) | WO2017071563A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111026621A (zh) * | 2019-12-23 | 2020-04-17 | Hangzhou Anheng Information Technology Co., Ltd. | Monitoring and alarm method, apparatus, device, and medium for Elasticsearch clusters |
CN113470726A (zh) * | 2021-07-28 | 2021-10-01 | Zhejiang Dahua Technology Co., Ltd. | Hard disk online detection method and apparatus |
CN113625957A (zh) * | 2021-06-30 | 2021-11-09 | Jinan Inspur Data Technology Co., Ltd. | Hard disk fault detection method, apparatus, and device |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105357294B (zh) | 2015-10-31 | 2018-10-02 | Chengdu Huawei Technology Co., Ltd. | Method for storing data and cluster management node |
CN106020975B (zh) * | 2016-05-13 | 2020-01-21 | Huawei Technologies Co., Ltd. | Data operation method, apparatus, and system |
CN108153615B (zh) * | 2016-12-02 | 2019-07-23 | Geovis Technology Co., Ltd. | Fault data recovery method |
CN108205573B (zh) * | 2016-12-20 | 2023-04-14 | ZTE Corporation | Distributed data storage method and system |
CN106844108B (zh) * | 2016-12-29 | 2019-05-24 | Chengdu Huawei Technology Co., Ltd. | Data storage method, server, and storage system |
CN106789362B (zh) * | 2017-02-20 | 2020-04-14 | Comba Telecom Systems (China) Ltd. | Device management method and network management system |
CN111488124A (zh) * | 2020-04-08 | 2020-08-04 | Sangfor Technologies Inc. | Data update method, apparatus, electronic device, and storage medium |
CN112711382B (zh) * | 2020-12-31 | 2024-04-26 | Bigo Technology Pte. Ltd. | Distributed-system-based data storage method, apparatus, and storage node |
CN113885798A (zh) * | 2021-09-29 | 2022-01-04 | Zhejiang Dahua Technology Co., Ltd. | Data operation method, apparatus, device, and medium |
CN115826876B (zh) * | 2023-01-09 | 2023-05-16 | Inspur Suzhou Intelligent Technology Co., Ltd. | Data writing method, system, storage hard disk, electronic device, and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102081508A (zh) * | 2009-11-27 | 2011-06-01 | China Mobile Group Sichuan Co., Ltd. | Method and apparatus for partitioning disks to hosts |
CN103793182A (zh) * | 2012-09-04 | 2014-05-14 | LSI Corporation | Scalable storage protection |
US8799429B1 (en) * | 2008-05-06 | 2014-08-05 | American Megatrends, Inc. | Boot acceleration by consolidating client-specific boot data in a data storage system |
CN105357294A (zh) * | 2015-10-31 | 2016-02-24 | Chengdu Huawei Technology Co., Ltd. | Method for storing data and cluster management node |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0606016B1 (en) * | 1993-01-07 | 2002-10-09 | Kabushiki Kaisha Toshiba | Data communication system using an adaptive hybrid ARQ scheme |
CN101840377A (zh) * | 2010-05-13 | 2010-09-22 | Shanghai Jiao Tong University | Data storage method based on RS erasure codes |
CN103984607A (zh) * | 2013-02-08 | 2014-08-13 | Huawei Technologies Co., Ltd. | Distributed storage method, apparatus, and system |
CN103699494B (zh) * | 2013-12-06 | 2017-03-15 | Beijing Qihoo Technology Co., Ltd. | Data storage method, data storage device, and distributed storage system |
- 2015-10-31: CN application CN201510727893.4A filed; granted as patent CN105357294B (zh), status Active
- 2016-10-25: PCT application PCT/CN2016/103267 filed as WO2017071563A1 (zh), status Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8799429B1 (en) * | 2008-05-06 | 2014-08-05 | American Megatrends, Inc. | Boot acceleration by consolidating client-specific boot data in a data storage system |
CN102081508A (zh) * | 2009-11-27 | 2011-06-01 | China Mobile Group Sichuan Co., Ltd. | Method and apparatus for partitioning disks to hosts |
CN103793182A (zh) * | 2012-09-04 | 2014-05-14 | LSI Corporation | Scalable storage protection |
CN105357294A (zh) * | 2015-10-31 | 2016-02-24 | Chengdu Huawei Technology Co., Ltd. | Method for storing data and cluster management node |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111026621A (zh) * | 2019-12-23 | 2020-04-17 | Hangzhou Anheng Information Technology Co., Ltd. | Monitoring and alarm method, apparatus, device, and medium for Elasticsearch clusters |
CN111026621B (zh) * | 2019-12-23 | 2023-04-07 | Hangzhou Anheng Information Technology Co., Ltd. | Monitoring and alarm method, apparatus, device, and medium for Elasticsearch clusters |
CN113625957A (zh) * | 2021-06-30 | 2021-11-09 | Jinan Inspur Data Technology Co., Ltd. | Hard disk fault detection method, apparatus, and device |
CN113625957B (zh) * | 2021-06-30 | 2024-02-13 | Jinan Inspur Data Technology Co., Ltd. | Hard disk fault detection method, apparatus, and device |
CN113470726A (zh) * | 2021-07-28 | 2021-10-01 | Zhejiang Dahua Technology Co., Ltd. | Hard disk online detection method and apparatus |
Also Published As
Publication number | Publication date |
---|---|
CN105357294B (zh) | 2018-10-02 |
CN105357294A (zh) | 2016-02-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017071563A1 (zh) | Method for storing data and cluster management node | |
US11360854B2 (en) | Storage cluster configuration change method, storage cluster, and computer system | |
US11163653B2 (en) | Storage cluster failure detection | |
JP6382454B2 (ja) | Distributed storage and replication system, and method |
CN106662983B (zh) | Method, apparatus, and system for data reconstruction in a distributed storage system |
JP5486682B2 (ja) | System and method for replicating disk images in a cloud-computing-based virtual machine file system |
JP6491210B2 (ja) | System and method for supporting persistent partition recovery in a distributed data grid |
US9785691B2 (en) | Method and apparatus for sequencing transactions globally in a distributed database cluster | |
US8856091B2 (en) | Method and apparatus for sequencing transactions globally in distributed database cluster | |
CN106776130B (zh) | Log recovery method, storage apparatus, and storage node |
US8626722B2 (en) | Consolidating session information for a cluster of sessions in a coupled session environment | |
JP2012528382A (ja) | Cache data processing using a cache cluster in configurable mode |
US20230123923A1 (en) | Methods and systems for data resynchronization in a replication environment | |
WO2012069091A1 (en) | Real time database system | |
US20090063486A1 (en) | Data replication using a shared resource | |
CN113326251B (zh) | Data management method, system, device, and storage medium |
CN114518973A (zh) | Recovery method for crashed and restarted nodes in a distributed cluster |
US8775734B2 (en) | Virtual disks constructed from unused distributed storage | |
CN111752892B (zh) | Distributed file system and implementation method thereof, management system, device, and medium |
US11016863B2 (en) | Self-contained disaster detection for replicated multi-controller systems | |
CN118301172A (zh) | Distributed storage device, implementation method, and distributed storage system |
CN117749818A (zh) | Cross-datacenter data synchronization system |
JP2016051267A (ja) | Node cluster system and node cluster management method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16858998; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 16858998; Country of ref document: EP; Kind code of ref document: A1 |