CN112446905B - Three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association - Google Patents
- Publication number: CN112446905B (application CN202110126538.7A)
- Authority: CN (China)
- Prior art keywords: map, panoramic, semantic, real, dimensional
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06T17/05—Geographic models
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
Description
Technical Field

The invention belongs to the technical fields of real-time localization and mapping and computer vision, and in particular relates to a three-dimensional real-time panoramic monitoring method, system, and device based on multi-degree-of-freedom sensor association.

Background Art

Video surveillance is an important and challenging classic computer vision task with wide application in security monitoring, intelligent video analysis, and personnel search and rescue. Surveillance cameras are usually installed at fixed positions and capture two-dimensional pedestrian images from multiple angles and poses; an operator who wants to track the real-time position and trajectory of a pedestrian typically needs accumulated experience and cannot obtain this information intuitively. Sensors with only a single degree of freedom therefore cannot fully meet the needs of video surveillance. The present method proposes three-dimensional panoramic monitoring based on multi-degree-of-freedom sensor association: by combining three-dimensional environment modeling, instance segmentation, three-dimensional model projection, and other techniques to achieve truly three-dimensional video surveillance, it addresses this problem well.
Summary of the Invention

To solve the above problems in the prior art, namely that most existing video surveillance technologies rely on single-degree-of-freedom sensors with fixed viewing angles, cannot achieve large-scale three-dimensional panoramic video surveillance, place high experience demands on operators, and suffer from low monitoring efficiency and poor results, the first aspect of the present invention proposes a three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensor association. The method comprises:

Step S10: obtain real-time observation data from N sensors with different degrees of freedom in the scene to be monitored, and construct the three-dimensional semantic map corresponding to each sensor as its local map; N is a positive integer; the real-time observation data include the observation time and the true extrinsic matrix.

Step S20: integrate the local maps produced by the sensors to obtain a panoramic map of the scene to be monitored, referred to as the first map.

Step S30: register the first map with each local map in turn, and obtain each sensor's estimated extrinsic matrix in the first map via the RANSAC algorithm.

Step S40: compute the error between the true extrinsic matrix and the estimated extrinsic matrix; based on these errors, update the first map to obtain a second map, which serves as the panoramic map finally obtained for the scene at the current moment.

In some preferred implementations, the N sensors with different degrees of freedom include fixed-view surveillance cameras, PTZ surveillance cameras, mobile surveillance robots, and visual surveillance UAVs.
In some preferred implementations, the three-dimensional semantic map comprises a static background model and dynamic semantic instances:

M_t = {B, O_1(t), O_2(t), ..., O_n(t)}, with O_i(t) = (c_i, m_i, p_i(t))

where M_t denotes the three-dimensional semantic map at time t, B the static background model, O_i a dynamic semantic instance, c_i the category of the instance, m_i the three-dimensional model corresponding to the instance, and p_i(t) the spatial position and orientation of the instance.
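As a minimal illustration, the map structure defined by the formula above can be written as the following data sketch (class and field names are illustrative, not from the patent):

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class SemanticInstance:
    """Dynamic semantic instance O_i = (c_i, m_i, p_i(t))."""
    category: str        # c_i: instance class, e.g. "person", "car"
    model: np.ndarray    # m_i: 3D model, here a point cloud of shape (K, 3)
    pose: np.ndarray     # p_i(t): 4x4 homogeneous position/orientation

@dataclass
class SemanticMap:
    """Three-dimensional semantic map M_t = {B, O_1(t), ..., O_n(t)}."""
    background: np.ndarray                     # B: static background model
    instances: List[SemanticInstance] = field(default_factory=list)

# Minimal usage: a map with one pedestrian instance at the origin.
m = SemanticMap(background=np.zeros((100, 3)))
m.instances.append(SemanticInstance("person", np.zeros((10, 3)), np.eye(4)))
```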
In some preferred implementations, the panoramic map, i.e. the panoramic three-dimensional semantic map, is obtained as follows:

during navigation of the mobile surveillance robot, the static background model of the panoramic map is built automatically by a TSDF-based real-time localization and mapping algorithm;

for semantic instances of the pedestrian category, an RGB-image-based pedestrian re-identification algorithm matches the same semantic instance across the local maps; for semantic instances of non-pedestrian categories, the volume overlap ratio between the three-dimensional models corresponding to the semantic instances in the local maps is computed, and semantic instances whose volume overlap ratio exceeds a set threshold are taken to be the same instance; the matched instances are combined to obtain the dynamic semantic instances of the panoramic map;

the panoramic map is constructed by combining the static background model of the panoramic map with the dynamic semantic instances in the panoramic map.
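The pedestrian branch of the matching step can be sketched as follows. The patent names only an RGB-image-based pedestrian re-identification algorithm; this sketch assumes each detected instance already carries an appearance embedding from such a network and matches instances across two sensors greedily by cosine similarity (the function name and the 0.7 threshold are illustrative assumptions):

```python
import numpy as np

def match_instances(emb_a, emb_b, sim_threshold=0.7):
    """Greedily match instance embeddings from sensor A to sensor B.

    emb_a: (Na, D) array of appearance embeddings from one local map.
    emb_b: (Nb, D) array from another local map.
    Returns a list of (i, j) index pairs judged to be the same instance.
    """
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sim = a @ b.T                             # cosine similarity matrix
    pairs, used_b = [], set()
    for i in np.argsort(-sim.max(axis=1)):    # most confident rows first
        j = int(np.argmax(sim[i]))
        if sim[i, j] >= sim_threshold and j not in used_b:
            pairs.append((int(i), j))
            used_b.add(j)
    return pairs

# Two sensors seeing two pedestrians; embeddings of the same person are close.
emb_a = np.array([[1.0, 0.0], [0.0, 1.0]])
emb_b = np.array([[0.0, 0.9], [0.9, 0.1]])
print(match_instances(emb_a, emb_b))  # → [(1, 0), (0, 1)]
```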
In some preferred implementations, step S30, "obtain each sensor's estimated extrinsic matrix in the first map via the RANSAC algorithm", proceeds as follows:

select the semantic instances shared by the first map and the local map of each sensor;

from the positions of these shared semantic instances, use the RANSAC algorithm to obtain the extrinsic matrix estimated for each sensor.
In some preferred implementations, step S40, "update the first map based on the errors", proceeds as follows:

determine whether the error is less than or equal to a set threshold; if so, no update is performed;

otherwise, the static background model in the first map is left unchanged, while for the dynamic semantic instances in the first map the spatial positions and orientations are updated using the error.
In some preferred implementations, "update the spatial position and orientation of a dynamic semantic instance using the error" proceeds as follows:

if the dynamic semantic instance is observed only by sensor i, the updated spatial position and orientation are

p' = E_i × p

if the dynamic semantic instance is observed by multiple sensors, the updated spatial position and orientation are

p' = (1 / |S|) × Σ_{i ∈ S} E_i × p

where S denotes the set of all sensors observing the dynamic semantic instance, p its current spatial position and orientation, and E_i the error between the panoramic map and the local map corresponding to the i-th sensor.
In a second aspect, the present invention proposes a three-dimensional real-time panoramic monitoring system based on multi-degree-of-freedom sensor association, comprising a local map acquisition module, a panoramic map acquisition module, a registration module, and an update and output module.

The local map acquisition module is configured to obtain real-time observation data from N sensors with different degrees of freedom in the scene to be monitored and to construct the three-dimensional semantic map corresponding to each sensor as its local map; N is a positive integer; the real-time observation data include the observation time and the true extrinsic matrix.

The panoramic map acquisition module is configured to integrate the local maps produced by the sensors to obtain a panoramic map of the scene to be monitored as the first map.

The registration module is configured to register the first map with each local map in turn and to obtain each sensor's estimated extrinsic matrix in the first map via the RANSAC algorithm.

The update and output module is configured to compute the error between the true extrinsic matrix and the estimated extrinsic matrix and, based on these errors, to update the first map to obtain a second map as the panoramic map finally obtained for the scene at the current moment.

In a third aspect, the present invention proposes a storage device storing a plurality of programs adapted to be loaded and executed by a processor to implement the three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensor association described above.

In a fourth aspect, the present invention proposes a processing device comprising a processor adapted to execute programs and a storage device adapted to store a plurality of programs, the programs being adapted to be loaded and executed by the processor to implement the three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensor association described above.
Beneficial effects of the invention:

The invention achieves continuous three-dimensional panoramic video surveillance over a large area, improves monitoring efficiency, and ensures the quality and effect of monitoring.

(1) The invention introduces multi-degree-of-freedom sensors and fuses their observation data to build a dynamic, semantically rich three-dimensional panoramic monitoring map. This map contains not only a static background model but also dynamic semantic instance models, enabling continuous three-dimensional panoramic video surveillance over a large area.

(2) The invention introduces an automatic calibration method for multi-degree-of-freedom sensors: semantic instances in the three-dimensional panoramic map serve as calibration templates, the transformation matrix between the local map produced by each sensor's observations and the corresponding semantic instances in the panoramic map is computed automatically, and the sensor's extrinsic matrix is thereby calibrated. The error matrix between the local map and the panoramic map is then computed from the currently estimated extrinsic matrix, and the panoramic map is updated, yielding a more accurate extrinsic matrix and a more precise three-dimensional panoramic map, which improves monitoring efficiency and ensures the quality and effect of monitoring.
Brief Description of the Drawings

Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings.

Fig. 1 is a schematic flowchart of the three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensor association according to an embodiment of the present invention;

Fig. 2 is a schematic diagram of the framework of the three-dimensional real-time panoramic monitoring system based on multi-degree-of-freedom sensor association according to an embodiment of the present invention;

Fig. 3 is a simplified flowchart of the three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensor association according to an embodiment of the present invention.
Detailed Description

To make the objectives, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.

The present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention, not to limit it. It should also be noted that, for convenience of description, only the parts related to the invention are shown in the drawings.

It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with one another.
As shown in Fig. 1, the three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensor association of the present invention includes the following steps:

Step S10: obtain real-time observation data from N sensors with different degrees of freedom in the scene to be monitored, and construct the three-dimensional semantic map corresponding to each sensor as its local map; N is a positive integer; the real-time observation data include the observation time and the true extrinsic matrix.

Step S20: integrate the local maps produced by the sensors to obtain a panoramic map of the scene to be monitored, referred to as the first map.

Step S30: register the first map with each local map in turn, and obtain each sensor's estimated extrinsic matrix in the first map via the RANSAC algorithm.

Step S40: compute the error between the true extrinsic matrix and the estimated extrinsic matrix; based on these errors, update the first map to obtain a second map, which serves as the panoramic map finally obtained for the scene at the current moment.
To describe the three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensor association more clearly, the steps of one embodiment of the method are elaborated below with reference to the accompanying drawings.

The invention introduces multi-degree-of-freedom sensors to provide richer visual information, builds a three-dimensional panoramic semantic map of the scene using three-dimensional modeling, instance segmentation, and related methods, and finally iteratively updates the sensors' extrinsic matrices together with the panoramic map, achieving real-time three-dimensional panoramic map updates. The details are as follows:
Step S10: obtain real-time observation data from N sensors with different degrees of freedom in the scene to be monitored, and construct the three-dimensional semantic map corresponding to each sensor as its local map; N is a positive integer; the real-time observation data include the observation time and the true extrinsic matrix.

In this embodiment, the N sensors with different degrees of freedom are preferably a fixed-view surveillance camera (zero degrees of freedom), a PTZ surveillance camera (camera attitude: 2 DoF; zoom: 1 DoF), a mobile surveillance robot (camera attitude: 2 DoF; zoom: 1 DoF; robot pose: 6 DoF), and a visual surveillance UAV (camera attitude: 2 DoF; zoom: 1 DoF; UAV pose: 6 DoF). In other embodiments, the sensors may be chosen according to actual needs.
In the present invention, the observation data of the multi-degree-of-freedom sensors at time t are expressed as Z_t = (T(t), K(t)), where T is the extrinsic matrix of the camera (the true extrinsic matrix) and represents the position and attitude, i.e. the pose, of the sensor; for PTZ surveillance cameras, mobile surveillance robots, and visual surveillance UAVs, the extrinsic matrix is a time-varying function T(t). K is the sensor sampling matrix, and its meaning depends on the imaging model: for an RGB camera, K is the camera intrinsic matrix; for a zoom camera, K is a time-varying function K(t); and for sensors such as lidar that directly measure the three-dimensional coordinates of the environment, K is the identity matrix.
Each sensor builds its corresponding three-dimensional semantic map from its observation data, as its local map. The three-dimensional semantic map comprises a static background model B and dynamic semantic instance models O_i(t), as shown in formula (1):

M_t = {B, O_1(t), O_2(t), ..., O_n(t)}   (1)

Each dynamic semantic instance model is given by formula (2):

O_i(t) = (c_i, m_i, p_i(t))   (2)

where c_i denotes the category of the dynamic semantic instance, m_i its three-dimensional model, and p_i(t) its spatial position and orientation. Since the position, attitude, and even the model of a dynamic semantic instance O_i in the monitored environment can change, p_i can be expressed as a function of time t.
Step S20: integrate the local maps produced by the sensors to obtain a panoramic map of the scene to be monitored, referred to as the first map.

In this embodiment, multiple sources of perception information with different degrees of freedom are integrated through spatial and temporal registration and synchronization to build the panoramic map of the scene to be monitored.

When building the panoramic map, the static background model is constructed automatically by a TSDF-based real-time localization and mapping algorithm while the mobile surveillance robot navigates. The dynamic semantic instances are built from the observation data acquired in real time by the multi-degree-of-freedom sensors in three steps: semantic instance extraction, three-dimensional model mapping, and cross-sensor instance re-identification. The details are as follows:

Step S21: extract semantic instances from the observation data of the multi-degree-of-freedom sensors acquired in real time.

For visual sensors, an RGB-image-based instance segmentation algorithm is used; for lidar sensors, a point-cloud-based three-dimensional instance segmentation algorithm is used.

Step S22: associate each extracted semantic instance one-to-one with the three-dimensional model of its category, and obtain the model's three-dimensional position and orientation using depth sensor information.

Steps S21 and S22 yield the local map corresponding to each sensor. Next, the local maps are integrated to obtain the panoramic map, i.e. the panoramic three-dimensional semantic map.
Step S23: cross-sensor semantic instance re-identification.

For semantic instances of the pedestrian category, an RGB-image-based pedestrian re-identification algorithm (most sensors in the present invention being visual sensors) matches the same semantic instance across different sensor fields of view (local maps); in other embodiments, pedestrian instances may be matched with a re-identification algorithm suited to the sensor.

For instances of non-pedestrian categories, the volume overlap ratio between the three-dimensional models corresponding to the semantic instances observed by different sensors is computed; if the ratio exceeds a set threshold (preferably 0.5 in the present invention), the semantic instances are regarded as the same instance seen in different sensor fields of view.
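The non-pedestrian matching criterion above can be sketched as follows. The patent does not fix how the volume overlap is computed, so an axis-aligned bounding-box intersection-over-union stands in for the model volumes here (the function name and example boxes are illustrative):

```python
import numpy as np

def volume_overlap_ratio(box_a, box_b):
    """Intersection-over-union of two axis-aligned 3D boxes.

    Each box is a (min_xyz, max_xyz) pair of length-3 arrays. An IoU above
    the threshold (0.5 in the patent) marks the two observations as the
    same non-pedestrian instance.
    """
    lo = np.maximum(box_a[0], box_b[0])
    hi = np.minimum(box_a[1], box_b[1])
    inter = np.prod(np.clip(hi - lo, 0.0, None))   # overlap volume, >= 0
    vol = lambda b: np.prod(b[1] - b[0])
    union = vol(box_a) + vol(box_b) - inter
    return inter / union

a = (np.zeros(3), np.ones(3))                       # unit cube at the origin
b = (np.zeros(3), np.array([1.0, 1.0, 2.0]))        # taller box containing a
print(volume_overlap_ratio(a, b))  # → 0.5, exactly at the threshold
```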
Step S30: register the first map with each local map in turn, and obtain each sensor's estimated extrinsic matrix in the first map via the RANSAC algorithm.

In this embodiment, the local map produced by each multi-degree-of-freedom sensor is registered with the global map, and each sensor's observation of the map at time t is computed. Specifically, the semantic instances shared between the panoramic map and the local map of the current sensor are selected, and from the positions of these instances the RANSAC algorithm is used to obtain the extrinsic matrix estimated for the current sensor.
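The registration step can be sketched as follows: given the matched positions of shared semantic instances in a sensor's local map and in the panoramic map, a rigid transform is estimated with RANSAC over minimal three-point samples, each solved in closed form by the Kabsch (SVD) method. The patent specifies only that RANSAC is used, so the sample size, tolerance, and iteration count here are assumptions:

```python
import numpy as np

def kabsch(p, q):
    """Least-squares rigid transform (R, t) mapping points p onto q."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def ransac_extrinsic(local_pts, global_pts, iters=200, tol=0.05, seed=0):
    """Estimate a sensor's extrinsic matrix from shared-instance positions.

    local_pts/global_pts: (N, 3) matched instance centroids in the local
    and panoramic maps. Returns the 4x4 transform with the most inliers.
    """
    rng = np.random.default_rng(seed)
    best, best_inliers = np.eye(4), -1
    for _ in range(iters):
        idx = rng.choice(len(local_pts), 3, replace=False)  # minimal sample
        R, t = kabsch(local_pts[idx], global_pts[idx])
        err = np.linalg.norm(local_pts @ R.T + t - global_pts, axis=1)
        n = int((err < tol).sum())
        if n > best_inliers:
            T = np.eye(4); T[:3, :3], T[:3, 3] = R, t
            best, best_inliers = T, n
    return best

# Example: recover a known rotation/translation despite one outlier match.
rng = np.random.default_rng(1)
P = rng.uniform(-1.0, 1.0, (10, 3))
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, 2.0, 0.5])
Q[0] += 5.0                                # a wrong correspondence
T = ransac_extrinsic(P, Q)
```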
Step S40: compute the error between the true extrinsic matrix and the estimated extrinsic matrix; based on these errors, update the first map to obtain a second map as the panoramic map finally obtained for the scene at the current moment. The details are as follows:

In this embodiment, each local map is projected into the global coordinate system, the projected local map is registered with the panoramic map, the error between the panoramic map and the actual observations is computed, and the panoramic map is corrected.

Specifically, semantic instances shared by the current local map and the panoramic map (the first map) are selected. As shown in Fig. 3, the extrinsic matrix of each sensor is used to project the shared semantic instances from the sensor coordinate systems (two sensors, sensor 1 and sensor 2, are shown in the figure) into the global coordinate system. In the global coordinate system there is a certain spatial pose (position and orientation) error between corresponding semantic instances in a sensor's local map and in the panoramic map, i.e. an error in the extrinsic matrix. From the position set of the shared instances, the RANSAC method computes the error E between the panoramic map and the local map (the error matrix for short); when the error exceeds a set threshold, the panoramic map is corrected and updated.

The method for "correcting and updating the panoramic map when the error exceeds the set threshold" is as follows:

The static background model in the panoramic map is not updated. For a dynamic semantic instance O_i in the panoramic map, its spatial pose p is updated. If the instance is observed only by sensor i, the updated pose is p' = E_i × p, where × denotes matrix multiplication. If the instance is observed by multiple sensors, the updated pose is p' = (1 / |S|) × Σ_{i ∈ S} E_i × p, where S is the set of all sensors that observe the instance O_i.
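The correction rule above can be sketched as follows. Averaging homogeneous matrices element-wise for the multi-sensor case is a naive linear simplification of the reconstructed formula; a careful implementation would average the rotational parts on SE(3), for example via quaternions. All names are illustrative:

```python
import numpy as np

def update_pose(pose, errors):
    """Correct an instance pose p with per-sensor error matrices E_i.

    pose: 4x4 pose of the dynamic instance in the panoramic map.
    errors: list of 4x4 error matrices, one per sensor that observed the
    instance. One observer: p' = E_i @ p. Several observers: the corrected
    poses are averaged element-wise (a naive simplification).
    """
    if len(errors) == 1:
        return errors[0] @ pose
    return np.mean([E @ pose for E in errors], axis=0)

E = np.eye(4); E[:3, 3] = [0.1, 0.0, 0.0]   # pure-translation error matrix
p = np.eye(4)                               # instance pose in the panoramic map
corrected = update_pose(p, [E])             # single observer: E @ p
```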
Steps S30 and S40 above are repeated, iteratively updating the multi-degree-of-freedom sensor observations and the panoramic map.
在获取全景地图后,本发明将全景地图转换为GLB格式导入Habitat-sim模拟器中,并采用Habitat-lab库训练基于强化学习的视觉导航算法模型,需要注意的是,模拟器中虚拟智能体搭载的传感器参数(包括传感器类型与外参)与真实环境保持一致。After acquiring the panoramic map, the present invention converts the panoramic map into GLB format and imports it into the Habitat-sim simulator, and uses the Habitat-lab library to train the visual navigation algorithm model based on reinforcement learning. It should be noted that the virtual agent in the simulator The loaded sensor parameters (including sensor type and external parameters) are consistent with the real environment.
The "reinforcement-learning-based visual navigation algorithm" consists of three modules, in order:
A real-time localization and mapping module (SLAM module), which feeds the real-time data of the multi-degree-of-freedom sensors into a neural network model to generate local spatial maps, indexed by sensor and timestamp, and stitches the local spatial maps together into a global map of dimension 2×M×M;
A global decision module, which plans the virtual agent's global action path according to the global map;
A local decision module, which plans the virtual agent's local action path according to the global path and the currently reachable area.
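The stitching performed by the SLAM module can be illustrated as follows. The two-channel convention (channel 0: obstacles, channel 1: explored area) is an assumption borrowed from common learned-SLAM navigation pipelines; the patent only fixes the 2×M×M shape, and the offset handling below is a simplification:

```python
import numpy as np

def stitch_local_maps(local_maps, offsets, M=480):
    """Stitch per-sensor, per-timestep local occupancy maps into a
    2 x M x M global map. local_maps: list of 2 x h x w arrays;
    offsets: (row, col) of each local map's top-left corner in the
    global grid (assumed to fit inside it). Overlaps keep the max."""
    global_map = np.zeros((2, M, M), dtype=np.float32)
    for lm, (r, c) in zip(local_maps, offsets):
        _, h, w = lm.shape
        region = global_map[:, r:r + h, c:c + w]
        np.maximum(region, lm, out=region)   # merge overlapping observations
    return global_map
```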
A three-dimensional real-time panoramic monitoring system based on multi-degree-of-freedom sensing association according to the second embodiment of the present invention, as shown in Figure 2, comprises: a local map acquisition module 100, a panoramic map acquisition module 200, a registration module 300, and an update output module 400.
The local map acquisition module 100 is configured to acquire real-time observation data from N sensors of different degrees of freedom in the scene to be monitored and to construct the three-dimensional semantic map corresponding to each sensor as a local map; N is a positive integer; the real-time observation data include the observation time and the true extrinsic matrix.
The panoramic map acquisition module 200 is configured to integrate the local maps produced by the sensors into a panoramic map of the scene to be monitored, which serves as the first map.
The registration module 300 is configured to register the first map against each local map in turn and to obtain, via the RANSAC algorithm, the estimated extrinsic matrix of each sensor in the first map.
The update output module 400 is configured to compute the error between the true and the estimated extrinsic matrices and, based on these errors, to update the first map into a second map, which is the panoramic map finally obtained for the scene to be monitored at the current moment.
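One monitoring cycle through modules 100-400 could be orchestrated roughly as below; `merge`, `register`, and `correct` stand in for the module internals, and their signatures, along with the Frobenius-norm error and threshold value, are assumptions of this sketch:

```python
import numpy as np

def panoramic_update_step(local_maps, real_extrinsics, register, merge,
                          correct, err_threshold=0.05):
    """One cycle mirroring modules 100-400: merge the local maps into a
    first (panoramic) map, register each local map against it to obtain
    an estimated extrinsic per sensor, and correct the panoramic map
    wherever the estimate deviates from the true extrinsic by more than
    the threshold, yielding the second map."""
    first_map = merge(local_maps)                          # module 200
    second_map = first_map
    for sid, local in local_maps.items():                  # module 300
        est = register(second_map, local)                  # estimated extrinsic
        err = np.linalg.norm(est - real_extrinsics[sid])   # module 400
        if err > err_threshold:
            second_map = correct(second_map, local, est)
    return second_map
```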
Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working process of the system described above and the related explanations may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
It should be noted that the three-dimensional real-time panoramic monitoring system based on multi-degree-of-freedom sensing association provided by the above embodiment is illustrated only by the division of functional modules described above. In practical applications, the above functions may be allocated to different functional modules as required; that is, the modules or steps in the embodiments of the present invention may be further decomposed or combined. For example, the modules of the above embodiment may be merged into one module or further split into multiple sub-modules to accomplish all or part of the functions described above. The names of the modules and steps involved in the embodiments of the present invention serve only to distinguish the modules or steps and are not to be regarded as improper limitations on the present invention.
A storage device according to a third embodiment of the present invention stores a plurality of programs adapted to be loaded by a processor to implement the above three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association.
A processing device according to a fourth embodiment of the present invention comprises a processor adapted to execute programs and a storage device adapted to store a plurality of programs, the programs being adapted to be loaded and executed by the processor to implement the above three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association.
Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the storage device and processing device described above and the related explanations may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
Those skilled in the art should appreciate that the modules and method steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. Programs corresponding to software modules and method steps may be stored in random-access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate the interchangeability of electronic hardware and software, the composition and steps of each example have been described above generally in terms of functionality. Whether these functions are performed in hardware or software depends on the particular application and design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.
The terms "first", "second", and the like are used to distinguish similar objects, not to describe or indicate a particular order or sequence.
The term "comprising" or any other similar term is intended to cover a non-exclusive inclusion, such that a process, method, article, or device/apparatus comprising a list of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article, or device/apparatus.
The technical solutions of the present invention have thus been described with reference to the preferred embodiments shown in the accompanying drawings; however, those skilled in the art will readily understand that the protection scope of the present invention is obviously not limited to these specific embodiments. Without departing from the principles of the present invention, those skilled in the art can make equivalent changes or substitutions to the relevant technical features, and the technical solutions after such changes or substitutions will fall within the protection scope of the present invention.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110126538.7A CN112446905B (en) | 2021-01-29 | 2021-01-29 | Three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112446905A CN112446905A (en) | 2021-03-05 |
CN112446905B true CN112446905B (en) | 2021-05-11 |
Family
ID=74740114
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110126538.7A Active CN112446905B (en) | 2021-01-29 | 2021-01-29 | Three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112446905B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114399606A (en) * | 2021-12-24 | 2022-04-26 | 中国科学院自动化研究所 | Interactive display system, method and equipment based on stereoscopic visualization |
CN115293508B (en) * | 2022-07-05 | 2023-06-02 | 国网江苏省电力有限公司南通市通州区供电分公司 | Visual optical cable running state monitoring method and system |
CN115620201B (en) * | 2022-10-25 | 2023-06-16 | 北京城市网邻信息技术有限公司 | House model construction method, device, equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109640032A (en) * | 2018-04-13 | 2019-04-16 | 河北德冠隆电子科技有限公司 | Based on the more five dimension early warning systems of element overall view monitoring detection of artificial intelligence |
CN110706279A (en) * | 2019-09-27 | 2020-01-17 | 清华大学 | Global position and pose estimation method based on information fusion of global map and multiple sensors |
CN111561923A (en) * | 2020-05-19 | 2020-08-21 | 北京数字绿土科技有限公司 | SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion |
CN112016612A (en) * | 2020-08-26 | 2020-12-01 | 四川阿泰因机器人智能装备有限公司 | A multi-sensor fusion SLAM method based on monocular depth estimation |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2481209A1 (en) * | 2009-09-22 | 2012-08-01 | Tenebraex Corporation | Systems and methods for correcting images in a multi-sensor system |
2021-01-29: application CN202110126538.7A (CN) filed; granted as patent CN112446905B, status active
Non-Patent Citations (1)
Title |
---|
Zhu Xinyan, Zhou Chenghu, Guo Wei, Hu Tao, Liu Hongqiang, Gao Wenxiu, "全息位置地图概念内涵及其关键技术初探" [A preliminary study on the concept and key technologies of the holographic location map], Geomatics and Information Science of Wuhan University, vol. 40, no. 3, Mar. 2015 *
Also Published As
Publication number | Publication date |
---|---|
CN112446905A (en) | 2021-03-05 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||