
CN112446905B - Three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association - Google Patents


Info

Publication number
CN112446905B
CN112446905B (application CN202110126538.7A)
Authority
CN
China
Prior art keywords
map, panoramic, semantic, real, dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110126538.7A
Other languages
Chinese (zh)
Other versions
CN112446905A (en)
Inventor
张兆翔
张驰
陈文博
Current Assignee
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science
Priority to CN202110126538.7A
Publication of CN112446905A
Application granted
Publication of CN112446905B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Remote Sensing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical fields of real-time localization and mapping and computer vision, and specifically relates to a three-dimensional real-time panoramic monitoring method, system, and device based on multi-degree-of-freedom sensing association, aiming to solve the problems that existing monitoring technology cannot realize large-range three-dimensional panoramic video monitoring and suffers from low monitoring efficiency and poor effect. The method comprises: obtaining real-time observation data from N sensors with different degrees of freedom and constructing a three-dimensional semantic map corresponding to each sensor as its local map; integrating the local maps generated by all sensors to obtain a panoramic map as a first map; obtaining, through the RANSAC algorithm, the extrinsic matrix estimated for each sensor in the first map; and calculating the error between each true extrinsic matrix and the corresponding estimated extrinsic matrix and updating the first map to obtain the panoramic map finally obtained at the current moment for the scene to be monitored. The invention realizes large-range three-dimensional panoramic video monitoring, improves monitoring efficiency, and ensures monitoring quality and effect.

Description

Three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association

Technical Field

The invention belongs to the technical fields of real-time localization and mapping and computer vision, and in particular relates to a three-dimensional real-time panoramic monitoring method, system, and device based on multi-degree-of-freedom sensing association.

Background Art

Video surveillance is an important and challenging classic computer vision task with a wide range of applications in security monitoring, intelligent video analysis, and personnel search-and-rescue retrieval. Surveillance cameras are usually installed at fixed positions to collect multi-angle, multi-pose two-dimensional pedestrian images. Monitoring personnel who want to track a pedestrian's real-time position and movement trajectory typically need accumulated experience and cannot obtain this information intuitively. Sensors with only a single degree of freedom cannot satisfy the needs of video surveillance well. The present method proposes a three-dimensional panoramic monitoring approach based on multi-degree-of-freedom sensing association; by combining three-dimensional environment modeling, instance segmentation, three-dimensional model projection, and other techniques to achieve three-dimensional video monitoring, it solves this problem well.

Summary of the Invention

To solve the above problems in the prior art, namely that most existing video surveillance technologies rely on single-degree-of-freedom sensors with fixed viewing angles, cannot realize large-range three-dimensional panoramic video monitoring, demand substantial experience from monitoring personnel, and suffer from low monitoring efficiency and poor effect, a first aspect of the present invention proposes a three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association, the method comprising:

Step S10: obtain real-time observation data from N sensors with different degrees of freedom in the scene to be monitored, and construct a three-dimensional semantic map corresponding to each sensor as its local map; N is a positive integer; the real-time observation data include the observation time and the true extrinsic matrix;

Step S20: integrate the local maps generated by the sensors to obtain a panoramic map of the scene to be monitored as a first map;

Step S30: register the first map with each local map in turn, and obtain, through the RANSAC algorithm, the extrinsic matrix estimated for each sensor in the first map;

Step S40: calculate the error between each true extrinsic matrix and the corresponding estimated extrinsic matrix; based on these errors, update the first map to obtain a second map as the panoramic map finally obtained at the current moment for the scene to be monitored.
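The four steps can be sketched as one monitoring cycle (a minimal illustration only; the helper names `build_local_map`, `merge_maps`, and `register` are hypothetical stand-ins for the patent's map construction, integration, and RANSAC registration procedures):

```python
import numpy as np

def pose_error(true_extrinsic, estimated_extrinsic):
    """Step S40 error: the transform mapping the RANSAC estimate onto the
    true extrinsic matrix (both 4x4 homogeneous matrices)."""
    return true_extrinsic @ np.linalg.inv(estimated_extrinsic)

def monitoring_cycle(observations, build_local_map, merge_maps, register):
    """One cycle of steps S10-S40 over N sensors.

    observations: {sensor_id: (raw_data, true_extrinsic)}
    register(panorama, local_map) -> estimated 4x4 extrinsic matrix (S30).
    Returns the first-map panorama and the per-sensor error matrices that
    step S40 would use to correct it.
    """
    local_maps = {sid: build_local_map(raw)                 # S10
                  for sid, (raw, _) in observations.items()}
    panorama = merge_maps(local_maps)                       # S20
    errors = {}
    for sid, (_, true_extrinsic) in observations.items():   # S30 + S40
        estimated = register(panorama, local_maps[sid])
        errors[sid] = pose_error(true_extrinsic, estimated)
    return panorama, errors
```

When the estimated extrinsics match the true ones, every error matrix is the identity and no correction is needed.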

In some preferred embodiments, the N sensors with different degrees of freedom include fixed-view surveillance cameras, PTZ surveillance cameras, movable surveillance robots, and visual surveillance UAVs.

In some preferred embodiments, the three-dimensional semantic map includes a static background model and dynamic semantic instances, as shown in the following formulas:

M_t = {B, O_t}

O_t = {(c_i, s_i, p_i)}

where M_t denotes the three-dimensional semantic map at time t, B denotes the static background model, O_t denotes the dynamic semantic instances, c_i denotes the category of an instance, s_i denotes the three-dimensional model corresponding to the instance, and p_i denotes the spatial position and orientation of the instance.
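The map structure defined by these formulas can be sketched as plain data types (an illustrative assumption only; the patent does not prescribe a concrete representation, and the 4x4 pose matrix is one common choice):

```python
from dataclasses import dataclass, field
from typing import Any, List
import numpy as np

@dataclass
class SemanticInstance:
    """One dynamic semantic instance o_i = (c_i, s_i, p_i)."""
    category: str        # c_i: instance category, e.g. "pedestrian"
    model: Any           # s_i: the 3D model associated with the instance
    pose: np.ndarray     # p_i: 4x4 matrix (spatial position and orientation)

@dataclass
class SemanticMap:
    """Three-dimensional semantic map M_t = {B, O_t} at time t."""
    background: Any                                                  # B
    instances: List[SemanticInstance] = field(default_factory=list)  # O_t
    timestamp: float = 0.0                                           # t
```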

In some preferred embodiments, the panoramic map, i.e., the panoramic three-dimensional semantic map, is obtained as follows:

during navigation of the movable surveillance robot, the static background model of the panoramic map is automatically constructed through a TSDF-based real-time localization and mapping algorithm;

for pedestrian-category semantic instances, an RGB-image-based pedestrian re-identification algorithm is used to match the same semantic instance across the local maps; for non-pedestrian semantic instances, the volume overlap ratio between the three-dimensional models corresponding to the semantic instances in the local maps is calculated, and semantic instances whose volume overlap ratio is higher than a set threshold are treated as the same semantic instance; the matched instances are combined to obtain the dynamic semantic instances of the panoramic map;
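The volume-overlap test for non-pedestrian instances can be sketched as follows (a minimal illustration assuming axis-aligned bounding boxes, a representation the patent does not specify; 0.5 is the threshold preferred in this embodiment):

```python
import numpy as np

def volume_overlap_ratio(box_a, box_b):
    """Volume overlap (3D IoU) between two axis-aligned boxes, each given
    as a (min_corner, max_corner) pair of length-3 arrays."""
    lo = np.maximum(box_a[0], box_b[0])
    hi = np.minimum(box_a[1], box_b[1])
    intersection = float(np.prod(np.clip(hi - lo, 0.0, None)))
    volume = lambda box: float(np.prod(box[1] - box[0]))
    union = volume(box_a) + volume(box_b) - intersection
    return intersection / union if union > 0.0 else 0.0

def same_instance(box_a, box_b, threshold=0.5):
    """Instances whose overlap ratio exceeds the threshold are treated as
    one instance."""
    return volume_overlap_ratio(box_a, box_b) > threshold
```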

the panoramic map is constructed by combining the obtained static background model of the panoramic map and the dynamic semantic instances of the panoramic map.

In some preferred embodiments, in step S30, "obtaining, through the RANSAC algorithm, the extrinsic matrix estimated for each sensor in the first map" is carried out as follows:

select the semantic instances shared by the first map and the local map corresponding to each sensor;

according to the positions of the shared semantic instances, use the RANSAC algorithm to obtain the extrinsic matrix estimated for each sensor.
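A minimal sketch of this RANSAC step, assuming the matched instance positions are 3D points and the extrinsic matrix is a rigid transform (the Kabsch solver and the 3-point minimal sample are implementation choices, not specified by the patent):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t with dst ≈ src @ R.T + t
    (Kabsch algorithm on corresponding 3D points, one per row)."""
    centroid_src, centroid_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - centroid_src).T @ (dst - centroid_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, centroid_dst - R @ centroid_src

def ransac_extrinsics(local_pts, global_pts, iters=100, tol=0.1, seed=0):
    """Estimate a sensor's extrinsic matrix from matched instance positions
    in its local map (local_pts) and the panoramic map (global_pts)."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.choice(len(local_pts), size=3, replace=False)
        R, t = rigid_fit(local_pts[idx], global_pts[idx])
        residual = np.linalg.norm(local_pts @ R.T + t - global_pts, axis=1)
        inliers = int((residual < tol).sum())
        if inliers > best_inliers:
            best_inliers = inliers
            best = np.eye(4)
            best[:3, :3], best[:3, 3] = R, t
    return best
```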

In some preferred embodiments, in step S40, "updating the first map based on the errors" is carried out as follows:

determine whether the error is less than or equal to a set threshold; if so, no update is performed;

otherwise, the static background model of the first map is left unchanged, and the spatial positions and orientations of the dynamic semantic instances in the first map are updated in combination with the error.

In some preferred embodiments, "updating the spatial position and orientation of a dynamic semantic instance in combination with the error" is carried out as follows:

if the dynamic semantic instance is observed only by sensor i, the updated spatial position and orientation are:

p' = ΔT_i · p

if the dynamic semantic instance is observed by multiple sensors, the updated spatial position and orientation are:

p' = (1 / |S|) Σ_{i ∈ S} ΔT_i · p

where S denotes the set of all sensors that observed the dynamic semantic instance, i ∈ S, and ΔT_i denotes the error between the panoramic map and the local map corresponding to the i-th sensor.
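A sketch of this update rule, reading the formulas as applying the error transform ΔT_i directly for a single observer and averaging the corrected poses over multiple observers (an assumption; the element-wise mean is a naive stand-in, and a production system would average rotations on SE(3)):

```python
import numpy as np

def update_instance_pose(pose, errors, observers):
    """Update a dynamic instance's pose p (a 4x4 matrix).

    errors:    {sensor_id: 4x4 error matrix dT_i between the panorama and
                that sensor's local map}
    observers: ids of the sensors that observed this instance.
    """
    corrected = [errors[i] @ pose for i in observers]
    if len(corrected) == 1:          # single observer: apply its error
        return corrected[0]
    # several observers: naive element-wise average of the corrected poses
    return np.mean(corrected, axis=0)
```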

A second aspect of the present invention proposes a three-dimensional real-time panoramic monitoring system based on multi-degree-of-freedom sensing association; the system includes a local map acquisition module, a panoramic map acquisition module, a registration module, and an update-and-output module;

the local map acquisition module is configured to obtain real-time observation data from N sensors with different degrees of freedom in the scene to be monitored and to construct a three-dimensional semantic map corresponding to each sensor as its local map; N is a positive integer; the real-time observation data include the observation time and the true extrinsic matrix;

the panoramic map acquisition module is configured to integrate the local maps generated by the sensors to obtain a panoramic map of the scene to be monitored as a first map;

the registration module is configured to register the first map with each local map in turn and to obtain, through the RANSAC algorithm, the extrinsic matrix estimated for each sensor in the first map;

the update-and-output module is configured to calculate the error between each true extrinsic matrix and the corresponding estimated extrinsic matrix and, based on these errors, to update the first map to obtain a second map as the panoramic map finally obtained at the current moment for the scene to be monitored.

A third aspect of the present invention proposes a storage device in which a plurality of programs are stored, the programs being suitable for being loaded and executed by a processor to implement the above three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association.

A fourth aspect of the present invention proposes a processing device including a processor and a storage device; the processor is suitable for executing programs; the storage device is suitable for storing a plurality of programs; the programs are suitable for being loaded and executed by the processor to implement the above three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association.

Beneficial effects of the invention:

The invention realizes three-dimensional panoramic video monitoring over a large range with a continuous monitoring picture, improves monitoring efficiency, and ensures monitoring quality and effect.

(1) The invention introduces multi-degree-of-freedom sensors and fuses their observation data to construct a dynamic, semantically rich three-dimensional panoramic monitoring map. The map contains not only a static background model but also dynamic semantic instance models, realizing large-range three-dimensional panoramic video monitoring with a continuous monitoring picture.

(2) The invention introduces an automatic calibration method for multi-degree-of-freedom sensors, which uses the semantic instances in the three-dimensional panoramic map as calibration templates, automatically computes the transformation matrix between the semantic instances in the local map produced by a multi-degree-of-freedom sensor's observations and those in the panoramic map, and thereby calibrates the sensor's extrinsic matrix. The error matrix between the local map and the panoramic map is then computed with the currently estimated extrinsic matrix, and the panoramic map is updated to obtain a more accurate extrinsic matrix and a more precise three-dimensional panoramic map, improving monitoring efficiency and ensuring monitoring quality and effect.

Brief Description of the Drawings

Other features, objects, and advantages of the present application will become more apparent upon reading the detailed description of non-limiting embodiments made with reference to the following drawings.

Figure 1 is a schematic flowchart of the three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association according to an embodiment of the present invention;

Figure 2 is a schematic framework diagram of the three-dimensional real-time panoramic monitoring system based on multi-degree-of-freedom sensing association according to an embodiment of the present invention;

Figure 3 is a simplified schematic flowchart of the three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association according to an embodiment of the present invention.

Detailed Description

To make the objectives, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are a part, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

The present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention, not to limit it. It should also be noted that, for convenience of description, only the parts related to the invention are shown in the drawings.

It should be noted that, where no conflict arises, the embodiments in the present application and the features of the embodiments may be combined with each other.

The three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association of the present invention, as shown in Figure 1, includes the following steps:

Step S10: obtain real-time observation data from N sensors with different degrees of freedom in the scene to be monitored, and construct a three-dimensional semantic map corresponding to each sensor as its local map; N is a positive integer; the real-time observation data include the observation time and the true extrinsic matrix;

Step S20: integrate the local maps generated by the sensors to obtain a panoramic map of the scene to be monitored as a first map;

Step S30: register the first map with each local map in turn, and obtain, through the RANSAC algorithm, the extrinsic matrix estimated for each sensor in the first map;

Step S40: calculate the error between each true extrinsic matrix and the corresponding estimated extrinsic matrix; based on these errors, update the first map to obtain a second map as the panoramic map finally obtained at the current moment for the scene to be monitored.

To describe the three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association of the present invention more clearly, each step in an embodiment of the method is described in detail below with reference to the accompanying drawings.

The invention introduces multi-degree-of-freedom sensors to provide richer visual information, uses three-dimensional modeling, instance segmentation, and other methods to construct a three-dimensional panoramic semantic map of the scene, and finally iteratively updates the extrinsic matrices of the multi-degree-of-freedom sensors and the panoramic map, achieving real-time three-dimensional panoramic map updates. The details are as follows:

Step S10: obtain real-time observation data from N sensors with different degrees of freedom in the scene to be monitored, and construct a three-dimensional semantic map corresponding to each sensor as its local map; N is a positive integer; the real-time observation data include the observation time and the true extrinsic matrix.

In this embodiment, the N sensors with different degrees of freedom are preferably fixed-view surveillance cameras (zero degrees of freedom), PTZ surveillance cameras (camera attitude: 2 DOF; zoom scale: 1 DOF), movable surveillance robots (camera attitude: 2 DOF; zoom scale: 1 DOF; robot pose: 6 DOF), and visual surveillance UAVs (camera attitude: 2 DOF; zoom scale: 1 DOF; UAV pose: 6 DOF); in other embodiments, the sensors may be selected according to actual needs.
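The degree-of-freedom breakdown above can be tabulated as a small lookup (illustrative only; the type names are hypothetical):

```python
# Degrees of freedom per sensor type in this embodiment
# (camera attitude, zoom scale, platform pose).
SENSOR_DOF = {
    "fixed_camera":     {"attitude": 0, "zoom": 0, "platform": 0},
    "ptz_camera":       {"attitude": 2, "zoom": 1, "platform": 0},
    "mobile_robot":     {"attitude": 2, "zoom": 1, "platform": 6},
    "surveillance_uav": {"attitude": 2, "zoom": 1, "platform": 6},
}

def total_dof(sensor_type):
    """Total degrees of freedom for a sensor type."""
    return sum(SENSOR_DOF[sensor_type].values())
```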

In the present invention, the observation data of a sensor S_i at time t is expressed as:

Z_t^i = {E_t^i, K_t^i}

where E_t^i is the camera's extrinsic matrix (the true extrinsic matrix), representing the position and attitude, i.e., the pose, of the sensor; for PTZ surveillance cameras, movable surveillance robots, and visual surveillance UAVs, the extrinsic matrix is a time-varying function E^i(t). K_t^i is the sensor sampling matrix; likewise, its meaning differs with the camera's imaging mode: for an RGB camera, K_t^i is the camera intrinsics; for a zoom camera, K_t^i is a time-varying function K^i(t); for sensors such as lidar that can directly measure the three-dimensional coordinates of the external environment, K_t^i is the identity matrix.

Each sensor constructs its corresponding three-dimensional semantic map from its observation data, as its local map. The three-dimensional semantic map includes a static background model B and a dynamic semantic instance model O_t, as shown in formula (1):

M_t = {B, O_t}    (1)

The dynamic semantic instance model O_t is shown in formula (2):

O_t = {(c_i, s_i, p_i)}    (2)

where c_i denotes the category of a dynamic semantic instance, s_i denotes the three-dimensional model of the instance, and p_i denotes the spatial position and orientation of the instance. Since the positions, attitudes, and even the models of the dynamic semantic instances in the monitored environment change, O_t can be expressed as a function of time, O(t).

Step S20: integrate the local maps generated by the sensors to obtain a panoramic map of the scene to be monitored as a first map.

In this embodiment, multiple perception information sources with different degrees of freedom are integrated through spatial and temporal registration and synchronization to construct a panoramic map of the scene to be monitored.

When the panoramic map is constructed, the static background model is built automatically through a TSDF-based real-time localization and mapping algorithm during navigation of the movable surveillance robot. The dynamic semantic instances are built from the observation data acquired in real time by the multi-degree-of-freedom sensors through three steps: semantic instance extraction, three-dimensional model mapping, and cross-sensor instance re-identification. The details are as follows:

Step S21: perform semantic instance extraction on the observation data of the multi-degree-of-freedom sensors acquired in real time;

for visual sensors, an RGB-image-based instance segmentation algorithm is used for extraction; for lidar sensors, a point-cloud-based three-dimensional instance segmentation algorithm is used for extraction.

Step S22: place the extracted semantic instances in one-to-one correspondence with the three-dimensional models of their categories, and obtain the three-dimensional spatial position and orientation of each model in combination with the depth sensor information;

after steps S21 and S22, the local map corresponding to each sensor is obtained; next, the local maps are integrated to obtain the panoramic map, i.e., the panoramic three-dimensional semantic map.

Step S23: cross-sensor semantic instance re-identification.

For pedestrian-category semantic instances, an RGB-image-based pedestrian re-identification algorithm is used (since most sensors in the present invention are visual sensors) to match the same semantic instance across the fields of view (local maps) of different sensors; in other embodiments, pedestrian-category semantic instances can be matched by selecting a re-identification algorithm suited to the sensor.

For non-pedestrian-category instances, the volume overlap ratio between the three-dimensional models corresponding to the semantic instances observed by different sensors is calculated; if the ratio is higher than a set threshold (preferably set to 0.5 in the present invention), the semantic instances are considered to be the same semantic instance in the fields of view of the different sensors.

Step S30: register the first map with each local map in turn, and obtain, through the RANSAC algorithm, the extrinsic matrix estimated for each sensor in the first map.

In this embodiment, the local map produced by each multi-degree-of-freedom sensor is registered with the global map, and the observation data of each multi-degree-of-freedom sensor in the map at the current time is computed. Specifically, the semantic instances shared by the panoramic map and the local map of the current sensor are selected, and the extrinsic matrix estimated for that sensor is obtained with the RANSAC algorithm from the positions of these instances.

Step S40: compute the error between the real extrinsic matrix and the estimated extrinsic matrix; based on these errors, update the first map to obtain a second map, which serves as the panoramic map finally obtained for the monitored scene at the current time. The details are as follows.

In this embodiment, each local map is projected into the global coordinate system; the projected local maps are then registered with the panoramic map, the error between the panoramic map and the real observations is computed, and the panoramic map is corrected accordingly.
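The projection of a local map into the global coordinate system can be sketched as a homogeneous-coordinate transform. Representing the extrinsics as a 4×4 homogeneous matrix is the standard convention and is assumed here.

```python
# Sketch: carry Nx3 instance positions from a sensor's local frame into
# the global frame via the sensor's (assumed 4x4 homogeneous) extrinsic
# matrix.
import numpy as np

def project_points(extrinsic, pts_local):
    """Map Nx3 points from a sensor frame to the global frame."""
    pts = np.asarray(pts_local, float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous
    return (homo @ extrinsic.T)[:, :3]
```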

Specifically, semantic instances shared by the current local map and the panoramic map (i.e. the first map) are selected. As shown in Figure 3, the extrinsic matrix of each sensor (two sensors, sensor 1 and sensor 2, are shown in the figure) is used to project the shared semantic instances from the sensor coordinate systems into the global coordinate system. In the global coordinate system, a certain spatial pose (position and orientation) error, i.e. an error of the extrinsic matrix, exists between a sensor's (local map's) semantic instances and the corresponding instances in the panoramic map. From the position set of the shared instances, the RANSAC method is used to compute the error matrix E_i between the panoramic map and the local map; when the error exceeds a set threshold, the panoramic map is corrected and updated.
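The threshold test can be sketched by measuring how far the error matrix deviates from the identity. Summing the translation norm and the rotation angle is one illustrative choice of magnitude; the patent only speaks of "a set threshold", so the 0.05 default below is an assumption.

```python
# Sketch of the update criterion: quantify the deviation of the 4x4
# error matrix E from the identity and trigger a panoramic-map
# correction only when it exceeds a (assumed) threshold.
import numpy as np

def pose_error_magnitude(E):
    t = np.linalg.norm(E[:3, 3])                       # translation part
    # Rotation angle from the trace of the 3x3 rotation block.
    cos_a = np.clip((np.trace(E[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    return t + float(np.arccos(cos_a))

def needs_correction(E, threshold=0.05):
    return pose_error_magnitude(E) > threshold
```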

The method of "correcting and updating the panoramic map when the error exceeds the set threshold" is as follows.

The static background model in the panoramic map is not updated. For a dynamic semantic instance O in the panoramic map, its spatial pose p is updated. If the instance is observed only by sensor i, the updated pose is p′ = E_i × p, where × denotes matrix multiplication. If the instance O is observed by multiple sensors, the updated pose is the average of the corrections over all observing sensors, p′ = (1/|V|) Σ_{i∈V} E_i × p, where V is the set of all sensors that can observe instance O.
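The pose-update rule can be sketched as below. Treating the pose p as a 4×4 homogeneous matrix is an assumption, and element-wise averaging of the corrected poses is an illustrative simplification (averaging rotations properly would require e.g. quaternion averaging).

```python
# Hedged sketch of the dynamic-instance pose update: one observing
# sensor applies its error matrix directly; several observing sensors
# contribute the average of their corrections.
import numpy as np

def update_pose(p, errors):
    """p: 4x4 pose; errors: 4x4 error matrices of observing sensors."""
    if len(errors) == 1:
        return errors[0] @ p
    return sum(E @ p for E in errors) / len(errors)
```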

Steps S30 and S40 above are repeated to iteratively update the observation data of the multi-degree-of-freedom sensors and the panoramic map.

After the panoramic map is acquired, the present invention converts it into the GLB format, imports it into the Habitat-sim simulator, and trains a reinforcement-learning-based visual navigation algorithm model with the Habitat-lab library. Note that the sensor parameters carried by the virtual agent in the simulator (including sensor types and extrinsics) are kept consistent with the real environment.

The "reinforcement-learning-based visual navigation algorithm" consists of three modules, in order:

A real-time localization and mapping module (SLAM module), which feeds the real-time data of the multi-degree-of-freedom sensors into a neural network model to generate local spatial maps indexed by sensor and timestamp, and stitches the local spatial maps together into a global map of dimension 2×M×M;

A global decision module, which plans the virtual agent's global action path according to the global map;

A local decision module, which plans the virtual agent's local action path according to the global path and the currently reachable area.
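The three-module stack above can be illustrated on a toy 2D occupancy grid. Everything here is a stand-in for the learned components: the SLAM module just pastes local grids into a global 2×M×M map, the global module plans with BFS, and the local module emits the next step along the global path.

```python
# Illustrative skeleton of the SLAM -> global decision -> local decision
# pipeline described above, on a 2D grid (channel 0: obstacles,
# channel 1: explored). Not the patent's neural models.
from collections import deque

class SlamModule:
    def __init__(self, m):
        self.grid = [[[0] * m for _ in range(m)] for _ in range(2)]

    def integrate(self, local_patch, top, left):
        """Paste a 2 x h x w local patch into the global map."""
        for c in range(2):
            for i, row in enumerate(local_patch[c]):
                for j, v in enumerate(row):
                    self.grid[c][top + i][left + j] = max(
                        self.grid[c][top + i][left + j], v)

def plan_global(obstacles, start, goal):
    """BFS shortest path on the obstacle channel; list of cells or None."""
    m = len(obstacles)
    prev = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        x, y = cur
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < m and 0 <= ny < m and obstacles[nx][ny] == 0 \
                    and (nx, ny) not in prev:
                prev[(nx, ny)] = cur
                q.append((nx, ny))
    return None

def plan_local(path):
    """Local decision: next cell along the global path, if any."""
    return path[1] if path and len(path) > 1 else None
```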

A three-dimensional real-time panoramic monitoring system based on multi-degree-of-freedom sensing association according to the second embodiment of the present invention, as shown in Figure 2, comprises a local map acquisition module 100, a panoramic map acquisition module 200, a registration module 300, and an update output module 400.

The local map acquisition module 100 is configured to acquire real-time observation data of N sensors with different degrees of freedom in the monitored scene and to construct the three-dimensional semantic map corresponding to each sensor as a local map; N is a positive integer; the real-time observation data includes the observation time and the real extrinsic matrix.

The panoramic map acquisition module 200 is configured to integrate the local maps produced by the sensors into a panoramic map of the monitored scene, which serves as the first map.

The registration module 300 is configured to register the first map with each local map in turn and to obtain each sensor's estimated extrinsic matrix in the first map through the RANSAC algorithm.

The update output module 400 is configured to compute the error between the real extrinsic matrix and the estimated extrinsic matrix and, based on these errors, to update the first map into a second map, which serves as the panoramic map finally obtained for the monitored scene at the current time.

Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes and related explanations of the system described above may refer to the corresponding processes in the foregoing method embodiment and are not repeated here.

It should be noted that the three-dimensional real-time panoramic monitoring system based on multi-degree-of-freedom sensing association provided by the above embodiment is illustrated only with the division into the functional modules described above. In practical applications, these functions may be assigned to different functional modules as needed; that is, the modules or steps in the embodiments of the present invention may be decomposed or combined. For example, the modules of the above embodiment may be merged into one module or further split into several sub-modules so as to accomplish all or part of the functions described above. The names of the modules and steps involved in the embodiments of the present invention serve only to distinguish the individual modules or steps and are not to be regarded as improper limitations of the present invention.

A storage device according to the third embodiment of the present invention stores a plurality of programs suitable for being loaded by a processor to implement the above three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association.

A processing device according to the fourth embodiment of the present invention comprises a processor and a storage device; the processor is adapted to execute programs; the storage device is adapted to store a plurality of programs; the programs are adapted to be loaded and executed by the processor to implement the above three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association.

Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes and related explanations of the storage device and processing device described above may refer to the corresponding processes in the foregoing method examples and are not repeated here.

Those skilled in the art should appreciate that the modules and method steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. Programs corresponding to the software modules and method steps may be placed in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field. To clearly illustrate the interchangeability of electronic hardware and software, the composition and steps of each example have been described generally in terms of functionality in the description above. Whether these functions are executed as electronic hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.

The terms "first", "second", and the like are used to distinguish similar objects, not to describe or indicate a particular order or sequence.

The term "comprising" or any other similar term is intended to cover a non-exclusive inclusion, so that a process, method, article, or device/apparatus comprising a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device/apparatus.

The technical solutions of the present invention have thus been described with reference to the preferred embodiments shown in the accompanying drawings. However, those skilled in the art will readily understand that the scope of protection of the present invention is obviously not limited to these specific embodiments. Without departing from the principles of the present invention, those skilled in the art may make equivalent changes or substitutions to the relevant technical features, and the technical solutions after such changes or substitutions will fall within the scope of protection of the present invention.

Claims (8)

1. A three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association is characterized by comprising the following steps:
step S10, acquiring real-time observation data of N sensors with different degrees of freedom in a scene to be monitored, and constructing a three-dimensional semantic map corresponding to each sensor as a local map; N is a positive integer; the real-time observation data comprises observation time and a real external parameter matrix; the three-dimensional semantic map comprises a static background model and dynamic semantic instances, wherein each dynamic semantic instance is characterized by the category of the instance, a three-dimensional model corresponding to the instance, and the spatial position and orientation of the instance;
step S20, integrating local maps generated by each sensor to obtain a panoramic map of a scene to be monitored as a first map;
the construction method of the panoramic map, namely the panoramic three-dimensional semantic map, comprises the following steps:
in the navigation process of the movable monitoring robot, a static background model of the panoramic map is automatically constructed through a real-time positioning and mapping algorithm based on TSDF;
for the pedestrian category semantic instances, matching the same semantic instance across the local maps by using a pedestrian re-identification algorithm based on RGB images; for the non-pedestrian category semantic instances, calculating the volume overlap ratio between the three-dimensional models corresponding to the semantic instances in the local maps, and taking semantic instances whose volume overlap ratio is higher than a set threshold as the same semantic instance; acquiring the dynamic semantic instances in the panoramic map by combining the matched same semantic instances;
constructing a panoramic map by combining the obtained static background model of the panoramic map and the dynamic semantic instances in the panoramic map;
step S30, registering the first map and each local map in sequence, and acquiring the corresponding estimated external reference matrix of each sensor in the first map through RANSAC algorithm;
step S40, calculating the error between the real external parameter matrix and the estimated external parameter matrix; and updating the first map based on each error to obtain a second map serving as a panoramic map finally obtained at the current moment of the scene to be monitored.
2. The three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association as claimed in claim 1, wherein the sensors with N different degrees of freedom include fixed view monitoring cameras, PTZ monitoring cameras, movable monitoring robots, and visual monitoring unmanned aerial vehicles.
3. The three-dimensional real-time panoramic monitoring method based on multiple degrees of freedom sensing association as claimed in claim 2, wherein in step S30, "obtaining the corresponding estimated external reference matrix of each sensor in the first map by RANSAC algorithm" comprises:
selecting a common semantic instance of the first map and local maps corresponding to the sensors;
and acquiring the external parameter matrix estimated by each sensor by adopting a RANSAC algorithm according to the position of each common semantic instance.
4. The three-dimensional real-time panoramic monitoring method based on multiple degrees of freedom sensing association according to claim 3, wherein in step S40, "update the first map based on each error" includes:
judging whether the error is less than or equal to a set threshold value, if so, not updating;
otherwise, the static background model in the first map is not updated, and the space position and the direction of the dynamic semantic instance in the first map are updated by combining the error with the dynamic semantic instance in the first map.
5. The three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association according to claim 4, wherein the method for updating the spatial position and direction of the dynamic semantic instance in combination with the error comprises the following steps:
if the dynamic semantic instance is observed only by the sensor i, the updated spatial position and orientation are: p′ = E_i × p;
if the dynamic semantic instance is observed by multiple sensors, the updated spatial position and orientation are the average of E_i × p over all i in V;
wherein p represents the current spatial position and orientation of the instance, V represents the set of all sensors observing the dynamic semantic instance, and E_i represents the error between the first map and the local map corresponding to the i-th sensor.
6. A three-dimensional real-time panoramic monitoring system based on multi-degree-of-freedom sensing association is characterized by comprising a local map acquisition module, a panoramic map acquisition module, a registration module and an update output module;
the local map acquisition module is configured to acquire real-time observation data of the sensors with N different degrees of freedom in a scene to be monitored, and construct a three-dimensional semantic map corresponding to each sensor as a local map; N is a positive integer; the real-time observation data comprises observation time and a real external parameter matrix; the three-dimensional semantic map comprises a static background model and dynamic semantic instances, wherein each dynamic semantic instance is characterized by the category of the instance, a three-dimensional model corresponding to the instance, and the spatial position and orientation of the instance;
the panoramic map acquisition module is configured to integrate local maps generated by the sensors to obtain a panoramic map of a scene to be monitored as a first map;
the construction method of the panoramic map, namely the panoramic three-dimensional semantic map, comprises the following steps:
in the navigation process of the movable monitoring robot, a static background model of the panoramic map is automatically constructed through a real-time positioning and mapping algorithm based on TSDF;
for the pedestrian category semantic instances, matching the same semantic instance across the local maps by using a pedestrian re-identification algorithm based on RGB images; for the non-pedestrian category semantic instances, calculating the volume overlap ratio between the three-dimensional models corresponding to the semantic instances in the local maps, and taking semantic instances whose volume overlap ratio is higher than a set threshold as the same semantic instance; acquiring the dynamic semantic instances in the panoramic map by combining the matched same semantic instances;
constructing a panoramic map by combining the obtained static background model of the panoramic map and the dynamic semantic instances in the panoramic map;
the registration module is configured to sequentially register the first map with each local map, and acquire an external reference matrix corresponding to and estimated by each sensor in the first map through a RANSAC algorithm;
the update output module is configured to calculate an error between the real external parameter matrix and the estimated external parameter matrix; and updating the first map based on each error to obtain a second map serving as a panoramic map finally obtained at the current moment of the scene to be monitored.
7. A storage device having a plurality of programs stored therein, wherein the programs are adapted to be loaded and executed by a processor to implement the three-dimensional real-time panoramic monitoring method based on multiple degrees of freedom sensing correlation according to any one of claims 1 to 5.
8. A processing device comprising a processor, a storage device; a processor adapted to execute various programs; a storage device adapted to store a plurality of programs; characterized in that the program is suitable for being loaded and executed by a processor to realize the three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association as set forth in any one of claims 1 to 5.
CN202110126538.7A 2021-01-29 2021-01-29 Three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association Active CN112446905B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110126538.7A CN112446905B (en) 2021-01-29 2021-01-29 Three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association


Publications (2)

Publication Number Publication Date
CN112446905A CN112446905A (en) 2021-03-05
CN112446905B true CN112446905B (en) 2021-05-11

Family

ID=74740114


Country Status (1)

Country Link
CN (1) CN112446905B (en)






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant