CN112446905B - Three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association - Google Patents
- Publication number: CN112446905B (application CN202110126538.7A)
- Authority: CN (China)
- Prior art keywords: map, panoramic, semantic, dimensional, real
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
- G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06T17/05: Geographic models
- G06T7/70: Determining position or orientation of objects or cameras
- G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
Abstract
The invention belongs to the technical fields of real-time positioning, map building and computer vision, and particularly relates to a three-dimensional real-time panoramic monitoring method, system and device based on multi-degree-of-freedom sensing association, aiming to solve the problems that existing monitoring technology cannot realize large-range three-dimensional panoramic video monitoring and suffers from low monitoring efficiency and poor effect. The method comprises: obtaining real-time observation data from N sensors with different degrees of freedom and constructing a three-dimensional semantic map corresponding to each sensor to serve as its local map; integrating the local maps generated by all sensors to obtain a panoramic map serving as a first map; acquiring, through the RANSAC algorithm, the external parameter matrix estimated for each sensor in the first map; and calculating the error between each real external parameter matrix and the corresponding estimated external parameter matrix, and updating the first map to obtain the final panoramic map of the scene to be monitored at the current moment. The invention realizes large-range three-dimensional panoramic video monitoring, improves monitoring efficiency, and ensures monitoring quality and effect.
Description
Technical Field
The invention belongs to the technical fields of real-time positioning, map building and computer vision, and particularly relates to a three-dimensional real-time panoramic monitoring method, system and device based on multi-degree-of-freedom sensing association.
Background
Video monitoring is an important and challenging classic computer vision task with wide application in fields such as security monitoring, intelligent video analysis, and personnel search and rescue. Typically, a monitoring camera is installed at a fixed position and collects two-dimensional pedestrian images at multiple angles and attitudes; monitoring personnel who want to track the real-time position and motion trajectory of a pedestrian often need considerable accumulated experience, and such information cannot be acquired intuitively. A sensor with a single degree of freedom alone cannot satisfy the requirements of video monitoring well. The invention provides a three-dimensional panoramic monitoring method based on multi-degree-of-freedom sensing association, which realizes three-dimensional video monitoring by combining technologies such as three-dimensional environment modeling, instance segmentation and three-dimensional model projection, and can better solve this problem.
Disclosure of Invention
In order to solve the above problems in the prior art, namely that existing video monitoring technology mostly depends on single-degree-of-freedom sensors with fixed view angles, cannot realize large-range three-dimensional panoramic video monitoring, places high experience demands on monitoring personnel, and suffers from low monitoring efficiency and poor effect, the invention provides a three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association, which comprises the following steps:
step S10, acquiring real-time observation data of N sensors with different degrees of freedom in a scene to be monitored, and constructing a three-dimensional semantic map corresponding to each sensor as a local map; n is a positive integer; the real-time observation data comprises observation time and a real external parameter matrix;
step S20, integrating local maps generated by each sensor to obtain a panoramic map of a scene to be monitored as a first map;
step S30, registering the first map and each local map in sequence, and acquiring the corresponding estimated external reference matrix of each sensor in the first map through RANSAC algorithm;
step S40, calculating the error between the real external parameter matrix and the estimated external parameter matrix; and updating the first map based on each error to obtain a second map serving as a panoramic map finally obtained at the current moment of the scene to be monitored.
In some preferred embodiments, the sensors of N different degrees of freedom include fixed-view surveillance cameras, PTZ surveillance cameras, movable surveillance robots, and visual surveillance drones.
In some preferred embodiments, the three-dimensional semantic map comprises a static background model and dynamic semantic instances, as shown in the following formula:

M(t) = {B, {I_i = (c_i, m_i, p_i)}}

wherein M(t) represents the three-dimensional semantic map at time t, B represents the static background model, I_i represents a dynamic semantic instance, c_i represents the category of the instance, m_i represents the three-dimensional model corresponding to the instance, and p_i represents the spatial position and orientation of the instance.
In some preferred embodiments, the panoramic map, i.e. the panoramic three-dimensional semantic map, is obtained by:
in the navigation process of the movable monitoring robot, a static background model of the panoramic map is automatically constructed through a real-time positioning and mapping algorithm based on TSDF;
aiming at the pedestrian category semantic instances, matching the same semantic instance in each local map by using a pedestrian re-recognition algorithm based on an RGB image; calculating volume overlap ratio between three-dimensional models corresponding to semantic instances in each local map aiming at the non-pedestrian category semantic instances, and taking the semantic instances with the volume overlap ratio higher than a set threshold value as the same semantic instance; acquiring a dynamic semantic instance in the panoramic map by combining the matched same semantic instance;
and constructing the panoramic map by combining the obtained static background model of the panoramic map and the dynamic semantic instances in the panoramic map.
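The TSDF-based construction of the static background model in the first step can be sketched as a weighted running average over a voxel grid. This is a minimal illustration rather than the patent's implementation; the grid size, voxel size, and truncation distance are arbitrary assumptions, and `depth_fn` is a hypothetical stand-in for a real depth sensor.

```python
import numpy as np

class TSDFVolume:
    """Minimal truncated signed distance function (TSDF) voxel grid."""

    def __init__(self, size=32, voxel=0.1, trunc=0.3):
        self.voxel, self.trunc = voxel, trunc
        self.tsdf = np.ones((size, size, size))     # truncated signed distance per voxel
        self.weight = np.zeros((size, size, size))  # integration weight per voxel
        # world coordinates of every voxel centre
        idx = np.indices((size, size, size)).reshape(3, -1).T
        self.centers = (idx + 0.5) * voxel

    def integrate(self, depth_fn, cam_pose):
        """Fuse one depth observation by a weighted running average per voxel.

        depth_fn: maps a 3-D point in camera coordinates to the measured
                  surface depth along its viewing ray (hypothetical sensor).
        cam_pose: 4x4 world-from-camera extrinsic matrix of the robot camera.
        """
        world2cam = np.linalg.inv(cam_pose)
        pts = (world2cam[:3, :3] @ self.centers.T + world2cam[:3, 3:4]).T
        sdf = np.array([depth_fn(p) - p[2] for p in pts])   # signed distance along ray
        d = np.clip(sdf / self.trunc, -1.0, 1.0).reshape(self.tsdf.shape)
        mask = (sdf > -self.trunc).reshape(self.tsdf.shape)  # skip far-behind-surface voxels
        w_new = self.weight + mask
        self.tsdf = np.where(mask,
                             (self.tsdf * self.weight + d) / np.maximum(w_new, 1),
                             self.tsdf)
        self.weight = w_new

# Fuse a single flat-wall observation (every ray hits the wall at depth 1.0 m):
vol = TSDFVolume(size=8, voxel=0.25)
vol.integrate(lambda p: 1.0, np.eye(4))
```

After integration, the TSDF changes sign across the wall: voxels in front of depth 1.0 hold positive values, voxels just behind it hold negative values, and the zero crossing marks the reconstructed surface.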
In some preferred embodiments, in step S30, "obtaining the corresponding estimated external reference matrix of each sensor in the first map by the RANSAC algorithm" includes:
selecting a common semantic instance of the first map and local maps corresponding to the sensors;
and acquiring the external parameter matrix estimated by each sensor by adopting a RANSAC algorithm according to the position of each common semantic instance.
In some preferred embodiments, step S40, "update the first map based on errors", is performed by:
judging whether the error is less than or equal to a set threshold value, if so, not updating;
otherwise, the static background model in the first map is not updated, and the space position and the direction of the dynamic semantic instance in the first map are updated by combining the error with the dynamic semantic instance in the first map.
In some preferred embodiments, "updating the spatial position and orientation of the dynamic semantic instance in combination with the error" is performed by:

if the dynamic semantic instance is observed only by sensor k, the updated spatial position and orientation are:

p' = ΔT_k × p

if the dynamic semantic instance is observed by multiple sensors, the updated spatial position and orientation are:

p' = (1/|S|) × Σ_{k in S} (ΔT_k × p)

wherein S represents the set of all sensors observing the dynamic semantic instance, and ΔT_k represents the error matrix between the panoramic map and the local map corresponding to the k-th sensor.
The invention provides a three-dimensional real-time panoramic monitoring system based on multi-degree-of-freedom sensing association, which comprises a local map acquisition module, a panoramic map acquisition module, a registration module and an update output module;
the local map acquisition module is configured to acquire real-time observation data of the sensors with N different degrees of freedom in a scene to be monitored, and construct a three-dimensional semantic map corresponding to each sensor as a local map; n is a positive integer; the real-time observation data comprises observation time and a real external parameter matrix;
the panoramic map acquisition module is configured to integrate local maps generated by the sensors to obtain a panoramic map of a scene to be monitored as a first map;
the registration module is configured to sequentially register the first map with each local map, and acquire an external reference matrix corresponding to and estimated by each sensor in the first map through a RANSAC algorithm;
the update output module is configured to calculate an error between the real external parameter matrix and the estimated external parameter matrix; and updating the first map based on each error to obtain a second map serving as a panoramic map finally obtained at the current moment of the scene to be monitored.
In a third aspect of the present invention, a storage device is provided, in which a plurality of programs are stored, and the programs are suitable for being loaded and executed by a processor to implement the above three-dimensional real-time panoramic monitoring method based on multiple degrees of freedom sensing association.
In a fourth aspect of the present invention, a processing apparatus is provided, which includes a processor, a storage device; a processor adapted to execute various programs; a storage device adapted to store a plurality of programs; the program is suitable for being loaded and executed by a processor to realize the three-dimensional real-time panoramic monitoring method based on the multi-degree-of-freedom sensing association.
The invention has the beneficial effects that:
the invention can realize the three-dimensional panoramic video monitoring in a large range and has continuous monitoring pictures, thereby improving the monitoring efficiency and ensuring the monitoring quality and effect.
(1) The invention introduces the multi-degree-of-freedom sensor, integrates the observation data of the multi-degree-of-freedom sensor to construct a dynamic three-dimensional panoramic monitoring map with rich semantics, and the map not only contains a static background model, but also contains a dynamic semantic instance model, thereby realizing the three-dimensional panoramic video monitoring in a large range and realizing continuous monitoring pictures.
(2) The invention introduces an automatic calibration method of the multi-degree-of-freedom sensor, uses semantic instances in a three-dimensional panoramic map as a calibration template, automatically calculates a transformation matrix between a local map generated by the observation of the multi-degree-of-freedom sensor and the semantic instances in the panoramic map, and calibrates an external reference matrix of the multi-degree-of-freedom sensor. And then, calculating an error matrix of the local map and the panoramic map by using the currently estimated external parameter matrix, and updating the panoramic map to obtain a more accurate external parameter matrix and a more accurate three-dimensional panoramic map, so that the monitoring efficiency is improved, and the monitoring quality and effect are ensured.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings.
FIG. 1 is a schematic flow chart of a three-dimensional real-time panoramic monitoring method based on multiple degrees of freedom sensing association according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a three-dimensional real-time panoramic monitoring system based on multiple degrees of freedom sensing association according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of a three-dimensional real-time panoramic monitoring method based on multiple degrees of freedom sensing association according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
The invention discloses a three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association, which comprises the following steps as shown in figure 1:
step S10, acquiring real-time observation data of N sensors with different degrees of freedom in a scene to be monitored, and constructing a three-dimensional semantic map corresponding to each sensor as a local map; n is a positive integer; the real-time observation data comprises observation time and a real external parameter matrix;
step S20, integrating local maps generated by each sensor to obtain a panoramic map of a scene to be monitored as a first map;
step S30, registering the first map and each local map in sequence, and acquiring the corresponding estimated external reference matrix of each sensor in the first map through RANSAC algorithm;
step S40, calculating the error between the real external parameter matrix and the estimated external parameter matrix; and updating the first map based on each error to obtain a second map serving as a panoramic map finally obtained at the current moment of the scene to be monitored.
In order to more clearly describe the three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association, the following describes each step in an embodiment of the method in detail with reference to the accompanying drawings.
The invention introduces the multi-degree-of-freedom sensor to provide more abundant visual information, constructs a three-dimensional panoramic semantic map of a scene by using methods such as three-dimensional modeling and instance segmentation, and finally iteratively updates an external parameter matrix and a panoramic map of the multi-degree-of-freedom sensor, thereby realizing real-time updating of the three-dimensional panoramic map. The method comprises the following specific steps:
step S10, acquiring real-time observation data of N sensors with different degrees of freedom in a scene to be monitored, and constructing a three-dimensional semantic map corresponding to each sensor as a local map; n is a positive integer; the real-time observation data comprises observation time and a real external parameter matrix;
in this embodiment, the sensors with N different degrees of freedom preferably adopt a fixed-view monitoring camera (zero degree of freedom), a PTZ monitoring camera (camera pose degree of freedom: 2 dimension and scale degree of freedom: 1 dimension), a movable monitoring robot (camera pose degree of freedom: 2 dimension, scale degree of freedom: 1 dimension, robot pose degree of freedom: 6 dimension), a visual monitoring unmanned aerial vehicle (camera pose degree of freedom: 2 dimension, scale degree of freedom: 1 dimension, unmanned aerial vehicle pose degree of freedom: 6 dimension), and in other embodiments, the sensors can be selected according to actual needs.
In the present invention, the observation data of sensor i at time t is expressed as O_i(t) = (T_i(t), K_i(t)), wherein T_i(t) is the external parameter (extrinsic) matrix of the camera, representing the position and attitude, i.e., the pose, of the sensor; for a PTZ monitoring camera, a movable monitoring robot and a visual monitoring unmanned aerial vehicle, the external parameter matrix is a time-varying function. K_i(t) is the sampling matrix of the sensor. Since cameras image in different ways, the meaning of K_i(t) differs: for an RGB camera, K_i(t) is the camera intrinsic matrix; for a zoom camera, K_i(t) is a time-varying function; and for a sensor such as a laser radar that directly measures the three-dimensional coordinates of the external environment, K_i(t) is an identity matrix.
Each sensor constructs a corresponding three-dimensional semantic map from its observation data to serve as a local map. The three-dimensional semantic map comprises a static background model B and dynamic semantic instance models I_i, as shown in formula (1):

M(t) = {B, {I_i = (c_i, m_i, p_i)}}    (1)

wherein c_i represents the category of the dynamic semantic instance, m_i represents the three-dimensional model of the dynamic semantic instance, and p_i represents the spatial position and orientation of the dynamic semantic instance. Since the position, attitude and even the model of a dynamic semantic instance in a monitoring environment change over time, p_i can be expressed as a function of time, p_i(t).
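The map structure of formula (1) can be written down directly as a data structure. The class and field names below are illustrative, and simple point sets stand in for the background and instance models:

```python
from dataclasses import dataclass, field

import numpy as np

@dataclass
class SemanticInstance:
    """Dynamic semantic instance I_i = (c_i, m_i, p_i)."""
    category: str        # instance category c_i (e.g. "person", "chair")
    model: np.ndarray    # three-dimensional model m_i, here an (N, 3) point set
    pose: np.ndarray     # p_i: 4x4 matrix giving spatial position and orientation

@dataclass
class SemanticMap:
    """Three-dimensional semantic map M(t) = {B, {I_i}} at time t."""
    timestamp: float
    static_background: np.ndarray            # static background model B
    instances: list = field(default_factory=list)  # dynamic semantic instances

# A local map holding one pedestrian instance at the origin:
m = SemanticMap(
    timestamp=0.0,
    static_background=np.zeros((0, 3)),
    instances=[SemanticInstance("person", np.zeros((100, 3)), np.eye(4))],
)
```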
Step S20, integrating local maps generated by each sensor to obtain a panoramic map of a scene to be monitored as a first map;
in the embodiment, through spatial and temporal registration and synchronization, a plurality of perception information sources with different degrees of freedom are integrated to construct a panoramic map of a scene to be monitored.
When a panoramic map is constructed, a static background model is automatically constructed through a real-time positioning and map building algorithm based on TSDF in the navigation process of the movable monitoring robot. The dynamic semantic instance is constructed by three steps of semantic instance extraction, three-dimensional model mapping and cross-sensor instance re-identification based on observation data acquired by a real-time multi-degree-of-freedom sensor. The method comprises the following specific steps:
step S21, semantic instance extraction is carried out on the observation data of the multi-degree-of-freedom sensor acquired in real time;
wherein, for the vision sensor, an example segmentation algorithm based on RGB image is used for extraction; for the lidar sensor, a point cloud-based three-dimensional instance segmentation algorithm is used for extraction.
Step S22, the extracted semantic instances are in one-to-one correspondence with the three-dimensional models of the category, and the three-dimensional spatial position and direction of the models are obtained by combining the depth sensor information;
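The placement of a model in space in step S22, combining a pixel detection with the depth measurement and the sensor pose, amounts to back-projection. A minimal sketch follows; the intrinsic matrix values are illustrative assumptions:

```python
import numpy as np

def backproject(u, v, depth, K, cam_pose):
    """Lift pixel (u, v) with a measured depth into world coordinates.

    K:        3x3 camera intrinsic matrix.
    cam_pose: 4x4 world-from-camera extrinsic matrix (the sensor's pose).
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # camera-frame ray with z = 1
    p_cam = ray * depth                             # scale by the depth measurement
    p_hom = cam_pose @ np.append(p_cam, 1.0)        # transform into the global frame
    return p_hom[:3]

# Illustrative intrinsics: 525 px focal length, principal point (320, 240).
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
# A camera at the world origin looking down +z: the principal point observed
# at 2 m depth back-projects to (0, 0, 2) in world coordinates.
p = backproject(320.0, 240.0, 2.0, K, np.eye(4))
```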
through the steps of S21 and S22, local maps corresponding to the sensors are obtained, and then the local maps are integrated to obtain a panoramic map, that is, a panoramic three-dimensional semantic map.
Step S23, re-recognition across sensor semantic instances.
For pedestrian category semantic instances, the same semantic instance is matched across the fields of view of different sensors (local maps) using a pedestrian re-recognition algorithm based on RGB images (since most sensors in the invention are visual sensors); in other embodiments, a suitable re-recognition algorithm may be selected according to the sensor.
For the non-pedestrian category instances, the volume overlap ratio between the three-dimensional models corresponding to semantic instances observed by different sensors is calculated; when the ratio is higher than a set threshold (preferably set to 0.5 in the invention), the semantic instances are considered to be the same semantic instance in the fields of view of the different sensors.
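The volume overlap test for non-pedestrian instances can be sketched with axis-aligned bounding boxes standing in for the full three-dimensional models (an assumption; the patent does not fix the volume representation):

```python
import numpy as np

def volume_overlap_ratio(box_a, box_b):
    """Intersection-over-union of two axis-aligned 3-D boxes given as (min, max) corners."""
    lo = np.maximum(box_a[0], box_b[0])
    hi = np.minimum(box_a[1], box_b[1])
    inter = np.prod(np.clip(hi - lo, 0, None))     # overlap volume (0 if disjoint)
    vol_a = np.prod(box_a[1] - box_a[0])
    vol_b = np.prod(box_b[1] - box_b[0])
    union = vol_a + vol_b - inter
    return inter / union if union > 0 else 0.0

# Two unit cubes offset by 0.5 along x: overlap 0.5, union 1.5, ratio 1/3,
# which is below the 0.5 threshold, so they are treated as distinct instances.
a = (np.zeros(3), np.ones(3))
b = (np.array([0.5, 0.0, 0.0]), np.array([1.5, 1.0, 1.0]))
same_instance = volume_overlap_ratio(a, b) > 0.5
```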
Step S30, registering the first map and each local map in sequence, and acquiring the corresponding estimated external reference matrix of each sensor in the first map through RANSAC algorithm;
In this embodiment, the local map M_i generated by each multi-degree-of-freedom sensor is registered with the global map M(t), and the observation data of the multi-degree-of-freedom sensors in the map at time t are calculated. Specifically, common (i.e., same) semantic instances shared by the panoramic map and the local map corresponding to the current sensor are selected, and the RANSAC algorithm is used to estimate the external parameter matrix of the current sensor from the positions of these semantic instances.
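A minimal sketch of this RANSAC estimation from matched instance positions, assuming a rigid least-squares (Kabsch) fit as the model hypothesis; the iteration count, inlier threshold, and helper names are illustrative, not taken from the patent:

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform (Kabsch): dst ≈ R @ src + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_extrinsic(src, dst, iters=200, thresh=0.1, seed=0):
    """Estimate a sensor's extrinsic matrix from matched instance positions.

    src: (N, 3) instance positions in the sensor's local map.
    dst: (N, 3) positions of the same instances in the panoramic map.
    """
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)  # minimal sample
        R, t = fit_rigid(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        n = int((err < thresh).sum())
        if n > best_inliers:                 # refit on the consensus set
            best, best_inliers = fit_rigid(src[err < thresh], dst[err < thresh]), n
    R, t = best
    T = np.eye(4)                            # assemble the 4x4 extrinsic matrix
    T[:3, :3], T[:3, 3] = R, t
    return T

# Recover a known rotation/translation from ten matched instances,
# one of which is corrupted as an outlier.
rng = np.random.default_rng(1)
src = rng.random((10, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
dst = src @ R_true.T + t_true
dst[0] += 5.0                                # corrupted correspondence
T_est = ransac_extrinsic(src, dst)
```

The consensus refit makes the estimate insensitive to the corrupted correspondence, which a plain least-squares fit over all ten pairs would not be.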
Step S40, calculating the error between the real external parameter matrix and the estimated external parameter matrix; updating the first map based on each error to obtain a second map, wherein the second map is used as a panoramic map finally obtained by the current moment of the scene to be monitored, and the method specifically comprises the following steps:
In this embodiment, each local map M_i is projected into the global coordinate system; the projected local map and the panoramic map M(t) are then registered, the error between the panoramic map and the real observation is calculated, and the panoramic map is corrected.
Specifically, common semantic instances are selected in the current local map and the panoramic map (i.e., the first map), and, as shown in fig. 3, are projected from the sensor coordinate system (two sensors, sensor 1 and sensor 2, are shown in the figure) into the global coordinate system using the sensor external parameter matrix. In the global coordinate system, certain spatial pose (spatial position and orientation) errors, i.e., errors of the external parameter matrix, exist between the semantic instances of the local map and the corresponding semantic instances in the panoramic map. The error ΔT_k between the panoramic map and the local map (simply referred to as the error matrix) is calculated from the set of positions of the common instances using the RANSAC method, and the panoramic map is corrected and updated when the error is larger than a set threshold.
The method for correcting and updating the panoramic map when the error is greater than the set threshold value comprises the following steps:
for a static background model in the panoramic map, updating is not carried out, and for a dynamic semantic instance in the panoramic mapUpdating the spatial pose, and if the dynamic semantic instance is only in the sensorIs observed, then the space after updatingThe pose is as follows:where x is the matrix multiplication. If dynamic semantic instanceObserved in multiple sensors, the updated spatial pose is:whereinFor all observable examplesThe set of sensors of (1).
Steps S30 and S40 are executed repeatedly, iteratively updating the observation data of the multi-degree-of-freedom sensors and the panoramic map.
After the panoramic map is obtained, it is converted into the GLB format and imported into the Habitat-sim simulator, and the Habitat-lab library is adopted to train a visual navigation algorithm model based on reinforcement learning; it should be noted that the sensor parameters (including sensor types and external parameters) carried by the virtual agent in the simulator are consistent with the real environment.
The visual navigation algorithm based on reinforcement learning comprises three modules, which are, in order:

a real-time positioning and mapping module (SLAM module), which inputs the real-time data of the multi-degree-of-freedom sensors into a neural network model to generate a local spatial map for each sensor and timestamp, and stitches the local spatial maps together to generate a global map of dimension 2 × M × M;

a global decision module, which plans a global path of the virtual agent on the global map;

and a local decision module, which plans a local action path of the virtual agent according to the global path and the currently reachable area.
A three-dimensional real-time panoramic monitoring system based on multiple degrees of freedom sensing association according to a second embodiment of the present invention, as shown in fig. 2, includes: a local map acquisition module 100, a panoramic map acquisition module 200, a registration module 300, and an update output module 400;
the local map acquisition module 100 is configured to acquire real-time observation data of N sensors with different degrees of freedom in a scene to be monitored, and construct a three-dimensional semantic map corresponding to each sensor as a local map; n is a positive integer; the real-time observation data comprises observation time and a real external parameter matrix;
the panoramic map acquisition module 200 is configured to integrate local maps generated by the sensors to obtain a panoramic map of a scene to be monitored, which is used as a first map;
the registration module 300 is configured to sequentially register the first map with each local map, and acquire an external reference matrix corresponding to and estimated by each sensor in the first map through a RANSAC algorithm;
the update output module 400 is configured to calculate an error between the real extrinsic parameter matrix and the estimated extrinsic parameter matrix; and updating the first map based on each error to obtain a second map serving as a panoramic map finally obtained at the current moment of the scene to be monitored.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding process in the foregoing method embodiment, and details are not described herein again.
It should be noted that, the three-dimensional real-time panoramic monitoring system based on multiple degrees of freedom sensing association provided in the foregoing embodiment is only illustrated by dividing the functional modules, and in practical applications, the functions may be allocated to different functional modules according to needs, that is, the modules or steps in the embodiment of the present invention are further decomposed or combined, for example, the modules in the foregoing embodiment may be combined into one module, or may be further split into multiple sub-modules, so as to complete all or part of the functions described above. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps, and are not to be construed as unduly limiting the present invention.
A storage apparatus according to a third embodiment of the present invention stores therein a plurality of programs adapted to be loaded by a processor and to implement the above-described three-dimensional real-time panoramic monitoring method based on multiple degrees of freedom sensing association.
A processing apparatus according to a fourth embodiment of the present invention includes a processor and a storage device; the processor is adapted to execute programs; the storage device is adapted to store a plurality of programs; and the programs are adapted to be loaded and executed by the processor to implement the above-described three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association.
It can be clearly understood by those skilled in the art that, for convenience and brevity, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method examples, and are not described herein again.
Those of skill in the art will appreciate that the various illustrative modules and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the programs corresponding to the software modules and method steps may be located in Random Access Memory (RAM), memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.
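The volume-overlap matching of non-pedestrian semantic instances described in the claims below can be sketched as follows. Representing each instance's three-dimensional model as a set of occupied voxel indices, and the greedy one-to-one matching strategy, are illustrative assumptions, not the patented implementation.

```python
def volume_overlap_ratio(vox_a, vox_b):
    """Volume overlap (intersection over union) between two instances,
    each represented as a set of occupied voxel index tuples (assumed encoding)."""
    inter = len(vox_a & vox_b)
    union = len(vox_a | vox_b)
    return inter / union if union else 0.0

def match_instances(map_a, map_b, threshold=0.5):
    """Greedy matching: instance pairs whose volume overlap ratio exceeds the
    set threshold are regarded as the same semantic instance."""
    matches = []
    for ia, va in map_a.items():
        best, best_iou = None, threshold
        for ib, vb in map_b.items():
            iou = volume_overlap_ratio(va, vb)
            if iou > best_iou:
                best, best_iou = ib, iou
        if best is not None:
            matches.append((ia, best))
    return matches
```

Matched instances across local maps would then be merged into a single dynamic semantic instance of the panoramic map.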
Claims (8)
1. A three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association is characterized by comprising the following steps:
step S10, acquiring real-time observation data of N sensors with different degrees of freedom in a scene to be monitored, and constructing a three-dimensional semantic map corresponding to each sensor as a local map, N being a positive integer; the real-time observation data comprises an observation time and a real extrinsic parameter matrix; the three-dimensional semantic map comprises a static background model and dynamic semantic instances, and is represented by the following formula:
M_t = { S, O_1, O_2, …, O_k },  O_i = ( c_i, m_i, p_i ), i = 1, …, k
wherein M_t represents the three-dimensional semantic map at time t, S represents the static background model, O_i represents a dynamic semantic instance, c_i represents the category of the instance, m_i represents the three-dimensional model corresponding to the instance, and p_i represents the spatial position and orientation of the instance;
step S20, integrating local maps generated by each sensor to obtain a panoramic map of a scene to be monitored as a first map;
the construction method of the panoramic map, namely the panoramic three-dimensional semantic map, comprises the following steps:
during the navigation of the movable monitoring robot, the static background model of the panoramic map is automatically constructed through a TSDF-based real-time localization and mapping algorithm;
for pedestrian-category semantic instances, matching the same semantic instance across the local maps by using an RGB-image-based pedestrian re-identification algorithm; for non-pedestrian-category semantic instances, calculating the volume overlap ratio between the three-dimensional models corresponding to the semantic instances in each local map, and regarding semantic instances whose volume overlap ratio is higher than a set threshold as the same semantic instance; and obtaining the dynamic semantic instances in the panoramic map by combining the matched same semantic instances;
constructing a panoramic map by combining the obtained static background model of the panoramic map and the dynamic semantic instances in the panoramic map;
step S30, registering the first map with each local map in sequence, and acquiring, through a RANSAC algorithm, the estimated extrinsic parameter matrix corresponding to each sensor in the first map;
step S40, calculating the error between the real extrinsic parameter matrix and the estimated extrinsic parameter matrix; and updating the first map based on each error to obtain a second map, which serves as the final panoramic map of the scene to be monitored at the current moment.
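As an illustrative aside (not part of the claims), the RANSAC-based extrinsic estimation of step S30 could be sketched as follows, under the assumption that a Kabsch least-squares rigid fit is run inside the sampling loop over the positions of common semantic instances; the patent does not disclose these internals.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch algorithm) mapping src points onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def ransac_extrinsic(src, dst, iters=200, inlier_tol=0.1, seed=0):
    """RANSAC over instance-position correspondences; returns the 4x4 transform
    with the most inliers (illustrative parameter values)."""
    rng = np.random.default_rng(seed)
    best_T, best_inliers = np.eye(4), 0
    n = len(src)
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)   # minimal sample of 3 points
        T = rigid_transform(src[idx], dst[idx])
        pred = (T[:3, :3] @ src.T).T + T[:3, 3]
        inliers = int(np.sum(np.linalg.norm(pred - dst, axis=1) < inlier_tol))
        if inliers > best_inliers:
            best_inliers, best_T = inliers, T
    return best_T
```

Here `src` would hold the instance positions in a sensor's local map and `dst` the corresponding positions in the first (panoramic) map; the returned matrix plays the role of the estimated extrinsic parameter matrix.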
2. The three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association as claimed in claim 1, wherein the N sensors with different degrees of freedom comprise fixed-view monitoring cameras, PTZ monitoring cameras, movable monitoring robots, and visual monitoring unmanned aerial vehicles.
3. The three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association as claimed in claim 2, wherein in step S30, "acquiring, through a RANSAC algorithm, the estimated extrinsic parameter matrix corresponding to each sensor in the first map" comprises:
selecting common semantic instances of the first map and the local map corresponding to each sensor;
and acquiring the extrinsic parameter matrix estimated for each sensor by applying a RANSAC algorithm to the positions of the common semantic instances.
4. The three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association according to claim 3, wherein in step S40, "updating the first map based on each error" comprises:
judging whether the error is less than or equal to a set threshold, and if so, performing no update;
otherwise, leaving the static background model in the first map unchanged, and updating the spatial position and orientation of each dynamic semantic instance in the first map based on the error.
5. The three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association according to claim 4, wherein the method for updating the spatial position and orientation of a dynamic semantic instance based on the error comprises:
if the dynamic semantic instance is observed by only a single sensor, the updated spatial position and orientation are:
if the dynamic semantic instance is observed by multiple sensors, the updated spatial position and orientation are:
6. A three-dimensional real-time panoramic monitoring system based on multi-degree-of-freedom sensing association is characterized by comprising a local map acquisition module, a panoramic map acquisition module, a registration module and an update output module;
the local map acquisition module is configured to acquire real-time observation data of N sensors with different degrees of freedom in a scene to be monitored, and to construct a three-dimensional semantic map corresponding to each sensor as a local map, N being a positive integer; the real-time observation data comprises an observation time and a real extrinsic parameter matrix; the three-dimensional semantic map comprises a static background model and dynamic semantic instances, and is represented by the following formula:
M_t = { S, O_1, O_2, …, O_k },  O_i = ( c_i, m_i, p_i ), i = 1, …, k
wherein M_t represents the three-dimensional semantic map at time t, S represents the static background model, O_i represents a dynamic semantic instance, c_i represents the category of the instance, m_i represents the three-dimensional model corresponding to the instance, and p_i represents the spatial position and orientation of the instance;
the panoramic map acquisition module is configured to integrate local maps generated by the sensors to obtain a panoramic map of a scene to be monitored as a first map;
the construction method of the panoramic map, namely the panoramic three-dimensional semantic map, comprises the following steps:
during the navigation of the movable monitoring robot, the static background model of the panoramic map is automatically constructed through a TSDF-based real-time localization and mapping algorithm;
for pedestrian-category semantic instances, matching the same semantic instance across the local maps by using an RGB-image-based pedestrian re-identification algorithm; for non-pedestrian-category semantic instances, calculating the volume overlap ratio between the three-dimensional models corresponding to the semantic instances in each local map, and regarding semantic instances whose volume overlap ratio is higher than a set threshold as the same semantic instance; and obtaining the dynamic semantic instances in the panoramic map by combining the matched same semantic instances;
constructing a panoramic map by combining the obtained static background model of the panoramic map and the dynamic semantic instances in the panoramic map;
the registration module is configured to register the first map with each local map in sequence, and to acquire, through a RANSAC algorithm, the estimated extrinsic parameter matrix corresponding to each sensor in the first map;
the update output module is configured to calculate the error between the real extrinsic parameter matrix and the estimated extrinsic parameter matrix, and to update the first map based on each error to obtain a second map, which serves as the final panoramic map of the scene to be monitored at the current moment.
7. A storage device having a plurality of programs stored therein, wherein the programs are adapted to be loaded and executed by a processor to implement the three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association according to any one of claims 1 to 5.
8. A processing device, comprising a processor and a storage device, the processor being adapted to execute various programs and the storage device being adapted to store a plurality of programs; characterized in that the programs are adapted to be loaded and executed by the processor to implement the three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110126538.7A CN112446905B (en) | 2021-01-29 | 2021-01-29 | Three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112446905A CN112446905A (en) | 2021-03-05 |
CN112446905B (en) | 2021-05-11
Family
ID=74740114
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110126538.7A Active CN112446905B (en) | 2021-01-29 | 2021-01-29 | Three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112446905B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115293508B (en) * | 2022-07-05 | 2023-06-02 | 国网江苏省电力有限公司南通市通州区供电分公司 | Visual optical cable running state monitoring method and system |
CN115620201B (en) * | 2022-10-25 | 2023-06-16 | 北京城市网邻信息技术有限公司 | House model construction method, device, equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109640032A (en) * | 2018-04-13 | 2019-04-16 | 河北德冠隆电子科技有限公司 | Based on the more five dimension early warning systems of element overall view monitoring detection of artificial intelligence |
CN110706279A (en) * | 2019-09-27 | 2020-01-17 | 清华大学 | Global position and pose estimation method based on information fusion of global map and multiple sensors |
CN111561923A (en) * | 2020-05-19 | 2020-08-21 | 北京数字绿土科技有限公司 | SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion |
CN112016612A (en) * | 2020-08-26 | 2020-12-01 | 四川阿泰因机器人智能装备有限公司 | Monocular depth estimation-based multi-sensor fusion SLAM method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011037964A1 (en) * | 2009-09-22 | 2011-03-31 | Tenebraex Corporation | Systems and methods for correcting images in a multi-sensor system |
Non-Patent Citations (1)
Title |
---|
"全息位置地图概念内涵及其关键技术初探";朱欣焰、周成虎、呙维、胡涛、刘洪强、高文秀;《武汉大学学报· 信息科学版》;20150331;第40卷(第3期);全文 * |
Also Published As
Publication number | Publication date |
---|---|
CN112446905A (en) | 2021-03-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6896077B2 (en) | Vehicle automatic parking system and method | |
CN106679648B (en) | Visual inertia combination SLAM method based on genetic algorithm | |
US20210012520A1 (en) | Distance measuring method and device | |
CN110377015B (en) | Robot positioning method and robot positioning device | |
CN107808407B (en) | Binocular camera-based unmanned aerial vehicle vision SLAM method, unmanned aerial vehicle and storage medium | |
Rambach et al. | Learning to fuse: A deep learning approach to visual-inertial camera pose estimation | |
Panahandeh et al. | Vision-aided inertial navigation based on ground plane feature detection | |
CN107967457A (en) | A kind of place identification for adapting to visual signature change and relative positioning method and system | |
Merino et al. | Vision-based multi-UAV position estimation | |
Tian et al. | Accurate human navigation using wearable monocular visual and inertial sensors | |
US20180190014A1 (en) | Collaborative multi sensor system for site exploitation | |
CN108051002A (en) | Transport vehicle space-location method and system based on inertia measurement auxiliary vision | |
CN113568435B (en) | Unmanned aerial vehicle autonomous flight situation perception trend based analysis method and system | |
CN111982133B (en) | Method and device for positioning vehicle based on high-precision map and electronic equipment | |
CN112446905B (en) | Three-dimensional real-time panoramic monitoring method based on multi-degree-of-freedom sensing association | |
CN109084749B (en) | Method and device for semantic positioning through objects in environment | |
CN110260866A (en) | A kind of robot localization and barrier-avoiding method of view-based access control model sensor | |
CN111811502B (en) | Motion carrier multi-source information fusion navigation method and system | |
US20180350216A1 (en) | Generating Representations of Interior Space | |
CN111812978B (en) | Cooperative SLAM method and system for multiple unmanned aerial vehicles | |
Mojtahedzadeh | Robot obstacle avoidance using the Kinect | |
Cremona et al. | GNSS‐stereo‐inertial SLAM for arable farming | |
CN117870716A (en) | Map interest point display method and device, electronic equipment and storage medium | |
JP2023168262A (en) | Data division device and method | |
Pogorzelski et al. | Vision Based Navigation Securing the UAV Mission Reliability |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||