
CN113534786A - SLAM method-based environment reconstruction method and system and mobile robot - Google Patents

SLAM method-based environment reconstruction method and system and mobile robot

Info

Publication number
CN113534786A
CN113534786A (application CN202010312739.1A)
Authority
CN
China
Prior art keywords
information
map
semantic
point cloud
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010312739.1A
Other languages
Chinese (zh)
Inventor
王继鑫
魏楠哲
潘俊威
王二飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen 360 Smart Life Technology Co Ltd
Original Assignee
Shenzhen Qihu Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qihu Intelligent Technology Co Ltd filed Critical Shenzhen Qihu Intelligent Technology Co Ltd
Priority to CN202010312739.1A priority Critical patent/CN113534786A/en
Publication of CN113534786A publication Critical patent/CN113534786A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an environment reconstruction method and system based on a SLAM (simultaneous localization and mapping) method, and a mobile robot. The SLAM-based environment reconstruction method includes: fusing a plurality of data sources, based on a simultaneous localization and mapping method, to construct a 3D semantic map and storing the map in a map database; and updating the 3D semantic map according to depth measurements and semantic information. By fusing a plurality of data sources, the invention can construct a 3D semantic map of the environment more accurately.

Figure 202010312739

Description

SLAM method-based environment reconstruction method and system and mobile robot
Technical Field
The invention relates to the technical field of mobile robots, in particular to an environment reconstruction method and system and a mobile robot.
Background
Existing mobile robots such as sweepers, whether based on a gyroscope, a camera or a single-line laser, lack the ability to sense the real ground environment, or obtain only a very small amount of information: discrete two-dimensional samples of the spatial structure at one specific height, or sparse feature point cloud information of the space above the sweeper. Moreover, it is difficult to reconstruct in the map the geometric structure and semantic information of objects lower than 10 cm above the floor, so existing sweepers cannot solve the problems of perceiving and colliding with short obstacles (slippers, cables, small toys, pet excrement and the like).
Disclosure of Invention
In view of the above drawbacks of the prior art, an object of the present invention is to provide an environment reconstruction method, an environment reconstruction system and a mobile robot based on a SLAM (simultaneous localization and mapping) method, so as to solve, or at least partially solve, the above drawbacks.
According to an aspect of the present invention, there is provided an environment reconstruction method based on a simultaneous localization and mapping method, including: fusing a plurality of data sources based on a simultaneous localization and mapping method to construct a 3D semantic map and storing the 3D semantic map in a map database; and updating the 3D semantic map according to depth measurements and semantic information.
In an embodiment of the invention, the plurality of data sources comprises inertial measurement unit data, wheel encoder data, 2D point cloud data, depth map data, and IR map data.
In an embodiment of the invention, the fusing comprises: establishing a motion equation according to the motion constraints of a mobile robot and acquiring prior information of the state quantity of the mobile robot at the current moment; establishing an observation equation according to the characteristics of a plurality of sensors on the mobile robot and acquiring the measurement information of the plurality of sensors; and fusing the prior information and the measurement information to obtain fused positioning information.
In an embodiment of the present invention, the updating includes: projecting depth map data acquired by a depth camera to obtain a 3D point cloud, and updating the probability information in the map voxels of the 3D semantic map in combination with the fused positioning information.
In an embodiment of the present invention, the environment reconstruction method further includes: analyzing and rejecting point cloud noise in the 3D point cloud with a confidence histogram method, using IR map data acquired by the depth camera.
In an embodiment of the present invention, the environment reconstruction method further includes: down-sampling the 3D point cloud.
In an embodiment of the present invention, the environment reconstruction method further includes: performing semantic segmentation and edge extraction on an object using the IR map data acquired by the depth camera to obtain semantic information and edge information of the object, and updating the 3D semantic map in combination with the semantic information and the edge information.
In an embodiment of the present invention, updating the 3D semantic map in combination with the semantic information and the edge information includes: performing Bayesian inference according to the semantic information and the edge information to obtain an inference result; and re-projecting the map points of the 3D point cloud according to the inference result to obtain a re-projected 3D point cloud.
In an embodiment of the present invention, the object is a light-transmitting object, a light-reflecting object and/or a light-absorbing object.
In an embodiment of the present invention, the state quantity of the motion equation includes at least one of a position, an attitude, a linear velocity, an angular velocity, and an acceleration.
In one embodiment of the invention, the sensors include a lidar, a depth camera, an inertial measurement unit, and a wheel encoder.
In an embodiment of the present invention, the fusing fuses the prior information and the measurement information by using a Bayesian recursive estimation algorithm.
In an embodiment of the present invention, during the fusing, the fusion result of the fast-varying data measured by the inertial measurement unit and the wheel encoder is first used as the initial value for registering the slow-varying data measured by the laser radar and the depth camera, and Bayesian inference is then performed together with the registration result.
In an embodiment of the present invention, the fusing further includes: performing pose optimization on the attitude.
According to another aspect of the present invention, there is further provided an environment reconstruction system based on a simultaneous localization and mapping method, including: a map database for storing data, including a plurality of data sources; and a processor configured to: construct a 3D semantic map using the plurality of data sources based on a simultaneous localization and mapping method and store the 3D semantic map in the map database; and update the 3D semantic map according to depth measurements and semantic information.
In another embodiment of the invention, the plurality of data sources includes inertial measurement unit data, wheel encoder data, 2D point cloud data, depth map data, and IR map data.
In another embodiment of the present invention, the processor comprises: a fusion unit for fusing prior information and measurement information to obtain fused positioning information, wherein the prior information is the prior information of the state quantity of a mobile robot at the current moment, obtained by establishing a motion equation according to the motion constraints of the mobile robot, and the measurement information is the measurement information of a plurality of sensors, obtained by establishing an observation equation according to the characteristics of the plurality of sensors on the mobile robot; and an updating unit for obtaining a 3D point cloud by projecting depth map data acquired by a depth camera and updating the probability information in the map voxels of the 3D semantic map in combination with the fused positioning information.
In another embodiment of the present invention, the processor further comprises: a confidence histogram processing unit for analyzing and rejecting point cloud noise in the 3D point cloud with a confidence histogram method, using the IR map data acquired by the depth camera.
In another embodiment of the present invention, the processor further comprises: a down-sampling processing unit for down-sampling the 3D point cloud.
In another embodiment of the present invention, the processor further comprises: a semantic segmentation unit for performing semantic segmentation on an object using the IR map data acquired by the depth camera to obtain semantic information of the object; and an edge extraction unit for performing edge extraction on the object using the IR map data acquired by the depth camera to obtain edge information of the object; wherein the processor updates the 3D semantic map in combination with the semantic information and the edge information.
In another embodiment of the present invention, the processor further comprises: a Bayesian inference unit for performing Bayesian inference according to the semantic information and the edge information to obtain an inference result; and a map point re-projection unit for re-projecting the map points of the 3D point cloud according to the inference result to obtain a re-projected 3D point cloud.
In another embodiment of the present invention, the object is a light-transmitting object, a light-reflecting object and/or a light-absorbing object.
In another embodiment of the present invention, the state quantity of the motion equation includes at least one of a position, an attitude, a linear velocity, an angular velocity, and an acceleration.
In another embodiment of the present invention, the sensors include a lidar, a depth camera, an inertial measurement unit, and a wheel encoder.
In another embodiment of the present invention, the fusion unit fuses the prior information and the measurement information by using a Bayesian recursive estimation algorithm.
In another embodiment of the present invention, when the fusion unit performs fusion, the fusion result of the fast-varying data measured by the inertial measurement unit and the wheel encoder is first used as the initial value for registering the slow-varying data measured by the laser radar and the depth camera, and Bayesian inference is then performed together with the registration result.
In another embodiment of the present invention, the processor further comprises: a pose optimization unit for performing pose optimization on the attitude during fusion.
In order to achieve the above object, the present invention further provides a mobile robot comprising: the environment reconstruction system based on the simultaneous localization and mapping method as described above.
In yet another embodiment of the present invention, the mobile robot is a sweeper.
According to the invention, by fusing a plurality of data sources, the 3D semantic map of the environment can be constructed more accurately. In addition, the 3D semantic map is updated according to depth measurements and semantic information, so a high-precision environment map can be reconstructed; the mobile robot can truly see the home environment, and the problems that existing sweepers cannot see low obstacles and cannot avoid pet excrement, slippers, clothes and socks, cables and the like are thoroughly solved.
The invention can also use IR map data collected by the depth camera to analyze and reject point cloud noise with low confidence through a confidence histogram method. Meanwhile, for light-transmitting, light-reflecting and light-absorbing objects that cannot be measured, the 3D semantic map can be updated in combination with the semantic information and edge information of those objects, so that globally consistent, high-precision environment map reconstruction can be achieved.
The foregoing is only an overview of the technical solutions of the present invention. Embodiments of the present invention are described below so that the technical means of the present invention can be understood more clearly and so that the above and other objects, features and advantages of the present invention become more readily apparent.
Drawings
Various other advantages and benefits of the present invention will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a schematic diagram of an environment reconstruction system based on a simultaneous localization and mapping method according to the present invention;
FIG. 2 is a schematic diagram of the environment reconstruction method based on the simultaneous localization and mapping method of the present invention;
fig. 3 shows a schematic structural diagram of the mobile robot of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It should be noted that references in the specification to "one embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not intended to refer to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
Moreover, where certain terms are used throughout the description and the following claims to refer to particular components or features, those skilled in the art will understand that manufacturers may refer to a component or feature by different names or terms. This specification and the following claims do not distinguish between components or features that differ in name but not in function. In the following description and claims, the terms "include" and "comprise" are used in an open-ended fashion and should therefore be interpreted to mean "including, but not limited to". Furthermore, the term "coupled" is intended to include any direct or indirect connection; an indirect connection includes a connection made through other elements.
As shown in fig. 1, the environment reconstruction system based on a simultaneous localization and mapping (SLAM) method of the present invention includes a map database 100 and a processor 200. The map database 100 is used for storing data, including a plurality of data sources such as, but not limited to, inertial measurement unit data 11, wheel encoder data 12, 2D point cloud data 13, depth map data 14 and IR map data 15. The processor 200 is configured to: construct a 3D semantic map using the plurality of data sources based on a simultaneous localization and mapping method and store the 3D semantic map in the map database 100; and update the 3D semantic map according to depth measurements and semantic information.
More specifically, the processor 200 may comprise, for example, a fusion unit 21 and an updating unit 22. The fusion unit 21 may be configured to fuse prior information and measurement information to obtain fused positioning information. The prior information is the prior information of the state quantity of the mobile robot at the current moment, obtained by establishing a motion equation according to the motion constraints of the mobile robot; the state quantity may include at least one of position, attitude, linear velocity, angular velocity and acceleration. The measurement information is the measurement information of a plurality of sensors, obtained by establishing an observation equation based on the characteristics of the plurality of sensors on the mobile robot. The sensors may include, for example but not limited to, a laser radar, a depth camera, an inertial measurement unit (IMU), a wheel encoder and the like. The laser radar may be a single-line laser radar, from which the 2D point cloud data 13 may be obtained. The depth camera may be a visual image acquisition device capable of perceiving ambient depth, such as a structured-light, TOF or binocular-vision camera, for example a 3D depth camera, from which the depth map data 14, the IR map data 15 and the like may be obtained. The IMU may comprise, for example, a gyroscope and an accelerometer, from which IMU data 11 such as angular velocity and acceleration may be obtained. The wheel encoder data 12, for example wheel speed data, may be obtained from the wheel encoder. Preferably, the fusion unit 21 fuses the prior information and the measurement information using a Bayesian recursive estimation algorithm. In addition, when the fusion unit 21 performs fusion, the fusion result of the fast-changing data measured by the inertial measurement unit and the wheel encoder is first used as the initial value for registering the slow-changing data measured by the laser radar and the depth camera, and Bayesian inference is then performed together with the registration result.
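The patent does not give the estimator's equations. The sketch below (in Python; the three-dimensional pose state, the noise values and all function names are illustrative assumptions, not the patent's implementation) shows one way such a Bayesian recursive predict/update cycle can be arranged: the fast-varying wheel-encoder/IMU data drive the prediction from the motion equation, and a slow-varying laser/depth registration result drives the correction.

    # Minimal sketch (assumptions only): Bayesian recursive fusion in the
    # extended-Kalman style. Fast-varying IMU / wheel-encoder data feed the
    # predict step; a slow-varying lidar / depth-camera registration result
    # feeds the update step.
    import numpy as np

    class PoseFusionEKF:
        def __init__(self):
            self.x = np.zeros(3)            # state: [x, y, yaw]
            self.P = np.eye(3) * 1e-3       # state covariance

        def predict(self, v, omega, dt, q=1e-4):
            """Prior from the motion equation (encoder speed v, gyro rate omega)."""
            x, y, yaw = self.x
            self.x = np.array([x + v * dt * np.cos(yaw),
                               y + v * dt * np.sin(yaw),
                               yaw + omega * dt])
            F = np.array([[1, 0, -v * dt * np.sin(yaw)],
                          [0, 1,  v * dt * np.cos(yaw)],
                          [0, 0,  1]])
            self.P = F @ self.P @ F.T + np.eye(3) * q

        def update(self, z_pose, r=1e-2):
            """Correction from a slow-varying observation, e.g. a scan
            registration that directly observes [x, y, yaw]."""
            H = np.eye(3)
            S = H @ self.P @ H.T + np.eye(3) * r
            K = self.P @ H.T @ np.linalg.inv(S)
            innovation = z_pose - self.x
            innovation[2] = (innovation[2] + np.pi) % (2 * np.pi) - np.pi  # wrap yaw
            self.x = self.x + K @ innovation
            self.P = (np.eye(3) - K @ H) @ self.P

    # usage: predict at IMU/encoder rate, update whenever a registration arrives
    ekf = PoseFusionEKF()
    ekf.predict(v=0.2, omega=0.05, dt=0.02)
    ekf.update(z_pose=np.array([0.004, 0.0, 0.001]))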
The updating unit 22 may be configured to obtain a 3D point cloud by projecting the depth map data 14 collected by the depth camera, and to update the probability information in the map voxels of the 3D semantic map in combination with the fused positioning information.
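As a rough illustration of this step (not the patent's implementation), the sketch below back-projects a depth image with an assumed pinhole camera model, transforms the points with the fused pose, and accumulates per-voxel occupancy probability as log-odds; the intrinsics, voxel size and log-odds increment are placeholders.

    # Sketch only: depth image -> 3D point cloud -> log-odds voxel update.
    # Intrinsics, resolution and the log-odds increment are assumptions.
    import numpy as np

    def depth_to_points(depth, fx, fy, cx, cy):
        """depth: HxW array in metres -> Nx3 points in the camera frame."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth.ravel()
        valid = z > 0
        x = (u.ravel() - cx) * z / fx
        y = (v.ravel() - cy) * z / fy
        return np.stack([x, y, z], axis=1)[valid]

    def update_voxels(log_odds, points, T_world_cam, voxel_size=0.05, hit=0.85):
        """Accumulate log-odds occupancy evidence for the voxels hit by the cloud."""
        pts_h = np.hstack([points, np.ones((len(points), 1))])
        pts_w = (T_world_cam @ pts_h.T).T[:, :3]            # into the map frame
        keys = np.floor(pts_w / voxel_size).astype(np.int64)
        l_hit = np.log(hit / (1.0 - hit))
        for key in map(tuple, keys):
            log_odds[key] = log_odds.get(key, 0.0) + l_hit  # occupancy evidence
        return log_odds

    # usage with a synthetic frame and an identity camera pose
    depth = np.full((4, 4), 1.0)
    cloud = depth_to_points(depth, fx=500, fy=500, cx=2, cy=2)
    voxels = update_voxels({}, cloud, np.eye(4))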
In the present invention, the processor 200 may further include a confidence histogram processing unit 23, which is configured to analyze and reject point cloud noise with low confidence in the 3D point cloud by using the IR map data 15 collected by the depth camera through a confidence histogram method.
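The patent names the confidence histogram method without detailing it. The sketch below assumes the IR value of each point acts as a confidence measure, builds a histogram of those values, and drops the points falling in the lowest-confidence bins; the bin count and cut-off fraction are assumptions.

    # Sketch only: reject low-confidence points via a histogram of their
    # IR values. Bin count and drop fraction are illustrative assumptions.
    import numpy as np

    def reject_low_confidence(points, ir_values, bins=32, drop_fraction=0.05):
        """Drop the points whose IR confidence falls in the lowest histogram bins."""
        hist, edges = np.histogram(ir_values, bins=bins)
        cumulative = np.cumsum(hist) / hist.sum()
        cut_bin = np.searchsorted(cumulative, drop_fraction)  # first bin above the cut
        threshold = edges[cut_bin + 1]
        keep = ir_values >= threshold
        return points[keep], ir_values[keep]

    # usage: points and their per-point IR readings must be index-aligned
    pts = np.random.rand(1000, 3)
    ir = np.random.rand(1000)
    filtered_pts, filtered_ir = reject_low_confidence(pts, ir)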
In the present invention, the processor 200 may further include a downsampling processing unit 24, which may be configured to downsample the 3D point cloud.
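One common choice for this step is a voxel-grid filter that keeps a single centroid per voxel; the patent does not specify the filter, and the leaf size below is an assumption.

    # Sketch only: voxel-grid down-sampling, one centroid per occupied voxel.
    import numpy as np

    def voxel_downsample(points, leaf_size=0.02):
        """Return one centroid per occupied voxel of edge length leaf_size."""
        keys = np.floor(points / leaf_size).astype(np.int64)
        buckets = {}
        for key, p in zip(map(tuple, keys), points):
            buckets.setdefault(key, []).append(p)
        return np.array([np.mean(b, axis=0) for b in buckets.values()])

    reduced = voxel_downsample(np.random.rand(5000, 3))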
In the present invention, the processor 200 may further include a semantic segmentation unit 25 and an edge extraction unit 26. The semantic segmentation unit 25 may be configured to perform semantic segmentation on an object using the IR map data 15 collected by the depth camera to obtain semantic information of the object, and the edge extraction unit 26 may be configured to perform edge extraction on the object using the same IR map data 15 to obtain edge information of the object. The processor then updates the 3D semantic map in combination with the semantic information and the edge information. The object may be a light-transmitting object, a light-reflecting object and/or a light-absorbing object.
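Neither the segmentation network nor the edge detector is specified in the patent. In the sketch below, a stand-in segment(ir) callable is assumed to return a per-pixel class map, and edges are taken from a simple gradient-magnitude threshold on the IR image; both are illustrative only.

    # Sketch only: hypothetical segmentation hook plus gradient-based edges.
    import numpy as np

    def extract_edges(ir, threshold=0.2):
        """Boolean edge mask from an IR image via finite-difference gradients."""
        ir = ir.astype(float)
        gx = np.zeros_like(ir)
        gy = np.zeros_like(ir)
        gx[:, 1:-1] = (ir[:, 2:] - ir[:, :-2]) / 2.0
        gy[1:-1, :] = (ir[2:, :] - ir[:-2, :]) / 2.0
        magnitude = np.hypot(gx, gy)
        return magnitude > threshold * magnitude.max()

    def semantic_and_edge_info(ir, segment):
        """Combine a (hypothetical) segmentation model with edge extraction."""
        labels = segment(ir)          # per-pixel semantic labels, e.g. glass, cable
        edges = extract_edges(ir)
        return labels, edges

    # usage with a dummy segmentation model
    ir_frame = np.random.rand(120, 160)
    labels, edges = semantic_and_edge_info(ir_frame, segment=lambda ir: np.zeros(ir.shape, int))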
In the present invention, the processor 200 may further include a Bayesian inference unit 27 and a map point re-projection unit 28. The Bayesian inference unit 27 may be configured to perform Bayesian inference based on the semantic information and the edge information to obtain an inference result. The map point re-projection unit 28 may be configured to re-project the map points of the 3D point cloud according to the inference result, so as to obtain a re-projected 3D point cloud.
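A minimal sketch of these two steps follows, under the assumption that each voxel carries a per-class probability vector fused with the new semantic observation by Bayes' rule, and that map points on a hard-to-measure object are re-projected by snapping them to the nearest extracted edge point; the class list, likelihoods and snapping rule are assumptions, not the patent's method.

    # Sketch only: Bayesian fusion of per-voxel class probabilities and a
    # nearest-edge re-projection of map points. Values are illustrative.
    import numpy as np

    def bayes_update(prior, likelihood):
        """prior, likelihood: arrays of per-class probabilities for one voxel."""
        posterior = prior * likelihood
        return posterior / posterior.sum()

    def reproject_to_edge(map_points, edge_points):
        """Replace each map point with its nearest point on the detected edge."""
        out = np.empty_like(map_points)
        for i, p in enumerate(map_points):
            d = np.linalg.norm(edge_points - p, axis=1)
            out[i] = edge_points[np.argmin(d)]
        return out

    prior = np.array([0.5, 0.3, 0.2])            # e.g. [floor, glass, cable]
    obs = np.array([0.2, 0.7, 0.1])              # segmentation output for this voxel
    posterior = bayes_update(prior, obs)
    edges = np.array([[1.0, 0.0, 0.1], [1.0, 0.5, 0.1]])
    snapped = reproject_to_edge(np.array([[0.9, 0.1, 0.12]]), edges)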
In the present invention, the processor 200 may further include a pose optimization unit 29, which may be configured to perform pose optimization on the pose when performing fusion.
As shown in fig. 2, the environment reconstruction method based on the simultaneous localization and mapping method of the present invention includes:
and step S1, fusing a plurality of data sources based on a simultaneous localization and map construction method to construct a 3D semantic map and store the 3D semantic map in a map database. The plurality of data sources may include, for example, but are not limited to, inertial measurement unit data, wheel encoder data, 2D point cloud data, depth map data, and IR map data, among others.
Step S2, updating the 3D semantic map according to the depth measurements and semantic information.
In the present invention, the fusing may include, for example: establishing a motion equation according to the motion constraints of a mobile robot and acquiring prior information of the state quantity of the mobile robot at the current moment, wherein the state quantity of the motion equation may include at least one of position, attitude, linear velocity, angular velocity and acceleration; establishing an observation equation according to the characteristics of the sensors on the mobile robot and acquiring the measurement information of the sensors; and fusing the prior information and the measurement information to obtain fused positioning information, for example by using a Bayesian recursive estimation algorithm. Preferably, when fusion is performed, the fusion result of the fast-changing data measured by the inertial measurement unit and the wheel encoder is first used as the initial value for registering the slow-changing data measured by the laser radar and the depth camera, and Bayesian inference is then performed together with the registration result. Pose optimization may also be performed on the attitude during fusion.
In the present invention, the updating may include, for example: projecting depth map data acquired by a depth camera to obtain a 3D point cloud, and updating the probability information in the map voxels of the 3D semantic map in combination with the fused positioning information.
In the present invention, the environment reconstruction method may further include: analyzing and rejecting point cloud noise in the 3D point cloud with a confidence histogram method, using IR map data acquired by the depth camera.
In the present invention, the environment reconstruction method may further include: down-sampling the 3D point cloud.
In the present invention, the environment reconstruction method may further include: performing semantic segmentation and edge extraction on an object using the IR map data acquired by the depth camera to obtain semantic information and edge information of the object, and updating the 3D semantic map in combination with the semantic information and the edge information. For example, Bayesian inference may be performed according to the semantic information and the edge information to obtain an inference result, and the map points of the 3D point cloud may be re-projected according to the inference result to obtain a re-projected 3D point cloud.
As shown in fig. 3, the environment reconstruction system based on the simultaneous localization and mapping method of the present invention can be applied to a mobile robot 300, and the mobile robot 300 can be, for example, a sweeper.
The positioning data obtained from the single-line laser radar can be regarded as absolute positioning data, while the positioning data produced by the wheel speedometer, the gyroscope and the like can be regarded as relative positioning data. If the 3D depth camera relies only on relative positioning data such as that from the wheel speedometer and the gyroscope during movement, the accumulated error of the relative positioning data will lead to map stitching errors. By combining the long range and high precision of the single-line laser radar with the large data volume and rich information of the 3D depth camera, the method can simultaneously obtain depth, contour, IR and other image information of the environment and of obstacles, and achieve high-precision environment map modeling. By effectively fusing data sources such as IMU data, wheel encoder data, 2D point cloud data, depth map data and IR map data, high-precision positioning information can be obtained and a 3D semantic map of the environment can be constructed more accurately. In addition, the 3D semantic map is updated according to depth measurements and semantic information, so a high-precision environment map can be reconstructed. The robot can thus truly see the home environment, and the problems that existing sweepers cannot see low obstacles and cannot avoid pet excrement, slippers, clothes and socks, cables and the like are thoroughly solved.
In addition, transparent, reflective and light-absorbing objects that are difficult for an optical sensor to measure are common in home environments; the depth camera measures them inaccurately or cannot measure them at all, which reduces the positioning accuracy and the consistency of the 3D semantic map. Therefore, point cloud noise with low confidence is analyzed and rejected with a confidence histogram method using the IR map data collected by the depth camera, and for the light-transmitting, light-reflecting and light-absorbing objects that cannot be measured, the 3D semantic map can be updated in combination with the semantic information and edge information of those objects, so that globally consistent, high-precision environment map reconstruction can be achieved.
The invention discloses A1, an environment reconstruction method based on a simultaneous localization and mapping method, comprising the following steps:
fusing a plurality of data sources based on a simultaneous positioning and map construction method to construct a 3D semantic map and store the 3D semantic map in a map database; and
updating the 3D semantic map according to the depth measurement and the semantic information.
A2, the environment reconstruction method according to A1, wherein the plurality of data sources include inertial measurement unit data, wheel encoder data, 2D point cloud data, depth map data, and IR map data.
A3, the environment reconstruction method according to A2, wherein the fusing includes:
establishing a motion equation according to motion constraint of a mobile robot, and acquiring prior information of a state quantity of the mobile robot at the current moment;
establishing an observation equation according to the characteristics of a plurality of sensors on the mobile robot, and acquiring the measurement information of the plurality of sensors; and
and fusing the prior information and the measurement information to obtain fused positioning information.
A4, the environment reconstruction method according to A3, wherein the updating includes:
and projecting depth map data acquired by a depth camera to obtain a 3D point cloud, and updating probability information in a map voxel of the 3D semantic map by combining the fused positioning information.
A5, the environment reconstruction method according to A4, wherein the environment reconstruction method further comprises:
and analyzing and rejecting point cloud noise points in the 3D point cloud by using IR image data acquired by the depth camera through a confidence histogram method.
A6, the environment reconstruction method according to A5, wherein the environment reconstruction method further comprises:
and performing down-sampling processing on the 3D point cloud.
A7, the environment reconstruction method according to A6, wherein the environment reconstruction method further comprises:
and performing semantic segmentation and edge extraction on the object by using the IR image data acquired by the depth camera to obtain semantic information and edge information of the object, and updating the 3D semantic map by combining the semantic information and the edge information.
A8, the environment reconstruction method according to A7, wherein updating the 3D semantic map in combination with the semantic information and the edge information includes:
carrying out Bayesian inference according to the semantic information and the edge information to obtain an inference result; and
and carrying out re-projection on the map points of the 3D point cloud according to the inference result to obtain the 3D point cloud after re-projection.
A9, the environment reconstruction method according to A8, wherein the object is a light reflecting object and/or a light absorbing object.
A10, the environment reconstructing method according to any one of A3 to A9, wherein the state quantities of the equation of motion include at least one of position, attitude, linear velocity, angular velocity, and acceleration.
A11, the environment reconstruction method according to A10, wherein the sensors include lidar, a depth camera, an inertial measurement unit, and a wheel encoder.
A12, the environment reconstruction method according to A11, wherein the fusion is to fuse the prior information and the measurement information by adopting a Bayesian recursive estimation algorithm.
A13, the environment reconstruction method according to A12, wherein, in the fusion, the fusion result of the fast-varying data measured by the inertial measurement unit and the wheel encoder is first used as the initial value for registering the slow-varying data measured by the laser radar and the depth camera, and Bayesian inference is then performed together with the registration result.
A14, the environment reconstructing method according to A13, wherein the method further comprises:
and performing pose optimization on the posture.
A15, an environment reconstruction system based on simultaneous localization and mapping method, comprising:
a map database for storing data, including a plurality of data sources;
a processor configured to perform:
constructing a 3D semantic map by utilizing the data sources based on a simultaneous positioning and map construction method and storing the 3D semantic map in the map database; and
updating the 3D semantic map according to the depth measurement and the semantic information.
A16, the environment reconstruction system according to A15, wherein the plurality of data sources include inertial measurement unit data, wheel encoder data, 2D point cloud data, depth map data, and IR map data.
A17, the environment reconstruction system according to A16, wherein the processor comprises:
the fusion unit is used for fusing prior information and measurement information to obtain fused positioning information, wherein the prior information is prior information of a current moment state quantity of a mobile robot, which is obtained by establishing a motion equation according to motion constraint of the mobile robot, and the measurement information is measurement information of a plurality of sensors, which is obtained by establishing an observation equation according to characteristics of the plurality of sensors on the mobile robot; and
and the updating unit is used for obtaining a 3D point cloud by utilizing the projection of the depth map data acquired by the depth camera and updating probability information in the map voxel of the 3D semantic map by combining the fused positioning information.
A18, the environment reconstruction system according to A17, wherein the processor further comprises:
and the confidence histogram processing unit is used for analyzing and eliminating point cloud noise points in the 3D point cloud by using the IR image data acquired by the depth camera through a confidence histogram method.
A19, the environment reconstruction system according to A18, wherein the processor further comprises:
and the down-sampling processing unit is used for performing down-sampling processing on the 3D point cloud.
A20, the environment reconstruction system according to A18, wherein the processor further comprises:
the semantic segmentation unit is used for performing semantic segmentation on the object by utilizing the IR image data acquired by the depth camera to obtain semantic information of the object;
the edge extraction unit is used for extracting the edge of the object by utilizing the IR image data collected by the depth camera so as to obtain the edge information of the object;
wherein the processor is to update the 3D semantic map in conjunction with the semantic information and the edge information.
A21, the environment reconstruction system according to A20, wherein the processor further comprises:
the Bayesian inference unit is used for carrying out Bayesian inference according to the semantic information and the edge information to obtain an inference result; and
and the map point re-projection unit is used for re-projecting the map points of the 3D point cloud according to the inference result to obtain the re-projected 3D point cloud.
A22, the environment reconstruction system according to A21, wherein the object is a light reflecting object and/or a light absorbing object.
A23, the environment reconstruction system according to any one of A17 to A22, wherein the state quantities of the equation of motion include at least one of position, attitude, linear velocity, angular velocity, and acceleration.
A24, the environment reconstruction system according to A23, wherein the sensors include lidar, a depth camera, an inertial measurement unit, and a wheel encoder.
A25, the environment reconstruction system according to A24, wherein the fusion unit fuses the prior information and the measurement information by using a Bayesian recursive estimation algorithm.
A26, the environment reconstruction system according to A25, wherein the fusion unit, when fusing, uses a fusion result of fast-varying data measured by the inertial measurement unit and the wheel encoder as an initial value of slow-varying data measured by the laser radar and the depth camera, and then performs Bayesian inference together with a registration result.
A27, the environment reconstruction system according to A26, wherein the processor further comprises:
and the pose optimization unit is used for optimizing the pose of the gesture during fusion.
A28, a mobile robot, comprising:
an environment reconstruction system based on the simultaneous localization and mapping method as described in any of a 15-a 27.
A29, the mobile robot according to A28, wherein the mobile robot is a sweeper.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: i.e. the invention as claimed, has more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or groups of devices in an embodiment may be adaptively changed and arranged in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
The present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof, and it should be understood that various changes and modifications can be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. An environment reconstruction method based on a simultaneous localization and mapping method is characterized by comprising the following steps:
fusing a plurality of data sources based on a simultaneous positioning and map construction method to construct a 3D semantic map and store the 3D semantic map in a map database; and
updating the 3D semantic map according to the depth measurement and the semantic information.
2. The environment reconstruction method of claim 1, wherein the plurality of data sources comprises inertial measurement unit data, wheel encoder data, 2D point cloud data, depth map data, and IR map data.
3. The environment reconstruction method according to claim 2, wherein the fusing comprises:
establishing a motion equation according to motion constraint of a mobile robot, and acquiring prior information of a state quantity of the mobile robot at the current moment;
establishing an observation equation according to the characteristics of a plurality of sensors on the mobile robot, and acquiring the measurement information of the plurality of sensors; and
fusing the prior information and the measurement information to obtain fused positioning information;
the updating comprises the following steps:
and projecting depth map data acquired by a depth camera to obtain a 3D point cloud, and updating probability information in a map voxel of the 3D semantic map by combining the fused positioning information.
4. The environment reconstruction method according to claim 3, further comprising:
analyzing and rejecting point cloud noise points in the 3D point cloud by using IR image data acquired by the depth camera through a confidence histogram method;
performing down-sampling processing on the 3D point cloud;
and performing semantic segmentation and edge extraction on the object by using the IR image data acquired by the depth camera to obtain semantic information and edge information of the object, and updating the 3D semantic map by combining the semantic information and the edge information.
5. The environment reconstruction method of claim 4, wherein updating the 3D semantic map in conjunction with the semantic information and the edge information comprises:
carrying out Bayesian inference according to the semantic information and the edge information to obtain an inference result; and
and carrying out re-projection on the map points of the 3D point cloud according to the inference result to obtain the 3D point cloud after re-projection.
6. An environment reconstruction system based on a simultaneous localization and mapping method, comprising:
a map database for storing data, including a plurality of data sources;
a processor configured to perform:
constructing a 3D semantic map by utilizing the data sources based on a simultaneous positioning and map construction method and storing the 3D semantic map in the map database; and
updating the 3D semantic map according to the depth measurement and the semantic information.
7. The environment reconstruction system of claim 6, wherein the plurality of data sources comprises inertial measurement unit data, wheel encoder data, 2D point cloud data, depth map data, and IR map data.
8. The environment reconstruction system of claim 7, wherein the processor comprises:
the fusion unit is used for fusing prior information and measurement information to obtain fused positioning information, wherein the prior information is prior information of a current moment state quantity of a mobile robot, which is obtained by establishing a motion equation according to motion constraint of the mobile robot, and the measurement information is measurement information of a plurality of sensors, which is obtained by establishing an observation equation according to characteristics of the plurality of sensors on the mobile robot; and
and the updating unit is used for obtaining a 3D point cloud by utilizing the projection of the depth map data acquired by the depth camera and updating probability information in the map voxel of the 3D semantic map by combining the fused positioning information.
9. The environment reconstruction system of claim 8, wherein the processor further comprises:
the confidence histogram processing unit is used for analyzing and eliminating point cloud noise points in the 3D point cloud by using the IR image data acquired by the depth camera through a confidence histogram method;
the down-sampling processing unit is used for performing down-sampling processing on the 3D point cloud;
the semantic segmentation unit is used for performing semantic segmentation on the object by utilizing the IR image data acquired by the depth camera to obtain semantic information of the object;
the edge extraction unit is used for extracting the edge of the object by utilizing the IR image data collected by the depth camera so as to obtain the edge information of the object; wherein the processor is to update the 3D semantic map in conjunction with the semantic information and the edge information;
the Bayesian inference unit is used for carrying out Bayesian inference according to the semantic information and the edge information to obtain an inference result; and
and the map point re-projection unit is used for re-projecting the map points of the 3D point cloud according to the inference result to obtain the re-projected 3D point cloud.
10. A mobile robot, comprising:
the system according to any of claims 6 to 9 for environment reconstruction based on a simultaneous localization and mapping method.
CN202010312739.1A 2020-04-20 2020-04-20 SLAM method-based environment reconstruction method and system and mobile robot Pending CN113534786A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010312739.1A CN113534786A (en) 2020-04-20 2020-04-20 SLAM method-based environment reconstruction method and system and mobile robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010312739.1A CN113534786A (en) 2020-04-20 2020-04-20 SLAM method-based environment reconstruction method and system and mobile robot

Publications (1)

Publication Number Publication Date
CN113534786A true CN113534786A (en) 2021-10-22

Family

ID=78123585

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010312739.1A Pending CN113534786A (en) 2020-04-20 2020-04-20 SLAM method-based environment reconstruction method and system and mobile robot

Country Status (1)

Country Link
CN (1) CN113534786A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009223628A (en) * 2008-03-17 2009-10-01 Toyota Motor Corp Mobile robot and method for generating environment map
CN103247075A (en) * 2013-05-13 2013-08-14 北京工业大学 Variational mechanism-based indoor scene three-dimensional reconstruction method
US20160282126A1 (en) * 2015-03-24 2016-09-29 Google Inc. Associating Semantic Location Data with Automated Environment Mapping
CN108106605A (en) * 2013-02-28 2018-06-01 谷歌技术控股有限责任公司 Depth transducer control based on context
CN108229416A (en) * 2018-01-17 2018-06-29 苏州科技大学 Robot SLAM methods based on semantic segmentation technology
CN109583457A (en) * 2018-12-03 2019-04-05 荆门博谦信息科技有限公司 A kind of method and robot of robot localization and map structuring
CN109815847A (en) * 2018-12-30 2019-05-28 中国电子科技集团公司信息科学研究院 A kind of vision SLAM method based on semantic constraint
CN109816686A (en) * 2019-01-15 2019-05-28 山东大学 Robot semantic SLAM method, processor and robot based on object instance matching
CN109828588A (en) * 2019-03-11 2019-05-31 浙江工业大学 Paths planning method in a kind of robot chamber based on Multi-sensor Fusion
CN110243370A (en) * 2019-05-16 2019-09-17 西安理工大学 A 3D Semantic Map Construction Method for Indoor Environment Based on Deep Learning
CN110706248A (en) * 2019-08-20 2020-01-17 广东工业大学 A SLAM-based visual perception mapping algorithm and mobile robot
CN110717917A (en) * 2019-09-30 2020-01-21 北京影谱科技股份有限公司 CNN-based semantic segmentation depth prediction method and device

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009223628A (en) * 2008-03-17 2009-10-01 Toyota Motor Corp Mobile robot and method for generating environment map
CN108106605A (en) * 2013-02-28 2018-06-01 谷歌技术控股有限责任公司 Depth transducer control based on context
CN103247075A (en) * 2013-05-13 2013-08-14 北京工业大学 Variational mechanism-based indoor scene three-dimensional reconstruction method
US20160282126A1 (en) * 2015-03-24 2016-09-29 Google Inc. Associating Semantic Location Data with Automated Environment Mapping
CN108229416A (en) * 2018-01-17 2018-06-29 苏州科技大学 Robot SLAM methods based on semantic segmentation technology
CN109583457A (en) * 2018-12-03 2019-04-05 荆门博谦信息科技有限公司 A kind of method and robot of robot localization and map structuring
CN109815847A (en) * 2018-12-30 2019-05-28 中国电子科技集团公司信息科学研究院 A kind of vision SLAM method based on semantic constraint
CN109816686A (en) * 2019-01-15 2019-05-28 山东大学 Robot semantic SLAM method, processor and robot based on object instance matching
CN109828588A (en) * 2019-03-11 2019-05-31 浙江工业大学 Paths planning method in a kind of robot chamber based on Multi-sensor Fusion
CN110243370A (en) * 2019-05-16 2019-09-17 西安理工大学 A 3D Semantic Map Construction Method for Indoor Environment Based on Deep Learning
CN110706248A (en) * 2019-08-20 2020-01-17 广东工业大学 A SLAM-based visual perception mapping algorithm and mobile robot
CN110717917A (en) * 2019-09-30 2020-01-21 北京影谱科技股份有限公司 CNN-based semantic segmentation depth prediction method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
张泽坤; 唐冰; 陈小平: "Multi-stereo-camera object manipulation system for logistics sorting", 计算机应用 (Journal of Computer Applications), no. 08, 11 May 2018 (2018-05-11), pages 2442-2448 *
张立志; 陈殿生; 刘维惠: "Indoor navigation method for nursing robots based on hybrid maps", 北京航空航天大学学报 (Journal of Beijing University of Aeronautics and Astronautics), no. 05, 22 September 2017 (2017-09-22) *
陈超; 李强; 闫青: "Simultaneous localization and mapping of a mobile robot based on heterogeneous sensor information fusion", 科学技术与工程 (Science Technology and Engineering), no. 13, 8 May 2018 (2018-05-08) *

Similar Documents

Publication Publication Date Title
JP6812404B2 (en) Methods, devices, computer-readable storage media, and computer programs for fusing point cloud data
CN109297510B (en) Relative pose calibration method, device, equipment and medium
CN109084732B (en) Positioning and navigation method, device and processing equipment
CN110388924B (en) System and method for radar-based vehicle positioning in connection with automatic navigation
CN111936821B (en) System and method for positioning
CN112639502B (en) Robot pose estimation
CN109270545B (en) Positioning true value verification method, device, equipment and storage medium
CN109506642B (en) A robot multi-camera visual inertial real-time positioning method and device
Wefelscheid et al. Three-dimensional building reconstruction using images obtained by unmanned aerial vehicles
CN110873883B (en) Positioning method, medium, terminal and device integrating laser radar and IMU
CN110411457B (en) Positioning method, system, terminal and storage medium based on stroke perception and vision fusion
CN111220164A (en) Positioning method, device, equipment and storage medium
KR102200299B1 (en) A system implementing management solution of road facility based on 3D-VR multi-sensor system and a method thereof
KR20220025028A (en) Method and device for building beacon map based on visual beacon
US20100164807A1 (en) System and method for estimating state of carrier
CN113933818A (en) Method, device, storage medium and program product for calibration of external parameters of lidar
CN114111774B (en) Vehicle positioning method, system, equipment and computer readable storage medium
CN110470333A (en) Scaling method and device, the storage medium and electronic device of sensor parameters
CN113640756B (en) Data calibration method, system, device, computer program and storage medium
CN110458885B (en) Positioning system and mobile terminal based on stroke perception and vision fusion
CN113580134B (en) Visual positioning method, device, robot, storage medium and program product
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
CN108322698B (en) System and method based on fusion of multiple cameras and inertial measurement unit
CN113534786A (en) SLAM method-based environment reconstruction method and system and mobile robot
CN115200569A (en) Reliable high-precision positioning method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant after: Shenzhen 3600 Smart Life Technology Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant before: SHENZHEN QIHU INTELLIGENT TECHNOLOGY CO.,LTD.

Country or region before: China

CB02 Change of applicant information