CN111539994B - Particle filter repositioning method based on semantic likelihood estimation - Google Patents
- Publication number
- CN111539994B (application CN202010348294.2A)
- Authority
- CN
- China
- Prior art keywords
- semantic
- map
- robot
- laser
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/77—Determining position or orientation of objects or cameras using statistical methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20076—Probabilistic image processing
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
Description
Technical Field
The present invention belongs to the field of robot navigation and relocalization, and in particular relates to a particle filter relocalization method based on semantic likelihood estimation.
Background Art
As demands on robots continue to grow, robot navigation must become more accurate, which makes the robot relocalization problem particularly important.
Relocalization is the most important part of SLAM (simultaneous localization and mapping). SLAM relocalization is defined as follows: given a known environment map, determine the robot's own pose from its current sensor observations. Only after successful localization can map construction and navigation be carried out.
Researchers at home and abroad have combined multiple sources of information to help robots relocalize: some place QR codes in the environment to add extra constraints on particle sampling; some deploy multi-camera surveillance and extract ORB features from the captured images to detect the robot and determine which sub-map it occupies, improving the time efficiency of relocalization after kidnapping; others correct the robot's localization using landmark information in the environment together with the semantic laser obtained at the robot's actual pose, improving localization accuracy and precision.
Whether artificial markers such as QR codes are deployed or video surveillance is used to help the robot localize, these methods are neither direct nor autonomous. Pose correction and relocalization based on a single object landmark depend heavily on the detection accuracy of that single landmark, and require the landmark to be as flat as possible so that the laser point at the centre of the laser cluster can easily be taken as the object's position coordinate. Consequently, target recognition during relocalization carries a large error and the relocalization accuracy is low.
In view of the above problems in the prior art, the present invention proposes a particle filter relocalization method based on semantic likelihood estimation. The method adds semantic matching of objects, overcoming the shortcoming of matching only against environment structure information; it can effectively solve the problem of erroneous global relocalization of the robot and also improves the convergence speed of relocalization.
Summary of the Invention
The technical solution adopted by the present invention to solve the problems in the prior art is as follows:
A particle filter relocalization method based on semantic likelihood estimation, characterized by comprising the following steps:
Step S1: construct a grid map; the environment must contain distinctive obstacle features that can be used to recognize obstacle information;
Step S2: recognize objects with a convolutional-neural-network object detection method and obtain their semantic information;
Step S3: simulate a laser radar with the visual sensor to obtain semantic laser data;
Step S4: construct a semantic map from the positional relationship between the robot and the visual sensor, the positional relationship between the global coordinate system and the robot, and the semantic laser data of the objects;
Step S5: when the localization drifts, the robot senses the environment with the laser radar and the visual sensor, obtains the obstacle likelihood field from the laser grid map, and obtains the object semantic likelihood field from the semantic map;
Step S6: predict the robot's initial pose, change the particle weights through simultaneous likelihood matching of the laser data and the object semantics, resample, and iterate; once the filter has converged, the estimation of the robot state is complete and the robot's exact position is determined.
The specific steps of step S1 are as follows:
Step S101: obstacle selection: choose objects that are easy to recognize and do not move as the obstacles to be identified;
Step S102: build the grid map from the laser radar data: the robot moves one loop through the environment, the laser information of the environment and the robot odometry are obtained from the laser radar, and the grid map is built with the gmapping algorithm.
The object detection method of step S2 uses the SSD method to detect objects and obtain their semantic information; the sensor used for visual recognition is a KinectV2.
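By way of illustration, the sketch below shows how an SSD detector could be invoked to obtain the object labels and bounding boxes used as semantic information in step S2. The patent does not specify the detector implementation: the model files (ssd_deploy.prototxt, ssd_weights.caffemodel) and the class list are placeholders, and OpenCV's dnn module is only one possible way to run the network.

```python
import cv2
import numpy as np

# Placeholder model files and class list; the patent only states that an SSD
# detector is used together with a KinectV2 color image.
net = cv2.dnn.readNetFromCaffe("ssd_deploy.prototxt", "ssd_weights.caffemodel")
CLASS_NAMES = ["background", "chair", "door", "cabinet", "trash_can"]

def detect_objects(bgr_image, conf_threshold=0.5):
    """Run the SSD network on one color frame and return (label, score, box) tuples."""
    h, w = bgr_image.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(bgr_image, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()  # shape (1, 1, N, 7): [_, class_id, score, x1, y1, x2, y2]
    results = []
    for i in range(detections.shape[2]):
        score = float(detections[0, 0, i, 2])
        if score < conf_threshold:
            continue
        class_id = int(detections[0, 0, i, 1])
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        results.append((CLASS_NAMES[class_id], score, box.astype(int)))
    return results
```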
The specific steps of step S3 for obtaining the semantic laser data of an object are as follows:
Step S301: register the KinectV2 color camera with the depth camera, and calibrate the relative pose of the laser radar with respect to the color camera;
Step S302: after calibration and registration, take the point cloud belonging to the object's semantic information, remove the ground points and keep only the nearest point of each column inside the detection box; these points simulate laser radar hit points and constitute the object's semantic laser data;
Step S303: the coordinates of the semantic laser data are obtained as:
z_k = d / s (1)
x_k = (u - c_x) * z_k / f_x (2)
y_k = (v - c_y) * z_k / f_y (3)
where d is the depth value, in the corresponding depth map, of the pixel with coordinates (u, v) in one frame of the color camera, s is the scaling factor of the depth map, f_x and f_y are the focal lengths of the RGB camera along the two image axes, c_x and c_y are the coordinates of its optical centre, and x_k, y_k, z_k are the three-dimensional coordinates in the KinectV2 coordinate system.
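The sketch below illustrates step S3: the registered depth pixels inside one detection box are back-projected with equations (1) to (3), and only the nearest point of each image column is kept as a simulated laser hit. The variable names, the depth scale default and the simple height threshold used for ground removal are assumptions of this sketch, not values taken from the patent.

```python
import numpy as np

def semantic_laser_from_box(depth, box, fx, fy, cx, cy, s=1000.0, max_height_y=0.4):
    """Turn depth pixels inside one detection box into pseudo laser hit points.

    depth : HxW depth image registered to the color frame (raw depth units)
    box   : (u_min, v_min, u_max, v_max) detection rectangle in pixels
    s     : scaling factor of the depth map (e.g. 1000 for millimetre depth)
    Keeps only the nearest valid point of each image column (the simulated
    laser hit) and drops points assumed to belong to the ground.
    Returns an array of (x_k, y_k, z_k) points in the KinectV2 camera frame.
    """
    u_min, v_min, u_max, v_max = box
    hits = []
    for u in range(u_min, u_max):
        column = depth[v_min:v_max, u].astype(np.float64) / s   # eq. (1): z_k = d / s
        column[column <= 0.0] = np.inf                          # ignore invalid depth readings
        idx = int(np.argmin(column))                            # nearest point in this column
        z_k = column[idx]
        if not np.isfinite(z_k):
            continue
        v = v_min + idx
        x_k = (u - cx) * z_k / fx                               # eq. (2)
        y_k = (v - cy) * z_k / fy                               # eq. (3)
        if y_k > max_height_y:                                  # crude ground removal (assumed threshold)
            continue
        hits.append((x_k, y_k, z_k))
    return np.array(hits)
```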
The specific steps of step S4 for constructing the semantic map are as follows:
Step S401: using the transformation matrix T_rk between the camera coordinate system and the robot coordinate system and the transformation matrix T_wr between the robot coordinate system and the global coordinate system, transform the semantic laser data coordinates into the robot coordinate system and then into the global map;
where (x_ki, y_ki, z_ki) are the coordinates of a point of the semantic laser data and (x_wi, y_wi, z_wi) are the coordinates of the corresponding point in the global coordinate system;
Step S402: transform the resulting point cloud coordinates from the global coordinate system into the grid map coordinate system to obtain the object's coordinates in the grid map; connecting all the coordinate points yields the semantic map;
where r is the ratio of the length of the real map to the length of the grid map, int denotes rounding to an integer, (x_w, y_w) are coordinates in the global plane coordinate system, and (x_cell, y_cell) are coordinates in the semantic grid map.
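A minimal sketch of step S4, assuming that T_rk and T_wr are available as 4x4 homogeneous matrices and that the grid map origin coincides with the world origin (the patent does not state an origin offset):

```python
import numpy as np

def camera_points_to_grid(points_k, T_rk, T_wr, r, origin=(0.0, 0.0)):
    """Map semantic laser points from the camera frame into grid-map cells.

    points_k : Nx3 points (x_k, y_k, z_k) in the KinectV2 frame
    T_rk     : 4x4 homogeneous transform, camera frame -> robot frame
    T_wr     : 4x4 homogeneous transform, robot frame -> global (world) frame
    r        : metres represented by one grid cell (real-to-grid length ratio)
    origin   : world coordinate of grid cell (0, 0); an assumption of this sketch
    """
    n = points_k.shape[0]
    homogeneous = np.hstack([points_k, np.ones((n, 1))])      # (x, y, z, 1)
    points_w = (T_wr @ T_rk @ homogeneous.T).T[:, :3]         # camera -> robot -> world
    x_cell = ((points_w[:, 0] - origin[0]) / r).astype(int)   # x_cell = int(x_w / r)
    y_cell = ((points_w[:, 1] - origin[1]) / r).astype(int)   # y_cell = int(y_w / r)
    return np.stack([x_cell, y_cell], axis=1)
```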
The likelihood field in step S5 is obtained by the following formula:
where dist is the Euclidean distance between a point in the map and the nearest obstacle, σ is the maximum distance between a sensor hit point and an obstacle point in the map, p(z_t | x_t^k, m) is the probability of the object measured by the sensor at time t, z_t is the sensor information at time t, x_t^k is the pose of the k-th particle at time t, and m is the environment map;
The likelihood field model expresses the probability of an obstacle: the higher the probability, the more likely an obstacle is present. In the semantic map, since the semantic category of each obstacle is known, a sensor hit point searches for the nearest obstacle point of the same category in the map, which yields dist.
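The sketch below shows one way to evaluate the semantic likelihood field of step S5: a per-class distance transform of the semantic map supplies dist for every hit point, and the usual Gaussian likelihood-field form exp(-dist^2 / (2*sigma^2)) is assumed, since the exact formula is not reproduced above.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def build_distance_fields(semantic_map, class_ids):
    """Precompute, for every object class, the distance from each cell to the
    nearest obstacle cell of that same class (used to look up `dist`)."""
    fields = {}
    for c in class_ids:
        occupied = (semantic_map == c)
        # distance_transform_edt measures the distance to the nearest zero cell,
        # so invert the mask: free cells get their distance to class-c cells.
        fields[c] = distance_transform_edt(~occupied)
    return fields

def semantic_likelihood(hit_cells, hit_classes, fields, sigma):
    """Likelihood of one set of semantic hits under a Gaussian likelihood-field model.

    hit_cells   : iterable of (x_cell, y_cell) grid coordinates of the hit points
    hit_classes : semantic class of each hit point
    sigma       : spread parameter of the assumed Gaussian form
    """
    q = 1.0
    for (x_cell, y_cell), c in zip(hit_cells, hit_classes):
        # dist to the nearest same-class obstacle, in grid cells
        # (multiply by the map resolution r for metres if needed).
        dist = fields[c][y_cell, x_cell]
        q *= np.exp(-(dist ** 2) / (2.0 * sigma ** 2))
    return q
```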
The specific steps of step S6 for determining the robot's exact position are as follows:
Step S601, pose prediction: at time t the robot uses the state estimate x_{t-1} of time t-1 and the control input u_t applied from time t-1 to time t to obtain, through the motion model, the state transition distribution p(x_t | x_{t-1}, u_t); a hypothesized state at time t is then drawn from this distribution by random sampling, giving the robot's predicted pose;
Step S602, matching update: the observation model function at time t returns, for every particle, the degree of likelihood match between the laser information and camera detection information carried by that particle and the environment maps, i.e. the particle weight w_t^k, computed as follows:
where the weight w_t^k is the weight of the k-th particle at time t, represents the integral of the measurement probability over time, and satisfies the normalization condition that the weights sum to 1; p(l_t | x_t^k, m_l) and p(k_t | x_t^k, m_k) are the likelihoods of the laser and camera measurements, respectively; l_t and k_t are the laser radar data and the semantic point cloud data at time t; m_l and m_k are the laser grid map and the semantic map;
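A sketch of the matching update of step S602, assuming the laser likelihood p(l_t | x_t^k, m_l) and the semantic likelihood p(k_t | x_t^k, m_k) are supplied as callables (for example, the likelihood-field evaluation sketched earlier):

```python
def update_particle_weights(particles, weights, laser_scan, semantic_hits,
                            laser_likelihood, semantic_likelihood):
    """One matching-update step: each particle's weight is multiplied by the
    product of its laser likelihood and its semantic likelihood, after which
    the weights are renormalised to sum to 1."""
    new_weights = []
    for particle, w in zip(particles, weights):
        p_laser = laser_likelihood(laser_scan, particle)           # p(l_t | x_t^k, m_l)
        p_semantic = semantic_likelihood(semantic_hits, particle)  # p(k_t | x_t^k, m_k)
        new_weights.append(w * p_laser * p_semantic)
    total = sum(new_weights)
    return [w / total for w in new_weights]
```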
Step S603, resampling: once the robot has moved beyond a certain threshold, new particles are replicated according to the individual particle weights using the roulette-wheel method and added to the particle set, and the particle weights are normalized; this guarantees that the particles produced by resampling move towards the poses of high-weight particles, so that the particle set approximates the posterior probability distribution and approaches the robot's true pose;
After the robot has moved sufficiently and all particles have converged, the centre of gravity of all particles, i.e. the robot's true position, is obtained from the weighted, normalized sum of the particle poses; the centre-of-gravity formula is as follows:
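The sketch below covers the resampling and pose-estimation part of step S603: roulette-wheel (multinomial) resampling followed by the weighted centre of gravity of the particle set. Averaging the heading on the unit circle is an implementation choice of this sketch rather than something stated in the patent.

```python
import numpy as np

def roulette_resample(particles, weights, rng=None):
    """Roulette-wheel (multinomial) resampling: particles are drawn with
    probability proportional to their weights, then the weights are reset.
    `weights` must be non-negative and sum to 1."""
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    resampled = [particles[i] for i in idx]
    uniform = np.full(len(particles), 1.0 / len(particles))
    return resampled, uniform

def weighted_pose_estimate(particles, weights):
    """Centre of gravity of the particle set: the weighted mean pose (x, y, theta).
    The heading is averaged on the unit circle to avoid wrap-around errors."""
    poses = np.asarray(particles, dtype=float)   # each row: (x, y, theta)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    x = np.dot(w, poses[:, 0])
    y = np.dot(w, poses[:, 1])
    theta = np.arctan2(np.dot(w, np.sin(poses[:, 2])),
                       np.dot(w, np.cos(poses[:, 2])))
    return x, y, theta
```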
The present invention has the following advantages:
The particle filter relocalization method based on semantic likelihood estimation of the present invention fully combines obstacle grid information with object semantic information for relocalization, avoiding the localization failures that occur in visually similar environments when only laser radar information is used. The method overcomes the shortcoming of the original particle filter approach, which matches only against environment structure information; it can effectively solve the problem of erroneous global relocalization of the robot and also improves the convergence speed of relocalization.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is the grid map;
FIG. 3 is the semantic map;
FIG. 4 is the likelihood matching diagram between semantic obstacles and the map;
FIG. 5 is a flow chart of the particle filter localization of the present invention.
DETAILED DESCRIPTION
The technical solution of the present invention is further described below through an embodiment in conjunction with the accompanying drawings. A particle filter relocalization method based on semantic likelihood estimation comprises the following steps:
Step S1: construct a grid map; the environment must contain distinctive obstacle features that can be used to recognize obstacle information;
Step S2: recognize objects with a convolutional-neural-network object detection method and obtain their semantic information;
Step S3: simulate a laser radar with the visual sensor to obtain semantic laser data;
Step S4: construct a semantic map from the positional relationship between the robot and the visual sensor, the positional relationship between the global coordinate system and the robot, and the semantic laser data of the objects;
Step S5: when the localization drifts, the robot senses the environment with the laser radar and the visual sensor, obtains the obstacle likelihood field from the laser grid map, and obtains the object semantic likelihood field from the semantic map;
Step S6: predict the robot's initial pose, change the particle weights through simultaneous likelihood matching of the laser data and the object semantics, resample, and iterate; once the filter has converged, the estimation of the robot state is complete and the robot's exact position is determined.
The specific steps of step S1 are as follows:
Step S101: obstacle selection: choose objects that are easy to recognize and do not move as the obstacles to be identified;
Step S102: build the grid map from the laser radar data: the robot moves one loop through the environment, the laser information of the environment and the robot odometry are obtained from the laser radar, and the grid map is built with the gmapping algorithm, as shown in FIG. 2.
The object detection method of step S2 uses the SSD method to detect objects and obtain their semantic information; the sensor used for visual recognition is a KinectV2.
The specific steps of step S3 for obtaining the semantic laser data of an object are as follows:
Step S301: register the KinectV2 color camera with the depth camera, and calibrate the relative pose of the laser radar with respect to the color camera;
Step S302: after calibration and registration, take the point cloud belonging to the object's semantic information, remove the ground points and keep only the nearest point of each column inside the detection box; these points simulate laser radar hit points and constitute the object's semantic laser data;
Step S303: the coordinates of the semantic laser data are obtained as:
z_k = d / s (1)
x_k = (u - c_x) * z_k / f_x (2)
y_k = (v - c_y) * z_k / f_y (3)
where d is the depth value, in the corresponding depth map, of the pixel with coordinates (u, v) in one frame of the color camera, s is the scaling factor of the depth map, f_x and f_y are the focal lengths of the RGB camera along the two image axes, c_x and c_y are the coordinates of its optical centre, and x_k, y_k, z_k are the three-dimensional coordinates in the KinectV2 coordinate system.
The specific steps of step S4 for constructing the semantic map are as follows:
Step S401: using the transformation matrix T_rk between the camera coordinate system and the robot coordinate system and the transformation matrix T_wr between the robot coordinate system and the global coordinate system, transform the semantic laser data coordinates into the robot coordinate system and then into the global map;
where (x_ki, y_ki, z_ki) are the coordinates of a point of the semantic laser data and (x_wi, y_wi, z_wi) are the coordinates of the corresponding point in the global coordinate system;
Step S402: transform the resulting point cloud coordinates from the global coordinate system into the grid map coordinate system to obtain the object's coordinates in the grid map; connecting all the coordinate points yields the semantic map, as shown in FIG. 3, which contains several different obstacle categories such as A - chair, B - door, C - cabinet and D - trash can;
where r is the ratio of the length of the real map to the length of the grid map, int denotes rounding to an integer, (x_w, y_w) are coordinates in the global plane coordinate system, and (x_cell, y_cell) are coordinates in the semantic grid map.
The likelihood field in step S5 is obtained by the following formula:
where dist is the Euclidean distance between a point in the map and the nearest obstacle, σ is the maximum distance between a sensor hit point and an obstacle point in the map, p(z_t | x_t^k, m) is the probability of the object measured by the sensor at time t, z_t is the sensor information at time t, x_t^k is the pose of the k-th particle at time t, and m is the environment map;
The likelihood field model expresses the probability of an obstacle: the higher the probability, the more likely an obstacle is present. FIG. 4 shows the likelihood matching between the visually measured data and the original map data: line 1 is the semantic data measured by the sensor, line 2 is the map data, and the figure contains four different semantic categories a, b, c and d; each point on line 1 is connected to the nearest point of the same category on line 2. That is, in the semantic map, with the semantic category information of the obstacles available, a sensor hit point searches for the nearest obstacle point of the same category in the map, which yields dist.
The specific steps of step S6 for determining the robot's exact position are as follows:
Step S601, pose prediction: at time t the robot uses the state estimate x_{t-1} of time t-1 and the control input u_t applied from time t-1 to time t to obtain, through the motion model, the state transition distribution p(x_t | x_{t-1}, u_t); a hypothesized state at time t is then drawn from this distribution by random sampling, giving the robot's predicted pose;
Step S602, matching update: the observation model function at time t returns, for every particle, the degree of likelihood match between the laser information and camera detection information carried by that particle and the environment maps, i.e. the particle weight w_t^k, computed as follows:
where the weight w_t^k is the weight of the k-th particle at time t, represents the integral of the measurement probability over time, and satisfies the normalization condition that the weights sum to 1; p(l_t | x_t^k, m_l) and p(k_t | x_t^k, m_k) are the likelihoods of the laser and camera measurements, respectively; l_t and k_t are the laser radar data and the semantic point cloud data at time t; m_l and m_k are the laser grid map and the semantic map.
Step S603, resampling: once the robot has moved beyond a certain threshold, new particles are replicated according to the individual particle weights using the roulette-wheel method and added to the particle set, and the particle weights are normalized; this guarantees that the particles produced by resampling move towards the poses of high-weight particles, so that the particle set approximates the posterior probability distribution and approaches the robot's true pose;
After the robot has moved sufficiently and all particles have converged, the centre of gravity of all particles, i.e. the robot's true position, is obtained from the weighted, normalized sum of the particle poses; the centre-of-gravity formula is as follows:
The protection scope of the present invention is not limited to the above embodiments. Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its scope and spirit. If such changes and modifications fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to cover them as well.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010348294.2A CN111539994B (en) | 2020-04-28 | 2020-04-28 | Particle filter repositioning method based on semantic likelihood estimation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111539994A CN111539994A (en) | 2020-08-14 |
CN111539994B true CN111539994B (en) | 2023-04-18 |
Family
ID=71975812
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010348294.2A Active CN111539994B (en) | 2020-04-28 | 2020-04-28 | Particle filter repositioning method based on semantic likelihood estimation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111539994B (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114200481B (en) * | 2020-08-28 | 2025-05-16 | 华为技术有限公司 | Positioning method, positioning system and vehicle |
CN112433528A (en) * | 2020-11-27 | 2021-03-02 | 深圳优地科技有限公司 | Method and device for robot to take advantage of elevator, robot and storage medium |
CN112612862B (en) * | 2020-12-24 | 2022-06-24 | 哈尔滨工业大学芜湖机器人产业技术研究院 | A grid map localization method based on point cloud registration |
CN112836698A (en) * | 2020-12-31 | 2021-05-25 | 北京纵目安驰智能科技有限公司 | A positioning method, device, storage medium and electronic device |
CN112833892B (en) * | 2020-12-31 | 2022-12-16 | 杭州自适应科技有限公司 | Semantic mapping method based on track alignment |
CN112767477A (en) * | 2020-12-31 | 2021-05-07 | 北京纵目安驰智能科技有限公司 | Positioning method, positioning device, storage medium and electronic equipment |
CN113238251B (en) * | 2021-04-02 | 2023-09-29 | 华南理工大学 | Target level semantic positioning method based on vehicle-mounted laser radar |
CN113465620A (en) * | 2021-06-02 | 2021-10-01 | 上海追势科技有限公司 | Parking lot particle filter positioning method based on semantic information |
CN113483747B (en) * | 2021-06-25 | 2023-03-24 | 武汉科技大学 | Improved AMCL (advanced metering library) positioning method based on semantic map with corner information and robot |
CN113657205B (en) * | 2021-07-29 | 2024-04-19 | 北京中科慧眼科技有限公司 | Obstacle detection method and system based on motion grid and intelligent terminal |
CN113551677B (en) * | 2021-08-16 | 2024-06-14 | 河南牧原智能科技有限公司 | Method for repositioning robot and related product |
CN116399328B (en) * | 2023-04-17 | 2024-06-21 | 石家庄铁道大学 | A map construction and positioning method for indoor mobile robots based on BIM |
CN116300480B (en) * | 2023-05-23 | 2023-08-01 | 西南科技大学 | Radioactive Source Search Method Based on Improved Particle Filter and Biologically Inspired Neural Network |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107908185A (en) * | 2017-10-14 | 2018-04-13 | 北醒(北京)光子科技有限公司 | A kind of robot autonomous global method for relocating and robot |
CN108897836A (en) * | 2018-06-25 | 2018-11-27 | 广州视源电子科技股份有限公司 | Method and device for robot to map based on semantics |
CN109724603A (en) * | 2019-01-08 | 2019-05-07 | 北京航空航天大学 | An Indoor Robot Navigation Method Based on Environmental Feature Detection |
CN111061287A (en) * | 2019-12-31 | 2020-04-24 | 芜湖哈特机器人产业技术研究院有限公司 | A mobile robot relocation method based on particle self-convergence |
CN111061276A (en) * | 2019-12-31 | 2020-04-24 | 芜湖哈特机器人产业技术研究院有限公司 | A mobile robot relocation method based on dynamic region division |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10452923B2 (en) * | 2017-11-28 | 2019-10-22 | Visual Semantics, Inc. | Method and apparatus for integration of detected object identifiers and semantic scene graph networks for captured visual scene behavior estimation |
Also Published As
Publication number | Publication date |
---|---|
CN111539994A (en) | 2020-08-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111539994B (en) | Particle filter repositioning method based on semantic likelihood estimation | |
CN108802785B (en) | Vehicle self-positioning method based on high-precision vector map and monocular vision sensor | |
US20230236280A1 (en) | Method and system for positioning indoor autonomous mobile robot | |
Kümmerle et al. | Large scale graph-based SLAM using aerial images as prior information | |
CN106651990B (en) | Indoor map construction method and indoor positioning method based on indoor map | |
CN111486855A (en) | Indoor two-dimensional semantic grid map construction method with object navigation points | |
WO2020023982A9 (en) | Method and apparatus for combining data to construct a floor plan | |
CN110132284B (en) | A Global Positioning Method Based on Depth Information | |
CN110033489A (en) | A kind of appraisal procedure, device and the equipment of vehicle location accuracy | |
CN112444246B (en) | Laser fusion positioning method in high-precision digital twin scene | |
CN113108773A (en) | Grid map construction method integrating laser and visual sensor | |
CN108332752B (en) | Indoor robot positioning method and device | |
WO2019136613A1 (en) | Indoor locating method and device for robot | |
CN108537214B (en) | An automatic construction method of indoor semantic map | |
CN111964681B (en) | A real-time positioning system for an inspection robot | |
CN117671022B (en) | Mobile robot vision positioning system and method in indoor weak texture environment | |
CN111862200A (en) | A method of unmanned aerial vehicle positioning in coal shed | |
CN115200601A (en) | Navigation method, device, wheeled robot and storage medium | |
CN115060268A (en) | A computer room fusion positioning method, system, equipment and storage medium | |
CN113902828A (en) | Construction method of indoor two-dimensional semantic map with corner as key feature | |
JP2023503750A (en) | ROBOT POSITIONING METHOD AND DEVICE, DEVICE, STORAGE MEDIUM | |
CN118470077A (en) | A charging robot autonomous positioning method and system based on multi-sensor fusion of binocular camera and laser radar | |
Zingaretti et al. | Appearance based robotics | |
CN110992291A (en) | Distance measuring method, system and storage medium based on trinocular vision | |
CN116766191A (en) | Robot motion control method, device, computing device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||