
CN111539994B - Particle filter repositioning method based on semantic likelihood estimation - Google Patents


Info

Publication number
CN111539994B
Authority
CN
China
Prior art keywords
semantic
map
robot
laser
information
Prior art date
Legal status
Active
Application number
CN202010348294.2A
Other languages
Chinese (zh)
Other versions
CN111539994A (en)
Inventor
蒋林
刘奇
向超
方东君
朱建阳
雷斌
Current Assignee
Wuhan University of Science and Technology WHUST
Original Assignee
Wuhan University of Science and Technology WHUST
Priority date
Filing date
Publication date
Application filed by Wuhan University of Science and Technology WHUST
Priority to CN202010348294.2A
Publication of CN111539994A
Application granted
Publication of CN111539994B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/77 Determining position or orientation of objects or cameras using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20076 Probabilistic image processing
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a particle filter repositioning method based on semantic likelihood estimation, which comprises the following steps: step S1: constructing a grid map; step S2: identifying an object by a target detection method of a convolutional neural network, and obtaining semantic information of the object; step S3: simulating a laser radar by using a visual sensor to obtain semantic laser data; step S4: constructing a semantic map; step S5: obtaining an obstacle likelihood domain through a laser grid map, and obtaining an object semantic likelihood domain through a semantic map; step S6: determining the exact position of the robot. The method combines obstacle grid information with object semantic information for repositioning, avoiding the positioning failures that occur in structurally similar environments when laser radar information is used alone. It overcomes the limitation of the original particle filter method, which matches only environmental structure information, effectively resolves global repositioning errors of the robot, and improves the convergence speed of repositioning.

Description

A particle filter relocalization method based on semantic likelihood estimation

Technical Field

The invention belongs to the field of robot navigation relocalization, and in particular relates to a particle filter relocalization method based on semantic likelihood estimation.

Background Art

As the demands placed on robots keep rising, robot navigation must become more accurate, which makes the relocalization problem particularly important.

Relocalization is one of the most important components of SLAM (simultaneous localization and mapping). It is defined as follows: given a known map of the environment, determine the robot's own pose from its current sensor observations. Only after localization succeeds can mapping and navigation proceed.

Researchers at home and abroad have combined multiple sources of information to help robots relocalize. Some place QR codes in the environment to add extra constraints on particle sampling. Others deploy multi-camera surveillance and extract ORB features from the captured images to detect the robot and determine which sub-map it occupies, which improves the time efficiency of relocalization after kidnapping. Still others correct the robot's pose by matching landmark information in the environment against the semantic laser obtained at the robot's actual pose, improving localization accuracy and precision.

Whether artificial markers such as QR codes are laid out or video surveillance is used to assist localization, these methods are neither direct nor autonomous enough. Pose correction and relocalization based on a single object landmark depends strongly on the detection accuracy of that landmark, and it requires the landmark to be as flat as possible so that the laser point at the center of the laser cluster can be taken as the object's position. Consequently, target recognition carries a large error during relocalization and the relocalization success rate is low.

Given the above problems in the prior art, the present invention proposes a particle filter relocalization method based on semantic likelihood estimation. The method adds semantic matching of objects, overcomes the shortcoming of matching only environmental structure information, effectively resolves global relocalization errors of the robot, and improves the convergence speed of relocalization.

Summary of the Invention

The technical solution adopted by the present invention to solve the problems in the prior art is as follows:

A particle filter relocalization method based on semantic likelihood estimation, characterized in that it comprises the following steps:

Step S1: construct a grid map; the environment must contain distinct obstacle features that can be used to recognize obstacle information.

Step S2: identify objects with a convolutional-neural-network object detection method and obtain the semantic information of each object.

Step S3: use a visual sensor to simulate a laser radar and obtain semantic laser data.

Step S4: construct a semantic map from the positional relationship between the robot and the visual sensor, the positional relationship between the global coordinate system and the robot, and the semantic laser data of the objects.

Step S5: when the localization drifts, the robot perceives the environment with the laser radar and the visual sensor, obtains the obstacle likelihood domain from the laser grid map and the object semantic likelihood domain from the semantic map.

Step S6: predict the initial pose of the robot, change the particle weights through simultaneous likelihood matching of the laser data and the object semantics, resample, and iterate; once the filter has converged, the estimate of the robot state is complete and the exact position of the robot is determined.

The specific sub-steps of step S1 are as follows:

Step S101: obstacle selection; choose objects that are easy to recognize and do not move as the obstacles to be identified.

Step S102: build the grid map from laser radar data; the robot travels one loop through the environment, the laser radar provides the laser scan of the environment together with the robot odometry, and the grid map is built with the gmapping algorithm.

The object detection method of step S2 uses the SSD detector to detect objects and obtain their semantic information; the sensor used for visual recognition is a Kinect V2.
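As a concrete illustration of step S2, the sketch below runs an off-the-shelf SSD detector on one RGB frame and returns class labels with boxes. The pretrained torchvision SSD300-VGG16 model, its COCO label space and the 0.5 confidence threshold are assumptions made for this example; the patent only specifies that the SSD method and a Kinect V2 are used.

```python
# Illustrative sketch of step S2: object detection with an SSD network to obtain semantic
# labels. Model choice, label space and threshold are assumptions, not fixed by the patent.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.ssd300_vgg16(weights="DEFAULT").eval()

def detect_semantics(rgb_image, score_thresh=0.5):
    """Return (label_id, score, box) for every confident detection in one RGB frame."""
    with torch.no_grad():
        pred = model([to_tensor(rgb_image)])[0]
    keep = pred["scores"] > score_thresh
    return list(zip(pred["labels"][keep].tolist(),
                    pred["scores"][keep].tolist(),
                    pred["boxes"][keep].tolist()))
```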

The specific sub-steps for obtaining the object semantic laser data in step S3 are as follows:

Step S301: register the Kinect V2 color camera with the depth camera, and calibrate the relative pose between the laser radar and the color camera.

Step S302: after calibration and registration, the point cloud inside each detection box is reduced by removing the ground and keeping only the nearest point in every column, so that it imitates the hit points of a laser radar; this yields the semantic laser data of the object.

Step S303: the coordinates of the obtained semantic laser data are:

z_k = d / s    (1)

x_k = (u - c_x) · z_k / f_x    (2)

y_k = (v - c_y) · z_k / f_y    (3)

where d is the depth value, in the corresponding depth map, of the pixel at coordinates (u, v) in a color-camera frame; s is the scaling factor of the depth map; f_x, f_y and c_x, c_y are the focal lengths and optical-center coordinates of the RGB camera on the two image axes; and x_k, y_k, z_k are the three-dimensional coordinates in the Kinect V2 coordinate system.
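A minimal sketch of equations (1)-(3) and of the column-wise reduction of step S302 follows. The intrinsic parameters FX, FY, CX, CY and the depth scale S are placeholder values for illustration, not calibrated figures from the patent.

```python
# Sketch of equations (1)-(3) plus the column-wise reduction of step S302.
import numpy as np

FX, FY = 1060.0, 1060.0      # focal lengths in pixels (assumed)
CX, CY = 960.0, 540.0        # optical center in pixels (assumed)
S = 1000.0                   # depth-map scaling factor: raw depth in millimeters (assumed)

def pixel_to_camera_point(u, v, d):
    """Equations (1)-(3): pixel (u, v) with raw depth d -> (x_k, y_k, z_k) in the camera frame."""
    z_k = d / S
    x_k = (u - CX) * z_k / FX
    y_k = (v - CY) * z_k / FY
    return np.array([x_k, y_k, z_k])

def box_to_semantic_laser(depth_image, box):
    """Step S302: keep only the nearest valid depth in each column of a detection box,
    imitating the points a laser radar would hit on the detected object."""
    u0, v0, u1, v1 = [int(round(c)) for c in box]
    hits = []
    for u in range(u0, u1):
        column = depth_image[v0:v1, u].astype(float)
        valid = column > 0                      # drop missing depth readings
        if valid.any():
            i = int(np.argmin(np.where(valid, column, np.inf)))
            hits.append(pixel_to_camera_point(u, v0 + i, column[i]))
    return np.array(hits)
```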

The specific sub-steps for constructing the semantic map in step S4 are as follows:

Step S401: using the transformation matrix T_rk from the camera coordinate system to the robot coordinate system and the transformation matrix T_wr from the robot coordinate system to the global coordinate system, transform the semantic laser coordinates into the robot coordinate system and then into the global map:

[x_wi, y_wi, z_wi, 1]^T = T_wr · T_rk · [x_ki, y_ki, z_ki, 1]^T    (4)

where (x_ki, y_ki, z_ki) are the coordinates of a point of the semantic laser data and (x_wi, y_wi, z_wi) are the coordinates of the corresponding point in the global coordinate system.

Step S402: transform the resulting point-cloud coordinates from the global coordinate system into the grid-map coordinate system to obtain the object's coordinates in the grid map; connecting all such coordinate points yields the semantic map:

x_cell = int(x_w / r),  y_cell = int(y_w / r)    (5)

where r is the ratio of the length of the real map to the length of the grid map (the map resolution), int denotes rounding to an integer, (x_w, y_w) are the coordinates in the global plane coordinate system, and (x_cell, y_cell) are the coordinates in the semantic grid map.
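The coordinate chain of step S4 can be written compactly as below. Treating T_rk and T_wr as 4x4 homogeneous transforms and placing the grid origin at the world origin are simplifying assumptions of this sketch, not details taken from the patent.

```python
# Sketch of step S4 / equations (4)-(5): a semantic laser point is carried from the camera
# frame into the world frame and then into a grid cell.
import numpy as np

def camera_point_to_grid_cell(p_k, T_rk, T_wr, r):
    """p_k = (x_k, y_k, z_k) in the camera frame -> (x_cell, y_cell) in the grid map."""
    p_h = np.append(p_k, 1.0)            # homogeneous coordinates
    p_w = T_wr @ T_rk @ p_h              # equation (4): camera -> robot -> world
    x_cell = int(p_w[0] / r)             # equation (5): world meters -> grid cells
    y_cell = int(p_w[1] / r)
    return x_cell, y_cell
```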

The formula for obtaining the likelihood domain in step S5 is as follows:

p(z_t | x_t^[k], m) = exp( -dist² / (2σ²) )    (6)

where dist is the Euclidean distance between a point in the map and the nearest obstacle, σ is the maximum distance between the sensor hit point and the obstacle point in the map, p(z_t | x_t^[k], m) is the probability of the object measured by the sensor at time t, z_t is the sensor information at time t, x_t^[k] is the pose of the k-th particle at time t, and m is the environment map.

The likelihood domain model represents the probability of an obstacle: the higher the probability, the more likely an obstacle is present. In the semantic map, because the semantic class of every obstacle is known, a sensor hit point is matched to the nearest obstacle point of the same class in the map, which gives dist:

dist = min over the obstacle points (x', y') of the same class in m of √((x_z - x')² + (y_z - y')²)    (7)

where (x_z, y_z) is the position of the sensor hit point in the map.
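One way to realize the class-aware likelihood domain of step S5 is to precompute, for every semantic class, a distance transform of the semantic grid map and convert it into a Gaussian likelihood, as sketched below. Using scipy's Euclidean distance transform is an implementation choice of this sketch, not something prescribed by the patent.

```python
# Sketch of the class-aware likelihood domain: per class, dist is the distance to the
# nearest obstacle of the SAME class, and the likelihood follows exp(-dist^2 / (2*sigma^2)).
import numpy as np
from scipy.ndimage import distance_transform_edt

def build_semantic_likelihood_domains(semantic_grid, classes, resolution, sigma):
    """semantic_grid: 2-D array of class ids (0 = free space). One likelihood map per class."""
    domains = {}
    for c in classes:
        not_c = semantic_grid != c                         # EDT measures distance to the zeros,
        dist = distance_transform_edt(not_c) * resolution  # i.e. to cells occupied by class c
        domains[c] = np.exp(-dist ** 2 / (2.0 * sigma ** 2))
    return domains

def measurement_likelihood(hits_with_class, domains):
    """Product of per-hit likelihoods for one particle's projected semantic laser hits."""
    q = 1.0
    for (x_cell, y_cell), c in hits_with_class:
        q *= domains[c][y_cell, x_cell]
    return q
```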

The specific sub-steps of step S6 for determining the exact position of the robot are as follows:

Step S601, pose prediction: at time t, the robot uses the state estimate x_{t-1} of time t-1 and the control input u_t applied from time t-1 to time t; the motion model gives the state-transition distribution p(x_t | x_{t-1}, u_t), and a hypothesized state x_t^[k] for time t is drawn at random from this distribution, which predicts the pose of the robot.

Step S602, matching update: the observation model at time t returns, for every particle, the degree of likelihood matching between the laser information and the camera detection information carried by that particle and the environment map, i.e. the particle weight ω_t^[k], with the formula:

ω_t^[k] = ω_{t-1}^[k] · p(l_t | x_t^[k], m_l) · p(k_t | x_t^[k], m_k)    (8)

where ω_t^[k] is the weight of the k-th particle at time t, representing the measurement probability accumulated over time and satisfying Σ_k ω_t^[k] = 1; p(l_t | x_t^[k], m_l) and p(k_t | x_t^[k], m_k) are the likelihoods of the laser and camera measurements, respectively; l_t and k_t are the laser radar data and the semantic point-cloud data at time t; and m_l and m_k are the laser grid map and the semantic map.

Step S603, resampling: once the robot has moved beyond a certain threshold, new particles x_t^[k] are copied in proportion to the particle weights ω_t^[k] by the roulette-wheel method and added to the particle set, and the particle weights are normalized. This ensures that the particles produced by resampling move towards the poses of high-weight particles, so that the particle swarm approximates the posterior probability distribution and approaches the true pose of the robot.

After the robot has moved enough for all particles to converge, the center of gravity of the particle set, i.e. the true position of the robot, is obtained by summing and normalizing the particle weights; the center-of-gravity formula is:

x̄_t = Σ_k ω_t^[k] · x_t^[k]    (9)
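The prediction, weighting, resampling and centroid steps of S601-S603 fit into a few lines, as in the hedged sketch below. Passing the motion and measurement models in as callables is an illustrative design choice of this sketch rather than the patent's exact implementation.

```python
# Sketch of step S6: prediction from the motion model, weight update with the joint
# laser/semantic likelihood of equation (8), roulette-wheel resampling, and the weighted
# center of gravity of equation (9).
import numpy as np

def particle_filter_step(particles, weights, u_t, sample_motion, laser_lik, semantic_lik):
    # S601 prediction: draw x_t^[k] ~ p(x_t | x_{t-1}^[k], u_t)
    particles = np.array([sample_motion(x, u_t) for x in particles])
    # S602 matching update: w_t^[k] = w_{t-1}^[k] * p(l_t | x, m_l) * p(k_t | x, m_k)
    weights = weights * np.array([laser_lik(x) * semantic_lik(x) for x in particles])
    weights = weights / weights.sum()
    # S603 resampling by the roulette-wheel method, then reset to uniform weights
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

def estimate_pose(particles, weights):
    """Equation (9): weighted center of gravity of the particle set (a circular mean would
    be needed for the heading angle; omitted in this simple sketch)."""
    return np.average(particles, axis=0, weights=weights)
```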

The present invention has the following advantages:

The particle filter relocalization method based on semantic likelihood estimation of the present invention fully combines obstacle grid information with object semantic information for relocalization, avoiding the positioning failures that occur in similar environments when laser radar information is used alone. The method overcomes the shortcoming of the original particle filter method, which matches only environmental structure information, can effectively resolve global relocalization errors of the robot, and at the same time improves the convergence speed of relocalization.

Brief Description of the Drawings

Figure 1 is a flow chart of the method of the present invention;

Figure 2 is the grid map;

Figure 3 is the semantic map;

Figure 4 is the likelihood-matching diagram between semantic obstacles and the map;

Figure 5 is a flow chart of the particle filter localization of the present invention.

Detailed Description

The technical solution of the present invention is further described below through an embodiment and with reference to the accompanying drawings. A particle filter relocalization method based on semantic likelihood estimation comprises the following steps:

Step S1: construct a grid map; the environment must contain distinct obstacle features that can be used to recognize obstacle information.

Step S2: identify objects with a convolutional-neural-network object detection method and obtain the semantic information of each object.

Step S3: use a visual sensor to simulate a laser radar and obtain semantic laser data.

Step S4: construct a semantic map from the positional relationship between the robot and the visual sensor, the positional relationship between the global coordinate system and the robot, and the semantic laser data of the objects.

Step S5: when the localization drifts, the robot perceives the environment with the laser radar and the visual sensor, obtains the obstacle likelihood domain from the laser grid map and the object semantic likelihood domain from the semantic map.

Step S6: predict the initial pose of the robot, change the particle weights through simultaneous likelihood matching of the laser data and the object semantics, resample, and iterate; once the filter has converged, the estimate of the robot state is complete and the exact position of the robot is determined.

The specific sub-steps of step S1 are as follows:

Step S101: obstacle selection; choose objects that are easy to recognize and do not move as the obstacles to be identified.

Step S102: build the grid map from laser radar data; the robot travels one loop through the environment, the laser radar provides the laser scan of the environment together with the robot odometry, and the grid map is built with the gmapping algorithm, as shown in Figure 2.

The object detection method of step S2 uses the SSD detector to detect objects and obtain their semantic information; the sensor used for visual recognition is a Kinect V2.

The specific sub-steps for obtaining the object semantic laser data in step S3 are as follows:

Step S301: register the Kinect V2 color camera with the depth camera, and calibrate the relative pose between the laser radar and the color camera.

Step S302: after calibration and registration, the point cloud inside each detection box is reduced by removing the ground and keeping only the nearest point in every column, so that it imitates the hit points of a laser radar; this yields the semantic laser data of the object.

Step S303: the coordinates of the obtained semantic laser data are:

z_k = d / s    (1)

x_k = (u - c_x) · z_k / f_x    (2)

y_k = (v - c_y) · z_k / f_y    (3)

where d is the depth value, in the corresponding depth map, of the pixel at coordinates (u, v) in a color-camera frame; s is the scaling factor of the depth map; f_x, f_y and c_x, c_y are the focal lengths and optical-center coordinates of the RGB camera on the two image axes; and x_k, y_k, z_k are the three-dimensional coordinates in the Kinect V2 coordinate system.

The specific sub-steps for constructing the semantic map in step S4 are as follows:

Step S401: using the transformation matrix T_rk from the camera coordinate system to the robot coordinate system and the transformation matrix T_wr from the robot coordinate system to the global coordinate system, transform the semantic laser coordinates into the robot coordinate system and then into the global map:

[x_wi, y_wi, z_wi, 1]^T = T_wr · T_rk · [x_ki, y_ki, z_ki, 1]^T    (4)

where (x_ki, y_ki, z_ki) are the coordinates of a point of the semantic laser data and (x_wi, y_wi, z_wi) are the coordinates of the corresponding point in the global coordinate system.

Step S402: transform the resulting point-cloud coordinates from the global coordinate system into the grid-map coordinate system to obtain the object's coordinates in the grid map; connecting all such coordinate points yields the semantic map, as shown in Figure 3, which contains several different obstacle classes such as A-chair, B-door, C-cabinet and D-trash can:

x_cell = int(x_w / r),  y_cell = int(y_w / r)    (5)

where r is the ratio of the length of the real map to the length of the grid map (the map resolution), int denotes rounding to an integer, (x_w, y_w) are the coordinates in the global plane coordinate system, and (x_cell, y_cell) are the coordinates in the semantic grid map.

The formula for obtaining the likelihood domain in step S5 is as follows:

p(z_t | x_t^[k], m) = exp( -dist² / (2σ²) )    (6)

where dist is the Euclidean distance between a point in the map and the nearest obstacle, σ is the maximum distance between the sensor hit point and the obstacle point in the map, p(z_t | x_t^[k], m) is the probability of the object measured by the sensor at time t, z_t is the sensor information at time t, x_t^[k] is the pose of the k-th particle at time t, and m is the environment map.

The likelihood domain model represents the probability of an obstacle: the higher the probability, the more likely an obstacle is present. Figure 4 shows the likelihood matching between the visually measured data and the original map data. Line 1 in the figure is the semantic data measured by the sensor and line 2 is the map data; the figure contains four different semantic classes a, b, c and d, and each point on line 1 is connected to the nearest point of the same class on line 2. That is, in the semantic map, because the semantic class of every obstacle is known, a sensor hit point is matched to the nearest obstacle point of the same class in the map, which gives dist:

dist = min over the obstacle points (x', y') of the same class in m of √((x_z - x')² + (y_z - y')²)    (7)

where (x_z, y_z) is the position of the sensor hit point in the map.

The specific sub-steps of step S6 for determining the exact position of the robot are as follows:

Step S601, pose prediction: at time t, the robot uses the state estimate x_{t-1} of time t-1 and the control input u_t applied from time t-1 to time t; the motion model gives the state-transition distribution p(x_t | x_{t-1}, u_t), and a hypothesized state x_t^[k] for time t is drawn at random from this distribution, which predicts the pose of the robot.

Step S602, matching update: the observation model at time t returns, for every particle, the degree of likelihood matching between the laser information and the camera detection information carried by that particle and the environment map, i.e. the particle weight ω_t^[k], with the formula:

ω_t^[k] = ω_{t-1}^[k] · p(l_t | x_t^[k], m_l) · p(k_t | x_t^[k], m_k)    (8)

where ω_t^[k] is the weight of the k-th particle at time t, representing the measurement probability accumulated over time and satisfying Σ_k ω_t^[k] = 1; p(l_t | x_t^[k], m_l) and p(k_t | x_t^[k], m_k) are the likelihoods of the laser and camera measurements, respectively; l_t and k_t are the laser radar data and the semantic point-cloud data at time t; and m_l and m_k are the laser grid map and the semantic map.

Step S603, resampling: once the robot has moved beyond a certain threshold, new particles x_t^[k] are copied in proportion to the particle weights ω_t^[k] by the roulette-wheel method and added to the particle set, and the particle weights are normalized. This ensures that the particles produced by resampling move towards the poses of high-weight particles, so that the particle swarm approximates the posterior probability distribution and approaches the true pose of the robot.

After the robot has moved enough for all particles to converge, the center of gravity of the particle set, i.e. the true position of the robot, is obtained by summing and normalizing the particle weights; the center-of-gravity formula is:

x̄_t = Σ_k ω_t^[k] · x_t^[k]    (9)

The protection scope of the present invention is not limited to the above embodiment. Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its scope and spirit. If such changes and modifications fall within the scope of the claims of the present invention and their equivalents, the present invention is intended to cover them as well.

Claims (5)

1. A particle filter repositioning method based on semantic likelihood estimation is characterized by comprising the following steps:
step S1: constructing a grid map, wherein obvious roadblock characteristics in the environment are used for identifying obstacle information;
step S2: identifying an object by a target detection method of a convolutional neural network, and obtaining semantic information of the object;
step S3: simulating a laser radar by using a visual sensor to obtain semantic laser data;
step S4: constructing a semantic map according to the position relationship between the robot and the visual sensor, the position relationship between the global coordinate system and the robot, and semantic laser data of the object;
step S5: when deviation occurs in positioning, the robot recognizes the environment by using a laser radar and a visual sensor, obtains an obstacle likelihood domain through a laser grid map, and obtains an object semantic likelihood domain through a semantic map;
step S6: predicting the initial pose of the robot, changing the weight of particles through simultaneous likelihood matching of the semantics of the laser and the object, resampling, gradually iterating, finishing estimation of the state of the robot after the convergence of a filter, and determining the accurate position of the robot;
the step S3 of acquiring object semantic laser data specifically includes the following steps:
step S301: registering the KinectV2 color camera and the depth camera, and calibrating the relative position relation between the laser radar and the color camera;
step S302: after parameter calibration and registration, point cloud data in object semantic information is simulated into laser radar hit points by removing the ground and only keeping the closest point of each row in a detection frame, so as to obtain semantic laser data of the object;
step S303: the obtained semantic laser data coordinates are as follows:
z_k = d / s    (1)
x_k = (u - c_x) * z_k / f_x    (2)
y_k = (v - c_y) * z_k / f_y    (3)
where d is the depth value, in the corresponding depth map, of the pixel at coordinates (u, v) in a frame of the color camera, s is the scaling factor of the depth map, c_x, c_y, f_x, f_y are respectively the optical-center coordinates and focal lengths of the RGB camera on the two image axes, and x_k, y_k, z_k are the three-dimensional coordinates in the Kinect V2 coordinate system;
the formula for obtaining the likelihood domain in step S5 is as follows:
p(z_t | x_t^[k], m) = exp( -dist² / (2σ²) )    (6)
where dist is the Euclidean distance between a point in the map and the nearest obstacle, σ is the maximum distance between the sensor hit point and the obstacle point in the map, p(z_t | x_t^[k], m) is the probability of the object measured by the sensor at time t, z_t is the sensor information at time t, x_t^[k] is the pose information of the k-th particle at time t, and m is the environment map;
the probability of the obstacle is represented by a likelihood domain model, and the higher the probability, the greater the possibility of the obstacle; in the semantic map, with the semantic category information of the obstacles, the sensor hit point searches for the nearest obstacle point of the same class in the map, and dist is obtained:
dist = min over the obstacle points (x', y') of the same class in m of √((x_z - x')² + (y_z - y')²)    (7)
where (x_z, y_z) is the position of the sensor hit point in the map.
2. the particle filter relocation method based on semantic likelihood estimation as claimed in claim 1, wherein the specific steps of the step S1 are as follows:
step S101: selecting an obstacle, namely selecting an object which has higher identification degree and cannot move as the identified obstacle;
step S102: establishing a grid map according to the laser radar data, moving the robot for one circle in the environment, acquiring laser information of the environment and odometer information of the robot according to the laser radar, and establishing the grid map by using the gmapping algorithm.
3. The particle filter repositioning method based on semantic likelihood estimation of claim 1, wherein the target detection method in step S2 is to use SSD method for object detection to obtain semantic information of the object, and the sensor used for visual recognition is kinectV2.
4. The particle filter repositioning method based on semantic likelihood estimation as claimed in claim 1, wherein the step S4 of constructing the semantic map comprises the following specific steps:
step S401: according to the transformation matrix T_rk between the camera coordinate system and the robot coordinate system and the transformation matrix T_wr between the robot coordinate system and the global coordinate system, converting the semantic laser data coordinates into the robot coordinate system and then into the global map;
[x_wi, y_wi, z_wi, 1]^T = T_wr · T_rk · [x_ki, y_ki, z_ki, 1]^T    (4)
where the coordinates (x_ki, y_ki, z_ki) are the coordinates of a point of the semantic laser data, and the coordinates (x_wi, y_wi, z_wi) are the coordinates of the corresponding point in the global coordinate system;
step S402: transforming the finally obtained point cloud coordinates in the global coordinate system into the grid map coordinate system to obtain the coordinates of the object in the grid map, and connecting all coordinate points to obtain the semantic map:
x_cell = int(x_w / r),  y_cell = int(y_w / r)    (5)
where r is the ratio of the length of the real map to the length of the grid map, int is the rounding symbol, (x_w, y_w) are the coordinate values in the global plane coordinate system, and (x_cell, y_cell) are the coordinate values of the semantic grid map.
5. The particle filter repositioning method based on semantic likelihood estimation as claimed in claim 1, wherein the step S6 of determining the accurate position of the robot comprises the following specific steps:
step S601, pose prediction: at time t, the robot uses the state estimate x_{t-1} at time t-1 and the control quantity u_t from time t-1 to time t, obtains the state transition distribution p(x_t | x_{t-1}, u_t) through the motion model, and randomly samples from the state transition distribution to obtain the hypothesized state x_t^[k] of the k-th particle at time t, thereby predicting the pose of the robot;
step S602, matching update: through the observation model function at time t, returning the degree of likelihood matching between the laser information and the camera detection information contained in each particle and the environment map, namely the particle weight ω_t^[k], with the formula:
ω_t^[k] = ω_{t-1}^[k] · p(l_t | x_t^[k], m_l) · p(k_t | x_t^[k], m_k)    (8)
where the weight ω_t^[k] of the k-th particle at time t represents the integral of the measurement probability over time and satisfies Σ_k ω_t^[k] = 1; p(l_t | x_t^[k], m_l) and p(k_t | x_t^[k], m_k) are the likelihood probabilities of the laser and camera measurements, respectively; l_t and k_t are the laser radar data and the semantic point cloud data at time t, respectively; m_l and m_k are the laser grid map and the semantic map;
step S603, resampling: when the robot motion exceeds a certain threshold, copying new particles x_t^[k] in proportion to the particle weights ω_t^[k] by the roulette-wheel method, adding them into the particle set, and normalizing the particle weights, which ensures that the particles generated by resampling move towards the poses of high-weight particles; the posterior probability distribution is approximated by the particle swarm so as to approach the real pose of the robot;
after the robot has moved enough and all the particles have converged, the center of gravity of all the particles, namely the real position of the robot, is obtained by summing and normalizing the particle weights, with the center-of-gravity formula:
x̄_t = Σ_k ω_t^[k] · x_t^[k]    (9)
CN202010348294.2A 2020-04-28 2020-04-28 Particle filter repositioning method based on semantic likelihood estimation Active CN111539994B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010348294.2A CN111539994B (en) 2020-04-28 2020-04-28 Particle filter repositioning method based on semantic likelihood estimation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010348294.2A CN111539994B (en) 2020-04-28 2020-04-28 Particle filter repositioning method based on semantic likelihood estimation

Publications (2)

Publication Number Publication Date
CN111539994A CN111539994A (en) 2020-08-14
CN111539994B true CN111539994B (en) 2023-04-18

Family

ID=71975812

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010348294.2A Active CN111539994B (en) 2020-04-28 2020-04-28 Particle filter repositioning method based on semantic likelihood estimation

Country Status (1)

Country Link
CN (1) CN111539994B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114200481B (en) * 2020-08-28 2025-05-16 华为技术有限公司 Positioning method, positioning system and vehicle
CN112433528A (en) * 2020-11-27 2021-03-02 深圳优地科技有限公司 Method and device for robot to take advantage of elevator, robot and storage medium
CN112612862B (en) * 2020-12-24 2022-06-24 哈尔滨工业大学芜湖机器人产业技术研究院 A grid map localization method based on point cloud registration
CN112836698A (en) * 2020-12-31 2021-05-25 北京纵目安驰智能科技有限公司 A positioning method, device, storage medium and electronic device
CN112833892B (en) * 2020-12-31 2022-12-16 杭州自适应科技有限公司 Semantic mapping method based on track alignment
CN112767477A (en) * 2020-12-31 2021-05-07 北京纵目安驰智能科技有限公司 Positioning method, positioning device, storage medium and electronic equipment
CN113238251B (en) * 2021-04-02 2023-09-29 华南理工大学 Target level semantic positioning method based on vehicle-mounted laser radar
CN113465620A (en) * 2021-06-02 2021-10-01 上海追势科技有限公司 Parking lot particle filter positioning method based on semantic information
CN113483747B (en) * 2021-06-25 2023-03-24 武汉科技大学 Improved AMCL (advanced metering library) positioning method based on semantic map with corner information and robot
CN113657205B (en) * 2021-07-29 2024-04-19 北京中科慧眼科技有限公司 Obstacle detection method and system based on motion grid and intelligent terminal
CN113551677B (en) * 2021-08-16 2024-06-14 河南牧原智能科技有限公司 Method for repositioning robot and related product
CN116399328B (en) * 2023-04-17 2024-06-21 石家庄铁道大学 A map construction and positioning method for indoor mobile robots based on BIM
CN116300480B (en) * 2023-05-23 2023-08-01 西南科技大学 Radioactive Source Search Method Based on Improved Particle Filter and Biologically Inspired Neural Network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107908185A (en) * 2017-10-14 2018-04-13 北醒(北京)光子科技有限公司 A kind of robot autonomous global method for relocating and robot
CN108897836A (en) * 2018-06-25 2018-11-27 广州视源电子科技股份有限公司 Method and device for robot to map based on semantics
CN109724603A (en) * 2019-01-08 2019-05-07 北京航空航天大学 An Indoor Robot Navigation Method Based on Environmental Feature Detection
CN111061287A (en) * 2019-12-31 2020-04-24 芜湖哈特机器人产业技术研究院有限公司 A mobile robot relocation method based on particle self-convergence
CN111061276A (en) * 2019-12-31 2020-04-24 芜湖哈特机器人产业技术研究院有限公司 A mobile robot relocation method based on dynamic region division

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10452923B2 (en) * 2017-11-28 2019-10-22 Visual Semantics, Inc. Method and apparatus for integration of detected object identifiers and semantic scene graph networks for captured visual scene behavior estimation

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107908185A (en) * 2017-10-14 2018-04-13 北醒(北京)光子科技有限公司 A kind of robot autonomous global method for relocating and robot
CN108897836A (en) * 2018-06-25 2018-11-27 广州视源电子科技股份有限公司 Method and device for robot to map based on semantics
CN109724603A (en) * 2019-01-08 2019-05-07 北京航空航天大学 An Indoor Robot Navigation Method Based on Environmental Feature Detection
CN111061287A (en) * 2019-12-31 2020-04-24 芜湖哈特机器人产业技术研究院有限公司 A mobile robot relocation method based on particle self-convergence
CN111061276A (en) * 2019-12-31 2020-04-24 芜湖哈特机器人产业技术研究院有限公司 A mobile robot relocation method based on dynamic region division

Also Published As

Publication number Publication date
CN111539994A (en) 2020-08-14

Similar Documents

Publication Publication Date Title
CN111539994B (en) Particle filter repositioning method based on semantic likelihood estimation
CN108802785B (en) Vehicle self-positioning method based on high-precision vector map and monocular vision sensor
US20230236280A1 (en) Method and system for positioning indoor autonomous mobile robot
Kümmerle et al. Large scale graph-based SLAM using aerial images as prior information
CN106651990B (en) Indoor map construction method and indoor positioning method based on indoor map
CN111486855A (en) Indoor two-dimensional semantic grid map construction method with object navigation points
WO2020023982A9 (en) Method and apparatus for combining data to construct a floor plan
CN110132284B (en) A Global Positioning Method Based on Depth Information
CN110033489A (en) A kind of appraisal procedure, device and the equipment of vehicle location accuracy
CN112444246B (en) Laser fusion positioning method in high-precision digital twin scene
CN113108773A (en) Grid map construction method integrating laser and visual sensor
CN108332752B (en) Indoor robot positioning method and device
WO2019136613A1 (en) Indoor locating method and device for robot
CN108537214B (en) An automatic construction method of indoor semantic map
CN111964681B (en) A real-time positioning system for an inspection robot
CN117671022B (en) Mobile robot vision positioning system and method in indoor weak texture environment
CN111862200A (en) A method of unmanned aerial vehicle positioning in coal shed
CN115200601A (en) Navigation method, device, wheeled robot and storage medium
CN115060268A (en) A computer room fusion positioning method, system, equipment and storage medium
CN113902828A (en) Construction method of indoor two-dimensional semantic map with corner as key feature
JP2023503750A (en) ROBOT POSITIONING METHOD AND DEVICE, DEVICE, STORAGE MEDIUM
CN118470077A (en) A charging robot autonomous positioning method and system based on multi-sensor fusion of binocular camera and laser radar
Zingaretti et al. Appearance based robotics
CN110992291A (en) Distance measuring method, system and storage medium based on trinocular vision
CN116766191A (en) Robot motion control method, device, computing device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant