CN117032235A - Mobile robot inspection and remote monitoring method under complex indoor scene - Google Patents
- Publication number
- CN117032235A (application number CN202311019366.9A)
- Authority
- CN
- China
- Prior art keywords
- mobile robot
- quadruped
- distance
- lane line
- inspection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The invention discloses a mobile robot inspection and remote-monitoring method for complex indoor scenes. During the motion of the quadruped mobile robot, a host computer receives, in real time, the lane-line images captured by industrial cameras mounted on both sides of the robot, processes the images, and fits the straight line that best matches the actual lane line; this line serves as the robot's inspection route. At the same time, the host computer uses a lidar mounted at the center of the robot's head to detect obstacles or walls ahead and, according to the detection result, switches the robot into obstacle-avoidance mode or turning mode. The method effectively overcomes interference from worn or contaminated indoor yellow lane lines, accurately adjusts the robot's motion posture where no lane line is present, enables remote monitoring and control, and is suitable for patrol and inspection of large indoor environments.
Description
Technical Field
The invention belongs to the field of robotics and specifically relates to a mobile robot inspection and remote-monitoring method for complex indoor scenes.
Background
At present, robotics has made significant progress and is widely applied in many fields. In inspection and monitoring in particular, robots can effectively complete tasks that are heavy, dangerous, or long-lasting. However, existing robotic systems still face challenges in mobility and stability across different terrains and environmental conditions. For the path planning and navigation problems that arise during inspection, researchers have proposed a variety of algorithms and methods; for example, autonomous navigation based on sensor data and map information uses SLAM (Simultaneous Localization and Mapping) to perform real-time map construction and position estimation, combined with planning algorithms to generate optimal paths. To accomplish comprehensive inspection tasks, a robot equipped with various sensors (cameras, lidar, infrared sensors, etc.) can acquire information about its surroundings and build an accurate environment model to support task execution and decision-making.
In large indoor environments, the most widely used mobile robots are AGVs. AGVs usually employ magnetic navigation, performing indoor navigation via magnetic strips attached to the floor. Such strips suffer mechanical damage from metal and other hard objects passing over the loop, which degrades navigation, so the method is suitable only for a specific, single type of environment. In other indoor scenes such as large warehouses, the accuracy of SLAM mapping is hard to guarantee; moreover, the frequent movement of large goods, large equipment, and personnel means the spatial structure of the scene cannot be fixed, so SLAM mapping is unsuitable for such environments. In addition, applying magnetic navigation in large scenes requires laying a great number of magnetic strips, which interferes with the original operations in the scene.
Compared with common wheeled and tracked mobile robots, quadruped mobile robots offer stronger locomotion and are better suited to inspection in large indoor environments. However, because a quadruped mobile robot cannot use encoders to obtain odometry data, applying it to indoor inspection tasks improves efficiency only if, given the actual environment, a suitable strategy is designed to guide its motion without encoder feedback.
Wheeled robots such as cleaning robots and painting robots can guide their motion by computing odometry from encoders. For example, the invention with patent publication number CN111538338B proposes a robot edge-following motion control system and method; CN113863195B proposes an edge-cleaning method and a sweeping vehicle; CN115993820A proposes ROS-based maze-robot path planning; and CN116175561A proposes a control method, chip, and robot for a painting robot. However, although these patents use lidar to detect walls and other objects, their function is singular: they cannot switch between line following and wall following, are applicable only to fixed settings, and are not suitable for complex indoor scenes.
Summary of the Invention
Technical problem solved: the invention discloses a mobile robot inspection and remote-monitoring method for complex indoor scenes that solves the problem of patrol and inspection by a quadruped mobile robot when no encoder odometry is available. The method effectively overcomes interference from worn or contaminated indoor yellow lane lines, accurately adjusts the robot's motion posture where no lane line is present, enables remote monitoring and control, and is suitable for patrol and inspection of large indoor environments.
Technical solution:
A mobile robot inspection and remote-monitoring method for complex indoor scenes, comprising the following steps:
A lidar, industrial cameras, and an ultrasonic sensor are mounted on the quadruped mobile robot body and connected to the host computer through a USB docking station. The host computer receives control and task instructions issued by the remote control terminal, switches the quadruped mobile robot into line-patrol mode, processes the sensor data under the ROS robot operating system to generate motion instructions for the robot, and sends them to the slave computer over the UDP protocol; the slave computer controls the robot's motion according to these instructions so that the robot moves along the set inspection route.
During the robot's motion, the host computer receives, in real time, the lane-line images captured by the industrial cameras mounted on both sides of the robot, processes them, and fits the straight line that best matches the actual lane line; this line serves as the robot's inspection route. At the same time, the host computer uses the lidar mounted at the center of the robot's head to detect obstacles or walls ahead and, according to the detection result, switches the robot into obstacle-avoidance mode or turning mode.
When the inspection task is completed and the quadruped mobile robot has moved to the origin dock area, the robot is switched into docking mode. The ultrasonic sensor at the robot's tail measures the distance between the tail and the rear baffle of the dock area, and the robot is driven backward until this distance falls below a first preset docking-distance threshold; the lidar then measures the distance between the robot and the side wall of the dock area, and the robot is translated toward the side wall until this distance falls below a second preset docking-distance threshold.
Further, the lidar is a two-dimensional lidar; the information obtained from a lidar scan comprises a one-dimensional distance array [d0, d1, d2, ..., d359], where element di is the distance from the lidar to an obstacle and i denotes the scan angle.
Further, the host computer uses the lidar mounted at the center of the quadruped mobile robot's head to detect obstacles or walls ahead; the process of switching the robot into obstacle-avoidance mode or turning mode according to the detection result comprises:
During the mobile robot's motion, the sub-array [d141, d142, d143, ..., d220] is extracted from the 360 distance readings detected by the lidar; when 0.2 ≤ min[d141, d142, d143, ..., d220] ≤ 1, a stop instruction is generated and an obstacle-avoidance ROS topic is published;
The elapsed time is used to estimate whether the quadruped mobile robot has reached the vicinity of a turning node. If so, [d178, d179, d180, d181, d182] is extracted from the 360 distance readings detected by the lidar; the distance dx between the turning node and the adjacent wall is determined in advance, and when min[d178, d179, d180, d181, d182] ≤ dx, a 90° turning instruction is generated and a turning ROS topic is published.
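The two trigger conditions above can be sketched as follows. The index ranges and thresholds are those given in the text; the function names and boolean return convention are illustrative assumptions.

```python
# Sketch of the lidar front-sector checks described above.
# Index ranges and thresholds follow the description; the function
# names and boolean returns are illustrative assumptions.

def obstacle_stop(distances):
    """Stop when the nearest return in the front sector d141..d220
    lies between 0.2 m and 1 m."""
    front = distances[141:221]
    return 0.2 <= min(front) <= 1.0

def should_turn(distances, d_x):
    """Turn 90 degrees when the narrow forward window d178..d182
    is closer than the pre-measured node-to-wall distance d_x."""
    forward = distances[178:183]
    return min(forward) <= d_x

# Example: a synthetic 360-point scan, 5 m everywhere except ahead.
scan = [5.0] * 360
scan[180] = 0.8
print(obstacle_stop(scan))          # True: 0.2 <= 0.8 <= 1.0
print(should_turn(scan, d_x=1.0))   # True: 0.8 <= 1.0
```

In the real system these predicates would gate the publication of the obstacle-avoidance and turning ROS topics.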
Further, using the lidar to detect the distance between the quadruped robot and the side wall of the origin dock area and translating the robot toward the side wall until that distance is less than the second preset docking-distance threshold means:
Take [d89, d90] from the 360 lidar distance readings, with the distance dz between the quadruped mobile robot and the right wall inside the dock preset; when min[d89, d90] ≤ dz, the robot stops moving to the right, stands for 2 s, then lies down and waits for the next task instruction.
Further, the mobile robot inspection and remote-monitoring method also comprises the following steps:
During the robot's motion, if the host computer cannot fit a straight line that matches the actual lane line, the quadruped mobile robot is switched into wall-following mode: the lidar acquires the distances from the robot to the wall ahead and to the side walls, the distance to the wall ahead determines whether to turn, and the distance information from the side walls is used to adjust the robot until it is parallel to the side wall.
Further, the process of adjusting the robot until it is parallel to the side wall according to the side-wall distance information comprises the following steps:
The distances from the robot's left and right sides to the walls, [d60, d61, d62, d63] and [d119, d120, d121, d122], are extracted from the 360 lidar distance readings; the averages D1 = (d60 + d61 + d62 + d63)/4 and D2 = (d119 + d120 + d121 + d122)/4 and the difference ΔD = D1 − D2 are computed, and the adjustment speed of the quadruped mobile robot is then calculated from ΔD by equation (1), where vr is the actual adjustment speed and v0 the preset adjustment speed; a positive v0 denotes a left-turn adjustment and a negative v0 a right-turn adjustment.
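The averaging and difference step above can be sketched as follows. The beam indices and ΔD = D1 − D2 follow the description; because equation (1) itself is not reproduced in the text, the sign-based speed rule, the dead-band `tol`, and the function name are assumptions.

```python
# Sketch of the side-distance comparison used for wall-parallel
# adjustment. The averages and dD = D1 - D2 follow the description;
# the sign-based rule below stands in for equation (1), which is
# not reproduced in the source, and is therefore an assumption.

def adjustment(distances, v0=0.1, tol=0.02):
    d1 = sum(distances[60:64]) / 4.0    # left-side beams d60..d63
    d2 = sum(distances[119:123]) / 4.0  # right-side beams d119..d122
    dd = d1 - d2
    if abs(dd) <= tol:
        return 0.0                 # already parallel enough
    return v0 if dd > 0 else -v0   # positive: left turn, negative: right turn

scan = [2.0] * 360
for i in range(60, 64):
    scan[i] = 2.2  # left side farther from the wall
print(adjustment(scan))  # 0.1 -> left-turn adjustment
```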
Further, if the host computer cannot fit a straight line that matches the actual lane line, the quadruped mobile robot is switched into wall-following mode and it is judged whether the current area belongs to a pre-defined line-free area. If so, the robot is driven along the wall until it leaves the current area; if not, the robot is driven along the wall while a timer is started. If no lane line has been detected after a preset waiting time, the robot stops moving and sends a task-termination instruction to the remote terminal; if a lane line is re-detected within the preset waiting time, the robot switches back to line-patrol mode.
Further, in line-patrol mode, according to the relative position of the lane line and the quadruped mobile robot, each route segment is preset to use either the left or the right industrial camera's field of view.
Further, the process of processing the lane-line image and fitting the straight line that best matches the actual lane line comprises the following steps:
The RGB image captured by the industrial camera is partitioned to mask out the robot body's intrusion into the field of view, and an ROI is extracted. The extracted ROI is a three-channel RGB image taken as the input, with R, G, B the pixel values of the three channels; the corresponding H, S, V channel values are computed according to equations (2) to (4) to obtain the HSV image:

H = 0°, if Δ = 0; H = 60° × (((G′ − B′)/Δ) mod 6), if Cmax = R′; H = 60° × ((B′ − R′)/Δ + 2), if Cmax = G′; H = 60° × ((R′ − G′)/Δ + 4), if Cmax = B′ (2)

S = 0, if Cmax = 0; otherwise S = Δ/Cmax (3)

V = Cmax (4)

where R′ = R/255, G′ = G/255, B′ = B/255; Cmax = max(R′, G′, B′); Cmin = min(R′, G′, B′); Δ = Cmax − Cmin;
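Equations (2) to (4) are the standard RGB-to-HSV conversion; a direct transcription in Python (H in degrees, S and V in [0, 1]) looks like this:

```python
# Direct transcription of equations (2)-(4): standard RGB -> HSV
# conversion with H in degrees and S, V in [0, 1].

def rgb_to_hsv(r, g, b):
    rp, gp, bp = r / 255.0, g / 255.0, b / 255.0   # R', G', B'
    cmax, cmin = max(rp, gp, bp), min(rp, gp, bp)
    delta = cmax - cmin
    if delta == 0:                                  # equation (2)
        h = 0.0
    elif cmax == rp:
        h = 60.0 * (((gp - bp) / delta) % 6)
    elif cmax == gp:
        h = 60.0 * ((bp - rp) / delta + 2)
    else:
        h = 60.0 * ((rp - gp) / delta + 4)
    s = 0.0 if cmax == 0 else delta / cmax          # equation (3)
    v = cmax                                        # equation (4)
    return h, s, v

print(rgb_to_hsv(255, 255, 0))  # (60.0, 1.0, 1.0): pure yellow
```

A pure-yellow pixel maps to H = 60°, which is why the lane-line thresholds below are centered on that hue.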
The HSV image is binarized according to equation (5) to output the original binary image:

dst(I) = 255, if lowerb(I) ≤ src(I) ≤ upperb(I) for every channel; dst(I) = 0, otherwise (5)

where dst(I) is the output pixel value, lowerb(I) and upperb(I) are the lower and upper thresholds of the corresponding channel, and src(I) is the input pixel value of the corresponding channel; dst(I) is 255 only when the value of every channel lies within its set threshold range, and 0 otherwise;
According to the on-site wear and contamination of the lane lines, multiple sets of HSV color-space thresholds for the ground lane lines are extracted and fused into a mask according to equation (6); color filtering with the original binary image as input then yields the fused binary image, which gives the approximate lane-line region:

dst = mask1 | mask2 | … | maskj (6)

where dst is the fused binary image, maskj is the mask produced under the j-th color range, and | denotes bitwise OR;
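Equations (5) and (6) together can be sketched as per-pixel three-channel thresholding followed by bitwise-OR fusion of several masks. The threshold values below are illustrative, not those calibrated on site.

```python
import numpy as np

# Sketch of equations (5) and (6): per-pixel three-channel
# thresholding followed by bitwise-OR fusion of several masks.
# The threshold values are illustrative, not the on-site ones.

def in_range(hsv, lowerb, upperb):
    """dst(I) = 255 where every channel lies within its bounds, else 0."""
    ok = np.all((hsv >= lowerb) & (hsv <= upperb), axis=-1)
    return np.where(ok, 255, 0).astype(np.uint8)

def fuse(masks):
    """dst = mask1 | mask2 | ... | maskj (bitwise OR)."""
    dst = masks[0]
    for m in masks[1:]:
        dst = dst | m
    return dst

# One image row: crisp yellow, faded yellow, and background pixels.
hsv = np.array([[[60, 200, 220], [60, 80, 220], [0, 0, 0]]])
m1 = in_range(hsv, (50, 150, 100), (70, 255, 255))  # crisp yellow range
m2 = in_range(hsv, (50, 50, 100), (70, 255, 255))   # faded/worn yellow range
print(fuse([m1, m2])[0].tolist())  # [255, 255, 0]
```

Fusing several ranges is what lets worn and contaminated patches of the same yellow line survive into one mask.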
Opening and closing operations are applied to the fused binary image to remove the interference of fine contamination and wear and obtain a complete lane-line region. The opening operation first erodes the image to separate fine adhesions and remove burrs and isolated points, then dilates it to restore the original shape of the region. Specifically, with A the initial image and B an m × n structuring element, erosion and dilation are defined by equations (7) and (8):

A Θ B = {z | (B)z ⊆ A} (7)

A ⊕ B = {z | (B̂)z ∩ A ≠ ∅} (8)

where z denotes a translation of B over A, ⊕ denotes the dilation operator, and Θ denotes the erosion operator; the opening operation is then given by equation (9):

A ∘ B = (A Θ B) ⊕ B (9)

The closing operation eliminates small noise holes, smooths the lane-line contour, and fills breaks and gaps; it is computed as in equation (10):

A • B = (A ⊕ B) Θ B (10)
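A naive transcription of equations (7) to (10) for binary images, assuming an all-ones m × n structuring element and zero padding (the structuring element used on the robot is not specified in the text):

```python
import numpy as np

# Naive implementations of equations (7)-(10) for binary images,
# assuming an all-ones m x n structuring element and zero padding.

def erode(a, m=3, n=3):
    h, w = a.shape
    pad = np.zeros((h + m - 1, w + n - 1), dtype=a.dtype)
    pad[m // 2:m // 2 + h, n // 2:n // 2 + w] = a
    out = np.zeros_like(a)
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i + m, j:j + n].min()  # (B)_z subset of A
    return out

def dilate(a, m=3, n=3):
    h, w = a.shape
    pad = np.zeros((h + m - 1, w + n - 1), dtype=a.dtype)
    pad[m // 2:m // 2 + h, n // 2:n // 2 + w] = a
    out = np.zeros_like(a)
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i + m, j:j + n].max()  # (B)_z intersects A
    return out

def opening(a):  # equation (9): erode, then dilate
    return dilate(erode(a))

def closing(a):  # equation (10): dilate, then erode
    return erode(dilate(a))

img = np.zeros((7, 7), dtype=np.uint8)
img[1:4, 1:4] = 1  # a 3x3 lane-line fragment
img[5, 5] = 1      # an isolated noise pixel
print(int(opening(img)[5, 5]))  # 0: noise removed
print(int(opening(img)[2, 2]))  # 1: fragment preserved
```

The example shows the stated effect: opening deletes the isolated pixel while keeping the connected lane-line fragment.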
With the optimal binary image as input, Canny edge detection is performed. The main steps are: apply Gaussian filtering with the Gaussian kernel G(x, y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)) to each pixel (x, y) to remove image noise; use the Sobel operators Sx and Sy to compute, for each pixel of the filtered image I, the gradient magnitude G = √(Gx² + Gy²) and direction θ = arctan(Gy/Gx), where Gx = Sx ∗ I and Gy = Sy ∗ I; perform non-maximum suppression according to each pixel's gradient direction and magnitude to remove pixels that are unlikely to form edges; and apply double thresholds to retain strong and weak edges, examine the connectivity between them, and keep only the weak edges connected to strong edges;
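The gradient step of Canny can be sketched directly: correlate the image with the standard Sobel kernels Sx and Sy and compute the magnitude and direction per pixel. The valid-region correlation (no padding) below is a simplifying assumption.

```python
import numpy as np

# Sketch of the Canny gradient step: correlate with the standard
# Sobel kernels Sx, Sy and compute magnitude and direction.
# Valid-region correlation (no padding) is a simplifying assumption.

SX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def sobel(img):
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            win = img[i:i + 3, j:j + 3]
            gx[i, j] = (win * SX).sum()
            gy[i, j] = (win * SY).sum()
    mag = np.sqrt(gx ** 2 + gy ** 2)  # |G| = sqrt(Gx^2 + Gy^2)
    theta = np.arctan2(gy, gx)        # gradient direction
    return mag, theta

# A vertical step edge: left columns 0, right columns 1.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
mag, theta = sobel(img)
print(mag.max())  # 4.0 at the edge
```

For this vertical edge the gradient is purely horizontal, so θ is 0 everywhere the magnitude is nonzero.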
A Hough line transform is applied to the edge-detected image to fit multiple candidate straight lines for the actual lane line; an angle limit θ is set, tan θ is taken as the slope limit, and only lines with slope between −tan θ and tan θ are retained;
Among the midpoint ordinates [y1, y2, …, ya] of the candidate lines parallel to the actual lane line, ymax = max(y1, y2, …, ya) is found; the line corresponding to ymax is the lowest line in the ROI, and this line is taken as the best fit to the actual lane line and as the robot's line-following reference;
The slope k and midpoint pixel coordinates (x, y) of this lowest best-fit line are computed: a left-turn adjustment is made when k ≤ −0.016 and a right-turn adjustment when k ≥ 0.016. With the left camera view, the robot translates right when y ≤ 400 px and left when y ≥ 480 px; with the right camera view, it translates left when y ≤ 400 px and right when y ≥ 480 px. This keeps the mobile robot parallel to the lane line at a distance of about 0.3 m, and a line-patrol adjustment ROS topic is published.
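The selection and adjustment rules above can be sketched as follows, with lines given as (x1, y1, x2, y2) pixel endpoints. The slope and y thresholds follow the text; the angle limit `theta_deg`, the command strings, and the function names are illustrative assumptions.

```python
import math

# Sketch of the line-selection and adjustment rules above.
# Lines are (x1, y1, x2, y2) in pixels. The 0.016 slope and
# 400/480 px thresholds follow the text; theta_deg, the command
# strings, and the function names are assumptions.

def pick_reference_line(lines, theta_deg=1.0):
    """Keep lines with slope in [-tan(theta), tan(theta)] and return
    the one whose midpoint is lowest in the image (largest y)."""
    limit = math.tan(math.radians(theta_deg))
    near_horizontal = [
        l for l in lines
        if l[2] != l[0] and abs((l[3] - l[1]) / (l[2] - l[0])) <= limit
    ]
    if not near_horizontal:
        return None  # triggers the switch to wall-following mode
    return max(near_horizontal, key=lambda l: (l[1] + l[3]) / 2)

def adjust_command(k, y, left_camera=True):
    if k <= -0.016:
        return "turn_left"
    if k >= 0.016:
        return "turn_right"
    if y <= 400:
        return "shift_right" if left_camera else "shift_left"
    if y >= 480:
        return "shift_left" if left_camera else "shift_right"
    return "keep"

lines = [(0, 100, 640, 110), (0, 430, 640, 436), (0, 200, 640, 600)]
best = pick_reference_line(lines)
print(best)                          # (0, 430, 640, 436): lowest near-horizontal line
print(adjust_command(k=0.0, y=433))  # keep
```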
The invention also discloses a mobile robot inspection platform for complex indoor scenes. The platform comprises a lidar, a host computer, a slave computer, a left industrial camera, a right industrial camera, an ultrasonic sensor, a quadruped mobile robot body, joint motors, a USB docking station, a power supply, a head industrial camera, and a remote monitoring terminal;
A lidar is placed at the front end of the vertical mid-plane along the central axis of the rectangular torso surface of the quadruped mobile robot body; the left industrial camera and right industrial camera are placed on the left and right sides of the cubic frame above the torso, and the head industrial camera inside the head space of the robot body. The central axis of the cubic frame coincides with the central axis of the robot body, and an ultrasonic sensor is placed at the tail of the rectangular torso;
A host computer and a slave computer are placed inside the quadruped mobile robot body and connected by a network cable; the host computer includes a WiFi module;
The remote monitoring terminal is wirelessly connected to the host computer and controls the quadruped mobile robot using the mobile robot inspection and remote-monitoring method for complex indoor scenes described above.
Beneficial effects:
The mobile robot inspection and remote-monitoring method for complex indoor scenes of the present invention enables a quadruped mobile robot to complete patrol and inspection tasks in complex indoor environments without encoder odometry data.
Brief Description of the Drawings
Figure 1 is a block diagram of the mobile robot inspection and remote-monitoring method in a complex indoor scene according to an embodiment of the present invention;
Figure 2 is a schematic structural diagram of the quadruped mobile robot according to an embodiment of the present invention;
Figure 3 is a flow chart of the mobile robot inspection and remote-monitoring method in a complex indoor scene according to an embodiment of the present invention.
Detailed Description
The following embodiments enable those skilled in the art to understand the present invention more fully, but do not limit the invention in any way.
Referring to Figure 3, the invention discloses a mobile robot inspection and remote-monitoring method in a complex indoor scene, comprising the following steps:
A lidar, industrial cameras, and an ultrasonic sensor are mounted on the quadruped mobile robot body and connected to the host computer through a USB docking station. The host computer receives control and task instructions issued by the remote control terminal, switches the quadruped mobile robot into line-patrol mode, processes the sensor data under the ROS robot operating system to generate motion instructions for the robot, and sends them to the slave computer over the UDP protocol; the slave computer controls the robot's motion according to these instructions so that the robot moves along the set inspection route.
During the robot's motion, the host computer receives, in real time, the lane-line images captured by the industrial cameras mounted on both sides of the robot, processes them, and fits the straight line that best matches the actual lane line; this line serves as the robot's inspection route. At the same time, the host computer uses the lidar mounted at the center of the robot's head to detect obstacles or walls ahead and, according to the detection result, switches the robot into obstacle-avoidance mode or turning mode.
When the inspection task is completed and the quadruped mobile robot has moved to the origin dock area, the robot is switched into docking mode. The ultrasonic sensor at the robot's tail measures the distance between the tail and the rear baffle of the dock area, and the robot is driven backward until this distance falls below a first preset docking-distance threshold; the lidar then measures the distance between the robot and the side wall of the dock area, and the robot is translated toward the side wall until this distance falls below a second preset docking-distance threshold.
As shown in Figures 1 and 2, the invention also discloses a mobile robot inspection platform for complex indoor scenes. The platform comprises a lidar, a host computer, a slave computer, a left industrial camera, a right industrial camera, an ultrasonic sensor, a quadruped mobile robot body, joint motors, a USB docking station, a power supply, a head industrial camera, and a remote monitoring terminal;
A lidar is mounted at the front end of the rectangular body of the quadruped mobile robot, on the vertical plane through the body's central axis. The left and right industrial cameras are mounted on the left and right sides of the cubic frame above the rectangular body, and the head industrial camera is mounted in the head space of the robot body; the central axis of the cubic frame coincides with the central axis of the robot body. An ultrasonic sensor is mounted at the tail of the rectangular body.
A host computer and a lower computer are installed inside the quadruped mobile robot body and are connected by a network cable; the host computer contains a WiFi module.
The remote monitoring terminal is wirelessly connected to the host computer and controls the quadruped mobile robot using the mobile robot inspection and remote monitoring method for complex indoor scenes described above.
The quadruped robot body 7 comprises a rectangular body and four limbs. The rectangular body is a transverse bracket structure housing the host computer 2, the lower computer 3, and the power supply 10. A lidar 1 is mounted on the front section of the central axis of the bracket's upper layer and is connected to the USB docking station 9 through a serial port; a cubic frame is mounted in the middle of the upper layer, and an ultrasonic sensor 6 is mounted at the tail, also connected to the USB docking station 9 through a serial port. The left industrial camera 4 and the right industrial camera 5 are mounted at the midpoints of the uppermost crossbeams on the left and right sides of the cubic frame; both cameras are parallel to the frame's central axis and are connected to the USB docking station 9. The USB docking station 9 is placed inside the cubic frame, which also provides space for cable routing, and is connected to the host computer 2 by a USB cable. Two joint motors 8 are fixed on the left side of the front end of the rectangular body, with two identical joint motors 8 symmetrically on the right side of the front end; likewise, two joint motors 8 are fixed on the left side of the rear end, with two identical motors symmetrically on the right side of the rear end. The outermost joint motor 8 at each corner is connected to a leg structure; the four leg structures together with the rectangular body form the quadruped robot body 7.
The lidar 1 has a range of 0.2 m to 16 m and scans 360° around the body. The left industrial camera 4, the right industrial camera 5, and the head industrial camera are all distortion-free, have a 100° field of view, and have an adjustable angle relative to the ground. The ultrasonic sensor 6 has a detection range of 0.03 m to 5 m and measures distance along a straight line. The lidar 1 and the ultrasonic sensor 6 acquire distance information; the lidar coordinate system is a polar coordinate system whose polar axis is the robot's central axis, with straight ahead as the positive direction. The lidar 1 detects obstacles within 0.2 m to 1 m in front of the robot during inspection; it also measures the distance to the wall ahead during inspection and transmits this information through the serial port to the host computer 2, which uses the distance as a reference for deciding whether to turn. Depending on the site environment, when the robot must move along a wall, the lidar measures the distance between the robot's side and the wall and transmits it through the serial port to the host computer 2, which computes the differences among the distance readings in three angular ranges to judge whether the robot is parallel to the wall, adjusting if it is not. The ultrasonic sensor 6 measures the straight-line distance from the robot's tail to the wall or baffle, which is used when the robot returns to the origin. The left industrial camera 4 and the right industrial camera 5 are fixed on the left and right sides of the robot and detect the lane lines on both sides of the road.
The host computer and the lower computer are microcomputers capable of running Ubuntu. The lower computer 3 receives information from and sends information to the host computer 2, processes the received information, determines the inspection task type, and sends motion commands to the joint motors 8; it also sends, via UDP, start/stop commands for the sensors (lidar 1, ultrasonic sensor 6, left industrial camera 4, and right industrial camera 5) and inspection-task-completion messages to the host computer 2. The host computer 2 acquires the detection data of the lidar 1, the ultrasonic sensor 6, and the left and right industrial cameras 4 and 5, processes and evaluates the data, sends task and control commands to the lower computer 3, and communicates with the remote monitoring terminal.
The mobile robot inspection and remote monitoring method for complex indoor scenes of the present invention realizes remote communication with intelligent mobile devices based on the ROS operating system. Specifically: install Ubuntu and configure the ROS environment on both the host computer and the lower computer; on the host computer, install the required ROS packages, including a publish/subscribe service package, a lidar driver package, a serial communication package, the Flask framework, and mjpg-streamer; in the host computer's ROS environment, run the launch files, which include the web framework program, the host/lower-computer UDP communication program, the lidar ranging program, the line-following program, the docking (storage) program, and mjpg-streamer; connect the host computer to the LAN, determine its IP address and port, and configure the connections between the host computer and the lower computer and between the host computer and the remote terminal; connect the lower computer to the LAN, determine its IP address and port, and configure its connection to the host computer; write the robot motion program on the lower computer, generate an executable file, write a startup script, and set it to run at boot. After the host computer and the lower computer are powered on, the programs described in steps three and four start automatically. The remote terminal connects to the LAN, opens a browser, and enters the address configured on the host computer to open the remote control terminal web page. On this page, selecting the task mode makes the mobile robot patrol along the lane lines or walls according to the preset route and inspect at the designated points; after the inspection task is completed, the host computer reports the result to the remote terminal web page, and the robot returns to the origin storage area and lies down on standby. Selecting the control mode displays real-time video, a virtual keyboard, and the running status; virtual keys command the robot to move forward, move backward, translate left, translate right, turn left, and turn right, and a window shows the real-time image in front of the robot.
The web application starts the Flask framework to configure the task interface, receives the task requests sent by the remote terminal, and passes them to the UDP program in the form of ROS topics. While the lidar ranging program is running, the host computer receives the lidar distance measurements during the robot's motion, makes judgments based on obstacle detection and wall detection, generates stop, turn-left, and turn-right control commands, and passes them to the UDP sending program as ROS topics. The line-following program receives the detection information of the left industrial camera 4 and the right industrial camera 5 during the robot's motion, generates translate-left, translate-right, turn-left, and turn-right control commands according to the detected lane lines, decides, based on the presence or absence of lane lines, whether to switch to the wall-following state, and passes the commands to the UDP sending program as ROS topics. While the docking program is running, the host computer receives the readings of the ultrasonic sensor 6 during the robot's motion; when the robot returns to the origin for the docking action, the program generates backward or forward control commands according to the measured distance from the robot's tail to the baffle of the origin storage area and passes them to the UDP sending program as ROS topics. The host computer subscribes to these ROS topics, assembles the task and control commands described in S31–S34 into UDP data frames, sends them to the lower computer via UDP, and simultaneously receives the data frames sent back by the lower computer via UDP. The head industrial camera 11 is invoked through mjpg-streamer for remote video transmission within the LAN.
The lidar measures the distance between the front of the robot and obstacles: during the robot's motion, when an object appears within the region spanning a detection angle of 140°–220° and a range of 0.2 m–1 m, the program issues a stop decision and publishes a ROS topic. The lidar also measures the distance between the front of the robot and the wall over a detection angle of 178°–182°, with the range adjusted according to the inspection site; when the robot, operating in task mode, approaches a turn on the route, the lidar detection program measures the distance to the wall directly ahead, and when this distance reaches the preset turning-point distance, a turning command is generated and a ROS topic is published. The lidar further measures the distance between the robot's right side and the wall; when the robot must move along a wall in task mode, the resulting adjustment commands are all published as ROS topics.
Specifically, the lidar used in the present invention is a two-dimensional lidar. The information obtained from a lidar scan is a one-dimensional distance array [d0, d1, d2, …, d359], where each element di is the distance from the lidar to an obstacle and the index i is the scanning angle; the distances required by each method are determined from the values of di.
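As an illustrative sketch (not part of the disclosure; names are hypothetical), extracting the angular window a given method needs from this 360-element scan reduces to slicing the array by angle, since the index equals the scan angle in degrees:

```python
def sector(distances, start_deg, end_deg):
    """Return the slice of a 360-element lidar scan covering
    start_deg..end_deg inclusive (array index = scan angle)."""
    assert len(distances) == 360
    return distances[start_deg:end_deg + 1]

# Hypothetical scan: 2 m everywhere except a close return straight ahead.
scan = [2.0] * 360
scan[180] = 0.5
front = sector(scan, 178, 182)  # the 178-182 deg window used for wall checks
```

The same slicing serves all of the sectors used below (141°–220°, 178°–182°, 60°–63°, 119°–122°).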
The lidar detection program implements obstacle avoidance, turning-point judgment, and wall detection.
The lidar obstacle avoidance method is as follows: during the robot's motion, the sub-array [d141, d142, d143, …, d220] is extracted from the 360 distance readings of a lidar frame; when 0.2 ≤ min[d141, d142, d143, …, d220] ≤ 1, a stop command is generated and the obstacle-avoidance ROS topic is published.
The lidar turning-point judgment method is as follows: elapsed time is used to estimate whether the quadruped mobile robot has reached the vicinity of a turning node. If so, the sub-array [d178, d179, d180, d181, d182] is extracted from the 360 distance readings, and the actual distance dx between the turning node and the adjacent wall is predetermined. When min[d178, d179, d180, d181, d182] ≤ dx, a 90° turn command is generated, with the turning direction set according to site requirements, and the turning ROS topic is published.
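A minimal sketch of these two decisions, assuming the 360-element scan layout described above (function names are illustrative, not from the patent):

```python
def obstacle_stop(distances):
    """Stop decision: some return in the 141-220 deg sector lies
    within the 0.2-1 m obstacle window."""
    return 0.2 <= min(distances[141:221]) <= 1.0

def turn_now(distances, d_x):
    """Turning decision: the wall directly ahead (178-182 deg) is
    closer than the preset turning-point distance d_x."""
    return min(distances[178:183]) <= d_x
```

In the patent's flow, `turn_now` would only be evaluated once the time-based estimate says the robot is near a turning node.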
The lidar wall detection method is as follows: in areas without lane lines, the robot switches to wall-following mode. The distances between the robot's right side and the wall, [d60, d61, d62, d63] and [d119, d120, d121, d122], are extracted from the 360 distance readings; their averages D1 and D2 and the difference ΔD = D1 − D2 are computed, and the adjustment speed follows equation (1) (reconstructed here as a sign-based rule consistent with the surrounding text):

vr = v0, if ΔD > 0;  vr = −v0, if ΔD < 0;  vr = 0, if ΔD = 0   (1)

where vr is the actual adjustment speed and v0 is the preset adjustment speed; a positive value indicates a left-turn adjustment and a negative value a right-turn adjustment. The wall-following ROS topic is published to keep the quadruped mobile robot parallel to the wall.
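The averaging-and-sign logic can be sketched as follows (an illustrative reading of equation (1) under the assumption that the adjustment is sign-based; the names and the default v0 are placeholders):

```python
def wall_adjustment(distances, v0=0.1):
    """Compute D1, D2 and dD = D1 - D2 from the two 4-beam windows and
    return the adjustment speed v_r: positive = left-turn adjustment,
    negative = right-turn adjustment, 0 = already parallel."""
    d1 = sum(distances[60:64]) / 4.0     # D1: mean of d60..d63
    d2 = sum(distances[119:123]) / 4.0   # D2: mean of d119..d122
    delta = d1 - d2
    if delta > 0:
        return v0
    if delta < 0:
        return -v0
    return 0.0
```

A real controller would likely add a dead band around ΔD = 0 so the robot does not oscillate on sensor noise.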
When the mobile robot must move along a lane line in task mode, the field of view of the left industrial camera 4 or the right industrial camera 5 is selected according to the relative position of the lane line and the robot, keeping the robot parallel to the lane line at a distance of about 0.3 m.
In task mode, if lane lines are present, each route segment is preset to use the field of view of either the left industrial camera 4 or the right industrial camera 5, according to the relative position of the lane line and the robot.
The RGB image captured by the industrial camera is divided into regions; the portion of the field of view occluded by the quadruped robot body 7 is masked out, and the ROI is cropped.
The cropped ROI is a three-channel RGB image and serves as the input image. With R, G, and B denoting the pixel values of the three channels (normalized as R′ = R/255, G′ = G/255, B′ = B/255), the corresponding H, S, and V channel values are computed according to equations (2) to (4), yielding the HSV image:

H = 0°, if Δ = 0;  H = 60° × (((G′ − B′)/Δ) mod 6), if Cmax = R′;  H = 60° × ((B′ − R′)/Δ + 2), if Cmax = G′;  H = 60° × ((R′ − G′)/Δ + 4), if Cmax = B′   (2)

S = 0, if Cmax = 0;  S = Δ/Cmax, otherwise   (3)

V = Cmax   (4)

where Cmax = max(R′, G′, B′), Cmin = min(R′, G′, B′), and Δ = Cmax − Cmin.
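Equations (2)–(4) are the standard RGB-to-HSV conversion and can be checked with a direct per-pixel implementation (a sketch only; the stdlib `colorsys` module computes the same conversion with H scaled to [0, 1) instead of degrees):

```python
def rgb_to_hsv(r, g, b):
    """Per-pixel RGB (0-255) to HSV (H in degrees, S and V in [0, 1]),
    following eqs. (2)-(4)."""
    rp, gp, bp = r / 255.0, g / 255.0, b / 255.0
    cmax, cmin = max(rp, gp, bp), min(rp, gp, bp)
    delta = cmax - cmin
    if delta == 0:
        h = 0.0
    elif cmax == rp:
        h = 60 * (((gp - bp) / delta) % 6)
    elif cmax == gp:
        h = 60 * ((bp - rp) / delta + 2)
    else:  # cmax == bp
        h = 60 * ((rp - gp) / delta + 4)
    s = 0.0 if cmax == 0 else delta / cmax
    v = cmax
    return h, s, v
```

For example, pure red (255, 0, 0) maps to H = 0°, S = 1, V = 1, and pure blue (0, 0, 255) to H = 240°.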
The HSV image is binarized according to equation (5), producing the original binary image:

dst(I) = 255, if lowerb(I) ≤ src(I) ≤ upperb(I) for every channel;  dst(I) = 0, otherwise   (5)

where dst(I) is the output pixel value, lowerb(I) and upperb(I) are the lower and upper thresholds of the corresponding channel, and src(I) is the pixel value of the corresponding channel of the input image. When every channel's pixel value lies within its threshold range, dst(I) is 255; otherwise it is 0.
Several sets of HSV color-space thresholds for the ground lane lines are extracted according to on-site wear, contamination, and similar conditions, and the resulting masks are fused according to equation (6); color filtering is applied with the original binary image as input to obtain the fused binary image, giving the approximate lane-line region:

dst = mask1 | mask2 | … | maski   (6)

where dst is the fused binary image, maski is the mask produced under a given color range, and | denotes bitwise OR.
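A per-pixel sketch of equations (5) and (6) (illustrative names; a real implementation would vectorize this, e.g. with OpenCV's `inRange` and `bitwise_or`):

```python
from functools import reduce

def in_range(pixel, lowerb, upperb):
    """Eq. (5): 255 when every channel lies inside its
    [lower, upper] threshold, otherwise 0."""
    inside = all(lo <= p <= hi for p, lo, hi in zip(pixel, lowerb, upperb))
    return 255 if inside else 0

def fuse_masks(*masks):
    """Eq. (6): element-wise bitwise OR of the binary masks
    (given as flat lists of 0/255 values)."""
    return [reduce(lambda a, b: a | b, vals) for vals in zip(*masks)]
```

Fusing several per-threshold masks this way lets worn or stained lane-line pixels that match any one of the extracted HSV ranges survive into the combined image.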
Opening and closing operations are applied to the fused binary image to remove the interference of small contamination and wear, yielding a complete lane-line region.
The opening operation first erodes the image to separate small adhesions and remove burrs and isolated points, then dilates it to restore the original shape of the region. With A the initial image and B an m×n structuring element, the erosion and dilation operations are given by equations (7) and (8):

A Θ B = {z | (B)z ⊆ A}   (7)

A ⊕ B = {z | ((B̂)z ∩ A) ≠ ∅}   (8)

where z denotes the translation of B over A, ⊕ denotes the dilation operator, and Θ denotes the erosion operator. The opening operation is then given by equation (9):

A ∘ B = (A Θ B) ⊕ B   (9)

The closing operation eliminates small noise holes, smooths the lane-line contour, and fills breaks and gaps; it is computed as in equation (10):

A • B = (A ⊕ B) Θ B   (10)
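The four operations of equations (7)–(10) can be sketched on binary grids with a square structuring element (pure Python, illustrative only; pixels outside the image are treated as background):

```python
def erode(img, k=1):
    """Binary erosion (eq. 7) with a (2k+1)x(2k+1) square structuring element."""
    h, w = len(img), len(img[0])
    return [[1 if all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                      for dy in range(-k, k + 1) for dx in range(-k, k + 1)) else 0
             for x in range(w)] for y in range(h)]

def dilate(img, k=1):
    """Binary dilation (eq. 8); the square element equals its own reflection."""
    h, w = len(img), len(img[0])
    return [[1 if any(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                      for dy in range(-k, k + 1) for dx in range(-k, k + 1)) else 0
             for x in range(w)] for y in range(h)]

def opening(img):
    """Eq. (9): erosion followed by dilation -- removes isolated specks."""
    return dilate(erode(img))

def closing(img):
    """Eq. (10): dilation followed by erosion -- fills small holes and gaps."""
    return erode(dilate(img))
```

Opening deletes an isolated noise pixel outright, while closing fills a one-pixel hole inside a lane-line blob, matching the roles described above.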
Taking the optimal binary image as input, Canny edge detection is performed. The main steps are: apply a Gaussian filter G(x, y) = (1/(2πσ²))·exp(−(x² + y²)/(2σ²)) to each pixel (x, y) to remove image noise; use the Sobel operators Sx and Sy to compute, for each pixel of the filtered image I, the gradient magnitude M(x, y) = √(Gx² + Gy²) and direction θ = arctan(Gy/Gx); perform non-maximum suppression according to each pixel's gradient direction and magnitude to remove pixels that cannot form edges; and apply double thresholds to retain strong and weak edges, check the connectivity between strong and weak edges, and keep the weak edges connected to strong edges.
A Hough line transform is applied to the edge-detected image, fitting several candidate straight lines that may match the actual lane line. An angle limit θ is set and tan θ is taken as the slope limit; only lines with slopes between −tan θ and tan θ are retained.
By comparing the midpoint ordinates [y1, y2, …, yi] of the candidate lines parallel to the actual lane line, ymax = max(y1, y2, …, yi) is found. The line corresponding to ymax is the lowest line in the ROI and is therefore the line that best fits the actual lane line; it is used as the line-following reference for the quadruped mobile robot.
The slope k and the midpoint pixel coordinates (x, y) of this best-fit line at the bottom of the field of view are computed. When k ≤ −0.016 the robot adjusts with a left turn, and when k ≥ 0.016 with a right turn. When the left camera view is used, the robot translates right if y ≤ 400 px and left if y ≥ 480 px; when the right camera view is used, it translates left if y ≤ 400 px and right if y ≥ 480 px. This keeps the robot parallel to the lane line at a distance of about 0.3 m, and the line-following adjustment ROS topic is published.
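The threshold logic above maps directly to a small decision function (a sketch; the command names are illustrative, not the patent's):

```python
def lane_adjustment(k, y, side):
    """Map the best-fit line's slope k and midpoint ordinate y (pixels)
    to an adjustment command, using the thresholds stated in the text.
    `side` is the camera view in use: 'left' or 'right'."""
    if k <= -0.016:
        return 'turn_left'
    if k >= 0.016:
        return 'turn_right'
    if side == 'left':
        if y <= 400:
            return 'shift_right'
        if y >= 480:
            return 'shift_left'
    else:  # right camera view: translation directions are mirrored
        if y <= 400:
            return 'shift_left'
        if y >= 480:
            return 'shift_right'
    return 'keep'   # within the 400-480 px band and near-zero slope
```

Note the deliberate dead band: with |k| < 0.016 and 400 px < y < 480 px no command is issued, which prevents the robot from oscillating around the reference line.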
When lane lines are present, the host computer processes the lane-line video images captured by the left and right industrial cameras; the image processing effectively overcomes the interference of lane-line wear and contamination and fits a straight line matching the lane line, and the control strategy outputs adjustment commands that keep the quadruped mobile robot parallel to the lane line. When no lane line is present, the lidar detects the side wall and the control strategy outputs adjustment commands that keep the robot parallel to the wall. The lidar measures the distance to obstacles directly ahead for obstacle-avoidance decisions and measures the distance to the wall at turning nodes to decide turning actions; when the task is about to finish, the docking attitude is adjusted using the lidar and the ultrasonic sensor to complete the docking action. All turning nodes, designated task points, line-following areas, and wall-following areas are predetermined. Lane-line-free areas include user-defined line-free segments and areas where the line-following program detects no lane line. For example, when moving along a long straight section, if no lane line is detected in the industrial camera's field of view for 20 consecutive frames, the area is judged lane-line-free; the robot then switches state and moves for 20 s using the wall-following adjustment method. If a lane line is re-detected during this period, the robot switches back to line-following mode; if no lane line is detected after 20 s, the robot stops and sends a task-termination message to the remote terminal.
The docking (storage) program has the following features:
The time at which the quadruped mobile robot, about to complete the inspection task, will arrive near the origin storage area is estimated; at this time the robot begins to decelerate, and the lidar performs turning judgment using the turning-point judgment method described in claim 9.
After the turn is completed, the quadruped mobile robot is commanded to back up, and the ultrasonic sensor is started to measure the distance dy between the tail and the rear baffle of the storage area. When dy ≤ 168 mm, the robot stops backing up and begins to translate to the right.
During the rightward translation, the robot is kept parallel to the right wall by the wall adjustment method described in claim 9. The readings [d89, d90] are taken from the lidar's 360 distance values, and the distance dz between the robot and the right wall inside the storage area is preset. When min[d89, d90] ≤ dz, the quadruped mobile robot stops moving right, stands for 2 s, then lies down and waits for the next task command.
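The back-up-then-shift sequence can be sketched as a small state machine (illustrative only; the default side-wall threshold `d_z` is a placeholder, since the patent leaves its value site-dependent):

```python
def docking_step(state, tail_mm=None, side=None, d_z=0.15):
    """One decision step of the docking sequence: back up until the tail is
    within 168 mm of the baffle, then shift right until the lidar distance
    to the side wall (readings d89, d90) is within d_z, then park."""
    if state == 'backing':
        return 'shifting' if tail_mm is not None and tail_mm <= 168 else 'backing'
    if state == 'shifting':
        return 'parked' if side is not None and min(side) <= d_z else 'shifting'
    return state  # 'parked': stand 2 s, lie down, await the next task
```

Each control cycle feeds the latest ultrasonic reading (while backing) or lidar side readings (while shifting) into the function and acts on the returned state.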
The above are only preferred embodiments of the present invention; the scope of protection of the present invention is not limited to these embodiments, and all technical solutions falling within the concept of the present invention belong to its scope of protection. It should be noted that, for those of ordinary skill in the art, several improvements and refinements made without departing from the principle of the present invention shall also be regarded as falling within the scope of protection of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311019366.9A CN117032235A (en) | 2023-08-11 | 2023-08-11 | Mobile robot inspection and remote monitoring method under complex indoor scene |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117032235A true CN117032235A (en) | 2023-11-10 |
Family
ID=88642670
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118163880A (en) * | 2024-05-14 | 2024-06-11 | 中国海洋大学 | Building disease detection quadruped robot and detection method |
CN118163880B (en) * | 2024-05-14 | 2024-07-30 | 中国海洋大学 | A quadruped robot for detecting building defects and a detection method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||