WO2021129071A1 - Robot, positioning method, and computer readable storage medium - Google Patents
- Publication number
- WO2021129071A1 (PCT/CN2020/121655)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
Definitions
- This application relates to the field of positioning and image processing, and specifically, to a robot, a positioning method, and a computer-readable storage medium.
- this application provides at least a robot, a positioning method, and a computer-readable storage medium.
- the present application provides a robot including a camera and a processor; the processor includes an analysis processing module, an image coordinate determination module, a pose transformation module, and a target pose determination module;
- the camera is set to capture a target image including one or more positioning landmarks
- the analysis processing module is configured to determine the first pose information of each of the positioning landmarks in the geographic coordinate system
- the image coordinate determining module is configured to determine the first position coordinate of the key point corresponding to each of the positioning markers in the target image
- the pose transformation module is configured to, for each positioning landmark, determine the second pose information of the robot in the geographic coordinate system based on the first position coordinates and the first pose information corresponding to that positioning landmark;
- the target pose determination module is configured to determine the target pose information of the robot in the geographic coordinate system based on the second pose information corresponding to each positioning landmark.
- the analysis processing module is further configured to determine the second position coordinates of the key point corresponding to the positioning marker in the marker coordinate system;
- the pose transformation module is specifically configured to: determine the third pose information of the robot relative to the positioning marker based on the first position coordinates and the second position coordinates corresponding to the positioning marker; and determine the second pose information of the robot in the geographic coordinate system based on the third pose information and the first pose information of the positioning marker in the geographic coordinate system.
- the target pose determination module is specifically configured to: determine the reprojection error corresponding to each piece of second pose information; adjust each piece of second pose information, taking the minimum sum of all the reprojection errors as the target; and determine the target pose information based on each piece of adjusted second pose information.
- when determining the target pose information based on each piece of adjusted second pose information, the target pose determination module is specifically configured to: determine the reprojection error corresponding to each piece of adjusted second pose information, and use the adjusted second pose information corresponding to the smallest reprojection error as the target pose information.
- the analysis processing module is specifically configured to: screen the positioning markers from the target image; determine the identification information of each positioning marker; and determine the first pose information of each positioning marker in the geographic coordinate system based on the identification information of each positioning marker.
- the positioning marker is a two-dimensional code;
- when the analysis processing module determines the identification information of each of the positioning markers, it is specifically configured to decode each two-dimensional code separately to obtain the identification information of each two-dimensional code;
- when the analysis processing module screens the positioning markers from the target image, it is specifically configured to: extract the contour information of each object from the target image; determine the shape of each object in the target image based on its contour information; and determine whether each object is the positioning marker based on its shape in the target image;
- when the analysis processing module determines whether each object is the positioning marker based on the shape of each object in the target image, it is specifically configured to: determine the shape of the front view of each object, and determine whether each object is the positioning marker based on the shape of its front view.
- this application discloses a positioning method, including:
- For each positioning landmark, determine the second pose information of the robot in the geographic coordinate system based on the first position coordinates and the first pose information corresponding to that positioning landmark;
- the determining the second pose information of the robot in the geographic coordinate system based on the first position coordinates and the first pose information corresponding to the positioning landmark includes: acquiring the second position coordinates of the key point corresponding to the positioning landmark in the marker coordinate system; determining the third pose information of the robot relative to the positioning landmark based on the first position coordinates and the second position coordinates; and determining the second pose information of the robot in the geographic coordinate system based on the third pose information and the first pose information of the positioning landmark in the geographic coordinate system.
- the determining the target pose information of the robot in the geographic coordinate system based on the second pose information corresponding to each positioning landmark includes: determining the reprojection error corresponding to each piece of second pose information; adjusting each piece of second pose information, taking the minimum sum of all the reprojection errors as the target; and determining the target pose information based on each piece of adjusted second pose information.
- the determining the first pose information of each of the positioning landmarks in a geographic coordinate system includes: screening the positioning landmarks from the target image; determining the identification information of each positioning landmark; and determining the first pose information of each positioning landmark in the geographic coordinate system based on its identification information.
- the positioning marker is a two-dimensional code;
- the determining the identification information of each of the positioning markers includes:
- Each two-dimensional code is decoded separately to obtain the identification information of each two-dimensional code.
- the screening of the positioning markers from the target image includes: extracting the contour information of each object from the target image; determining the shape of each object in the target image based on its contour information; and determining whether each object is the positioning marker based on its shape in the target image.
- the determining whether each object is the positioning marker based on the shape of each object in the target image includes: determining the shape of the front view of each object based on its shape in the target image, and determining whether each object is the positioning marker based on the shape of its front view.
- the present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, it executes the steps of the above positioning method.
- the present application provides a robot, a positioning method, and a computer-readable storage medium.
- the robot first takes a target image including multiple positioning markers; then, based on the first position coordinates of the key points corresponding to each positioning marker in the target image and the first pose information of each positioning marker in the geographic coordinate system, it determines multiple pieces of second pose information of the robot in the geographic coordinate system; finally, based on the second pose information corresponding to each positioning marker, it determines the target pose information of the robot in the geographic coordinate system.
- the above technical solution is able to more accurately determine the pose information of the robot in the geographic coordinate system based on the position information of multiple positioning landmarks.
- Figure 1 shows a schematic structural diagram of a robot provided by an embodiment of the present application
- FIG. 2 shows a schematic diagram of an image obtained after preprocessing the target image in an embodiment of the present application
- FIG. 3 shows a schematic diagram of an image obtained after polygonal approximation is performed on a preprocessed target image in an embodiment of the present application
- Fig. 4 shows a front view obtained after performing affine transformation on a parallelogram in an embodiment of the present application
- FIG. 5 shows a schematic diagram of an image obtained by performing grid division on positioning markers obtained by screening in an embodiment of the present application
- Fig. 6 shows a flowchart of a positioning method provided by an embodiment of the present application.
- Based on the current needs for robot positioning and the insufficient accuracy of current robot positioning, this application provides a robot, a positioning method, and a computer-readable storage medium.
- The robot first photographs a target image including multiple positioning markers; after that, based on the first position coordinates of the key points corresponding to each positioning marker in the target image and the first pose information of each positioning marker in the geographic coordinate system, it determines multiple pieces of second pose information of the robot in the geographic coordinate system; finally, based on the second pose information corresponding to each positioning marker, it determines the target pose information of the robot in the geographic coordinate system.
- the present application can determine the pose information of the robot in the geographic coordinate system more accurately based on the position information of multiple positioning landmarks.
- the present application provides a robot, including a camera 110, a processor 120; the processor 120 includes an analysis processing module 1201, an image coordinate determination module 1202, a pose transformation module 1203, and a target pose determination module 1204.
- the camera 110 is configured to capture a target image including one or more positioning landmarks.
- the analysis processing module 1201 is configured to determine the first pose information of each of the positioning landmarks in the geographic coordinate system.
- the image coordinate determining module 1202 is configured to determine the first position coordinate of the key point corresponding to each positioning marker in the target image.
- the pose transformation module 1203 is configured to, for each positioning landmark, determine the second pose information of the robot in the geographic coordinate system based on the first position coordinates and the first pose information corresponding to that positioning landmark.
- the target pose determination module 1204 is configured to determine the target pose information of the robot in the geographic coordinate system based on the second pose information corresponding to each positioning marker.
- the aforementioned pose information includes the position coordinates of the object in the corresponding coordinate system and the rotation angle of the object in the corresponding coordinate system.
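For concreteness, the pose information described here can be held in a small data structure; the planar form below, with a single rotation angle, is only an illustrative assumption and not a representation fixed by the patent.

```python
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float    # position coordinate in the corresponding coordinate system
    y: float    # position coordinate in the corresponding coordinate system
    yaw: float  # rotation angle in the corresponding coordinate system, in radians
```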
- the target image taken by the robot with the camera includes multiple positioning markers, and these positioning markers are used to determine the pose information of the robot in the geographic coordinate system.
- the positioning marker may be a two-dimensional code set on the ground, and the analysis processing module 1201 decodes the two-dimensional code in the image to obtain identification information of the positioning marker. After that, the analysis processing module 1201 can determine the first pose information of the positioning marker in the geographic coordinate system by using the identification information of the positioning marker.
- In a specific implementation, one or more of the above-mentioned positioning markers can be deployed in advance on one or more fixed objects such as the ground, shelf sides, the ceiling, and walls; the deployment position is one that the robot's camera can photograph during its movement.
- The shooting direction of the camera is not limited: for example, it can shoot toward the front (the side of a shelf or a wall), upward (the ceiling), downward (the ground), or toward the side (the side of a shelf or a wall). Multiple cameras can be installed on the robot to shoot in different directions.
- Regardless of which fixed object is used to deploy the positioning markers, the above positioning method provided by the embodiments of the present disclosure can be used. Because the pose of a positioning marker in the geographic coordinate system is predetermined when the marker is deployed, the target pose information of the robot in the geographic coordinate system can be determined based on the pose information of the positioning marker in the image coordinate system corresponding to the target image taken by the robot and the pose information of the positioning marker in the geographic coordinate system; please refer to the following description for details.
- Because the positioning markers are set in advance, once a positioning marker has been set, its first pose information in the geographic coordinate system is already determined and known.
- After each positioning marker is set, a mapping relationship between the first pose information of each positioning marker and its identification information can be established and stored in the memory of the robot.
- After the analysis processing module 1201 obtains the identification information of a positioning marker in the target image, it can determine the first pose information of that positioning marker in the geographic coordinate system based on the foregoing mapping relationship.
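A minimal sketch of the stored mapping just described, assuming Python dictionaries: each marker's identification information maps to its first pose information in the geographic coordinate system. The IDs, coordinates, and angles below are made-up example values, not values from the patent.

```python
# Identification information -> first pose information (x, y, z, yaw in degrees)
# in the geographic coordinate system; all entries are illustrative examples.
MARKER_WORLD_POSES = {
    17: (2.00, 3.50, 0.00, 90.0),
    42: (6.25, 3.50, 0.00, 0.0),
}

def first_pose_of(marker_id):
    """Look up the first pose information of a decoded positioning marker."""
    return MARKER_WORLD_POSES[marker_id]
```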
- Before the analysis processing module 1201 analyzes the positioning markers to obtain their identification information, it first needs to screen the positioning markers from the target image. Specifically, the analysis processing module 1201 uses the following steps to screen positioning markers from the target image:
- Step 1 Extract contour information of each object from the target image.
- the analysis processing module 1201 first needs to preprocess the target image before extracting the contour information of each object from the target image.
- the target image can be binarized.
- Figure 2 shows the image obtained after binarizing the target image.
- the analysis processing module 1201 extracts contour information of each object from the obtained image.
- Step 2 Determine the shape of each object in the target image based on the contour information of each object.
- After extracting the contour information of each object, the analysis processing module 1201 performs polygonal approximation on each object based on its contour information to obtain the shape of each object in the target image.
- Figure 3 shows the image obtained after polygonal approximation of the preprocessed target image.
- After determining the shape of each object in the target image, the shape can be matched against the shape of the pre-stored positioning marker; if the match succeeds directly, the object can be determined to be a positioning marker. In actual implementation, since the shooting angle of the camera may be in any direction while the pre-stored shape of the positioning marker is generally the shape of its front view, the following step 3 can be performed first to determine the shape of the front view of each object.
- Step 3 Determine the shape of the front view of each object based on the shape of each object in the target image.
- After the analysis processing module 1201 determines the shape of each object in the target image, it sorts the corners of each shape and performs an affine transformation to obtain the front view of each shape, that is, it determines the shape of the front view of each object.
- Figure 4 shows the front view obtained after performing affine transformation on a parallelogram.
- Step 4 Based on the shape of the front view of each object, determine whether each object is the positioning marker.
- After determining the shape of the front view of each object, the analysis processing module 1201 screens the positioning markers based on the shape of each object's front view. For example, when the positioning marker is a two-dimensional code, the analysis processing module 1201 selects objects whose front view is square as positioning markers.
- After the analysis processing module 1201 screens the positioning markers from the target image, it divides each positioning marker in the target image into a grid, and then performs a decoding operation on the grid-divided positioning marker to obtain its identification information.
- Figure 5 shows the image obtained by meshing the selected positioning markers.
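As a rough illustration of steps 1 to 4 and the subsequent grid decoding, here is a Python sketch using OpenCV. The library choice, the 64x64 canonical patch, the 6x6 bit grid, the square-marker check, and the use of a full perspective warp (the text speaks of an affine transformation) are illustrative assumptions rather than details fixed by the patent; a real deployment would more likely rely on an established fiducial-marker library than a hand-rolled decoder.

```python
import cv2
import numpy as np

CANONICAL_SIZE = 64   # side length of the front-view patch, in pixels (assumed)
GRID_SIZE = 6         # assumed number of code cells per side of the marker

def order_corners(quad):
    """Order 4 corner points as top-left, top-right, bottom-right, bottom-left."""
    pts = quad.reshape(4, 2).astype(np.float32)
    s = pts.sum(axis=1)               # smallest sum -> top-left, largest -> bottom-right
    d = np.diff(pts, axis=1).ravel()  # smallest (y - x) -> top-right, largest -> bottom-left
    return np.array([pts[np.argmin(s)], pts[np.argmin(d)],
                     pts[np.argmax(s)], pts[np.argmax(d)]], dtype=np.float32)

def screen_and_decode_markers(target_image_bgr):
    """Return (image corners, bit grid) for every candidate positioning marker."""
    # Step 1: preprocess (binarize) and extract the contour of every object.
    gray = cv2.cvtColor(target_image_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    markers = []
    for contour in contours:
        # Step 2: polygonal approximation gives the shape of the object.
        approx = cv2.approxPolyDP(contour, 0.03 * cv2.arcLength(contour, True), True)
        if len(approx) != 4 or not cv2.isContourConvex(approx):
            continue  # only quadrilaterals can be square two-dimensional codes

        # Step 3: sort the corners and warp the quadrilateral to its front view.
        src = order_corners(approx)
        dst = np.array([[0, 0], [CANONICAL_SIZE - 1, 0],
                        [CANONICAL_SIZE - 1, CANONICAL_SIZE - 1],
                        [0, CANONICAL_SIZE - 1]], dtype=np.float32)
        warp = cv2.getPerspectiveTransform(src, dst)
        front_view = cv2.warpPerspective(binary, warp, (CANONICAL_SIZE, CANONICAL_SIZE))

        # Step 4 + decoding: divide the front view into a grid and read one bit per
        # cell; mapping the bit pattern to identification information depends on the
        # concrete code format, which the patent does not specify.
        cell = CANONICAL_SIZE // GRID_SIZE
        bits = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.uint8)
        for r in range(GRID_SIZE):
            for c in range(GRID_SIZE):
                block = front_view[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell]
                bits[r, c] = 1 if block.mean() > 127 else 0
        markers.append((src, bits))
    return markers
```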
- the image coordinate determination module 1202 analyzes and processes the target image, and determines the first position coordinate of the key point corresponding to each positioning marker in the target image.
- the first position coordinate here is the coordinate of the key point corresponding to the positioning marker in the robot coordinate system.
- Based on the first position coordinates of the key points corresponding to a positioning marker in the robot coordinate system and the first pose information of that positioning marker in the geographic coordinate system, the pose transformation module 1203 can determine the pose information of the robot in the geographic coordinate system, that is, the above-mentioned second pose information.
- the pose transformation module 1203 may use the following steps to determine the second pose information of the robot in the geographic coordinate system:
- Step 1 Determine the third pose information of the robot relative to the positioning marker based on the first position coordinates and the second position coordinates corresponding to the positioning marker.
- Before the pose transformation module 1203 performs this step, the analysis processing module 1201 first needs to determine the second position coordinates of the key points corresponding to the positioning marker in the marker coordinate system.
- the position coordinates of the positioning marker in the marker coordinate system have been determined and known.
- A mapping relationship between the identification information of the positioning marker and its position coordinates in the marker coordinate system is preset and stored in the memory of the robot. After decoding the identification information of the positioning marker, the analysis processing module 1201 can determine the position coordinates of the positioning marker in the marker coordinate system, that is, the second position coordinates, in combination with the foregoing mapping relationship.
- Based on the first position coordinates of the key points corresponding to the positioning marker in the robot coordinate system and the second position coordinates of those key points in the marker coordinate system, the pose transformation module 1203 can determine the pose information of the robot relative to the positioning marker in the marker coordinate system, that is, the above-mentioned third pose information.
- Both the first position coordinates and the second position coordinates may include the coordinates of at least 3 non-collinear points on the positioning marker.
- Step 2 Determine the second pose information of the robot in the geographic coordinate system based on the third pose information and the first pose information of the positioning landmark in the geographic coordinate system.
- The pose transformation module 1203 determines the second pose information of the robot in the geographic coordinate system based on the third pose information of the robot relative to the positioning marker in the marker coordinate system and the first pose information of the positioning marker in the geographic coordinate system.
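A compact sketch of these two steps, under the assumption that OpenCV's solvePnP is used to recover the marker-relative pose from the key points (for example, the marker's four corners) and that poses are handled as 4x4 homogeneous matrices, with the camera frame standing in for the robot frame. The camera intrinsic matrix `K` is an assumption; the patent does not discuss calibration.

```python
import cv2
import numpy as np

def third_pose_info(image_points, marker_points, K, dist_coeffs=None):
    """Robot (camera) pose relative to the marker coordinate system."""
    ok, rvec, tvec = cv2.solvePnP(marker_points.astype(np.float32),  # second position coordinates (3D)
                                  image_points.astype(np.float32),   # first position coordinates (2D)
                                  K, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    T_cam_from_marker = np.eye(4)          # maps marker-frame points into the camera frame
    T_cam_from_marker[:3, :3] = R
    T_cam_from_marker[:3, 3] = tvec.ravel()
    return np.linalg.inv(T_cam_from_marker)  # camera pose expressed in the marker frame

def second_pose_info(T_marker_from_cam, T_world_from_marker):
    """Robot pose in the geographic coordinate system (second pose information)."""
    # compose the marker's first pose (world pose) with the third pose information
    return T_world_from_marker @ T_marker_from_cam
```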
- the target pose determination module 1204 can use the following steps to determine the target pose information of the robot in the geographic coordinate system:
- Step 1 Determine the reprojection error corresponding to each of the second pose information.
- The target pose determination module 1204 projects the positioning marker and the pose corresponding to each piece of second pose information of the positioning marker onto the same image, and then determines the error between the projection corresponding to each piece of second pose information and the projection of the positioning marker on that image, thereby determining the reprojection error corresponding to each piece of second pose information.
- Step 2 Taking the minimum sum of all the reprojection errors as the target, adjust each piece of second pose information.
- the target pose determination module 1204 calculates the sum of all reprojection errors, and adjusts each second pose information with the minimum sum as the target.
- Step 3 Determine the target pose information based on each second pose information after adjustment.
- After finishing the adjustment of the pose information, the target pose determination module 1204 determines the reprojection error corresponding to each piece of adjusted second pose information, and uses the adjusted second pose information corresponding to the smallest reprojection error as the target pose information.
- After the target pose determination module 1204 adjusts each piece of second pose information, it may also calculate the average value of the adjusted second pose information and use the obtained average value as the target pose information.
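The following Python sketch strings these steps together: a reprojection-error function, a least-squares adjustment whose objective is the sum of all markers' reprojection errors, and the two ways of producing the target pose mentioned above (the smallest-error adjusted pose, or the average of the adjusted poses). OpenCV, SciPy, the camera intrinsics `K`, and a planar (x, y, yaw) pose parameterisation are assumptions; the patent specifies neither the optimiser nor the pose representation.

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

def reprojection_residual(T_world_from_cam, T_world_from_marker,
                          marker_points, image_points, K, dist_coeffs=None):
    """Per-point reprojection residuals of one marker for a candidate robot pose."""
    T_cam_from_marker = np.linalg.inv(T_world_from_cam) @ T_world_from_marker
    rvec, _ = cv2.Rodrigues(T_cam_from_marker[:3, :3].copy())
    tvec = T_cam_from_marker[:3, 3]
    projected, _ = cv2.projectPoints(marker_points.astype(np.float32), rvec, tvec, K, dist_coeffs)
    return (projected.reshape(-1, 2) - image_points).ravel()

def pose_to_matrix(params):
    """(x, y, yaw) -> 4x4 pose matrix, assuming the robot moves in a plane."""
    x, y, yaw = params
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[0, 3], T[1, 3] = x, y
    return T

def target_pose_info(initial_params_per_marker, observations, K):
    """observations: list of (T_world_from_marker, marker_points, image_points), one per marker."""
    def total_residual(params):
        T = pose_to_matrix(params)
        return np.concatenate([reprojection_residual(T, Tm, mp, ip, K)
                               for Tm, mp, ip in observations])

    # adjust each second pose, taking the minimum total reprojection error as the target
    adjusted = [least_squares(total_residual, np.asarray(p, dtype=float)).x
                for p in initial_params_per_marker]
    errors = [np.sum(total_residual(p) ** 2) for p in adjusted]
    best = adjusted[int(np.argmin(errors))]   # smallest-error variant of the embodiment
    average = np.mean(adjusted, axis=0)       # averaging variant; naive about yaw wrap-around
    return pose_to_matrix(best), pose_to_matrix(average)
```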
- the above solution can be used as an independent positioning solution to be applied to the positioning during the movement of the robot. It can also be implemented in combination with other positioning schemes, for example, it can be used to determine the initial positioning pose of other positioning schemes, can be used to perform pose calibration during the positioning process, and can also be used to perform pose relocation.
- The initial positioning pose information of the robot can first be obtained based on the positioning method of the embodiments of the present disclosure; the inertial measurement unit (IMU) of the robot can then be used to determine the positioning pose of the robot in real time, and, during the real-time positioning process, the solution of the embodiments of the present disclosure can be adopted periodically to determine the pose of the robot and thereby calibrate the real-time positioning pose.
- The scheme of the embodiments of the present disclosure can likewise be used to perform robot relocation.
- The above process can be applied to visual simultaneous localization and mapping (SLAM).
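As a loose illustration of the combined use just described, the sketch below alternates IMU dead reckoning with periodic marker-based corrections. The `imu`, `camera`, and `marker_localiser` objects, their methods, and the one-second calibration period are hypothetical placeholders, not interfaces defined by the patent.

```python
import time

CALIBRATION_PERIOD_S = 1.0  # assumed interval between marker-based corrections

def localisation_loop(imu, camera, marker_localiser):
    # initial positioning pose from the marker-based method described above
    pose = marker_localiser.target_pose(camera.capture())
    last_calibration = time.monotonic()
    while True:
        pose = imu.propagate(pose)  # real-time pose from inertial measurements
        if time.monotonic() - last_calibration >= CALIBRATION_PERIOD_S:
            marker_pose = marker_localiser.target_pose(camera.capture())
            if marker_pose is not None:
                pose = marker_pose  # calibration / relocation using the markers
            last_calibration = time.monotonic()
```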
- The embodiment of the present application also provides a positioning method, which is applied to the above-mentioned robot to realize its positioning and can achieve the same or similar beneficial effects, so repeated parts will not be described again here.
- the positioning method provided by the embodiment of the present application may include the following steps:
- S610 Obtain a target image taken by the robot that includes one or more positioning landmarks.
- S620 Determine the first pose information of each of the positioning landmarks in the geographic coordinate system and the first position coordinates of the key points corresponding to each of the positioning landmarks in the target image.
- S630 For each positioning landmark, determine the second pose information of the robot in the geographic coordinate system based on the first position coordinates and the first pose information corresponding to that positioning landmark.
- S640 Determine the target pose information of the robot in the geographic coordinate system based on the second pose information corresponding to each positioning landmark.
- the determining the second pose information of the robot in the geographic coordinate system based on the first position coordinates and the first pose information corresponding to the positioning landmark includes: acquiring the second position coordinates of the key point corresponding to the positioning landmark in the marker coordinate system; determining the third pose information of the robot relative to the positioning landmark based on the first position coordinates and the second position coordinates; and determining the second pose information of the robot in the geographic coordinate system based on the third pose information and the first pose information of the positioning landmark in the geographic coordinate system.
- the determining the target pose information of the robot in the geographic coordinate system based on the second pose information corresponding to each positioning landmark includes: determining the reprojection error corresponding to each piece of second pose information; adjusting each piece of second pose information, taking the minimum sum of all the reprojection errors as the target; and determining the target pose information based on each piece of adjusted second pose information.
- the determining the first pose information of each of the positioning landmarks in a geographic coordinate system includes: screening the positioning landmarks from the target image; determining the identification information of each positioning landmark; and determining the first pose information of each positioning landmark in the geographic coordinate system based on its identification information.
- the positioning marker is a two-dimensional code;
- the determining the identification information of each of the positioning markers includes:
- Each two-dimensional code is decoded separately to obtain the identification information of each two-dimensional code.
- the screening of the positioning markers from the target image includes: extracting the contour information of each object from the target image; determining the shape of each object in the target image based on its contour information; determining the shape of the front view of each object; and determining whether each object is the positioning marker based on the shape of its front view.
- the embodiment of the present application also provides a computer program product corresponding to the above method, which includes a computer-readable storage medium storing program code.
- The instructions included in the program code can be used to execute the method in the foregoing method embodiment; for specific implementation, refer to the method embodiment, which will not be repeated here.
- the modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- If the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a non-volatile computer-readable storage medium executable by a processor.
- The technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions used to make a computer device (which may be a personal computer, a server, a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application.
- The aforementioned storage media include: USB flash drives, removable hard disks, ROM, RAM, magnetic disks, optical disks, and other media that can store program code.
Abstract
A robot, a positioning method, and a computer readable storage medium. The method comprises: first, a robot photographs a target image comprising one or more positioning markers (step 610); next, on the basis of a first position coordinate of a key point corresponding to each positioning marker in the target image, and first attitude information of each positioning marker in a geographic coordinate system (step 620), determining multiple pieces of second attitude information of the robot in the geographic coordinate system (step 630); and finally, determining target attitude information of the robot in the geographic coordinate system on the basis of the second attitude information corresponding to each positioning marker (step 640). According to the method, the attitude information of the robot in the geographic coordinate system can be accurately determined on the basis of position information of multiple positioning markers.
Description
This application claims priority to the Chinese patent application with application number 201911358248.4, titled "Robot, positioning method, and computer-readable storage medium" and filed with the Chinese Patent Office on December 25, 2019, the entire contents of which are incorporated herein by reference.
This application relates to the field of positioning and image processing, and specifically to a robot, a positioning method, and a computer-readable storage medium.
With the rapid development of artificial intelligence, more and more automated devices have brought great convenience to people's lives. For example, robots, with their automation and intelligence, are gradually being used in all walks of life.
In the process of using and controlling a robot, it is first necessary to determine the pose of the robot; only based on the robot's pose and its control target can the robot's next action be determined and controlled. Therefore, a technical solution that can accurately determine the pose of the robot is needed.
Summary of the invention
In view of this, this application provides at least a robot, a positioning method, and a computer-readable storage medium.
In a first aspect, the present application provides a robot including a camera and a processor; the processor includes an analysis processing module, an image coordinate determination module, a pose transformation module, and a target pose determination module;
the camera is configured to capture a target image including one or more positioning markers;
the analysis processing module is configured to determine the first pose information of each of the positioning markers in a geographic coordinate system;
the image coordinate determination module is configured to determine the first position coordinates of the key points corresponding to each of the positioning markers in the target image;
the pose transformation module is configured to, for each positioning marker, determine the second pose information of the robot in the geographic coordinate system based on the first position coordinates and the first pose information corresponding to that positioning marker;
the target pose determination module is configured to determine the target pose information of the robot in the geographic coordinate system based on the second pose information corresponding to each positioning marker.
In a possible implementation manner, the analysis processing module is further configured to determine the second position coordinates of the key points corresponding to the positioning marker in a marker coordinate system;
the pose transformation module is specifically configured to:
determine the third pose information of the robot relative to the positioning marker based on the first position coordinates and the second position coordinates corresponding to the positioning marker; and
determine the second pose information of the robot in the geographic coordinate system based on the third pose information and the first pose information of the positioning marker in the geographic coordinate system.
In a possible implementation manner, the target pose determination module is specifically configured to:
determine the reprojection error corresponding to each piece of the second pose information;
adjust each piece of second pose information, taking the minimum sum of all the reprojection errors as the target; and
determine the target pose information based on each piece of adjusted second pose information.
In a possible implementation manner, when determining the target pose information based on each piece of adjusted second pose information, the target pose determination module is specifically configured to:
determine the reprojection error corresponding to each piece of adjusted second pose information; and
use the adjusted second pose information corresponding to the smallest reprojection error as the target pose information.
In a possible implementation manner, the analysis processing module is specifically configured to:
screen the positioning markers from the target image;
determine the identification information of each of the positioning markers; and
determine the first pose information of each positioning marker in the geographic coordinate system based on the identification information of each positioning marker.
In a possible implementation manner, the positioning marker is a two-dimensional code;
when determining the identification information of each of the positioning markers, the analysis processing module is specifically configured to:
decode each two-dimensional code separately to obtain the identification information of each two-dimensional code.
In a possible implementation manner, when screening the positioning markers from the target image, the analysis processing module is specifically configured to:
extract the contour information of each object from the target image;
determine the shape of each object in the target image based on the contour information of each object; and
determine whether each object is the positioning marker based on the shape of each object in the target image.
In a possible implementation manner, when determining whether each object is the positioning marker based on the shape of each object in the target image, the analysis processing module is specifically configured to:
determine the shape of the front view of each object based on the shape of each object in the target image; and
determine whether each object is the positioning marker based on the shape of the front view of each object.
In a second aspect, this application discloses a positioning method, including:
obtaining a target image taken by a robot that includes one or more positioning markers;
determining the first pose information of each of the positioning markers in a geographic coordinate system and the first position coordinates of the key points corresponding to each of the positioning markers in the target image;
for each positioning marker, determining the second pose information of the robot in the geographic coordinate system based on the first position coordinates and the first pose information corresponding to that positioning marker; and
determining the target pose information of the robot in the geographic coordinate system based on the second pose information corresponding to each positioning marker.
In a possible implementation manner, the determining the second pose information of the robot in the geographic coordinate system based on the first position coordinates and the first pose information corresponding to the positioning marker includes:
acquiring the second position coordinates of the key points corresponding to the positioning marker in a marker coordinate system;
determining the third pose information of the robot relative to the positioning marker based on the first position coordinates and the second position coordinates corresponding to the positioning marker; and
determining the second pose information of the robot in the geographic coordinate system based on the third pose information and the first pose information of the positioning marker in the geographic coordinate system.
In a possible implementation manner, the determining the target pose information of the robot in the geographic coordinate system based on the second pose information corresponding to each positioning marker includes:
determining the reprojection error corresponding to each piece of the second pose information;
adjusting each piece of second pose information, taking the minimum sum of all the reprojection errors as the target; and
determining the target pose information based on each piece of adjusted second pose information.
In a possible implementation manner, the determining the first pose information of each of the positioning markers in a geographic coordinate system includes:
screening the positioning markers from the target image;
determining the identification information of each of the positioning markers; and
determining the first pose information of each positioning marker in the geographic coordinate system based on the identification information of each positioning marker.
In a possible implementation manner, the positioning marker is a two-dimensional code;
the determining the identification information of each of the positioning markers includes:
decoding each two-dimensional code separately to obtain the identification information of each two-dimensional code.
In a possible implementation manner, the screening of the positioning markers from the target image includes:
extracting the contour information of each object from the target image;
determining the shape of each object in the target image based on the contour information of each object; and
determining whether each object is the positioning marker based on the shape of each object in the target image.
In a possible implementation manner, the determining whether each object is the positioning marker based on the shape of each object in the target image includes:
determining the shape of the front view of each object based on the shape of each object in the target image; and
determining whether each object is the positioning marker based on the shape of the front view of each object.
In a third aspect, the present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, it executes the steps of the above positioning method.
The present application provides a robot, a positioning method, and a computer-readable storage medium. The robot first takes a target image including multiple positioning markers; then, based on the first position coordinates of the key points corresponding to each positioning marker in the target image and the first pose information of each positioning marker in the geographic coordinate system, it determines multiple pieces of second pose information of the robot in the geographic coordinate system; finally, based on the second pose information corresponding to each positioning marker, it determines the target pose information of the robot in the geographic coordinate system. The above technical solution can determine the pose information of the robot in the geographic coordinate system relatively accurately based on the position information of multiple positioning markers.
In order to describe the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly introduced below. It should be understood that the following drawings only show certain embodiments of the present application and therefore should not be regarded as limiting the scope; for those of ordinary skill in the art, other related drawings can be obtained based on these drawings without creative work.
Figure 1 shows a schematic structural diagram of a robot provided by an embodiment of the present application;
Figure 2 shows a schematic diagram of an image obtained after preprocessing the target image in an embodiment of the present application;
Figure 3 shows a schematic diagram of an image obtained after polygonal approximation is performed on the preprocessed target image in an embodiment of the present application;
Figure 4 shows a front view obtained after performing an affine transformation on a parallelogram in an embodiment of the present application;
Figure 5 shows a schematic diagram of an image obtained by performing grid division on the positioning markers obtained by screening in an embodiment of the present application;
Figure 6 shows a flowchart of a positioning method provided by an embodiment of the present application.
In order to make the purpose, technical solutions, and advantages of the embodiments of this application clearer, the technical solutions in the embodiments of this application will be described clearly and completely below in conjunction with the drawings in the embodiments. It should be understood that the drawings in this application are only for the purpose of illustration and description and are not used to limit the protection scope of the present application; in addition, the schematic drawings are not drawn to scale. The flowcharts used in this application show operations implemented according to some embodiments of this application. It should be understood that the operations of a flowchart may be implemented out of order, and steps without a logical contextual relationship may be reversed in order or implemented at the same time. Moreover, under the guidance of the content of this application, those skilled in the art can add one or more other operations to a flowchart, or remove one or more operations from a flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, rather than all of them. The components of the embodiments of the present application generally described and shown in the drawings herein may be arranged and designed in various different configurations. Therefore, the following detailed description of the embodiments of the application provided in the accompanying drawings is not intended to limit the scope of the claimed application, but merely represents selected embodiments of the application. Based on the embodiments of the present application, all other embodiments obtained by those skilled in the art without creative work shall fall within the protection scope of the present application.
It should be noted that the term "including" will be used in the embodiments of the present application to indicate the existence of the features declared thereafter, but it does not exclude the addition of other features.
Based on the current needs for robot positioning and the insufficient accuracy of current robot positioning, this application provides a robot, a positioning method, and a computer-readable storage medium. The robot first photographs a target image including multiple positioning markers; after that, based on the first position coordinates of the key points corresponding to each positioning marker in the target image and the first pose information of each positioning marker in the geographic coordinate system, it determines multiple pieces of second pose information of the robot in the geographic coordinate system; finally, based on the second pose information corresponding to each positioning marker, it determines the target pose information of the robot in the geographic coordinate system. The present application can determine the pose information of the robot in the geographic coordinate system relatively accurately based on the position information of multiple positioning markers.
As shown in Figure 1, the present application provides a robot including a camera 110 and a processor 120; the processor 120 includes an analysis processing module 1201, an image coordinate determination module 1202, a pose transformation module 1203, and a target pose determination module 1204.
The camera 110 is configured to capture a target image including one or more positioning markers.
The analysis processing module 1201 is configured to determine the first pose information of each of the positioning markers in the geographic coordinate system.
The image coordinate determination module 1202 is configured to determine the first position coordinates of the key points corresponding to each positioning marker in the target image.
The pose transformation module 1203 is configured to, for each positioning marker, determine the second pose information of the robot in the geographic coordinate system based on the first position coordinates and the first pose information corresponding to that positioning marker.
The target pose determination module 1204 is configured to determine the target pose information of the robot in the geographic coordinate system based on the second pose information corresponding to each positioning marker.
The above pose information includes the position coordinates of an object in the corresponding coordinate system and the angle by which the object is rotated in the corresponding coordinate system.
The target image taken by the robot with the camera includes multiple positioning markers, and these positioning markers are used to determine the pose information of the robot in the geographic coordinate system. In practical applications, a positioning marker may be a two-dimensional code set on the ground; the analysis processing module 1201 decodes the two-dimensional code in the image to obtain the identification information of the positioning marker. After that, the analysis processing module 1201 can determine the first pose information of the positioning marker in the geographic coordinate system by using the identification information of the positioning marker.
In a specific implementation, one or more of the above positioning markers can be deployed in advance on one or more fixed objects such as the ground, shelf sides, the ceiling, and walls; the deployment position is one that the robot's camera can photograph during its movement. The shooting direction of the camera is not limited: for example, it can shoot toward the front (the side of a shelf or a wall), upward (the ceiling), downward (the ground), or toward the side (the side of a shelf or a wall). Multiple cameras can be installed on the robot to shoot in different directions. Regardless of which fixed object is used to deploy the positioning markers, the above positioning method provided by the embodiments of the present disclosure can be used. Because the pose of a positioning marker in the geographic coordinate system is predetermined when the marker is deployed, the target pose information of the robot in the geographic coordinate system can be determined based on the pose information of the positioning marker in the image coordinate system corresponding to the target image taken by the robot and the pose information of the positioning marker in the geographic coordinate system; please refer to the following description for details.
Because the positioning markers are set in advance, once a positioning marker has been set, its first pose information in the geographic coordinate system is already determined and known. After each positioning marker is set, a mapping relationship between the first pose information of each positioning marker and its identification information can be established and stored in the memory of the robot. After the analysis processing module 1201 obtains the identification information of a positioning marker in the target image, it can determine the first pose information of that positioning marker in the geographic coordinate system based on the foregoing mapping relationship.
Before the analysis processing module 1201 analyzes the positioning markers to obtain their identification information, it first needs to screen the positioning markers from the target image. Specifically, the analysis processing module 1201 uses the following steps to screen positioning markers from the target image:
步骤一、从所述目标图像中提取每个物体的轮廓信息。Step 1: Extract contour information of each object from the target image.
解析处理模块1201在从目标图像中提取每个物体的轮廓信息之前,首先需要对目标图像进行预处理,例如,可以对目标图像进行二值化处理,如图2所示为对目标图像进行二值化后得到的图像。The analysis processing module 1201 first needs to preprocess the target image before extracting the contour information of each object from the target image. For example, the target image can be binarized. The image obtained after quantization.
对目标图像进行预处理之后,解析处理模块1201从得到的图像中提取每个物体的轮廓信息。After preprocessing the target image, the analysis processing module 1201 extracts contour information of each object from the obtained image.
步骤二、基于每个物体的轮廓信息,确定每个物体在所述目标图像中的形状。Step 2: Determine the shape of each object in the target image based on the contour information of each object.
解析处理模块1201在提取到每个物体的轮廓信息之后,基于每个物体的轮廓信息分别对每个物体进行多边形近似,得到每个物体在目标图像中的形状。如图3所示为对预处理后的目标图像进行多边形近似后得到的图像。After extracting the contour information of each object, the analysis processing module 1201 respectively performs polygonal approximation on each object based on the contour information of each object to obtain the shape of each object in the target image. Figure 3 shows the image obtained after polygonal approximation of the preprocessed target image.
在确定每个物体在所述目标图像中的形状之后,可以将该形状与预存的定位标志物的形状进行匹配,若能直接匹配成功,则可以确定该物体是定位标志物。在实际实施中,由于摄像头的拍摄角度可能是任意方向的,而预存的定位标志物的形状一般是其正视图的形状,此时可以执行下述步骤三,先确定该物体的正视图的形状。After determining the shape of each object in the target image, the shape can be matched with the shape of the pre-stored positioning marker. If the direct matching is successful, it can be determined that the object is a positioning marker. In actual implementation, since the shooting angle of the camera may be in any direction, and the shape of the pre-stored positioning marker is generally the shape of the front view, the following step 3 can be performed at this time, first determine the shape of the front view of the object .
Step 3: determine the shape of the front view of each object based on the shape of each object in the target image.
After determining the shape of each object in the target image, the analysis processing module 1201 sorts the corner points of each shape and applies an affine transformation to obtain the front view of the shape, that is, the shape of the front view of each object. Figure 4 shows the front view obtained after applying an affine transformation to a parallelogram.
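A sketch of this rectification step. The disclosure speaks of an affine transformation; in practice a quadrilateral seen from an arbitrary viewing angle is often rectified with a perspective (homography) warp instead, which is what the sketch below assumes. The 64-pixel output size and the corner-ordering heuristic are likewise illustrative choices:

```python
import cv2
import numpy as np

def order_corners(pts):
    """Order four corner points as top-left, top-right, bottom-right, bottom-left."""
    s = pts.sum(axis=1)
    d = np.diff(pts, axis=1).ravel()
    return np.array([pts[np.argmin(s)], pts[np.argmin(d)],
                     pts[np.argmax(s)], pts[np.argmax(d)]], dtype=np.float32)

def rectify_to_front_view(gray_image, quad, size=64):
    """Warp a detected quadrilateral to a size x size front view."""
    src = order_corners(quad.reshape(4, 2).astype(np.float32))
    dst = np.array([[0, 0], [size - 1, 0],
                    [size - 1, size - 1], [0, size - 1]], dtype=np.float32)
    warp = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(gray_image, warp, (size, size))
```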
Step 4: determine, based on the shape of the front view of each object, whether each object is the positioning marker.
After determining the shape of the front view of each object, the analysis processing module 1201 screens the positioning markers based on those front-view shapes. For example, when the positioning marker is a two-dimensional code, the analysis processing module 1201 selects the objects whose front view is square as positioning markers.
After screening the positioning markers out of the target image, the analysis processing module 1201 divides each positioning marker in the target image into a grid of cells, and then performs a decoding operation on the gridded marker to obtain the identification information of the marker. Figure 5 shows the image obtained by dividing a screened positioning marker into a grid.
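The disclosure does not specify the encoding used by the marker, so the sketch below only illustrates the gridding step: the rectified marker is split into an N×N grid and one bit is read per cell. How those bits map to an identifier depends on the code design and is assumed here, not specified by the disclosure:

```python
import numpy as np

def sample_grid_bits(front_view, grid=6):
    """Split a rectified marker image into grid x grid cells and read one bit per cell."""
    h, w = front_view.shape[:2]
    bits = np.zeros((grid, grid), dtype=np.uint8)
    for r in range(grid):
        for c in range(grid):
            cell = front_view[r * h // grid:(r + 1) * h // grid,
                              c * w // grid:(c + 1) * w // grid]
            bits[r, c] = 1 if cell.mean() > 127 else 0
    # Mapping these bits to the marker's identification information is code-specific.
    return bits
```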
The image coordinate determination module 1202 analyzes the target image and determines the first position coordinates, in the target image, of the key points corresponding to each positioning marker. The first position coordinates here are the coordinates of the key points corresponding to the positioning marker in the robot coordinate system.
Based on the first position coordinates of the key points corresponding to a positioning marker in the robot coordinate system and the first pose information of that marker in the geographic coordinate system, the pose transformation module 1203 can determine the pose information of the robot in the geographic coordinate system, namely the second pose information described above.
Specifically, the pose transformation module 1203 may determine the second pose information of the robot in the geographic coordinate system with the following steps:
Step 1: determine the third pose information of the robot relative to the positioning marker based on the first position coordinates and the second position coordinates corresponding to the positioning marker.
Before the pose transformation module 1203 performs this step, the analysis processing module 1201 first needs to determine the second position coordinates, in the marker coordinate system, of the key points corresponding to the positioning marker. The position coordinates of a positioning marker in the marker coordinate system are determined in advance and known. A mapping between the identification information of the positioning marker and its position coordinates in the marker coordinate system is preset and stored in the memory of the robot. After decoding the identification information of the positioning marker, the analysis processing module 1201 can determine from this mapping the position coordinates of the marker in the marker coordinate system, namely the second position coordinates.
After the second position coordinates of the positioning marker in the marker coordinate system have been determined, the pose transformation module 1203 can determine, from the first position coordinates of the key points corresponding to the positioning marker in the robot coordinate system and the second position coordinates of the same key points in the marker coordinate system, the pose information of the robot relative to that positioning marker in the marker coordinate system, namely the third pose information.
Both the first position coordinates and the second position coordinates may include the coordinates of at least three non-collinear points on the positioning marker.
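One common way to obtain such a marker-relative pose from 2D-3D point correspondences is a Perspective-n-Point solver. The sketch below uses OpenCV's `solvePnP` and assumes the camera intrinsic matrix `K` is known from a prior calibration; the intrinsics, and the use of the four marker corners as key points, are assumptions of this sketch rather than requirements stated in the disclosure:

```python
import cv2
import numpy as np

def robot_pose_in_marker_frame(img_pts, marker_pts, K, dist=None):
    """Estimate the third pose: the robot (camera) pose relative to one marker.

    img_pts    -- Nx2 pixel coordinates of the marker key points (first position coordinates)
    marker_pts -- Nx3 coordinates of the same key points in the marker frame (second position coordinates)
    K          -- 3x3 camera intrinsic matrix (assumed known from calibration)
    """
    ok, rvec, tvec = cv2.solvePnP(marker_pts.astype(np.float64),
                                  img_pts.astype(np.float64), K, dist)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    T_cam_marker = np.eye(4)              # marker expressed in the camera frame
    T_cam_marker[:3, :3] = R
    T_cam_marker[:3, 3] = tvec.ravel()
    return np.linalg.inv(T_cam_marker)    # camera expressed in the marker frame
```

Note that the default iterative PnP solver generally expects at least four correspondences (for example the four corners of a square marker), which is consistent with, but stricter than, the three non-collinear points mentioned above.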
Step 2: determine the second pose information of the robot in the geographic coordinate system based on the third pose information and the first pose information of the positioning marker in the geographic coordinate system.
The pose transformation module 1203 determines the second pose information of the robot in the geographic coordinate system based on the third pose information of the robot relative to the positioning marker in the marker coordinate system and the first pose information of that marker in the geographic coordinate system.
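If both poses are written as 4×4 homogeneous transforms, this step reduces to a single composition. With `T_world_marker` denoting the first pose of the marker in the geographic coordinate system and `T_marker_robot` the third pose returned by the PnP sketch above, a candidate second pose could be obtained as follows (a sketch under those representational assumptions):

```python
import numpy as np

def robot_pose_in_world(T_world_marker, T_marker_robot):
    """Second pose: compose the marker's world pose with the robot's marker-relative pose."""
    return T_world_marker @ T_marker_robot
```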
After obtaining multiple items of second pose information of the robot in the geographic coordinate system, the target pose determination module 1204 may determine the target pose information of the robot in the geographic coordinate system with the following steps:
Step 1: determine the reprojection error corresponding to each item of second pose information.
The target pose determination module 1204 projects the positioning marker and the pose corresponding to its second pose information onto the same image, and then determines, for each item of second pose information, the error between the projection of that pose on the image and the projection of the positioning marker on the image, that is, the reprojection error corresponding to each item of second pose information.
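A sketch of one way such a reprojection error could be computed for a candidate second pose, again assuming known camera intrinsics `K` and 4×4 homogeneous poses; these are assumptions of the sketch, not requirements of the disclosure:

```python
import cv2
import numpy as np

def reprojection_error(T_world_robot, T_world_marker, marker_pts, img_pts, K, dist=None):
    """Mean pixel distance between the observed key points and the key points
    re-projected through a candidate second pose."""
    # Key points expressed in the world frame, then projected through the candidate pose.
    pts_world = (T_world_marker[:3, :3] @ marker_pts.T).T + T_world_marker[:3, 3]
    T_robot_world = np.linalg.inv(T_world_robot)
    rvec, _ = cv2.Rodrigues(T_robot_world[:3, :3])
    tvec = T_robot_world[:3, 3].reshape(3, 1)
    proj, _ = cv2.projectPoints(pts_world.astype(np.float64), rvec, tvec, K, dist)
    return float(np.linalg.norm(proj.reshape(-1, 2) - img_pts, axis=1).mean())
```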
Step 2: adjust each item of second pose information with the objective of minimizing the sum of all the reprojection errors.
Here, the target pose determination module 1204 calculates the sum of all the reprojection errors and adjusts each item of second pose information with the objective of minimizing that sum.
Step 3: determine the target pose information based on each item of adjusted second pose information.
Here, after finishing the pose adjustment, the target pose determination module 1204 determines the reprojection error corresponding to each item of adjusted second pose information, and takes the adjusted second pose information corresponding to the smallest reprojection error as the target pose information.
Alternatively, after adjusting each item of second pose information, the target pose determination module 1204 may calculate the mean of the adjusted second poses and use the resulting mean as the target pose information.
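A minimal sketch of the selection variant: given the adjusted candidate poses and their reprojection errors, keep the candidate with the smallest error. Averaging full poses instead, as in the alternative above, needs extra care for the rotation part (for example quaternion averaging), which this sketch does not attempt:

```python
import numpy as np

def pick_best_pose(poses, errors):
    """Keep the adjusted candidate second pose with the smallest reprojection error.

    poses  -- list of 4x4 candidate robot poses, one per detected marker
    errors -- matching list of reprojection errors
    """
    return poses[int(np.argmin(errors))]
```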
The above solution can be applied as a stand-alone positioning solution during robot movement. It can also be implemented in combination with other positioning solutions: for example, it can be used to determine the initial pose for another positioning solution, to calibrate the pose during positioning, or to re-localize the robot. For instance, the initial pose information of the robot can first be obtained with the positioning method of the embodiments of the present disclosure, after which the inertial measurement unit (IMU) of the robot determines the robot's pose in real time, while the solution of the embodiments of the present disclosure is applied periodically during real-time positioning to calibrate the real-time pose. The solution of the embodiments of the present disclosure can also be used to re-localize the robot when the measurement information of the IMU cannot be obtained or is unstable. The above process is applicable to visual simultaneous localization and mapping (SLAM).
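An illustrative fusion loop under this system-level description. The two callables `locate_with_markers` and `integrate_imu` stand in for the marker-based method above and for IMU dead reckoning respectively; both, and the 30-frame correction period, are placeholders assumed for the sketch rather than anything specified in the disclosure:

```python
def localization_loop(imu_samples, frames, locate_with_markers, integrate_imu,
                      correction_period=30):
    """Dead-reckon with the IMU and periodically re-anchor the pose with the
    marker-based estimate (initialization, calibration, or relocation)."""
    pose = None
    for step, (imu_sample, frame) in enumerate(zip(imu_samples, frames)):
        if pose is None or step % correction_period == 0:
            marker_pose = locate_with_markers(frame)
            if marker_pose is not None:
                pose = marker_pose
                continue
        if pose is not None:
            pose = integrate_imu(pose, imu_sample)
    return pose
```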
Corresponding to the above robot, an embodiment of the present application further provides a positioning method. The method is applied to the above robot to realize positioning of the robot and can achieve the same or similar beneficial effects, so repeated content is not described again.
As shown in Figure 6, the positioning method provided by the embodiment of the present application may include the following steps:
S610: obtain a target image, captured by the robot, that includes one or more positioning markers.
S620: determine the first pose information of each positioning marker in the geographic coordinate system and the first position coordinates, in the target image, of the key points corresponding to each positioning marker.
S630: for each positioning marker, determine the second pose information of the robot in the geographic coordinate system based on the first position coordinates and the first pose information corresponding to that positioning marker.
S640: determine the target pose information of the robot in the geographic coordinate system based on the second pose information corresponding to each positioning marker.
In some embodiments, determining the second pose information of the robot in the geographic coordinate system based on the first position coordinates and the first pose information corresponding to the positioning marker includes:
obtaining the second position coordinates, in the marker coordinate system, of the key points corresponding to the positioning marker;
determining the third pose information of the robot relative to the positioning marker based on the first position coordinates and the second position coordinates corresponding to the positioning marker; and
determining the second pose information of the robot in the geographic coordinate system based on the third pose information and the first pose information of the positioning marker in the geographic coordinate system.
In some embodiments, determining the target pose information of the robot in the geographic coordinate system based on the second pose information corresponding to each positioning marker includes:
determining the reprojection error corresponding to each item of second pose information;
adjusting each item of second pose information with the objective of minimizing the sum of all the reprojection errors; and
determining the target pose information based on each item of adjusted second pose information.
In some embodiments, determining the first pose information of each positioning marker in the geographic coordinate system includes:
screening the positioning markers from the target image;
determining the identification information of each positioning marker; and
determining the first pose information of each positioning marker in the geographic coordinate system based on the identification information of each positioning marker.
In some embodiments, the positioning marker is a two-dimensional code, and determining the identification information of each positioning marker includes:
decoding each two-dimensional code separately to obtain the identification information of each two-dimensional code.
In some embodiments, screening the positioning markers from the target image includes:
extracting the contour information of each object from the target image;
determining the shape of each object in the target image based on the contour information of each object;
determining the shape of the front view of each object based on the shape of each object in the target image; and
determining, based on the shape of the front view of each object, whether each object is the positioning marker.
An embodiment of the present application further provides a computer program product corresponding to the above method, including a computer-readable storage medium storing program code. The instructions included in the program code can be used to execute the method in the foregoing method embodiments; for the specific implementation, refer to the method embodiments, which are not repeated here.
The above description of the embodiments tends to emphasize the differences between them; for their common or similar aspects, the embodiments may be referred to one another, and, for brevity, details are not repeated herein.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems and devices described above may refer to the corresponding processes in the method embodiments and are not repeated in this application. In the several embodiments provided in this application, it should be understood that the disclosed system, device, and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into modules is only a division by logical function, and other divisions are possible in an actual implementation; as a further example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices, or modules, and may be electrical, mechanical, or in other forms.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solution of the present application in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in the present application, and these shall all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (16)
- A robot, comprising a camera and a processor, the processor comprising an analysis processing module, an image coordinate determination module, a pose transformation module, and a target pose determination module; wherein the camera is configured to capture a target image including one or more positioning markers; the analysis processing module is configured to determine first pose information of each positioning marker in a geographic coordinate system; the image coordinate determination module is configured to determine first position coordinates, in the target image, of key points corresponding to each positioning marker; the pose transformation module is configured to, for each positioning marker, determine second pose information of the robot in the geographic coordinate system based on the first position coordinates and the first pose information corresponding to that positioning marker; and the target pose determination module is configured to determine target pose information of the robot in the geographic coordinate system based on the second pose information corresponding to each positioning marker.
- The robot according to claim 1, wherein the analysis processing module is further configured to determine second position coordinates, in a marker coordinate system, of the key points corresponding to the positioning marker; and the pose transformation module is configured to: determine third pose information of the robot relative to the positioning marker based on the first position coordinates and the second position coordinates corresponding to the positioning marker; and determine the second pose information of the robot in the geographic coordinate system based on the third pose information and the first pose information of the positioning marker in the geographic coordinate system.
- The robot according to claim 1 or 2, wherein the target pose determination module is configured to: determine a reprojection error corresponding to each item of second pose information; adjust each item of second pose information with the objective of minimizing the sum of all the reprojection errors; and determine the target pose information based on each item of adjusted second pose information.
- The robot according to claim 3, wherein, when determining the target pose information based on each item of adjusted second pose information, the target pose determination module is configured to: determine the reprojection error corresponding to each item of adjusted second pose information; and take the adjusted second pose information corresponding to the smallest reprojection error as the target pose information.
- The robot according to any one of claims 1 to 4, wherein the analysis processing module is configured to: screen the positioning markers from the target image; determine identification information of each positioning marker; and determine the first pose information of each positioning marker in the geographic coordinate system based on the identification information of each positioning marker.
- The robot according to claim 5, wherein the positioning marker is a two-dimensional code; and, when determining the identification information of each positioning marker, the analysis processing module is configured to decode each two-dimensional code separately to obtain the identification information of each two-dimensional code.
- The robot according to claim 5 or 6, wherein, when screening the positioning markers from the target image, the analysis processing module is configured to: extract contour information of each object from the target image; determine the shape of each object in the target image based on the contour information of each object; and determine, based on the shape of each object in the target image, whether each object is the positioning marker.
- The robot according to claim 7, wherein, when determining whether each object is the positioning marker based on the shape of each object in the target image, the analysis processing module is configured to: determine the shape of a front view of each object based on the shape of each object in the target image; and determine, based on the shape of the front view of each object, whether each object is the positioning marker.
- A positioning method, comprising: obtaining a target image, captured by a robot, that includes one or more positioning markers; determining first pose information of each positioning marker in a geographic coordinate system and first position coordinates, in the target image, of key points corresponding to each positioning marker; for each positioning marker, determining second pose information of the robot in the geographic coordinate system based on the first position coordinates and the first pose information corresponding to that positioning marker; and determining target pose information of the robot in the geographic coordinate system based on the second pose information corresponding to each positioning marker.
- The positioning method according to claim 9, wherein determining the second pose information of the robot in the geographic coordinate system based on the first position coordinates and the first pose information corresponding to the positioning marker comprises: obtaining second position coordinates, in a marker coordinate system, of the key points corresponding to the positioning marker; determining third pose information of the robot relative to the positioning marker based on the first position coordinates and the second position coordinates corresponding to the positioning marker; and determining the second pose information of the robot in the geographic coordinate system based on the third pose information and the first pose information of the positioning marker in the geographic coordinate system.
- The positioning method according to claim 9 or 10, wherein determining the target pose information of the robot in the geographic coordinate system based on the second pose information corresponding to each positioning marker comprises: determining a reprojection error corresponding to each item of second pose information; adjusting each item of second pose information with the objective of minimizing the sum of all the reprojection errors; and determining the target pose information based on each item of adjusted second pose information.
- The positioning method according to any one of claims 9 to 11, wherein determining the first pose information of each positioning marker in the geographic coordinate system comprises: screening the positioning markers from the target image; determining identification information of each positioning marker; and determining the first pose information of each positioning marker in the geographic coordinate system based on the identification information of each positioning marker.
- The positioning method according to claim 12, wherein the positioning marker is a two-dimensional code, and determining the identification information of each positioning marker comprises: decoding each two-dimensional code separately to obtain the identification information of each two-dimensional code.
- The positioning method according to claim 12 or 13, wherein screening the positioning markers from the target image comprises: extracting contour information of each object from the target image; determining the shape of each object in the target image based on the contour information of each object; and determining, based on the shape of each object in the target image, whether each object is the positioning marker.
- The positioning method according to claim 14, wherein determining whether each object is the positioning marker based on the shape of each object in the target image comprises: determining the shape of a front view of each object based on the shape of each object in the target image; and determining, based on the shape of the front view of each object, whether each object is the positioning marker.
- A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program, when run by a processor, executes the positioning method according to any one of claims 9 to 15.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911358248.4 | 2019-12-25 | ||
CN201911358248.4A CN113031582A (en) | 2019-12-25 | 2019-12-25 | Robot, positioning method, and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021129071A1 true WO2021129071A1 (en) | 2021-07-01 |
Family
ID=76458357
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/121655 WO2021129071A1 (en) | 2019-12-25 | 2020-10-16 | Robot, positioning method, and computer readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113031582A (en) |
WO (1) | WO2021129071A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113485350A (en) * | 2021-07-22 | 2021-10-08 | 乐聚(深圳)机器人技术有限公司 | Robot movement control method, device, equipment and storage medium |
CN114227699B (en) * | 2022-02-10 | 2024-06-11 | 乐聚(深圳)机器人技术有限公司 | Robot motion adjustment method, apparatus, and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107481287A (en) * | 2017-07-13 | 2017-12-15 | 中国科学院空间应用工程与技术中心 | It is a kind of based on the object positioning and orientation method and system identified more |
CN110319834B (en) * | 2018-03-30 | 2021-04-23 | 深圳市神州云海智能科技有限公司 | Indoor robot positioning method and robot |
US11215462B2 (en) * | 2018-10-26 | 2022-01-04 | Here Global B.V. | Method, apparatus, and system for location correction based on feature point correspondence |
CN110349221A (en) * | 2019-07-16 | 2019-10-18 | 北京航空航天大学 | A kind of three-dimensional laser radar merges scaling method with binocular visible light sensor |
CN110362083B (en) * | 2019-07-17 | 2021-01-26 | 北京理工大学 | Autonomous navigation method under space-time map based on multi-target tracking prediction |
- 2019-12-25: CN CN201911358248.4A patent/CN113031582A/en active Pending
- 2020-10-16: WO PCT/CN2020/121655 patent/WO2021129071A1/en active Application Filing
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016121126A1 (en) * | 2015-01-30 | 2016-08-04 | 株式会社日立製作所 | Two-dimensional code, two-dimensional code read device, and encoding method |
CN107609451A (en) * | 2017-09-14 | 2018-01-19 | 斯坦德机器人(深圳)有限公司 | A kind of high-precision vision localization method and system based on Quick Response Code |
CN109345588A (en) * | 2018-09-20 | 2019-02-15 | 浙江工业大学 | A kind of six-degree-of-freedom posture estimation method based on Tag |
CN109579843A (en) * | 2018-11-29 | 2019-04-05 | 浙江工业大学 | Multirobot co-located and fusion under a kind of vacant lot multi-angle of view build drawing method |
CN110458863A (en) * | 2019-06-25 | 2019-11-15 | 广东工业大学 | A kind of dynamic SLAM system merged based on RGBD with encoder |
CN110570477A (en) * | 2019-08-28 | 2019-12-13 | 贝壳技术有限公司 | Method, device and storage medium for calibrating relative attitude of camera and rotating shaft |
CN110580724A (en) * | 2019-08-28 | 2019-12-17 | 贝壳技术有限公司 | method and device for calibrating binocular camera set and storage medium |
CN111179427A (en) * | 2019-12-24 | 2020-05-19 | 深圳市优必选科技股份有限公司 | Autonomous mobile device, control method thereof, and computer-readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113031582A (en) | 2021-06-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6573354B2 (en) | Image processing apparatus, image processing method, and program | |
CN110570477B (en) | Method, device and storage medium for calibrating relative attitude of camera and rotating shaft | |
JP6100380B2 (en) | Image processing method used for vision-based positioning, particularly for apparatus | |
CN108022264B (en) | Method and equipment for determining camera pose | |
CN111462207A (en) | RGB-D simultaneous positioning and map creation method integrating direct method and feature method | |
CN111094895B (en) | System and method for robust self-repositioning in pre-constructed visual maps | |
CN108022265B (en) | Method, equipment and system for determining pose of infrared camera | |
JP6860620B2 (en) | Information processing equipment, information processing methods, and programs | |
CN110827353B (en) | Robot positioning method based on monocular camera assistance | |
WO2021129071A1 (en) | Robot, positioning method, and computer readable storage medium | |
KR20150101009A (en) | Apparatus and method for image matching unmanned aerial vehicle image with map image | |
CN107636729B (en) | Lighting plan generator | |
KR102263560B1 (en) | System for setting ground control points using cluster RTK drones | |
US10509513B2 (en) | Systems and methods for user input device tracking in a spatial operating environment | |
US20240029295A1 (en) | Method and apparatus for determining pose of tracked object in image tracking process | |
JP2018173882A (en) | Information processing device, method, and program | |
US11758100B2 (en) | Portable projection mapping device and projection mapping system | |
US20240155257A1 (en) | Determining a camera control point for virtual production | |
Mutka et al. | A low cost vision based localization system using fiducial markers | |
JP4896762B2 (en) | Image processing apparatus and image processing program | |
CN107339988B (en) | Positioning processing method and device | |
CN110631586A (en) | Map construction method based on visual SLAM, navigation system and device | |
US20230033339A1 (en) | Image processing system | |
JP6186072B2 (en) | Positioning of moving objects in 3D using a single camera | |
CN114494857A (en) | Indoor target object identification and distance measurement method based on machine vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20905113 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 03.11.2022) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20905113 Country of ref document: EP Kind code of ref document: A1 |