
WO2020019221A1 - Method, apparatus and robot for autonomous positioning and map creation - Google Patents

Method, apparatus and robot for autonomous positioning and map creation

Info

Publication number
WO2020019221A1
WO2020019221A1 (application PCT/CN2018/097134)
Authority
WO
WIPO (PCT)
Prior art keywords
robot
map
landmark
distance
image
Prior art date
Application number
PCT/CN2018/097134
Other languages
French (fr)
Chinese (zh)
Inventor
徐泽元
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司
Priority to CN201880001385.XA (CN109074085B)
Priority to PCT/CN2018/097134 (WO2020019221A1)
Publication of WO2020019221A1

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means

Definitions

  • the embodiments of the present application relate to the field of artificial intelligence, for example, to an autonomous positioning and map building method, device, and robot.
  • SLAM (Simultaneous Localization and Mapping)
  • An object of the embodiments of the present application is to provide an autonomous positioning and map building method, device, and robot, which can reduce the influence of the surrounding environment on the autonomous positioning and map building of the robot.
  • an embodiment of the present application provides a method for autonomous positioning and map establishment.
  • the method is applied to a robot.
  • the method includes: acquiring distance observation values of the robot to landmark points; obtaining a position of the robot in a map; and adding new landmark points belonging to fixed objects among the landmark points to the map, and obtaining the poses of the new landmark points according to the position and the distance observation values.
  • an embodiment of the present application further provides an autonomous positioning and map establishment device, where the device is applied to a robot, and the device includes:
  • An observation distance acquisition module configured to acquire a distance observation value of the robot to a landmark point
  • a positioning module configured to obtain a position of the robot in a map
  • a mapping module is configured to add a new landmark point belonging to a fixed object among the landmark points to the map, and obtain a pose of the new landmark point according to the position and the distance observation value.
  • an embodiment of the present application further provides a robot, including:
  • at least one processor; and
  • a memory connected in communication with the at least one processor; wherein,
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the foregoing method.
  • the method, device and robot for autonomous positioning and map building provided by the embodiments of the present application add only landmarks belonging to fixed objects to the map, so as to avoid the situation in which a landmark the robot is referring to changes, and to reduce the influence of the surrounding environment on the robot's positioning and map building, thereby making the positioning of the robot and the calculation of landmarks in the map more accurate.
  • FIG. 1 is a schematic diagram of an application scenario of the method and device for autonomous positioning and map establishment of the present application
  • FIG. 2 is a schematic diagram of robot positioning and mapping in an embodiment of the present application.
  • FIG. 3 is a flowchart of an embodiment of an autonomous positioning and map establishment method of the present application.
  • FIG. 4 is a flowchart of steps for obtaining a distance observation value of a robot to a landmark in an embodiment of an autonomous positioning and map establishment method of the present application;
  • FIG. 5 is a flowchart of an embodiment of an autonomous positioning and map building method of the present application.
  • FIG. 6 is a schematic structural diagram of an embodiment of an autonomous positioning and map building device of the present application.
  • FIG. 7 is a schematic structural diagram of an embodiment of an autonomous positioning and map building device of the present application.
  • FIG. 8 is a schematic diagram of a hardware structure of a robot according to an embodiment of the present application.
  • the method and device for autonomous positioning and map establishment provided in this application are applicable to the application scenario shown in FIG. 1.
  • the application scenario includes a robot 10, where the robot 10 is a mobile robot, that is, a machine with a certain degree of artificial intelligence, for example a sweeping robot, a humanoid robot, or a self-driving car.
  • in order to complete a user's task or in other situations, the robot 10 needs to move in an unknown environment; to achieve autonomous positioning and navigation during the movement, it needs to build an incremental map and estimate its own position at the same time.
  • the map is composed of many landmarks, such as y1, y2 and y3.
  • at each moment the robot 10 observes a part of the landmarks and obtains their distance observation values z (that is, the distance between x and y).
  • the positioning of the robot 10 is to estimate the position (x) of the robot 10 in the map.
  • the robot 10 can estimate the position of the robot 10 in the existing map by observing the distance of the landmarks.
  • the establishment of the map is to estimate the position (y) of the landmark in the map, and the position (y) of the landmark can be obtained according to the position (x) and the distance observation (z) of the robot 10.
  • the positioning and mapping of the robot 10 is a continuous process. As the position of the robot 10 changes, the robot 10 will observe new landmarks and continuously add new landmarks to the map.
  • the landmarks are, for example, walls, windows, pillars, trees, buildings, tables, cabinets, flowers and plants, signs, people, pets, vehicles, and the like.
  • the movement attributes of walls, windows, pillars, trees, buildings and the like may be defined as "fixed objects"
  • the movement attributes of tables, cabinets, flowers and plants, signs and the like may be defined as "movable objects", and the movement attributes of people, pets, vehicles and the like may be defined as "moving objects"
  • the robot 10 adds only the landmarks belonging to fixed objects to the map.
  • the above definition of movement attributes can be made in advance according to the application scenario of the robot 10 and is not absolute.
  • the same object may have the movement attribute "movable object" in some application scenarios and "fixed object" in others.
  • FIG. 3 is a schematic flowchart of an autonomous positioning and map establishment method according to an embodiment of the present application. The method may be executed by the robot 10 in FIG. 1. As shown in FIG. 3, the method includes:
  • the distance observation of the robot 10 to landmarks may be based on a visual method, for example obtaining an image in front of the robot's vision through a binocular camera or a depth camera, and then obtaining the distance between the robot 10 and each landmark from the depth information of each pixel in the image; in other embodiments, the robot may also measure the distance between the robot 10 and the landmarks by other methods.
  • the robot 10 acquiring the distance observation values of the landmarks includes:
  • a first image is obtained by the camera located on the left side of the robot 10 and a second image is obtained by the camera located on the right side of the robot 10, where the left camera and the right camera may be arranged at the left eye and the right eye of the robot 10, respectively.
  • the moving attributes include moving objects, movable objects, and fixed objects.
  • the image recognition of the first image and the second image may be performed, for example, with a neural network model based on deep learning, which identifies the category of each object in the image.
  • the movement attribute of each object in the image can be determined according to the movement attribute defined in advance for the object category, and the pixels of the region corresponding to the object are marked with the category and the movement attribute (for example, moving object, movable object, or fixed object).
  • for example, if image recognition identifies the object categories in the first image as table, person, wall, etc., then, according to the definitions made in advance for tables, persons and walls, their movement attributes are movable object, moving object and fixed object, respectively.
  • the pixels corresponding to the region of the table in the image are marked with the category "table" and the movement attribute "movable object", the pixels corresponding to the region of the person are marked with "person" and "moving object",
  • and the pixels corresponding to the region of the wall are marked with "wall" and "fixed object".
  • Feature points are generally some "stable points" in the image, and will not disappear due to changes in perspective, lighting changes, and noise interference, such as corner points, edge points, bright points in dark areas, and dark points in bright areas.
  • after the feature points are extracted, the feature points marked as moving objects can be removed; because a moving object has a high probability of moving, if the robot 10 uses such objects as a reference, the positioning of the robot 10 will be inaccurate, so in this step the landmarks whose movement attribute is a moving object can be eliminated.
  • alternatively, the regions whose movement attribute is a moving object can be masked directly.
  • in that case, feature points located in a masked region are not extracted, so that the extracted feature points do not include feature points whose movement attribute is a moving object.
  • feature point matching may be performed based on, for example, a stereo matching algorithm, and matching is performed between feature points of the same category, for example between feature points that both belong to the category "table" or between feature points that both belong to the category "wall"; this reduces the matching range and improves the validity of the matching results.
  • the matching result can be used to calculate the disparity of a point on the first image and the second image according to the principle of triangulation to determine the depth of the feature point, that is, the distance of the robot 10 from the feature point.
  • the starting point (x1) of the motion of the robot 10 can be set as the origin, and the robot can observe the landmarks y1 and y2 at this position; assuming that the movement attributes of the landmarks y1 and y2 are both fixed objects, the landmarks y1 and y2 are added to the map.
  • because the robot 10 is at the origin at this time, the poses of the landmarks y1 and y2 in the map can be obtained from the distance observation values z1 and z2 of the robot to the landmarks y1 and y2 at this position.
  • when the robot 10 moves to the position x2, the distance observation values of the robot 10 to the landmarks y1 and y2 are z1' and z2', and a position search (that is, positioning) is performed according to z1' and z2' in the map (the map obtained at position x1).
  • when a certain position in the map is searched, the distance between that position and each landmark in the map is referred to as the positioning distance, and the distance observation value of the robot 10 to each corresponding landmark is referred to as the observation distance; if the degree to which the positioning distances coincide with the observation distances is greater than a preset threshold, that position is regarded as the estimated position of x2.
  • each landmark includes a plurality of feature points (landmark points). Calculating the above degree of coincidence is actually calculating the degree to which the positioning distance of each landmark point in the map matches the actual observation distance of the robot 10 to each corresponding landmark point. For example, if the preset threshold is 70, the landmark points whose positioning distance matches the observation distance are counted as 1, and the landmark points whose positioning distance does not match the observation distance are counted as 0. If there are more than 70 landmark points whose positioning distance at that position matches the actual observation distance, that position can be considered the position of the robot 10 in the map. Otherwise, a new position needs to be searched.
  • the correspondence between the landmark points observed by the robot 10 and the landmark points in the map needs to be matched.
  • the feature point matching between the landmark points observed by the robot 10 and the landmark points in the map may be performed to determine the landmark points corresponding to the landmark points observed by the robot in the map.
  • the category to which each landmark point belongs may also be marked, and the feature point matching may be performed between feature points of the same category, which reduces the matching range and improves the validity of the matching results.
  • a moving weight value may also be marked for each landmark point.
  • although both trees and buildings in the map have the movement attribute of fixed objects, buildings are, relatively speaking, less likely to move, so the movement weight value of buildings can be set greater than that of trees.
  • the above-mentioned degree of coincidence can also be obtained by combining the moving weight values corresponding to each landmark point in the map.
  • the coincidence degree and the moving weight value are positively correlated.
  • in order to reduce the search range and the amount of computation, the displacement of the robot 10 may first be estimated by a detection device such as a sensor, the position of the robot 10 is estimated from this displacement, and the position search is then performed within a certain range around the estimated position in the map. Specifically, to estimate the displacement of the robot 10 between a first moment and a second moment, feature point matching can be performed on the depth maps of the landmark points obtained at the first moment and the second moment, and the displacement of the robot 10 is obtained from the depths of the same feature point in the different depth maps. This displacement can further be filtered and fused with the attitude information provided by the Inertial Measurement Unit (IMU).
  • IMU (Inertial Measurement Unit)
  • when the robot 10 moves to the position x4, it observes a new landmark y3 at this position; if the landmark y3 also belongs to a fixed object, the landmark y3 is added to the map, and the pose of the landmark y3 in the map is obtained from the position x4 obtained by positioning and the distance observation value of the robot 10 to the landmark y3 at this position.
  • each landmark is composed of multiple feature points
  • the feature points corresponding to the landmark are also added to the map. These feature point clouds are combined to form a map.
  • the autonomous positioning and map building method provided in the embodiments of the present application adds only landmark points belonging to fixed objects to the map, so as to avoid the situation in which a landmark the robot is referring to changes, and to reduce the influence of the surrounding environment on the robot's positioning and map building, thereby making the positioning of the robot and the calculation of landmarks in the map more accurate.
  • in other embodiments, the autonomous positioning and map building method further includes, in addition to steps 101-103, step 104:
  • if the position coincides with a historical position, the position, as well as the landmark point poses obtained between the historical position and the position, is corrected according to the landmark point poses obtained at the historical position.
  • in order to correct errors caused by drift and obtain a map with consistent information, the robot 10 periodically detects whether the current position is a historical position that it has visited before, that is, it performs loop closure detection. If the current position coincides with a historical position, the current position is corrected according to the map obtained at the historical position, and the map obtained between the historical position and the current position is also corrected. Because the robot 10 eliminates the influence of landmark changes during positioning, it is easier for the robot 10 to find the historical position during loop closure detection.
  • correspondingly, an embodiment of the present application further provides an autonomous positioning and map building device, which is used for the robot 10 shown in FIG. 1; as shown in FIG. 6, the autonomous positioning and map building device 600 includes:
  • An observation distance acquisition module 601 configured to acquire a distance observation value of the robot to a landmark point
  • a positioning module 602 configured to obtain a position of the robot in a map
  • a mapping module 603 is configured to add a new landmark point belonging to a fixed object among the landmark points to the map, and obtain a pose of the new landmark point according to the position and the distance observation value.
  • the autonomous positioning and map building device provided in the embodiment of the present application adds only landmarks belonging to fixed objects to the map, so as to avoid the situation in which a landmark the robot is referring to changes, and to reduce the influence of the surrounding environment on the robot's positioning and map building, thereby making the positioning of the robot and the calculation of landmarks in the map more accurate.
  • the observation distance acquisition module 601 is specifically configured to:
  • acquire a first image and a second image within the visual range through a binocular camera device; perform image recognition on the first image and the second image respectively, identify the category of each region in the image, determine the movement attribute according to the category, and mark the category and the movement attribute for each region, where the movement attributes include moving objects, movable objects and fixed objects; extract feature points based on the first image and the second image, and remove the feature points belonging to moving objects;
  • perform feature point matching based on the first image and the second image after the feature points have been removed, where the feature point matching is performed between feature points of the same category, so as to obtain the distance observation values of the landmark points in the image whose movement attribute is not a moving object.
  • mapping module 603 is specifically configured to:
  • a new landmark point belonging to a fixed object among the landmark points is added to the map, and a category and a moving weight value are marked for the new landmark point.
  • the positioning module 602 is specifically configured to:
  • perform feature point matching between the landmark points observed by the robot at a second moment and the landmark points in the map at a first moment, to determine the landmark points in the map that correspond to the landmark points observed by the robot, where the feature point matching is performed between feature points of the same category; perform a position search in the map at the first moment according to the distance observation values of the robot to the landmark points at the second moment;
  • in combination with the movement weight values of the landmark points in the map, obtain the degree of coincidence between the positioning distances and the observation distances when the robot is at a certain position in the map, and if the degree of coincidence exceeds a preset threshold, locate that position as the position of the robot in the map, where the positioning distance is the distance between that position in the map and each landmark point, and the observation distance is the distance observation value of the robot to each corresponding landmark point.
  • in other embodiments, referring to FIG. 7, the device further includes:
  • a loop closure detection module 604, configured to, if the position coincides with a historical position, correct the position, as well as the landmark point poses obtained between the historical position and the position, according to the landmark point poses obtained at the historical position.
  • the above autonomous positioning and map building device can execute the autonomous positioning and map building method provided in the embodiments of the present application, and has the corresponding functional modules and beneficial effects of the executed method.
  • FIG. 8 is a schematic diagram of a hardware structure of the robot 10 according to an embodiment of the present application. As shown in FIG. 8, the robot 10 includes:
  • one or more processors 11 and a memory 12; one processor 11 is taken as an example in FIG. 8.
  • the processor 11 and the memory 12 may be connected through a bus or other manners. In FIG. 8, the connection through the bus is taken as an example.
  • the memory 12, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the autonomous positioning and map building method in the embodiments of the present application (for example, the observation distance acquisition module 601, the positioning module 602, and the mapping module 603 shown in FIG. 6).
  • the processor 11 executes various functional applications and data processing of the robot by running the non-volatile software programs, instructions, and modules stored in the memory 12, that is, the autonomous positioning and map establishment method of the above method embodiment.
  • the memory 12 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the autonomous positioning and map building device, and the like.
  • the memory 12 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
  • the memory 12 may optionally include a memory remotely disposed with respect to the processor 11, and these remote memories may be connected to the autonomous positioning and mapping device through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the one or more modules are stored in the memory 12, and when executed by the one or more processors 11, perform the autonomous positioning and map building method in any of the above method embodiments, for example, performing the above-described method steps 101 to 103 in FIG. 3, steps 1011 to 1014 in FIG. 4, and steps 101 to 104 in FIG. 5, and implementing the functions of modules 601 to 603 in FIG. 6 and modules 601 to 604 in FIG. 7.
  • the above product can execute the method provided in the embodiments of the present application, and has the corresponding functional modules and beneficial effects of the executed method.
  • an embodiment of the present application provides a non-volatile computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions are executed by one or more processors (for example, one processor 11 in FIG. 8), so that the one or more processors may perform the autonomous positioning and map building method in any of the above method embodiments, for example, performing the above-described method steps 101 to 103 in FIG. 3, steps 1011 to 1014 in FIG. 4, and steps 101 to 104 in FIG. 5, and implementing the functions of modules 601 to 603 in FIG. 6 and modules 601 to 604 in FIG. 7.
  • the embodiments can be implemented by means of software plus a general hardware platform, and of course, also by hardware.
  • a person of ordinary skill in the art can understand that all or part of the processes in the method of the foregoing embodiment can be completed by using a computer program to instruct related hardware.
  • the program can be stored in a computer-readable storage medium, and when the program is executed, it may include the processes of the embodiments of the methods described above.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A method for autonomous positioning and map creation, applied to a robot (10), comprising: obtaining distance observation values of the robot (10) to landmark points (101); obtaining the position of the robot (10) in a map (102); and adding new landmark points belonging to fixed objects among the landmark points to the map, and obtaining the poses of the new landmark points according to the position and the distance observation values (103). The described method adds only landmarks of fixed objects to the map, avoids the situation in which a landmark changes while the robot is referring to it, and reduces the influence of the surrounding environment on robot positioning and map creation, so that the positioning of the robot and the calculation of landmarks in the map are more accurate. Further disclosed are an apparatus for autonomous positioning and map creation, a robot for implementing the described method, a non-volatile computer-readable storage medium for implementing the described method, and a computer program product for implementing the described method.

Description

Method, device and robot for autonomous positioning and map establishment

Technical Field

The embodiments of the present application relate to the field of artificial intelligence, for example to an autonomous positioning and map building method, device, and robot.

Background

SLAM (Simultaneous Localization and Mapping) refers to a robot equipped with specific sensors that, without any prior information about the environment, builds an incremental map of the environment during its motion while estimating its own pose, thereby achieving autonomous positioning and navigation. With the development of technology, applications based on SLAM are becoming more and more common.

In the process of studying the prior art, the inventors found at least the following problems in the related art: when a robot performs positioning, it usually estimates its current position in the map according to the positions of the landmarks in the existing map and the distance observation values of those landmarks at the current moment. The pose calculation of a new landmark in the map in turn depends on the robot's position in the map and the above distance observation values. If a landmark referenced by the robot changes, the robot's positioning becomes inaccurate, and the pose calculation of new landmarks in the map also becomes inaccurate. As a result, both the positioning of the robot and the building of the map are strongly affected by the environment.
Summary of the Invention

An object of the embodiments of the present application is to provide an autonomous positioning and map building method, device, and robot, which can reduce the influence of the surrounding environment on the autonomous positioning and map building of the robot.

In a first aspect, an embodiment of the present application provides an autonomous positioning and map building method. The method is applied to a robot and includes:

acquiring distance observation values of the robot to landmark points;

obtaining a position of the robot in a map; and

adding new landmark points belonging to fixed objects among the landmark points to the map, and obtaining the poses of the new landmark points according to the position and the distance observation values.
In a second aspect, an embodiment of the present application further provides an autonomous positioning and map building device. The device is applied to a robot and includes:

an observation distance acquisition module, configured to acquire distance observation values of the robot to landmark points;

a positioning module, configured to obtain a position of the robot in a map; and

a mapping module, configured to add new landmark points belonging to fixed objects among the landmark points to the map, and obtain the poses of the new landmark points according to the position and the distance observation values.
In a third aspect, an embodiment of the present application further provides a robot, including:

at least one processor; and

a memory communicatively connected to the at least one processor; wherein

the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the above method.

The autonomous positioning and map building method, device and robot provided by the embodiments of the present application add only landmarks belonging to fixed objects to the map, so as to avoid the situation in which a landmark the robot is referring to changes, and to reduce the influence of the surrounding environment on the robot's positioning and map building, thereby making the positioning of the robot and the calculation of landmarks in the map more accurate.
Brief Description of the Drawings

One or more embodiments are exemplarily illustrated by the figures in the corresponding drawings. These exemplary illustrations do not constitute a limitation on the embodiments. Elements with the same reference numerals in the drawings denote similar elements. Unless otherwise stated, the figures in the drawings are not drawn to scale.

FIG. 1 is a schematic diagram of an application scenario of the autonomous positioning and map building method and device of the present application;

FIG. 2 is a schematic diagram of robot positioning and mapping in an embodiment of the present application;

FIG. 3 is a flowchart of an embodiment of the autonomous positioning and map building method of the present application;

FIG. 4 is a flowchart of the step of acquiring the distance observation values of the robot to landmarks in an embodiment of the autonomous positioning and map building method of the present application;

FIG. 5 is a flowchart of an embodiment of the autonomous positioning and map building method of the present application;

FIG. 6 is a schematic structural diagram of an embodiment of the autonomous positioning and map building device of the present application;

FIG. 7 is a schematic structural diagram of an embodiment of the autonomous positioning and map building device of the present application;

FIG. 8 is a schematic diagram of a hardware structure of a robot according to an embodiment of the present application.
Detailed Description

In order to make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.

The autonomous positioning and map building method and device provided in this application are applicable to the application scenario shown in FIG. 1. The application scenario includes a robot 10, where the robot 10 is a mobile robot, that is, a machine with a certain degree of artificial intelligence, for example a sweeping robot, a humanoid robot, or a self-driving car. To complete a user's task or in other situations, the robot 10 needs to move in an unknown environment. In order to achieve autonomous positioning and navigation during the movement, it needs to build an incremental map and estimate its own position at the same time.

For convenience, referring to FIG. 2, a segment of continuous motion of the robot 10 is divided into discrete moments t = 1, ..., k. At each moment, the position of the robot 10 is denoted by xn, i.e. x1, x2, ..., xk, which constitutes the motion trajectory of the robot 10. Assume that the map is composed of many landmarks, such as y1, y2 and y3 in the figure. At each moment, the robot 10 observes a part of the landmarks and obtains their distance observation values z (that is, the distance between x and y). Positioning of the robot 10 means estimating the position (x) of the robot 10 in the map; the robot 10 can estimate its own position in the existing map from the distance observation values of the landmarks. Building the map means estimating the positions (y) of the landmarks in the map, which can be obtained from the position (x) of the robot 10 and the distance observation values (z). Positioning and mapping of the robot 10 is a continuous process: as the position of the robot 10 changes, the robot 10 observes new landmarks and continuously adds them to the map.
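To make the formulation concrete, the Python sketch below mirrors the notation above: x is the robot pose at a discrete moment, y are landmark positions in the map, and z are distance observation values. The data structures and the bearing argument are assumptions made for illustration only, not part of the embodiment.

```python
import numpy as np

# x: robot pose at a discrete moment; y: landmark position; z: distance observation.
landmark_map = {}        # landmark id -> position y in the map
trajectory = []          # x1, x2, ..., xk

def observe(x, landmarks):
    """Return distance observation values z for the landmarks visible from pose x."""
    return {lid: float(np.linalg.norm(np.asarray(x) - np.asarray(y)))
            for lid, y in landmarks.items()}

def add_new_landmark(landmark_map, lid, x, z, bearing):
    """Derive the pose of a new landmark from the 2D robot position x, the distance
    observation value z and an observed bearing (assumed available from the camera)."""
    direction = np.array([np.cos(bearing), np.sin(bearing)])
    landmark_map[lid] = np.asarray(x) + z * direction
```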
The landmarks are, for example, walls, windows, pillars, trees, buildings, tables, cabinets, flowers and plants, signs, people, pets, vehicles, and the like. In some embodiments, the movement attributes of walls, windows, pillars, trees, buildings and the like may be defined as "fixed objects", the movement attributes of tables, cabinets, flowers and plants, signs and the like may be defined as "movable objects", and the movement attributes of people, pets, vehicles and the like may be defined as "moving objects". The robot 10 adds only the landmarks belonging to fixed objects to the map. Since these landmarks are unlikely to change, the situation in which a landmark changes while the robot 10 is using it as a positioning reference can be avoided, which reduces the influence of the surrounding environment on the robot's positioning and map building and makes the positioning of the robot and the calculation of landmarks in the map more accurate. It should be noted that the above definition of movement attributes can be made in advance according to the application scenario of the robot 10 and is not absolute; the same object may have the movement attribute "movable object" in some application scenarios and "fixed object" in others.

FIG. 3 is a schematic flowchart of an autonomous positioning and map building method according to an embodiment of the present application. The method may be performed by the robot 10 in FIG. 1. As shown in FIG. 3, the method includes:
101: Acquire distance observation values of the robot to landmark points.

The distance observation of the robot 10 to landmarks may be based on a visual method, for example, obtaining an image in front of the robot's vision through a binocular camera or a depth camera, and then obtaining the distance between the robot 10 and each landmark from the depth information of each pixel in the image. In other embodiments, the robot may also measure the distance between the robot 10 and the landmarks by other methods.

Taking the distance observation values obtained by a binocular camera based on a vision method as an example, referring to FIG. 4, the robot 10 acquiring the distance observation values of the landmarks includes:

1011: Acquire a first image and a second image within the visual range through a binocular camera device.

That is, the first image is obtained by the camera located on the left side of the robot 10, and the second image is obtained by the camera located on the right side of the robot 10, where the left camera and the right camera may be arranged at the left eye and the right eye of the robot 10, respectively.
1012: Perform image recognition on the first image and the second image respectively, identify the category of each region in the image, determine the movement attribute according to the category, and mark the category and the movement attribute for each region, where the movement attributes include moving objects, movable objects, and fixed objects.

Specifically, the image recognition of the first image and the second image may be performed, for example, with a neural network model based on deep learning, which identifies the category of each object in the image. The movement attribute of each object in the image can then be determined according to the movement attribute defined in advance for the object category, and the pixels of the region corresponding to the object are marked with the category and the movement attribute (for example, moving object, movable object, or fixed object). For example, if image recognition identifies the object categories in the first image as table, person, wall, etc., then, according to the definitions made in advance for tables, persons and walls, their movement attributes are movable object, moving object and fixed object, respectively. The pixels corresponding to the region of the table in the image are marked with the category "table" and the movement attribute "movable object", the pixels corresponding to the region of the person are marked with "person" and "moving object", and the pixels corresponding to the region of the wall are marked with "wall" and "fixed object". Those skilled in the art will understand that, in practical applications, computer symbols representing the actual categories and movement attributes, rather than the categories and attributes themselves, may be used when marking pixels. The movement attribute of each object can be defined in advance according to the application scenario of the robot 10.
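As a minimal sketch of this labelling step (assuming a generic semantic-segmentation model provides a per-pixel class map; the class-to-attribute table below is only an example of the pre-defined movement attributes, not part of the embodiment):

```python
# Example mapping from recognized object categories to pre-defined movement
# attributes; the table is illustrative and would be tailored to the scenario.
MOVEMENT_ATTRIBUTE = {
    "wall": "fixed object", "window": "fixed object", "pillar": "fixed object",
    "tree": "fixed object", "building": "fixed object",
    "table": "movable object", "cabinet": "movable object", "sign": "movable object",
    "person": "moving object", "pet": "moving object", "vehicle": "moving object",
}

def label_regions(class_map):
    """class_map: 2D list of category names per pixel, e.g. from a deep-learning
    segmentation model. Returns a per-pixel movement-attribute map."""
    return [[MOVEMENT_ATTRIBUTE.get(cls, "fixed object") for cls in row]
            for row in class_map]
```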
1013: Extract feature points based on the first image and the second image, and remove the feature points belonging to moving objects.

Specifically, feature points can be extracted from the pixels of the first image and the second image using algorithms such as SIFT or ORB. Feature points are generally "stable points" in the image that do not disappear due to changes in viewing angle, changes in illumination, or noise interference, such as corner points, edge points, bright points in dark areas, and dark points in bright areas. After the feature points are extracted, the feature points marked as moving objects can be removed. Because a moving object has a high probability of moving, if the robot 10 uses such objects as a reference during positioning, the positioning of the robot 10 will be inaccurate. Therefore, in this step, landmarks whose movement attribute is a moving object can be eliminated.

In some other embodiments, after each region in the image has been identified, the regions whose movement attribute is a moving object may also be masked directly. In this way, feature points located in a masked region are not extracted, so that the extracted feature points do not include feature points whose movement attribute is a moving object.
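One possible realization of this masking approach, sketched here with OpenCV's ORB detector (the embodiment does not prescribe a particular detector; the attribute labels follow the previous sketch), is to pass a mask that zeroes out moving-object regions:

```python
import cv2
import numpy as np

def extract_static_features(gray_image, attribute_map):
    """Extract ORB feature points from an 8-bit grayscale image, skipping regions
    labelled as moving objects. attribute_map: per-pixel movement attributes."""
    mask = np.where(np.asarray(attribute_map) == "moving object", 0, 255).astype(np.uint8)
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(gray_image, mask)
    return keypoints, descriptors
```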
1014: Perform feature point matching based on the first image and the second image after the feature points have been removed, where the feature point matching is performed between feature points of the same category, so as to obtain the distance observation values of the landmark points in the image whose movement attribute is not a moving object.

Specifically, the feature point matching may be performed based on, for example, a stereo matching algorithm, and matching is performed between feature points of the same category, for example between feature points that both belong to the category "table", or between feature points that both belong to the category "wall". This reduces the matching range and improves the validity of the matching results. After the feature points are matched, the matching result can be used to calculate the disparity of a point between the first image and the second image according to the principle of triangulation, so as to determine the depth of the feature point, that is, the distance from the robot 10 to the feature point.
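The triangulation step follows the standard pinhole stereo relation depth = f·B/d (focal length times baseline over disparity). The sketch below illustrates it; the focal length and baseline values in the comment are placeholders, not parameters of the embodiment.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Stereo triangulation: depth = f * B / d. Real values of f and B would
    come from the calibration of the binocular camera."""
    if disparity_px <= 0:
        return float("inf")   # no valid match between the two images
    return focal_length_px * baseline_m / disparity_px

# Example: a matched feature with 12.5 px disparity, f = 700 px and B = 0.12 m
# yields a distance observation value of about 6.7 m.
```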
102: Obtain the position of the robot in the map.

103: Add new landmark points belonging to fixed objects among the landmark points to the map, and obtain the poses of the new landmark points according to the position and the distance observation values.

Referring to FIG. 2, when the robot 10 starts a segment of motion, the starting point (x1) of the motion of the robot 10 can be set as the origin. At this position the robot can observe the landmarks y1 and y2. Assuming that the movement attributes of the landmarks y1 and y2 are both fixed objects, the landmarks y1 and y2 are added to the map. Because the robot 10 is at the origin at this time, the poses of the landmarks y1 and y2 in the map can be obtained from the distance observation values z1 and z2 of the robot to the landmarks y1 and y2 at this position.
When the robot 10 moves to the position x2, the distance observation values of the robot 10 to the landmarks y1 and y2 are z1' and z2', and a position search (that is, positioning) is performed in the map (the map obtained at position x1) according to z1' and z2'. When a certain position in the map is searched, the distance between that position and each landmark in the map is referred to as the positioning distance, and the distance observation value of the robot 10 to each corresponding landmark is referred to as the observation distance. If the degree to which the positioning distances coincide with the observation distances is greater than a preset threshold, that position is regarded as the estimated position of x2.

Only the landmarks y1 and y2 are shown in FIG. 2; in practice each landmark includes a plurality of feature points (landmark points). Calculating the above degree of coincidence is actually calculating the degree to which the positioning distance of each landmark point in the map matches the actual observation distance of the robot 10 to each corresponding landmark point. For example, if the preset threshold is 70, the landmark points whose positioning distance matches the observation distance are counted as 1, and the landmark points whose positioning distance does not match the observation distance are counted as 0. If there are more than 70 landmark points whose positioning distance at that position matches the actual observation distance, that position can be considered the position of the robot 10 in the map. Otherwise, a new position needs to be searched.

Judging whether the positioning distance of a landmark point matches the observation distance requires the corresponding landmark point; therefore, before calculating the degree of coincidence, the correspondence between the landmark points observed by the robot 10 and the landmark points in the map needs to be established. Feature point matching can be performed between the landmark points observed by the robot 10 and the landmark points in the map to determine which landmark points in the map correspond to those observed by the robot. In some embodiments, when each landmark point is added to the map, its category may also be marked, and the above feature point matching may be performed between feature points of the same category, which reduces the matching range and improves the validity of the matching results.
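A sketch of category-constrained feature matching (one possible realization, here with OpenCV's brute-force matcher, which suits binary ORB descriptors; the embodiment does not prescribe a specific matcher or data layout):

```python
import cv2

def match_within_category(feats_a, feats_b):
    """feats_*: dict mapping category -> (keypoints, descriptors).
    Matching is restricted to features of the same category, which narrows
    the search range and reduces false matches."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = {}
    for category in feats_a.keys() & feats_b.keys():
        matches[category] = matcher.match(feats_a[category][1], feats_b[category][1])
    return matches
```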
In some embodiments, when each landmark point is added to the map, a movement weight value may also be marked for it. For example, although both trees and buildings in the map have the movement attribute of fixed objects, buildings are, relatively speaking, even less likely to move; therefore, the movement weight value of buildings can be set greater than that of trees, for example setting the movement weight value of buildings to 3 and that of trees to 2. Correspondingly, the above degree of coincidence can also be obtained in combination with the movement weight values corresponding to the landmark points in the map. In some embodiments, the degree of coincidence is positively correlated with the movement weight values. Continuing the example above, assume that the movement weight value of the landmark y1 is 3, the movement weight value of the landmark y2 is 1, and the preset threshold is still 70. If 20 feature points of the landmark y1 and 15 feature points of the landmark y2 have positioning distances matching the observation distances, the degree of coincidence is 20*3 + 15*1 = 75 > 70, and the position is located successfully. If 8 feature points of the landmark y1 and 45 feature points of the landmark y2 match, the degree of coincidence is 8*3 + 45*1 = 69 < 70, the positioning at this position fails, and a new position needs to be searched in the map.
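A sketch of the weighted degree-of-coincidence test, reproducing the numbers from the example above (the data layout is an assumption for illustration):

```python
def coincidence_degree(match_flags, weights):
    """match_flags: landmark point id -> 1 if its positioning distance matches the
    observation distance, else 0. weights: landmark point id -> movement weight
    of the landmark the point belongs to."""
    return sum(weights[pid] * flag for pid, flag in match_flags.items())

def position_accepted(match_flags, weights, threshold=70):
    return coincidence_degree(match_flags, weights) > threshold

# Example from the text: 20 matching points of y1 (weight 3) and 15 of y2 (weight 1)
# give 20*3 + 15*1 = 75 > 70 (accepted); 8*3 + 45*1 = 69 < 70 would be rejected.
```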
In some embodiments, in order to reduce the search range and the amount of computation, the displacement of the robot 10 may first be estimated by a detection device such as a sensor, the position of the robot 10 is estimated from this displacement, and the position search is then performed within a certain range around the estimated position in the map. Specifically, to estimate the displacement of the robot 10 between a first moment and a second moment, feature point matching can be performed on the depth maps of the landmark points obtained at the first moment and the second moment, and the displacement of the robot 10 is obtained from the depths of the same feature point in the different depth maps. This displacement can further be filtered and fused with the attitude information provided by an inertial measurement unit (IMU).
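The embodiment mentions filtering and fusion with IMU attitude information without specifying the filter; the sketch below uses a simple fixed-weight blend purely as an illustration (alpha is an assumed weight, and a Kalman or complementary filter could equally be used):

```python
def fuse_displacement(visual_displacement, imu_displacement, alpha=0.8):
    """Blend the displacement estimated from matched depth-map features with the
    displacement integrated from IMU data. The fixed weight is only a placeholder."""
    return [alpha * v + (1.0 - alpha) * i
            for v, i in zip(visual_displacement, imu_displacement)]
```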
When the robot 10 moves to the position x4, it observes a new landmark y3 at this position. If the landmark y3 also belongs to a fixed object, the landmark y3 is added to the map. The pose of the landmark y3 in the map is obtained from the position x4 obtained by positioning and the distance observation value of the robot 10 to the landmark y3 at this position.

Because each landmark consists of multiple feature points, what is added to the map are also the multiple feature points corresponding to the landmark, and stitching these feature point clouds together constitutes the map.

The autonomous positioning and map building method provided in the embodiments of the present application adds only landmark points belonging to fixed objects to the map, so as to avoid the situation in which a landmark the robot is referring to changes, and to reduce the influence of the surrounding environment on the robot's positioning and map building, thereby making the positioning of the robot and the calculation of landmarks in the map more accurate.
In other embodiments, referring to FIG. 5, the autonomous positioning and map building method further includes, in addition to steps 101-103, step 104:

If the position coincides with a historical position, correct the position, as well as the landmark point poses obtained between the historical position and the position, according to the landmark point poses obtained at the historical position.

In order to correct errors caused by drift and obtain a map with consistent information, the robot 10 periodically detects whether the current position is a historical position that it has visited before, that is, it performs loop closure detection. If the current position coincides with a historical position, the current position is corrected according to the map obtained at the historical position, and the map obtained between the historical position and the current position is also corrected. Because the robot 10 eliminates the influence of landmark changes during positioning, it is easier for the robot 10 to find the historical position during loop closure detection.
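A minimal sketch of the revisit test in loop closure detection (the tolerance value is an assumption; the actual correction of the poses between the historical position and the current position would typically be done by a pose-graph style optimization, which is not shown here):

```python
def detect_loop(current_position, visited_positions, tolerance=0.5):
    """Return the index of a previously visited position that the current
    position coincides with (within `tolerance` metres), or None."""
    for idx, past in enumerate(visited_positions):
        dist = sum((c - p) ** 2 for c, p in zip(current_position, past)) ** 0.5
        if dist < tolerance:
            return idx
    return None
```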
Correspondingly, an embodiment of the present application further provides an autonomous positioning and map building device. The device is used for the robot 10 shown in FIG. 1. As shown in FIG. 6, the autonomous positioning and map building device 600 includes:

an observation distance acquisition module 601, configured to acquire distance observation values of the robot to landmark points;

a positioning module 602, configured to obtain a position of the robot in a map; and

a mapping module 603, configured to add new landmark points belonging to fixed objects among the landmark points to the map, and obtain the poses of the new landmark points according to the position and the distance observation values.
The autonomous positioning and map building device provided in the embodiments of the present application adds only landmarks belonging to fixed objects to the map, so as to avoid the situation in which a landmark the robot is referring to changes, and to reduce the influence of the surrounding environment on the robot's positioning and map building, thereby making the positioning of the robot and the calculation of landmarks in the map more accurate.
In some embodiments of the autonomous positioning and map building apparatus 600, the observation distance acquisition module 601 is specifically configured to:
acquire a first image and a second image within the visual range through a binocular camera device;
perform image recognition on the first image and the second image respectively, recognize the category of each region in the images, determine a move attribute according to the category, and mark each region with its category and move attribute, where the move attributes include moving object, movable object and fixed object;
extract feature points based on the first image and the second image, and remove the feature points belonging to moving objects; and
perform feature point matching based on the first image and the second image after the feature points have been removed, the feature point matching being performed between feature points of the same category, so as to obtain distance observations of the landmark points in the images whose move attribute is not moving object.
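Purely as an assumed illustration of this pipeline, and not as the application's own implementation, the sketch below matches semantically labeled feature points between the two rectified stereo images, restricts matching to points of the same category, discards points on moving objects, and converts the disparity of each match into a distance observation; using the column difference as the matching cost (instead of a descriptor distance) is a simplification made for the example.

```python
def stereo_distances(left_feats, right_feats, fx, baseline, max_row_diff=2.0):
    """Match labeled feature points between the two images of a binocular camera
    and turn the disparity of each match into a distance observation.

    Each feature is a dict with pixel coordinates (u, v), a semantic 'category'
    and a 'move_attr' already produced by an upstream recognizer. Points on
    moving objects are discarded, and matching is only attempted between
    feature points of the same category.
    """
    observations = []
    candidates = [f for f in right_feats if f["move_attr"] != "moving"]
    for lf in left_feats:
        if lf["move_attr"] == "moving":
            continue
        best, best_cost = None, float("inf")
        for rf in candidates:
            if rf["category"] != lf["category"]:
                continue                       # only match within the same class
            if abs(rf["v"] - lf["v"]) > max_row_diff:
                continue                       # rectified stereo: rows must agree
            cost = abs(lf["u"] - rf["u"])      # stand-in for a descriptor distance
            if cost < best_cost:
                best, best_cost = rf, cost
        if best is None:
            continue
        disparity = lf["u"] - best["u"]
        if disparity > 0:
            # depth from the standard rectified-stereo relation Z = f * B / d
            distance = fx * baseline / disparity
            observations.append({"category": lf["category"],
                                 "move_attr": lf["move_attr"],
                                 "distance": distance})
    return observations

# Usage: one wall point seen 20 px apart in the two images yields a 4.2 m observation.
left = [{"u": 320.0, "v": 240.0, "category": "wall", "move_attr": "fixed"}]
right = [{"u": 300.0, "v": 240.5, "category": "wall", "move_attr": "fixed"}]
print(stereo_distances(left, right, fx=700.0, baseline=0.12))
```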
Specifically, in some of these embodiments, the mapping module 603 is specifically configured to:
add new landmark points that belong to fixed objects among the landmark points to the map, and mark each new landmark point with its category and a moving weight value.
Specifically, in some of these embodiments, the positioning module 602 is specifically configured to:
perform feature point matching between the landmark points observed by the robot at a second time and the landmark points in the map at a first time, and determine the landmark points in the map at the first time that correspond to the landmark points observed by the robot at the second time, the feature point matching being performed between feature points of the same category;
perform a position search in the map at the first time according to the distance observations of the robot to the landmark points at the second time; and
obtain, in combination with the moving weight values of the landmark points in the map, a degree of agreement between positioning distances and observation distances when the robot is at a given position in the map, and if the degree of agreement exceeds a preset threshold, take that position as the position of the robot in the map, where a positioning distance is the distance between that position in the map and a landmark point, and an observation distance is the robot's distance observation to the corresponding landmark point.
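By way of a hedged example of this position search (the application does not fix a particular scoring rule), the sketch below rates candidate map positions by the weighted agreement between the positioning distances and the observation distances, giving landmarks with a larger moving weight less influence; the Gaussian-style agreement function and the 0-1 weight convention are assumptions introduced here.

```python
import math

def locate(candidates, matched_landmarks, threshold=0.8):
    """Return the first candidate position whose weighted agreement between
    positioning distances (candidate -> landmark in the map) and observation
    distances (robot -> landmark) exceeds the preset threshold, or None.

    matched_landmarks: dicts holding the landmark's map coordinates, the robot's
    observed distance to it, and its moving weight (0 = fixed, values closer to
    1 = more likely to have moved since the map was built).
    """
    for cx, cy in candidates:
        num, den = 0.0, 0.0
        for lm in matched_landmarks:
            positioning_dist = math.hypot(lm["x"] - cx, lm["y"] - cy)
            error = abs(positioning_dist - lm["observed_dist"])
            agreement = math.exp(-error ** 2 / 0.1)   # 1 when the distances agree
            weight = 1.0 - lm["move_weight"]          # trust stable landmarks more
            num += weight * agreement
            den += weight
        score = num / den if den else 0.0
        if score > threshold:
            return (cx, cy), score
    return None

# Usage: the candidate (1, 1) matches the stored landmarks almost exactly.
landmarks = [{"x": 4.0, "y": 1.0, "observed_dist": 3.0, "move_weight": 0.0},
             {"x": 1.0, "y": 5.0, "observed_dist": 4.0, "move_weight": 0.2}]
print(locate([(0.0, 0.0), (1.0, 1.0)], landmarks))
```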
In other embodiments of the autonomous positioning and map building apparatus 600, referring to FIG. 7, the apparatus further includes:
a loop closure detection module 604, configured to, if the position coincides with a historical position, correct the position according to the landmark poses obtained at the historical position, and correct the landmark poses obtained between the historical position and the position.
It should be noted that the above autonomous positioning and map building apparatus can execute the autonomous positioning and map building method provided in the embodiments of the present application, and has the corresponding functional modules and beneficial effects of executing the method. For technical details not described in detail in the apparatus embodiments, reference may be made to the autonomous positioning and map building method provided in the embodiments of the present application.
FIG. 8 is a schematic diagram of the hardware structure of the robot 10 according to an embodiment of the present application. As shown in FIG. 8, the robot 10 includes:
one or more processors 11 and a memory 12, with one processor 11 taken as an example in FIG. 8.
The processor 11 and the memory 12 may be connected through a bus or in other manners; connection through a bus is taken as an example in FIG. 8.
The memory 12, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the autonomous positioning and map building method in the embodiments of the present application (for example, the observation distance acquisition module 601, the positioning module 602 and the mapping module 603 shown in FIG. 6). The processor 11 executes the various functional applications and data processing of the robot by running the non-volatile software programs, instructions and modules stored in the memory 12, that is, it implements the autonomous positioning and map building method of the above method embodiments.
The memory 12 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the autonomous positioning and map building apparatus, and the like. In addition, the memory 12 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 12 may optionally include memories remotely disposed with respect to the processor 11, and these remote memories may be connected to the autonomous positioning and map building apparatus through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The one or more modules are stored in the memory 12 and, when executed by the one or more processors 11, execute the autonomous positioning and map building method in any of the above method embodiments, for example, execute the above-described method steps 101 to 103 in FIG. 3, method steps 1011 to 1014 in FIG. 4 and method steps 101 to 104 in FIG. 5, and implement the functions of modules 601-603 in FIG. 6 and modules 601-604 in FIG. 7.
The above product can execute the method provided in the embodiments of the present application, and has the corresponding functional modules and beneficial effects of executing the method. For technical details not described in detail in this embodiment, reference may be made to the method provided in the embodiments of the present application.
An embodiment of the present application provides a non-volatile computer-readable storage medium storing computer-executable instructions which, when executed by one or more processors, for example one processor 11 in FIG. 8, cause the one or more processors to execute the autonomous positioning and map building method in any of the above method embodiments, for example, to execute the above-described method steps 101 to 103 in FIG. 3, method steps 1011 to 1014 in FIG. 4 and method steps 101 to 104 in FIG. 5, and to implement the functions of modules 601-603 in FIG. 6 and modules 601-604 in FIG. 7.
The apparatus embodiments described above are only illustrative; the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the objective of the solution of this embodiment.
Through the description of the above embodiments, a person of ordinary skill in the art can clearly understand that the embodiments can be implemented by means of software plus a general-purpose hardware platform, and of course also by hardware. A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Finally, it should be noted that the above embodiments are only used to describe the technical solutions of the present application, not to limit them. Under the idea of the present application, the technical features in the above embodiments or in different embodiments may also be combined, the steps may be implemented in any order, and there are many other variations of the different aspects of the present application as described above which, for the sake of brevity, are not provided in detail. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or equivalently replace some of the technical features, and that such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (13)

  1. An autonomous positioning and map building method, applied to a robot, wherein the method comprises:
    acquiring distance observations from the robot to landmark points;
    acquiring a position of the robot in a map; and
    adding new landmark points that belong to fixed objects among the landmark points to the map, and obtaining poses of the new landmark points according to the position and the distance observations.
  2. The method according to claim 1, wherein the acquiring distance observations from the robot to landmark points comprises:
    acquiring a first image and a second image within the visual range through a binocular camera device;
    performing image recognition on the first image and the second image respectively, recognizing the category of each region in the images, determining a move attribute according to the category, and marking each region with its category and move attribute, the move attributes comprising moving object, movable object and fixed object;
    extracting feature points based on the first image and the second image, and removing feature points belonging to moving objects; and
    performing feature point matching based on the first image and the second image after the feature points have been removed, the feature point matching being performed between feature points of the same category, so as to obtain distance observations of the landmark points in the images whose move attribute is not moving object.
  3. The method according to claim 1 or 2, wherein the adding new landmark points that belong to fixed objects among the landmark points to the map comprises:
    adding new landmark points that belong to fixed objects among the landmark points to the map, and marking each new landmark point with its category and a moving weight value.
  4. The method according to claim 3, wherein the acquiring a position of the robot in a map comprises:
    performing feature point matching between the landmark points observed by the robot at a second time and the landmark points in the map at a first time, and determining the landmark points in the map at the first time that correspond to the landmark points observed by the robot at the second time, the feature point matching being performed between feature points of the same category;
    performing a position search in the map at the first time according to the distance observations of the robot to the landmark points at the second time; and
    obtaining, in combination with the moving weight values of the landmark points in the map, a degree of agreement between positioning distances and observation distances when the robot is at a given position in the map, and if the degree of agreement exceeds a preset threshold, taking that position as the position of the robot in the map, wherein a positioning distance is the distance between that position in the map and a landmark point, and an observation distance is the robot's distance observation to the corresponding landmark point.
  5. The method according to any one of claims 1-4, wherein the method further comprises:
    if the position coincides with a historical position, correcting the position according to the landmark poses obtained at the historical position, and correcting the landmark poses obtained between the historical position and the position.
  6. An autonomous positioning and map building apparatus, applied to a robot, wherein the apparatus comprises:
    an observation distance acquisition module, configured to acquire distance observations from the robot to landmark points;
    a positioning module, configured to acquire a position of the robot in a map; and
    a mapping module, configured to add new landmark points that belong to fixed objects among the landmark points to the map, and to obtain poses of the new landmark points according to the position and the distance observations.
  7. The apparatus according to claim 6, wherein the observation distance acquisition module is specifically configured to:
    acquire a first image and a second image within the visual range through a binocular camera device;
    perform image recognition on the first image and the second image respectively, recognize the category of each region in the images, determine a move attribute according to the category, and mark each region with its category and move attribute, the move attributes comprising moving object, movable object and fixed object;
    extract feature points based on the first image and the second image, and remove feature points belonging to moving objects; and
    perform feature point matching based on the first image and the second image after the feature points have been removed, the feature point matching being performed between feature points of the same category, so as to obtain distance observations of the landmark points in the images whose move attribute is not moving object.
  8. The apparatus according to claim 6 or 7, wherein the mapping module is specifically configured to:
    add new landmark points that belong to fixed objects among the landmark points to the map, and mark each new landmark point with its category and a moving weight value.
  9. The apparatus according to claim 8, wherein the positioning module is specifically configured to:
    perform feature point matching between the landmark points observed by the robot at a second time and the landmark points in the map at a first time, and determine the landmark points in the map at the first time that correspond to the landmark points observed by the robot at the second time, the feature point matching being performed between feature points of the same category;
    perform a position search in the map at the first time according to the distance observations of the robot to the landmark points at the second time; and
    obtain, in combination with the moving weight values of the landmark points in the map, a degree of agreement between positioning distances and observation distances when the robot is at a given position in the map, and if the degree of agreement exceeds a preset threshold, take that position as the position of the robot in the map, wherein a positioning distance is the distance between that position in the map and a landmark point, and an observation distance is the robot's distance observation to the corresponding landmark point.
  10. The apparatus according to any one of claims 6-9, wherein the apparatus further comprises:
    a loop closure detection module, configured to, if the position coincides with a historical position, correct the position according to the landmark poses obtained at the historical position, and correct the landmark poses obtained between the historical position and the position.
  11. A robot, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor so that the at least one processor can execute the method according to any one of claims 1-5.
  12. A non-volatile computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions which, when executed by a robot, cause the robot to execute the method according to any one of claims 1-5.
  13. A computer program product, wherein the computer program product comprises a computer program stored on a non-volatile computer-readable storage medium, the computer program comprising program instructions which, when executed by a robot, cause the robot to execute the method according to any one of claims 1-5.
PCT/CN2018/097134 2018-07-26 2018-07-26 Method, apparatus and robot for autonomous positioning and map creation WO2020019221A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880001385.XA CN109074085B (en) 2018-07-26 2018-07-26 Autonomous positioning and map building method and device and robot
PCT/CN2018/097134 WO2020019221A1 (en) 2018-07-26 2018-07-26 Method, apparatus and robot for autonomous positioning and map creation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/097134 WO2020019221A1 (en) 2018-07-26 2018-07-26 Method, apparatus and robot for autonomous positioning and map creation

Publications (1)

Publication Number Publication Date
WO2020019221A1 true WO2020019221A1 (en) 2020-01-30

Family

ID=64789340

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/097134 WO2020019221A1 (en) 2018-07-26 2018-07-26 Method, apparatus and robot for autonomous positioning and map creation

Country Status (2)

Country Link
CN (1) CN109074085B (en)
WO (1) WO2020019221A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112464989A (en) * 2020-11-02 2021-03-09 北京科技大学 Closed loop detection method based on target detection network
CN112683273A (en) * 2020-12-21 2021-04-20 广州慧扬健康科技有限公司 Adaptive incremental mapping method, system, computer equipment and storage medium
CN113108798A (en) * 2021-04-21 2021-07-13 浙江中烟工业有限责任公司 Multi-storage robot indoor map positioning system based on laser radar
CN113238550A (en) * 2021-04-12 2021-08-10 大连海事大学 Mobile robot vision homing method based on road sign self-adaptive correction
CN113838074A (en) * 2020-06-08 2021-12-24 北京极智嘉科技股份有限公司 Positioning method and device
US11255982B2 (en) 2018-11-30 2022-02-22 Saint-Gobain Ceramics & Plastics, Inc. Radiation detection apparatus having a reflector
CN114536326A (en) * 2022-01-19 2022-05-27 深圳市灵星雨科技开发有限公司 Road sign data processing method and device and storage medium
CN114763992A (en) * 2021-01-14 2022-07-19 未岚大陆(北京)科技有限公司 Map building method, positioning method, device, self-moving equipment and medium
CN114822120A (en) * 2021-01-29 2022-07-29 上海大唐移动通信设备有限公司 Simulation teaching device
CN119863587A (en) * 2025-03-13 2025-04-22 卧安科技(深圳)有限公司 Map construction method for body-equipped robot control system, body-equipped robot, and medium

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110068824B (en) * 2019-04-17 2021-07-23 北京地平线机器人技术研发有限公司 Sensor pose determining method and device
CN110046677B (en) * 2019-04-26 2021-07-06 山东大学 Data preprocessing method, map construction method, loop closure detection method and system
CN110175540A (en) * 2019-05-11 2019-08-27 深圳市普渡科技有限公司 Road sign map structuring system and robot
CN112596509B (en) * 2019-09-17 2024-10-18 广州汽车集团股份有限公司 Vehicle control method, device, computer equipment and computer readable storage medium
CN112629546B (en) * 2019-10-08 2023-09-19 宁波吉利汽车研究开发有限公司 Position adjustment parameter determining method and device, electronic equipment and storage medium
CN110579215B (en) * 2019-10-22 2021-05-18 上海智蕙林医疗科技有限公司 Positioning method based on environmental feature description, mobile robot and storage medium
CN111553945B (en) * 2020-04-13 2023-08-11 东风柳州汽车有限公司 Vehicle positioning method
CN114322939B (en) * 2020-09-30 2024-09-06 财团法人车辆研究测试中心 Positioning mapping method and mobile device
CN114764878A (en) * 2020-12-30 2022-07-19 氪见(南京)科技有限公司 Information processing method and device, processing equipment and mobile robot
CN112325873B (en) * 2021-01-04 2021-04-06 炬星科技(深圳)有限公司 Environment map autonomous updating method, equipment and computer readable storage medium
CN112801193B (en) * 2021-02-03 2023-04-07 拉扎斯网络科技(上海)有限公司 Positioning data processing method and device, electronic equipment and medium
CN114202689A (en) * 2021-12-06 2022-03-18 北京云迹科技股份有限公司 A point marking method, device, electronic device and storage medium
CN115344035A (en) * 2022-05-11 2022-11-15 西安达升科技股份有限公司 Robot map construction and navigation method, device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103105852A (en) * 2011-11-14 2013-05-15 联想(北京)有限公司 Method and device for displacement computing and method and device for simultaneous localization and mapping
CN104062973A (en) * 2014-06-23 2014-09-24 西北工业大学 Mobile robot SLAM method based on image marker identification
CN104916216A (en) * 2015-06-26 2015-09-16 深圳乐行天下科技有限公司 Map construction method and system thereof

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100449444C (en) * 2006-09-29 2009-01-07 浙江大学 A Method for Simultaneous Localization and Map Construction of Mobile Robots in Unknown Environments
KR101019336B1 (en) * 2007-10-29 2011-03-07 재단법인서울대학교산학협력재단 Stabilization Control System and Method Using Inertial Sensor
CN102656532B (en) * 2009-10-30 2015-11-25 悠进机器人股份公司 For ground map generalization and the update method of position of mobile robot identification
EP2527943A1 (en) * 2011-05-24 2012-11-28 BAE Systems Plc. Vehicle navigation
KR20130096539A (en) * 2012-02-22 2013-08-30 한국전자통신연구원 Autonomous moving appartus and method for controlling thereof
US9846042B2 (en) * 2014-11-13 2017-12-19 Worcester Polytechnic Institute Gyroscope assisted scalable visual simultaneous localization and mapping
CN105334858A (en) * 2015-11-26 2016-02-17 江苏美的清洁电器股份有限公司 Floor sweeping robot and indoor map establishing method and device thereof
CN106056643B (en) * 2016-04-27 2018-10-26 深圳积木易搭科技技术有限公司 A kind of indoor dynamic scene SLAM method and system based on cloud
US10884417B2 (en) * 2016-11-07 2021-01-05 Boston Incubator Center, LLC Navigation of mobile robots based on passenger following
JP6775263B2 (en) * 2016-12-02 2020-10-28 深圳前海達闥云端智能科技有限公司 Cloudminds (Shenzhen) Robotics Systems Co., Ltd. Positioning method and equipment
CN106908040B (en) * 2017-03-06 2019-06-14 哈尔滨工程大学 An autonomous positioning method for binocular panoramic vision robot based on SURF algorithm
CN107832661B (en) * 2017-09-27 2019-06-14 南通大学 A localization method for indoor mobile robot based on visual landmarks
DE102017217412A1 (en) * 2017-09-29 2019-04-04 Robert Bosch Gmbh Method, apparatus and computer program for operating a robot control system
JP6932058B2 (en) * 2017-10-11 2021-09-08 日立Astemo株式会社 Position estimation device and position estimation method for moving objects
CN107991680B (en) * 2017-11-21 2019-08-23 南京航空航天大学 SLAM method under dynamic environment based on laser radar
CN108225327B (en) * 2017-12-31 2021-05-14 芜湖哈特机器人产业技术研究院有限公司 A method of constructing and locating a top-marked map
JP7353747B2 (en) * 2018-01-12 2023-10-02 キヤノン株式会社 Information processing device, system, method, and program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103105852A (en) * 2011-11-14 2013-05-15 联想(北京)有限公司 Method and device for displacement computing and method and device for simultaneous localization and mapping
CN104062973A (en) * 2014-06-23 2014-09-24 西北工业大学 Mobile robot SLAM method based on image marker identification
CN104916216A (en) * 2015-06-26 2015-09-16 深圳乐行天下科技有限公司 Map construction method and system thereof

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11255982B2 (en) 2018-11-30 2022-02-22 Saint-Gobain Ceramics & Plastics, Inc. Radiation detection apparatus having a reflector
CN113838074A (en) * 2020-06-08 2021-12-24 北京极智嘉科技股份有限公司 Positioning method and device
CN112464989B (en) * 2020-11-02 2024-02-20 北京科技大学 Closed loop detection method based on target detection network
CN112464989A (en) * 2020-11-02 2021-03-09 北京科技大学 Closed loop detection method based on target detection network
CN112683273A (en) * 2020-12-21 2021-04-20 广州慧扬健康科技有限公司 Adaptive incremental mapping method, system, computer equipment and storage medium
CN114763992A (en) * 2021-01-14 2022-07-19 未岚大陆(北京)科技有限公司 Map building method, positioning method, device, self-moving equipment and medium
CN114822120A (en) * 2021-01-29 2022-07-29 上海大唐移动通信设备有限公司 Simulation teaching device
CN113238550B (en) * 2021-04-12 2023-10-27 大连海事大学 Mobile robot vision homing method based on road sign self-adaptive correction
CN113238550A (en) * 2021-04-12 2021-08-10 大连海事大学 Mobile robot vision homing method based on road sign self-adaptive correction
CN113108798A (en) * 2021-04-21 2021-07-13 浙江中烟工业有限责任公司 Multi-storage robot indoor map positioning system based on laser radar
CN114536326A (en) * 2022-01-19 2022-05-27 深圳市灵星雨科技开发有限公司 Road sign data processing method and device and storage medium
CN114536326B (en) * 2022-01-19 2024-03-22 深圳市灵星雨科技开发有限公司 Road sign data processing method, device and storage medium
CN119863587A (en) * 2025-03-13 2025-04-22 卧安科技(深圳)有限公司 Map construction method for body-equipped robot control system, body-equipped robot, and medium

Also Published As

Publication number Publication date
CN109074085B (en) 2021-11-09
CN109074085A (en) 2018-12-21

Similar Documents

Publication Publication Date Title
WO2020019221A1 (en) Method, apparatus and robot for autonomous positioning and map creation
CN107990899B (en) Positioning method and system based on SLAM
CN109506658B (en) Robot autonomous positioning method and system
CN110211151B (en) Method and device for tracking moving object
CN106940704B (en) Positioning method and device based on grid map
CN109472828B (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
US12062210B2 (en) Data processing method and apparatus
CN112304307A (en) Positioning method and device based on multi-sensor fusion and storage medium
JP6976350B2 (en) Imaging system for locating and mapping scenes, including static and dynamic objects
CN110298914B (en) A Method of Establishing Characteristic Map of Fruit Tree Canopy in Orchard
CN116255992A (en) Method and device for simultaneously positioning and mapping
CN110751722B (en) Simultaneous positioning and mapping method and device
CN116012540A (en) Space static map construction method and system
Tamjidi et al. 6-DOF pose estimation of a portable navigation aid for the visually impaired
CN110827353A (en) A robot positioning method based on monocular camera assistance
WO2023072269A1 (en) Object tracking
CN113984068A (en) Positioning method, positioning device, and computer-readable storage medium
CN115218906A (en) Indoor SLAM-oriented visual inertial fusion positioning method and system
CN115586767A (en) A multi-robot path planning method and device
Sartipi et al. Decentralized visual-inertial localization and mapping on mobile devices for augmented reality
CN111160280B (en) RGBD camera-based target object identification and positioning method and mobile robot
JP2022138037A (en) Information processing device, information processing method and program
CN110458177A (en) Image depth information acquisition method, image processing device and storage medium
CN117635651A (en) A dynamic environment SLAM method based on YOLOv8 instance segmentation
CN117036462A (en) Visual positioning method and device based on event camera, electronic equipment and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18927503

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/05/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18927503

Country of ref document: EP

Kind code of ref document: A1