CN112381939A - Visual SLAM method, device, robot and storage medium - Google Patents

Visual SLAM method, device, robot and storage medium Download PDF

Info

Publication number
CN112381939A
CN112381939A (Application CN202011352975.2A)
Authority
CN
China
Prior art keywords
target object
preset type
determining
line
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011352975.2A
Other languages
Chinese (zh)
Other versions
CN112381939B (en)
Inventor
李昂
郭盖华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen LD Robot Co Ltd
Original Assignee
Shenzhen LD Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen LD Robot Co Ltd filed Critical Shenzhen LD Robot Co Ltd
Priority to CN202011352975.2A priority Critical patent/CN112381939B/en
Priority claimed from CN202011352975.2A external-priority patent/CN112381939B/en
Publication of CN112381939A publication Critical patent/CN112381939A/en
Application granted granted Critical
Publication of CN112381939B publication Critical patent/CN112381939B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 - Geographic models
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The application is applicable to the technical field of image processing, and provides a visual SLAM method, a visual SLAM device, a robot and a storage medium. The method includes the steps that a current image is obtained, and a characteristic line of a preset type target object is determined from the current image; acquiring adjacent images, and determining a historical characteristic line of the preset type target object from the adjacent images; matching the characteristic line of the preset type target object with the historical characteristic line; and when the matching is successful, determining the current pose according to the characteristic line of the preset type target object and the historical characteristic line, and constructing a map according to the current pose, so that the accuracy of map construction and positioning of the visual SLAM is improved.

Description

Visual SLAM method, device, robot and storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to a visual SLAM method, an apparatus, a robot, and a storage medium.
Background
At present, Simultaneous Localization And Mapping (SLAM) is a research hotspot in realizing autonomous navigation for robots. After a robot enters an unknown environment, it senses the surrounding environment, incrementally constructs a map, and simultaneously localizes itself within that map. A conventional visual SLAM method mainly performs mapping and positioning using characteristic points in the natural environment, because such points are convenient to detect and track. However, because only characteristic point information in the natural environment is considered, the method makes inaccurate judgments in some artificial building environments, such as near walls, which seriously affects the accuracy of the visual SLAM, so that its mapping and positioning accuracy is low.
Disclosure of Invention
The embodiments of the present application provide a visual SLAM method, a device, a robot and a storage medium, which can solve the problem of low accuracy of visual SLAM mapping and positioning in weakly textured environments, where an insufficient number of feature points in the surrounding environment leads to a low matching success rate or poor precision.
In a first aspect, an embodiment of the present application provides a visual SLAM method, including:
acquiring a current image, and determining a characteristic line of a preset type target object from the current image;
acquiring adjacent images, and determining a historical characteristic line of the preset type target object from the adjacent images; matching the characteristic line of the preset type target object with the historical characteristic line;
and when the matching is successful, determining the current pose according to the characteristic line of the preset type target object and the historical characteristic line, and constructing a map according to the current pose.
Optionally, the determining a feature line of a preset type of target object from the current image includes:
extracting each line segment in the image from the current image;
determining a target object with at least two line segments intersected at one point according to the connection condition of each line segment;
and when the target object conforms to a preset type, taking a line segment corresponding to the preset type target object as a characteristic line of the preset type target object.
Optionally, the matching processing of the feature line of the preset type target object and the historical feature line includes:
determining the characteristic value of the historical characteristic line and the characteristic value of the characteristic line of the preset type target object;
calculating a first error value between the characteristic value of the historical characteristic line and the characteristic value of the characteristic line of the preset type target object;
and when the first error value is smaller than a preset first threshold value, the matching is successful.
Optionally, the matching the characteristic line of the preset type target object with the historical characteristic line further includes:
determining the characteristic points of the preset type target object according to the characteristic lines of the preset type target object;
determining historical characteristic points of the preset type target object according to the historical characteristic lines;
calculating a second error value between the position of the historical characteristic point and the position of the characteristic point of the preset type target object;
and when the second error value is smaller than a preset second threshold value, the matching is successful.
Optionally, the determining the feature point of the preset type object according to the feature line of the preset type object includes:
determining an intersection point of the feature lines of the preset type target object;
and taking the intersection point as a characteristic point of the preset type target object.
Optionally, the determining the current pose according to the feature line of the preset type target object and the historical feature line includes:
acquiring a sampling point set of the feature line of the preset type target object and the historical feature line at the same position;
calculating the sampling point set, the historical characteristic points and the characteristic points of the preset type target object according to a preset algorithm, and determining a rotation matrix and a translation matrix between the adjacent image and the current image;
and determining the current pose according to the rotation matrix and the translation matrix.
Optionally, the determining a current pose according to the feature line of the preset type target object and the historical feature line further includes:
acquiring at least three pairs of sampling points at the same positions of the characteristic line of the preset type target object and the historical characteristic line;
calculating the at least three pairs of sampling points according to a preset algorithm, and determining a rotation matrix and a translation matrix between the adjacent image and the current image;
and determining the current pose according to the rotation matrix and the translation matrix.
In a second aspect, an embodiment of the present application provides a visual SLAM device, including:
the acquisition module is used for acquiring a current image and determining a characteristic line of a preset type target object from the current image;
the matching module is used for acquiring adjacent images and determining a historical characteristic line of the preset type target object from the adjacent images; matching the characteristic line of the preset type target object with the historical characteristic line;
and the construction module is used for determining the current pose according to the characteristic line after the matching is successful, and constructing a map according to the current pose.
In a third aspect, embodiments of the present application provide a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of any one of the above-mentioned visual SLAM methods when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of any one of the above-mentioned visual SLAM methods.
In a fifth aspect, embodiments of the present application provide a computer program product, which, when run on a robot, causes the robot to perform any one of the above-mentioned visual SLAM methods of the first aspect.
The method obtains a current image and determines a characteristic line of a preset type target object from the current image, so that a characteristic line for comparison is determined in the robot's current environment. Adjacent images are acquired, and a historical characteristic line of the preset type target object is determined from the adjacent images. The characteristic line of the preset type target object is matched with the historical characteristic line to judge whether the characteristic line currently acquired by the robot is the same as the historical characteristic line of the preset type target object. When the matching is successful, the current pose is determined according to the characteristic line of the preset type target object and the historical characteristic line, so that the pose of the robot is accurately obtained from the current environment, and a map is constructed according to the current pose, thereby improving the accuracy and robustness of visual SLAM mapping and positioning.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a first schematic flow chart of a visual SLAM method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a target of a visual SLAM method provided by an embodiment of the present application;
fig. 3 is a second flowchart of a visual SLAM method provided in an embodiment of the present application;
FIG. 4 is a third schematic flow chart of a visual SLAM method provided by an embodiment of the present application;
fig. 5 is a schematic structural diagram of a visual SLAM device provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a robot provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Fig. 1 is a schematic flow chart of a visual SLAM method in an embodiment of the present application, where an execution subject of the method may be a robot, and as shown in fig. 1, the visual SLAM method may include the following steps:
step S101, obtaining a current image, and determining a characteristic line of a preset type target object from the current image.
In this embodiment, if the visual SLAM only considers feature point information in the natural environment, the accuracy of its mapping and positioning will be low. Therefore, a current image can be acquired by a camera mounted on the robot, and the robot determines a characteristic line of a preset type target object from the current image, so that the robot can compare the features of the preset type target object in the current environment and accurately judge its current pose. As shown in fig. 2, if the current target object is a corner, the three line segments forming the corner are the feature lines of the corner, and the intersection point between the feature lines may be a feature point of the corner. The preset type of target object may be a right-angle type of target object, that is, a target object having a right-angle feature.
Optionally, the camera may be installed at the front of the robot with a certain inclination angle. At least one camera may be installed on the robot, and the camera includes at least one of a monocular camera and a binocular camera for sensing the surrounding environment. Although there are many sensors for sensing the surrounding environment, the camera remains an important research topic in the field by virtue of its low price, small size and ease of installation.
Optionally, step S101 includes:
and extracting each line segment in the image from the current image.
And determining a target object of which at least two line segments intersect at one point according to the connection condition of each line segment.
And when the target object conforms to a preset type, taking a line segment corresponding to the preset type target object as a characteristic line of the preset type target object.
In this embodiment, the robot extracts each line segment from the acquired image of the current environment, determines a target object in which at least two line segments intersect at one point according to the connection status of each line segment, and determines the type of the target object. When the type of the target object meets a preset type, the at least two line segments of the target object obtained in this manner are taken as the feature lines in this embodiment.
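As an illustration of this step, the following Python sketch (hypothetical, not the patent's reference implementation) detects line segments with OpenCV's probabilistic Hough transform and groups segments whose endpoints nearly coincide into candidate corner-like target objects. The function name, the detector parameters and the endpoint tolerance are assumptions made for the example.

```python
import cv2
import numpy as np

def extract_feature_lines(image, endpoint_tol=5.0):
    """Illustrative sketch: detect line segments and keep pairs that meet at a point.

    endpoint_tol is an assumed tolerance (in pixels) for treating two segment
    endpoints as the same intersection point.
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Probabilistic Hough transform returns segments as (x1, y1, x2, y2).
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                               minLineLength=40, maxLineGap=5)
    if segments is None:
        return []
    segments = segments.reshape(-1, 4)

    feature_groups = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            ends_i = segments[i].reshape(2, 2)
            ends_j = segments[j].reshape(2, 2)
            # Two segments form a candidate target object if any pair of
            # endpoints (one from each segment) is closer than the tolerance.
            dists = np.linalg.norm(ends_i[:, None, :] - ends_j[None, :, :], axis=2)
            if dists.min() <= endpoint_tol:
                feature_groups.append((segments[i], segments[j]))
    # Each group holds line segments that (approximately) intersect at one
    # point and can then be checked against the preset type, e.g. a right angle.
    return feature_groups
```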
S102, acquiring adjacent images, and determining a historical characteristic line of the preset type target object from the adjacent images; and matching the characteristic line of the preset type target object with the historical characteristic line.
In this embodiment, after the robot acquires the current image, it further acquires an adjacent image from an adjacent time, that is, the previous image. In order to compare the two images, a historical feature line of the preset type target object is determined from the adjacent image and matched with the feature line of the preset type target object obtained from the current image. This determines whether the target object in the current image can be matched with the target object in the adjacent image, so that the robot can accurately determine its current pose from the target object in the two images.
Optionally, after the adjacent images are acquired, tracking processing may be performed according to the feature value of the feature line of the preset type target object determined from the current image, so as to determine the historical feature line in the adjacent images.
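One way to realize this tracking (an assumption, since the text does not fix a particular tracker) is pyramidal Lucas-Kanade optical flow applied to sampling points of the feature line, as sketched below; the function and parameter names are illustrative.

```python
import cv2
import numpy as np

def track_line_samples(current_image, adjacent_image, current_sample_points):
    """Track feature-line sampling points from the current image into the
    adjacent (previous) image using pyramidal Lucas-Kanade optical flow.

    current_sample_points: (N, 2) array of pixel coordinates sampled on the
    feature line detected in the current image.
    Returns the tracked points usable to form the historical feature line,
    plus a boolean mask of successfully tracked points.
    """
    cur_gray = cv2.cvtColor(current_image, cv2.COLOR_BGR2GRAY)
    adj_gray = cv2.cvtColor(adjacent_image, cv2.COLOR_BGR2GRAY)
    pts = np.asarray(current_sample_points, dtype=np.float32).reshape(-1, 1, 2)
    tracked, status, _err = cv2.calcOpticalFlowPyrLK(cur_gray, adj_gray, pts, None)
    ok = status.reshape(-1).astype(bool)
    return tracked.reshape(-1, 2)[ok], ok
```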
Optionally, as shown in fig. 3, step S102 includes:
and S301, determining the characteristic value of the historical characteristic line and the characteristic value of the characteristic line of the preset type target object.
Step S302, calculating a first error value between the characteristic value of the historical characteristic line and the characteristic value of the characteristic line of the preset type target object.
Step S303, when the first error value is smaller than a preset first threshold, the matching is successful.
In this embodiment, the robot may select a preset number of sampling points from the historical characteristic line of the preset type target object in the adjacent image, determine the position coordinates of the sampling points, and calculate the mean of those coordinates to obtain the characteristic value of the historical characteristic line. It then selects a preset number of sampling points, at the same positions as the sampling points selected on the historical characteristic line, from the characteristic line of the preset type target object in the current image, determines their position coordinates, and calculates their mean to obtain the characteristic value of the characteristic line of the preset type target object. The difference between the two characteristic values is calculated to obtain a first error value, and the first error value is compared with a first threshold preset in the robot; when the first error value is smaller than the first threshold, the two characteristic lines are successfully matched. The first threshold may be set according to the angular resolution of the camera, the frame rate and the motion speed of the robot, but is not limited thereto. For example, with a camera running at 30 frames per second at 1080p with a 120-degree field angle and a robot moving at 30 cm/s, the first threshold may be set to 3.6 pixels.
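A minimal sketch of this matching step, under the assumption (consistent with the preceding paragraph) that the characteristic value of a line is the mean of the position coordinates of its sampling points; all names are illustrative, and the default threshold is the 3.6-pixel example from the text.

```python
import numpy as np

def line_feature_value(sample_points):
    """Characteristic value of a feature line: mean of its sampling points.

    sample_points: array of shape (N, 2) with pixel coordinates.
    """
    return np.mean(np.asarray(sample_points, dtype=float), axis=0)

def match_feature_lines(current_samples, historical_samples, first_threshold=3.6):
    """Return True if the current and historical characteristic lines match.

    first_threshold is the preset first threshold in pixels; 3.6 corresponds
    to the 30 fps / 1080p / 120-degree / 30 cm/s example in the text.
    """
    current_value = line_feature_value(current_samples)
    historical_value = line_feature_value(historical_samples)
    first_error = np.linalg.norm(current_value - historical_value)
    return first_error < first_threshold
```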
Optionally, as shown in fig. 4, step S102 further includes:
step S401, determining the characteristic points of the preset type target object according to the characteristic lines of the preset type target object.
Optionally, step S401 includes:
determining an intersection point of the feature lines of the preset type target object; and taking the intersection point as a characteristic point of the preset type target object.
Optionally, a point at a preset position around the feature line may also be used as a feature point of the preset type of target object.
And S402, determining the historical characteristic points of the preset type target object according to the historical characteristic lines.
And S403, calculating a second error value between the position of the historical characteristic point and the position of the characteristic point of the preset type target object.
And S404, when the second error value is smaller than a preset second threshold value, the matching is successful.
In this embodiment, the robot may determine the historical feature point of the preset type target object from the historical feature line, and the feature point of the preset type target object from the feature line of the preset type target object in the current image, using the same feature-point selection means. A second error value is obtained by calculating the difference between the positions of the two feature points, and the second error value is compared with a second threshold preset in the robot; when the second error value is smaller than the second threshold, the two are successfully matched. The second threshold may be set according to the angular resolution of the camera, the frame rate and the moving speed of the robot, which is not limited herein. For example, with a camera running at 30 frames per second at 1080p with a 120-degree field angle and a robot moving at 30 cm/s, the second threshold may be set to 9 pixels. The feature point may be the intersection of the feature lines, as shown in fig. 2.
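The feature-point check in steps S401 to S404 can be sketched in the same style. The intersection helper below assumes each feature line is given as an endpoint pair (x1, y1, x2, y2) and that the two lines are not parallel; it is an illustrative choice rather than the patent's prescribed method, and the default threshold is the 9-pixel example from the text.

```python
import numpy as np

def line_intersection(seg_a, seg_b):
    """Intersection of the infinite lines through two segments (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = map(float, seg_a)
    x3, y3, x4, y4 = map(float, seg_b)
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        raise ValueError("segments are (nearly) parallel")
    # Standard line-line intersection from the two-point form of each line.
    det_a = x1 * y2 - y1 * x2
    det_b = x3 * y4 - y3 * x4
    px = (det_a * (x3 - x4) - (x1 - x2) * det_b) / denom
    py = (det_a * (y3 - y4) - (y1 - y2) * det_b) / denom
    return np.array([px, py])

def match_feature_points(current_segs, historical_segs, second_threshold=9.0):
    """Match the intersections (feature points) of the current and historical lines.

    current_segs / historical_segs: pairs of segments whose intersection is the
    feature point; second_threshold is the preset second threshold in pixels.
    """
    current_point = line_intersection(*current_segs)
    historical_point = line_intersection(*historical_segs)
    second_error = np.linalg.norm(current_point - historical_point)
    return second_error < second_threshold
```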
Optionally, when there are at least two preset types of objects, the relative position relationship between the at least two preset types of objects may be determined, and the auxiliary registration is performed during the matching process according to the relative position relationship between the at least two preset types of objects, so as to improve the matching speed. The relative position relationship of the at least two preset type objects can be determined by the relative position of the intersection point of the characteristic lines of the at least two preset type objects.
Optionally, the feature points may be selected by randomly setting at least two points on a fixed object in an operation scene of the robot as the feature points, so that the robot performs matching processing according to the feature points appearing in the image.
And S103, when the matching is successful, determining the current pose according to the feature line of the preset type target object and the historical feature line, and constructing a map according to the current pose.
In this embodiment, when the matching is successful, it indicates that the preset type target object in the current image of the robot is the same as the preset type target object in the adjacent image and meets the set threshold conditions, so the current pose of the robot can be calculated from it. The robot therefore calculates, from the feature line of the preset type target object in the current image together with the historical feature line in the adjacent image, the movement of the robot between the moment the adjacent image was acquired and the moment the current image was acquired, obtains the pose of the robot at the current moment, and constructs a map with the current pose as a reference.
Optionally, step S103 includes:
and acquiring a sampling point set of the feature line of the preset type target object at the same position as the historical feature line.
And calculating the sampling point set, the historical characteristic points and the characteristic points of the preset type target object according to a preset algorithm, and determining a rotation matrix and a translation matrix between the adjacent image and the current image.
And determining the current pose according to the rotation matrix and the translation matrix.
In this embodiment, the robot may determine its current pose by combining the feature lines and feature points of the preset type target object. It acquires sampling points at the same positions on the feature line of the preset type target object and on the historical feature line, together with the historical feature point and the feature point of the preset type target object, and calculates these data points with a preset algorithm to determine a rotation matrix and a translation matrix between the adjacent image and the current image. From the rotation matrix and the translation matrix, the movement of the robot between the moment the adjacent image was acquired and the moment the current image was acquired can be obtained; that is, the pose at the current moment can be obtained from the pose of the adjacent image through the rotation matrix and the translation matrix.
Optionally, a set of sampling points at close positions on the feature line of the preset type target object and the historical feature line may be selected for the calculation, where a pair of close sampling points consists of one point from the feature line of the preset type target object and one from the historical feature line whose position difference is smaller than or equal to a preset position threshold.
It is understood that the sampling point set obtained from a feature line may be reduced, by averaging the points in the set, to a single point representing the feature line of the preset type target object and a single point representing the historical feature line. For example, if a point A, a point B and a point C are selected from the feature line of the preset type target object, the point representing the feature line after averaging is (A + B + C)/3; similarly, three points at the corresponding positions are selected from the historical feature line and averaged to obtain the point representing the historical feature line.
By way of specific example and not limitation, suppose the real scene contains preset feature points P1, P2, P3 and a characteristic line L. By acquiring the current image A and the adjacent image B, the robot obtains P1′, P2′, P3′ and a point L′ representing the characteristic line in the current image A, and P1″, P2″, P3″ and a point L″ representing the characteristic line in the adjacent image B; the point representing the characteristic line L is obtained by selecting a preset number of sampling points on the characteristic line and averaging them. Calculating the obtained data according to a preset algorithm gives the formulas:
P1′ = K*P1;  P1″ = K*(R*P1 + T);
P2′ = K*P2;  P2″ = K*(R*P2 + T);
P3′ = K*P3;  P3″ = K*(R*P3 + T);
L′ = K*L;    L″ = K*(R*L + T);
wherein K is the camera intrinsic matrix preset in the robot, R is a rotation matrix, and T is a translation matrix; the motion (R, T) of the robot can be deduced from these formulas. From the rotation matrix and the translation matrix, the movement of the robot between the moment the adjacent image was acquired and the moment the current image was acquired can be obtained, and the pose of the robot at the current moment can be obtained from the pose of the adjacent image through the rotation matrix and the translation matrix.
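The patent leaves the "preset algorithm" open. One common way to realize the projection relations above is the standard epipolar-geometry pipeline sketched below, where the matched points (feature points plus the points representing the feature lines) and the intrinsic matrix K are used to recover R and T. OpenCV's findEssentialMat/recoverPose are used here as an assumed, off-the-shelf substitute for the preset algorithm; note that in this monocular formulation T is recovered only up to scale.

```python
import cv2
import numpy as np

def estimate_motion(points_current, points_adjacent, K):
    """Recover the rotation matrix R and translation direction T between frames.

    points_current / points_adjacent: (N, 2) arrays of matched pixel coordinates
    (N >= 5), e.g. the feature points and the points representing feature lines.
    K: 3x3 camera intrinsic matrix preset in the robot.
    """
    pts_cur = np.asarray(points_current, dtype=np.float64)
    pts_adj = np.asarray(points_adjacent, dtype=np.float64)
    # Essential matrix from the matched correspondences, using RANSAC to
    # reject outliers left over from the line/point matching stage.
    E, inliers = cv2.findEssentialMat(pts_adj, pts_cur, K,
                                      method=cv2.RANSAC, prob=0.999, threshold=1.0)
    # Decompose E into the rotation and (unit-scale) translation taking the
    # adjacent frame into the current frame.
    _, R, T, _ = cv2.recoverPose(E, pts_adj, pts_cur, K, mask=inliers)
    return R, T

# The current pose is then obtained by composing the adjacent-frame pose
# with (R, T), exactly as described in the text.
```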
Optionally, step S103 further includes:
and acquiring at least three pairs of sampling points at the same positions of the characteristic line of the preset type target object and the historical characteristic line.
And calculating the at least three pairs of sampling points according to a preset algorithm, and determining a rotation matrix and a translation matrix between the adjacent image and the current image.
And determining the current pose according to the rotation matrix and the translation matrix.
In this embodiment, the robot may also determine its current pose from the feature lines alone. It acquires at least three pairs of sampling points at the same positions on the feature line of the preset type target object and on the historical feature line, and calculates these sampling points with the same algorithm as above to determine a rotation matrix and a translation matrix between the adjacent image and the current image. From the rotation matrix and the translation matrix, the movement of the robot between the moment the adjacent image was acquired and the moment the current image was acquired can be obtained; that is, the pose at the current moment can be obtained from the pose of the adjacent image through the rotation matrix and the translation matrix.
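For the variant that uses at least three pairs of sampling points, one workable realization (an assumption, since the patent only says "a preset algorithm") is the SVD-based rigid alignment below. It requires the sampling points as 3D coordinates, for example from the binocular camera mentioned earlier, and all names are illustrative.

```python
import numpy as np

def rigid_transform_from_pairs(points_adjacent, points_current):
    """Estimate R, T such that points_current is approximately R @ points_adjacent + T.

    Both inputs are (N, 3) arrays of corresponding 3D sampling points, with
    N >= 3 and the points not collinear. This is the classic Kabsch-style
    least-squares alignment of two point sets.
    """
    A = np.asarray(points_adjacent, dtype=float)
    B = np.asarray(points_current, dtype=float)
    centroid_a, centroid_b = A.mean(axis=0), B.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (A - centroid_a).T @ (B - centroid_b)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = centroid_b - R @ centroid_a
    return R, T
```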
The method obtains a current image and determines a characteristic line of a preset type target object from the current image, so that a characteristic line for comparison is determined in the robot's current environment. Adjacent images are acquired, and a historical characteristic line of the preset type target object is determined from the adjacent images. The characteristic line of the preset type target object is matched with the historical characteristic line to judge whether the characteristic line currently acquired by the robot is the same as the historical characteristic line of the preset type target object. When the matching is successful, the current pose is determined according to the characteristic line of the preset type target object and the historical characteristic line, so that the pose of the robot is accurately obtained from the current environment, and a map is constructed according to the current pose, thereby improving the accuracy and robustness of visual SLAM mapping and positioning.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Corresponding to the above-described visual SLAM method, fig. 5 is a schematic structural diagram of a visual SLAM device in an embodiment of the present application, and as shown in fig. 5, the visual SLAM device may include:
an obtaining module 501, configured to obtain a current image, and determine a feature line of a preset type target object from the current image.
A matching module 502, configured to obtain adjacent images, and determine a historical characteristic line of the preset type target object from the adjacent images; and matching the characteristic line of the preset type target object with the historical characteristic line.
And a building module 503, configured to determine a current pose according to the feature line after matching is successful, and build a map according to the current pose.
Optionally, the obtaining module 501 may include:
and the extracting unit is used for extracting each line segment in the image from the current image.
The first determining unit is used for determining a target object of which at least two line segments intersect at one point according to the connection condition of each line segment.
And the characteristic line unit is used for taking the line segment corresponding to the preset type target object as the characteristic line of the preset type target object when the target object conforms to the preset type.
Optionally, the matching module 502 may include:
and the second determining unit is used for determining the characteristic value of the historical characteristic line and the characteristic value of the characteristic line of the preset type target object.
The first calculating unit is used for calculating a first error value between the characteristic value of the historical characteristic line and the characteristic value of the characteristic line of the preset type target object.
And the first matching unit is used for matching successfully when the first error value is smaller than a preset first threshold value.
Optionally, the matching module 502 may further include:
and the third determining unit is used for determining the characteristic point of the preset type target object according to the characteristic line of the preset type target object.
And the fourth determining unit is used for determining the historical characteristic points of the preset type target object according to the historical characteristic lines.
And the second calculation unit is used for calculating a second error value between the position of the historical characteristic point and the position of the characteristic point of the preset type target object.
And the second matching unit is used for matching successfully when the second error value is smaller than a preset second threshold value.
Optionally, the second determining unit may include:
and the intersection point determining subunit is used for determining the intersection points of the intersection between the characteristic lines of the preset type target objects.
And the feature point subunit is used for taking the intersection point as a feature point of the preset type target object.
Optionally, the building module 503 may include:
and the first acquisition unit is used for acquiring a sampling point set at the same position of the characteristic line of the preset type target object and the historical characteristic line.
And the third calculating unit is used for calculating the sampling point set, the historical characteristic points and the characteristic points of the preset type target object according to a preset algorithm and determining a rotation matrix and a translation matrix between the adjacent image and the current image.
And the fifth determining unit is used for determining the current pose according to the rotation matrix and the translation matrix.
Optionally, the building module 503 may further include:
and the second acquisition unit is used for acquiring at least three pairs of sampling points at the same positions of the characteristic line of the preset type target object and the historical characteristic line.
And the fourth calculating unit is used for calculating the at least three pairs of sampling points according to a preset algorithm and determining a rotation matrix and a translation matrix between the adjacent image and the current image.
And the sixth determining unit is used for determining the current pose according to the rotation matrix and the translation matrix.
The device obtains a current image and determines a characteristic line of a preset type target object from the current image, so that a characteristic line for comparison is determined in the robot's current environment. Adjacent images are acquired, and a historical characteristic line of the preset type target object is determined from the adjacent images. The characteristic line of the preset type target object is matched with the historical characteristic line to judge whether the characteristic line currently acquired by the robot is the same as the historical characteristic line of the preset type target object. When the matching is successful, the current pose is determined according to the characteristic line of the preset type target object and the historical characteristic line, so that the pose of the robot is accurately obtained from the current environment, and a map is constructed according to the current pose, thereby improving the accuracy and robustness of visual SLAM mapping and positioning.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatus and the module described above may refer to corresponding processes in the foregoing system embodiments and method embodiments, and are not described herein again.
Fig. 6 is a schematic structural diagram of a robot according to an embodiment of the present application. For convenience of explanation, only portions related to the embodiments of the present application are shown.
As shown in fig. 6, the robot 6 of this embodiment includes: at least one processor 600 (only one shown in fig. 6), a memory 601 connected to the processor 600, and a computer program 602, such as a visual SLAM program, stored in the memory 601 and executable on the at least one processor 600. The processor 600, when executing the computer program 602, implements the steps in the various visual SLAM method embodiments described above, such as steps S101-S103 shown in fig. 1. Alternatively, the processor 600 executes the computer program 602 to implement the functions of the modules in the device embodiments, such as the functions of the modules 501 to 503 shown in fig. 5.
Illustratively, the computer program 602 may be partitioned into one or more modules that are stored in the memory 601 and executed by the processor 600 to accomplish the present application. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 602 in the robot 6. For example, the computer program 602 may be divided into an obtaining module 501, a matching module 502, and a constructing module 503, and the specific functions of each module are as follows:
an obtaining module 501, configured to obtain a current image, and determine a feature line of a preset type target object from the current image;
a matching module 502, configured to obtain adjacent images, and determine a historical characteristic line of the preset type target object from the adjacent images; matching the characteristic line of the preset type target object with the historical characteristic line;
and a building module 503, configured to determine a current pose according to the feature line after matching is successful, and build a map according to the current pose.
The robot 6 may include, but is not limited to, a processor 600, a memory 601. Those skilled in the art will appreciate that fig. 6 is merely an example of the robot 6, and does not constitute a limitation on the robot 6, and may include more or less components than those shown, or combine certain components, or different components, such as input and output devices, network access devices, buses, etc.
The Processor 600 may be a Central Processing Unit (CPU), and the Processor 600 may be other general-purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 601 may in some embodiments be an internal storage unit of the robot 6, such as a hard disk or a memory of the robot 6. The memory 601 may also be an external storage device of the robot 6 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the robot 6. Further, the memory 601 may also include both an internal storage unit and an external storage device of the robot 6. The memory 601 is used for storing an operating system, an application program, a Boot Loader (Boot Loader), data, and other programs, such as program codes of the computer programs. The memory 601 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/robot and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/robot are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and implements the steps of the method embodiments described above when executed by a processor. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing device/robot, a recording medium, computer memory, Read-Only Memory (ROM), Random Access Memory (RAM), an electrical carrier wave signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk or an optical disk. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A visual SLAM method, comprising:
acquiring a current image, and determining a characteristic line of a preset type target object from the current image;
acquiring adjacent images, and determining a historical characteristic line of the preset type target object from the adjacent images; matching the characteristic line of the preset type target object with the historical characteristic line;
and when the matching is successful, determining the current pose according to the characteristic line of the preset type target object and the historical characteristic line, and constructing a map according to the current pose.
2. The visual SLAM method of claim 1, wherein said determining a feature line of a preset type of object from the current image comprises:
extracting each line segment in the image from the current image;
determining a target object with at least two line segments intersected at one point according to the connection condition of each line segment;
and when the target object conforms to a preset type, taking a line segment corresponding to the preset type target object as a characteristic line of the preset type target object.
3. The visual SLAM method of claim 1, wherein said matching the feature line of the preset type of object to the historical feature line comprises:
determining the characteristic value of the historical characteristic line and the characteristic value of the characteristic line of the preset type target object;
calculating a first error value between the characteristic value of the historical characteristic line and the characteristic value of the characteristic line of the preset type target object;
and when the first error value is smaller than a preset first threshold value, the matching is successful.
4. The visual SLAM method of claim 1, wherein the matching the feature line of the preset type of object to the historical feature line further comprises:
determining the characteristic points of the preset type target object according to the characteristic lines of the preset type target object;
determining historical characteristic points of the preset type target object according to the historical characteristic lines;
calculating a second error value between the position of the historical characteristic point and the position of the characteristic point of the preset type target object;
and when the second error value is smaller than a preset second threshold value, the matching is successful.
5. The visual SLAM method of claim 4, wherein said determining feature points of the preset type of object from the feature lines of the preset type of object comprises:
determining an intersection point of the feature lines of the preset type target object;
and taking the intersection point as a characteristic point of the preset type target object.
6. The visual SLAM method of claim 4, wherein said determining a current pose from the feature lines of the preset type of object and the historical feature lines comprises:
acquiring a sampling point set of the feature line of the preset type target object and the historical feature line at the same position;
calculating the sampling point set, the historical characteristic points and the characteristic points of the preset type target object according to a preset algorithm, and determining a rotation matrix and a translation matrix between the adjacent image and the current image;
and determining the current pose according to the rotation matrix and the translation matrix.
7. The visual SLAM method of claim 1, wherein the determining a current pose from the feature lines of the preset type of object and the historical feature lines, further comprises:
acquiring at least three pairs of sampling points at the same positions of the characteristic line of the preset type target object and the historical characteristic line;
calculating the at least three pairs of sampling points according to a preset algorithm, and determining a rotation matrix and a translation matrix between the adjacent image and the current image;
and determining the current pose according to the rotation matrix and the translation matrix.
8. A visual SLAM device, comprising:
the acquisition module is used for acquiring a current image and determining a characteristic line of a preset type target object from the current image;
the matching module is used for acquiring adjacent images and determining a historical characteristic line of the preset type target object from the adjacent images; matching the characteristic line of the preset type target object with the historical characteristic line;
and the construction module is used for determining the current pose according to the characteristic line after the matching is successful, and constructing a map according to the current pose.
9. A robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of a visual SLAM method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of a visual SLAM method according to any one of claims 1 to 7.
CN202011352975.2A 2020-11-26 Visual SLAM method, device, robot and storage medium Active CN112381939B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011352975.2A CN112381939B (en) 2020-11-26 Visual SLAM method, device, robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011352975.2A CN112381939B (en) 2020-11-26 Visual SLAM method, device, robot and storage medium

Publications (2)

Publication Number Publication Date
CN112381939A true CN112381939A (en) 2021-02-19
CN112381939B CN112381939B (en) 2024-11-12

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104077809A (en) * 2014-06-24 2014-10-01 Shanghai Jiao Tong University Visual SLAM method based on structural lines
JP2017208038A (en) * 2016-05-20 2017-11-24 Ricoh Co., Ltd. Image processing apparatus, image processing method, image processing system and program
CN109558879A (en) * 2017-09-22 2019-04-02 Huawei Technologies Co., Ltd. A kind of vision SLAM method and apparatus based on dotted line feature
CN110246147A (en) * 2019-05-14 2019-09-17 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences Vision inertia odometer method, vision inertia mileage counter device and mobile device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIANG BIAN et al., "A Point-line-based SLAM Framework for UAV Close Proximity Transmission Tower Inspection", Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics, December 12-15, 2018, Kuala Lumpur, Malaysia, pages 1016-1021 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114998433A (en) * 2022-05-31 2022-09-02 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Pose calculation method and device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
US11002840B2 (en) Multi-sensor calibration method, multi-sensor calibration device, computer device, medium and vehicle
CN110322500B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
US9270891B2 (en) Estimation of panoramic camera orientation relative to a vehicle coordinate frame
CN112880687B (en) Indoor positioning method, device, equipment and computer readable storage medium
CN112667837A (en) Automatic image data labeling method and device
CN110349212B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
CN110260857A (en) Calibration method, device and the storage medium of vision map
CN112198878B (en) Instant map construction method and device, robot and storage medium
CN115366097A (en) Robot following method, device, robot and computer readable storage medium
CN110673607B (en) Feature point extraction method and device under dynamic scene and terminal equipment
CN113227708B (en) Method and device for determining pitch angle and terminal equipment
CN111157012B (en) Robot navigation method and device, readable storage medium and robot
CN113601510A (en) Robot movement control method, device, system and equipment based on binocular vision
CN113515143A (en) Robot navigation method, robot and computer readable storage medium
CN111223139B (en) Target positioning method and terminal equipment
CN112381939B (en) Visual SLAM method, device, robot and storage medium
CN112381939A (en) Visual SLAM method, device, robot and storage medium
WO2022205841A1 (en) Robot navigation method and apparatus, and terminal device and computer-readable storage medium
CN114066930A (en) Planar target tracking method and device, terminal equipment and storage medium
CN112614181B (en) Robot positioning method and device based on highlight target
CN114739382A (en) Robot, and method, device and storage medium for establishing image of robot
CN117541464A (en) Global initialization method, system, vehicle, storage medium and equipment
CN118982462A (en) Dynamic feature point removing method and system based on dense optical flow and epipolar constraint
CN115690148A (en) Electronic device, optical flow tracking method, optical flow tracking device, and storage medium
CN117671217A (en) Equipment installation relocation method and device, server and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000 room 1601, building 2, Vanke Cloud City phase 6, Tongfa South Road, Xili community, Xili street, Nanshan District, Shenzhen City, Guangdong Province (16th floor, block a, building 6, Shenzhen International Innovation Valley)

Applicant after: Shenzhen Ledong robot Co.,Ltd.

Address before: 518000 room 1601, building 2, Vanke Cloud City phase 6, Tongfa South Road, Xili community, Xili street, Nanshan District, Shenzhen City, Guangdong Province (16th floor, block a, building 6, Shenzhen International Innovation Valley)

Applicant before: SHENZHEN LD ROBOT Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant