
CN110738703A - Positioning method and device, terminal and storage medium - Google Patents

Positioning method and device, terminal and storage medium Download PDF

Info

Publication number
CN110738703A
CN110738703A (application CN201910921590.4A)
Authority
CN
China
Prior art keywords
image
dimensional
feature
key frame
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910921590.4A
Other languages
Chinese (zh)
Other versions
CN110738703B (en)
Inventor
金珂
李姬俊楠
马标
蒋燚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910921590.4A priority Critical patent/CN110738703B/en
Publication of CN110738703A publication Critical patent/CN110738703A/en
Application granted granted Critical
Publication of CN110738703B publication Critical patent/CN110738703B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The embodiments of the application disclose a positioning method, which comprises: extracting a first image feature of an image to be processed; matching, according to the first image feature, a second image feature from the image features of the key frame two-dimensional images stored in a preset map and the corresponding depth image features; and determining the pose information of the image acquisition device used for acquiring the image to be processed according to the first image feature and the second image feature.

Description

Positioning method and device, terminal and storage medium
Technical Field
The present application relates to positioning technology, and relates to, but is not limited to, an indoor positioning method and apparatus, a terminal, and a storage medium.
Background
In the related art, positioning is performed by matching the two-dimensional features of a visual image against an indoor map of a building measured in advance, determining the corresponding position in the room, and then confirming the position of a person in the room from that position; as a result, the attitude information of the camera cannot be obtained after positioning, and the positioning accuracy is low.
Disclosure of Invention
In view of the above, embodiments of the present application provide a positioning method and apparatus, a terminal, and a storage medium, so as to solve at least one of the problems in the related art.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a positioning method, which includes:
extracting a first image feature of the image to be processed;
matching, according to the first image feature, a second image feature from the image features of the key frame two-dimensional images and the corresponding depth image features stored in a preset map;
and determining the pose information of an image acquisition device used for acquiring the image to be processed according to the first image feature and the second image feature.
The embodiment of this application provides a positioning apparatus, the apparatus including:
a first extraction module, configured to extract a first image feature of the image to be processed;
a first matching module, configured to match a second image feature from the image features of the key frame two-dimensional images and the corresponding depth image features stored in a preset map according to the first image feature;
a first determining module, configured to determine the pose information of an image acquisition device used for acquiring the image to be processed according to the first image feature and the second image feature.
Correspondingly, an embodiment of the present application provides a terminal, including a memory and a processor, where the memory stores a computer program operable on the processor, and the processor implements the steps of the above method when executing the computer program.
An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and the computer program implements the steps of the above method when executed by a processor.
The embodiments of the application provide a positioning method and apparatus, a terminal, and a storage medium. The method first extracts a first image feature of an image to be processed, then matches a second image feature from the image features of the key frame two-dimensional images stored in a preset map and the corresponding depth image features according to the first image feature, and finally determines the pose information of the image acquisition device used for acquiring the image to be processed according to the first image feature and the second image feature.
Drawings
Fig. 1 is a schematic flowchart of an implementation of a positioning method according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of an implementation of a positioning method according to an embodiment of the present application;
Fig. 3 is a schematic flowchart of an implementation of creating a preset map according to an embodiment of the present application;
Fig. 4 is a schematic flowchart of another implementation of the positioning method according to the embodiment of the present application;
Fig. 5 is a schematic flowchart of a further implementation of the positioning method according to the embodiment of the present application;
Fig. 6 is a diagram illustrating a ratio vector according to an embodiment of the present application;
Fig. 7 is an application scene diagram of determining a key frame two-dimensional image corresponding to a second image feature according to an embodiment of the present application;
Fig. 8 is a schematic diagram of determining the position information of the acquisition device according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a positioning apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
An embodiment of the present application provides a positioning method. Fig. 1 is a schematic flowchart of an implementation of the positioning method according to an embodiment of the present application; as shown in Fig. 1, the method includes the following steps:
Step S101: extract the first image feature of the image to be processed.
In step S101, a feature point set of the image to be processed is first extracted, and then the identification information of each feature point in the feature point set and the two-dimensional position information of each feature point in the image to be processed are determined. The identification information of a feature point can be understood as descriptor information that uniquely identifies that feature point.
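A minimal sketch of this extraction step is given below, assuming ORB features are used (the embodiment does not prescribe a particular detector; the detector choice, the OpenCV API and the feature count are assumptions for illustration): the keypoint coordinates play the role of the two-dimensional position information and the binary descriptors play the role of the identification information.

```python
import cv2

def extract_first_image_feature(image_bgr, max_features=150):
    """Extract the feature point set of the image to be processed:
    2D positions plus descriptor (identification) information."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=max_features)          # assumed detector choice
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    positions_2d = [kp.pt for kp in keypoints]             # two-dimensional position information
    return positions_2d, descriptors                       # descriptors act as identification information
```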
Step S102: according to the first image feature, match a second image feature from the image features of the key frame two-dimensional images stored in a preset map and the corresponding depth image features.
Here, the second image feature includes the two-dimensional (2D) position information, three-dimensional (3D) position information and identification information of the feature points of the key frame two-dimensional image. The preset map stores a set of key image features of the key frame two-dimensional images and a set of ratio vectors describing the proportion that each sample feature point occupies in each key frame two-dimensional image. Step S102 can be understood as selecting, from the image features of the key frame two-dimensional images stored in the preset map, a second image feature with a high degree of matching with the first image feature, where each image feature of a key frame two-dimensional image corresponds to one depth image feature.
Step S103: determine the pose information of the image acquisition device used for acquiring the image to be processed according to the first image feature and the second image feature.
For example, first, in the three-dimensional coordinate space where the image acquisition device is located, the 2D position information of the feature points of the image to be processed is converted into 3D position information; this is then compared with the 3D position information, in the coordinate system of the preset map, of the feature points indicated by the depth image feature, so as to determine the pose information of the image acquisition device.
In the embodiment of the application, for the acquired image to be processed, the first image feature is extracted; a second image feature matching the first image feature is then selected from the image features of the key frame two-dimensional images in the preset map and the corresponding depth image features; finally, the image acquisition device is positioned using the position information of the feature points of the two image features. Because the second image feature is matched not only from the image features of the key frame two-dimensional images but also from the corresponding depth image features, the matching frame image in the preset map can be obtained and the image acquisition device can be positioned; the two-dimensional image information is thereby extended to three dimensions, the positioning accuracy is improved, the positioning result can provide position and attitude at the same time, and the degrees of freedom of the positioning result are increased.
In some embodiments, the pose information of the image acquisition device includes the position of the image acquisition device in the map coordinate system and the acquisition orientation of the image acquisition device relative to the map coordinate system, and step S103 can be implemented by the following steps:
Step S131: determine the map coordinates, in the map coordinate system corresponding to the preset map, of the feature points of the key frame two-dimensional image corresponding to the second image feature.
Here, the 3D coordinates, in the map coordinate system corresponding to the preset map, of the feature points corresponding to the second image feature are acquired.
Step S132: determine the camera coordinates, in the camera coordinate system where the image acquisition device is located, of the feature points of the key frame two-dimensional image corresponding to the second image feature.
Here, the map coordinates are used as the input of a Perspective-n-Point (PnP) algorithm to obtain the camera coordinates of the feature points in the camera coordinate system where the image acquisition device is located.
Step S133: determine the conversion relationship between the camera coordinate system and the map coordinate system according to the map coordinates and the camera coordinates.
Here, the map coordinates and the camera coordinates are compared, and the rotation vector and translation vector of the camera coordinate system of the image acquisition device relative to the map coordinate system are determined.
Step S134: determine, according to the conversion relationship and the camera coordinates of the image acquisition device in the camera coordinate system, the position of the image acquisition device in the map coordinate system and the acquisition orientation of the image acquisition device relative to the map coordinate system.
The current coordinates of the image acquisition device are rotated using the rotation vector to determine the acquisition orientation of the image acquisition device relative to the map coordinate system; the current coordinates of the image acquisition device are translated using the translation vector to determine the position of the image acquisition device in the map coordinate system.
In the embodiment of the application, the 3D coordinates, in the camera coordinate system, of the feature points corresponding to the second image feature are determined, so that the rotation relationship of the camera coordinate system relative to the map coordinate system can be determined by comparing the 3D coordinates of the feature points in the map coordinate system with their 3D coordinates in the camera coordinate system; the acquisition orientation and position of the image acquisition device are then solved from this relationship.
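As a small illustration of this last conversion, the sketch below (numpy and OpenCV assumed; the convention that the rotation vector and translation vector map points from the map coordinate system into the camera coordinate system is an assumption matching the usual solvePnP output) recovers the position and acquisition orientation in the map coordinate system from the conversion relationship:

```python
import numpy as np
import cv2

def camera_pose_in_map(rvec, tvec):
    """rvec/tvec are assumed to transform map coordinates into camera coordinates.
    Returns the acquisition position and orientation expressed in the map coordinate system."""
    R, _ = cv2.Rodrigues(np.asarray(rvec, dtype=np.float64))   # rotation vector -> 3x3 matrix
    t = np.asarray(tvec, dtype=np.float64).reshape(3)
    position_in_map = -R.T @ t      # camera centre in map coordinates
    orientation_in_map = R.T        # columns are the camera axes expressed in map coordinates
    return position_in_map, orientation_in_map
```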
An embodiment of the present application provides a positioning method. Fig. 2 is a schematic flowchart of an implementation of the positioning method according to the embodiment of the present application; as shown in Fig. 2, the method includes the following steps:
Step S201: extract a feature point set of the image to be processed.
Here, the feature points of the image to be processed are extracted to obtain the feature point set.
Step S202: determine the identification information of each feature point in the feature point set and the two-dimensional position information of each feature point in the image to be processed.
Here, for each feature point in the feature point set, the descriptor information (identification information) of the feature point is determined, and the 2D position information can be regarded as the 2D coordinates of the feature point.
The above steps S201 and S202 give one way of implementing "extracting the first image feature of the image to be processed", in which the 2D coordinates of each feature point of the image to be processed and the descriptor information of the feature points are obtained.
Step S203: respectively determine the proportions of the different sample feature points in the feature point set to obtain a first ratio vector.
The first ratio vector can be determined according to the number of sample images, the number of times the sample feature points appear in the image to be processed, and the total number of sample feature points appearing in the image to be processed.
Step S204: obtain a second ratio vector.
The second ratio vector is stored in a preset bag-of-words model in advance, so that it can be obtained from the preset bag-of-words model when the image features of the image to be processed need to be matched. The determination process of the second ratio vector is similar to that of the first ratio vector, and the first ratio vector and the second ratio vector have the same dimension.
Step S205: match a second image feature from the image features of the key frame two-dimensional images according to the first image feature, the first ratio vector and the second ratio vector.
Here, step S205 may be implemented by the following steps:
First, according to the first ratio vector and the second ratio vector, determine, from the image features of the key frame two-dimensional images, similar image features whose similarity with the first image feature is greater than a second threshold.
Here, the first ratio vector of the image to be processed and the second ratio vector of each key frame two-dimensional image are compared one by one, and the similarity between each key frame two-dimensional image and the image to be processed is determined from the two ratio vectors, so that similar key frame two-dimensional images whose similarity is greater than or equal to the second threshold are screened out to obtain a set of similar key frame two-dimensional images.
Second, determine the similar key frame two-dimensional images to which the similar image features belong to obtain the set of similar key frame two-dimensional images.
Third, select, from the image features of the similar key frame two-dimensional images, a second image feature whose similarity with the first image feature satisfies a preset similarity threshold.
That is, the second image feature with the highest similarity to the first image feature is selected from the image features contained in the similar key frame two-dimensional images. For example, the time differences between the acquisition times of at least two similar key frame two-dimensional images are first determined, together with the differences between the similarities of the image features of these similar key frame two-dimensional images to the first image feature. Similar key frame two-dimensional images whose time difference is smaller than a third threshold and whose similarity difference is smaller than a fourth threshold are then combined to obtain a combined frame image: several similar key frame two-dimensional images whose acquisition times are close and whose similarities to the image to be processed are close, which indicates that they may be consecutive pictures, are combined starting from the first key frame image to form a combined frame image (also called an island), so that a plurality of combined frame images is obtained. Finally, the combined frame image with the largest overall similarity to the first image feature is determined as the target combined frame image, and a second image feature whose similarity with the first image feature satisfies the preset similarity threshold is selected from the image features of the target combined frame image according to the identification information of its feature points and the identification information of the feature points of the image to be processed (a sketch of this grouping step is given below).
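A possible sketch of the grouping step, under stated assumptions: the candidate tuple layout and the time and score thresholds are illustrative only. Key frame candidates that are close in timestamp and in similarity are combined into one combined frame image ("island"), and the island whose summed similarity is largest is kept.

```python
def select_target_island(candidates, time_thresh=1.0, score_thresh=0.1):
    """candidates: list of (timestamp, similarity, keyframe_id) for the similar key frame
    two-dimensional images. Consecutive candidates that are close in time and in similarity
    are combined; the island with the largest summed similarity is the target combined frame image."""
    islands, current = [], []
    for cand in sorted(candidates):                      # sort by timestamp
        if current and (cand[0] - current[-1][0] > time_thresh
                        or abs(cand[1] - current[-1][1]) > score_thresh):
            islands.append(current)
            current = []
        current.append(cand)
    if current:
        islands.append(current)
    return max(islands, key=lambda isl: sum(c[1] for c in isl))
```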
The above steps S203 to S205 give one way of implementing "matching a second image feature from the image features of the key frame two-dimensional images stored in the preset map and the corresponding depth image features according to the first image feature", in which the second image feature matching the first image feature is retrieved from the image features of the key frame two-dimensional images and the corresponding depth image features using the preset bag-of-words model, which ensures the similarity between the second image feature and the first image feature.
Step S206: determine the image containing the second image feature as the matching frame image of the image to be processed.
Here, a key frame two-dimensional image containing the second image feature is very similar to the image to be processed, so this key frame two-dimensional image is used as the matching frame image of the image to be processed.
Step S207: determine the target Euclidean distances, smaller than a first threshold, between any two feature points contained in the matching frame image, to obtain a target Euclidean distance set.
A target Euclidean distance smaller than the first threshold can be regarded as the smallest of the Euclidean distances: the smallest Euclidean distance is determined from the plurality of Euclidean distances, it is judged whether this smallest Euclidean distance is smaller than the first threshold, and if so it is determined to be a target Euclidean distance; the target Euclidean distance set is therefore a set of smallest Euclidean distances.
Step S208: if the number of target Euclidean distances contained in the target Euclidean distance set is larger than a preset number threshold, determine the pose information of the image acquisition device according to the first image feature and the second image feature.
That is, if the number of target Euclidean distances contained in the target Euclidean distance set is greater than a preset number threshold (a fifth threshold), the pose information of the image acquisition device is determined based on the 3D position information of the feature points indicated by the depth image feature contained in the second image feature and the 2D position information of the feature points of the image to be processed corresponding to the first image feature. A number of target Euclidean distances greater than the fifth threshold means that enough feature points match the first image feature, which indicates that the similarity between the key frame two-dimensional image and the image to be processed is sufficiently high. The 3D position information of the feature points of the key frame two-dimensional image and the 2D position information of the feature points of the image to be processed corresponding to the first image feature are then used as the input of the PnP algorithm: the 3D position information of the feature points of the current frame of the image to be processed is first obtained in the camera coordinate system, and the pose of the image acquisition device for the current frame is then solved from the 3D coordinates in the map coordinate system and the 3D coordinates of the feature points in the camera coordinate system, i.e., the pose information of the image acquisition device is solved (a sketch of the matching count check is given below).
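A rough sketch of the matching and counting step, under assumptions: descriptors are taken here as real-valued vectors compared by Euclidean distance, and the distance and count thresholds are illustrative placeholders. Only when enough pairs survive is the PnP step run.

```python
import numpy as np

def count_target_matches(desc_query, desc_keyframe, dist_thresh=0.7):
    """For each descriptor of the image to be processed, keep the nearest key frame
    descriptor if its Euclidean distance is below the first threshold; the surviving
    pairs form the target Euclidean distance set."""
    good_pairs = []
    for i, d in enumerate(desc_query):
        dists = np.linalg.norm(desc_keyframe - d, axis=1)
        j = int(np.argmin(dists))                 # smallest Euclidean distance
        if dists[j] < dist_thresh:                # target Euclidean distance
            good_pairs.append((i, j))
    return good_pairs

# The PnP step is run only if len(good_pairs) exceeds the preset number threshold.
```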
The above steps S206 to S208 give one way of implementing "determining the pose information of the image acquisition device used for acquiring the image to be processed according to the first image feature and the second image feature", in which the 2D position information of the key frame two-dimensional image and the 3D position information of the depth image are considered at the same time and the positioning result can provide position and attitude simultaneously, so the positioning accuracy of the image acquisition device is improved.
In the embodiment of the application, the image to be processed is obtained by the image acquisition device, the constructed preset map is loaded, the matching frame image corresponding to the image to be processed is retrieved using the preset bag-of-words model, and finally the 2D position information of the feature points of the image to be processed and the 3D position information of the depth image are used as the input of the PnP algorithm to obtain the accurate pose of the current camera in the map, so as to position the camera. Positioning can therefore be achieved through the key frame two-dimensional images and the depth images, the position and attitude of the image acquisition device in the map coordinate system are obtained, the precision of the positioning result is improved, no external base station equipment is required, the cost is low, and the robustness is strong.
An embodiment of the present application provides a positioning method. Fig. 3 is a schematic flowchart of an implementation of creating a preset map according to an embodiment of the present application; as shown in Fig. 3, the method includes the following steps:
step S221, selecting a plurality of key frame two-dimensional images meeting preset conditions from a sample image library to obtain a key frame two-dimensional image set.
Here, in step S221, the key frame two-dimensional images may be selected from the sample image library according to an input selection instruction: if the sample images correspond to more than one scene, the user manually selects the key frame two-dimensional images, which ensures the validity of the selected key images in the different environments. Alternatively, the key frame two-dimensional images are selected from the sample image library according to a preset frame rate or parallax: if the sample images correspond to the same scene, a preset frame rate or preset parallax is set in advance and the sample images satisfying it are automatically selected as key frame two-dimensional images, which improves both the validity of the selected key images and the efficiency of selecting them (a sketch of this automatic selection follows).
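A minimal sketch of the automatic selection rule, with assumed numbers: the frame gap and the parallax threshold are illustrative, and parallax is taken here as the mean pixel displacement of tracked feature points since the last key frame.

```python
def is_new_keyframe(frame_idx, last_key_idx, mean_parallax_px,
                    frame_gap=30, parallax_thresh=20.0):
    """Select a sample image as a key frame two-dimensional image either at a preset
    frame rate (every `frame_gap` frames) or when the preset parallax is exceeded."""
    return (frame_idx - last_key_idx >= frame_gap) or (mean_parallax_px >= parallax_thresh)
```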
Step S222, extracting the image characteristics of each key frame two-dimensional image to obtain a key two-dimensional image characteristic set.
The key two-dimensional image feature set is obtained so that a second image feature highly similar to the first image feature can be matched from it, thereby obtaining the corresponding matching frame image.
Step S223, collecting depth information of each key frame two-dimensional image to obtain a key frame depth image.
Here, the depth information of each keyframe two-dimensional image is acquired using a depth camera at a particular frame rate, resulting in the keyframe depth image.
Step S224 aligns the key frame depth image with the key frame two-dimensional image such that the key two-dimensional image features correspond to the depth image features of the key frame depth image.
Here, aligning the key frame depth image with the key frame two-dimensional image includes: timestamp alignment and pixel alignment. The step S224 may be implemented by:
First, the first timestamp information of each key frame depth image and the second timestamp information of each key frame two-dimensional image are determined, respectively.
Here, the first and second timestamp information are determined in order to achieve timestamp alignment between the key frame depth images and the key frame two-dimensional images.
Second, if the difference between the first timestamp information of the j-th key frame depth image and the second timestamp information of the i-th key frame two-dimensional image is less than a preset difference, it is determined that the i-th key frame two-dimensional image matches the j-th key frame depth image (a sketch of this pairing follows the fourth step below).
Here, if the difference between the two timestamps is small, it means that the key frame two-dimensional image and the key frame depth image capture the same picture, so the two are determined to match.
Third, the first calibration parameters of the image acquisition device that acquires the i-th key frame two-dimensional image and the second calibration parameters of the image acquisition device that acquires the j-th key frame depth image are acquired.
The first calibration parameters can be understood as the parameters used to calibrate the image acquisition device that acquires the key frame two-dimensional images. In a specific example, the parameters include a rotation matrix and a translation matrix, which together describe how to convert a point from the world coordinate system to the camera coordinate system: the rotation matrix describes the orientation of the world coordinate axes with respect to the camera coordinate axes, and the translation matrix describes the position of the spatial origin in the camera coordinate system.
Fourth, the i-th key frame two-dimensional image and the j-th key frame depth image are aligned according to the first calibration parameters and the second calibration parameters, so that the color image features of the i-th key frame two-dimensional image correspond to the depth image features of the j-th key frame depth image.
Specifically, an alignment matrix of the j-th key frame depth image relative to the i-th key frame two-dimensional image is first determined according to the first calibration parameters and the second calibration parameters, where the alignment matrix includes a rotation matrix and a translation matrix. The coordinates of each pixel in the j-th key frame depth image are then adjusted according to the alignment matrix so that each pixel of the adjusted j-th key frame depth image corresponds to the coordinates of a pixel in the i-th key frame two-dimensional image. For example, the depth coordinates of the pixels of the j-th key frame depth image in the camera coordinate system are rotated by the rotation matrix, and the rotated depth coordinates are translated by the translation matrix, so that the translated depth coordinates correspond to the two-dimensional coordinates of the pixels of the i-th key frame two-dimensional image in the camera coordinate system.
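A small sketch of the timestamp pairing of the first two steps, with an assumed timestamp representation (seconds) and an assumed maximum difference; it simply pairs each key frame two-dimensional image with the closest key frame depth image in time.

```python
def match_by_timestamp(rgb_stamps, depth_stamps, max_diff=0.02):
    """Pair the i-th key frame two-dimensional image with the j-th key frame depth image
    when their timestamps differ by less than `max_diff` seconds."""
    pairs = []
    for i, t_rgb in enumerate(rgb_stamps):
        j = min(range(len(depth_stamps)), key=lambda k: abs(depth_stamps[k] - t_rgb))
        if abs(depth_stamps[j] - t_rgb) < max_diff:
            pairs.append((i, j))
    return pairs
```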
And step S225, determining the ratio of the characteristic points of each sample image in the two-dimensional key frame image to obtain a ratio vector set.
After the ratio vector set is obtained, the different sample feature points and the ratio vector set are stored in a preset bag-of-words model, so that the matching frame image of the image to be processed can later be retrieved from the key frame two-dimensional images using the preset bag-of-words model. Step S225 may be implemented by the following process:
First, a first average frequency is determined according to the first number of sample images contained in the sample image library and the first number of times the p-th sample feature point appears in the sample image library; the first average frequency is used to indicate how frequently the p-th sample feature point appears across the sample images, i.e., it can be understood as the frequency of occurrence of the p-th sample feature point in the sample image library.
Second, a second average frequency is determined according to the second number of times the p-th sample feature point appears in the q-th key frame two-dimensional image and the second number of sample feature points contained in the q-th key frame two-dimensional image; the second average frequency is used to indicate the proportion of the p-th sample feature point among the sample feature points contained in the q-th key frame two-dimensional image.
Finally, the proportion of each sample feature point in the key frame two-dimensional image is obtained from the first average frequency and the second average frequency to obtain the ratio vector set; for example, the first average frequency is multiplied by the second average frequency to obtain a component of the ratio vector.
Step S226, storing the ratio vector set, the key two-dimensional image feature set, and the depth image features corresponding to each key two-dimensional image feature to obtain the preset map.
Here, the ratio vector set corresponding to the key frame two-dimensional images, the depth image features corresponding to each key two-dimensional image feature, and the key image feature set are stored in the preset map, so that when the image acquisition device is positioned, the stored ratio vector set can be compared with the ratio vector of the image to be processed determined using the preset bag-of-words model, so as to determine a matching frame image highly similar to the image to be processed.
In the embodiment of the application, the key frame two-dimensional images and depth images are selected from the sample images at a fixed frame rate, which improves the validity of the selected key frame two-dimensional images and depth images; the image features of the key frame two-dimensional images are then aligned with the depth image features and the preset map is constructed, so that when the preset map is used to position the image acquisition device, both the two-dimensional position and the three-dimensional acquisition orientation information can be obtained, and the positioning accuracy is improved.
An embodiment of the present application provides a positioning method. Fig. 4 is a schematic flowchart of another implementation of the positioning method in the embodiment of the present application; as shown in Fig. 4, the method includes the following steps:
step S231, selecting a plurality of keyframe two-dimensional images satisfying a preset condition from the sample image library, to obtain a keyframe two-dimensional image set.
And step S232, extracting the image characteristics of each key frame two-dimensional image to obtain a key two-dimensional image characteristic set.
And step S233, collecting the depth information of each key frame two-dimensional image to obtain a key frame depth image.
In step S234, the key frame depth image is aligned with the key frame two-dimensional image so that the key two-dimensional image features correspond to the depth image features of the key frame depth image.
And step S235, determining the ratio of each sample feature point in the two-dimensional image of the key frame to obtain a ratio vector set.
Step S236, storing the ratio vector set, the key two-dimensional image feature set, and the depth image features corresponding to each key two-dimensional image feature to obtain the preset map.
In the above steps S231 to S236, the creation of the preset map is completed, and the image features, ratio vector set and depth image features of the key frame two-dimensional images are stored in the preset map, so that the second image feature obtained by matching against the first image feature of the image to be processed contains the depth information and three-dimensional position information of the features. The image acquisition device is thus positioned using depth acquired directly by the depth camera, without consuming large amounts of computing resources on depth computation, which improves the real-time performance and the degrees of freedom of positioning.
Step S237: load the preset map and extract the first image feature of the image to be processed.
Here, when the image capturing apparatus is positioned, a preset map needs to be loaded first.
Step S238: according to the first image feature, match a second image feature from the image features of the key frame two-dimensional images stored in the preset map and the corresponding depth image features.
Step S239: determine the pose information of the image acquisition device used for acquiring the image to be processed according to the first image feature and the second image feature.
In the above process, a second image feature highly similar to the first image feature is matched from the key frame two-dimensional images stored in the preset map, and the pose information of the acquisition device can then finally be determined from the 2D position information and 3D position information of the two image features.
An embodiment of the present application provides a positioning method. Fig. 5 is a schematic flowchart of an implementation of the positioning method of the embodiment of the present application; as shown in Fig. 5, the method includes the following steps:
Step S301: acquire key frame two-dimensional images with an RGB camera.
Here, the camera may be a monocular camera or a binocular camera.
And step S302, acquiring a depth image at a fixed frame rate by using a depth camera to obtain a key frame depth image.
The depth camera here may be a time-of-flight (TOF) depth camera, with which depth images are acquired at a fixed frame rate. A depth image is also called a range image: an image whose pixel values are the distances from the image acquisition device to the points in the scene. The depth image directly reflects the geometry of the visible surfaces of objects; in an image frame provided by the depth data stream, each pixel represents the distance, at that coordinate in the field of view of the three-dimensional vision sensor, from the object to the camera plane. The depth camera may be a binocular camera, a structured light camera, a TOF camera, or the like: binocular stereo measurement obtains depth by triangulation after matching pairs of left and right stereo images; a structured light camera measures depth by projecting patterns onto the object, acquiring the corresponding reflected image and calculating the object distance from the calibrated information; and a TOF camera calculates the object distance by continuously sending light pulses to the object and then receiving the returned light pulses with a sensor.
Step S303, aligning the two-dimensional image of the key frame with the depth image of the key frame.
Here, the key frame two-dimensional image and the key frame depth image are aligned, including timestamp alignment and pixel alignment. The method can be realized by the following steps:
and , respectively obtaining the time stamp delay of the key frame two-dimensional image and the key frame depth image through calibration.
And secondly, selecting a key frame two-dimensional image and a key frame depth image of which the time stamps have the most difference value smaller than fixed threshold values to form a data stream containing two-dimensional characteristic information and depth information.
And thirdly, calibrating the RGB camera and the depth camera respectively to obtain internal parameters and external parameters of the RGB camera and the depth camera.
Here, the internal reference means a parameter for correcting distortion occurring in the radial direction and the tangential direction of the real lens. The external parameters are a rotation matrix and a translation matrix, and the rotation matrix and the translation matrix jointly describe the conversion of the pixel points from the world coordinate system to the camera coordinate system, for example, the rotation matrix: describing the direction of the coordinate axis of the map coordinate system relative to the camera coordinate axis; translation matrix: position of origin in space described in the Camera coordinate System)
And fourthly, determining a rotation matrix and a translation vector of pixel alignment from the two-dimensional image of the key frame to the depth image of the key frame.
Here, it is assumed that the third step of the calibration yields the internal reference of the RGB camera as shown in equation (1):

K_{rgb} = \begin{bmatrix} f_{x\_rgb} & 0 & c_{x\_rgb} \\ 0 & f_{y\_rgb} & c_{y\_rgb} \\ 0 & 0 & 1 \end{bmatrix}   (1)

where f_{x_rgb}, f_{y_rgb}, c_{x_rgb} and c_{y_rgb} are respectively the parameters of the internal reference K_rgb of the RGB camera on the x-axis and y-axis of the camera coordinate system.

For the RGB camera there is: Z_{rgb}\,p_{rgb} = K_{rgb}\,[I\,|\,0]\,P_{rgb}, where P_{rgb} = [X_{rgb}\;Y_{rgb}\;Z_{rgb}\;1]^{T} is a homogeneous three-dimensional point in the RGB camera coordinate system and p_{rgb} = [u\;v\;1]^{T} is the homogeneous pixel coordinate of the key frame two-dimensional image in the camera coordinate system. The homogeneous three-dimensional point P_rgb can also be expressed with the non-homogeneous coordinates \tilde{P}_{rgb} = [X_{rgb}\;Y_{rgb}\;Z_{rgb}]^{T}, that is, Z_{rgb}\,p_{rgb} = K_{rgb}\,\tilde{P}_{rgb}.

Similarly, for the internal reference K_ir of the depth camera a similar mapping relationship Z_{ir}\,p_{ir} = K_{ir}\,\tilde{P}_{ir} is obtained, where p_ir is the homogeneous pixel coordinate of the key frame depth image in the camera coordinate system and \tilde{P}_{ir} is the non-homogeneous three-dimensional point coordinate in the depth camera coordinate system.

The external reference of the RGB camera is R_rgb and T_rgb, and the external reference of the depth camera is denoted R_ir and T_ir. The transformation relation R_ir2rgb, T_ir2rgb between the external parameters of the two cameras is shown in equation (2):

R_{ir2rgb} = R_{rgb}\,R_{ir}^{-1}, \qquad T_{ir2rgb} = T_{rgb} - R_{ir2rgb}\,T_{ir}   (2)

The relationship between the non-homogeneous three-dimensional points \tilde{P}_{rgb} and \tilde{P}_{ir} is \tilde{P}_{rgb} = R_{ir2rgb}\,\tilde{P}_{ir} + T_{ir2rgb}. Finally, equation (3) can be obtained:

Z_{rgb}\,p_{rgb} = K_{rgb}\,R_{ir2rgb}\,Z_{ir}\,K_{ir}^{-1}\,p_{ir} + K_{rgb}\,T_{ir2rgb}   (3)

To simplify the representation, let R = K_{rgb}\,R_{ir2rgb}\,K_{ir}^{-1} and T = K_{rgb}\,T_{ir2rgb}; equation (3) can then be expressed as shown in equation (4):

Z_{rgb}\,p_{rgb} = R\,Z_{ir}\,p_{ir} + T   (4)

Finally, R_ir2rgb and T_ir2rgb are solved by solving an over-determined system of equations.
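A compact sketch of applying the relation of equation (4) to warp a key frame depth image into the RGB image plane; K_rgb, K_ir, R_ir2rgb and T_ir2rgb are assumed to come from the calibration above, and the nested pixel loop is written for clarity rather than speed.

```python
import numpy as np

def align_depth_to_rgb(depth, K_ir, K_rgb, R_ir2rgb, T_ir2rgb):
    """Re-project every key frame depth pixel into the key frame two-dimensional (RGB)
    image plane using Z_rgb*p_rgb = K_rgb*(R_ir2rgb*Z_ir*K_ir^-1*p_ir + T_ir2rgb)."""
    h, w = depth.shape
    aligned = np.zeros_like(depth)
    K_ir_inv = np.linalg.inv(K_ir)
    for v in range(h):
        for u in range(w):
            z = depth[v, u]
            if z <= 0:
                continue                                   # no depth measured here
            P_ir = z * (K_ir_inv @ np.array([u, v, 1.0]))  # 3D point in the depth camera frame
            P_rgb = R_ir2rgb @ P_ir + T_ir2rgb             # 3D point in the RGB camera frame
            uvw = K_rgb @ P_rgb
            u_rgb, v_rgb = int(round(uvw[0] / uvw[2])), int(round(uvw[1] / uvw[2]))
            if 0 <= u_rgb < w and 0 <= v_rgb < h:
                aligned[v_rgb, u_rgb] = P_rgb[2]           # depth expressed in the RGB frame
    return aligned
```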
And step S304, extracting two-dimensional image characteristics of the two-dimensional image of the key frame, and combining the depth information of the corresponding pixels of the depth image of the key frame for pose calculation.
In step S304, the 2D position information, 3D position information and identification information (i.e., descriptor information) of the feature points of the key frame two-dimensional image need to be extracted, where the 3D position information of the feature points of the key frame two-dimensional image is obtained by mapping their 2D position information into the three-dimensional coordinate system in which the preset map is located. For example, a number of 2D feature points are extracted; the number of extracted feature points may be 150 (150 is an empirical value: with too few feature points the tracking failure rate is high, while too many affect the efficiency of the algorithm), and the feature points are used for feature point matching.
Step S305: during acquisition, determine in real time the proportion of each sample feature point in the key frame two-dimensional image to obtain a ratio vector.
Step S305 can be understood as extracting, in real time during acquisition of the key frame two-dimensional images, the ratio vector of each key frame two-dimensional image relative to the current frame image. As shown in Fig. 6, the bag-of-words model is described in the form of a vocabulary tree: the bag-of-words model includes a sample image library 41, i.e., the root node of the vocabulary tree, and sample images 42, 43 and 44, i.e., the leaf nodes; sample feature points 1 to 3 are different sample feature points in sample image 42, feature points 4 to 6 are different sample feature points in sample image 43, and feature points 7 to 9 are different sample feature points in sample image 44. Assume the bag-of-words model contains w sample feature points, i.e., w is the number of kinds of feature points extracted from the sample images of the bag-of-words model. Each sample feature point scores the key frame two-dimensional image with a floating-point number between 0 and 1, so each key frame two-dimensional image can be represented by a w-dimensional floating-point vector, the ratio vector v_t. The scoring process is shown in equation (5):

v_t^{i} = \frac{n_{iI_t}}{n_{I_t}}\,\log\frac{N}{n_i}   (5)

where N is the number of sample images (i.e., the first number), n_i is the number of times the sample feature point w_i appears in the sample images (i.e., the first count), I_t is the image acquired at time t, n_{iI_t} is the number of times the sample feature point w_i appears in the key frame two-dimensional image I_t acquired at that time (i.e., the second count), and n_{I_t} is the total number of sample feature points present in the key frame two-dimensional image I_t (i.e., the second number). Through this sample feature point scoring, a w-dimensional floating-point vector, i.e., the ratio vector, is obtained for each key frame two-dimensional image, and the ratio vectors are used as the feature information of the preset bag-of-words model.
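A small sketch of the scoring of equation (5), assuming the per-image and per-library occurrence counts have already been accumulated; the function name and numpy-based layout are illustrative only.

```python
import numpy as np

def ratio_vector(counts_in_image, counts_in_library, num_sample_images):
    """Compute the w-dimensional ratio vector of equation (5).
    counts_in_image[i]   : n_{iI_t}, occurrences of sample feature point w_i in this image
    counts_in_library[i] : n_i, occurrences of w_i over the whole sample image library
    num_sample_images    : N, the number of sample images"""
    counts_in_image = np.asarray(counts_in_image, dtype=float)
    counts_in_library = np.asarray(counts_in_library, dtype=float)
    tf = counts_in_image / max(counts_in_image.sum(), 1.0)                # n_{iI_t} / n_{I_t}
    idf = np.log(num_sample_images / np.maximum(counts_in_library, 1.0))  # log(N / n_i)
    return tf * idf                                                       # one float per sample feature point
```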
In the above steps S301 to S305, a preset map relying on the key frame two-dimensional images and key frame depth images is constructed; the preset map stores the image features of the key frame two-dimensional images (including the 2D position information, 3D position information and identification information, e.g., the 2D coordinates, 3D coordinates and descriptor information) in binary format on the local device, and is loaded for use when the image acquisition device needs to be positioned.
And S306, loading the constructed preset map.
And step S307, acquiring an image by using a camera to obtain an image to be processed.
Step S308: during acquisition of the image to be processed, extract in real time the first image feature of the current frame of the image to be processed.
Here, extracting the first image feature of the current frame of the image to be processed in real time is similar to the process of step S304, except that the 3D position information of the image to be processed does not need to be determined, because the subsequent PnP algorithm does not require it.
Step S309, retrieving a matching frame image of the current frame of the image to be processed in a preset map through the bag-of-words model.
Here, the retrieving of the matching frame image of the current frame of the image to be processed in the preset map through the bag-of-words model may be understood as retrieving the matching frame image of the current frame of the image to be processed in the preset map by using a ratio vector set, which is characteristic information of the bag-of-words model.
The step S309 may be implemented by the following process:
First, the similarity between the current frame of the image to be processed and each key frame two-dimensional image is found. The similarity s(v_1, v_2) is calculated as shown in equation (6):

s(v_1, v_2) = 1 - \frac{1}{2}\left| \frac{v_1}{|v_1|} - \frac{v_2}{|v_2|} \right|   (6)

where v_1 and v_2 respectively represent the first ratio vector of the proportions, in the current frame of the image to be processed, of the sample feature points contained in the bag-of-words model and the second ratio vector of their proportions in a key frame two-dimensional image; if the bag-of-words model contains w sample feature points, both the first ratio vector and the second ratio vector are w-dimensional vectors. The similar key frame two-dimensional images whose similarity reaches the second threshold are screened out of the key frame two-dimensional images to form a set of similar key frame two-dimensional images (a sketch of this score is given after the fourth step below).
Second, similar key frame two-dimensional images whose timestamp difference is smaller than the third threshold and whose similarity difference is smaller than the fourth threshold are selected from the set of similar key frame two-dimensional images and combined to obtain a combined frame image (or island).
Here, the second step can be understood as selecting, from the set of similar key frame two-dimensional images, similar key frame two-dimensional images whose timestamps are close and whose similarity matching scores are close, and combining them to form islands, so that the set of similar key frame two-dimensional images is divided into multiple combined frame images (i.e., multiple islands). Within a combined frame image, the difference between the similarities of the first and the last key frame two-dimensional image with the current frame is small, i.e., the ratio of the two similarities, \eta(v_t, v_{t-\Delta t}), is as shown in equation (7):

\eta(v_t, v_{t-\Delta t}) = \frac{s(v_t, v_{t'})}{s(v_t, v_{t-\Delta t})}   (7)

where s(v_t, v_{t'}) and s(v_t, v_{t-\Delta t}) respectively represent the similarities of the former and the latter key frame two-dimensional image with the current frame of the image to be processed.
Thirdly, respectively determining the sum of the similarity of the image characteristics of the two-dimensional image of each key frame contained in the plurality of joint frame images and the image characteristics, as shown in formula (8),
and fourthly, determining the combined frame image with the maximum similarity sum as a target combined frame image with the highest similarity with the image to be processed, and finding out a matched frame image with the highest similarity with the current frame of the image to be processed from the target combined frame image.
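A minimal sketch of the similarity score of equation (6), with one assumption made explicit: the norm used for the normalisation (lost in the original figure) is taken to be the L1 norm, as is common for bag-of-words score vectors.

```python
import numpy as np

def bow_similarity(v1, v2):
    """Similarity of equation (6): 1 - 0.5 * | v1/|v1| - v2/|v2| | (L1 norm assumed)."""
    v1 = np.asarray(v1, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    v1n = v1 / max(np.linalg.norm(v1, ord=1), 1e-12)
    v2n = v2 / max(np.linalg.norm(v2, ord=1), 1e-12)
    return 1.0 - 0.5 * np.abs(v1n - v2n).sum()
```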
And S310, determining the pose information of the current camera in a map coordinate system by adopting a PnP algorithm.
Here, the step S310 may be implemented by:
First, for the N-th feature point F_CN of the current frame X_C of the image to be processed, traverse the matching frame image X_3 and determine the Euclidean distance between any two feature points in the matching frame image. As shown in Fig. 7, the current frame X_C 51 of the image to be processed is matched with the matching frame image X_3 52. The Euclidean distance F_0 501 between feature points X_0 53 and X_1 54 is calculated; the Euclidean distance F_1 502 between feature points X_1 54 and X_2 55 is calculated; the Euclidean distance F_2 503 between feature points X_4 56 and X_3 52 is calculated; and the Euclidean distance F_3 504 between feature points X_C 51 and X_4 56 is calculated.
Second, the group with the smallest Euclidean distances (i.e., the candidate target Euclidean distance set) is selected and compared against the threshold: if a Euclidean distance is smaller than the threshold, it is determined to be a target Euclidean distance and is added to the target Euclidean distance set; otherwise it is not. Then jump back to the first step, until X_C has been traversed. For example, as shown in Fig. 7, by comparing the Euclidean distances, the set of minimum Euclidean distances {F_1, F_2, F_3} is obtained.
Third, the target Euclidean distance set is formed, which can be expressed as {F_1, F_2, F_3}. If the number of elements in the target Euclidean distance set is greater than the fifth threshold, the fourth step is performed; otherwise the algorithm ends and the position information of the matching frame X_3 is output.
Fourth, a PnP function is called, based on the target Euclidean distance set, to solve the position information of X_C in the map coordinate system. The process of the PnP algorithm is as follows:
The input of the PnP algorithm is the 3D coordinates of the feature points of the key frame two-dimensional image (in the map coordinate system) and the 2D coordinates of the feature points in the current frame of the image to be processed; the output of the PnP algorithm is the pose of the current frame of the image to be processed in the map coordinate system. A sketch of such a call is given below, before the detailed derivation.
The PnP algorithm does not directly solve the camera pose matrix from the sequence of matched pairs; it first solves the 3D coordinates of the feature points of the current frame of the image to be processed in the camera coordinate system, and then solves the camera pose from the 3D coordinates in the map coordinate system and the 3D coordinates of those feature points in the camera coordinate system. The solution starts from the law of cosines. Let O be the center of the camera coordinate system and A, B and C be three feature points of the current frame of the image to be processed, as shown in Fig. 8.
According to the law of cosines, the relationship between A, B and C is shown in equation (9):

OA^2 + OB^2 - 2\,OA\,OB\cos\langle a,b\rangle = AB^2
OB^2 + OC^2 - 2\,OB\,OC\cos\langle b,c\rangle = BC^2
OA^2 + OC^2 - 2\,OA\,OC\cos\langle a,c\rangle = AC^2   (9)

Dividing the above equations by OC^2 and letting x = OA/OC and y = OB/OC, equation (10) can be derived:

x^2 + y^2 - 2xy\cos\langle a,b\rangle = AB^2/OC^2
y^2 + 1 - 2y\cos\langle b,c\rangle = BC^2/OC^2
x^2 + 1 - 2x\cos\langle a,c\rangle = AC^2/OC^2   (10)

Then, with the replacement u = AB^2/OC^2, v = BC^2/AB^2 and w = AC^2/AB^2, equation (11) can be obtained:

x^2 + y^2 - 2xy\cos\langle a,b\rangle - u = 0
y^2 + 1 - 2y\cos\langle b,c\rangle - v\,u = 0
x^2 + 1 - 2x\cos\langle a,c\rangle - w\,u = 0   (11)

Substituting the first equation of (11) into the other two eliminates u and yields equations (12) and (13):

(1-w)x^2 - w\,y^2 - 2x\cos\langle a,c\rangle + 2wxy\cos\langle a,b\rangle + 1 = 0   (12)
(1-v)y^2 - v\,x^2 - 2y\cos\langle b,c\rangle + 2vxy\cos\langle a,b\rangle + 1 = 0   (13)

Here w, v, cos⟨a,c⟩, cos⟨b,c⟩ and cos⟨a,b⟩ are known quantities: the cosines are known from the 2D coordinates of the projections of A, B and C, and w and v are known from the distances AB, BC and AC given by their map 3D coordinates. Only the two unknowns x and y remain, and their values can be obtained from equations (12) and (13); the values of OA, OB and OC can then be solved, as shown in equation (14):

OC = \sqrt{\frac{AB^2}{x^2 + y^2 - 2xy\cos\langle a,b\rangle}},\quad OA = x\,OC,\quad OB = y\,OC   (14)

Finally, the 3D coordinates of the feature points A, B and C in the current camera coordinate system can be obtained through equation (15), where a, b and c denote the unit direction vectors from O towards the image projections of A, B and C:

A = OA\cdot a,\quad B = OB\cdot b,\quad C = OC\cdot c   (15)
After the 3D coordinates of the feature points A, B and C in the current camera coordinate system are obtained, the position of the acquisition device is determined through the transformation between the map coordinate system and the camera coordinate system.
In the above steps S306 to S310, the constructed preset map is loaded for the image to be processed acquired by the image acquisition device, the matching frame image of the image to be processed is retrieved from the key frame two-dimensional images in the preset map through the bag-of-words model, and finally the accurate pose of the current camera in the map is solved using the PnP algorithm to determine the position and attitude of the device in the map coordinate system; the positioning result therefore has high precision, does not depend on external base station equipment, has low cost and is robust.
In the embodiment of the application, the 2D coordinates and the 3D coordinates of the key frame two-dimensional images are considered at the same time, the 3D coordinates of the acquisition device can be provided in the positioning result, and the positioning accuracy is improved; in the mapping and positioning process no other external base station equipment needs to be introduced, so the cost is low; and no error-prone algorithms such as object recognition need to be introduced, so the positioning success rate is high and the robustness is strong.
An embodiment of the present application provides a positioning apparatus. The apparatus includes the modules described below and the units contained in those modules, and can be implemented by a processor in a computer device or by specific logic circuits; in an implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), or the like.
Fig. 9 is a schematic structural diagram of a positioning apparatus according to an embodiment of the present application, and as shown in fig. 9, the apparatus 600 includes:
a first extraction module 601, configured to extract the first image feature of the image to be processed;
a first matching module 602, configured to match a second image feature from the image features of the key frame two-dimensional images stored in the preset map and the corresponding depth image features according to the first image feature;
a first determining module 603, configured to determine the pose information of the image acquisition device used for acquiring the image to be processed according to the first image feature and the second image feature.
In the above apparatus, the first image feature of the image to be processed includes the identification information and two-dimensional position information of the feature points of the image to be processed;
the second image feature includes: the two-dimensional position information, depth information and identification information of the feature points of the key frame two-dimensional image.
In the above apparatus, the first extraction module 601 includes:
a first extraction submodule, configured to extract a feature point set of the image to be processed;
a first determining submodule, configured to determine the identification information of each feature point in the feature point set and the two-dimensional position information of each feature point in the image to be processed.
In the above apparatus, the apparatus further comprises:
a second determining module, configured to determine the image containing the second image feature as a matched frame image of the image to be processed;
a third determining module, configured to determine target Euclidean distances smaller than a first threshold between any two feature points included in the matched frame image, to obtain a target Euclidean distance set;
correspondingly, the first determining module 603 is further configured to determine the pose information of the image acquisition device according to the first image feature and the second image feature if the number of target Euclidean distances included in the target Euclidean distance set is greater than a preset number threshold (a sketch of this check follows below).
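The check above can be illustrated, purely as a sketch, by counting how many pairwise Euclidean distances among the matched frame's feature vectors fall below a first threshold; the feature vectors, thresholds and function name below are hypothetical.

import numpy as np
from itertools import combinations

def passes_distance_check(features, dist_threshold, count_threshold):
    # Count pairwise Euclidean distances below dist_threshold among the
    # feature points of the matched frame image; the pose is only solved
    # when enough such "target" distances exist.
    target_distances = [np.linalg.norm(f1 - f2)
                        for f1, f2 in combinations(features, 2)
                        if np.linalg.norm(f1 - f2) < dist_threshold]
    return len(target_distances) > count_threshold

# Hypothetical 32-dimensional feature vectors and thresholds
feats = np.random.rand(50, 32)
print(passes_distance_check(feats, dist_threshold=1.5, count_threshold=100))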
In the above apparatus, the first matching module 602 includes:
a first ratio submodule, configured to respectively determine the ratios of different sample feature points in the feature point set, to obtain a first ratio vector;
a second ratio submodule, configured to acquire a second ratio vector, where the second ratio vector is the ratio of the plurality of sample feature points among the feature points contained in the key frame two-dimensional image;
and a first matching submodule, configured to match a second image feature from the image features of the key frame two-dimensional image and the corresponding depth features according to the first image feature, the first ratio vector and the second ratio vector.
In the above apparatus, the first matching submodule includes:
a first determining unit, configured to determine, according to the first ratio vector and the second ratio vector, similar image features having a similarity greater than a second threshold with the first image feature from the image features of the key frame two-dimensional image and the corresponding depth features;
a second determining unit, configured to determine the similar key frame two-dimensional images to which the similar image features belong, to obtain a similar key frame two-dimensional image set;
and a first selecting unit, configured to select, from the image features of the similar key frame two-dimensional images, a second image feature whose similarity with the first image feature satisfies a preset similarity threshold (an illustrative retrieval sketch follows below).
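Purely as an illustration of the retrieval just described, and assuming the first ratio vector of the image to be processed and the second ratio vectors stored for each key frame are plain numeric arrays, candidate key frames can be ranked with a cosine similarity; the vocabulary size, threshold value and all variable names below are hypothetical.

import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def retrieve_similar_keyframes(first_ratio_vec, keyframe_ratio_vecs, second_threshold):
    # Return (index, similarity) pairs of key frame two-dimensional images whose
    # ratio vector is similar enough to that of the image to be processed.
    similar = []
    for idx, second_ratio_vec in enumerate(keyframe_ratio_vecs):
        sim = cosine_similarity(first_ratio_vec, second_ratio_vec)
        if sim > second_threshold:
            similar.append((idx, sim))
    return similar

# Hypothetical ratio vectors over a 1000-word visual vocabulary
query_vec = np.random.rand(1000)
map_vecs = [np.random.rand(1000) for _ in range(200)]
print(retrieve_similar_keyframes(query_vec, map_vecs, second_threshold=0.75)[:5])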
In the above apparatus, the first selecting unit includes:
a determining subunit, configured to determine the time difference between the acquisition times of at least two similar key frame two-dimensional images, and the similarity differences between the image features of the at least two similar key frame two-dimensional images and the first image feature respectively;
a joining subunit, configured to join the similar key frame two-dimensional images whose time difference is smaller than a third threshold and whose similarity difference is smaller than a fourth threshold, to obtain a joint frame image;
and a first selecting subunit, configured to select, from the image features of the joint frame image, a second image feature whose similarity with the first image feature satisfies a preset similarity threshold.
In the above apparatus, the first selecting subunit is configured to: respectively determine the sum of the similarities between the image features of each key frame two-dimensional image contained in the multiple joint frame images and the first image feature; determine the joint frame image with the largest sum of similarities as a target joint frame image with the highest similarity to the image to be processed; and select, from the image features of the target joint frame image and the corresponding depth image features, a second image feature whose similarity with the first image feature satisfies a preset similarity threshold, according to the identification information of the feature points of the target joint frame image and the identification information of the feature points of the image to be processed (a grouping sketch follows below).
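As a non-limiting sketch of how similar key frames might be grouped into joint frames and the target joint frame chosen, the dictionary layout, thresholds and values below are assumptions rather than the embodiment's data structures.

import numpy as np

def build_joint_frames(keyframes, third_threshold, fourth_threshold):
    # Group similar key frames whose acquisition-time difference and similarity
    # difference are both small into joint frame images.
    # Each key frame is a dict: {'id', 'timestamp', 'similarity'}.
    keyframes = sorted(keyframes, key=lambda k: k['timestamp'])
    joint_frames, current = [], [keyframes[0]]
    for kf in keyframes[1:]:
        prev = current[-1]
        if (abs(kf['timestamp'] - prev['timestamp']) < third_threshold and
                abs(kf['similarity'] - prev['similarity']) < fourth_threshold):
            current.append(kf)
        else:
            joint_frames.append(current)
            current = [kf]
    joint_frames.append(current)
    return joint_frames

def pick_target_joint_frame(joint_frames):
    # The joint frame whose member similarities sum highest is taken as the
    # target joint frame image.
    sums = [sum(kf['similarity'] for kf in jf) for jf in joint_frames]
    return joint_frames[int(np.argmax(sums))]

# Hypothetical retrieval results
kfs = [{'id': i, 'timestamp': 0.1 * i, 'similarity': 0.8 - 0.01 * i} for i in range(10)]
groups = build_joint_frames(kfs, third_threshold=0.15, fourth_threshold=0.05)
print([kf['id'] for kf in pick_target_joint_frame(groups)])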
In the above apparatus, the first determining module 603 includes:
a second determining submodule, configured to determine the feature points of the key frame two-dimensional image corresponding to the second image feature, and their map coordinates in the map coordinate system corresponding to the preset map;
a third determining submodule, configured to determine the feature points of the key frame two-dimensional image corresponding to the second image feature, and their camera coordinates in the camera coordinate system of the image acquisition device;
a fourth determining submodule, configured to determine the conversion relationship of the camera coordinate system relative to the map coordinate system according to the map coordinates and the camera coordinates;
and a fifth determining submodule, configured to determine the position of the image acquisition device in the map coordinate system and the acquisition orientation of the image acquisition device relative to the map coordinate system according to the conversion relationship and the camera coordinates of the image acquisition device in the camera coordinate system (a sketch of one possible estimation follows below).
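A minimal sketch of one way such a conversion relationship could be estimated from paired map coordinates and camera coordinates of the same feature points, using an SVD-based rigid alignment; the point values are hypothetical and the embodiment's exact solver is not reproduced here.

import numpy as np

def rigid_transform(map_pts, cam_pts):
    # Estimate R and t such that cam ≈ R @ map + t (SVD-based closed form).
    map_c = map_pts - map_pts.mean(axis=0)
    cam_c = cam_pts - cam_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd(map_c.T @ cam_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cam_pts.mean(axis=0) - R @ map_pts.mean(axis=0)
    return R, t

# Hypothetical matched points: map coordinates vs. camera coordinates
map_pts = np.random.rand(10, 3)
cam_pts = map_pts + np.array([0.5, 0.0, 1.0])          # toy transform (identity rotation)
R, t = rigid_transform(map_pts, cam_pts)
device_position_in_map = -R.T @ t                      # position of the device in the map frame
print(device_position_in_map)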
In the above apparatus, the apparatus further comprises:
a selection module, configured to select, from a sample image library, a plurality of key frame two-dimensional images that satisfy preset conditions, to obtain a key frame two-dimensional image set;
an extraction module, configured to extract the image features of each key frame two-dimensional image, to obtain a key two-dimensional image feature set;
an acquisition module, configured to acquire depth information of each key frame two-dimensional image, to obtain key frame depth images;
a first alignment module, configured to align the key frame depth image with the key frame two-dimensional image, so that the key two-dimensional image features correspond to the depth image features of the key frame depth image;
a first ratio module, configured to determine the ratio of the feature points of each sample image in the key frame two-dimensional image, to obtain a ratio vector set;
and a storage module, configured to store the ratio vector set, the key two-dimensional image feature set and the depth image features corresponding to each key two-dimensional image feature, to obtain the preset map (an illustrative map entry layout follows below).
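For illustration only, one entry of such a preset map could be laid out as below; the field names, identifier format and values are assumptions, since this passage does not prescribe a concrete storage format.

# A hypothetical layout for one key frame entry of the preset map.
preset_map_entry = {
    "keyframe_id": 42,
    "ratio_vector": [0.01, 0.0, 0.12],     # second ratio vector of this key frame
    "features": [
        {
            "id": "feat_0001",              # identification information of the feature point
            "uv": (311.0, 208.5),           # 2D position in the key frame two-dimensional image
            "depth": 1.83,                  # depth value from the aligned key frame depth image
        },
        # ... one record per feature point
    ],
}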
In the above apparatus, the first alignment module includes:
a sixth determining submodule, configured to respectively determine first timestamp information of each key frame depth image and second timestamp information of each key frame two-dimensional image;
a seventh determining submodule, configured to determine that the ith key frame two-dimensional image matches the jth key frame depth image if the difference between the ith first timestamp information and the jth second timestamp information is less than a preset difference;
a first obtaining submodule, configured to obtain first calibration parameters of the image acquisition device used for acquiring the ith key frame two-dimensional image and second calibration parameters of the image acquisition device used for acquiring the jth key frame depth image;
and a first alignment submodule, configured to align the ith key frame two-dimensional image with the jth key frame depth image according to the first calibration parameters and the second calibration parameters, so that the color image features of the ith key frame two-dimensional image correspond to the depth image features of the jth key frame depth image (a timestamp-pairing sketch follows below).
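A small sketch, under the assumption that timestamps are plain floating-point seconds, of how two-dimensional and depth key frames could be paired by timestamp difference; the timestamp values and the tolerance are illustrative only.

def match_by_timestamp(color_stamps, depth_stamps, max_diff):
    # Pair each key frame two-dimensional image with the depth image whose
    # timestamp differs from it by less than max_diff, keeping the closest one.
    pairs = []
    for i, tc in enumerate(color_stamps):
        best_j, best_d = None, max_diff
        for j, td in enumerate(depth_stamps):
            if abs(tc - td) < best_d:
                best_j, best_d = j, abs(tc - td)
        if best_j is not None:
            pairs.append((i, best_j))       # (2D key frame index, depth key frame index)
    return pairs

# Hypothetical timestamps in seconds
print(match_by_timestamp([0.02, 0.35, 0.71], [0.0, 0.33, 0.66, 1.0], max_diff=0.05))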
In the above apparatus, the first alignment submodule includes:
a third determining unit, configured to determine an alignment matrix of the jth key frame depth image with respect to the ith key frame two-dimensional image according to the first calibration parameters and the second calibration parameters;
and an adjusting unit, configured to adjust the coordinates of each pixel point in the jth key frame depth image according to the alignment matrix, so that each pixel point in the adjusted jth key frame depth image corresponds to the coordinates of the corresponding pixel point in the ith key frame two-dimensional image (a per-pixel alignment sketch follows below).
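Purely as a sketch of such a per-pixel adjustment, assuming pinhole models without distortion and with hypothetical intrinsic matrices and a rotation/translation between the two cameras standing in for the calibration parameters:

import numpy as np

def align_depth_to_color(depth, K_depth, K_color, R, t):
    # Re-map each pixel of the key frame depth image into the pixel grid of the
    # key frame two-dimensional (color) image, using both cameras' calibration.
    h, w = depth.shape
    aligned = np.zeros_like(depth)
    K_depth_inv = np.linalg.inv(K_depth)
    for v in range(h):
        for u in range(w):
            z = depth[v, u]
            if z <= 0:
                continue
            p_depth = z * (K_depth_inv @ np.array([u, v, 1.0]))  # 3D point, depth camera frame
            p_color = R @ p_depth + t                            # same point, color camera frame
            uvw = K_color @ p_color
            uc, vc = int(round(uvw[0] / uvw[2])), int(round(uvw[1] / uvw[2]))
            if 0 <= uc < w and 0 <= vc < h:
                aligned[vc, uc] = p_color[2]
    return aligned

# Hypothetical calibration and a tiny constant depth map
K = np.array([[100.0, 0.0, 16.0], [0.0, 100.0, 12.0], [0.0, 0.0, 1.0]])
depth = np.full((24, 32), 1.5)
print(align_depth_to_color(depth, K, K, np.eye(3), np.zeros(3)).shape)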
In the above apparatus, the first ratio module includes:
an eighth determining submodule, configured to determine a first average number of times according to a first number of sample images contained in the sample image library and a first number of times that the p-th sample feature point appears in the sample image library, where p is an integer greater than or equal to 1, and the first average number of times is used for indicating how many times the p-th sample feature point appears in each sample image on average;
a ninth determining submodule, configured to determine a second average number of times according to a second number of times that the p-th sample feature point appears in the q-th key frame two-dimensional image and a second number of sample feature points contained in the q-th key frame two-dimensional image, where q is an integer greater than or equal to 1, and the second average number of times is used for indicating the proportion of the p-th sample feature point among the sample feature points contained in the q-th key frame two-dimensional image;
and a third ratio submodule, configured to obtain the ratio of the sample feature points in the key frame two-dimensional image according to the first average number of times and the second average number of times, to obtain the ratio vector set (a weighting sketch follows below).
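As a sketch only, the two averages described above can be combined in a TF-IDF-like weighting; the exact combination used by the embodiment is not reproduced here, and the vocabulary size, counts and smoothing constant below are assumptions.

import numpy as np

def ratio_vector(keyframe_word_counts, library_word_counts, num_library_images, vocab_size):
    # keyframe_word_counts: occurrences of each sample feature point in one key frame
    # library_word_counts:  occurrences of each sample feature point in the whole library
    total_in_frame = max(sum(keyframe_word_counts.values()), 1)
    vec = np.zeros(vocab_size)
    for p in range(vocab_size):
        first_avg = library_word_counts.get(p, 0) / num_library_images   # average per sample image
        second_avg = keyframe_word_counts.get(p, 0) / total_in_frame     # proportion inside this frame
        # sample feature points that are common everywhere contribute less
        vec[p] = second_avg * np.log(1.0 + 1.0 / (first_avg + 1e-6))
    return vec

# Hypothetical counts over a 5-word vocabulary
kf_counts = {0: 3, 2: 1}
lib_counts = {0: 120, 1: 4, 2: 9, 3: 300}
print(ratio_vector(kf_counts, lib_counts, num_library_images=60, vocab_size=5))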
The above description of the apparatus embodiment is similar to the description of the method embodiments above and has similar beneficial effects. For technical details not disclosed in the apparatus embodiment of the present application, reference is made to the description of the method embodiments of the present application.
It should be noted that, in the embodiment of the present application, if the positioning method is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the embodiments of the present application that essentially contributes to the related art may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a device including the storage medium to execute all or part of the method described in each embodiment of the present application.
Correspondingly, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the positioning method provided in the above embodiments.
Here, it should be noted that: the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application; the appearances of "in one embodiment" or "in an embodiment" throughout the specification do not necessarily all refer to the same embodiment.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a series of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The above-described apparatus embodiments are merely illustrative; for example, the division of the units is only a logical functional division, and other divisions may be adopted in actual practice, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units, and some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments of the present application.
In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may be used separately as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will understand that all or part of the steps for realizing the method embodiments may be completed by hardware related to program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments; and the aforementioned storage medium includes various media that can store program code, such as a removable memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Based on this understanding, the part of the technical solution of the embodiments of the present application that essentially contributes to the related art may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a device to perform all or part of the method according to the embodiments of the present application.
The above description covers only specific embodiments of the present application, but the scope of the present application is not limited thereto; any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed in the present application, and these shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. A positioning method, the method comprising:
extracting a first image feature of an image to be processed;
matching, according to the first image feature, a second image feature from the image features of the key frame two-dimensional images stored in a preset map and the corresponding depth image features;
and determining pose information of an image acquisition device used for acquiring the image to be processed according to the first image feature and the second image feature.
2. The method as claimed in claim 1, wherein the first image feature of the image to be processed comprises identification information and two-dimensional position information of feature points of the image to be processed, and the extracting a first image feature of the image to be processed comprises:
extracting a feature point set of the image to be processed;
and determining identification information of each feature point in the feature point set and two-dimensional position information of each feature point in the image to be processed.
3. The method according to claim 2, wherein the matching the second image feature from the image features of the key frame two-dimensional image stored in the preset map and the corresponding depth image features according to the first image feature comprises:
respectively determining the ratios of different sample feature points in the feature point set, to obtain a first ratio vector;
obtaining a second ratio vector, wherein the second ratio vector is the ratio of the plurality of sample feature points among the feature points contained in the key frame two-dimensional image;
and matching a second image feature from the image features of the key frame two-dimensional image and the corresponding depth features according to the first image feature, the first ratio vector and the second ratio vector.
4. The method of claim 3, wherein the matching a second image feature from the image features and corresponding depth features of the key frame two-dimensional image based on the first image feature, the first ratio vector and the second ratio vector comprises:
determining, according to the first ratio vector and the second ratio vector, similar image features with a similarity greater than a second threshold with the first image feature from the image features of the key frame two-dimensional image and the corresponding depth features;
determining the similar key frame two-dimensional images to which the similar image features belong, to obtain a similar key frame two-dimensional image set;
and selecting, from the image features of the similar key frame two-dimensional images, a second image feature whose similarity with the first image feature satisfies a preset similarity threshold.
5. The method according to claim 4, wherein the selecting, from the image features of the similar key frame two-dimensional images, a second image feature with a similarity satisfying a preset similarity threshold with the first image feature comprises:
determining the time difference between the acquisition times of at least two similar key frame two-dimensional images, and the similarity differences between the image features of the at least two similar key frame two-dimensional images and the first image feature respectively;
joining the similar key frame two-dimensional images whose time difference is smaller than a third threshold and whose similarity difference is smaller than a fourth threshold, to obtain a joint frame image;
and selecting, from the image features of the joint frame image, a second image feature whose similarity with the first image feature satisfies the preset similarity threshold.
6. The method according to claim 5, wherein the selecting, from the image features of the joint frame image, a second image feature whose similarity with the first image feature satisfies a preset similarity threshold comprises:
respectively determining the sum of the similarities between the image features of each key frame two-dimensional image contained in a plurality of joint frame images and the first image feature;
determining the joint frame image with the maximum sum of similarities as a target joint frame image with the highest similarity with the image to be processed;
and selecting, from the image features of the target joint frame image and the corresponding depth image features, a second image feature whose similarity with the first image feature satisfies the preset similarity threshold, according to the identification information of the feature points of the target joint frame image and the identification information of the feature points of the image to be processed.
7. The method of claim 1, wherein the determining pose information of the image acquisition device according to the first image feature and the second image feature comprises:
determining feature points of the key frame two-dimensional image corresponding to the second image features and map coordinates in a map coordinate system corresponding to the preset map;
determining feature points of the keyframe two-dimensional image corresponding to the second image features, and camera coordinates in a camera coordinate system where the image acquisition equipment is located;
determining a conversion relation of the camera coordinate system relative to the map coordinate system according to the map coordinate and the camera coordinate;
and determining the position of the image acquisition equipment in the map coordinate system and the acquisition orientation of the image acquisition equipment relative to the map coordinate system according to the conversion relation and the camera coordinate of the image acquisition equipment in the camera coordinate system.
8. The method according to claim 1, wherein before the extracting a first image feature of the image to be processed, the method further comprises:
selecting a plurality of key frame two-dimensional images meeting preset conditions from a sample image library to obtain a key frame two-dimensional image set;
extracting the image features of each key frame two-dimensional image to obtain a key two-dimensional image feature set;
acquiring depth information of each key frame two-dimensional image to obtain a key frame depth image;
aligning the key frame depth image with the key frame two-dimensional image, so that the key two-dimensional image features correspond to the key frame depth image features;
determining the ratio of the characteristic points of each sample image in the two-dimensional images of the key frame to obtain a ratio vector set;
and storing the ratio vector set, the key two-dimensional image feature set and the depth image features corresponding to each key two-dimensional image feature, to obtain the preset map.
9. The method of claim 8, wherein aligning the keyframe depth image with the keyframe two-dimensional image such that the keyframe two-dimensional image features correspond with the keyframe depth image features comprises:
respectively determining first timestamp information of each key frame depth image and second timestamp information of each key frame two-dimensional image;
if the difference value between the ith first timestamp information and the jth second timestamp information is smaller than a preset difference value, determining that the ith key frame two-dimensional image is matched with the jth key frame depth image, wherein i and j are integers greater than or equal to 1;
obtaining first calibration parameters of the image acquisition device used for acquiring the ith key frame two-dimensional image and second calibration parameters of the image acquisition device used for acquiring the jth key frame depth image;
and aligning the ith key frame two-dimensional image with the jth key frame depth image according to the first calibration parameters and the second calibration parameters, so that the color image features of the ith key frame two-dimensional image correspond to the depth image features of the jth key frame depth image.
10. The method of claim 9, wherein the aligning the ith key frame two-dimensional image with the jth key frame depth image according to the first calibration parameters and the second calibration parameters comprises:
determining an alignment matrix of the jth key frame depth image relative to the ith key frame two-dimensional image according to the first calibration parameters and the second calibration parameters;
and adjusting the coordinates of each pixel point in the jth key frame depth image according to the alignment matrix, so that each pixel point in the jth key frame depth image after adjustment corresponds to the coordinates of the pixel point in the ith key frame two-dimensional image.
11. The method of claim 8, wherein determining the ratio of the feature points of each sample image in the two-dimensional key frame image to obtain a ratio vector set comprises:
determining a first average number of times according to a first number of sample images contained in the sample image library and a first number of times that the p-th sample feature point appears in the sample image library, wherein p is an integer greater than or equal to 1;
determining a second average number of times according to a second number of times that the p-th sample feature point appears in the q-th key frame two-dimensional image and a second number of sample feature points contained in the q-th key frame two-dimensional image; wherein q is an integer greater than or equal to 1; the second average number of times is used for indicating the proportion of the p-th sample feature point occupying the sample feature point contained in the q-th key frame two-dimensional image;
and obtaining the ratio of the sample feature points in the key frame two-dimensional image according to the first average number of times and the second average number of times, to obtain the ratio vector set.
12. A positioning apparatus, comprising:
a first extraction module, configured to extract a first image feature of an image to be processed;
a first matching module, configured to match a second image feature from the image features of the key frame two-dimensional images stored in a preset map and the corresponding depth image features according to the first image feature;
and a first determining module, configured to determine pose information of an image acquisition device used for acquiring the image to be processed according to the first image feature and the second image feature.
13. A terminal, comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor, when executing the program, implements the steps of the method of any one of claims 1 to 11.
14. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 11.
CN201910921590.4A 2019-09-27 2019-09-27 Positioning method and device, terminal and storage medium Active CN110738703B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910921590.4A CN110738703B (en) 2019-09-27 2019-09-27 Positioning method and device, terminal and storage medium


Publications (2)

Publication Number Publication Date
CN110738703A true CN110738703A (en) 2020-01-31
CN110738703B CN110738703B (en) 2022-08-26

Family

ID=69269671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910921590.4A Active CN110738703B (en) 2019-09-27 2019-09-27 Positioning method and device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110738703B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9852542B1 (en) * 2012-04-13 2017-12-26 Google Llc Methods and apparatus related to georeferenced pose of 3D models
CN109784250A (en) * 2019-01-04 2019-05-21 广州广电研究院有限公司 The localization method and device of automatically guiding trolley
CN110246163A (en) * 2019-05-17 2019-09-17 联想(上海)信息技术有限公司 Image processing method and its device, equipment, computer storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Raymond Phan et al.: "Robust Semi-Automatic Depth Map Generation in Unconstrained Images and Video Sequences for 2D to Stereoscopic 3D Conversion", IEEE Transactions on Multimedia *
Li Chenxin et al.: "Depth information processing methods in three-dimensional image reconstruction" (in Chinese), Electronic Technology & Software Engineering *
Luo Hao: "RGB-D visual content understanding and its applications" (in Chinese), China Master's Theses Full-text Database, Information Science and Technology *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111310654A (en) * 2020-02-13 2020-06-19 北京百度网讯科技有限公司 Map element positioning method and device, electronic equipment and storage medium
CN111310654B (en) * 2020-02-13 2023-09-08 北京百度网讯科技有限公司 Map element positioning method and device, electronic equipment and storage medium
CN111340890A (en) * 2020-02-20 2020-06-26 北京百度网讯科技有限公司 Camera external reference calibration method, device, equipment and readable storage medium
CN111340890B (en) * 2020-02-20 2023-08-04 阿波罗智联(北京)科技有限公司 Camera external parameter calibration method, device, equipment and readable storage medium
CN111539990A (en) * 2020-04-20 2020-08-14 深圳Tcl数字技术有限公司 Moving object position detection method, apparatus, device, and medium
CN111563138A (en) * 2020-04-30 2020-08-21 浙江商汤科技开发有限公司 Positioning method and device, electronic equipment and storage medium
CN111563138B (en) * 2020-04-30 2024-01-05 浙江商汤科技开发有限公司 Positioning method and device, electronic equipment and storage medium
CN114862658A (en) * 2022-04-01 2022-08-05 奥比中光科技集团股份有限公司 Image processing system, method, intelligent terminal and computer readable storage medium
CN114862658B (en) * 2022-04-01 2023-05-05 奥比中光科技集团股份有限公司 Image processing system, method, intelligent terminal and computer readable storage medium
CN115830110A (en) * 2022-10-26 2023-03-21 北京城市网邻信息技术有限公司 Instant positioning and map construction method and device, terminal equipment and storage medium
CN115830110B (en) * 2022-10-26 2024-01-02 北京城市网邻信息技术有限公司 Instant positioning and map construction method and device, terminal equipment and storage medium

Also Published As

Publication number Publication date
CN110738703B (en) 2022-08-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant