CN106352877B - Mobile device and localization method therefor - Google Patents
- Publication number
- CN106352877B CN106352877B CN201610652818.0A CN201610652818A CN106352877B CN 106352877 B CN106352877 B CN 106352877B CN 201610652818 A CN201610652818 A CN 201610652818A CN 106352877 B CN106352877 B CN 106352877B
- Authority
- CN
- China
- Prior art keywords
- group
- feature descriptor
- mobile device
- feature
- current time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
Abstract
The present invention discloses a mobile device and a localization method for it. The localization method comprises: during movement in which the mobile device collects visual feature points, extracting a first group of feature descriptors for the visual feature points collected at the current time; performing loop-closure detection between the first group of feature descriptors and each group of feature descriptors extracted previously; and, when a closed loop is detected based on the first group of feature descriptors and a second group of feature descriptors, determining the pose of the mobile device at the current time from the space coordinates of the visual feature points described by the second group of feature descriptors, where the second group of feature descriptors is one of the groups extracted previously. The invention solves the technical problem that errors in the mobile device's estimate of its own pose accumulate during movement and seriously degrade positioning accuracy, thereby improving the precision of mobile-device-based localization.
Description
Technical field
The present invention relates to the field of localization technology, and more particularly to a mobile device and a localization method for it.
Background technique
Existing indoor localization methods for robots based on vision and inertial devices fall broadly into two categories: 1) methods that build an environment map, such as visual SLAM (Simultaneous Localization And Mapping); and 2) methods that do not require building an environment map, such as visual/inertial odometry.
Map-building localization: while estimating its own position and attitude, the robot usually also builds a map of the environment, and obtains its location by jointly optimizing the robot's trajectory and the relative positional relationships between each pose along the trajectory and the landmarks in the map. Map-building localization is more accurate, but because building an indoor environment map brings environmental information into the optimization, it consumes a large amount of the robot's computational resources; the computational load of the optimization therefore often becomes the bottleneck limiting the real-time performance of map-building localization. Existing map-free localization methods, when applied on a mobile device, do guarantee real-time performance; however, as the trajectory grows, the error of the device's estimate of its own position and attitude accumulates during movement, so the pose estimation error keeps increasing and seriously degrades positioning accuracy.
Summary of the invention
By providing a mobile device and a localization method for it, the embodiments of the present invention solve the prior-art technical problem that the pose estimation error of mobile-device-based localization keeps increasing and seriously degrades positioning accuracy.
In a first aspect, an embodiment of the invention provides a localization method for a mobile device, comprising:
during movement in which the mobile device collects visual feature points, extracting a first group of feature descriptors for the visual feature points collected at the current time;
performing loop-closure detection between the first group of feature descriptors and each group of feature descriptors extracted previously; and
when a closed loop is detected based on the first group of feature descriptors and a second group of feature descriptors, determining the pose of the mobile device at the current time from the space coordinates of the visual feature points described by the second group of feature descriptors, where the second group of feature descriptors is one of the groups of feature descriptors extracted previously.
Preferably, performing loop-closure detection between the first group of feature descriptors and each previously extracted group of feature descriptors comprises:
comparing the first group of feature descriptors for similarity with each previously extracted group of feature descriptors, and determining, for each previously extracted group, the number of descriptors that satisfy a preset similarity condition with the first group of feature descriptors; and
judging, for each previously extracted group, whether the number of descriptors satisfying the preset similarity condition with the first group exceeds a preset quantity threshold, where a closed loop is deemed detected when that number exceeds the preset quantity threshold.
Preferably, comparing the first group of feature descriptors for similarity with each previously extracted group of feature descriptors and judging whether the preset similarity condition is satisfied comprises:
comparing each feature descriptor in the first group against each feature descriptor in every previously extracted group; and
judging whether the vector angle between the two compared feature descriptors is smaller than a preset angle threshold, where the compared feature descriptors are deemed to satisfy the preset similarity condition when the vector angle is smaller than the preset angle threshold.
Preferably, determining the pose of the mobile device at the current time from the space coordinates of the visual feature points described by the second group of feature descriptors comprises:
determining a plurality of feature descriptors in the second group of feature descriptors;
determining the two-dimensional image coordinates corresponding to the plurality of feature descriptors in the image frame collected at the current time;
establishing, based on the space coordinates of the visual feature points described by the plurality of feature descriptors, the two-dimensional image coordinates, and the intrinsic matrix of the image acquisition unit built into the mobile device, a transfer matrix representing the pose of the mobile device:
x̃ᵢ = K·T·Xᵢ, with T = [R t]
where T is the transfer matrix, Xᵢ are the space coordinates of the visual feature points described by the plurality of feature descriptors, x̃ᵢ are the corresponding two-dimensional image coordinates in the image frame collected at the current time, K is the intrinsic matrix of the image acquisition unit built into the mobile device, R is the attitude of the mobile device, and t is the position of the mobile device; and
solving the transfer matrix to obtain the pose of the mobile device at the current time.
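As a numeric sketch of the transfer-matrix relation above, the following snippet projects a 3D feature point through an assumed pose and intrinsic matrix (the concrete values of K, R, and t are illustrative and not taken from the patent), reproducing x̃ᵢ = K·(R·Xᵢ + t) followed by dehomogenisation to pixel coordinates:

```python
def project(K, R, t, X):
    """Project a 3D feature point X through pose (R, t) and intrinsics K:
    x~ = K (R X + t), then dehomogenise to pixel coordinates (u, v)."""
    # camera-frame coordinates: Xc = R X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # homogeneous image coordinates: x~ = K Xc
    x = [sum(K[i][j] * Xc[j] for j in range(3)) for i in range(3)]
    return (x[0] / x[2], x[1] / x[2])

# Assumed example values (not from the patent): identity attitude R,
# zero position t, focal length 100 px, principal point (64, 48).
K = [[100.0, 0.0, 64.0],
     [0.0, 100.0, 48.0],
     [0.0,   0.0,  1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 0.0]

u, v = project(K, R, t, [1.0, 2.0, 4.0])
# u = (100*1 + 64*4)/4 = 89.0, v = (100*2 + 48*4)/4 = 98.0
```

In the method itself this relation is used in the opposite direction: the two-dimensional coordinates x̃ᵢ and the space coordinates Xᵢ are known, and R and t are solved for.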
Preferably, during the movement in which the mobile device collects visual feature points, the method further comprises:
collecting inertial data and visual information of the mobile device during the movement; and
estimating the motion trajectory of the mobile device during the movement based on the inertial data and the visual information.
Preferably, after the pose of the mobile device at the current time is determined from the space coordinates of the visual feature points described by the second group of feature descriptors, the method further comprises:
replacing the pose at the corresponding time estimated from the inertial data and the visual information with the determined pose of the mobile device at the current time, so as to correct the motion trajectory.
Preferably, each previously extracted group of feature descriptors is, specifically, a group extracted from a key image frame each time a key image frame is collected, where the key image frames are selected in sequence, according to a preset spatial interval, from all image frames collected by the mobile device.
In a second aspect, an embodiment of the invention provides a mobile device, comprising:
an extraction unit, configured to extract, during movement in which the mobile device collects visual feature points, a first group of feature descriptors for the visual feature points collected at the current time;
a detection unit, configured to perform loop-closure detection between the first group of feature descriptors and each group of feature descriptors extracted previously; and
a determination unit, configured to, when a closed loop is detected based on the first group of feature descriptors and a second group of feature descriptors, determine the pose of the mobile device at the current time from the space coordinates of the visual feature points described by the second group of feature descriptors, where the second group of feature descriptors is one of the groups of feature descriptors extracted previously.
Preferably, the detection unit comprises:
a comparison subunit, configured to compare the first group of feature descriptors for similarity with each previously extracted group of feature descriptors, and to determine, for each previously extracted group, the number of descriptors satisfying a preset similarity condition with the first group; and
a judgment subunit, configured to judge, for each previously extracted group, whether the number of descriptors satisfying the preset similarity condition with the first group exceeds a preset quantity threshold, where a closed loop is deemed detected when that number exceeds the preset quantity threshold.
Preferably, the comparison subunit is specifically configured to:
compare each feature descriptor in the first group against each feature descriptor in every previously extracted group; and
judge whether the vector angle between the two compared feature descriptors is smaller than a preset angle threshold, where the compared feature descriptors are deemed to satisfy the preset similarity condition when the vector angle is smaller than the preset angle threshold.
Preferably, the determination unit comprises:
a first determination subunit, configured to determine a plurality of feature descriptors in the second group of feature descriptors;
a second determination subunit, configured to determine the two-dimensional image coordinates corresponding to the plurality of feature descriptors in the image frame collected at the current time;
a matrix-establishing subunit, configured to establish, based on the space coordinates of the visual feature points described by the plurality of feature descriptors, the two-dimensional image coordinates, and the intrinsic matrix of the image acquisition unit built into the mobile device, a transfer matrix representing the pose of the mobile device:
x̃ᵢ = K·T·Xᵢ, with T = [R t]
where T is the transfer matrix, Xᵢ are the space coordinates of the visual feature points described by the plurality of feature descriptors, x̃ᵢ are the corresponding two-dimensional image coordinates in the image frame collected at the current time, K is the intrinsic matrix of the image acquisition unit built into the mobile device, R is the attitude of the mobile device, and t is the position of the mobile device; and
a solving subunit, configured to solve the transfer matrix to obtain the pose of the mobile device at the current time.
Preferably, the mobile device further comprises:
an acquisition unit, configured to collect inertial data and visual information of the mobile device during the movement; and
a trajectory estimation unit, configured to estimate the motion trajectory of the mobile device during the movement based on the inertial data and the visual information.
Preferably, the mobile device further comprises:
a correction unit, configured to replace the pose at the corresponding time estimated from the inertial data and the visual information with the determined pose of the mobile device at the current time, so as to correct the motion trajectory.
Preferably, each previously extracted group of feature descriptors is, specifically, a group extracted from a key image frame each time a key image frame is collected, where the key image frames are selected in sequence, according to a preset spatial interval, from all image frames collected by the mobile device.
The one or more technical solutions provided in the embodiments of the present invention achieve at least the following technical effects or advantages:
As the mobile device collects visual feature points during movement, the first group of feature descriptors extracted for the visual feature points collected at the current time is checked for loop closure against each previously extracted group of feature descriptors, so as to determine whether the mobile device is passing again through an area it traversed before. Then, when a closed loop is detected between the first group of feature descriptors and a second, previously extracted group, the pose of the mobile device at the current time is determined from the space coordinates of the visual feature points described by the second group. Thus, when the mobile device passes through the same area again, its current pose can be recomputed from the space coordinates of the visual feature points recorded the previous time, correcting the device's pose at the loop-closure location and eliminating the accumulated drift of the pose estimate. This solves the technical problem that errors in the device's estimate of its own pose accumulate during movement and seriously degrade positioning accuracy, effectively improving the precision of mobile-device-based localization without building an environment map, and thereby achieving accurate localization without an environment map while ensuring both the real-time performance and the accuracy of mobile-device-based localization.
Detailed description of the invention
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the localization method of the mobile device in an embodiment of the present invention;
Fig. 2 is a detailed flowchart of step S103 in Fig. 1;
Fig. 3 is a functional unit diagram of the mobile device in an embodiment of the present invention.
Specific embodiment
The embodiments of the present invention provide a localization method for a mobile device, and a mobile device, to solve the technical problem that errors in a mobile device's estimate of its own pose accumulate during movement and seriously degrade positioning accuracy. The general idea of the technical solutions in the embodiments is as follows:
During movement in which the mobile device collects visual feature points, a first group of feature descriptors is extracted for the visual feature points collected at the current time, and loop-closure detection is performed between the first group and each previously extracted group of feature descriptors. For example, the mobile device may be a robot equipped with an image acquisition unit such as a fisheye camera, another camera with capabilities better than a fisheye camera, or a scanning device. Through these two steps, the mobile device collects feature descriptors for loop-closure detection during movement, so that each time a group of feature descriptors is collected, loop-closure detection is performed against every previously extracted group; this cyclic loop-closure detection determines whether the area reached at each current time is an area that was reached before.
Then, when a closed loop is detected between the first group of feature descriptors and one of the previously extracted groups, the pose of the mobile device at the current time is determined from the space coordinates of the visual feature points described by that previously extracted group. Determining the current pose from these space coordinates allows the pose of the mobile device to be corrected whenever a closed loop is detected, eliminating the error in the device's estimate of its own pose that accumulates during movement. This improves the accuracy of mobile-device-based localization, achieving accurate localization without building an environment map while ensuring both real-time performance and positioning accuracy.
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
As shown in Fig. 1, a flowchart of the localization method of the mobile device in an embodiment of the present invention, the localization method includes the following steps:
S101: during movement in which the mobile device collects visual feature points, extract a first group of feature descriptors for the visual feature points collected at the current time.
Specifically, a feature descriptor is a vector, namely a vector describing a visual feature point in a collected image frame. The first group of feature descriptors is the group of vectors describing the visual feature points in the image frame collected at the current time. A visual feature point is a point carrying a feature of the surrounding environment; for example, a table corner, a stool leg, or a door corner is a visual feature point, and such points are not enumerated exhaustively here.
The image acquisition unit carried by the mobile device performs image acquisition while the device is moving. Each time the image acquisition unit collects an image frame, the feature descriptors of the visual feature points in that frame are extracted, yielding one group of feature descriptors per image frame.
In one embodiment, extracting the first group of feature descriptors for the visual feature points collected at the current time is specifically: extracting the feature descriptors of the visual feature points in the image frame collected at the current time as the first group, and recording the extracted first group of descriptors. To reduce the consumption of the mobile device's computational resources, in another embodiment it is specifically: when the image frame collected at the current time is determined to be a key image frame, extracting the feature descriptors of the visual feature points in that frame as the first group of feature descriptors.
S102: perform loop-closure detection between the first group of feature descriptors and each previously extracted group of feature descriptors.
In S102, each previously extracted group of feature descriptors is obtained in the same or a similar way as the first group is extracted in S101:
Specifically, in one embodiment, each previously extracted group of feature descriptors is obtained as follows: each time the image acquisition unit collected an image frame, the feature descriptors of the visual feature points in that frame were extracted, one group per image frame, and each extracted group was recorded, yielding the previously extracted groups of feature descriptors referred to in S102.
Specifically, to reduce the consumption of the mobile device's computational resources, in another embodiment each previously extracted group of feature descriptors is a group extracted from a key image frame each time a key image frame was collected; when a collected frame is not a key image frame, no feature descriptors are extracted. The key image frames are selected in sequence, according to a preset spatial interval, from all image frames collected by the mobile device. Specifically, each time the image acquisition unit collects an image frame, whether it is a key image frame is judged based on the preset spatial interval; the feature descriptors of the visual feature points in the frame are extracted only if it is judged to be a key image frame, and not extracted otherwise.
In a specific implementation, the preset spatial interval is set according to the computational resources and the positioning-accuracy requirements of the mobile device, and is not specifically limited here. For example, with a preset spatial interval of 0.5 m, after the image frame collected at the starting position of the mobile device is determined to be a key frame, the frame collected each time the device has moved a further 0.5 m is judged to be a key image frame, while frames collected at other positions are not; for example, frames collected within the distance intervals (0 m, 0.5 m), (0.5 m, 1 m), (1 m, 1.5 m), ... are judged not to be key image frames.
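The key-frame rule described above (the frame at the starting position is a key frame, and thereafter one key frame per 0.5 m of movement) can be sketched as follows; the function name and the use of cumulative path length as input are assumptions for illustration:

```python
def select_key_frames(distances, interval=0.5):
    """Mark frames as key frames by preset spatial interval: the frame at the
    starting position is a key frame, and each frame collected once the device
    has moved at least `interval` further is the next key frame.
    `distances` is the cumulative path length at which each frame was collected."""
    flags, next_key = [], 0.0
    for d in distances:
        if d >= next_key:            # reached (or passed) the next 0.5 m mark
            flags.append(True)
            next_key = d + interval  # frames within the next interval are skipped
        else:
            flags.append(False)      # e.g. within (0 m, 0.5 m): not a key frame
    return flags

# Frames collected at cumulative distances 0, 0.2, 0.5, 0.7, 1.0 m:
flags = select_key_frames([0.0, 0.2, 0.5, 0.7, 1.0])
# → [True, False, True, False, True]
```

Only frames flagged True would have descriptor groups extracted and recorded, trading some loop-detection granularity for lower computation.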
The loop-closure detection in S102 is specifically carried out as follows: the first group of feature descriptors is compared for similarity with each previously extracted group, and for each previously extracted group, the number of descriptors satisfying a preset similarity condition with the first group is determined; then, for each previously extracted group, it is judged whether that number exceeds a preset quantity threshold, a closed loop being deemed detected when it does. Thus, when a closed loop is detected between the first group of feature descriptors and a certain previously extracted group, the mobile device is considered to have reached the same area it passed through before, corresponding to that group.
Specifically, taking the case in which three groups of feature descriptors were extracted before the current time, the implementation of one round of loop-closure detection is described below by way of example; from it, those skilled in the art can understand how loop-closure detection is performed at other times:
Three groups of feature descriptors were extracted at times T1, T2, and T3 before the current time (time T4). For convenience they are named: group A extracted at T1, group B extracted at T2, group C extracted at T3, and group D extracted at T4 (i.e., the first group of feature descriptors). The following three steps are then performed independently: compare group D with group A for similarity and determine that the number of descriptors satisfying the preset similarity condition between them is a; compare group D with group B and determine that number to be b; compare group D with group C and determine that number to be c. Next, judge whether a exceeds the preset quantity threshold, whether b exceeds it, and whether c exceeds it. If a exceeds the threshold, the device is considered to have reached at the current time the same area it reached at T1; if b exceeds it, the same area it reached at T2; and if c exceeds it, the same area it reached at T3.
In a specific implementation, the preset quantity threshold is set according to actual needs. For example, in this embodiment the preset quantity threshold is set to 3; when the number of descriptors satisfying the preset similarity condition exceeds 3, a closed loop is deemed detected. For instance, a count of 4, 5, or 6 descriptors satisfying the preset similarity condition indicates a detected closed loop.
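A minimal sketch of one round of loop-closure detection under these settings (angle threshold 30 degrees, quantity threshold 3) is given below; the toy two-dimensional "descriptors" stand in for real high-dimensional descriptor vectors, and the function names are illustrative:

```python
import math

def angle_deg(d1, d2):
    """Vector angle in degrees between two feature descriptors."""
    dot = sum(a * b for a, b in zip(d1, d2))
    n1 = math.sqrt(sum(a * a for a in d1))
    n2 = math.sqrt(sum(b * b for b in d2))
    cos = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos))

def detect_loop(group_now, group_prev, angle_thresh=30.0, count_thresh=3):
    """A closed loop is deemed detected when the number of descriptors in the
    previous group whose angle to some descriptor in the current group is
    below the preset angle threshold exceeds the preset quantity threshold."""
    matches = sum(
        1 for dp in group_prev
        if any(angle_deg(dn, dp) < angle_thresh for dn in group_now)
    )
    return matches > count_thresh

# Group D at the current time versus group A from time T1: four near-identical
# directions plus one outlier give 4 matches > threshold 3, so a loop is found.
group_a = [(1, 0), (0, 1), (1, 1), (2, 1), (-1, 1)]
group_d = [(1, 0.01), (0.01, 1), (1, 1.02), (2, 1.01), (5, -5)]
found = detect_loop(group_d, group_a)
# → True
```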
Specifically, the cycle of loop-closure detection in S102 proceeds as follows:
At current time T2, the first group of feature descriptors for the collected visual feature points (here, group B) is extracted and checked for loop closure against group A extracted at T1. Then, at current time T3, the first group (here, group C) is extracted and checked for loop closure against group A extracted at T1 and against group B extracted at T2. Then, at current time T4, the first group (here, group D) is extracted and checked for loop closure against group A extracted at T1, group B extracted at T2, and group C extracted at T3. The cycle continues in this way, so that at each current time T2, T3, T4, T5, T6, ... the newly extracted first group of feature descriptors is checked for loop closure against every group of feature descriptors extracted before the current time.
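The cycle above, in which each newly extracted group is compared against every earlier group, can be sketched as follows (the callable `detect` stands in for the threshold-based loop check; the names are illustrative):

```python
def run_loop_detection(groups_by_time, detect):
    """At each current time, compare the newly extracted descriptor group
    against every group extracted before it (the cycle at T2, T3, T4, T5, ...)."""
    loops = []
    for now in range(1, len(groups_by_time)):
        for past in range(now):
            if detect(groups_by_time[now], groups_by_time[past]):
                loops.append((now, past))  # loop between current and a past time
    return loops

# Toy stand-in: groups are labels, and a "loop" is simply an identical label.
loops = run_loop_detection(["A", "B", "A", "C"], lambda g1, g2: g1 == g2)
# → [(2, 0)]: the group at time index 2 closes a loop with time index 0
```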
Specifically, the preset similarity condition is that the vector angle between two descriptors is smaller than a preset angle threshold. Judging whether the preset similarity condition is satisfied is specifically implemented as follows: compare each feature descriptor in the first group against each feature descriptor in every previously extracted group; judge whether the vector angle between the two compared feature descriptors is smaller than the preset angle threshold, the two compared descriptors being deemed to satisfy the preset similarity condition when it is, thereby judging the degree of matching between descriptors.
In a specific implementation, the preset angle threshold is set according to actual needs. For example, with the preset angle threshold set to 30 degrees, two compared feature descriptors satisfy the preset similarity condition only when the vector angle between them lies within [0, 30] degrees, and otherwise do not; with the threshold set to 15 degrees, they satisfy it only when the angle lies within [0, 15] degrees, and otherwise do not.
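The similarity condition, i.e. the vector angle between two descriptors falling below the preset angle threshold, can be sketched as follows (toy two-dimensional vectors; real descriptors are higher-dimensional):

```python
import math

def vector_angle_deg(d1, d2):
    """Angle in degrees between two descriptor vectors."""
    dot = sum(a * b for a, b in zip(d1, d2))
    norm = math.sqrt(sum(a * a for a in d1)) * math.sqrt(sum(b * b for b in d2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def satisfies_similarity(d1, d2, angle_thresh_deg):
    """Preset similarity condition: vector angle below the preset angle threshold."""
    return vector_angle_deg(d1, d2) < angle_thresh_deg

d1, d2 = (1.0, 0.0), (1.0, 0.5)   # angle = atan(0.5), about 26.57 degrees
similar_at_30 = satisfies_similarity(d1, d2, 30.0)   # → True: within [0, 30]
similar_at_15 = satisfies_similarity(d1, d2, 15.0)   # → False: outside [0, 15]
```

The same pair of descriptors thus matches or not depending on the chosen threshold, which is why the patent leaves the threshold to the implementer's accuracy requirements.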
S103: when a closed loop is detected based on the first group of feature descriptors and a second group of feature descriptors, determine the pose of the mobile device at the current time from the space coordinates of the visual feature points described by the second group, where the second group of feature descriptors is one of the groups of feature descriptors extracted previously.
Specifically, in S103, the pose of the mobile device at the current time includes the position and the attitude of the mobile device at the current time. As shown in Fig. 2, in one embodiment, determining the pose of the mobile device at the current time from the space coordinates of the visual feature points described by the second group of feature descriptors includes the following steps:
S1031: determine multiple feature descriptors in the second group of feature descriptors.
Specifically, the multiple feature descriptors determined are feature descriptors that meet the preset similarity condition with feature descriptors in the first group of feature descriptors. The number of feature descriptors determined from the second group of feature descriptors is set according to a preset quantity threshold. For example, if the preset quantity threshold is 3, then 4 feature descriptors meeting the preset similarity condition with feature descriptors in the first group are determined from the second group of feature descriptors.
Taking a preset quantity threshold of 3 as an example: step S102 has determined the feature descriptors in the second group of feature descriptors that meet the preset similarity condition with the first group of feature descriptors, for instance 5, 6 or 7 such feature descriptors; then in S1031, 4 are determined out of these 5, 6 or 7 feature descriptors. If only 4 feature descriptors in the second group meet the preset similarity condition with the first group, those 4 feature descriptors are determined. Taking a preset quantity threshold of 4 as an example: step S102 determines from the second group of feature descriptors the feature descriptors meeting the preset similarity condition with the first group, for instance 5, 6, 7 or 8 such feature descriptors; then in S1031, 5 are determined out of these 5, 6, 7 or 8 feature descriptors.
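The match counting of S102 and the selection of S1031 can be sketched together as below. This is a hedged illustration: it assumes a descriptor counts as matched when it meets the similarity condition with any descriptor of the first group, and all names are hypothetical:

```python
import math

def angle_deg(d1, d2):
    """Vector angle in degrees between two descriptor vectors."""
    dot = sum(a * b for a, b in zip(d1, d2))
    norm = math.sqrt(sum(a * a for a in d1)) * math.sqrt(sum(b * b for b in d2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def detect_and_select(first_group, prev_group, quantity_threshold=3,
                      angle_threshold_deg=30.0):
    """Count descriptors of prev_group meeting the similarity condition
    with the first group; flag a closed loop when the count exceeds the
    preset quantity threshold, and keep threshold + 1 of the matches."""
    matches = [d for d in prev_group
               if any(angle_deg(d, f) < angle_threshold_deg for f in first_group)]
    if len(matches) > quantity_threshold:
        return True, matches[:quantity_threshold + 1]
    return False, []
```

With a quantity threshold of 3, four or more matches flag a closed loop and four matches are retained, mirroring the example above.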
S1032: determine the two-dimensional image coordinates, in the image frame acquired at the current time, corresponding to the multiple feature descriptors.
Specifically, the multiple feature descriptors determined differ from one another. For example, suppose the feature descriptors determined are the feature descriptor of "table corner 1", the feature descriptor of "table corner 2", the feature descriptor of "stool leg 1" and the feature descriptor of "stool leg 2"; then the two-dimensional image coordinates of each of these feature descriptors in the image frame acquired at the current time are determined in turn. In a specific implementation, visual feature matching is used to obtain the two-dimensional image coordinates, in the image frame acquired at the current time, of the multiple visual feature points described by the multiple feature descriptors determined in S1031.
S1033: based on the space coordinates of the visual feature points described by the multiple feature descriptors, the two-dimensional image coordinates determined, and the intrinsic matrix of the image acquisition unit built into the mobile device, establish a transfer matrix representing the pose of the mobile device:

x_i' ≅ K [R t] X_i,  with T = [R t; 0 1]

wherein T is the transfer matrix, X_i is the space coordinate of a visual feature point described by the multiple feature descriptors, x_i' is the corresponding two-dimensional image coordinate of the multiple feature descriptors in the image frame acquired at the current time, K is the intrinsic matrix of the image acquisition unit built into the mobile device, R is the posture of the mobile device, and t is the position of the mobile device.
In one embodiment, the space coordinates of the visual feature points described by the multiple feature descriptors required in S1033 are obtained as follows: each time the image acquisition unit collects an image frame, the space coordinates of each visual feature point in the acquired image frame are recorded. In another embodiment, each time the image acquisition unit collects a key image frame, the space coordinates of each visual feature point in the acquired key image frame are recorded. Then, after S1032, the space coordinates of the visual feature points described by the multiple feature descriptors are determined from the recorded space coordinates of the visual feature points described by the second group of feature descriptors.
Specifically, the transfer matrix T is established according to the number of feature descriptors determined in S1031. In a specific embodiment, S1031 is specifically: 4 feature descriptors meeting the preset similarity condition with the first group of feature descriptors are determined from the second group of feature descriptors; then, based on the space coordinates X_1, X_2, X_3, X_4 of the visual feature points described by these 4 feature descriptors, the corresponding two-dimensional image coordinates x_1', x_2', x_3', x_4' of these visual feature points in the image frame acquired at the current time, and the intrinsic matrix K of the image acquisition unit built into the mobile device, the 4x4 transfer matrix T representing the pose of the mobile device is established.
Finally, S1034 is performed: the transfer matrix established in S1033 is solved to obtain the pose of the mobile device at the current time. Specifically, the pose at the current time obtained by the solving includes the posture R and the position t of the mobile device at the current time.
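The relation the transfer matrix encodes can be illustrated with the standard pinhole projection. The sketch below only evaluates the forward relation x_i' ≅ K [R t] X_i for a known pose; it is an assumption-laden illustration with hypothetical names, not the solving step S1034 itself:

```python
def transfer_matrix(R, t):
    """4x4 transfer matrix T = [R t; 0 1] combining the posture R
    (3x3 list of rows) and the position t (length-3 list)."""
    return [R[i] + [t[i]] for i in range(3)] + [[0.0, 0.0, 0.0, 1.0]]

def project(K, R, t, X):
    """Project the space coordinate X through pose (R, t) and the
    intrinsic matrix K, returning pixel coordinates (u, v)."""
    # Camera-frame coordinate: Xc = R X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # Pinhole projection with intrinsics K = [[fx,0,cx],[0,fy,cy],[0,0,1]]
    u = K[0][0] * Xc[0] / Xc[2] + K[0][2]
    v = K[1][1] * Xc[1] / Xc[2] + K[1][2]
    return u, v
```

Solving in S1034 then amounts to finding the R and t that make the projections of X_1 to X_4 agree with the observed two-dimensional image coordinates.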
In a further technical solution, the pose of the mobile device at the current time determined by the embodiment of the present invention is used to correct a motion trajectory estimated based on inertial data and visual information.
A specific embodiment is as follows: in the course of the mobile device's movement while acquiring visual feature points, the inertial data and the visual information of the mobile device during the movement are acquired, and the motion trajectory of the mobile device during the movement is estimated based on the inertial data and the visual information. Specifically, the inertial data of the mobile device during the movement are acquired through an IMU (inertial measurement unit) carried on the mobile device. The IMU includes an accelerometer and a gyroscope; after the accelerometer and the gyroscope respectively measure the acceleration and the angular velocity of the mobile device during its movement, the position and the posture of the mobile device at each moment are derived. The image acquisition unit carried on the mobile device acquires the visual information of the mobile device during the movement, and the derived position and posture of the mobile device are further refined using the visual information, thereby obtaining the motion trajectory of the mobile device during the movement. Then, the pose of the mobile device at the current time determined in S103 replaces the pose at the corresponding moment estimated based on the inertial data and the visual information, so as to correct the motion trajectory estimated based on the inertial data and the visual information. Specifically, the posture R and the position t of the mobile device at the current time, obtained by solving the transfer matrix established in S1033, replace the posture R and the position t at the corresponding moment estimated based on the inertial data and the visual information, thereby achieving the effect of correcting the motion trajectory estimated based on the inertial data and the visual information.
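The replacement step can be sketched as below, representing the estimated trajectory as a mapping from time stamp to pose; this is a minimal illustration with hypothetical names, assuming one (R, t) pose per time stamp:

```python
def correct_trajectory(trajectory, closure_time, corrected_pose):
    """Replace the (R, t) pose estimated from inertial data and visual
    information at the loop-closure time with the pose recomputed in
    S103 from the recorded feature coordinates; other poses are kept."""
    corrected = dict(trajectory)  # time -> (R, t); copy, do not mutate input
    corrected[closure_time] = corrected_pose
    return corrected
```

In practice the replaced pose would typically also anchor a re-optimization of neighboring poses, which this sketch omits.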
Based on the same inventive concept, an embodiment of the present invention provides a mobile device; referring to Fig. 3, it includes the following functional units:
an extraction unit 201, configured to extract, in the course of the mobile device's movement while acquiring visual feature points, a first group of feature descriptors of the visual feature points acquired at the current time;
a detection unit 202, configured to perform closed-loop detection between the first group of feature descriptors and each group of feature descriptors extracted previously;
a determination unit 203, configured to, when a closed loop is detected based on the first group of feature descriptors and a second group of feature descriptors, determine the pose of the mobile device at the current time through the space coordinates of the visual feature points described by the second group of feature descriptors, wherein the second group of feature descriptors is one of the groups of feature descriptors extracted previously.
Preferably, the detection unit 202 includes:
a comparison subunit, configured to perform similarity comparison between the first group of feature descriptors and each group of feature descriptors extracted previously, and to determine, for each group of feature descriptors extracted previously, the number of descriptors meeting a preset similarity condition with the first group of feature descriptors;
a judgment subunit, configured to judge, for each group of feature descriptors extracted previously, whether the number of descriptors meeting the preset similarity condition with the first group of feature descriptors is greater than a preset quantity threshold, wherein a closed loop is detected when the number of descriptors meeting the preset similarity condition is greater than the preset quantity threshold.
Preferably, the comparison subunit is specifically configured to:
compare each feature descriptor in the first group of feature descriptors with each feature descriptor in every group of feature descriptors extracted previously;
judge whether the vector angle between the compared feature descriptors is less than a preset angle threshold, wherein the compared feature descriptors meet the preset similarity condition when the vector angle is less than the preset angle threshold.
Preferably, the determination unit 203 includes:
a first determination subunit, configured to determine multiple feature descriptors in the second group of feature descriptors;
a second determination subunit, configured to determine the two-dimensional image coordinates, in the image frame acquired at the current time, corresponding to the multiple feature descriptors;
a matrix establishment subunit, configured to establish, based on the space coordinates of the visual feature points described by the multiple feature descriptors, the two-dimensional image coordinates, and the intrinsic matrix of the image acquisition unit built into the mobile device, a transfer matrix representing the pose of the mobile device:

x_i' ≅ K [R t] X_i,  with T = [R t; 0 1]

wherein T is the transfer matrix, X_i is the space coordinate of a visual feature point described by the multiple feature descriptors, x_i' is the corresponding two-dimensional image coordinate of the multiple feature descriptors in the image frame acquired at the current time, K is the intrinsic matrix of the image acquisition unit built into the mobile device, R is the posture of the mobile device, and t is the position of the mobile device;
a solving subunit, configured to solve the transfer matrix to obtain the pose of the mobile device at the current time.
Preferably, the mobile device further includes:
an acquisition unit, configured to acquire inertial data and visual information of the mobile device during the movement;
a trajectory estimation unit, configured to estimate the motion trajectory of the mobile device during the movement based on the inertial data and the visual information.
Preferably, the mobile device further includes:
a correction unit, configured to replace, with the pose of the mobile device determined at the current time, the pose at the corresponding moment estimated based on the inertial data and the visual information, so as to correct the motion trajectory.
Preferably, each group of feature descriptors extracted previously is specifically: one group extracted from a key image frame each time a key image frame is collected, wherein the key image frames are determined successively, according to a preset spatial interval, from all the image frames acquired by the mobile device.
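Key-frame selection by a preset spatial interval can be sketched as follows; this is a hedged illustration with hypothetical names, assuming each frame carries the planar position at which it was acquired:

```python
import math

def select_key_frames(frames, min_interval):
    """Keep a frame as a key frame whenever the device has moved at
    least min_interval away from the previous key frame's position.
    frames is a sequence of (position, frame) pairs, position = (x, y)."""
    keys, last_pos = [], None
    for pos, frame in frames:
        if last_pos is None or math.dist(pos, last_pos) >= min_interval:
            keys.append((pos, frame))
            last_pos = pos
    return keys
```

Descriptor groups would then be extracted from the selected key frames only, which bounds the number of groups the closed-loop detection must compare against.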
The one or more technical solutions provided in the embodiments of the present invention achieve at least the following technical effects or advantages:
The mobile device acquires visual feature points during its movement, a first group of feature descriptors of the visual feature points acquired at the current time is extracted, and closed-loop detection is performed with each group of feature descriptors extracted previously, thereby determining whether the mobile device is passing again through an area it passed through before. Then, when a closed loop is detected based on the first group of feature descriptors and a second group of feature descriptors extracted previously, the pose of the mobile device at the current time is determined through the space coordinates of the visual feature points described by the second group of feature descriptors. Thus, when the mobile device passes through the same area again, its current pose can be recalculated from the space coordinates of the visual feature points recorded the previous time, so as to correct the pose of the mobile device at the closed-loop location and eliminate the accumulated deviation of the pose estimation. This solves the technical problem that the error of the mobile device's estimation of its own pose accumulates during movement and seriously affects positioning accuracy, effectively improves the accuracy of positioning based on the mobile device when no environment map has been established, realizes accurate positioning without an environment map, and thereby ensures both the real-time performance and the accuracy of positioning based on the mobile device.
Numerous specific details are set forth in the specification provided here. It should be understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or apparatus so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments herein include certain features that are included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the device according to embodiments of the present invention. The present invention may also be implemented as apparatus or device programs (for example, computer programs and computer program products) for performing part or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media, or may be in the form of one or more signals. Such signals may be downloaded from Internet websites, provided on carrier signals, or provided in any other form.
It should be noted that the above-described embodiments illustrate rather than limit the invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any ordering; these words may be interpreted as names.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art may make various changes and variations to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include these modifications and variations.
Claims (12)
1. A localization method of a mobile device, characterized by comprising:
in the course of the mobile device's movement while acquiring visual feature points, extracting a first group of feature descriptors of the visual feature points acquired at the current time;
performing closed-loop detection between the first group of feature descriptors and each group of feature descriptors extracted previously;
when a closed loop is detected based on the first group of feature descriptors and a second group of feature descriptors, determining the pose of the mobile device at the current time through the space coordinates of the visual feature points described by the second group of feature descriptors,
wherein the second group of feature descriptors is one of the groups of feature descriptors extracted previously;
wherein the determining the pose of the mobile device at the current time through the space coordinates of the visual feature points described by the second group of feature descriptors comprises:
determining multiple feature descriptors in the second group of feature descriptors;
determining the two-dimensional image coordinates, in the image frame acquired at the current time, corresponding to the multiple feature descriptors;
establishing, based on the space coordinates of the visual feature points described by the multiple feature descriptors, the two-dimensional image coordinates, and the intrinsic matrix of the image acquisition unit built into the mobile device, a transfer matrix representing the pose of the mobile device:

x_i' ≅ K [R t] X_i,  with T = [R t; 0 1]

wherein T is the transfer matrix, X_i is the space coordinate of a visual feature point described by the multiple feature descriptors, x_i' is the corresponding two-dimensional image coordinate of the multiple feature descriptors in the image frame acquired at the current time, K is the intrinsic matrix of the image acquisition unit built into the mobile device, R is the posture of the mobile device, and t is the position of the mobile device;
solving the transfer matrix to obtain the pose of the mobile device at the current time.
2. The localization method of a mobile device as claimed in claim 1, characterized in that the performing closed-loop detection between the first group of feature descriptors and each group of feature descriptors extracted previously comprises:
performing similarity comparison between the first group of feature descriptors and each group of feature descriptors extracted previously, and determining, for each group of feature descriptors extracted previously, the number of descriptors meeting a preset similarity condition with the first group of feature descriptors;
judging, for each group of feature descriptors extracted previously, whether the number of descriptors meeting the preset similarity condition with the first group of feature descriptors is greater than a preset quantity threshold, wherein a closed loop is detected when the number of descriptors meeting the preset similarity condition is greater than the preset quantity threshold.
3. The localization method of a mobile device as claimed in claim 2, characterized in that the performing similarity comparison between the first group of feature descriptors and each group of feature descriptors extracted previously, and judging whether the preset similarity condition is met, comprises:
comparing each feature descriptor in the first group of feature descriptors with each feature descriptor in every group of feature descriptors extracted previously;
judging whether the vector angle between the compared feature descriptors is less than a preset angle threshold, wherein the compared feature descriptors meet the preset similarity condition when the vector angle is less than the preset angle threshold.
4. The localization method of a mobile device as claimed in claim 1, characterized in that, in the course of the mobile device's movement while acquiring visual feature points, the method further comprises:
acquiring inertial data and visual information of the mobile device during the movement;
estimating a motion trajectory of the mobile device during the movement based on the inertial data and the visual information.
5. The localization method of a mobile device as claimed in claim 4, characterized in that, after the determining the pose of the mobile device at the current time through the space coordinates of the visual feature points described by the second group of feature descriptors, the method further comprises:
replacing, with the pose of the mobile device determined at the current time, the pose at the corresponding moment estimated based on the inertial data and the visual information, so as to correct the motion trajectory.
6. The localization method of a mobile device as claimed in claim 1, characterized in that each group of feature descriptors extracted previously is specifically: one group extracted from a key image frame each time a key image frame is collected, wherein the key image frames are determined successively, according to a preset spatial interval, from all the image frames acquired by the mobile device.
7. A mobile device, characterized by comprising:
an extraction unit, configured to extract, in the course of the mobile device's movement while acquiring visual feature points, a first group of feature descriptors of the visual feature points acquired at the current time;
a detection unit, configured to perform closed-loop detection between the first group of feature descriptors and each group of feature descriptors extracted previously;
a determination unit, configured to, when a closed loop is detected based on the first group of feature descriptors and a second group of feature descriptors, determine the pose of the mobile device at the current time through the space coordinates of the visual feature points described by the second group of feature descriptors, wherein the second group of feature descriptors is one of the groups of feature descriptors extracted previously;
the determination unit comprising:
a first determination subunit, configured to determine multiple feature descriptors in the second group of feature descriptors;
a second determination subunit, configured to determine the two-dimensional image coordinates, in the image frame acquired at the current time, corresponding to the multiple feature descriptors;
a matrix establishment subunit, configured to establish, based on the space coordinates of the visual feature points described by the multiple feature descriptors, the two-dimensional image coordinates, and the intrinsic matrix of the image acquisition unit built into the mobile device, a transfer matrix representing the pose of the mobile device:

x_i' ≅ K [R t] X_i,  with T = [R t; 0 1]

wherein T is the transfer matrix, X_i is the space coordinate of a visual feature point described by the multiple feature descriptors, x_i' is the corresponding two-dimensional image coordinate of the multiple feature descriptors in the image frame acquired at the current time, K is the intrinsic matrix of the image acquisition unit built into the mobile device, R is the posture of the mobile device, and t is the position of the mobile device;
a solving subunit, configured to solve the transfer matrix to obtain the pose of the mobile device at the current time.
8. The mobile device as claimed in claim 7, characterized in that the detection unit comprises:
a comparison subunit, configured to perform similarity comparison between the first group of feature descriptors and each group of feature descriptors extracted previously, and to determine, for each group of feature descriptors extracted previously, the number of descriptors meeting a preset similarity condition with the first group of feature descriptors;
a judgment subunit, configured to judge, for each group of feature descriptors extracted previously, whether the number of descriptors meeting the preset similarity condition with the first group of feature descriptors is greater than a preset quantity threshold, wherein a closed loop is detected when the number of descriptors meeting the preset similarity condition is greater than the preset quantity threshold.
9. The mobile device as claimed in claim 8, characterized in that the comparison subunit is specifically configured to:
compare each feature descriptor in the first group of feature descriptors with each feature descriptor in every group of feature descriptors extracted previously;
judge whether the vector angle between the compared feature descriptors is less than a preset angle threshold, wherein the compared feature descriptors meet the preset similarity condition when the vector angle is less than the preset angle threshold.
10. The mobile device as claimed in claim 7, characterized in that the mobile device further comprises:
an acquisition unit, configured to acquire inertial data and visual information of the mobile device during the movement;
a trajectory estimation unit, configured to estimate a motion trajectory of the mobile device during the movement based on the inertial data and the visual information.
11. The mobile device as claimed in claim 10, characterized in that the mobile device further comprises:
a correction unit, configured to replace, with the pose of the mobile device determined at the current time, the pose at the corresponding moment estimated based on the inertial data and the visual information, so as to correct the motion trajectory.
12. The mobile device as claimed in claim 7, characterized in that each group of feature descriptors extracted previously is specifically: one group extracted from a key image frame each time a key image frame is collected, wherein the key image frames are determined successively, according to a preset spatial interval, from all the image frames acquired by the mobile device.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610652818.0A CN106352877B (en) | 2016-08-10 | 2016-08-10 | A kind of mobile device and its localization method |
PCT/CN2017/096945 WO2018028649A1 (en) | 2016-08-10 | 2017-08-10 | Mobile device, positioning method therefor, and computer storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610652818.0A CN106352877B (en) | 2016-08-10 | 2016-08-10 | A kind of mobile device and its localization method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106352877A CN106352877A (en) | 2017-01-25 |
CN106352877B true CN106352877B (en) | 2019-08-23 |
Family
ID=57843765
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610652818.0A Active CN106352877B (en) | 2016-08-10 | 2016-08-10 | A kind of mobile device and its localization method |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106352877B (en) |
WO (1) | WO2018028649A1 (en) |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106352877B (en) * | 2016-08-10 | 2019-08-23 | 纳恩博(北京)科技有限公司 | A kind of mobile device and its localization method |
KR102696652B1 (en) * | 2017-01-26 | 2024-08-21 | 삼성전자주식회사 | Stero matching method and image processing apparatus |
CN107907131B (en) * | 2017-11-10 | 2019-12-13 | 珊口(上海)智能科技有限公司 | positioning system, method and applicable robot |
US10436590B2 (en) | 2017-11-10 | 2019-10-08 | Ankobot (Shanghai) Smart Technologies Co., Ltd. | Localization system and method, and robot using the same |
CN108717710B (en) | 2018-05-18 | 2022-04-22 | 京东方科技集团股份有限公司 | Positioning method, device and system in indoor environment |
WO2019228520A1 (en) * | 2018-06-01 | 2019-12-05 | Beijing Didi Infinity Technology And Development Co., Ltd. | Systems and methods for indoor positioning |
CN110617821B (en) * | 2018-06-19 | 2021-11-02 | 北京嘀嘀无限科技发展有限公司 | Positioning method, positioning device and storage medium |
CN111383282B (en) * | 2018-12-29 | 2023-12-01 | 杭州海康威视数字技术股份有限公司 | Pose information determining method and device |
CN110207537A (en) * | 2019-06-19 | 2019-09-06 | 赵天昊 | Fire Control Device and its automatic targeting method based on computer vision technique |
WO2020258187A1 (en) * | 2019-06-27 | 2020-12-30 | 深圳市大疆创新科技有限公司 | State detection method and apparatus and mobile platform |
CN110293563B (en) * | 2019-06-28 | 2022-07-26 | 炬星科技(深圳)有限公司 | Method, apparatus, and storage medium for estimating pose of robot |
CN110275540A (en) * | 2019-07-01 | 2019-09-24 | 湖南海森格诺信息技术有限公司 | Semantic navigation method and its system for sweeping robot |
CN110334560B (en) * | 2019-07-16 | 2023-04-07 | 山东浪潮科学研究院有限公司 | Two-dimensional code positioning method and device |
CN112284399B (en) * | 2019-07-26 | 2022-12-13 | 北京魔门塔科技有限公司 | Vehicle positioning method based on vision and IMU and vehicle-mounted terminal |
CN112634360B (en) * | 2019-10-08 | 2024-03-05 | 北京京东乾石科技有限公司 | Visual information determining method, device, equipment and storage medium |
CN111105459B (en) * | 2019-12-24 | 2023-10-20 | 广州视源电子科技股份有限公司 | Descriptor map generation method, positioning method, device, equipment and storage medium |
CN113112547A (en) * | 2021-04-23 | 2021-07-13 | 北京云迹科技有限公司 | Robot, repositioning method thereof, positioning device and storage medium |
CN114527752B (en) * | 2022-01-25 | 2024-08-09 | 浙江省交通投资集团有限公司智慧交通研究分公司 | Method for accurately positioning inspection data of a track inspection robot in a weak satellite signal environment |
CN114415698B (en) * | 2022-03-31 | 2022-11-29 | 深圳市普渡科技有限公司 | Robot, positioning method and device of robot and computer equipment |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1569558A (en) * | 2003-07-22 | 2005-01-26 | 中国科学院自动化研究所 | Vision navigation method for a mobile robot based on image representation features |
US20080195316A1 (en) * | 2007-02-12 | 2008-08-14 | Honeywell International Inc. | System and method for motion estimation using vision sensors |
CN102109348B (en) * | 2009-12-25 | 2013-01-16 | 财团法人工业技术研究院 | System and method for carrier positioning, carrier attitude estimation, and map building |
WO2012166814A1 (en) * | 2011-05-31 | 2012-12-06 | Honda Motor Co., Ltd. | Online environment mapping |
CN103869814B (en) * | 2012-12-17 | 2017-04-19 | 联想(北京)有限公司 | Terminal positioning and navigation method and mobile terminal |
US9243916B2 (en) * | 2013-02-21 | 2016-01-26 | Regents Of The University Of Minnesota | Observability-constrained vision-aided inertial navigation |
US9519286B2 (en) * | 2013-03-19 | 2016-12-13 | Robotic Research, Llc | Delayed telop aid |
CN104374395A (en) * | 2014-03-31 | 2015-02-25 | 南京邮电大学 | Graph-based vision SLAM (simultaneous localization and mapping) method |
CN104851094A (en) * | 2015-05-14 | 2015-08-19 | 西安电子科技大学 | Improved method of RGB-D-based SLAM algorithm |
CN105783913A (en) * | 2016-03-08 | 2016-07-20 | 中山大学 | SLAM device integrating multiple vehicle-mounted sensors and control method of device |
CN106352877B (en) * | 2016-08-10 | 2019-08-23 | 纳恩博(北京)科技有限公司 | Mobile device and localization method thereof |
- 2016-08-10 CN CN201610652818.0A patent/CN106352877B/en active Active
- 2017-08-10 WO PCT/CN2017/096945 patent/WO2018028649A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2018028649A1 (en) | 2018-02-15 |
CN106352877A (en) | 2017-01-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106352877B (en) | Mobile device and localization method thereof | |
CN109059895B (en) | Multi-mode indoor distance measurement and positioning method based on mobile phone camera and sensor | |
CN104487915B (en) | Maintaining continuity of augmentations | |
US10373244B2 (en) | System and method for virtual clothes fitting based on video augmented reality in mobile phone | |
CN105654512B (en) | Target tracking method and device | |
JP5722502B2 (en) | Planar mapping and tracking for mobile devices | |
Tanskanen et al. | Live metric 3D reconstruction on mobile phones | |
US9122916B2 (en) | Three dimensional fingertip tracking | |
JP6343670B2 (en) | Autonomous mobile device and self-position estimation method | |
CN110986969B (en) | Map fusion method and device, equipment and storage medium | |
CN108028871A (en) | Markerless multi-user multi-object augmented reality on mobile devices | |
JP6609640B2 (en) | Managing feature data for environment mapping on electronic devices | |
JP2015532077A (en) | Method for determining the position and orientation of an apparatus associated with an imaging apparatus that captures at least one image | |
CN101271333A (en) | Localization method for a moving robot | |
WO2019057197A1 (en) | Visual tracking method and apparatus for moving target, electronic device and storage medium | |
CN111595344B (en) | Multi-posture downlink pedestrian dead reckoning method based on map information assistance | |
CN106030610A (en) | Real-time 3D gesture recognition and tracking system for mobile devices | |
CN112020694A (en) | Method, system, and non-transitory computer-readable recording medium for supporting object control | |
CN109035308A (en) | Image compensation method and device, electronic equipment and computer readable storage medium | |
CN109948624A (en) | Feature extraction method, apparatus, electronic device, and computer storage medium | |
CN112731503A (en) | Pose estimation method and system based on front-end tight coupling | |
CN115482556A (en) | Method for key point detection model training and virtual character driving and corresponding device | |
US10551195B2 (en) | Portable device with improved sensor position change detection | |
CN115461794A (en) | Method, system, and non-transitory computer-readable recording medium for estimating user gesture from two-dimensional image | |
CN104113684B (en) | Control method and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 2022-12-29
Address after: Room 203, Floor 2, Building A-1, North Territory, Dongsheng Science and Technology Park, Zhongguancun, No. 66 Xixiaokou Road, Haidian District, Beijing, 100192
Patentee after: Weilan Continental (Beijing) Technology Co., Ltd.
Address before: Room C206, Building B-2, North Territory, Dongsheng Science Park, Zhongguancun, No. 66 Xixiaokou Road, Haidian District, Beijing, 100192
Patentee before: NINEBOT (BEIJING) TECH Co., Ltd.