
CN110388926B - Indoor positioning method based on mobile phone geomagnetism and scene image - Google Patents


Info

Publication number
CN110388926B
CN110388926B (granted from application CN201910629743.8A; published as CN110388926A)
Authority
CN
China
Prior art keywords
scene
user
scene image
particle
particles
Prior art date
Legal status
Active
Application number
CN201910629743.8A
Other languages
Chinese (zh)
Other versions
CN110388926A (en)
Inventor
颜成钢
巩鹏博
孙垚棋
张继勇
张勇东
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201910629743.8A
Publication of CN110388926A
Application granted
Publication of CN110388926B

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/04: Navigation by terrestrial means
    • G01C21/08: Navigation by terrestrial means involving use of the magnetic field of the earth
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/206: Instruments for performing navigational calculations specially adapted for indoor navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Environmental & Geological Engineering (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Geology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an indoor positioning method based on mobile phone geomagnetism and scene images. The method comprises the following steps: step 1, offline collection of indoor map fingerprint information; step 2, a step-length estimation model that estimates the user's step length and walking direction; step 3, a measurement model that extracts deep features from scene images with a convolutional neural network; step 4, tracking the position trajectory with an adaptive particle filter algorithm. The method requires no dedicated infrastructure: only an ordinary smartphone is needed, whose built-in sensors can measure both the geomagnetic field and scene images. By exploiting the complementary advantages of geomagnetism and scene images, the method achieves better indoor positioning accuracy.

Description

Indoor positioning method based on mobile phone geomagnetism and scene image
Technical Field
The invention belongs to the field of indoor positioning, and particularly relates to a method that builds a map from geomagnetism and scene images, extracts position features with a convolutional neural network, and performs dynamic positioning with a particle filter algorithm.
Background
Indoor positioning technology plays an important role in the convenience of daily life and has many uses, such as guiding a robot through a building, helping a person find a destination in an unfamiliar building, or guiding a vehicle to a parking spot in an underground garage.
Accurate indoor positioning remains a significant challenge. The geomagnetic field is ubiquitous and can be used for indoor positioning without any infrastructure; the same is true of visual images, and both can be conveniently captured by a mobile phone's sensors. In indoor positioning the two are complementary: a visual image can effectively identify which indoor scene the user is in, while geomagnetism can localize the position within that scene more precisely.
Disclosure of Invention
The invention mainly provides a method that builds a map from geomagnetism and scene images, extracts deep position features with a convolutional neural network, and then performs indoor positioning with an adaptive particle filter algorithm.
The invention provides a novel indoor positioning system that needs no infrastructure built in advance; the required geomagnetic readings and scene images are supplied by the mobile phone's sensors. The system has two stages: offline map information acquisition and online indoor positioning. The offline stage collects indoor map information. The online stage comprises three parts: a step-length estimation model, a measurement model, and an adaptive particle filter tracking model. The step-length estimation model estimates the user's step length and walking direction at each moment and records the walking trajectory. The measurement model converts each measured scene image into deep features through a convolutional neural network, computes its similarity to the recorded map information, and determines the most probable scene position. The adaptive particle filter tracking model tracks the user step by step, estimating the user's potential location as a probability distribution after each step. The method is implemented according to the following steps:
Step 1. Offline collection of indoor map fingerprint information, specifically:
1-1. Divide the indoor map into 60 cm × 60 cm grids and assign position labels.
1-2. Collect geomagnetic and scene-image information for each grid along the main road directions.
1-3. Store the collected measurements together with their position labels as the building's fingerprint database.
Step 2. Step-length estimation model: estimate the user's step length and walking direction, specifically:
2-1. Compute the user's walking direction and gait from the mobile phone's sensor data;
2-2. Set the step length to 60 cm in the initial condition, and record the tracked walking trajectory of the user.
Step 3. Measurement model: extract deep features from the scene image with a convolutional neural network, specifically:
3-1. Train a convolutional neural network on scene images labelled by scene and position, and take the layer before the softmax layer as the deep position information of the scene image.
3-2. Extract deep features from scene images acquired online with the trained convolutional neural network, and compare their similarity with the map information stored in advance.
Step 4. Track the position trajectory with an adaptive particle filter algorithm.
The adaptive particle filter algorithm tracks the user's walking trajectory: after each user step, the algorithm updates the estimated position. The algorithm does not estimate the user's true position directly; instead, it estimates the user's potential position as a probability distribution. Its core idea is to represent the posterior probability distribution by a set of particles and their associated weights, realizing a recursive Bayes filter by the Monte Carlo method. The weight-update step is adapted to better suit geomagnetism and scene images. The specific steps are:
4-1. Initialize the particles using the geomagnetism and scene image at the initial time.
4-2. Propagate the particles using the step-length estimation model;
4-3. Update and normalize the weight of each particle at its propagated position.
4-4. Sort the propagated particles by weight, take the top 40%, and estimate the most probable position as their weighted average.
4-5. Resample the particles.
4-6. Repeat this propagation-update-resampling cycle.
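The cycle 4-1 through 4-6 can be sketched in code. The following is a minimal, hedged illustration of one tracking cycle; the synthetic geomagnetic field, the map size, and all parameter values are illustrative assumptions, not values from the patent.

```python
import math
import random

random.seed(0)

def fingerprint(pos):
    """Offline geomagnetic intensity at a 2-D position (synthetic stand-in for the fingerprint map)."""
    x, y = pos
    return 45.0 + 0.5 * x - 0.3 * y

def likelihood(z, pos, sigma=1.0):
    """Similarity between an observation z and the fingerprint at pos (Gaussian-kernel style)."""
    d = z - fingerprint(pos)
    return math.exp(-d * d / (sigma * sigma))

def propagate(p, step, heading, d1=0.6, d2=0.3):
    """One-step motion model with anisotropic Gaussian noise (step 4-2)."""
    along = step + random.gauss(0.0, d1)   # noise along the walking direction
    cross = random.gauss(0.0, d2)          # noise across the walking direction
    return (p[0] + along * math.sin(heading) + cross * math.cos(heading),
            p[1] + along * math.cos(heading) - cross * math.sin(heading))

# 4-1: initialize particles uniformly over a 6 m x 6 m toy area
n = 500
particles = [(random.uniform(0, 6), random.uniform(0, 6)) for _ in range(n)]
weights = [1.0 / n] * n

# one user step (4-2), then weight update (4-3) against a simulated observation
true_pos = (3.0, 4.6)
z = fingerprint(true_pos)
particles = [propagate(p, 0.6, 0.0) for p in particles]
weights = [w * likelihood(z, p) for w, p in zip(weights, particles)]
total = sum(weights)
weights = [w / total for w in weights]

# 4-4: weighted mean of the top-40% particles is the position estimate
top = sorted(zip(weights, particles), reverse=True)[: int(0.4 * n)]
wsum = sum(w for w, _ in top)
est = (sum(w * p[0] for w, p in top) / wsum,
       sum(w * p[1] for w, p in top) / wsum)

# 4-5: resample in proportion to weight, then reset all weights to 1/n
particles = random.choices(particles, weights=weights, k=n)
weights = [1.0 / n] * n
```

In a full implementation this cycle repeats on every detected step (4-6), with the scene-image term multiplied into the weight whenever an image is captured.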
The advantages and beneficial results of the method are:
1. No infrastructure needs to be built; only the smartphones people already carry are required. The geomagnetism and scene images can be measured by the smartphone's built-in sensors.
2. The invention provides an indoor positioning method that fuses geomagnetism and scene images, exploiting the respective advantages of each to improve indoor positioning accuracy. Geomagnetic values have low resolution: different positions can yield very similar values, so geomagnetism alone does not discriminate well between positions. A scene image, by contrast, identifies a specific scene well, but after the scene is recognized it is not accurate enough to pin down the exact position. Therefore, the scene image is used to recognize the scene and the approximate position, restricting the map area to a certain range and eliminating the influence of geomagnetism outside that range on position recognition; within the recognized scene, geomagnetism then further refines the position.
3. The particle filter algorithm is adapted to improve indoor positioning accuracy under the combination of geomagnetism and scene images.
Drawings
FIG. 1 is the overall indoor positioning framework of the invention, including the step-length estimation model, the measurement model, the adaptive particle filter model and their corresponding designs.
FIG. 2 is the network structure used in the measurement model to extract deep scene position features with a convolutional neural network.
FIG. 3 is the flow diagram of the adaptive particle filter model of the invention.
Detailed Description
The present invention will be described in detail with reference to specific embodiments.
The invention provides a method for indoor positioning based on geomagnetism and scene images; the overall framework is shown in FIG. 1. Specifically, the method proceeds according to the following steps.
Step 1. Offline collection of indoor map fingerprint information
One fingerprint entry associates a set of measurements with their corresponding location. To collect fingerprints, the indoor map is divided into 60 cm × 60 cm grids; each grid is treated as a unique location and assigned a location tag. An indoor map usually contains many roads; each road is divided into its two main directions, and geomagnetic and scene-image information is acquired separately in each direction. The measured geomagnetic value is a four-dimensional vector (m_x, m_y, m_z, M), where (m_x, m_y, m_z) are the magnetometer readings along the phone's three coordinate axes and M, the geomagnetic intensity, is the second-order norm of (m_x, m_y, m_z). For each location grid, the mobile phone randomly collects 10 measurements in each main direction, and these are put into the offline fingerprint database as the measurement information and location tag of that grid.
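The grid-and-fingerprint bookkeeping described above can be sketched as follows. This is a minimal illustration; the function names, the dictionary layout, and the sample readings are assumptions for exposition, not from the patent.

```python
import math

GRID = 0.6  # grid cell size in metres (60 cm x 60 cm)

def grid_label(x, y):
    """Map a position in metres to its grid-cell label."""
    return (int(x // GRID), int(y // GRID))

def geomagnetic_sample(mx, my, mz):
    """Build the four-dimensional geomagnetic measurement (mx, my, mz, M),
    where M is the second-order norm of (mx, my, mz)."""
    M = math.sqrt(mx * mx + my * my + mz * mz)
    return (mx, my, mz, M)

# fingerprint database: grid label -> list of collected samples
fingerprints = {}

label = grid_label(1.25, 0.9)                      # position 1.25 m east, 0.9 m north
sample = geomagnetic_sample(20.0, -5.0, 40.0)      # illustrative magnetometer reading
fingerprints.setdefault(label, []).append(sample)
```

In practice each grid would hold 10 such samples per main road direction, plus the scene-image features for that cell.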
Step 2. Step-length estimation model: estimate the user's step length and walking direction.
2-1. The pedestrian's walking direction and gait are detected with the smartphone's acceleration and angular-velocity sensors, and the step length is set to 60 cm in the user's initial condition. Let the user's location coordinate be (E_k, N_k), the step length L_k and the direction angle θ_k; the location coordinate at the next moment, (E_{k+1}, N_{k+1}), is given by the pedestrian positioning formula:

    E_{k+1} = E_k + L_k · sin θ_k
    N_{k+1} = N_k + L_k · cos θ_k        (1)

where L_k is the walking step length at time k and θ_k is the direction angle at time k.
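Formula (1) is the standard pedestrian dead-reckoning update; a minimal sketch follows. The convention that θ is measured clockwise from north is an assumption consistent with the sin/cos placement above.

```python
import math

def pdr_step(E, N, L, theta):
    """One dead-reckoning update (Eq. 1): advance (E, N) by step length L
    along direction angle theta (radians, clockwise from north)."""
    return E + L * math.sin(theta), N + L * math.cos(theta)

# one 0.6 m step due east (theta = 90 degrees)
E1, N1 = pdr_step(0.0, 0.0, 0.6, math.pi / 2)
```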
2-2. User walking-trajectory record: let (P_{t-k}, P_{t-k+1}, ..., P_{t-1}) record the latest user walking trajectory, where P_i is the estimated optimal user position at time i, and ||P_i − P_{i−1}|| is the Euclidean distance between P_i and P_{i−1}. If ||P_i − P_{i−1}|| ≤ δ for all t−k+1 ≤ i ≤ t−1, the user trajectory is continuous, where δ is the user's maximum walking step length, here set to 2 m.
2-3. For the step-length and direction estimate at the next moment: if the user trajectory is continuous, the step length γ is defined as a weighted average of the recent step lengths ||P_i − P_{i−1}||:

    γ = Σ_{i=t−k+1}^{t−1} w_i · ||P_i − P_{i−1}||        (2)

where the weights w_i are normalized to sum to 1 and increase with i, so that the newer the step, the higher its weight.
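Formula (2) can be sketched as below. The patent's exact weighting scheme is in an unrendered figure; the linear recency weights used here are an assumption that merely satisfies the stated property (newer steps weighted higher, weights summing to 1).

```python
import math

def step_length(track):
    """Recency-weighted average step length (Eq. 2 style) over the recent
    trajectory track = [P_{t-k}, ..., P_{t-1}].
    Linear recency weights are an illustrative assumption."""
    steps = [math.dist(a, b) for a, b in zip(track, track[1:])]
    raw = [i + 1 for i in range(len(steps))]   # newer step -> larger raw weight
    total = sum(raw)
    return sum((w / total) * s for w, s in zip(raw, steps))

# two recorded steps of 0.5 m and 0.7 m; the newer 0.7 m step dominates
gamma = step_length([(0, 0), (0, 0.5), (0, 1.2)])
```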
Step 3. Measurement model: extract deep features from the scene image with a convolutional neural network, and define the similarity between deep features.
3-1. Deep features of the measured scene image are extracted with a convolutional neural network whose structure is shown in FIG. 2. The collected scene-image data set is labelled with position tags, and some unrelated scene images are added for training. As shown, after the scene image passes through a series of convolutional and pooling layers, its output passes through 3 fully-connected layers and finally through a softmax layer for position estimation. The output of the fully-connected layer immediately before the softmax layer is called the deep scene-image position vector and is denoted V.
3-2. Having obtained the deep features of a scene image, the similarity used to compare deep position vectors is defined next. Let A and B be scene images at certain positions and a, b their extracted deep position vectors; the similarity between A and B is defined as:

    S(A, B) = exp(−||a − b||² / σ²)        (3)

where the parameter σ adjusts the influence of the distance between the two deep position vectors.
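A sketch of this similarity follows. The Gaussian-kernel form matches the reconstruction of Eq. (3) above; since the original formula is an unrendered image, treat the exact form (and the default σ) as assumptions.

```python
import math

def feature_similarity(a, b, sigma=1.0):
    """Similarity between two deep position vectors (Eq. 3 style):
    a Gaussian kernel on their Euclidean distance."""
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-d2 / (sigma * sigma))

s_same = feature_similarity([1.0, 2.0], [1.0, 2.0])  # identical vectors
s_far = feature_similarity([1.0, 2.0], [4.0, 6.0])   # distant vectors
```

Larger σ flattens the kernel, so distant vectors are penalized less; smaller σ makes the match stricter.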
Step 4. Track the position trajectory with the improved particle filter algorithm; the adaptive particle filter framework is shown in FIG. 3.
4-1. Particle initialization: let Z_{s,0} and Z_{g,0} denote the scene-image and geomagnetic observations at time 0. Particle initialization at the initial time estimates, from Z_{s,0} and Z_{g,0}, the probability distribution p(x_0 | Z_{s,0}, Z_{g,0}) over all possible user positions x_0 at the initial time. However, p(x_0 | Z_{s,0}, Z_{g,0}) is unknown and cannot be sampled directly, so sampling at the initial time is based on the measurement probability p(Z_{s,0}, Z_{g,0} | x_0), i.e. the probability distribution of the time-0 observations under an assumed initial position. The particle filter algorithm performs particle initialization according to this measurement probability: an initial position x_0 with a high measurement probability is assigned a higher weight. Recall from the offline acquisition stage that the indoor map is divided into grids, each grid is treated as a unique position, and its geomagnetic and scene information is recorded. To compute the measurement probability p(Z_{s,0}, Z_{g,0} | x_0^{(i)}), the trained convolutional neural network first identifies the scene from the scene image at the initial moment; the initial particles x_0^{(i)} are then distributed within that scene, and each particle's weight is computed as the average similarity between the offline measurements of the grid containing x_0^{(i)} and the observations Z_{s,0}, Z_{g,0}.
4-2. Particle propagation with the step-length estimation model: after each user step, the particle filter algorithm predicts the probability distribution of the user's new potential position from the position and weight of each particle; this is called particle propagation. Particle propagation is the process taking each old particle x_{t−1}^{(i)} to a new particle x_t^{(i)}. The newest position is predicted as x_{t−1}^{(i)} + u, where the step length ||u|| is computed by formula (2) of step 2 and û is the step direction. Each predicted particle position x_t^{(i)} then follows a two-dimensional Gaussian distribution:

    x_t^{(i)} ~ N(x_{t−1}^{(i)} + u, V)

On the map, the coordinate x-axis is set to the user's main walking direction, so that V is the diagonal matrix V = diag(δ_1², δ_2²). δ_1 controls the position change along the main walking direction and is set to 60 cm; δ_2 controls the position change perpendicular to the main walking direction and is set to 30 cm.
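The anisotropic propagation step can be sketched as follows: noise is drawn in the walking frame (δ_1 along-track, δ_2 cross-track) and rotated into map coordinates. The rotation convention (heading clockwise from north) is an assumption.

```python
import math
import random

random.seed(1)

def propagate(p, step, heading, d1=0.60, d2=0.30):
    """Propagate one particle (step 4-2): move by the predicted step u along
    heading, perturbed by a 2-D Gaussian with covariance diag(d1^2, d2^2)
    expressed in the walking frame."""
    along = step + random.gauss(0.0, d1)   # along-track component
    cross = random.gauss(0.0, d2)          # cross-track component
    dx = along * math.sin(heading) + cross * math.cos(heading)
    dy = along * math.cos(heading) - cross * math.sin(heading)
    return (p[0] + dx, p[1] + dy)

# propagate 1000 particles one 0.6 m step due north (heading = 0)
new = [propagate((0.0, 0.0), 0.6, 0.0) for _ in range(1000)]
mean_y = sum(q[1] for q in new) / len(new)   # expected to be near 0.6
```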
4-3. Particle weight update: after the particles are propagated, the weight of each particle must be updated. The weights are updated as:

    w_t^{(i)} ∝ w_{t−1}^{(i)} · p(Z_{g,t} | x_t^{(i)})                            (no scene image at time t)
or
    w_t^{(i)} ∝ w_{t−1}^{(i)} · p(Z_{g,t} | x_t^{(i)}) · p(Z_{s,t} | x_t^{(i)})   (scene image captured at time t)

where the probability p(Z_{g,t} | x_t^{(i)}) denotes the similarity between the offline geomagnetic measurement of the grid corresponding to position x_t^{(i)} at time t and the observation Z_{g,t}; the similarity is computed by equation (3) of step 3. As for p(Z_{s,t} | x_t^{(i)}): a pedestrian cannot capture a scene image at every step while walking, so when no scene image is captured only the geomagnetic weight update is computed. When a scene image is captured, p(Z_{s,t} | x_t^{(i)}) denotes the similarity between the offline scene measurement of the grid corresponding to x_t^{(i)} and Z_{s,t}, computed in the same way as in the particle initialization phase.
4-4. Position estimation: after the weights of all particles are updated, they are normalized. The top 40% of particles x_t^{(i)} by weight are taken and their positions are weight-averaged; the averaged position is the optimal position estimate.
4-5. Particle resampling: in resampling, particles with high weight are more likely to be sampled again; the goal of resampling is to eliminate erroneous particles, i.e. particles whose weight is close to 0. After resampling, the weight of each particle is set to 1/n, where n is the number of particles.
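Steps 4-4 and 4-5 can be sketched as below; the helper names are illustrative, and multinomial resampling via `random.choices` is one common choice among several (systematic or stratified resampling would also fit the description).

```python
import random

random.seed(2)

def estimate(particles, weights, frac=0.4):
    """Position estimate (step 4-4): weight-averaged position of the
    top `frac` fraction of particles ranked by weight."""
    ranked = sorted(zip(weights, particles), reverse=True)
    top = ranked[: max(1, int(frac * len(ranked)))]
    wsum = sum(w for w, _ in top)
    return tuple(sum(w * p[i] for w, p in top) / wsum for i in range(2))

def resample(particles, weights):
    """Resampling (step 4-5): draw particles in proportion to weight,
    then reset every weight to 1/n."""
    n = len(particles)
    new = random.choices(particles, weights=weights, k=n)
    return new, [1.0 / n] * n

parts = [(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)]
wts = [0.45, 0.45, 0.10]
est = estimate(parts, wts)            # dominated by the heavy particles
parts2, wts2 = resample(parts, wts)
```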

Claims (1)

1. An indoor positioning method based on mobile phone geomagnetism and scene images, characterized by comprising the following steps:
step 1, offline collection of indoor map fingerprint information;
step 2, a step-length estimation model: estimating the user's step length and walking direction;
step 3, a measurement model: extracting deep features from the scene image with a convolutional neural network;
step 4, tracking the position trajectory with an adaptive particle filter algorithm;
the step 1 is specifically realized as follows:
1-1, dividing the indoor map into 60 cm × 60 cm grids, setting each grid as a unique position and assigning a position label;
1-2, collecting geomagnetic and scene-image information for each grid along the main road directions;
1-3, storing the collected measurements together with their position labels;
the steps 1-2 are as follows:
dividing each road into two directions, and respectively acquiring geomagnetic and scene image information; wherein the measured geomagnetic value is a four-dimensional vector (m)x,my,mzM), wherein (M)x,my,mz) Measuring the test values of the mobile phone under three coordinate systems of geomagnetism, wherein M represents the intensity of the geomagnetism to be (M)x,my,mz) A second order norm of; for each position grid, the mobile phone randomly collects 10 times of data at the main corresponding grid position, and the data is taken as the measurement information and the position label of the grid position and put into the off-line fingerprint;
the step 2 is realized as follows:
2-1, detecting the walking direction and gait of the pedestrian by using a smart phone sensor, and setting the step length to be 60cm under the initial condition of a user; the location coordinate for the user is (E)k,Nk) Step length of user is LkAngle of direction thetakThe next time positioning coordinate is (E)k+1,Nk+1) The calculation formula of pedestrian positioning is as follows:
Figure FDA0003242170850000021
wherein L iskStep size of walking at time k, θkIs the azimuth angle at time k;
2-2, recording the walking track of the user: is provided (P)t-k,Pt-k+1,...,Pt-1) Record the latest user walking track, wherein PiRepresenting the estimated optimal user position at the ith moment; in addition | | Pi-Pi-1I represents PiAnd Pi-1The Euclidean distance between; and if i.ltoreq.t-1 at t-k +1 ≦ t-1, | | | Pi-Pi-1If the | is less than or equal to delta, the user track is continuous, wherein the delta represents the maximum walking step length of the user and is set to be 2 m;
2-3, estimating the user step length and the direction at the next moment: if the user trajectory is continuous, the step size gamma is defined as PiAnd Pi-1The formula is defined as:
Figure FDA0003242170850000022
wherein,
Figure FDA0003242170850000023
wherein with respect to the weight wiRepresents the latest step size Pi-Pi-1The weight of | is higher as the step length is newer;
the step 3 comprises the following steps:
3-1, training the scene image by using a convolutional neural network according to the scene and the position, and taking the previous layer of the softmax layer as the deep position information of the scene image;
for the deep layer position vector of the measured scene image, extracting by adopting a convolutional neural network, specifically: utilizing the collected scene image data set and calibrating the position label thereof, and then adding some irrelevant scene image data sets for training; after a scene image passes through a series of convolution layers and pooling layers, the output of the scene image continuously passes through 3 full-connection layers, and finally position estimation is carried out through a softmax layer; for the previous fully-connected layer of the softmax layer, the vector is called a deep layer position vector, and the vector is defined as V;
3-2, extracting deep information by using the scene image acquired on line through a trained convolutional neural network, and comparing the deep information with the similarity of map information stored in advance;
defining similarity comparison after extracting deep layer position vectors of the scene images; assuming A, B is a scene image at a certain position, and a and b are corresponding extracted deep layer position vectors, the degree of acquaintance between A, B is defined as:
Figure FDA0003242170850000031
wherein the parameter sigma is used for adjusting the influence of the distance between two deep-layer position vectors;
the step 4 is realized as follows:
4-1, initializing particles by using geomagnetism and a scene image at an initial time:
the scene image at time 0 and the observation value of geomagnetism are set to Zs,0,Zg,0(ii) a So that the particle initialization at the initial time is according to Zs,0,Zg,0To estimate all possible positions of the user at the initial time
Figure FDA0003242170850000032
Probability distribution of (2) using
Figure FDA0003242170850000033
Represents; the sampling at the initial moment being based on the probability distribution of the measured values
Figure FDA0003242170850000034
That is, the probability distribution of the observed value at time 0 under the condition of assuming the initial time position is used
Figure FDA0003242170850000035
Figure FDA0003242170850000036
Represents; for the
Figure FDA0003242170850000037
The calculation of (2): firstly, a scene in which the user is located is calculated by using a trained convolutional neural network according to a scene image at an initial moment; then, initial particles are distributed according to the scene
Figure FDA0003242170850000038
And according to
Figure FDA0003242170850000039
Off-line measurement calculation and Z of the grid in which the grid is locateds,0,Zg,0The degree of similarity between the two images,
Figure FDA00032421708500000310
is the average measured similarity;
4-2, propagating the particles with the step-length estimation model: after each user step, the particle filter algorithm updates the position and weight of each particle to predict the probability distribution of the user's new potential position, which is called particle propagation; particle propagation is the process taking each old particle x_{t−1}^{(i)} to a new particle x_t^{(i)}; the newest position is predicted as x_{t−1}^{(i)} + u, where the step length ||u|| is computed by formula (2) of step 2 and û is the step direction; each predicted particle position x_t^{(i)} follows a two-dimensional Gaussian distribution:

    x_t^{(i)} ~ N(x_{t−1}^{(i)} + u, V)

on the map, the coordinate x-axis is set to the user's main walking direction, so that V is the diagonal matrix V = diag(δ_1², δ_2²); δ_1 controls the position change along the main walking direction and is set to 60 cm; δ_2 controls the position change perpendicular to the main walking direction and is set to 30 cm;
4-3, updating the particle weights: after the particles are propagated, the weight of each particle must be updated; the weight update formula is:

    w_t^{(i)} ∝ w_{t−1}^{(i)} · p(Z_{g,t} | x_t^{(i)})                            (no scene image at time t)
    w_t^{(i)} ∝ w_{t−1}^{(i)} · p(Z_{g,t} | x_t^{(i)}) · p(Z_{s,t} | x_t^{(i)})   (scene image captured at time t)

in which the probability p(Z_{g,t} | x_t^{(i)}) denotes the similarity, computed by equation (3) of step 3, between the offline geomagnetic measurement of the grid corresponding to position x_t^{(i)} at time t and Z_{g,t}; as for p(Z_{s,t} | x_t^{(i)}): a pedestrian cannot capture a scene image at every step while walking, so only the geomagnetic weight update is computed when no scene image is captured; when a scene image is captured, p(Z_{s,t} | x_t^{(i)}) denotes the similarity between the offline scene measurement of the grid corresponding to x_t^{(i)} and Z_{s,t}, computed as in the particle initialization phase;
4-4, position estimation: after the weights of all particles are updated, they are normalized; the top 40% of particles x_t^{(i)} by weight are taken and their positions are weight-averaged; the averaged position is the optimal position estimate;
4-5, particle resampling: in the resampling process, particles with high weight have a higher probability of being sampled again; the aim of resampling is to eliminate erroneous particles, i.e. particles whose weight is close to 0; the weight of each particle after resampling is set to 1/n, where n is the number of particles.
CN201910629743.8A 2019-07-12 2019-07-12 Indoor positioning method based on mobile phone geomagnetism and scene image Active CN110388926B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910629743.8A CN110388926B (en) 2019-07-12 2019-07-12 Indoor positioning method based on mobile phone geomagnetism and scene image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910629743.8A CN110388926B (en) 2019-07-12 2019-07-12 Indoor positioning method based on mobile phone geomagnetism and scene image

Publications (2)

Publication Number Publication Date
CN110388926A (en) 2019-10-29
CN110388926B (en) 2021-10-29

Family

ID=68286531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910629743.8A Active CN110388926B (en) 2019-07-12 2019-07-12 Indoor positioning method based on mobile phone geomagnetism and scene image

Country Status (1)

Country Link
CN (1) CN110388926B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111256684A (en) * 2020-01-18 2020-06-09 杭州电子科技大学 Geomagnetic indoor positioning method based on multilayer gate control circulation unit network
CN111964667B (en) * 2020-07-03 2022-05-20 杭州电子科技大学 geomagnetic-INS (inertial navigation System) integrated navigation method based on particle filter algorithm
CN112146660B (en) * 2020-09-25 2022-05-03 电子科技大学 Indoor map positioning method based on dynamic word vector
CN113008226B (en) * 2021-02-09 2022-04-01 杭州电子科技大学 Geomagnetic indoor positioning method based on gated cyclic neural network and particle filtering
CN113873442B (en) * 2021-09-08 2023-08-04 宁波大榭招商国际码头有限公司 Positioning method for external collection card
CN118075873B (en) * 2024-04-19 2024-06-21 浙江口碑网络技术有限公司 Positioning method and data processing method based on wireless network data

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109298389A (en) * 2018-08-29 2019-02-01 东南大学 Indoor pedestrian based on multiparticle group optimization combines position and orientation estimation method
CN109883428A (en) * 2019-03-27 2019-06-14 成都电科慧安科技有限公司 A kind of high-precision locating method merging inertial navigation, earth magnetism and WiFi information
CN109917404A (en) * 2019-02-01 2019-06-21 中山大学 A kind of indoor positioning environmental characteristic point extracting method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105424030B (en) * 2015-11-24 2018-11-09 东南大学 Fusion navigation device and method based on wireless fingerprint and MEMS sensor


Also Published As

Publication number Publication date
CN110388926A (en) 2019-10-29

Similar Documents

Publication Publication Date Title
CN110388926B (en) Indoor positioning method based on mobile phone geomagnetism and scene image
CN110602647B (en) Indoor fusion positioning method based on extended Kalman filtering and particle filtering
CN106714110B (en) Wi-Fi position fingerprint map automatic construction method and system
CN108632761B (en) Indoor positioning method based on particle filter algorithm
Hilsenbeck et al. Graph-based data fusion of pedometer and WiFi measurements for mobile indoor positioning
CN106412839B (en) Based on secondary partition and the matched indoor positioning of fingerprint gradient and tracking
Hähnel et al. Gaussian processes for signal strength-based location estimation
US7929730B2 (en) Method and system for object detection and tracking
CN109597864B (en) Method and system for real-time positioning and map construction of ellipsoid boundary Kalman filtering
CN107339992B (en) Indoor positioning and landmark semantic identification method based on behaviors
CN110501010A (en) Determine position of the mobile device in geographic area
CN108521627B (en) Indoor positioning system and method based on WIFI and geomagnetic fusion of HMM
CN104215238A (en) Indoor positioning method of intelligent mobile phone
CN111060099B (en) Real-time positioning method for unmanned automobile
CN111964667B (en) geomagnetic-INS (inertial navigation System) integrated navigation method based on particle filter algorithm
CN110631588B (en) Unmanned aerial vehicle visual navigation positioning method based on RBF network
CN108446710A (en) Indoor plane figure fast reconstructing method and reconstructing system
CN108629295A (en) Corner terrestrial reference identification model training method, the recognition methods of corner terrestrial reference and device
CN108362289A (en) A kind of mobile intelligent terminal PDR localization methods based on Multi-sensor Fusion
Rungsarityotin et al. Finding location using omnidirectional video on a wearable computing platform
CN116448111A (en) Pedestrian indoor navigation method, device and medium based on multi-source information fusion
Wei et al. MM-Loc: Cross-sensor indoor smartphone location tracking using multimodal deep neural networks
CN114636422A (en) Positioning and navigation method for information machine room scene
Lee et al. An adaptive sensor fusion framework for pedestrian indoor navigation in dynamic environments
Spassov et al. Map-matching for pedestrians via bayesian inference

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant