Article

Improving the Accuracy of Direct Geo-referencing of Smartphone-Based Mobile Mapping Systems Using Relative Orientation and Scene Geometric Constraints

Geomatics Engineering Department, University of Calgary, Calgary, AB T2N 1N4, Canada
* Author to whom correspondence should be addressed.
Sensors 2017, 17(10), 2237; https://doi.org/10.3390/s17102237
Submission received: 12 July 2017 / Revised: 18 September 2017 / Accepted: 26 September 2017 / Published: 30 September 2017
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)

Abstract

This paper introduces a new method that facilitates the use of smartphones as handheld low-cost mobile mapping systems (MMS). Smartphones are becoming increasingly sophisticated and are quickly closing the gap between computers and portable tablet devices. The current generation of smartphones is equipped with low-cost GPS receivers, high-resolution digital cameras, and micro-electro-mechanical systems (MEMS)-based navigation sensors (e.g., accelerometers, gyroscopes, magnetic compasses, and barometers). These sensors are in fact the essential components of a MMS. However, smartphone navigation sensors suffer from the poor accuracy of Global Navigation Satellite System (GNSS) positioning, accumulated drift, and high signal noise. These issues affect the accuracy of the initial Exterior Orientation Parameters (EOPs) that are input into the bundle adjustment algorithm, which then produces inaccurate 3D mapping solutions. This paper proposes new methodologies for increasing the accuracy of direct geo-referencing of smartphones using relative orientation and smartphone motion sensor measurements, as well as integrating geometric scene constraints into a free network bundle adjustment. The new methodologies fuse the relative orientations of the captured images with their corresponding motion sensor measurements to improve the initial EOPs. Then, the geometric features (e.g., horizontal and vertical lines) visible in each image are extracted and used as constraints in the bundle adjustment procedure, which corrects the relative position and orientation of the 3D mapping solution.

1. Introduction

Over the past two decades, mobile mapping systems (MMS) have been a vital source of directly geo-referenced data, which can be used for a variety of applications (e.g., mapping, 3D modeling, highway inventory, engineering projects, and Geographic Information System (GIS) data updates). Although land-based MMS are one of the main sources for acquiring directly geo-referenced data, the current MMS have many drawbacks (i.e., their large size and complexity, as well as their high cost due to the use of expensive Inertial Measurement Units (IMUs) and GNSS receivers) that have restricted their widespread adoption in the survey and mapping industries. Consequently, the market for land-based MMS is small, and the existing MMS are typically operated by the companies or institutions that built them, which unfortunately means that their more efficient data collection is not available for wider use [1]. The research trend is now toward more cost-effective, less complex, and time-efficient MMS. The accuracy required of directly geo-referenced data depends on the intended application; for example, inventory applications require one- to two-meter accuracy. This paper specifically focuses on the development of a low-cost MMS based on smartphone technology.
The proposed system overcomes the drawbacks of the current MMS (i.e., large size, complexity, and high cost) that have restricted their widespread adoption in disciplines that demand meter-level accuracies (e.g., documentation, inventory, surveying, and mapping). The development of such a system will satisfy the demand for a MMS that can compete both in cost and in user-friendliness with current terrestrial photogrammetry. The proposed system does not attempt to replace existing MMS; rather, it offers a new low-cost alternative for applications requiring one- to five-meter accuracy. The GPS receivers used in most current smartphones (e.g., iPhone, Samsung, and HTC) have poor positioning accuracy. Furthermore, the MEMS sensors, especially the gyroscopes, accumulate position drift over a short time because of their high noise levels. Magnetometers can also be easily disturbed by the presence of metallic objects in their vicinity. Although these sensors offer the ability to acquire direct geo-referencing data, their low-grade sensor measurements can lead to inaccurate exterior orientation parameters (EOPs), which, in turn, decrease the mapping accuracy of the system. These erroneous EOPs must be corrected before calculating the final 3D mapping coordinates of the points of interest. Therefore, a relative orientation approach is introduced in this paper to refine the initial EOPs. Then, geometric features (e.g., straight vertical and horizontal linear features) are extracted, matched, and used to impose constraints on the object space calculation and adjustment inside the bundle adjustment model. The coplanarity constraint [2] is a well-known method for relative orientation estimation through an iterative process. However, the coplanarity constraint solved by least squares adjustment requires good-quality approximations of the unknown relative EOPs because the model is nonlinear and must be solved iteratively [3]. To overcome this, several past studies introduced closed-form solutions, such as the eight-point [4,5] and five-point [6] algorithms. Similarly, the Structure from Motion (SfM) algorithm was originally developed by the computer vision community for solving the 3D reconstruction problem using these closed-form solutions. SfM is now commonly used in photogrammetry for the automatic computation of initial relative EOPs [3,7,8].

2. Related Works

The process of integrating geometric constraints into bundle adjustment has generated a lot of interest within the photogrammetric community. McGlone [9,10] incorporated geometric constraints into bundle adjustment to improve the accuracy and precision of a detailed site model generated from multiple oblique airborne images. The author used a coplanarity condition that could involve any number of object space points, which was used to fix line or plane parameters that in turn constrained the bundle adjustment. The effectiveness of this method was demonstrated in an experiment using airborne images of model-board buildings.

Geometric constraints are also typically used in the camera calibration process. Habib et al. [11] integrated geometric constraints into a bundle adjustment for self-calibration using straight lines and coplanarity conditions. The idea is based on the fact that, in the absence of camera distortions, the perspective projection of a straight line in object space must yield a straight line in image space. More specifically, for stereo-pair imagery, three vectors must satisfy the coplanarity condition: the first vector connects the perspective center to the first point along an object space straight line; the second vector connects the perspective center to the second point along the same line; and the third vector connects the perspective center to any intermediate point along the image space line. Gerke [12] evaluated the use of geometric constraints to reduce the number of ground control points (GCPs) needed for indirect sensor orientation, whereby the geometric scene features (horizontal, vertical, and right-angle) visible in overlapping imagery were integrated into the bundle adjustment procedure along with some GCPs for the recovery of the EOPs and Interior Orientation Parameters (IOPs). The author focused mainly on performing multi-camera self-calibration by comparing the presence and absence of certain distortion parameters in different scenarios, as well as on constraining the indirect orientation. This method was evaluated using two different airborne datasets, one acquired with a Pictometry system and the other from a UAV equipped with a consumer digital camera. The author demonstrated the suitability of incorporating geometric constraints for reducing the need for well-distributed GCPs.

Geometric constraints have also been used to improve poor network geometry. For example, Zhang et al. [13] included planarity constraints and constraints on highly correlated EOPs in a bundle adjustment to overcome the weak geometric connection of an image network and generate a precise ortho-image of the Dunhuang wall painting. The wall is a near-planar surface and the forward overlap between the network images was less than 60%, which produced a strong correlation between the EOPs, led to a singular normal matrix, and increased the error propagation of the adjustment model. Therefore, the authors used the planarity constraints of the painting to control the error propagation by improving the geometric connection. The results of their experiment confirmed the effectiveness of these constraints for improving the stability and accuracy of the adjustment model. Likewise, geometric constraints have been used to improve the overall accuracy of direct geo-referencing.
El-Sheimy [14] used known geometric constraints, such as straight lines, to place additional constraints on the calibration of a land-based MMS. All the studies above show the benefits of integrating geometric constraints for different photogrammetric applications; however, no studies to date have used these constraints to improve direct geo-referencing based on low-cost motion and navigation sensors.
Current smartphones integrate on one platform low-cost GPS receivers, barometers, cameras, IMUs, and magnetometers, which are the ideal MMS components and have the key advantages of low cost, small size, and easy availability. A limited number of studies in the literature have investigated the use of smartphones for mapping applications. Al-Hamad and El-Sheimy [15,16,17] developed an innovative workflow for using smartphones as a low-cost MMS, whereby the relative accuracy of the captured images’ EOPs was improved using a vision-based epipolar geometry technique. The epipolar line, along with automatically matched points, was used as a constraint to enhance the relative position and orientation of each captured image with respect to the first captured image. Although this work successfully illustrated that “Mobile Mapping Using Smartphones” is a potentially promising low-cost solution, the accuracy of the entire solution is governed by the accuracy of the first image’s EOPs, which are not usually accurate; a more robust method is therefore needed. Alsubaie and El-Sheimy [18] introduced the potential of generating a directly geo-referenced image-based 3D point cloud using smartphones. That study demonstrated the suitability of incorporating geometric constraints to reduce the need for the conventional well-distributed GCPs and to improve direct geo-referencing using the initial EOPs taken directly from smartphones. However, the positioning error of the GPS chipset embedded in smartphones can exceed 10 m in multipath conditions. Moreover, only a few recent releases of Android smartphones (e.g., Nexus 9) allow access to the raw GPS measurements [19], while most Android and iOS smartphones do not provide raw measurements. This limitation makes it impossible to apply differential GPS methods without hardware modification [20].
Figure 1 illustrates the accuracy of the GPS chipset embedded in the iPhone 6 over a known shape. The blue trajectory represents the reference trajectory of a tennis court and the red trajectory is the GPS solution. Two tests were conducted over the same trajectory with a short time difference between them. As shown in Table 1, the total distance error in the second test was not acceptable for either navigation or mapping applications, while the total distance error in the first test was within the expected accuracy of a GPS single point positioning solution. These tests clearly show the challenges that can be faced when relying only on a smartphone’s GPS chipset for mapping applications. Therefore, the objective of this paper is to enable the use of any smartphone as a MMS by overcoming the GPS limitations and the IMU drift issues associated with most smartphones.
Furthermore, the errors of the MEMS sensors used in smartphones typically change over time (due to changing temperature) and from turn-on to turn-on of the smartphone [21,22]. Also, the magnetometer can easily be disturbed in the presence of metallic objects [21]. The new methodology introduced in this paper is intended to overcome these issues.

3. System Implementation and Data Collection

An iOS software application was developed to capture and synchronize the images with their corresponding GPS and motion sensor measurements (location and orientation) at the time of exposure. Figure 2 shows a snapshot of the developed application, which was installed on an iPhone 6 equipped with a GPS receiver, a 6-axis IMU (3-axis gyroscope and 3-axis accelerometer), a pedometer, a compass, and a barometer [23]. Furthermore, the iPhone 6 is equipped with a high-resolution 8-megapixel digital camera.

4. Methodology

As illustrated in Figure 3, the methodology begins by estimating the relative orientations (RO) w.r.t the first image using Structure from Motion (SfM), which provides an up-to-scale 3D model. The initial EOPs acquired by the smartphone sensors are used to calculate global relative rotations (GRR) between each image pair. Then, each GRR is subtracted from the corresponding rotation acquired by SfM, and the norm of each difference is used to build a symmetric rotational difference matrix. The two relative rotations whose difference is closest to zero correspond to the most accurate IMU measurements associated with two images in the network. The accuracy of these two candidate absolute rotations is further examined, and the more accurate one is then used to rotate the SfM model to the mapping coordinate frame. The absolute scale is determined from the ratio between two distances between the same pair of images: the relative distance acquired from the SfM model and the absolute distance obtained using pedestrian navigation techniques (e.g., step detection). Also, the centroid of the GPS locations associated with all images is calculated and used to translate the SfM model to the global coordinate frame. Once the initial EOPs are refined, these EOPs, along with geometric constraints, are entered into a free network bundle adjustment algorithm for reconstructing robust 3D objects. These steps are explained in detail in the following subsections.

4.1. Network Global Relative Rotation (GRR) Acquired by Smartphone Motion Sensors

The main objective of this step is to find the relative rotation between each two images based on the absolute 3D rotations that are directly acquired by the smartphone’s motion sensors. These relative rotations are then used to evaluate the accuracy of each absolute rotation when compared with the SfM rotations.
The IMU, along with the magnetometer, is used to obtain the direct 3D rotation of the smartphone at the instant the smartphone’s camera captures an image. The accelerometers in the IMU are used to obtain the pitch and roll angles, which are the rotation angles around the iPhone x-axis and y-axis, respectively, whereas the magnetometer is used to derive the heading angle of the iPhone, which is measured w.r.t the iPhone y-axis as illustrated in Figure 4.
These Euler angles are used to compute the rotation matrix $R_b^m$, which rotates the motion sensor measurements from the IMU (body) frame to the global frame, as expressed by Equation (1):
$$R_b^m = R_z(\mathrm{Azimuth} - 90^\circ) \times R_x(\mathrm{Pitch}) \times R_y(\mathrm{Roll})$$
In photogrammetry, the desired orientation is related to the camera frame. Therefore, the Euler angles derived from the IMU (inside the phone) are used to determine the photogrammetric orientation angles (i.e., omega, phi, and kappa) utilizing a boresight matrix, as shown in Equation (2), where omega, phi, and kappa are the rotation angles around the camera x-axis, y-axis, and z-axis, respectively:
$$\mathrm{boresight} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
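The following is a minimal numpy sketch of Equations (1) and (2), assuming angles are given in degrees and that the boresight matrix post-multiplies $R_b^m$ to relate the body orientation to the camera orientation (the chaining direction is an assumption, not stated explicitly in the text):

```python
import numpy as np

def rot_x(a):
    # Elementary rotation about the x-axis; a in radians
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def body_to_global(azimuth_deg, pitch_deg, roll_deg):
    """Equation (1): R_b^m = R_z(azimuth - 90) * R_x(pitch) * R_y(roll)."""
    az, p, r = np.radians([azimuth_deg - 90.0, pitch_deg, roll_deg])
    return rot_z(az) @ rot_x(p) @ rot_y(r)

# Boresight matrix of Equation (2), relating the IMU (body) axes
# to the camera axes (entries copied as shown in the text).
BORESIGHT = np.array([[0.0, 1.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 0.0, 1.0]])

def camera_to_global(azimuth_deg, pitch_deg, roll_deg):
    # Chain the boresight with R_b^m to obtain the camera orientation,
    # from which omega, phi, and kappa can be extracted.
    return body_to_global(azimuth_deg, pitch_deg, roll_deg) @ BORESIGHT
```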
These final absolute rotation angles are used to establish the Global Relative Rotation (GRR) between each two images in the network, as expressed by Equation (3). For instance, for a network consisting of three images whose absolute EOPs were acquired by the smartphone motion sensors, the RO between image 1 and image 2 can be calculated from these absolute EOPs:
$$R_j^i = (R_i^m)^T \times R_j^m, \qquad R_2^1 = (R_1^m)^T \times R_2^m$$
where $R_i^m$ is the rotation between image $i$ and the mapping frame (global frame), $R_j^m$ is the rotation between image $j$ and the mapping frame, and $R_j^i$ is the GRR between image $i$ and image $j$.
This procedure is repeated for each image pair in the network. These relative rotations are then compared to the relative rotations acquired by the SfM algorithm in order to find the most accurate absolute IMU rotation, which is finally used to rotate the SfM model to the mapping frame.
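As a sketch, Equation (3) can be applied over all pairs as follows (the dictionary-of-pairs layout is an illustrative choice, not from the paper):

```python
import numpy as np

def global_relative_rotations(abs_rots):
    """Equation (3): R_j^i = (R_i^m)^T @ R_j^m for every image pair.

    abs_rots: list of 3x3 absolute rotation matrices R_i^m obtained
    from the smartphone IMU/magnetometer, one per image.
    """
    grr = {}
    for i, R_i in enumerate(abs_rots):
        for j, R_j in enumerate(abs_rots):
            if i != j:
                grr[(i, j)] = R_i.T @ R_j
    return grr
```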

4.2. Network RO Recovery Using SfM

As mentioned earlier, closed-form solutions (e.g., the eight- and five-point algorithms used in SfM) are much faster than the traditional coplanarity condition and do not require initial approximations of the unknowns. Therefore, SfM is adopted for the initial relative EOP estimation. The SfM algorithm begins with the automatic computation of a modified coplanarity condition using overlapping image pairs with at least eight or five matched tie points, depending on prior knowledge of the IOPs. The modified coplanarity condition is based on computing the essential matrix, which is used to estimate the transformation parameters between each image pair. The initial EOPs of the network w.r.t the first image are established via successive resections that compute the position and orientation of each image in the network, whereas the initial 3D coordinates of the tie points are computed using successive intersections. These preliminary relative EOPs and 3D tie point coordinates are then refined using bundle adjustment with adaptive (non-strict) outlier rejection tolerances [3,7,24].
The main idea of this process is to compare the GRR derived from the IMU with the corresponding RO obtained from the SfM model. The GRR derived from the IMU are computed for each image pair in the network; for example, if the third image is considered the reference for the relative coordinate system based on IMU rotations (GRR), then the GRR of any image is computed w.r.t the third image. However, the RO obtained by SfM are expressed with respect to the first image in the network. Therefore, a new workflow is introduced that makes each image in the network the reference for the SfM model, one at a time, while the EOPs of the other images are computed relative to the new reference image $i$ instead of the first image. Figure 5 and Equations (4) and (5) illustrate the case where the second image is chosen as the new reference for the SfM model instead of the first image.
The workflow begins by rotating each image to the chosen reference image, as illustrated in Figure 5a and expressed by Equation (4):
$$R_i^x = (R_x^1)^T \times R_i^1, \qquad R_i^2 = (R_2^1)^T \times R_i^1$$
where image 1 is the image originally used as the reference for the network relative EOP estimation using SfM; $R_i^x$ is the relative rotation between the new reference image $x$ and the other images; $R_x^1$ is the relative rotation between the chosen new reference image (e.g., the 2nd image in Figure 5a) and the first image; and $R_i^1$ is the relative rotation between each image in the network and image 1. Then, the translation between each image and the reference image is redefined, as illustrated in Figure 5b and expressed by Equation (5):
$$t_3^1 = t_2^1 + R_2^1\, t_3^2, \qquad t_3^2 = (R_2^1)^T \times (t_3^1 - t_2^1), \qquad t_i^x = (R_x^1)^T \times (t_i^1 - t_x^1)$$
where $t_3^1$ is the translation vector between the 3rd image and the 1st image, $t_2^1$ is the translation vector between the 2nd image and the 1st image, $t_3^2$ is the translation vector between the 3rd image and the 2nd image, and $R_2^1$ is the relative rotation between the 2nd image and the 1st image.
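A minimal sketch of this re-referencing step, assuming the SfM poses are stored as lists of rotations and translations w.r.t the first image:

```python
import numpy as np

def rebase_sfm(rots_1, trans_1, x):
    """Re-reference SfM poses from image 1 to image x (Equations (4)-(5)).

    rots_1[i]: rotation R_i^1 of image i w.r.t the first image.
    trans_1[i]: translation t_i^1 of image i w.r.t the first image.
    Returns the same poses expressed w.r.t image x.
    """
    R_x1, t_x1 = rots_1[x], trans_1[x]
    rots_x = [R_x1.T @ R_i1 for R_i1 in rots_1]              # R_i^x = (R_x^1)^T R_i^1
    trans_x = [R_x1.T @ (t_i1 - t_x1) for t_i1 in trans_1]   # t_i^x = (R_x^1)^T (t_i^1 - t_x^1)
    return rots_x, trans_x
```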

4.3. Comparing Different RO Matrices

As mentioned earlier, the objective of this step is to identify the image that is directly geo-referenced with the most accurate IMU rotation angles, which can then be used to rotate the SfM model to the global frame. Therefore, once the GRR and the SfM relative rotation for each image pair in the network are obtained, the difference between each corresponding rotation is calculated using Equation (6), as illustrated in Figure 6. To represent the difference with one value instead of a 3 × 3 matrix, the norm of the difference matrix is calculated using Equation (7). These values are then used to construct an n × n matrix, where n is the number of images in the network:
$$Dif_{ij} = GRR_{ij} - R_{ij}$$
where $Dif_{ij}$ is the difference between corresponding relative rotation matrices in the network, $GRR_{ij}$ is the relative rotation acquired from the smartphone motion sensor measurements, and $R_{ij}$ is the relative rotation obtained using the SfM algorithm.
The difference matrix $Dif_{ij}$ is computed for each two images in the network, and each is represented by a single value using the matrix norm, as expressed by Equation (7):
$$N_{ij} = \lVert Dif_{ij} \rVert$$

Symmetric Rotational Difference Matrix

The main objective of this step is to determine the most accurate absolute rotation among the candidate IMU rotations, which is then used to rotate the SfM model to the global frame. Based on Equations (6) and (7), the norm values are used to construct an n × n difference matrix, where n is the number of images in the network. For example, for a network consisting of three images, the norm of the difference matrix ($N_{ij}$) between each two corresponding relative rotations is calculated using Equations (6) and (7) and placed in the cell located between these images, as shown in Table 2. Based on the off-diagonal elements in Table 2, the two corresponding relative rotations whose difference value is closest to zero are selected as candidates for rotating the SfM model.
However, one of the two candidate rotations must be chosen to rotate the SfM model. Consider the case where image 3 and image 2 are associated with the most accurate IMU rotations (the off-diagonal value of 0.005 in the example in Table 2). Equation (8) is used to determine the more accurate of the two by calculating the mean of the row of matrix $DD$ corresponding to image 3 and the mean of the column corresponding to image 2. The image associated with the smaller mean error corresponds to the most accurate absolute IMU rotation in the network, and this absolute IMU rotation is then used to rotate the SfM model:
$$\theta_{row} = \mathrm{mean}(DD(:, \mathrm{image\ 2})), \qquad \theta_{column} = \mathrm{mean}(DD(\mathrm{image\ 3}, :))$$
where $\theta_{row}$ is the mean error along the column of $DD$ corresponding to image 2, and $\theta_{column}$ is the mean error along the row of $DD$ corresponding to image 3.
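A sketch of the selection step, assuming the pairwise rotations are stored as dictionaries keyed by image-index pairs (the data layout and the use of the Frobenius norm are assumptions):

```python
import numpy as np

def best_absolute_rotation(grr, sfm_rel, n):
    """Build the rotational difference matrix DD (Equations (6)-(7)) and
    pick the image whose absolute IMU rotation appears most accurate.

    grr[(i, j)], sfm_rel[(i, j)]: corresponding 3x3 relative rotations
    from the IMU and from SfM for every ordered pair of the n images.
    """
    dd = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                dd[i, j] = np.linalg.norm(grr[(i, j)] - sfm_rel[(i, j)])  # N_ij
    # The smallest off-diagonal entry flags the two candidate images.
    off_diag = ~np.eye(n, dtype=bool)
    i, j = np.unravel_index(np.argmin(np.where(off_diag, dd, np.inf)), dd.shape)
    # Equation (8): compare mean errors along the candidates' row/column.
    mean_i = dd[i, off_diag[i]].mean()
    mean_j = dd[:, j][off_diag[:, j]].mean()
    return i if mean_i < mean_j else j
```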

4.4. Transforming the SfM Relative Model to the Mapping Coordinate

The transformation of a model from one coordinate system to another requires prior knowledge of seven parameters (i.e., three translations, three rotations, and a uniform scale) [2,25]. Therefore, the process of transforming the SfM model to the global coordinate system is organized in the following order: SfM rotation, scale determination, and calculation of the centroid of the GPS solutions.

4.4.1. SfM Rotation to Global Coordinate

As described in Section 4.3, the most accurate absolute rotation derived from the smartphone IMU and magnetometer is identified and used to rotate the SfM model, which is established w.r.t the image corresponding to that most accurate absolute rotation.

4.4.2. Centroid of GPS Solutions

The GPS chipset built into most smartphones does not provide raw measurements, such as pseudorange or carrier phase. The user can only log the final position solution with its standard deviation as calculated by the iOS or Android API [20]. As a result, the user is limited to this solution, and no further improvements can be made to it. Therefore, in this method, each GPS solution used to geotag a captured image contributes to a network centroid calculated as a weighted average, where the weight is derived from the provided standard deviation of each solution.
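A minimal sketch of the weighted centroid, assuming inverse-variance weights derived from the reported standard deviations (the exact weighting scheme is not specified in the text):

```python
import numpy as np

def weighted_centroid(positions, sigmas):
    """Weighted average of the per-image GPS fixes (Section 4.4.2).

    positions: (n, 3) array of geotag coordinates, one row per image.
    sigmas: per-fix standard deviations reported by the phone API.
    """
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2   # inverse-variance weights
    p = np.asarray(positions, dtype=float)
    return (p * w[:, None]).sum(axis=0) / w.sum()
```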

4.4.3. Mapping Scale Acquisition from Smartphone

The mapping scale can be determined using the ratio between two distances, one obtained in the arbitrary SfM coordinate frame and the other calculated in the actual mapping (global) frame, as expressed by Equation (9):
$$\mathrm{Scale} = \frac{D_{SfM}}{D_{mapping}}$$
These distances can be calculated either between two points or between two camera locations; in this method, the distance between two images is used. The mapping distance can be acquired using a traditional technique, measuring the distance between two points with a measuring tape or imaging an object of known length, and the two distances are then used to derive the scale value as shown in Equation (9). Although this technique is the most accurate, it requires user intervention in the process. Therefore, pedestrian navigation techniques (e.g., step detection and step length estimation) are adapted to calculate the scale automatically. Lee et al. [26] developed a step detection algorithm that is robust against smartphone dynamics and is based on the 3D magnitude of the accelerometer measurements. The algorithm begins by filtering the acquired data using a low-pass filter and then extracting the measurements corresponding to the smartphone’s motion. The extracted motion epochs are classified into peak-valley relationships using adaptive thresholds based on the step average and step standard deviation, and an adaptive time threshold is used to correct the candidate peaks and valleys. This algorithm achieves 98.6% step detection accuracy [26].
This algorithm is therefore adopted in this method to detect the user’s steps between two successively captured images, as illustrated in Figure 7, where the user is constrained to walk in a straight line and a stride consists of two steps. The detected peaks and valleys are sorted in ascending order, and the acceleration between the first and last step is double-integrated to calculate the distance between the two images. This distance represents the denominator in Equation (9).
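A sketch of the double integration between the first and last detected steps, assuming a straight-line walk, acceleration samples already resolved along the walking direction, and zero velocity at the first step (all assumptions for illustration):

```python
import numpy as np

def distance_between_images(acc, t, first_idx, last_idx):
    """Double-integrate acceleration between two detected steps to
    approximate the walked distance between two exposures (Section 4.4.3).

    acc: linear acceleration samples (m/s^2) along the walking direction.
    t: sample timestamps (s); first_idx/last_idx bound the step interval.
    """
    a = np.asarray(acc[first_idx:last_idx + 1], dtype=float)
    ts = np.asarray(t[first_idx:last_idx + 1], dtype=float)
    dt = np.diff(ts)
    # First integration: trapezoidal acceleration -> velocity
    v = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * dt)))
    # Second integration: trapezoidal velocity -> distance
    return float(np.sum(0.5 * (v[1:] + v[:-1]) * dt))
```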
The main challenge in validating the proposed scale method over a known reference distance is that it is impossible to start and end exactly over the start and end points of the reference distance. Therefore, an image-based technique is also introduced to determine the possible shift between the reference distance endpoints and the start and end of the distance estimated using the accelerometers.
Two color-coded targets are placed on the ground with a 7.22 m distance between them; the diameter ($D_T$) of each target is 20 cm. In each validation experiment, the user is asked to capture two images, one of the first target and another of the second target, as shown in Figure 8. A morphological image classification technique is then used to extract the centroid and the image diameter ($D_i$) of each target, whereby the scale ($\lambda_i$) is determined by the ratio between the actual target diameter and the image diameter of the same target. The shift between the centroid of each target and the camera perspective center position at the time of exposure is then calculated using Equation (10):
$$\lambda_i = \frac{D_T}{D_i}, \qquad s_1 = \left(c_1 - \frac{w}{2}\right) \times \lambda_1, \qquad s_2 = \left(c_2 - \frac{w}{2}\right) \times \lambda_2$$
where $s_1$ and $s_2$ are the actual horizontal shifts from the smartphone camera perspective center to the center of the target fixed on the ground for targets one and two, respectively; $c_1$ and $c_2$ are the image space coordinates of the centroids of targets one and two, respectively, measured in pixels; and $w$ is the image format width, measured in pixels.
The two shifts are then applied to the known distance to determine the exact true distance for each validation test, as expressed by Equation (11):
$$\mathrm{True\ Distance}(i) = 7.22 + \frac{s_1 - s_2}{100}$$
Several validation tests were conducted to assess the performance of the distances measured by the accelerometers, and the results indicated approximately 7 cm of error between the known true distance and the distance estimated by the proposed method.
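The validation computation can be sketched as follows, assuming the shifts of Equation (10) come out in centimetres (consistent with the 20 cm target diameter and the factor of 100 in Equation (11)):

```python
def true_distance(c1, c2, d1_px, d2_px, w, target_diam_cm=20.0, ref_m=7.22):
    """Equations (10)-(11): correct the reference distance for the
    horizontal shift between each camera position and its ground target.

    c1, c2: image-space centroids of targets 1 and 2 (pixels).
    d1_px, d2_px: image diameters of the targets (pixels).
    w: image format width (pixels).
    """
    lam1 = target_diam_cm / d1_px        # ground cm per pixel at target 1
    lam2 = target_diam_cm / d2_px        # ground cm per pixel at target 2
    s1 = (c1 - w / 2.0) * lam1           # shift at target 1, in cm
    s2 = (c2 - w / 2.0) * lam2           # shift at target 2, in cm
    return ref_m + (s1 - s2) / 100.0     # corrected reference distance, in m
```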

4.5. Geometric Constraints in Bundle Adjustment

As mentioned previously, geometric information (i.e., vertical and horizontal linear features) visible in the captured images can be used as constraints in the bundle adjustment. These constraints are independently determined observations that can be added to the system equations in the normal matrix to ensure a more reliable and higher quality solution [18]. Therefore, vertical and horizontal line constraints are used to enhance the final bundle adjustment result, as expressed by Equations (12) and (13) [14]. These constraints are measured in the object space domain and are applied to the object space unknown parameters. As illustrated in Figure 9, the only quantity that changes between any two points located along a vertical line is the height; their East and North coordinates are identical, as expressed by Equation (12). Likewise, any two points along a horizontal line have the same height, while the East and North coordinates of each point differ, as expressed by Equation (13):
$$X_i - X_j = Y_i - Y_j = 0$$
$$Z_i - Z_j = 0$$
where $i$ and $j$ are any two points on a straight line.
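As an illustration, the constraint of Equation (12) can be written as extra rows over the stacked object point parameters (a sketch; the stacked $(X, Y, Z)$ parameter ordering is an assumption):

```python
import numpy as np

def vertical_line_constraint_rows(i, j, n_points):
    """Rows enforcing Equation (12) for a vertical line through object
    points i and j: X_i - X_j = 0 and Y_i - Y_j = 0.

    Returns a (2, 3*n_points) block over parameters ordered
    [X_0, Y_0, Z_0, X_1, Y_1, Z_1, ...].
    """
    rows = np.zeros((2, 3 * n_points))
    rows[0, 3 * i] = 1.0
    rows[0, 3 * j] = -1.0        # X_i - X_j
    rows[1, 3 * i + 1] = 1.0
    rows[1, 3 * j + 1] = -1.0    # Y_i - Y_j
    return rows
```

A horizontal line contributes one analogous row over the Z parameters (Equation (13)).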

4.6. Free Network Adjustment with Geometric Constraint

The 3D object reconstruction is conducted using the collinearity equations, which define the mathematical relationship between the image and ground coordinate systems [2,25], as expressed by Equations (14a) and (14b):
$$x_a = x_p - c\,\frac{r_{11}(X_A - X_0) + r_{12}(Y_A - Y_0) + r_{13}(Z_A - Z_0)}{r_{31}(X_A - X_0) + r_{32}(Y_A - Y_0) + r_{33}(Z_A - Z_0)} + \delta x$$
$$y_a = y_p - c\,\frac{r_{21}(X_A - X_0) + r_{22}(Y_A - Y_0) + r_{23}(Z_A - Z_0)}{r_{31}(X_A - X_0) + r_{32}(Y_A - Y_0) + r_{33}(Z_A - Z_0)} + \delta y$$
where $r_{11}$ to $r_{33}$ are the elements of the rotation matrix; $c$ is the principal distance of the camera; $x_p, y_p$ are the image coordinates of the principal point; $x_a, y_a$ are the image coordinates of the object point; $X_0, Y_0, Z_0$ are the ground coordinates of the perspective center; $X_A, Y_A, Z_A$ are the ground coordinates of the object point; and $\delta x, \delta y$ are the image coordinate correction terms.
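A compact sketch of Equations (14a) and (14b) without the correction terms, treating the rows of the rotation matrix as $r_{1\cdot}, r_{2\cdot}, r_{3\cdot}$:

```python
import numpy as np

def project(R, X0, XA, c, xp, yp):
    """Collinearity equations (14a)-(14b), correction terms omitted.

    R: 3x3 rotation matrix (rows r1, r2, r3); X0: perspective centre
    ground coordinates; XA: object point ground coordinates;
    c: principal distance; (xp, yp): principal point coordinates.
    """
    d = R @ (np.asarray(XA, dtype=float) - np.asarray(X0, dtype=float))
    xa = xp - c * d[0] / d[2]   # numerator r1.(XA - X0) over denominator r3.(XA - X0)
    ya = yp - c * d[1] / d[2]
    return xa, ya
```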
The study in this paper aimed to develop a stable, low-cost direct geo-referencing system that can be operated in any outdoor environment without the need for GCPs, whereas the bundle adjustment requires a fixed reference consisting of seven parameters, which define the network datum. The datum is traditionally defined by at least three GCPs for indirect geo-referencing, or by very accurate GPS/IMU systems for direct geo-referencing. To overcome this datum deficiency, a free bundle adjustment procedure was used. This procedure uses the inner constraint matrix ($G$) to remove the rank defect of the normal matrix [27,28,29]. The constraint fits the network onto the estimated initial ground coordinates of the tie points, as shown in Equation (15); the inner constraint matrix accounts for the seven datum parameters (three translations, three rotations, and scale):
$$G = \begin{pmatrix} 1 & 0 & 0 & 0 & Z_A & -Y_A & X_A \\ 0 & 1 & 0 & -Z_A & 0 & X_A & Y_A \\ 0 & 0 & 1 & Y_A & -X_A & 0 & Z_A \end{pmatrix}$$
where $X_A$, $Y_A$, and $Z_A$ are the initial values of the ground coordinates of a tie point. The free bundle adjustment is a nonlinear least squares technique used to calculate the EOPs, the desired 3D object point coordinates, and the IOPs, utilizing the collinearity condition and the inner constraint as expressed by Equation (16):
$$\begin{pmatrix} \delta \hat{x}_{EOPs} \\ \delta \hat{x}_{OPs} \\ K \end{pmatrix} = \begin{pmatrix} A_{EOPs}^T R^{-1} A_{EOPs} + R_{EOPs} & A_{EOPs}^T R^{-1} A_{OPs} & 0 \\ A_{OPs}^T R^{-1} A_{EOPs} & A_{OPs}^T R^{-1} A_{OPs} + R_{OPs} + A_c^T R^{-1} A_c & G \\ \mathrm{Sym.} & & 0 \end{pmatrix}^{-1} \begin{pmatrix} A_{EOPs}^T R^{-1} w + R_{EOPs}\, w_{EOPs} \\ A_{OPs}^T R^{-1} w + R_{OPs}\, w_{OPs} + A_c^T R^{-1} w_c \\ 0 \end{pmatrix}$$
where $\delta \hat{x}$ is the vector of unknown parameters, $A$ is the design matrix, $R$ is the weight matrix, and $w$ is the misclosure vector. The linear feature constraints are denoted by the subscript $c$, the subscript $EOPs$ denotes the exterior orientation parameters, the subscript $OPs$ denotes the object point parameters, and $K$ is the vector of Lagrange multipliers.
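A sketch of stacking the per-point inner-constraint blocks of Equation (15) into the full $G$ matrix (the sign convention follows the reconstruction above and should be checked against the adopted rotation parameterization):

```python
import numpy as np

def inner_constraint_matrix(points):
    """Stack the 3x7 inner-constraint block of Equation (15) for every
    tie point; columns: 3 translations, 3 rotations, 1 scale.

    points: iterable of (X, Y, Z) initial ground coordinates.
    """
    blocks = []
    for X, Y, Z in points:
        blocks.append(np.array([
            [1.0, 0.0, 0.0,  0.0,    Z,   -Y,  X],
            [0.0, 1.0, 0.0,   -Z,  0.0,    X,  Y],
            [0.0, 0.0, 1.0,    Y,   -X,  0.0,  Z],
        ]))
    return np.vstack(blocks)
```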

5. Experimental Results

To test the developed methodologies, an iPhone 6 was used to collect close-range images along with their corresponding motion sensor measurements. As shown in Figure 10a, the collected images form a network consisting of 18 images and 25 OPs. To assess the accuracy of the proposed methodologies, the reference EOPs of the images and the OPs were measured using a total station, whereby the position of each camera station was determined to within 1 cm and the rotation angles to within 30 arcseconds, based on the total station’s measurement accuracy. Furthermore, the 3D tie point positions were accurate to within 1 cm.
The experimental data were collected over a small building located in an environment that is harsh for both the GPS and the magnetometers: the building is surrounded by high buildings, which could introduce multipath into the GPS signals, and it is located close to a large parking lot, where the presence of vehicles disturbs the magnetometer measurements. Such an environment is very challenging for all MMS, especially those with low-cost sensors.
According to Fraser [30], network design is the technique used to ensure the reliability and precision of the bundle adjustment, especially in the free network adjustment approach. Network design can be classified into four orders, as stated by Grafarend [31] and Fraser [30]:
  • Zero-Order Design (ZOD), which is associated with the datum problem.
  • First-Order Design (FOD), which is associated with the optimum network configuration.
  • Second-Order Design (SOD), which is associated with the optimum number of observations and their corresponding weighting scheme.
  • Third-Order Design (TOD), which is associated with enhancing the network by adding more images, observations, and object points.
These design considerations were taken into account in the planning stage of this experiment, specifically the ZOD, FOD, and SOD, which are discussed in Section 5.3.

5.1. Initial EOPs and OPs 3D Coordinate Correction

The initial EOPs and OPs were determined using the SfM algorithm, which provided a good network shape compared to the ground data collected using the total station, as shown in Figure 10; however, these initial parameters were in an arbitrary coordinate frame. Therefore, the result needed to be scaled, rotated, and shifted to the global coordinate frame, as discussed in the proposed method.
First, the scale was determined using the distance between two successive images; then, the SfM model was rotated using the most accurate IMU data corresponding to one of the images. Finally, the GPS centroid was utilized to translate the network. The results of the transformation process and the corrected initial EOPs are illustrated in Figure 11.
Then, the well-known intersection method was applied to calculate the 3D coordinates of the desired tie points utilizing the image measurements and the corrected initial EOPs of the involved camera stations, as illustrated in Figure 12a,b.
Table 3 shows the accuracy of the corrected initial EOPs of the network using the proposed relative orientation method compared to the ground truth data acquired using the total station. Although the proposed method attempted to overcome the GPS random errors by employing the centroid of the GPS solutions, the remaining error was within the accuracy of the single point positioning (SPP) solution provided by the iOS system. In addition to the horizontal errors caused by the poor accuracy of the low-cost GPS and the uncertainty of the scale calculation, the SPP method provided height estimates that were less accurate than its horizontal estimates. Table 4 lists the absolute differences between the initial Object Points (OPs) and the absolute reference data acquired by the total station.

5.2. Free Network Adjustment with Geometric Constraint Incorporation

Although the initial EOPs were corrected using the proposed relative orientation, they still contained some errors due to the uncertainty of the automatic scale determination and the low-cost sensor errors. Therefore, the corrected initial EOPs and OPs became the input to the free network adjustment, and all the selected OPs were included in the inner constraint matrix to fix the network, as illustrated in Figure 13. Moreover, three vertical and three horizontal linear features, visible in most of the captured images and distributed in different locations around the object, were chosen to improve the final solution, as illustrated in Figure 13.
Each of these lines is defined by two points that were measured manually to ensure that the same line was extracted in the overlapping images. The solution was obtained using the free network adjustment twice: once with and once without adding the geometric constraint equations to the normal matrix. The inclusion of the linear constraints provided more accurate results, as illustrated in Table 5. Furthermore, the residuals at the final iteration were very low, which indicates good convergence of the solution, as shown in Figure 14.
Table 5 lists the absolute differences between the calculated final OPs and the absolute reference data. The incorporation of the linear feature constraints into the free network adjustment improved the absolute position in East, North, and height by 8, 9, and 2 cm, respectively. This improvement relates to the absolute position of the OPs; the result is expected, since free network adjustment enhances the relative accuracy more than the absolute position.
As can be observed from Table 5 and Figure 15 and Figure 16, the calculated East and North ground coordinates of the tie points were even better than the GPS accuracy obtained when the GPS is used directly without any enhancement. However, the vertical accuracy was worse than the horizontal accuracy, as illustrated in Figure 15 and Figure 16, which is the normal case when the GPS solution is acquired using the single point positioning technique. Overall, the proposed method mitigated the GPS random errors by relying on the centroid of all the solutions, calculated as a weighted average, rather than using each GPS solution individually.

5.3. Impact of Network Design on the Proposed Method

To verify the robustness of the proposed methods, especially the relative accuracy, eight images from the previously used network were considered as a new network, as illustrated in Figure 17. Furthermore, the images found to correspond to the most accurate absolute rotations in the previous experiment were removed from the new network, and a new set of nine object points was added. Since the proposed methodologies involve several types of observations with varied levels of uncertainty, the impact of changing the uncertainty of these observations on the final solution was considered. Therefore, the new experiment was divided into two scenarios: free network adjustment with and without linear feature constraints.

5.3.1. Free Network Adjustment without Linear Feature Constraints

In this section, the ZOD is considered to determine the network datum. Also, the impact on the mean precision (RMS) of the final 3D OPs of changing the standard deviation ($\sigma$) of the image measurements and the number of OPs was studied, as shown in Table 6.
Based on Table 6, there is a direct relationship between the image observation standard deviation ($\sigma$) and the RMS of the output 3D OP coordinates: the mean precision of the OP coordinates degrades approximately linearly as the image observation standard deviation ($\sigma$) increases. It can also be seen that, as the number of OPs decreases, the RMS of the final 3D OP solution increases (i.e., the precision degrades).

5.3.2. Free Network Adjustment with Linear Features Constraints

In this section, the effect of changing the number of constraints on the solution is analyzed, while the image observation standard deviation remains unchanged from the previous scenario. Also, the weight of the constraint observations was set five orders of magnitude smaller than that of the image observations, similar to the approach in Gerke [12]. Table 7 shows the RMS of the final 3D OP solution when changing the number of linear feature constraints. The incorporation of linear feature constraints enhanced the mean precision of the 3D OPs, and increasing the number of constraints further improved the RMS.
The results obtained from the proposed methodology are, however, limited to the network design used in this research. The network configuration influences the OP precision, which in turn affects the relative accuracy of the introduced methodology, whereas its absolute accuracy depends on the quality of the low-cost GPS, which can vary from one to five meters under ideal conditions.

6. Conclusions

This paper introduced a robust smartphone-based MMS, together with a newly proposed workflow that overcomes the drawbacks of the random errors associated with the smartphone motion sensors and the poor GPS accuracy. The proposed workflow corrects the initial EOPs based on the relative orientation of the captured images using the SfM algorithm. These relative orientations are then used to validate the absolute orientations of the images, which are obtained directly from the smartphone’s motion sensor measurements (i.e., accelerometers and magnetometers). First, the relative rotations are calculated w.r.t the first image in the network utilizing the SfM algorithm. Then, this procedure is repeated to make each image in the network a reference for the SfM model, thereby obtaining the relative rotation between each two images in the network. To evaluate the absolute rotations acquired directly by the smartphone, these rotations are used to derive the relative rotation between each two images in the IMU domain. The difference between each pair of corresponding relative rotations obtained in the two domains is then evaluated to identify the images with the most accurate relative rotation. Then, based on a statistical evaluation, the absolute rotation of one of these images is used to rotate the SfM model established w.r.t this image. Subsequently, this model is scaled using the ratio between two corresponding distances, where one distance is acquired based on a step detection algorithm utilizing accelerometer measurements along with a pedestrian navigation technique, and the other distance is measured from the selected SfM model. The network global absolute centroid is calculated using the acquired GPS solutions for all the captured images, and this centroid is used to translate the centroid of the SfM model to the global coordinate frame.
Finally, the corrected initial EOPs are incorporated with the linear feature constraints and refined using free network bundle adjustment. The proposed methodology was applied to a small building with well-distributed GCPs all around the building facade. The proposed system and methodology showed promising results for applications that require three- to five-meter absolute accuracy. Although this system provided reasonable accuracy given the employed sensors and their uncertainty, future research should include finding a way to extract the scale without the user’s step length measurement, which can introduce large errors if the user’s step length changes while walking between two images.

Acknowledgments

This project was funded by research grants from the Natural Sciences and Engineering Research Council of Canada (NSERC). The authors are grateful to the Saudi Ministry of Education for financing the first author’s scholarship at the Department of Geomatics Engineering at the University of Calgary, Canada.

Author Contributions

Naif M. Alsubaie conceived the methodology idea, developed the program and mathematical model, conducted the experiments, and wrote the manuscript. Naser El-Sheimy initiated the research idea and provided valuable feedback. Ahmed A. Youssef helped collect experimental data, specifically the ground truth data acquired by the Total Station, assisted with Section 4.4.3 of this paper, and proposed several suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ellum, C. The Development of a Backpack Mobile Mapping System. Master’s Thesis, The University of Calgary, Calgary, AB, Canada, 2001. [Google Scholar]
  2. Mikhail, E.; Bethel, J.; McGlone, C. Introduction to Modern Photogrammetry; John Wiley & Sons Australia, Ltd.: Milton, Australia, 2001. [Google Scholar]
  3. He, F.; Habib, A. Automated Relative Orientation of UAV-Based Imagery in the Presence of Prior Information for the Flight Trajectory. Photogramm. Eng. Remote Sens. 2016, 82, 879–891. [Google Scholar] [CrossRef]
  4. Hartley, R.I. In defense of the eight-point algorithm. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 580–593. [Google Scholar] [CrossRef]
  5. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge Press: Cambridge, UK, 2004. [Google Scholar]
  6. Nister, D. An efficient solution to the five-point relative pose problem. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 756–770. [Google Scholar] [CrossRef] [PubMed]
  7. Stewénius, H.; Engels, C.; Nistér, D. Recent developments on direct relative orientation. ISPRS J. Photogramm. Remote Sens. 2006, 60, 284–294. [Google Scholar] [CrossRef]
  8. Huang, T.S.; Netravali, A.N. Motion and structure from feature correspondences: A review. Proc. IEEE 1994, 82, 252–268. [Google Scholar] [CrossRef]
  9. McGlone, J.C. Bundle adjustment with object space geometric constraints for site modeling. In Proceedings of the SPIE’S 1995 Symposium on Oe/Aerospace Sensing and Dual Use Photonics, Orlando, FL, USA, 17–21 April 1995. [Google Scholar]
  10. McGlone, C. Bundle adjustment with geometric constriants for hypothesis evaluation. Int. Arch. Photogramm. Remote Sens. 1996, 31, 529–534. [Google Scholar]
  11. Habib, A.F.; Morgan, M.; Lee, Y. Bundle Adjustment with Self–Calibration Using Straight Lines. Photogramm. Rec. 2002, 17, 635–650. [Google Scholar] [CrossRef]
  12. Gerke, M. Using horizontal and vertical building structure to constrain indirect sensor orientation. ISPRS J. Photogramm. Remote Sens. 2011, 66, 307–316. [Google Scholar] [CrossRef]
  13. Zhang, Y.; Hu, K.; Huang, R. Bundle adjustment with additional constraints applied to imagery of the Dunhuang wall paintings. ISPRS J. Photogramm. Remote Sens. 2012, 72, 113–120. [Google Scholar] [CrossRef]
  14. El-Sheimy, N. The Development of VISAT—A Mobile Survey System for GIS Applications. Ph.D. Thesis, University of Calgary, Calgary, AB, Canada, 1996. [Google Scholar]
  15. Al-Hamad, A.; El-Sheimy, N. Smartphones Based Mobile Mapping Systems. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 40, 29–34. [Google Scholar] [CrossRef]
  16. Al-Hamad, A.; Moussa, A.; El-Sheimy, N. Video-based Mobile Mapping System Using Smartphones. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 1, 13–18. [Google Scholar] [CrossRef]
  17. Al-Hamad, A. Mobile Mapping Using Smartphones. Master’s Thesis, University of Calgary, Calgary, AB, Canada, 2014. [Google Scholar]
  18. Alsubaie, N.; El-Sheimy, N. The Feasibility of 3d Point Cloud Generation from Smartphones. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B5, 621–626. [Google Scholar] [CrossRef]
  19. Banville, S.; Van Diggelen, F. Innovation: Precise Positioning Using Raw GPS Measurements from Android Smartphones. Available online: http://gpsworld.com/innovation-precise-positioning-using-raw-gps-measurements-from-android-smartphones/ (accessed on 20 June 2017).
  20. Yoon, D.; Kee, C.; Seo, J.; Park, B. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones. Sensors 2016, 16, 910. [Google Scholar] [CrossRef] [PubMed]
  21. Niu, X.; Li, Y.; Zhang, H.; Wang, Q.; Ban, Y. Fast Thermal Calibration of Low-Grade Inertial Sensors and Inertial Measurement Units. Sensors 2013, 13, 12192–12217. [Google Scholar] [CrossRef] [PubMed]
  22. Li, Y. Integration of MEMS Sensors, WiFi, and Magnetic Features for Indoor Pedestrian Navigation with Consumer Portable Devices. Ph.D. Thesis, University of Calgary, Calgary, AB, Canada, 2016. [Google Scholar]
  23. Tang, P.V.; Tan, T.D.; Trinh, C.D. Characterizing Stochastic Errors of MEMS—Based Inertial Sensors. Available online: https://js.vnu.edu.vn/MaP/article/view/1688 (accessed on 20 June 2017).
  24. Stamatopoulos, C.; Fraser, C.S. Automated Target-Free Network Orientation and Camera Calibration. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 5, 339–346. [Google Scholar] [CrossRef]
  25. DeWitt, B.A.; Wolf, P.R. Elements of Photogrammetry (with Applications in GIS), 3rd ed.; McGraw-Hill Higher Education: New York, NY, USA, 2000. [Google Scholar]
  26. Lee, H.; Choi, S.; Lee, M. Step Detection Robust against the Dynamics of Smartphones. Sensors 2015, 15, 27230–27250. [Google Scholar] [CrossRef] [PubMed]
  27. Kuang, S. Geodetic Network Analysis and Optimal Design: Concepts and Applications; Ann Arbor Press: Ann Arbor, MI, USA, 1996; ISBN 978-1-57504-044-8. [Google Scholar]
  28. Granshaw, S.I. Bundle Adjustment Methods in Engineering Photogrammetry. Photogramm. Rec. 1980, 10, 181–207. [Google Scholar] [CrossRef]
  29. Lichti, D.D.; Chow, J.C.K. Inner Constraints for Planar Features. Photogramm. Rec. 2013, 28, 74–85. [Google Scholar] [CrossRef]
  30. Fraser, C.S. Network Design Considerations for Non-Topographic Photogrammetry. Photogramm. Eng. Remote Sens. 1984, 50, 1115–1126. [Google Scholar]
  31. Grafarend, E. Optimization of geodetic networks. Bollettino di Geodesia e Scienze Affini 1974, 33, 351–406. [Google Scholar]
Figure 1. (a) First test; (b) Second test © Google.
Figure 2. Developed iOS application.
Figure 3. Methodology workflow.
Figure 4. iPhone 6 rotation angles definition.
Figure 5. (a) Modified SfM rotation; (b) Modified SfM translation.
Figure 6. The difference between each corresponding relative rotation.
Figure 7. The step detection result.
Figure 8. The step detection result.
Figure 9. Straight linear feature constraints.
Figure 10. (a) The reference 3D model; (b) 3D model obtained by SfM.
Figure 11. The final corrected initial EOPs and OPs.
Figure 12. (a) Horizontal coordinates of OPs based on intersection; (b) Vertical coordinates of OPs based on intersection.
Figure 13. Normal matrix with all object points constrained by the inner matrix G and the geometric constraints.
Figure 14. Residuals at the final iteration.
Figure 15. Estimated horizontal coordinates of tie points.
Figure 16. Estimated vertical coordinates of tie points.
Figure 17. Second experiment: final corrected initial EOPs and OPs.
Table 1. Example of iPhone 6 GPS chipset accuracy over a known distance.

Trajectory               Test 1      Test 2
Reference distance       69.789 m    69.789 m
iPhone 6 GPS distance    73.9 m      199 m
Table 2. The difference between each corresponding relative rotation.

DD        Image 1    Image 2    Image 3
Image 1   —          N(1,2)     N(1,3)
Image 2   N(2,1)     —          N(2,3)
Image 3   N(3,1)     0.005      —
Table 3. Absolute accuracy of the corrected initial camera station positions.

Error      East (m)   North (m)   Height (m)      Error       East (m)   North (m)   Height (m)
Image 1    −3.01      −2.39       −14.66          Image 10    2.43       −1.88       −12.86
Image 2    −1.41      −4.59       −14.62          Image 11    1.97       −0.57       −12.03
Image 3    −3.50      −4.15       −15.18          Image 12    2.71       −0.05       −11.51
Image 4    −0.99      −4.09       −14.30          Image 13    0.91       0.30        −10.38
Image 5    −0.36      −5.65       −14.84          Image 14    1.43       1.55        −9.70
Image 6    2.13       −6.26       −15.09          Image 15    3.36       2.54        −8.67
Image 7    1.03       −4.69       −14.24          Image 16    1.63       2.72        −8.16
Image 8    3.37       −5.12       −14.66          Image 17    0.17       1.85        −9.49
Image 9    2.32       −3.66       −13.84          Image 18    −1.84      2.52        −9.42

RMS:       East (m) 2.16     North (m) 3.52     Height (m) 12.66
Table 4. The determined absolute accuracy of the initial Object Points (OPs).

Error     East (m)   North (m)   Height (m)      Error      East (m)   North (m)   Height (m)
OP 1      0.58       −1.25       −14.64          OP 13      −0.85      −2.15       −13.97
OP 2      0.44       −1.70       −14.52          OP 14      −0.08      −1.32       −13.66
OP 3      −0.34      −2.56       −14.46          OP 15      −0.35      −0.79       −12.61
OP 4      −0.09      −2.72       −14.56          OP 16      −0.27      −0.54       −11.10
OP 5      0.09       −2.94       −14.68          OP 17      −0.13      −0.64       −11.18
OP 6      0.23       −2.61       −14.18          OP 18      −1.04      −0.88       −10.97
OP 7      0.66       −2.81       −14.14          OP 19      −0.96      −0.58       −11.15
OP 8      0.68       −2.50       −14.47          OP 20      −0.86      −0.29       −11.25
OP 9      −0.05      −2.91       −14.34          OP 21      −1.12      −0.01       −10.90
OP 10     0.54       −2.12       −13.83          OP 22      −0.70      −0.36       −11.63
OP 11     0.54       −2.21       −13.85          OP 23      −0.75      −0.35       −11.35
OP 12     0.72       −1.57       −13.96          OP 24      −1.38      −0.19       −11.55
                                                 OP 25      −0.87      −0.34       −11.61

RMS:      East (m) 0.68     North (m) 1.76     Height (m) 13.06
Table 5. Absolute accuracy of the calculated final 3D ground coordinates of the tie points.

Error     East (m)   North (m)   Height (m)      Error      East (m)   North (m)   Height (m)
OP 1      −0.71      −2.74       −14.16          OP 13      0.64       −0.86       −13.05
OP 2      −0.63      −2.67       −14.26          OP 14      −0.05      −0.83       −12.79
OP 3      −0.36      −2.77       −14.58          OP 15      −0.16      −0.61       −12.36
OP 4      0.05       −2.65       −14.65          OP 16      −0.30      −0.22       −11.79
OP 5      0.24       −2.39       −14.54          OP 17      −0.16      −0.11       −11.73
OP 6      −0.02      −2.61       −14.63          OP 18      −0.36      −0.85       −12.27
OP 7      0.52       −2.33       −14.63          OP 19      −0.63      −1.04       −12.11
OP 8      0.35       −2.01       −14.29          OP 20      −0.87      −0.95       −11.68
OP 9      0.89       −1.81       −14.19          OP 21      −1.19      −0.40       −10.72
OP 10     0.24       −1.49       −13.69          OP 22      −0.71      −1.33       −11.72
OP 11     0.53       −1.41       −13.64          OP 23      −0.73      −0.52       −10.71
OP 12     0.07       −1.14       −13.26          OP 24      −1.24      −1.55       −11.27
                                                 OP 25      −0.79      −0.92       −11.82

RMS:      East (m) 0.60     North (m) 1.67     Height (m) 13.04
Table 6. The effect of changing observation weight on the RMS of the final 3D OPs solution.

Number of Tie Points   Image Observation σ   Mean Precision of Final 3D OPs (RMS)
                                             X        Y        Z
26                     1/2 pixel             0.004    0.004    0.002
                       2 pixels              0.018    0.015    0.007
                       5 pixels              0.044    0.037    0.018
17                     1/2 pixel             0.007    0.005    0.002
                       2 pixels              0.027    0.020    0.009
                       5 pixels              0.068    0.050    0.024
Table 7. The effect of changing the number of constraints on the RMS of the final 3D OPs solution.

Vertical Linear Features   Horizontal Linear Features   Image Observation σ   X         Y         Z
1                          1                            1/2 pixel             0.004     0.004     0.002
                                                        2 pixels              0.018     0.015     0.007
                                                        5 pixels              0.044     0.037     0.018
2                          2                            1/2 pixel             0.0033    0.0033    0.0014
                                                        2 pixels              0.013     0.012     0.005
                                                        5 pixels              0.033     0.033     0.014
4                          3                            1/2 pixel             0.002     0.002     0.001
                                                        2 pixels              0.008     0.008     0.004
                                                        5 pixels              0.021     0.022     0.010
