Article

A New Approach to Unwanted-Object Detection in GNSS/LiDAR-Based Navigation

Mathieu Joerger, Guillermo Duenas Arana, Matthew Spenko and Boris Pervan
1 College of Aerospace & Mechanical Engineering, The University of Arizona, Tucson, AZ 85721, USA
2 Armour College of Engineering, Illinois Institute of Technology, Chicago, IL 60616, USA
* Author to whom correspondence should be addressed.
Sensors 2018, 18(8), 2740; https://doi.org/10.3390/s18082740
Submission received: 21 June 2018 / Revised: 15 August 2018 / Accepted: 16 August 2018 / Published: 20 August 2018
(This article belongs to the Special Issue GNSS and Fusion with Other Sensors)

Abstract

In this paper, we develop new methods to assess safety risks of an integrated GNSS/LiDAR navigation system for highly automated vehicle (HAV) applications. LiDAR navigation requires feature extraction (FE) and data association (DA). In prior work, we established an FE and DA risk prediction algorithm assuming that the set of extracted features matched the set of mapped landmarks. This paper addresses these limiting assumptions by incorporating a Kalman filter innovation-based test to detect unwanted objects (UO). UOs include unmapped, moving, and wrongly excluded landmarks. An integrity risk bound is derived to account for the risk of not detecting UOs. Direct simulations and preliminary testing help quantify the impact on integrity and continuity of UO monitoring in an example GNSS/LiDAR implementation.

1. Introduction

This paper describes the design, analysis, and preliminary testing of a new method to quantify safety in GNSS/LiDAR navigation systems. An integrity risk bound is derived, which accounts for failures to detect undesirable, unmapped and wrongly extracted obstacles. The paper describes an innovation-based method, which is an alternative to the solution separation approach used in [1]. In addition, the paper provides the means to quantify the impact of unwanted objects (UO) on the risk of incorrect association. This work is intended for driverless cars, or highly automated vehicles (HAV) [2,3], operating in changing environments where unknown, moving obstacles (cars, buses, and trucks) are not wanted as landmarks for localization, and may occlude other useful, mapped landmarks.
This research leverages prior analytical work carried out in civilian aviation navigation where safety is assessed in terms of integrity and continuity [4]. These performance metrics are sensor- and platform-independent. Integrity is a measure of trust in sensor information: integrity risk is the probability of undetected sensor errors causing unacceptably large positioning uncertainty [4]. Continuity is a measure of the navigation system’s ability to operate without unscheduled interruption. Both loss of integrity and loss of continuity can place the HAV in hazardous situations [4,5].
Several methods have been established to predict integrity and continuity risks in GNSS-based aviation applications [6,7,8]. Unfortunately, the same methods do not directly apply to HAVs, because ground vehicles operate under sky-obstructed areas where GNSS signals can be altered or blocked by buildings and trees.
HAVs require sensors in addition to GNSS, including LiDARs, cameras, or radars. This paper focuses on LiDARs because of their prevalence in HAVs, their market availability, and our prior experience with them. A raw LiDAR scan is made of thousands of data points, each of which carries no useful navigation information on its own. Raw measurements must be pre-processed before they can be used to estimate HAV position and orientation (or pose).
A first class of algorithms establishes correlations between successive scans to estimate sensor changes in ‘pose’ (i.e., position and orientation) [9,10,11,12]. These procedures, including the Iterative Closest Point (ICP) approach [13], can become cumbersome when evaluating safety of HAVs moving over time. A second class of algorithms provides sensor localization by tracking recognizable, static features in the perceived environment (seminal references and survey papers can be found in [14,15,16,17,18,19]). Features can include, for example, lines or planes corresponding to building walls in two- or three-dimensional scans, respectively. Previous knowledge of feature parameters can be provided either from a landmark map, or from past-time estimation in Simultaneous Localization and Mapping (SLAM) [15,20]. The resulting information can then be iteratively processed using sequential estimators in SLAM (e.g., Extended Kalman filter or EKF), which is convenient in practical implementations. To estimate the HAV’s pose starting from a raw LiDAR scan, two intermediary, pre-estimator procedures must be carried out: feature extraction (FE), and data association (DA).
First, FE aims at finding the few most consistently recognizable, viewpoint-invariant, and mutually distinguishable landmarks in the raw sensor data. Second, DA aims at assigning the extracted features to the corresponding feature parameters assumed in the estimation process, i.e., at finding the ordering of mapped landmarks that matches the ordering of extracted features over successive observations. Incorrect association is a well-known problem that can lead to large navigation errors [21], thereby representing a threat to navigation integrity. FE and DA can be challenging in the presence of sensor uncertainty. This is why many sophisticated algorithms have been devised [17,18,19,21,22,23]. But, how can we prove whether FE and DA are safe for life-critical HAV navigation applications?
This research question is mostly unexplored. Several publications on multi-target tracking describe relevant approaches to evaluate the probability of correct association in the presence of measurement uncertainty [24,25]. However, these algorithms are not well suited for safety-critical HAV applications because of their lack of prediction capability, their reliance on approximations that do not necessarily upper-bound risks, and their high computational loads. Also, the risk of FE is not addressed. Overall, research on the integrity and continuity of FE and DA is sparse.
This paper builds upon prior work in [1,26,27,28], where we developed an analytical integrity risk prediction method using a multiple-hypothesis innovation-based DA process. We established a compact expression for the integrity risk of LiDAR-based pose estimation over successive iterations. However, references [26,27,28] made simplifying assumptions that limit the applicability of these prior results. For example, we assumed that the set of landmarks in the a-priori map was exactly the same as the one being extracted. This assumption was relaxed in [1] where we developed an integrity-risk-minimizing data-selection method. To achieve this, we derived a bound on the risk of incorrect association, with which a subset of measurements can be used while considering potential wrong associations with all landmarks surrounding the LiDAR. This bound was used in a preliminary approach to detect UO using solution separation tests. In practice, UO such as other vehicles passing by are likely to be extracted, and may even occlude other mapped landmarks. Obstacle detection methods have been developed to mitigate the impact of such UOs (example methods are described in [29,30]). But, the safety risks of using UOs as landmarks for navigation have yet to be fully quantified.
In response, in this paper, we derive new methods to quantify the integrity risk caused by failures to detect unwanted obstacles (UO), while guaranteeing a predefined false alert risk requirement.
Section 2 of the paper provides an overview of the risk evaluation methods developed in [1,26,27,28], and of their limitations. These methods use a nearest-neighbor DA criterion [9], defined by the minimum normalized norm of the EKF innovation vectors over all possible landmark permutations. Section 3 and Section 4 deal with the situation where a mapped landmark is not extracted, but another unknown obstacle is extracted instead (e.g., case of an obstacle masking a mapped landmark). This paper assumes that UOs only mask one unknown landmark at a time as the HAV drives by. Section 3 describes the innovation-based approach employed to detect the UO (which differs from the solution separation detector employed in [1]). An integrity risk bound is then derived to incorporate the risk of not detecting a UO when one might be present. This bound is analytically evaluated in two steps in Section 4: we account for the impact of undetected UO: (a) on the probability of hazardously misleading information (HMI) under the correct association (CA) hypothesis, and (b) on the probability of incorrect association (IA). Navigation integrity performance is then assessed in Section 5 using direct simulations and preliminary testing for an example implementation using GNSS and two-dimensional LiDAR data.

2. Background: Integrity Risk Bound Accounting for Incorrect Associations

This section presents an overview of the integrity risk evaluation method described in [1,26,28], which uses a multiple-hypothesis innovation-based DA process.

2.1. Integrity Risk Definition and Integrity Risk Bound

The integrity risk, or probability of hazardous misleading information (HMI) at time $k$, is denoted $P(HMI_k)$ and is defined in Figure 1. The safety criterion is $P(HMI_k) \le I_{REQ,k}$, where $I_{REQ,k}$ is a predefined integrity risk requirement set by a certification authority (similar to the requirements set for aviation applications in [4,8]). Values of $I_{REQ,k}$ that might be used in future HAV applications can be found in [5].
In [26,28], we established an analytical bound on the integrity risk, which accounts for the risk of incorrect associations. This bound is expressed as:
$$P(HMI_k) \le 1 - \left[1 - P(HMI_k \mid CA_K)\right] P(CA_K) + I_{FE,k} \tag{1}$$
with
$$P(HMI_k \mid CA_K) = 2\, Q\!\left\{ \ell / \sigma_k \right\} \tag{2}$$
$$P(CA_K) \ge \prod_{l=1}^{k} P_{\chi^2}\!\left\{ n_l + m_l,\ \frac{L_l^2 \lambda_l^2}{4} \right\} \tag{3}$$
where
$k$ is an index identifying a time step;
$K$ designates a range of indices, $K \equiv \{0, \ldots, k\}$, from filter initiation to time $k$;
$CA_K$ is the correct association hypothesis for all landmarks, at all times 0, ..., $k$;
$Q\{\cdot\}$ is the tail probability function of the standard normal distribution;
$\ell$ is the specified alert limit that defines a hazardous situation [4,5,8] (e.g., see Figure 1);
$\sigma_k$ is the standard deviation of the estimation error for the vehicle state of interest (or linear combination of states);
$P_{\chi^2}\{dof, T\}$ is the probability that a chi-squared-distributed random variable with "dof" degrees of freedom is lower than some value $T$;
$n_l$ is the number of measurements at time step $l$;
$m_l$ is the number of estimated state parameters at time step $l$;
$I_{FE,k}$ is an integrity risk budget allocation, i.e., a fraction of $I_{REQ,k}$ chosen to satisfy $I_{FE,k} \ll I_{REQ,k}$;
$L_l^2$ is the minimum mean normalized separation between landmark features that can be guaranteed with probability larger than $1 - I_{FE,l}$. The normalized feature separation metric is derived in [28]. $L_l^2$ is derived at FE using a map or database of landmarks, or using landmark observations at previous time steps in SLAM;
$\lambda_l^2$ is a mapping coefficient from separation space to EKF innovation space. This coefficient is determined by solving an eigenvalue problem in [28]. The minimum eigenvalue is taken to lower-bound $P(CA_K)$, which is conservative;
$L_l^2 \lambda_l^2$ forms a probabilistic lower bound on the mean innovation's norm, as further described in Section 2.2.
The integrity risk bound in Equation (1) is refined in this paper to account for the presence of UOs and for failures to detect them. Equation (1) captures a key tradeoff in data association: on the one hand, using only few measurements can cause a large nominal estimation error and hence a large $P(HMI_k \mid CA_K)$; on the other hand, few measurements from sparsely distributed landmarks can improve $P(CA_K)$ because features are "separated", distinguishable, and therefore can be robustly associated. $P(HMI_k)$ is unknown, but we can assess safety by comparing $I_{REQ,k}$ to the upper bound given in Equations (1)–(3), where all terms are known.
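To make the bound in Equations (1)–(3) concrete, the following minimal Python sketch evaluates its right-hand side with scipy: the standard normal tail function for Equation (2) and the central chi-squared CDF for Equation (3). This is an illustrative implementation under assumed inputs, not the authors' code, and the numerical values in the example call are made up.

```python
# Minimal sketch (illustrative, not the authors' implementation) of the
# integrity risk bound of Equations (1)-(3).
import numpy as np
from scipy.stats import norm, chi2

def p_hmi_bound(alert_limit, sigma_k, n_l, m_l, L2_lambda2_l, I_FE):
    """Upper bound on P(HMI_k) assuming no unwanted objects.

    n_l, m_l, L2_lambda2_l: per-epoch lists of measurement counts, state counts,
    and the lower bounds L_l^2 * lambda_l^2 on the mean innovation norm."""
    # Eq. (2): nominal risk under correct association (two-sided Gaussian tail)
    p_hmi_ca = 2.0 * norm.sf(alert_limit / sigma_k)
    # Eq. (3): lower bound on the probability of correct association over all epochs
    p_ca = np.prod([chi2.cdf(s / 4.0, df=n + m)
                    for n, m, s in zip(n_l, m_l, L2_lambda2_l)])
    # Eq. (1): overall bound, including the feature-extraction allocation I_FE
    return 1.0 - (1.0 - p_hmi_ca) * p_ca + I_FE

# Example call with made-up geometry (alert limit 0.5 m, as in Table 1)
print(p_hmi_bound(0.5, 0.08, n_l=[6, 6], m_l=[3, 3],
                  L2_lambda2_l=[60.0, 45.0], I_FE=1e-9))
```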

2.2. Innovation-Based Data Association

Equation (1) is derived for an innovation-based DA process, which is further described in the following paragraphs. Let $n_L$ be the total number of visible landmarks and $n_F$ the number of estimated feature parameters per landmark. Feature parameters can include landmark position, size, orientation, surface properties, etc. When using LiDAR only (we integrate GNSS in Section 5), the total number of feature parameters within the visible landmark set is $n_k = n_L\, n_F$. We can stack the actual (true) values of the extracted feature parameters for all landmarks in an $n_k \times 1$ vector $z_k$. Let $\hat{z}_k$ be an estimate of $z_k$. We assume that the cumulative distribution function of $\hat{z}_k$ can be bounded by a Gaussian function with mean $z_k$ and covariance matrix $V_k$ [31,32,33]. We use the notation $\hat{z}_k \sim N(z_k, V_k)$.
The nonlinear measurement equation can be written in terms of the $m_k \times 1$ state parameter vector $x_k$ as
$$\hat{z}_k = h_k(x_k) + v_k \tag{4}$$
where
$x_k$ includes vehicle pose parameters and may also include landmark feature parameters (for SLAM-type approaches);
$v_k$ is the extracted measurement noise vector: $v_k \sim N(0_{n \times 1}, V_k)$, where $0_{a \times b}$ is an $a \times b$ matrix of zeros.
The mean of $\hat{z}_k$ is $z_k = h_k(x_k)$. Equation (4) can be linearized about an estimate $\bar{x}_k$ of $x_k$:
$$\hat{z}_k \approx h_k(\bar{x}_k) + H_k (x_k - \bar{x}_k) + v_k, \quad \text{where} \quad H_k \equiv \left. \frac{\partial h_k(x_k)}{\partial x_k} \right|_{\bar{x}_k} \tag{5}$$
The ordering of landmarks in $\hat{z}_k$ is arbitrary and unknown. A nearest-neighbor approach (described below) is used to determine the ordering of measurement-to-state coefficients in $h_k(\bar{x}_k)$ and $H_k$. Failing to find the landmark ordering that matches that of $\hat{z}_k$ causes estimation errors called incorrect associations (IA).
If $n_L$ landmarks are extracted, there are $n_L!$ ways to arrange measurements in $\hat{z}_k$, which we call $n_L!$ candidate associations. For clarity of exposition, we assume that the total number of mapped landmarks, or of previously observed landmarks when using SLAM, is also the number $n_L$ of extracted landmarks (procedures to address this assumption are given in [1]). Let subscript $i$ designate association hypotheses, for $i = 0, \ldots, n_A$, where $n_A = n_L! - 1$. We define $i = 0$ as the fault-free, correct association (CA) hypothesis; the other $n_A$ hypotheses are IAs. IA impacts the EKF estimation process through the innovation vector $\gamma_{i,k}$. Vector $\gamma_{i,k}$ is an effective indicator of CA because it is zero mean only for the correct association.
In all IA cases, the mean of $\gamma_{i,k}$ is not zero and is expressed in terms of $n \times n$ permutation matrices $A_{i,k}$, for $i = 1, \ldots, n_A$, as
$$\gamma_{i,k} = \hat{z}_k - A_{i,k} h_k(\bar{x}_k) = y_{i,k} + v_k - A_{i,k} H_k \bar{\varepsilon}_k \tag{6}$$
where
$$y_{i,k} \equiv h_k(x_k) - A_{i,k} h_k(x_k) = (I_n - A_{i,k})\, z_k \quad \text{and} \quad y_{0,k} = 0 \tag{7}$$
where $\bar{\varepsilon}_k$ is the EKF state prediction error vector ($\bar{\varepsilon}_k \equiv \bar{x}_k - x_k$) and $I_a$ is the $a \times a$ identity matrix.
Let $\bar{P}_k$ be the EKF state prediction error covariance matrix. We select the association candidate that satisfies the nearest-neighbor association criterion [9], defined as
$$\min_{i=0,\ldots,n_A} \left\| \gamma_{i,k} \right\|^2 \tag{8}$$
where
$$\left\| \gamma_{i,k} \right\|^2 \equiv \gamma_{i,k}^T Y_{i,k}^{-1} \gamma_{i,k} \quad \text{and} \quad Y_{i,k} = A_{i,k} H_k \bar{P}_k H_k^T A_{i,k}^T + V_k \quad \text{for } i = 0, \ldots, n_A \tag{9}$$
The probability of correct association is the probability that the event $\bigcap_{i=1}^{n_A} \left\{ \|\gamma_{0,k}\|^2 \le \|\gamma_{i,k}\|^2 \right\}$ occurs. We can determine the a priori distributions of the variables $\|\gamma_{i,k}\|^2$, for $i = 0, \ldots, n_A$, except for their mean values, which are unknown. In [28], we show that the term $L_l^2 \lambda_l^2$ used in Equations (1)–(3) is a lower bound on the mean innovation's norm $\|y_{i,l}\|^2$ (with $\|y_{i,l}\|^2 \equiv y_{i,l}^T Y_{i,l}^{-1} y_{i,l}$). Equation (1) is a bound on $P(HMI_k)$, but it assumes that no UO is present. We first design a UO detector and derive a new $P(HMI_k)$ bound in Section 3, and then we establish an analytical method to evaluate the impact of undetected UOs on this new bound in Section 4.
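As an illustration of the nearest-neighbor criterion in Equations (8) and (9), the brute-force sketch below enumerates candidate permutation matrices and selects the one that minimizes the normalized innovation norm. This is a hypothetical implementation written for this exposition (practical systems prune the permutation set); the function and variable names are assumptions, not taken from [1,26,28].

```python
# Illustrative brute-force nearest-neighbor association, Eqs. (6), (8) and (9).
import itertools
import numpy as np

def nearest_neighbor_association(z_hat, h_xbar, H, P_bar, V):
    """Return (best permutation, its squared normalized innovation norm).

    z_hat  : (n,) stacked extracted features (one scalar feature per landmark here)
    h_xbar : (n,) predicted features h_k(x_bar) in the mapped-landmark order
    H      : (n, m) measurement Jacobian; P_bar: (m, m) prediction covariance
    V      : (n, n) extracted-feature noise covariance"""
    n_L = len(z_hat)
    best_perm, best_norm2 = None, np.inf
    for perm in itertools.permutations(range(n_L)):
        A = np.eye(n_L)[list(perm)]                 # candidate permutation matrix A_i
        gamma = z_hat - A @ h_xbar                  # innovation vector, Eq. (6)
        Y = A @ H @ P_bar @ H.T @ A.T + V           # innovation covariance, Eq. (9)
        norm2 = float(gamma @ np.linalg.solve(Y, gamma))
        if norm2 < best_norm2:
            best_perm, best_norm2 = perm, norm2
    return best_perm, best_norm2                    # Eq. (8): minimum over candidates
```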

3. Risks Involved with Unwanted Object Detection

In the presence of a UO, the innovation vector’s norm in Equation (9) is nonzero under all association hypotheses. In this case, the correct association hypothesis must be redefined. We call correct association (CA) the one where all landmarks that are not occluded by a UO are correctly associated, i.e., where the innovation vector would be zero mean if the UO was removed. The nonzero mean in the CA’s innovation vector is caused by the UO only, not by other incorrectly associated landmarks.

3.1. Innovation-Based Detector

If a UO is present, $\gamma_{i,k}$ does not have a mean of zero even under CA. To identify such events, we can set a threshold $T_k^2$ on the minimum innovation norm squared or, since the process is performed over time, on the running sum of minimum innovation norms squared. Using innovations (instead of solution separations as in [1]) will facilitate the evaluation of $P(CA_K)$ in Section 4. The UO detection test statistic is defined as
$$q_k^2 = \sum_{l=0}^{k} \min_{i=0,\ldots,n_A} \left\| \gamma_{i,l} \right\|^2 \tag{10}$$
Since the innovation sequence is white, $q_k^2$ is non-centrally chi-squared distributed with $n_{DOF,k} = \sum_{l=0}^{k} n_l$ degrees of freedom and noncentrality parameter (NCP) $\mu_{Q,k}^2$. We use the notation $q_k^2 \sim \chi^2(n_{DOF,k}, \mu_{Q,k}^2)$; $\mu_{Q,k}^2$ is further discussed in Section 4. The detection threshold $T_k^2$ is set according to a continuity risk requirement $C_{REQ}$ to limit the risk of false alerts. False alerts occur when no UO is present, in which case the NCP of $q_k^2$ is zero under CA. Thus, $T_k^2$ is given by
$$\int_{0}^{T_k^2} \chi_\tau^2(n_{DOF,k}, 0)\, d\tau = 1 - C_{REQ}, \quad \text{or equivalently,} \quad T_k^2 = P_{\chi^2}^{-1}\!\left\{ n_{DOF,k},\ 1 - C_{REQ} \right\} \tag{11}$$
where $P_{\chi^2}^{-1}\{\cdot\}$ is the inverse cumulative distribution function (CDF) of the chi-squared distribution $\chi_\tau^2(n_{DOF,k}, 0)$ evaluated at the $1 - C_{REQ}$ quantile.
If $T_k^2$ is exceeded, we interrupt the mission. (As an alternative to mission interruption, we could select a different set of landmark feature measurements as in [1,34], but this is beyond the scope of this paper.) This does not impact $P(HMI_k)$. However, if $T_k^2$ is not exceeded, a UO may still be present because the detection test statistic $q_k^2$ is a random, noisy variable. Navigation errors due to undetected UOs can cause the vehicle to crash.
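The detector of Equations (10) and (11) can then be sketched as follows, assuming the per-epoch minimum innovation norms of Equation (10) are available; the threshold is the inverse central chi-squared CDF evaluated at the 1 − C_REQ quantile. This is illustrative code, not the authors' implementation.

```python
# Sketch of the unwanted-object detection test of Section 3.1 (assumed inputs).
from scipy.stats import chi2

C_REQ = 1e-3  # continuity risk requirement (Table 1)

def detect_uo(min_innov_norm2_per_epoch, n_per_epoch):
    """min_innov_norm2_per_epoch: list of min_i ||gamma_{i,l}||^2 for l = 0..k
       n_per_epoch: list of measurement counts n_l."""
    q2 = sum(min_innov_norm2_per_epoch)       # cumulative test statistic, Eq. (10)
    n_dof = sum(n_per_epoch)
    T2 = chi2.ppf(1.0 - C_REQ, df=n_dof)      # detection threshold, Eq. (11)
    alert = q2 > T2                           # True: alert, interrupt the mission
    return alert, q2, T2
```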

3.2. Integrity Risk in Presence of UO

To quantify the integrity risk caused by potentially undetected UOs, the $P(HMI_k)$ definition in Equation (1) is modified: HMI is the joint event of the car being out of its lane while no alert has been issued. The integrity risk is redefined as
$$P(HMI_k) = P\left( |\hat{\varepsilon}_k| > \ell \ \cap\ \left[ \bigcap_{l=0}^{k} q_l^2 \le T_l^2 \right] \right) \tag{12}$$
where $\hat{\varepsilon}_k$ is the EKF state estimation error for the state of interest, e.g., the vehicle's lateral deviation within its lane. Because $\hat{\varepsilon}_k$ and $q_k^2$ are obtained after associating LiDAR data to a landmark map, we consider a set of mutually exclusive, exhaustive hypotheses of correct associations (CA) and incorrect associations (IA). We derived the following bounds:
$$P(HMI_k) \le P(HI_k \cap ND_K \cap CA_K) + P(HI_k \cap ND_K \cap IA_K) + I_{FE,k} \le P(HI_k \cap ND_k \mid CA_K) + P(ND_K \cap IA_K) + I_{FE,k} \tag{13}$$
where
$HI_k$ is the event of hazardous information (HI) at time $k$, defined as $HI_k \equiv |\hat{\varepsilon}_k| > \ell$;
$ND_K$ is the event of no detection (ND) at all times 0, ..., $k$, defined as $ND_K \equiv \bigcap_{l=0}^{k} q_l^2 \le T_l^2$;
$ND_k$ is the event of ND at time $k$, defined as $ND_k \equiv q_k^2 \le T_k^2$;
$CA_K$ is the CA hypothesis for all landmarks, at all times 0, ..., $k$;
$IA_K$ is the IA hypothesis for any landmark, at any time 0, ..., $k$.
In Section 4, we derive upper bounds on $P(HI_k \cap ND_k \mid CA_K)$ and $P(ND_K \cap IA_K)$.

4. Analytical Bounds on Risks Caused by Undetected Unwanted Objects

As stated in Section 1, this paper assumes that UOs only mask one unknown landmark at a time as the HAV drives by. This can be extended to multiple UOs masking one subset of landmarks at a time, using the procedures described in [1]. However, the performance analysis in Section 5 does not illustrate this case. The limitation is that the UO-free subset must be large enough to enable HAV pose estimation; the method requires landmark redundancy because it assumes an uncertain vehicle dynamic model and no inertial navigation system.

4.1. Risk of HMI Due to Undetected UO

We consider a set of mutually exclusive, exhaustive hypotheses $H_h$ of a UO masking a landmark $h$ (or landmark subset $h$), for $h = 0, \ldots, n_H$, where $n_H$ is the total number of hypotheses. We denote $H_0$ the fault-free (no UO) hypothesis. Using the law of total probability, $P(HI_k \cap ND_k \mid CA_K)$ is rewritten as
$$P(HI_k \cap ND_k \mid CA_K) = \sum_{h=0}^{n_H} P(HI_k \cap ND_k \cap H_h \mid CA_K) \tag{14}$$
We have no prior knowledge of the probability of occurrence of $H_h$, but we can bound the sum of the hypotheses' occurrence probabilities by 1. Thus, $P(HI_k \cap ND_k \mid CA_K)$ can be upper-bounded using the following expression:
$$P(HI_k \cap ND_k \mid CA_K) \le \max_{h=0,\ldots,n_H} P\!\left( |\hat{\varepsilon}_k| > \ell \ \cap\ q_k^2 \le T_k^2 \ \middle|\ H_h \cap CA_K \right) \tag{15}$$
Recalling that $\hat{\varepsilon}_k$ and $q_k^2$ are statistically independent (e.g., [35,36]), we can rewrite the bound in Equation (15) as
$$P(HI_k \cap ND_k \mid CA_K) \le \max_{h=0,\ldots,n_H} P\!\left( |\hat{\varepsilon}_k| > \ell \mid H_h \cap CA_K \right) P\!\left( q_k^2 \le T_k^2 \mid H_h \cap CA_K \right) \tag{16}$$
Under the correct association hypothesis $CA_K$, the distributions of $\hat{\varepsilon}_k$ and $q_k^2$ are known except for their mean values: $\hat{\varepsilon}_k \sim N(\mu_k, \sigma_k^2)$ and $q_k^2 \sim \chi^2(n_{DOF,k}, \mu_{Q,k}^2)$. Thus, Equation (16) can be upper-bounded using receiver autonomous integrity monitoring (RAIM) methods [6,7,34,35,36,37]. A UO causes a shift in the mean of $\hat{\varepsilon}_k$ and in the NCP of $q_k^2$. Large UO-induced feature measurement errors cause a large $\hat{\varepsilon}_k$ (i.e., a high risk of HI) but also a large $q_k^2$, which makes the UO easier to detect (i.e., a low risk of ND).
To analyze this tradeoff, innovation-based chi-squared RAIM methods consider the failure mode slope (FMS) [34,35,36,37]. Given a UO hypothesis $H_h$ for $h \ne 0$, the FMS is the ratio of the mean estimation error to the square root of the test statistic NCP: $g_{h,k} \equiv (\mu_k^2 / \mu_{Q,k}^2)^{1/2}$. Recent analytical results in [35] were established in the context of GNSS/INS integration. They provide the means to recursively determine the FMS when using an EKF for estimation and a sequence of innovations for detection. We use this method to determine the bound in Equation (16) for the risk-maximizing hypothesis $H_h$, $h = 0, \ldots, n_H$, i.e., for the worst-case FMS $g_{MAX,k} \equiv \max_{h=1,\ldots,n_H} g_{h,k}$:
$$P(HI_k \cap ND_k \mid CA_K) \le \max_{\eta} \left[ \left( Q\!\left\{ \frac{\ell + \eta\, g_{MAX,k}}{\sigma_k} \right\} + Q\!\left\{ \frac{\ell - \eta\, g_{MAX,k}}{\sigma_k} \right\} \right) P_{NC\chi^2}\!\left\{ n_{DOF,k},\ \eta^2,\ T_k^2 \right\} \right] \tag{17}$$
where $\eta$ is a search parameter (called the fault magnitude in [36]) that is easily determined at each time step $k$ using a one-dimensional search, e.g., an interval-halving method [36], and where
$$P_{NC\chi^2}\!\left( n_{DOF,k},\ \eta^2,\ T_k^2 \right) \equiv \int_{0}^{T_k^2} \chi_\tau^2(n_{DOF,k}, \eta^2)\, d\tau \tag{18}$$
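For illustration, the bound of Equations (17) and (18) can be evaluated with a simple one-dimensional grid search over the fault magnitude η; the paper relies on an interval-halving search [36], so the grid below is only a stand-in, and all inputs are assumed.

```python
# Sketch of the worst-case failure-mode-slope bound, Eqs. (17)-(18) (illustrative).
import numpy as np
from scipy.stats import norm, ncx2

def p_hi_nd_bound(alert_limit, sigma_k, g_max, n_dof, T2,
                  eta_max=50.0, n_grid=2000):
    # grid of fault magnitudes (starting slightly above zero to keep ncx2 noncentral)
    etas = np.linspace(1e-3, eta_max, n_grid)
    # two-sided probability of hazardous information for a mean error of eta * g_max
    p_hi = (norm.sf((alert_limit + etas * g_max) / sigma_k) +
            norm.sf((alert_limit - etas * g_max) / sigma_k))
    # probability of no detection: noncentral chi-squared CDF at the threshold, Eq. (18)
    p_nd = ncx2.cdf(T2, df=n_dof, nc=etas**2)
    return float(np.max(p_hi * p_nd))         # Eq. (17): worst case over eta
```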

4.2. Risk of Incorrect Association Due to Undetected UO

This subsection aims at evaluating the other unknown term in Equation (13): $P(ND_K \cap IA_K)$. The presence of a UO can cause the risk of IA to grow without bound. In this case again, the detector is leveraged to limit the impact of UOs on safety risks. However, in contrast with Section 4.1, two major challenges must be tackled to upper-bound $P(ND_K \cap IA_K)$:
(i)
the events $IA_K$ and $ND_K$ are correlated because both events depend on the same innovation vectors; and
(ii)
unlike on the left-hand side of Equation (17), there is no condition on association (no "given $CA_K$"), so we do not know which association is used to compute the innovations in the detection test statistic $q_k^2$.
In response, we used an approach based on the minimum detectable error (MDE) concept used in the GPS Local Area Augmentation System (LAAS) [4,38,39]. The MDE is a probabilistic bound on the NCP of the chi-squared detection test statistic. Appendix A shows that
$$P(ND_K \cap IA_K) \le \sum_{l=0}^{k} \left( P_{NC\chi^2}\!\left\{ n_l + m_l,\ \mu_{MDE,l}^2,\ \frac{L_l^2 \lambda_l^2}{4} \right\} + I_{MDE,l} \right) \tag{19}$$
where $\mu_{MDE,l}^2$ is the MDE due to a UO at time $l$. $\mu_{MDE,l}^2$ can be computed using the following equation:
$$\int_{0}^{T_l^2} \chi_\tau^2(n_l, \mu_{MDE,l}^2)\, d\tau = I_{MDE,l} \tag{20}$$
The probability $I_{MDE,l}$ is an integrity risk requirement allocation, i.e., a fraction of $I_{REQ,l}$ such that $I_{MDE,l} \ll I_{REQ,l}$. $\mu_{MDE,l}^2$ is the smallest value that the detection test statistic's NCP can take to ensure that the risk of no detection stays below $I_{MDE,l}$. $\mu_{MDE,l}^2$ is a probabilistic bound, not a random variable (which addresses challenge (i) above), and it is independent of the association candidate (Equation (20) only depends on the number of degrees of freedom, thus addressing challenge (ii)).
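Equation (20) defines μ_MDE,l² implicitly. One way to solve it numerically is sketched below, using bisection on the noncentral chi-squared CDF; the solver choice and the helper name are ours, not prescribed by the paper.

```python
# Sketch: solve Eq. (20) for the minimum detectable error noncentrality (assumed helper).
from scipy.stats import ncx2
from scipy.optimize import brentq

def mde_ncp(T2_l, n_l, I_MDE, ncp_max=1.0e6):
    """Find mu_MDE,l^2 such that P_NCchi2{n_l, mu^2, T_l^2} = I_MDE."""
    # the CDF at T2_l decreases monotonically as the noncentrality grows,
    # so a root is bracketed between (almost) 0 and ncp_max
    f = lambda ncp: ncx2.cdf(T2_l, df=n_l, nc=ncp) - I_MDE
    return brentq(f, 1e-12, ncp_max)
```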

4.3. Summary of the New Integrity Risk Bound, Accounting for Presence of UO

In the presence of UOs due to wrong landmark feature extraction, the probability of hazardous misleading information (HMI) at time $k$ can be bounded by the following expression:
$$P(HMI_k) \le P(HI_k \cap ND_k \mid CA_K) + P(ND_K \cap IA_K) + I_{FE,k} \tag{21}$$
with
$$P(HI_k \cap ND_k \mid CA_K) \le \max_{\eta} \left[ \left( Q\!\left\{ \frac{\ell + \eta\, g_{MAX,k}}{\sigma_k} \right\} + Q\!\left\{ \frac{\ell - \eta\, g_{MAX,k}}{\sigma_k} \right\} \right) P_{NC\chi^2}\!\left\{ \sum_{l=0}^{k} n_l,\ \eta^2,\ T_k^2 \right\} \right] \tag{22}$$
$$P(ND_K \cap IA_K) \le \sum_{l=0}^{k} \left( P_{NC\chi^2}\!\left\{ n_l + m_l,\ \mu_{MDE,l}^2,\ \frac{L_l^2 \lambda_l^2}{4} \right\} + I_{MDE,l} \right) \tag{23}$$
where
$\mu_{MDE,l}^2$ is derived from $\int_{0}^{T_l^2} \chi_\tau^2(n_l, \mu_{MDE,l}^2)\, d\tau = I_{MDE,l}$, and where, in addition to the variables defined under Equations (1)–(3), we used:
$\eta$ is a scalar search parameter (fault magnitude) that is varied to maximize the integrity risk at each time $k$;
$g_{MAX,k}$ is the worst-case failure mode slope (FMS) over all UO hypotheses, determined using the method given in [35];
$P_{NC\chi^2}\{dof, \mu^2, T\}$ is the probability that a non-centrally chi-squared-distributed random variable with "dof" degrees of freedom and noncentrality parameter $\mu^2$ is lower than some value $T$;
$T_k^2$ is a detection threshold set in accordance with the continuity risk requirement $C_{REQ}$ in Equation (11);
$I_{MDE,l}$ is an integrity risk budget allocation, i.e., a fraction of $I_{REQ,k}$, chosen to satisfy $I_{MDE,k} \ll I_{REQ,k}$.
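As a usage note, the incorrect-association term of Equation (23) is a running sum of per-epoch contributions; the sketch below assembles it from assumed inputs (the separation bounds L_l²λ_l², MDE noncentralities from a solver such as the one sketched in Section 4.2, and the allocations I_MDE,l). It is illustrative only.

```python
# Sketch assembling the incorrect-association risk term of Eq. (23) (illustrative).
from scipy.stats import ncx2

def p_nd_ia_bound(n_l, m_l, mu_mde2_l, L2_lambda2_l, I_MDE_l):
    """Sum over epochs of P_NCchi2{n_l+m_l, mu_MDE,l^2, L_l^2*lambda_l^2/4} + I_MDE,l."""
    total = 0.0
    for n, m, mu2, s, i_mde in zip(n_l, m_l, mu_mde2_l, L2_lambda2_l, I_MDE_l):
        total += ncx2.cdf(s / 4.0, df=n + m, nc=mu2) + i_mde
    return total
```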

5. Performance Analysis

In this section, example simulations and testing introduced in [26,27,28,40,41] are employed to compare the $P(HMI_k)$ bounds assuming no UOs in Equations (1)–(3) versus accounting for possible UOs in Equations (21)–(23).

5.1. Direct Simulation: Vehicle Roving through a GNSS-Denied Area

This analysis investigated the safety performance of a GPS/LiDAR navigation system onboard a vehicle roving through a forest-type environment. GPS signals were blocked by the tree canopy, and low-elevation satellite signals did not penetrate under the trees. Tree trunks served as landmarks for a two-dimensional LiDAR using a SLAM-type algorithm.
The measurement vector $\hat{z}_k$ in Equation (4) was augmented with GPS code and carrier measurements. The state vector $x_k$ was augmented to include an unknown GPS receiver clock bias and carrier phase cycle ambiguities. Time-correlated GPS signals and nonlinear LiDAR data were processed in a unified time-differencing EKF derived in [33,34]. The main simulation parameter values are listed in Table 1, and a differential GPS measurement error model was used, which is fully described in [41]. In this scenario, GPS and LiDAR complemented each other, with seamless transitions from open sky through GPS-denied areas, where landmarks were modeled as poles with nonzero radii.
As shown in Figure 2, Figure 3, Figure 4 and Figure 6, we consistently employed the following yellow-green-blue color code: the mission started with the vehicle operating in a GPS-available area (yellow-shaded). Satellite signals available during initialization enabled accurate estimation of cycle ambiguities, so that vehicle positioning uncertainty did not exceed a few centimeters. Then, as the vehicle moved and crossed the GPS- and LiDAR-available area (green-shaded) and the LiDAR-only area (blue-shaded), seamless variations in covariance were achieved. A detailed description of this simulation is given in [41]. In this scenario, the likelihood of IA is high.
First, as shown in Figure 2, we assumed that no UO was present but IAs occurred. One indicator of IA is displayed on the top of the upper left-hand-side (LHS) plot in Figure 2. It shows that the actual cross-track positioning error (thick black line) versus distance travelled exceeded the corresponding one-sigma covariance envelope (thin black line). This suggests that errors impacting positioning are not captured by the covariance.
This is confirmed in the lower part of the upper LHS chart in Figure 2, where the black curve showing the $P(HI_k \mid CA_K)$ bound stayed below 10⁻⁷. This curve can be derived directly from the EKF covariance. It does not account for IA. In contrast, the red $P(HI_k)$-bound curve reached a first plateau of $I_{FE,k}$ = 10⁻⁹ as soon as two landmarks were visible, by design of our risk evaluation method [28]. The $P(HI_k)$ curve then suddenly increased to 10⁻⁵ at approximately 29 m of travel distance.
To explain this sudden jump, the top right-hand-side (RHS) chart in Figure 2 shows that, at the travel distance of 29 m (i.e., at travel time = 29 s) corresponding to the large increase in predicted integrity risk, landmark “1” was hidden behind landmark “4”. To the LiDAR, landmark “1” became visible again at the next time step, which made correct measurement association with either landmark “1” or “4” extremely challenging. The P ( H I k ) bound accounted for the risk caused by such events. This is consistent with other results presented in [1,26,27,28].
The bottom LHS chart in Figure 2 shows the simulated GPS satellite geometry on an azimuth elevation plot of the sky. At travel time 29 s, the tree canopy blocked all satellite signals. The bottom RHS chart displays the simulated LiDAR measurements showing again that landmark “1” was not visible from the LiDAR’s viewpoint.
In Figure 3, the risk of having a UO occluding a landmark is taken into account, and our new integrity risk evaluation method was implemented. We could quantify the impact on $P(HMI_k)$ of undetected UOs assuming systematic CA by measuring the difference between the dashed black line, $P(HI_k \mid CA_K)$ derived using [28], and the solid black line, $P(HMI_k \mid CA_K)$. We noticed again that $P(HI_k \mid CA_K)$ (directly derived from the EKF covariance) was a poor safety metric because it stayed below 10⁻⁷, whereas $P(HMI_k \mid CA_K)$, accounting for UOs, exceeded 10⁻². In parallel, the red curves account for the risk of incorrect association (IA). The difference between the dashed red line and the solid red line, which respectively reached 10⁻⁵ and above 10⁻², shows the impact on $P(HMI_k)$ of undetected UOs.
To better understand the shape of the overall $P(HMI_k)$ bound, Figure 4 shows the contributions of each single-UO hypothesis (assuming no UO, assuming a UO masking landmark “1”, assuming a UO masking landmark “2”, etc.). In Figure 4, the color code used in the LHS graph is also employed in the RHS plot to represent the landmark involved in the corresponding fault hypothesis. Peaks in the $P(HMI_k)$-bound contributions occurred when the landmark geometry and redundancy were too poor to ensure reliable detection of a given UO. The overall $P(HMI_k)$ bound is the maximum of all contributions at each time step and is represented with a thick green line.

5.2. Preliminary Testing in an Incorrect-Association-Free Environment

Preliminary experimental testing was carried out using data collected in the structured environment shown in Figure 5. Static, simple-shaped landmarks were placed sparsely enough to ensure successful outcomes for FE and DA. Because the results presented here were free of incorrect associations, $P(HMI_k)$ was expected to match $P(HMI_k \mid CA_K)$. This test data was used to focus on the risk of UO misdetection.
Measurements from carrier phase differential GPS (CPDGPS) as well as LiDAR scanners were synchronized and recorded. In order to obtain a full 360-degree LiDAR scan, two 180-degree LiDAR scanners were assembled back-to-back. The LiDAR scanners had a specified 15–80-m range limit, a 0.5-degree angular resolution, a 5-Hz update rate, and a ranging accuracy of 1–5 cm (1 sigma) [42]. The GPS antenna was mounted on top of the front LiDAR. The lever-arm distance between the two LiDARs was accounted for. The two LiDARs and the GPS antenna were mounted on a rover also carrying the GPS receiver and data-link. An embedded computer onboard the vehicle recorded all measurements including the raw GPS data from the reference station transmitted via a wireless spread-spectrum data-link. Truth trajectory was obtained using a fixed CPDGPS solution.
The upper LHS chart in Figure 6 confirms that this is an incorrect-association-free scenario because the actual error (thick line) fits within the covariance envelope (thin line) throughout the test. In addition, the lower LHS graph in Figure 6 shows the $P(HMI_k)$-bound contributions for each single-UO hypothesis. The six $P(HMI_k)$ bounds corresponding to UO hypotheses are shown using the same color code as in Figure 4, and the UO-free hypothesis is the dashed line. The color code is also used on the RHS chart, which shows the landmark geometry. In the LHS graph, $P(HMI_k)$ increases substantially when accounting for undetected UOs (thick black curve), as compared to ignoring their potential presence (dashed red line). UOs occluding landmarks “1” and “2” cause by far the largest increase in the $P(HMI_k)$ bound. In this SLAM-type implementation where the map is built incrementally, landmarks observed early in the rover trajectory play a key role throughout the mission, which explains the method’s sensitivity to potential extraction faults on landmarks “1” and “2”. In future work, we will try to reduce the $P(HMI_k)$ bound using redundant information from other sensors, from additional landmarks, and from additional landmark features.

6. Conclusions

This paper presents a new approach to improve the safety of LiDAR-based navigation by quantifying the risks of missed detection of unwanted objects (UO). UOs can occlude useful landmarks, thereby causing large navigation errors. We established a bound on the integrity risk caused by UOs. First, we presented an innovation-based detector, and we established an analytical expression for the impact of undetected UO on the positioning error assuming correct association. Then, we derived a bound on the risk of incorrect association (IA) in the presence of UO. Direct simulation and preliminary testing in a structured environment demonstrated the proposed method’s ability to quantify safety risks in the presence of both UOs and IAs. It showed, for example, that the Kalman filter covariance is a poor metric of safety performance. The analysis of our preliminary experimental results suggests that additional redundant information from other sensors would be needed to safely detect UOs in the LiDAR’s surroundings.

Author Contributions

Conceptualization, M.J., G.D.A., M.S. and B.P.; Methodology, M.J.; Software, M.J.; Validation, M.J.; Formal Analysis, M.J.; Investigation, M.J.; Resources, M.J.; Data Curation, M.J.; Writing-Original Draft Preparation, M.J.; Writing-Review & Editing, M.J., G.D.A., M.S. and B.P.; Visualization, M.J.; Supervision, M.J., M.S. and B.P.; Project Administration, M.J. and M.S.; Funding Acquisition, M.J. and M.S.

Funding

This research is funded by the National Science Foundation (NSF award CMMI#1637899).

Conflicts of Interest

The authors declare no conflicts of interest. The opinions expressed in this paper do not necessarily represent those of any other organization or person.

Appendix A. Upper Bound on the Probability of Incorrect Association in the Presence of Unwanted Objects

This appendix aims at finding an upper bound on $P(ND_K \cap IA_K)$. First, we point out that EKF innovations are used both for data association (DA) and for UO detection. Because the sequence of innovations is white, the events $ND_K$ and $IA_l$ are independent for $l = 0, \ldots, k-1$, but not for $l = k$. Also, it is worth noting that $IA_K$ is a union of events of incorrect associations at any of the previous time steps, $IA_K \equiv IA_k \cup IA_{k-1} \cup \ldots \cup IA_0$, and that, by definition of the association initialization [31], $P(IA_0) = 0$. We use these observations to rewrite $P(ND_K \cap IA_K)$ as
$$P(ND_K \cap IA_K) = P\!\left( ND_K \cap \bigcup_{l=0}^{k} IA_l \right) \le P\!\left( \bigcup_{l=0}^{k} (ND_l \cap IA_l) \right) \le \sum_{l=0}^{k} P(ND_l \cap IA_l) \tag{A1}$$
This expression is desirable because it can be updated recursively. We will upper-bound $P(ND_K \cap IA_K)$ by bounding each individual term in the sum in Equation (A1).
From the definition of the detection test in Section 3.1, the $ND_l$ event can be expressed as
$$ND_l \equiv \left\{ \sum_{j=0}^{l} \min_{i=1,\ldots,n_A} \left\| \gamma_{i,j} \right\|^2 \le T_l^2 \right\}$$
which is included in the event
$$ND_l^* \equiv \left\{ \left\| \gamma_{MIN,l} \right\|^2 \le T_l^2 \right\}, \quad \text{where} \quad \left\| \gamma_{MIN,l} \right\|^2 \equiv \min_{i=1,\ldots,n_A} \left\| \gamma_{i,l} \right\|^2 \tag{A2}$$
With the index notation “MIN” defined in Equation (A2), the distribution of $\|\gamma_{MIN,l}\|^2$ is known ($\|\gamma_{MIN,l}\|^2 \sim \chi^2(n_l, \|y_{MIN,l}\|^2)$) except for its NCP $\|y_{MIN,l}\|^2$. With the knowledge that no detection occurred, we can determine a probabilistic bound $\mu_{MDE,l}^2$ on $\|y_{MIN,l}\|^2$. The law of total probability is used again to express a bound on $P(ND_l \cap IA_l)$ as
$$P(ND_l \cap IA_l) \le P(ND_l^* \cap IA_l) = P\!\left(ND_l^* \cap IA_l \cap \|y_{MIN,l}\|^2 < \mu_{MDE,l}^2\right) + P\!\left(ND_l^* \cap IA_l \cap \|y_{MIN,l}\|^2 \ge \mu_{MDE,l}^2\right) \le P\!\left(IA_l \mid \|y_{MIN,l}\|^2 < \mu_{MDE,l}^2\right) + P\!\left(ND_l^* \cap \|y_{MIN,l}\|^2 \ge \mu_{MDE,l}^2\right) \tag{A3}$$
We find $\mu_{MDE,l}^2$ to ensure that the second term in Equation (A3) is smaller than a predefined allocation $I_{MDE,l}$:
$$P\!\left( \left\| \gamma_{MIN,l} \right\|^2 \le T_l^2 \ \cap\ \left\| y_{MIN,l} \right\|^2 \ge \mu_{MDE,l}^2 \right) \le I_{MDE,l} \tag{A4}$$
A minimum value for $\mu_{MDE,l}^2$ is found using the expression
$$P\!\left( \left\| \gamma_{MIN,l} \right\|^2 \le T_l^2 \ \middle|\ \left\| y_{MIN,l} \right\|^2 = \mu_{MDE,l}^2 \right) = I_{MDE,l} \tag{A5}$$
Hence, Equation (20). $\mu_{MDE,l}^2$ is the smallest value of the test statistic's NCP that can cause no detection with probability lower than $I_{MDE,l}$. Any error larger than that will be detected with probability larger than $1 - I_{MDE,l}$, which is considered safe. Substituting Equation (A4) into Equation (A3), Equation (A3) becomes
$$P(ND_l \cap IA_l) \le P\!\left( IA_l \ \middle|\ \left\| y_{MIN,l} \right\|^2 < \mu_{MDE,l}^2 \right) + I_{MDE,l} \tag{A6}$$
As described in Section 2, the IA event may be expressed as $IA_l \equiv \left\{ \|\gamma_{MIN,l}\|^2 < \|\gamma_{0,l}\|^2 \right\}$ when the index “MIN” differs from 0. We address the fact that the random variables $\|\gamma_{MIN,l}\|^2$ and $\|\gamma_{0,l}\|^2$ are correlated using the exact same steps as in [28]. The derivation in [28] shows that the following event includes $IA_l$:
$$y_{MIN,l}^T Y_{MIN,l}^{-1} y_{MIN,l} \le 4\, q_l^T q_l \tag{A7}$$
where
$y_{MIN,l}$ is defined in Equation (7) and is not zero because of IA (not due to UOs);
$Y_{MIN,l}$ is defined in Equation (9);
$q_l$ is an $(n_l + m_l) \times 1$ vector such that $q_l \sim N(\mu_{Q,l}, I_{n_l + m_l})$;
the factor of four is derived in [28] by solving an eigenvalue problem involving a sum of two idempotent matrices.
In this work, we distinguish the impacts of IA and UO. Recall that the CA is the one where all landmarks that are not occluded by a UO are correctly associated, i.e., where the innovation vector would be zero mean if the UO were removed. The mean contribution due to IA is accounted for with $y_{MIN,l}$ on the left-hand side of Equation (A7). In contrast with [28], $q_l$ is not zero mean because of the presence of a UO. Following the eigenvalue solution provided in [28], the maximum impact of the UO on the right-hand-side term is $4\,\mu_{MDE,l}^2$. After dividing both sides of Equation (A7) by 4, the probability of occurrence of the event in Equation (A7) is expressed as in Equation (19).

References

  1. Joerger, M.; Duenas Arana, G.; Spenko, M.; Pervan, B. Landmark Data Selection and Unmapped Obstacle Detection in Lidar-Based Navigation. In Proceedings of the ION GNSS+, Portland, OR, USA, 25–29 September 2017. [Google Scholar]
  2. U.S. Department of Transportation (DOT) National Highway Traffic Safety Administration (NHTSA). Available online: https://www.nhtsa.gov/manufacturers/automated-driving-systems (accessed on 18 May 2018).
  3. Federal Automated Vehicles Policy—September 2016. Available online: https://www.transportation.gov/AV/federal-automated-vehicles-policy-september-2016 (accessed on 18 May 2018).
  4. RTCA Special Committee 159, Minimum Aviation System Performance Standards for the Local Area Augmentation System (LAAS). Available online: https://standards.globalspec.com/std/11988/rtca-do-245 (accessed on 18 May 2018).
  5. DOT Federal Highway Administration (FHWA), Vehicle Positioning Trade Study for ITS Applications. Available online: https://rosap.ntl.bts.gov/view/dot/3319/Print (accessed on 18 May 2018).
  6. Lee, Y.C. Analysis of Range and Position Comparison Methods as a Means to Provide GPS Integrity in the User Receiver. In Proceedings of the 42nd Annual Meeting of The Institute of Navigation (1986), Seattle, WA, USA, 24–26 June 1986. [Google Scholar]
  7. Parkinson, B.W.; Axelrad, P. Autonomous GPS Integrity Monitoring Using the Pseudorange Residual. J. Inst. Navig. 1988, 35, 255–274. [Google Scholar] [CrossRef]
  8. RTCA Special Committee 159, Minimum Operational Performance Standards for Global Positioning System/Wide Area Augmentation System Airborne Equipment. Available online: https://standards.globalspec.com/std/1239716/rtca-do-229 (accessed on 18 May 2018).
  9. Lu, F.; Milios, E. Globally Consistent Range Scan Alignment for Environment Mapping. Auton. Robots 1997, 4, 333–349. [Google Scholar] [CrossRef]
  10. Röfer, T. Using Histogram Correlation to Create Consistent Laser Scan Maps. IEEE Intell. Robots Syst. 2002, 1, 625–630. [Google Scholar]
  11. Diosi, A.; Kleeman, L. Laser scan matching in polar coordinates with application to SLAM. IEEE Robots Syst. 2005, 5, 3317–3322. [Google Scholar]
  12. Bengtsson, O.; Baerveldt, A.J. Robot localization based on scan-matching-estimating the covariance matrix for the IDC algorithm. Robot. Auton. Syst. 2003, 44, 29–40. [Google Scholar] [CrossRef]
  13. Rusinkiewicz, S.; Levoy, M. Efficient Variants of the ICP Algorithm. In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, Quebec City, QC, Canada, 28 May–1 June 2001. [Google Scholar]
  14. Bar-Shalom, Y.; Fortmann, T.E.; Cable, P.G. Tracking and Data Association. Math. Sci. Eng. 1988, 179, 918–919. [Google Scholar] [CrossRef]
  15. Leonard, J.; Durrant-Whyte, H. Directed Sonar Sensing for Mobile Robot Navigation; Springer: New York, NY, USA, 1992. [Google Scholar]
  16. Thrun, S. Robotic Mapping: A Survey. In Exploring Artificial Intelligence in the New Millenium; Lakemeyer, G., Nebel, B., Eds.; Morgan Kaufmann: San Francisco, CA, USA, 2003. [Google Scholar]
  17. Cooper, A.J. A Comparison of Data Association Techniques for Simultaneous Localization and Mapping. Master’s Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2005. [Google Scholar]
  18. Ruiz, I.T.; Petillot, Y.; Lane, D.M.; Salson, C. Feature Extraction and Data Association for AUV Concurrent Mapping and Localisation. In Proceedings of the 2001 ICRA. IEEE International Conference on Robotics and Automation (Cat. No.01CH37164), Seoul, Korea, 21–26 May 2001. [Google Scholar]
  19. Tareen, S.A.K.; Saleem, Z. A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK. In Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Pakistan, 3–4 March 2018. [Google Scholar]
  20. Dissanayake, G.; Newman, P.; Clark, S.; Durrant-Whyte, H.; Csorba, M. A Solution to the Simultaneous Localization and Map Building (SLAM) Problem. IEEE Trans. Robot. Autom. 2001, 17, 229–241. [Google Scholar] [CrossRef]
  21. Feng, Y.; Schlichting, A.; Brenner, C. 3D Feature Point Extraction from LiDAR Data Using a Neural Network. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Prague, Czech Republic, 12–19 July 2016. [Google Scholar]
  22. Li, Y.; Olson, E.B. A General Purpose Feature Extractor for Light Detection and Ranging Data. Sensors 2010, 10, 10356–10375. [Google Scholar] [CrossRef] [PubMed]
  23. Kim, J.; Kang, H. A New 3D Object Pose Detection Method Using LIDAR Shape Set. Sensors 2018, 18, 882. [Google Scholar] [CrossRef] [PubMed]
  24. Bar-Shalom, Y.; Daum, F.; Huang, J. The Probabilistic Data Association Filter. IEEE Control Syst. Mag. 2009, 29, 82–100. [Google Scholar]
  25. Areta, J.; Bar-Shalom, Y.; Rothrock, R. Misassociation Probability in M2TA and T2TA. J. Adv. Inf. Fusion 2007, 2, 113–127. [Google Scholar]
  26. Joerger, M.; Jamoom, M.; Spenko, M.; Pervan, B. Integrity of Laser-Based Feature Extraction and Data Association. In Proceedings of the 2016 IEEE/ION Position, Location and Navigation Symposium (PLANS), Savannah, GA, USA, 11–14 April 2016. [Google Scholar]
  27. Joerger, M.; Pervan, B. Continuity Risk of Feature Extraction for Laser-Based Navigation. In Proceedings of the 2017 International Technical Meeting of The Institute of Navigation, Monterey, CA, USA, 30 January–2 February 2017. [Google Scholar]
  28. Joerger, M.; Pervan, B. Quantifying Safety of Laser-Based Navigation. IEEE Trans. Aerosp. Electron. Syst. 2018. [Google Scholar] [CrossRef]
  29. Kim, C.; Lee, Y.; Park, J.; Lee, J. Diminishing unwanted objects based on object detection using deep learning and image inpainting. In Proceedings of the 2018 International Workshop on Advanced Image Technology (IWAIT), Chiang Mai, Thailand, 7–9 January 2018. [Google Scholar]
  30. Asvadi, A.; Premebida, C.; Peixoto, P.; Nunes, U. 3D Lidar-based static and moving obstacle detection in driving environments: An approach based on voxels and multi-region ground planes. Robot. Auton. Syst. 2016, 83, 299–311. [Google Scholar] [CrossRef]
  31. DeCleene, B. Defining Pseudorange Integrity—Overbounding. In Proceedings of the 13th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GPS 2000), Salt Lake City, UT, USA, 19–22 September 2000. [Google Scholar]
  32. Rife, J.; Pullen, S.; Enge, P.; Pervan, B. Paired Overbounding for Nonideal LAAS and WAAS Error Distributions. IEEE Trans. Aerosp. Electron. Syst. 2006, 42, 1386–1395. [Google Scholar] [CrossRef]
  33. Arana, G.D.; Joerger, M.; Spenko, M. Minimizing Integrity Risk via Landmark Selection in Mobile Robot Localization. IEEE Trans. Robot. 2017, in press. [Google Scholar]
  34. Tanil, C.; Khanafseh, S.; Joerger, M.; Pervan, B. An INS Monitor to Detect GNSS Spoofers Capable of Tracking Vehicle Position. IEEE Trans. Aerosp. Electron. Syst. 2018, 54, 131–143. [Google Scholar] [CrossRef]
  35. Tanil, C.; Joerger, M.; Khanafseh, S.; Pervan, B. A Sequential Integrity Monitoring for Kalman Filter Innovations-Based Detectors. In Proceedings of the ION GNSS+, Miami, FL, USA, 24–28 September 2018. [Google Scholar]
  36. Joerger, M.; Chan, F.-C.; Pervan, B. Solution Separation Versus Residual-Based RAIM. J. Inst. Navig. 2014, 64, 273–291. [Google Scholar] [CrossRef]
  37. Joerger, M.; Pervan, B. Kalman Filter-Based Integrity Monitoring Against Sensor Faults. J. Guid. Control Dyn. 2013, 36, 349–361. [Google Scholar] [CrossRef]
  38. Pullen, S.; Lee, J.; Luo, M.; Pervan, B.; Chan, F.-C.; Gratton, L. Ephemeris Protection Level Equations and Monitor Algorithms for GBAS. In Proceedings of the ION GPS 2001, Salt Lake City, UT, USA, 11–14 September 2001. [Google Scholar]
  39. Pullen, S. Augmented GNSS: Fundamentals and Keys to Integrity and Continuity. In Proceedings of the ION GNSS 2011, Portland, OR, USA, 19–23 September 2011. [Google Scholar]
  40. Joerger, M.; Pervan, B. Measurement-Level Integration of Carrier-Phase GPS and Laser-Scanner for Outdoor Ground Vehicle Navigation. J. Dyn. Syst. Meas. Control 2009, 131, 021004. [Google Scholar] [CrossRef]
  41. Joerger, M. Carrier Phase GPS Augmentation Using Laser Scanners and Using Low Earth Orbiting Satellites. Ph.D. Dissertation, Illinois Institute of Technology, Chicago, IL, USA, 2009. [Google Scholar]
  42. Ye, C.; Borenstein, J. Characterization of a 2-D Laser Scanner for Mobile Robot Obstacle Negotiation. In Proceedings of the 2002 IEEE International Conference on Robotics and Automation (Cat. No.02CH37292), Washington, DC, USA, 11–15 May 2002. [Google Scholar]
Figure 1. Defining Integrity Risk for Automotive Applications. The integrity risk is the probability of the car being outside the alert limit requirement box (blue shaded area) when it was estimated to be inside the box. When lateral deviation is of primary concern, then the alert limit is the distance ℓ between the edge of the car and the edge of the lane.
Figure 2. Simulation results assuming no unwanted objects (UO). (top left) On the upper plot, the thick black line represents the actual cross-track positioning error and the thin line is the one-sigma covariance envelope. The lower plot shows P(HIk) bounds for the GPS-denied area crossing scenario. (top right) Snapshot vehicle-landmark geometry at the time step corresponding to the large increase in the P(HIk) bound (time = 29 s). (bottom left) Azimuth-elevation sky plot showing GPS satellite geometry at time = 29 s. (bottom right) Snapshot LiDAR scan at time = 29 s when landmark “1” is hidden behind landmark “4”.
Figure 3. P(HMIk) bounds taking into account the possibility of IA and the potential presence of UOs. The difference between the dashed black line and the solid black line quantifies the impact on P(HMIk) of undetected UOs when assuming correct association (CA). The difference between the dashed red line and the solid red line measures the impact on P(HMIk) of undetected UOs when accounting for incorrect associations.
Figure 4. Simulation results accounting for UOs. (a) P(HMIk)-bound contributions under each UO hypothesis (H0 assumes no UO, H1 assumes a UO masks landmark “1”, etc.): the overall risk is the thick green line. (b) Color-coded landmark geometry: the color code identifies which landmark is masked by a UO under the corresponding hypothesis in the left-hand-side plot.
Figure 5. Experimental setup of a forest-type scenario, where a GPS/LiDAR-equipped rover is driving by six landmarks (cardboard columns) in a GPS-denied area. GPS is artificially blocked by a simulated tree canopy and a precise differential GPS solution is used for truth trajectory determination.
Figure 6. Experimental results accounting for UOs. (a) P(HMIk)-bound contributions for each unmapped object (UO) hypothesis for the preliminary experimental dataset: the overall risk is the thick black line. (b) Color-coded subsets identifying which landmark is occluded by a UO under each one of the six single-UO hypotheses.
Table 1. Simulation parameters.
System Parameter | Value
Standard deviation of raw LiDAR ranging measurement | 0.02 m
Standard deviation of raw LiDAR angular measurement | 0.5 deg
LiDAR range limit | 20 m
GNSS and LiDAR data sampling interval | 0.5 s
Standard deviation of raw GNSS code ranging signal | 1 m
Standard deviation of raw GNSS carrier ranging signal | 0.015 m
GNSS multipath correlation time constant | 90 s
Vehicle speed | 1 m/s
Alert limit ℓ | 0.5 m
Integrity risk allocation for FE, I_FE,k | 10⁻⁹
Integrity risk allocation for MDE, I_MDE,k | 10⁻¹⁰
Continuity risk requirement, C_REQ,k | 10⁻³
