Aircraft-Based Satellite Navigation Augmentation to Enable Automated Landing and Movement on the Airport Surface

A dissertation submitted by

Okuary Osechas

in partial fulfillment of the requirements for the degree of

Doctor of Philosophy in Electrical Engineering

Tufts University

February 2014

Adviser: Jason Rife

Acknowledgments

The author gratefully acknowledges the Federal Aviation Administration GBAS Program (Grant FAA-10-G-006) for supporting this research. The opinions discussed here are those of the author and do not necessarily represent those of the FAA or other affiliated agencies.


Abstract

This work identifies a series of shortcomings in conventional aircraft navigation systems. These shortcomings can be overcome by using satellite-based navigation systems; however, even state-of-the-art satellite-based systems are not well suited for operations such as precision approach, landing, and taxiing. The proposed solutions require no new ground-based hardware; rather, they rely primarily on software (and communication) upgrades to the aircraft.

The three major contributions of the thesis are:

1. A monitor to autonomously identify satellite clock faults on moving satellite receivers

2. A collaborative method to detect ionospheric activity that is potentially hazardous to aviation users of satellite navigation systems

3. A novel approach to estimating bounds on position error distributions in real time

All three contributions have the potential to tighten the confidence intervals on satellite navigation position estimates, which could enhance the efficiency of airspace use by allowing aircraft to operate closer together and to fly routes closer to the optimal path than is currently possible.


Contents

Abstract

1 Introduction
  1.1 Motivation
  1.2 Challenges
  1.3 Thesis Contributions
  1.4 Theoretical Background
    1.4.1 GNSS Basics
    1.4.2 Concepts in Aviation
    1.4.3 Approach and Landing Operations
    1.4.4 ILS-Based Approaches
    1.4.5 GBAS-Based Approaches
  1.5 Organization

2 Autonomous Satellite Clock Fault Detection
  2.1 Introduction
  2.2 Algorithms
    2.2.1 Acceleration Solution
    2.2.2 Monitor Test
    2.2.3 Pre-processing
  2.3 Experimental Verification
    2.3.1 Experiment Data
    2.3.2 Algorithm Verification
  2.4 Simulating Integrity Performance
    2.4.1 Simulation Setup
  2.5 Simulation Results
  2.6 Integrity Implications
  2.7 Summary
  2.8 Appendices
    2.8.1 Fault Detectability: P_min
    2.8.2 Selecting τ_S
    2.8.3 Derivation of Range Acceleration Correction Term
    2.8.4 Influence of Clock Fault on the Estimation of v_θ

3 Collaborative Ionosphere Monitoring
  3.1 Introduction
  3.2 System Architecture
  3.3 Threat Model
  3.4 Proposed Method
    3.4.1 Local Gradient Estimate for Each Satellite
    3.4.2 Monitoring Statistic and Threshold
    3.4.3 Faulted Operations
  3.5 Simulation Scenario
  3.6 Dependence of Sensitivity on Receiver Density
  3.7 RFI Robustness of the Method
  3.8 Operation Under Restricted Bandwidth
    3.8.1 Relaxations to the Optimization Problem
    3.8.2 Impact of the Relaxations
  3.9 Discussion
    3.9.1 Benefits of the Method
    3.9.2 Implementation Concepts for Network Communication
    3.9.3 Future Work
  3.10 Summary

4 Direct Position-Domain Error Bounding
  4.1 Introduction
  4.2 Comparison of Direct and Indirect Position-Domain Bounding
    4.2.1 Indirect Position-Domain Bounding
    4.2.2 Direct Position-Domain Bounding
    4.2.3 Comparison of an Overbound Tightness-of-Fit for a Synthetic Example
    4.2.4 Implications of Position-Domain Bounding
  4.3 Error Distribution Binning
    4.3.1 Non-Stationary Statistical Process
    4.3.2 Indirect Position-Domain Bounding
    4.3.3 Naïve Position-Domain Bounding
    4.3.4 Figure-of-Merit-Based Direct Position-Domain Bounding
    4.3.5 Theorem: Overbounding of Mixtures of Gaussians
  4.4 Selection of a Figure of Merit
    4.4.1 Dilution of Precision as a Figure of Merit
    4.4.2 Weighted Dilution of Precision as a Figure of Merit
  4.5 Experimental Demonstration
    4.5.1 Data Source
    4.5.2 Baseline Binning Scheme
    4.5.3 Sensitivity to Bin Selection
    4.5.4 DOP vs. Weighted DOP
  4.6 Discussion
  4.7 Summary
  4.8 Appendix I

5 Summary and Vision
  5.1 Vision for Future Work
  5.2 Conclusions

Chapter 1

Introduction

This introductory chapter outlines contributions to the advancement of automated aircraft landing systems that are described in depth later in the thesis. It begins by exploring some shortcomings in state-of-the-art aircraft navigation systems and how these shortcomings can be addressed in the next generation of satellite-based navigation systems. Solutions to the identified challenges are proposed in this chapter and elaborated on in the technical portion of the thesis (Chapters 2-4). This chapter also contains some general background information and basic theory of satellite navigation for civil aviation.

1.1 Motivation

The number of aircraft departures has risen over the years and orders for aircraft are also rising, which means that the growth will be sustained for the near term. Passenger air travel accounted for more than 30 million departures worldwide (10 million in the US alone) in 2012, carrying more than 2.7 billion passengers [1]. Growing numbers of aircraft that use a fixed number of airports require an increase in operational efficiency to provide higher throughput than current systems are able to sustain. At the same time, accident figures are in a steady decline [2]. Maintaining this record of safety is a major driver in developing new technology that enables service improvement. The expectation is that satellite navigation will be able to deliver the sought-after flexibility at the required level of reliability [3, 4].

Three important factors limit throughput in currently used sensor systems for automated aircraft landing. The first factor is accuracy: minimum separation standards between flying aircraft are dictated by the uncertainty in position estimates, which is a fundamental limitation of the technology. The second factor is flexibility: operations are constrained to straight segments, which may not be optimal in terms of fuel burn or noise impact on residential areas. The third factor is cost: current-generation systems are expensive to install, and maintenance is labor intensive. In fact, maintenance is complicated enough that a new generation of calibration systems will be based on satellite-based navigation methods [5].

Providing a navigation system that addresses these shortcomings would therefore likely enable a significant improvement in the efficiency of the airspace. One possible solution is to install satellite-based landing systems. Global Navigation Satellite Systems (GNSS), such as the Global Positioning System (GPS), are designed to provide users with position estimates in real time. GNSS alone, however, do not meet the strict accuracy and reliability requirements for civil aviation. Shortcomings in accuracy and reliability can be overcome using complementary augmentation systems that correct for spatially correlated errors and monitor for faults in the GNSS. One implementation of such an augmentation system is the Ground Based Augmentation System (GBAS), in which reference receivers positioned at known locations measure the errors in the broadcast GNSS signals and broadcast corrections to users within a radius of a few tens of kilometers. GBAS-based procedures address some of the needs not covered by current-generation systems (accuracy, flexibility, cost) [6].


GBAS is currently being tested by civil aviation authorities in cooperation with industrial partners. In the U.S. GBAS prototypes are currently under evaluation at Newark and Houston. In Germany, GBAS is certified for precision approach (but not for landing) at Bremen and Frankfurt. Despite initial successes, many challenges remain in certifying GBAS for landing operations, roll-out, and movement on the airport surface.

1.2 Challenges

Landing and taxiing are the most demanding phases in a flight operation, from the navigation point of view, as accuracy and reliability requirements are much stricter than for any other phase. Some of the difficulties in meeting requirements stem from:

1. Loss of communications with the GBAS ground segment during surface operations: A taxiing aircraft may find itself missing GBAS messages, for example because the data broadcast is blocked by another aircraft or a building [7]. This is a situation that is relatively common in taxiing but is less frequent during landing.

2. Artifacts in the GNSS signal during ionosphere storms: In the presence of an unusually pronounced ionosphere gradient, the component of ionospheric delay in the differential correction will be different from the ionospheric delay at the user receiver [8]. Errors of this nature can become large enough to be hazardous.

3. Overly conservative error models that lead to unnecessary system outages: Current error bounds are computed for zero-mean Gaussian error models that over-predict the actual ranging errors. The compounding of uncertainty due to the combination of multiple ranging measurements in a position estimate exacerbates the conservatism of position error bounds [9].

The list of issues above makes no claim of completeness: other challenges remain to be solved. One example is the threat of unintentional jamming of GNSS signals by personal privacy devices [10]; such devices are illegal but available on the black market.

Some additional background is helpful for understanding each of the issues enumerated above. Consider first the shortcoming related to GBAS communication dropouts. Communication outages are expected during surface movement, as static structures and moving vehicles may occasionally block messages transmitted from the GBAS base station to taxiing aircraft. As such, longer communication outages must be tolerated during surface movement than during other critical operations [11], such as landing. Because of such extended communication outages, during which the GBAS ground station cannot communicate alarms to a taxiing aircraft, satellite faults present additional risk. In particular, satellite clock faults are problematic, since conventional GBAS monitoring for clock faults is ineffective when extended communication outages occur. Such a satellite clock fault can go undetected, introducing faulted information into a critical system. A possible resolution to this issue is to provide additional monitoring for clock faults at the aircraft receiver. However, appropriate monitoring methods for timely detection of hazardous satellite clock faults have not previously been defined in the research literature.

Uncertainty in the estimation of ionospheric activity also plays an important role in the aforementioned unnecessary GBAS outages. Ionospheric activity was long assumed to be highly spatially correlated. Since 2003, however, there has been new evidence that storms can cause disruptions in the ionosphere, in the form of local gradients that violate this assumption of large-scale spatial correlation of ionosphere delay [12]. Detecting ionosphere disruptions can be accomplished using a network of airborne receivers that share GNSS measurements to estimate local ionosphere activity. State-of-the-practice methods rely on multiple ground-based reference receivers to detect variations in ionosphere activity between them.

As part of the mission to provide safe operations, the error models used in assessing the confidence interval for a position solution over-predict the probability of large errors. This conservative error model, which varies in time, is used to decide whether or not a safety-critical operation such as landing should be performed. If the error model is too conservative, then aircraft may be incorrectly prevented from landing even under safe conditions. Unfortunately, current mechanisms for building error bounds combine multiple inaccurate, conservative ranging error models into one position error model [9], which leads to a situation where uncertainty grows with the addition of more measurements, when in fact uncertainty should be reduced with the inclusion of each new measurement. A solution to this conundrum might be to model the distribution of position errors directly, applying a single simplifying model and bypassing the effect of stacking the uncertainty of multiple pessimistic models. This endeavor, however, requires a method to address the non-stationary nature of position errors, which is simpler to model for ranging measurements.

1.3 Thesis Contributions

This thesis describes solutions to the challenges identified above. Each solution is presented with a thorough analysis that justifies its performance and is described in Chapters 2-4. The following list briefly summarizes my primary contributions.

1. Developed and evaluated the performance of a satellite clock monitor that enables GBAS to support safe aircraft surface movement: GBAS does not currently support safe aircraft surface movement because longer communications outages are permitted than during precision approach and landing. To address this shortcoming I introduced the first monitor that scans GNSS carrier signals from a moving antenna in order to detect satellite clock anomalies. The algorithm is shown in simulation to address the effects of extended communication outages [13, 14].

2. Proposed combining measurements from mobile and stationary trusted receivers to detect ionosphere storms while ensuring system robustness to the loss of multiple participating receivers: With the exception of ionosphere monitoring, GBAS is capable of remaining fully operational under multiple reference receiver outages. To allow for continued operation during outages of reference receivers I designed a new architecture that augments conventional ground-based GBAS measurements with data drawn from airborne receivers. The monitor functions by estimating ionosphere activity over an area tens of kilometers in radius. The method is shown in simulation to tolerate multiple receiver outages without significant deterioration in sensitivity [15, 16].

3. Designed a process for tightening confidence intervals on GBAS position solutions to reduce down-times: Unnecessarily pessimistic confidence intervals on the GBAS position solution stem from the difficulty of modeling position errors for continually changing satellite positions. To reduce potential down-times I proposed a new way of bounding GBAS error distributions in real time and implemented a proof of concept on representative data. The new approach computes bounds directly from a position error model (rather than a ranging error model, as conventionally done), which reduces the effect of large, rare errors. Tighter confidence intervals enable operations under conditions that would otherwise be precluded when the satellite positions are unfavorable [17].

1.4 Theoretical Background

This section provides an overview of the fundamentals of GNSS, as additional context for understanding the technical details of the contributions listed above.

1.4.1 GNSS Basics

The goal of GNSS is to provide location and time information to users worldwide. GNSS users compute their position estimates based on distance measurements, from the user to a number (N ) of different GNSS satellites in the sky. To estimate its position based on distance measurements the user needs to be able to compute the position of all satellites of interest (x(k) , k ∈ [1, N ]); this is usually done based on the orbital parameters broadcast by the satellites themselves [18].

Ranging Measurements

The quantity that GNSS receivers measure is a pseudorange (ρ), which is a time difference between a sending time (t_s) and a receiving time (t_r), multiplied by the speed of light (c); thus, for the k-th satellite, ρ^(k) = c(t_r − t_s^(k)). Due to the fact that t_r and all t_s^(k) are computed on different hardware, the difference in clocks must be accounted for. It is customary to represent the time offset from standard GPS time for each satellite and for the user with a clock bias term. These will be labeled b_u for the user and b^(k) for each satellite k.

A user receiver is able to measure two different characteristics of the satellite signal that yield a range measurement. The first option is to exploit the fact that data are encoded using CDMA modulation; in this case the transitions between code chips yield the required timing information. This code-based measurement is often labeled simply pseudorange. The second option is to track the phase of the carrier signal directly. This is known as a carrier-phase measurement and trades spatial accuracy for ambiguity, as the measurements are more precise than code measurements but fundamentally ambiguous at integer multiples of the wavelength (λ).

The code measurement on satellite k depends on the position of that satellite x^(k) and its corresponding clock bias b^(k), and on the position of the user x_u and its clock bias b_u. The norm of the difference in position between satellite and user is referred to as the range: r^(k) = ||x^(k) − x_u||. A number of other influences disturb the propagation of the satellite signal and are commonly modeled as part of the pseudorange equation; the most common influences are ionospheric delays (I^(k)), tropospheric delays (T^(k)), and random zero-mean white noise ε_ρ. The pseudorange measurement equation becomes:

\rho^{(k)} = r^{(k)} + b_u - b^{(k)} + I^{(k)} + T^{(k)} + \varepsilon_\rho    (1.1)

The carrier-phase measurement model differs from the pseudorange measurement in two fundamental points: the ionosphere component and the cycle ambiguity N. The ionosphere advances the phase of the satellite signal, such that the delay term I^(k) becomes an advance term of the same magnitude as in equation (1.1) but opposite sign. The cycle ambiguity indicates that infinitely many range values, spaced apart by the wavelength λ, satisfy the equation. With the inclusion of a zero-mean Gaussian noise term (ε_φ), the carrier-phase measurement model becomes:

\phi^{(k)} = r^{(k)} + b_u - b^{(k)} - I^{(k)} + T^{(k)} + N\lambda + \varepsilon_\phi    (1.2)


Differentially Corrected Measurements

The basic principle behind DGNSS is that a set of pre-surveyed base stations monitors the GNSS constellation to compensate for errors that correlate spatially, such as satellite clock modeling errors or ionosphere delay. The differential correction (∆ρ) on the pseudorange measurement for satellite k is:

\Delta\rho^{(k)} = \rho^{(k)} - \left[ \left( x^{(k)} - x_b \right) \cdot 1^{(k)} + b_b \right]    (1.3)

Here x_b is the surveyed position of the base station, usually known to within a few mm, b_b is the clock bias on the base receiver, and x^(k) is the position of the k-th satellite in view. Accuracy of position estimates can be improved by applying the corrections obtained from equation (1.3) to the pseudorange measurements, resulting in the corrected pseudorange measurement ρ_corr^(k):

\rho^{(k)}_{corr} = \rho^{(k)} + \Delta\rho^{(k)}    (1.4)

Replacing ρ with ρcorr in equation (1.7) leads to more accurate position estimates.
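As an illustration only, the following Python sketch shows how the differential correction of equation (1.3) might be formed at the base station and applied per equation (1.4). The function and variable names are hypothetical, the sign conventions follow this section, and all quantities are expressed in meters.

```python
import numpy as np

# Illustrative sketch of eqs. (1.3)-(1.4); names and inputs are assumptions.
def differential_correction(rho_base, x_sat, x_base, b_base):
    # (x^(k) - x_b) . 1^(k) reduces to the geometric range satellite-to-base
    predicted = np.linalg.norm(x_sat - x_base) + b_base
    return rho_base - predicted              # Delta rho^(k), eq. (1.3)

def apply_correction(rho_user, delta_rho):
    return rho_user + delta_rho              # corrected pseudorange, eq. (1.4)
```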

Dual Frequency Measurements

Differential corrections provide one way to correct for ionosphere delays; another approach is to process GNSS signals broadcast from the same satellite at different frequencies. The ionosphere delay term of equation (1.1) is frequency dependent, which allows for an estimation (and removal) of that very delay, if the user receiver is equipped to receive GNSS signals at multiple frequencies. Currently, most GPS satellites only provide dual-frequency code measurements to military users; however, modernized GPS satellites and satellites from other GNSS constellations will make dual-frequency code measurements available to civilian users as well.

GNSS satellites broadcast in various frequency slots located in the L band, which are commonly labeled L1, L2, and L5. Two frequencies are protected and dedicated to be used for satellite navigation: f_L1 = 1575.42 MHz and f_L5 = 1176.45 MHz. The so-called Ionosphere Free (or IFree) pseudorange observable, for a combination of L1 and L5, is defined as [18]:

\rho_{IF} = \frac{f_{L1}^2}{f_{L1}^2 - f_{L5}^2}\,\rho_{L1} - \frac{f_{L5}^2}{f_{L1}^2 - f_{L5}^2}\,\rho_{L5} = 2.261\,\rho_{L1} - 1.261\,\rho_{L5}    (1.5)

where ρL1 and ρL5 are the pseudorange measurements on L1 and L5, respectively.
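A minimal numeric sketch of equation (1.5) follows, assuming Python with no external dependencies; the computed coefficients reproduce the 2.261 and 1.261 values quoted above.

```python
F_L1 = 1575.42e6  # Hz
F_L5 = 1176.45e6  # Hz

def ifree_pseudorange(rho_l1, rho_l5):
    """Ionosphere-free combination of eq. (1.5)."""
    c1 = F_L1**2 / (F_L1**2 - F_L5**2)  # approx. 2.261
    c5 = F_L5**2 / (F_L1**2 - F_L5**2)  # approx. 1.261
    return c1 * rho_l1 - c5 * rho_l5
```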

IFree processing removes ionospheric influences without concern about spatial decorrelation, which is in principle an attribute that would make it superior to DGNSS. It does not, however, remove the tropospheric delay (which is usually much smaller than the ionospheric delay). Another drawback of IFree is that it increases the noise power, compared to the single-frequency solution, as it results from a weighted sum with coefficients greater than unity. Yet another disadvantage relative to DGNSS is that only a few GNSS satellites currently transmit multi-frequency civilian signals, making IFree unavailable most of the time.

GNSS Position Solution

Equation (1.1) can be re-written as:

\rho^{(k)} = \left( x^{(k)} - x_u \right) \cdot 1^{(k)} + b_u - b^{(k)} + I^{(k)} + T^{(k)} + \varepsilon_\rho    (1.6)


where 1^(k) is the unit vector that points from the user to satellite k. This notation emphasizes the non-linear nature of the estimation problem, as the definition of 1^(k) depends on x_u. The estimation problem of finding the user position can be linearized iteratively around a position estimate x̂_{u,i}, with the initial position guess often taken to be x̂_{u,0} = 0 (the center of the Earth) and the initial clock bias b_{u,0} = 0. This yields an initial ρ̂_0^(k) = ||x^(k) − x_{u,0}|| + b_{u,0} = ||x^(k)||. For convenience, the measurements ρ^(k) are grouped into a measurement vector ρ = [ρ^(1), ρ^(2), ..., ρ^(k), ..., ρ^(N)]^T.

The increment of the position estimate (δxu,i ) at each iteration i and corresponding clock bias estimate (δbu,i ) are computed from the following linearized least squares expression: 



\begin{bmatrix} \delta x_{u,i} \\ \delta b_{u,i} \end{bmatrix} = \left( G^T W G \right)^{-1} G^T W \, \delta\rho_i    (1.7)

Here the geometry matrix G is made up of rows consisting of pointing vectors to each satellite (first three elements) and a one (the fourth element) [18], and W is simply a weighting matrix that accounts for the elevation-dependent noise level on each satellite.

One iteration is completed by re-calculating the pseudorange increment δρi+1 , considering that ρˆi is the vector of expected pseudoranges assuming xu = xu,i :

\delta\rho_{i+1} = \rho_i - \hat{\rho}_i    (1.8)

This iterative process is pursued until convergence to within a specified limit ε, when the condition ||x_{u,i} − x_{u,i+1}|| ≤ ε is met.
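The iteration of equations (1.6)-(1.8) can be sketched as follows in Python (NumPy). This is a simplified illustration rather than an avionics implementation: atmospheric terms are omitted, the weighting matrix W is taken as the identity, and the rows of G carry the negative pointing vector, i.e. the partial derivative of the predicted pseudorange with respect to the user position.

```python
import numpy as np

def solve_position(sat_pos, rho, tol=1e-4, max_iter=10):
    """Iterative linearized least-squares position solution (eqs. 1.6-1.8).

    sat_pos : (N, 3) ECEF satellite positions [m]
    rho     : (N,)   pseudorange measurements [m]"""
    x_u, b_u = np.zeros(3), 0.0          # initial guess: center of the Earth
    for _ in range(max_iter):
        los = sat_pos - x_u                                # vectors user -> satellite
        ranges = np.linalg.norm(los, axis=1)
        unit = los / ranges[:, None]                       # pointing vectors 1^(k)
        rho_hat = ranges + b_u                             # expected pseudoranges
        G = np.hstack((-unit, np.ones((len(rho), 1))))     # geometry matrix
        delta_rho = rho - rho_hat                          # eq. (1.8)
        delta = np.linalg.solve(G.T @ G, G.T @ delta_rho)  # eq. (1.7) with W = I
        x_new = x_u + delta[:3]
        converged = np.linalg.norm(x_new - x_u) <= tol     # convergence test
        x_u, b_u = x_new, b_u + delta[3]
        if converged:
            break
    return x_u, b_u
```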

1.4.2 Concepts in Aviation

For this thesis it is useful to understand some basic concepts in aviation and, especially, the navigation aids that are commonly available to aviation users. Note that vision still plays an important role in landing operations, as navigation aids are only used in poor visibility conditions.

1.4.3 Approach and Landing Operations

In an approach, aircraft fly towards the runway threshold, usually descending at an angle of 3° until touchdown. In poor visibility, pilots may rely on technical navigation aids for the approach, and the degree to which the pilot requires assistance is indicated by the decision height (DH). At the DH the pilot must take manual control of the aircraft in order to land, or else abort the landing operation. Automated approaches are classified into different categories, of which the most important are category I (Cat-I) and category III (Cat-III). Cat-I systems support a DH of 200 ft (61 m), while different variants of Cat-III support DHs between 0 and 15 ft.

1.4.4 ILS-Based Approaches

Currently the only sensor systems certified to provide Cat-III service are Instrument Landing Systems (ILS). An ILS installation is typically made up of a glide slope indicator and a localizer. The glide slope subsystem is designed to aid the approaching aircraft in maintaining the nominal 3° slope, while the localizer indicates deviation from the approach direction. While Cat-III ILS is accurate and reliable enough to meet current requirements, the acquisition, installation, and maintenance costs are often prohibitive. Installation and calibration of the directional antennae is labor intensive and requires highly skilled personnel. Since an ILS installation provides guidance for approach on a single runway, each runway end must be equipped separately, thus it is not possible to specify alternate approach procedures for a given runway end.

1.4.5 GBAS-Based Approaches

GBAS is a DGNSS system in which a local reference station with a known position monitors the errors on GNSS satellites. The reference station computes corrections, as described in Section 1.4.1, and the corrections are broadcast to users within a few tens of kilometers of the reference.

In its current conception, GBAS is able to provide Cat-I-like performance and the main advantages over ILS-borne Cat-I are already apparent. For example a single GBAS installation can serve multiple runway ends of a single airport and even runways at other, nearby airports; this represents a great reduction in upfront costs and in maintenance efforts. A second crucial benefit is the fact that GBAS allows arbitrary approach paths, which are much more similar to the optimal paths aircraft would fly under visual conditions than ILS could ever support.

Developing the technology to enable Cat-III GBAS has the potential to change many aspects of civil aviation, from reducing delays in poor visibility, to reducing fuel consumption thanks to more optimal routes, to increasing the capacity of the airspace. Another potentially beneficial development is the Differentially Corrected Positioning Service (DCPS), which is designed to support non-precision maneuvers, as well as surface movement (or taxiing). DCPS has been fully specified in interface documents, but it has not yet been certified. DCPS has significant potential for impact, as no other service supports surface movement.

1.5 Organization

This thesis is organized in three parts. In the first part an introductory chapter (1) presents the basic rationale that drives the underlying research. This chapter has a section dedicated to the theoretical background that allows readers to familiarize themselves with the methods used in the following chapters.

The second part contains the technical matter of the research. One chapter is dedicated to each of the three contributions listed above. The first technical chapter (2) presents the receiver autonomous satellite clock fault monitor. The second technical chapter (3) describes the collaborative approach to ionosphere monitoring. The third technical chapter (4) introduces a novel error bounding technique that allows for a reduction of conservatism in GBAS estimates.

The last chapter (5) of the thesis is dedicated to the conclusions drawn from the research, a vision for the future and a list of consulted references.

Chapter 2

Autonomous Satellite Clock Fault Detection

A novel method for detecting GPS satellite clock excessive acceleration faults is presented, which allows users of GBAS to detect clock faults even during extended outages, which may occur for users of the Differentially Corrected Positioning Service (DCPS). The method is based on monitoring residuals of the acceleration solution, computed from carrier-phase measurements. To demonstrate its function, the algorithm is verified using data collected from a hardware receiver. Because the algorithm is designed to react to rare events, a simulation is used to study algorithm performance in the presence of a fault. The proposed method represents a significant reduction in ranging error, by two orders of magnitude for the same probability of missed detection, as compared to conventional Receiver Autonomous Integrity Monitoring (RAIM).

2.1 Introduction

Satellite clock errors represent a hazard in GPS-based navigation systems, as timing and synchronization of different clocks plays a critical part in delivering accurate navigation estimates. To a certain degree, all satellite clocks drift slightly from the GPS time standard, but typically these drifts are slow and, for the most part, correctable using the navigation message parameters, which are estimated and uploaded by the GPS operational control segment (OCS). On rare occasions, one of the satellite clocks may fail dramatically, causing an especially rapid divergence from the GPS time standard. In these cases, the ranging error on the faulted satellite may reach very large levels (hundreds of meters) in a very short time (seconds or minutes), necessitating specialized error-mitigation for safety critical applications.

This chapter focuses on mitigating the effect of dramatic clock failures on a particular set of safety-critical GPS users: aviation users of the Differentially Corrected Positioning Service (DCPS) for the Ground-Based Augmentation System (GBAS) for GPS [4]. At first glance, it would seem that differential corrections generated by GBAS would eliminate clock errors, as the same clock error is observed both by the stationary reference station and the mobile user. For a rapidly drifting clock, however, the differential correction is not perfect, because of the finite time required to transmit the corrections to the mobile user. Any clock drift that occurs during the transmission period is not inherently removed by the broadcast differential correction. To address this problem, GBAS users apply a range-rate correction [19], which can account for linear clock drift. GBAS users do not correct for nonlinear drift (e.g. drift which includes a second or higher order derivative). For this reason, the presented method is concerned with a particularly harsh model of a satellite clock failure, which is dubbed satellite clock excessive acceleration (EA).


In general terms, an EA fault is a satellite clock fault that is not removed by GBAS differential corrections (including the range-rate correction). EA faults will be modeled as a quadratic drift of arbitrary magnitude. Historically, satellite clock faults have occurred on the order of once per year [20], though not all of these have been major EA faults.

EA faults introduce a significant integrity risk for precision users requiring rapid alert times, as during EA faults the system is likely to provide hazardously misleading information while the fault remains undetected. Although clock faults can be detected by the OCS, which will then flag the satellite as unhealthy, OCS detection of an EA fault may not occur for many minutes or tens of minutes. During this time, GPS users are exposed to an anomalous ranging error on the affected satellite, which can result in large positioning errors.

Additional monitoring capabilities are required to support safety of life applications such as precision aircraft landing and aircraft surface movement. To this end, GBAS incorporates an EA monitor, which continuously checks the carrier-phase signal for each satellite to detect possible anomalies. This monitor functions well for precision landing applications, where the differential corrections can be no more than 2 seconds old. For DCPS, by contrast, the time-to-alert can be as long as 10 seconds [19]. As such, differential corrections can age substantially for DCPS users, resulting in massively larger EA errors. To understand how errors grow when there is an extended transmission time (or outage) after the differential corrections are generated, consider the following model for the GBAS user ranging error in response to an EA failure. The model assumes a quadratic satellite clock drift, mitigated by user range-rate corrections [4].

b^{(m)}_{fault}(t) = \begin{cases} \frac{1}{2}\,\ddot{b}^{(m)}_{fault}\,(t - T_0)^2 & 0 < (t - T_0) < \tau_{ag} \\ 0 & (t - T_0) \le 0 \end{cases}    (2.1)


The variable τ_ag is defined as the time delay introduced by ground system (g) and avionics (a) processing. Note that EA errors grow with the square of time, so a fivefold increase in time-to-alert (from 2 s to 10 s) corresponds to a 25-fold increase in ranging error. For GBAS users, equation (2.1) means that pseudorange measurements associated with satellite m will be in error by b^{(m)}_{fault}(t), with T_0 as the start time of the fault, t the current time, and b̈^{(m)}_{fault} the magnitude of the faulted clock acceleration.

Given the quadratic dependence on time, aging of the broadcast corrections is a major concern. DCPS allows corrections up to 10 s old to be applied to current data in the case of a broadcast outage. In the event of an EA fault coinciding with a DCPS outage, users may find themselves using hazardously misleading information as the basis for their navigation. It is assumed that ground-based monitors cannot provide alerts to mobile users during the communications outage, so ground-based EA monitors do not mitigate the fault.
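A small numeric illustration of equation (2.1) makes the 25-fold difference concrete; the fault magnitude used here is an arbitrary example value, not a figure from the threat model.

```python
# Quadratic error growth of eq. (2.1) for an assumed clock-acceleration
# fault of 0.1 m/s^2 (illustrative value only).
b_ddot_fault = 0.1                      # m/s^2
for age in (2.0, 10.0):                 # seconds since the fault onset
    err = 0.5 * b_ddot_fault * age**2   # residual ranging error, eq. (2.1)
    print(f"correction age {age:4.1f} s -> ranging error {err:5.1f} m")
# 2 s -> 0.2 m; 10 s -> 5.0 m: a factor of 25, as noted above.
```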

This chapter proposes an alternative approach to EA monitoring, in which the monitor is implemented autonomously, within the user receiver. Autonomous monitoring has the potential to address the long user outage times which might otherwise pose a risk for DCPS users during a satellite clock anomaly, by providing a warning when the ground system cannot do so. Although the response time for autonomous monitoring is faster than that for ground-facility-based monitoring, autonomous monitoring does have a disadvantage. In contrast with ground-facility-based monitoring, where reference receivers are stationary, user receivers are mobile and cannot be assumed to have zero acceleration. In this case, satellite clock acceleration errors may be masked by the physical acceleration of the user receiver and are, therefore, not directly observable.

Implementing autonomous EA monitoring thus requires a capability to measure and compensate for user receiver physical acceleration. To accomplish this, a new method is proposed for acceleration-based receiver autonomous integrity monitoring (RAIM). The proposed concept is based on two existing ideas that are combined in a new way, to yield the desired capability. The first idea, that of computing a receiver acceleration estimate from carrier-phase data, has been described by several researchers [21, 22], and is a logical extension of conventional GPS processing [18]. The second idea, that of monitoring the residuals of least-squares estimation, is the well-known foundation of a number of RAIM algorithms [23, 24]. The unique contribution of this chapter is to synthesize these two ideas, by investigating the use of the residuals of an acceleration-estimation process for the purpose of detecting EA events. This chapter refers to the new algorithm as carrier-acceleration receiver autonomous integrity monitoring, abbreviated CA-RAIM.

Figure 2.1: Simplified block diagram summarizing the proposed approach for carrier-acceleration-based RAIM.

The CA-RAIM algorithm, initially proposed in [13], is developed in detail in the remainder of this chapter, which consists of three major sections. The theoretical basis of the CA-RAIM algorithm is introduced in the Algorithms section. The Experimental Verification section applies the CA-RAIM algorithm to experimental data. The following section, Simulating Integrity Performance, uses simulation to extend the experimental results and, in particular, to assess the performance of the proposed methods in detecting rare but hazardous EA faults. Observations from the data and simulation implementations are discussed in the section called Integrity Implications. A brief conclusion summarizes the contents of the chapter and provides an outlook on future work.

2.2 Algorithms

This section describes an algorithmic implementation of the proposed CA-RAIM concept. It presents tools for computing acceleration estimates and, subsequently, for defining a monitor test based on acceleration residuals. The performance of these steps can be significantly enhanced by introducing additional signal pre-processing, in advance of acceleration estimation. This signal processing is described immediately following the introduction of the primary steps of the algorithm.

2.2.1 Acceleration Solution

Because the proposed method is intended for use with a moving receiver, the first task of the satellite EA monitoring algorithm is to remove physical and clock accelerations associated with the user. For this purpose, an estimate of the user’s acceleration is computed from carrier phase measurements. The process of estimating user acceleration is closely related to estimating user position, so this section reviews GPS position estimation before discussing acceleration estimation.


Obtaining a standard navigation solution from pseudorange measurements involves a recursive process in which the estimated position is computed using the data from four or more satellites. This navigation solution is based on the following measurement model [18]. In this model, ρ^(k) is the pseudorange from the receiver to satellite k, r^(k) is the true range, and b_u and b^(k) are the user and satellite clock biases; in addition, ionospheric (I) and tropospheric (T) effects are considered, as well as a random component ε_ρ, where:

\rho^{(k)} = r^{(k)} + b_u - b^{(k)} + I^{(k)} + T^{(k)} + \varepsilon_\rho    (2.2)

A navigation solution can be computed from a set of at least four pseudoranges. The solution vector consists of three position estimates (δx_u) and a clock bias estimate (δb_u). The solution vector is obtained by iteratively solving the following linearized least squares equations.

\delta\rho = G \begin{bmatrix} \delta x_u \\ \delta b_u \end{bmatrix} + \varepsilon_\rho    (2.3)

\begin{bmatrix} \delta x_u \\ \delta b_u \end{bmatrix} = \left( G^T G \right)^{-1} G^T \delta\rho    (2.4)

Here the geometry matrix G is made up of rows consisting of pointing vectors to each satellite (first three elements) and a one (the fourth element) [18]. The equations that relate the user position to satellite pseudoranges are nonlinear, so the linearized equations must be solved iteratively to obtain an accurate solution.

In roughly the same way that a position estimate can be computed from pseudorange measurements, acceleration estimates can be computed from carrier-phase accelerations. In order to construct the equations for estimating user acceleration from carrier-phase data, it is first useful to provide a model of the carrier ranging measurement that is analogous to (2.2). This model can be expressed as follows, where φ is the carrier measurement (in units of length), λ is the carrier wavelength, N is an integer number of wavelengths, and ε_φ is a random error.

\phi^{(k)} = r^{(k)} + b_u - b^{(k)} - I^{(k)} + T^{(k)} + \lambda N + \varepsilon_\phi    (2.5)

Taking the second derivative of equation (2.5) and assuming the second derivatives of ionosphere and troposphere delays are negligibly small (Ï = T̈ = 0) gives an expression for carrier-phase acceleration:

\ddot{\phi}^{(k)} = \left( a^{(k)} - a_u \right) \cdot 1^{(k)} + \ddot{b}_u - \ddot{b}^{(m)}_{fault} - f\!\left(v^{(k)}\right) + \varepsilon_{\ddot{\phi}}    (2.6)

Here 1^(k) is the unit vector pointing from the receiver to satellite k, a^(k) is the acceleration of satellite k, a_u is the user physical acceleration, b̈_u is the acceleration of the user clock, b̈^{(m)}_{fault} is the magnitude of the EA fault on satellite m, f is a nonlinear function described below, v^(k) is the satellite velocity, and ε_φ̈ is the random error on the carrier-phase acceleration observables. The difference a^(k) − a_u is the relative acceleration of the satellite with respect to the receiver. It is assumed the satellite acceleration can be computed from broadcast ephemeris data. The function f accounts for the deterministically computed rotation of the 1^(k) vector (which can be computed from the ephemeris data and GPS observables). Unpredicted satellite clock acceleration is only modeled explicitly for the faulted case (e.g. during an EA fault). Under these considerations, f becomes a function of relative position and velocity; specifically of r^(k) and v_θ, which is the component of the relative velocity between satellite and receiver perpendicular to the pointing vector 1^(k):

f\!\left(v^{(k)}\right) = \frac{v_\theta^2}{r^{(k)}}    (2.7)

During an EA fault, the estimated v_θ may become faulted and introduce an error in f. The effect of this error, however, is negligible, as shown in the appendix of this chapter (Influence of Errors in the Estimated v_θ).
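For illustration, the correction term of equation (2.7) could be computed as in the following sketch, where v_θ is obtained by projecting out the component of the relative velocity along the pointing vector. The inputs (satellite and user states in a common frame) are placeholders.

```python
import numpy as np

def range_accel_correction(x_sat, v_sat, x_user, v_user=np.zeros(3)):
    """f(v^(k)) = v_theta^2 / r^(k), eq. (2.7)."""
    los = x_sat - x_user
    r = np.linalg.norm(los)                          # range r^(k)
    unit = los / r                                   # pointing vector 1^(k)
    v_rel = v_sat - v_user                           # relative velocity
    v_theta = v_rel - np.dot(v_rel, unit) * unit     # component perpendicular to 1^(k)
    return np.dot(v_theta, v_theta) / r
```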

It is convenient to combine several terms to simplify equation (2.6). In particular, the carrier acceleration measurement φ̈ can be combined with two known satellite acceleration terms. A translational satellite acceleration term consists of the satellite acceleration vector a^(k), which can be computed from broadcast ephemeris parameters [18], and the satellite pointing vector 1^(k), which is known once position estimation is performed using (2.3). The nonlinear correction term, described by the function f, depends on the satellite velocity and clock behaviors, which can be computed entirely from broadcast ephemeris parameters. Lumping these known terms together yields an adjusted carrier acceleration measurement, \ddot{\tilde{\phi}}^{(k)}:

\ddot{\tilde{\phi}}^{(k)} = \ddot{\phi}^{(k)} - a^{(k)} \cdot 1^{(k)} + f\!\left(v^{(k)}\right)    (2.8)

which leads to:

\ddot{\tilde{\phi}}^{(k)} = -a_u \cdot 1^{(k)} + \ddot{b}_u - \ddot{b}^{(k)}_{fault} + \varepsilon_{\ddot{\phi}}    (2.9)

In this equation, there are four unknowns associated with the user (the spatial acceleration vector a_u and the user clock acceleration b̈_u) and N additional unknowns, associated with each of the possible satellite clock faults from the N visible satellites (\ddot{b}^{(k)}_{fault}). It is not possible to solve for this set of N+4 variables, because only N measurements are available. However, it is possible to solve for the four user acceleration variables directly, as is done in the conventional position solution (2.4), and to look at the estimation residuals as a means of monitoring anomalous satellite clock acceleration. The corresponding linearized solution for the user's physical and clock acceleration is:

\ddot{\tilde{\phi}} = G \begin{bmatrix} a_u \\ \ddot{b}_u \end{bmatrix} + \ddot{b}^{(k)}_{fault} + \varepsilon    (2.10)


The geometry matrix G is the one obtained in the process of estimating user position and clock bias, as described by equations (2.3) and (2.4). In computing the user acceleration estimate, it is assumed that user position has already been computed and that the G matrix is known. As a consequence, equation (2.9) is linear, and hence a direct solution, rather than an iterative one, is possible.

\begin{bmatrix} a_u \\ \ddot{b}_u \end{bmatrix} = \left( G^T G \right)^{-1} G^T \ddot{\tilde{\phi}}    (2.11)
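A minimal sketch of the direct solution of equation (2.11), assuming the geometry matrix G from the position solution and the adjusted carrier accelerations of equation (2.8) are already available; names are illustrative.

```python
import numpy as np

def acceleration_solution(G, phi_ddot_adj):
    """Direct least-squares solution of eq. (2.11).

    G            : (N, 4) geometry matrix from the position solution
    phi_ddot_adj : (N,)   adjusted carrier accelerations, eq. (2.8)
    Returns the user acceleration vector and the user clock acceleration."""
    est = np.linalg.solve(G.T @ G, G.T @ phi_ddot_adj)
    return est[:3], est[3]
```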

2.2.2 Monitor Test

The main goal of the CA-RAIM algorithm is to look for inconsistencies in the estimation of user acceleration, in order to monitor for satellite EA faults. In equation (2.11), an estimate of the user acceleration a_u and clock acceleration b̈_u is computed, based on range acceleration measurements. If more than four range acceleration measurements are available at one point in time, the system of equations in (2.11) is overdetermined and thus contains redundant information. This redundancy can be exploited to assess the integrity of the navigation solution.

Redundancy can be exploited by computing the residuals of the estimate [25]. The residuals of the carrier acceleration solution, for instance, are the following:

r_{\ddot{\phi}} = \ddot{\tilde{\phi}} - G \begin{bmatrix} \hat{a}_u \\ \hat{\ddot{b}}_u \end{bmatrix}    (2.12)

The residuals of an estimate are a measure of its consistency with prediction; thus the norm of the residual vector offers a possibility for monitoring discrepancies between expected and actual satellite clock behavior. Here σ_φ̈ is the standard deviation of the norm of the residual vector under nominal circumstances.

m_{\ddot{\phi}} = \frac{\left\| r_{\ddot{\phi}} \right\|^2}{\sigma_{\ddot{\phi}}^2} = \frac{r_{\ddot{\phi}}^T r_{\ddot{\phi}}}{\sigma_{\ddot{\phi}}^2}    (2.13)

The quality of the carrier acceleration solution will be degraded under fault conditions, making one component of the residuals vector rφ¨ anomalously large. If any component of the residual vector grows large, then the monitor statistic of (2.13) will also be large, which makes mφ¨ a useful indicator for the presence of a fault.

Assuming that the error terms in (2.10) are independent and identically Gaussian distributed, the monitor statistic (m_φ̈) will be χ²-distributed, with N − 4 degrees of freedom, where N is the number of available pseudoranges at the time of the estimation. The number of degrees of freedom is N − 4, and not N, because four values are constrained when solving for the three components of the spatial acceleration and the one component of the clock acceleration.

It is possible to mitigate the risk posed to DCPS users by EA if a warning is issued when the monitoring statistic grows beyond an acceptable limit. This acceptable limit can be quantified as a threshold, τ_φ̈.

m_{\ddot{\phi}} > \tau_{\ddot{\phi}} \rightarrow \text{warning}    (2.14)

The threshold value must be designed to be consistent with the allowable continuity risk specification [19, 26]. In this chapter, the continuity risk allocation is assumed to reflect a false alarm rate of p_FA = 10^{-6}. The threshold is calculated assuming nominal operation, i.e. no EA fault is present (\ddot{b}^{(m)}_{fault} = 0), and inverting the χ² cumulative distribution function, which is expressed here using the notation "chi2inv", as described in [27].

\tau_{\ddot{\phi}} = \text{chi2inv}\left(1 - p_{FA},\; N - 4\right)    (2.15)

The inputs of the chi2inv function are a probability (the complement of the acceptable continuity risk) and the degrees of freedom of the χ2 distribution.
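Putting equations (2.11)-(2.15) together, a hedged Python sketch of the per-epoch monitor test might look as follows, with scipy.stats.chi2.ppf playing the role of chi2inv; σ_φ̈ and the false-alarm allocation are inputs that would come from the system design, and all names are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def ca_raim_test(G, phi_ddot_adj, sigma_phi_ddot, p_fa=1e-6):
    """One epoch of the CA-RAIM monitor: residuals (2.12), statistic (2.13),
    threshold (2.15), and decision (2.14)."""
    N = G.shape[0]
    est = np.linalg.solve(G.T @ G, G.T @ phi_ddot_adj)   # eq. (2.11)
    r = phi_ddot_adj - G @ est                           # eq. (2.12)
    m = float(r @ r) / sigma_phi_ddot**2                 # eq. (2.13)
    tau = chi2.ppf(1.0 - p_fa, df=N - 4)                 # eq. (2.15), chi2inv
    return m, tau, m > tau                               # warning if m > tau, eq. (2.14)
```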

In order to assess the threat posed by an EA fault of magnitude \ddot{b}^{(m)}_{fault}, it is important to estimate the probability with which it will be detected by the method or, conversely, the probability with which it will go unnoticed. An important quantity in this analysis is the probability of missed detection of an EA fault, P_MD. Because the EA fault biases the residual vector, the monitor statistic distribution is no longer described by a conventional χ² distribution, but rather by a non-central χ² distribution. Using this distribution, P_MD for the CA-RAIM monitor can be estimated using the following equation, where the function ncx2cdf denotes the cumulative distribution function for a non-central chi-square distribution.

P_{MD,\ddot{\phi}} = \text{ncx2cdf}\left( \tau_{\ddot{\phi}},\; N - 4,\; P_{min} \cdot \left( \frac{\ddot{b}^{(m)}_{fault}}{\sigma_{\ddot{\phi}}} \right)^2 \right)    (2.16)

Here the cumulative distribution function is evaluated at an error value equal to the first argument of ncx2cdf. The second argument is the number of degrees of freedom of the distribution, and the last argument is its non-centrality parameter. The factor P_min is the smallest diagonal element of the projection matrix P = I − G (G^T G)^{-1} G^T, which relates the magnitude of the residuals vector to the corresponding error in positioning. A detailed analysis of the P_min factor is presented in the appendix section titled "Fault Detectability: P_min".
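The missed-detection probability of equation (2.16) can likewise be evaluated with scipy.stats.ncx2.cdf; the following sketch also shows how P_min is obtained from the projection matrix. The inputs are illustrative placeholders.

```python
import numpy as np
from scipy.stats import chi2, ncx2

def pmd_ca_raim(G, b_ddot_fault, sigma_phi_ddot, p_fa=1e-6):
    """Probability of missed detection, eq. (2.16), for a clock-acceleration
    fault of magnitude b_ddot_fault."""
    N = G.shape[0]
    P = np.eye(N) - G @ np.linalg.solve(G.T @ G, G.T)   # projection matrix P
    p_min = np.min(np.diag(P))                          # P_min
    tau = chi2.ppf(1.0 - p_fa, df=N - 4)                # threshold, eq. (2.15)
    nc = p_min * (b_ddot_fault / sigma_phi_ddot) ** 2   # non-centrality parameter
    return ncx2.cdf(tau, df=N - 4, nc=nc)               # eq. (2.16), ncx2cdf
```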

The subsequent analysis will compare the performance of CA-RAIM to conventional RAIM in detecting EA faults. For this reason, it is convenient to provide a brief comparison of conventional RAIM to CA-RAIM. In conventional RAIM the residuals of the position estimate, obtained from equation (2.4), are monitored. Similar to equation (2.12), a position residual is defined:

r_\rho = \delta\rho - G \begin{bmatrix} \delta\hat{x}_u \\ \delta\hat{b}_u \end{bmatrix}    (2.17)

Based on (2.17), a monitoring statistic is defined, in which the residual is normalized by the standard deviation:

m_\rho = \frac{r_\rho^T r_\rho}{\sigma_\rho^2}    (2.18)

Here the residuals are normalized by the pseudorange error standard deviation σρ , which is implicitly assumed identical for all satellites, as described in [28]. This form is trivially adapted to handle variations of noise level (and correlations) by instead normalizing with a full covariance matrix [25], but for clarity, only the simpler normalization is considered in this chapter.

The position monitoring statistic mρ is distributed similarly to its acceleration counterpart mφ¨. In other words, the distribution of mρ is assumed to be a χ2 distribution. A warning is issued if this monitor statistic mρ ever exceeds a threshold:

m_\rho > \tau_\rho \rightarrow \text{warning}    (2.19)

The threshold τ_ρ is defined as follows in terms of the false alarm rate p_FA (which is assigned the same value as for the CA-RAIM analysis, 10^{-6} per epoch):

\tau_\rho = \text{chi2inv}\left(1 - p_{FA},\; N - 4\right)    (2.20)


Note that the values of τρ and τφ¨ are the same, as the monitoring statistics are identically distributed (χ2 with N − 4 degrees of freedom).

For conventional RAIM, the probability of missed detection is, again, defined in terms of a non-central chi-squared distribution. Whereas the non-centrality parameter for CA-RAIM depended on the second derivative term \ddot{b}^{(m)}_{fault}, the non-centrality parameter for conventional RAIM is a function of the accumulated pseudorange bias b^{(m)}_{fault}, given by equation (2.1).

P_{MD,\rho} = \text{ncx2cdf}\left( \tau_\rho,\; N - 4,\; P_{min} \cdot \left( \frac{b^{(m)}_{fault}}{\sigma_\rho} \right)^2 \right)    (2.21)

2.2.3 Pre-processing

A series of steps need to be taken for carrier-phase measurements to be meaningfully processed by the CA-RAIM method. A block diagram of the pre-processing is shown in figure 2.2. Ultimately, the goal of the pre-processing operations is to convert the raw carrier-phase measurements φ into adjusted carrier-phase accelerations \ddot{\tilde{\phi}}. Not surprisingly, the key step in pre-processing is to compute an acceleration from the raw data by introducing a numerical second-derivative operator. The acceleration observables are then corrected by a term that accounts for a rotating frame of reference, which was introduced as f(v^{(k)}) in equation (2.7). In effect the first two steps implement equation (2.8). Subsequently, two additional signal processing steps are introduced to enhance algorithm sensitivity. Specifically, the corrected carrier-phase acceleration measurements are smoothed, with a time constant τ_S, and, in parallel, detrended by subtracting a bias estimate. The bias estimate is computed using a low-pass filter with a time constant τ_D greater than τ_S.

Figure 2.2: Pre-processing steps necessary to obtain the carrier-phase acceleration observables.

As shown in figure 2.2, the first pre-processing block is a double derivative operation. The numerical double derivative is computed using the following equation.

\ddot{\phi}(t) = \frac{\phi(t + \Delta t) - 2\phi(t) + \phi(t - \Delta t)}{\Delta t^2}    (2.22)

The second pre-processing step corrects the second derivative to account for satellite motion. The nonlinear correction f is given by (2.7). The output of the summation is \ddot{\tilde{\phi}}, which is given by (2.8).

In order to enhance the response of the monitoring statistic to EA faults, the corrected observables are smoothed and detrended. The smoothed, detrended signal is the output of a second summation, which combines the outputs of two additional signal processing blocks, labeled "bias estimation" and "smoothing" in figure 2.2.


The smoothing block is implemented as a low-pass filter, with time constant τ_S, and enhances the signal-to-noise ratio of the estimates.

\ddot{\tilde{\phi}}^{(k)}_S(t) = \alpha(\tau_S)\,\ddot{\tilde{\phi}}^{(k)}(t) + \left(1 - \alpha(\tau_S)\right)\ddot{\tilde{\phi}}^{(k)}_S(t - \Delta t)    (2.23)

Here \ddot{\tilde{\phi}}_S is the smoothed acceleration (an intermediate variable), Δt is the discrete sample period, and α is a function of τ_S, given by α(τ_S) = Δt / (τ_S + Δt). While this low-pass filtering operation improves the signal-to-noise ratio, it also introduces a time lag. These two effects have a competing influence on algorithm performance. In this chapter, a value of τ_S = 4 s is used to achieve a reasonable balance of these competing factors. The trade-offs between beneficial smoothing and detrimental lag are discussed in more detail in the appendix section titled "Selecting τ_S".

Bias estimation is implemented using another first-order low-pass filter, represented by the following equation.

\ddot{\tilde{\phi}}^{(k)}_B(t) = \alpha(\tau_D)\,\ddot{\tilde{\phi}}^{(k)}(t) + \left(1 - \alpha(\tau_D)\right)\ddot{\tilde{\phi}}^{(k)}_B(t - \Delta t)    (2.24)

The final smoothed and detrended carrier-phase acceleration \ddot{\tilde{\phi}}_{SD} is computed as the difference between the smoothed measurement \ddot{\tilde{\phi}}_S and the bias term \ddot{\tilde{\phi}}_B:

\ddot{\tilde{\phi}}_{SD} = \ddot{\tilde{\phi}}_S - \ddot{\tilde{\phi}}_B    (2.25)

The bias estimation time constant τD is assumed to be much longer than the smoothing time constant (τD >> τS ). In this work, τD is set to 100 s.
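The pre-processing chain of figure 2.2 for a single satellite can be sketched as follows; the satellite-motion correction of equation (2.7) is assumed to have been applied already, and the default constants match the values quoted in the text (Δt = 1 s, τ_S = 4 s, τ_D = 100 s). The array name and interface are illustrative.

```python
import numpy as np

def preprocess(phi, dt=1.0, tau_s=4.0, tau_d=100.0):
    """Sketch of the pre-processing of eqs. (2.22)-(2.25) for one satellite.

    phi: corrected carrier-phase ranges [m] sampled every dt seconds.
    Returns the smoothed, detrended carrier accelerations."""
    phi = np.asarray(phi, dtype=float)
    accel = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dt**2   # eq. (2.22)
    a_s = dt / (tau_s + dt)            # smoothing filter gain
    a_d = dt / (tau_d + dt)            # bias (detrending) filter gain
    smooth, bias = accel[0], accel[0]
    out = np.zeros_like(accel)
    for i, x in enumerate(accel):
        smooth = a_s * x + (1.0 - a_s) * smooth   # eq. (2.23)
        bias = a_d * x + (1.0 - a_d) * bias       # eq. (2.24)
        out[i] = smooth - bias                    # eq. (2.25)
    return out
```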


Removing the bias estimate detrends the corrected range acceleration measurements (see figure 2.2). Failure to de-trend the acceleration measurements results in biased, drifting residuals that reduce the sensitivity of the monitor and mask out satellite clock faults.

The main purpose of bias subtraction is to remove long-term trends in order to make the monitor statistic more sensitive to sudden acceleration faults. It is important to note, however, that this detrending also removes any sustained physical acceleration of the receiver from the acceleration estimate, so steady accelerations are not observable. Although this characteristic limits the use of the acceleration estimate in certain navigation applications, it does not inhibit the monitoring function, which is the primary role of the proposed CA-RAIM algorithm.

2.3

Experimental Verification

In order to verify that the proposed method is capable of accomplishing its objectives, it was implemented on data collected from a hardware GPS receiver. This section discusses the nature of the available data and provides an initial characterization of algorithm performance. This nominal analysis will subsequently be used to calibrate simulations of the algorithm and to extrapolate its utility as an integrity monitor.

2.3.1

Experiment Data

The data were downloaded from the website of the Oakland CORS (Continuously Operating Reference Station), which is identified as station ZOA1. The receiver was chosen for its known history of reliable operation, its representative sample rate (1 Hz), and its location at


an airport with good visibility of the sky. An additional feature of the station is its cesium standard [29], which provides a stable, well-characterized receiver clock. A high quality receiver was purposefully selected to demonstrate the feasibility of the proposed methods; as such, the performance predictions discussed in this chapter are an upper bound for mobile, aviation-grade receivers.

The data set obtained from CORS ZOA1 contains a log of the raw pseudorange and carrier-phase measurements, as well as the broadcast ephemeris data. The data used for the experiments in this chapter were recorded on April 5, 2011, from 00:00 to 01:00 UTC. During the period of observation, the number of satellites in view varied between 9 and 13. For testing purposes, it is convenient to have an unchanging set of satellites; for the analysis in this chapter a stretch of 1500 s was used, during which the same 9 satellites remained in view. As part of the data set, a survey-grade estimate of the position of the antenna reference point is provided. This high-precision position estimate, combined with the knowledge that the receiver is stationary, gives a reliable ground truth that was leveraged in evaluating the performance of the method.

Note that while precise estimation of acceleration or position is not the focus of CA-RAIM, the sensitivity of the method does depend on these estimates being bias free.

2.3.2

Algorithm Verification

The verification process, described in this section, demonstrates that the algorithm performs as intended when applied to the available data. In particular we will use data to analyze the performance of four of the blocks in figures 2.1 and 2.2.


First consider the receiver acceleration block (see figure 2.1), as this is the core block of the algorithm. The acceleration estimates are a crucial piece of CA-RAIM, as the performance of the method is contingent upon the estimates being unbiased. Figure 2.3(a) shows that this is the case. Given that ZOA1 is a stationary receiver, the true acceleration is zero and the acceleration estimates are, therefore, expected to be zero. Applying CA-RAIM to the nominal ZOA1 data, the acceleration estimate stays within 0.5 mm/s² of the true value 95% of the time, for each spatial component. The estimator is unbiased, in the sense that the mean error is much smaller than the standard deviation.

Next consider the monitor test block (see figure 2.1), which checks the consistency of the residuals of acceleration estimation. Having an unbiased estimate of the receiver acceleration during nominal operation leads to unbiased components of the residuals vector (r_φ̈), which in turn leads to a well-behaved monitoring statistic (m_φ̈), defined in equation (2.13). Inspecting the mean and standard deviation of the monitor statistic normalized by the threshold value, shown for this data set in figure 2.3(b), it can be seen that they behave as expected under nominal conditions. The monitor statistic is free of drifting bias components and appears chi-square-distributed, at least in the sense that for a chi-square statistic with k degrees of freedom, the ratio of the mean to the standard deviation should be k/\sqrt{2k}.

A major challenge in defining a chi-square monitor is correctly accounting for the covariance matrix, as this is not known a priori and is, in general, a function of time. Methods are being developed to account for imperfect knowledge of the covariance matrix; for a discussion on the current state of affairs please refer to [30]. For this chapter a chi-square model will be used to obtain a representative, though not exact, assessment of performance. More precisely modeling the distribution for this monitor is a topic for future work.


Specifically, for the period of time shown, the number of visible satellites is N = 9, which makes the number of degrees of freedom k = N − 4 = 5, so the expected ratio of mean to standard deviation is 5/\sqrt{10} = 1.58. Inspection of the data shows that the mean of the plotted quantity is 0.36 with a standard deviation of 0.090, which corresponds to a mean-to-standard-deviation ratio of 4. This means that the data are nearly chi-square, with a standard deviation somewhat lower than expected. Moreover, as shown in figure 2.3(b), which plots the ratio of the monitor statistic to threshold (m_φ̈/τ_φ̈) over time, this ratio remains well below one in the data, verifying that CA-RAIM nominally operates without triggering an excessive number of false alarms.
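As an illustration of the consistency test described above, the following sketch forms the sum-of-squares statistic and compares it to a chi-square threshold. The function names, the false-alarm allocation, and the direct generation of k independent residuals are simplifying assumptions for illustration; the chapter notes that the true distribution is only approximately chi-square.

import numpy as np
from scipy.stats import chi2

def monitor_statistic(residuals, sigma):
    """Sum-of-squares consistency statistic on the acceleration residuals."""
    return float(np.sum((residuals / sigma) ** 2))

def chi2_threshold(n_sats, p_fa):
    """Chi-square threshold for k = N - 4 degrees of freedom and an assumed
    per-test false-alarm allocation p_fa."""
    k = n_sats - 4
    return chi2.ppf(1.0 - p_fa, df=k)

# Example with N = 9 satellites, as in the ZOA1 data segment, and the
# calibrated noise level sigma = 1.4e-3 m/s^2 from later in this chapter.
rng = np.random.default_rng(0)
r = rng.normal(scale=1.4e-3, size=9 - 4)        # nominal residuals [m/s^2]
m = monitor_statistic(r, 1.4e-3)
print(m / chi2_threshold(9, 1e-6) < 1.0)        # nominally below threshold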

To ensure that the monitor test block of figure 2.1 does generate warnings when a fault occurs, the response of the new monitor to fault conditions must also be considered. A statistical approach is conceivable, where historical data on clock faults could be gathered and analyzed. However, because EA faults are rare, and statistical analysis is best performed on events with numerous repetitions, a different approach was chosen.

In this approach, the method was evaluated by injecting a synthetic EA fault into the data and analyzing the response of the monitor. As can be seen in figure 2.4, the monitor responds to injected EA faults; in the case shown, the fault was \ddot{b}_{fault} = 0.2 m/s². During the fault, the monitoring statistic can be seen to grow above the threshold (m_φ̈/τ_φ̈ > 1). In the example shown, this event occurs within one time step of the fault occurring. (The simulated fault is injected at simulated time zero.) Thus, for this case, the monitor is observed to alarm immediately after the fault occurs.
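A minimal sketch of the fault-injection procedure is given below: a clock excessive-acceleration fault, quadratic in range as assumed by the threat model of equation (2.1), is added to one satellite's carrier phase starting at a chosen epoch. The function name and parameter values are illustrative assumptions.

import numpy as np

def inject_ea_fault(phi, t, t_fault, accel=0.2):
    """Add a synthetic excessive-acceleration clock fault to one satellite's
    carrier phase: a constant range acceleration, i.e. a quadratic ramp in
    range, starting at t_fault (a sketch of the threat model of eq. (2.1))."""
    dt_after = np.clip(t - t_fault, 0.0, None)
    return phi + 0.5 * accel * dt_after**2

# Example: 1 Hz data, fault of 0.2 m/s^2 injected at t = 0 of a 100 s window.
t = np.arange(-50.0, 50.0, 1.0)
phi_nominal = np.zeros_like(t)
phi_faulted = inject_ea_fault(phi_nominal, t, t_fault=0.0)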

The remaining block of figure 2.1 is the pre-processing block. This block consists of a number of sub-operations, as shown in figure 2.2. Of these sub-operations, the key ones to verify are the smoothing and bias estimation blocks, as these are signal conditioning procedures added to enhance algorithm performance.

Figure 2.3: Nominal operation of the CA-RAIM algorithm, smoothed with τS = 4 s. (a) Acceleration error. (b) Norm of acceleration residual. The acceleration estimate has a standard deviation below 0.5 mm/s² and a bias that is negligibly small in relation to the noise level.

The bias estimation block of figure 2.2 is necessary, because without bias subtraction a small drifting bias can be observed in the distribution of the monitor statistic. Figure 2.5 shows the effect of using the bias subtraction method. On the left side of figure 2.5, the monitor statistic is shown over a 3500 s period with bias subtraction disabled. On the right side of figure 2.5 the monitor statistic is shown over the same period with bias estimation enabled. With the bias estimator disabled, there is a drifting offset that changes at each change in the set of visible satellites. The source of this bias has not been identified. Inclusion of the bias estimator stage removes the drifting bias and yields a distribution that is close to the expected χ2 , as shown in figure 2.5(b).

The smoothing block of figure 2.2 provides an opportunity to enhance monitor sensitivity in exchange for a slight reduction in monitor response time. Smoothing reduces noise under nominal conditions, amplifying the monitor statistic (because σφ̈ in the denominator of equation (2.13) is reduced). The result is that the monitoring statistic is more sensitive to EA faults. This trade-off is illustrated in figure 2.4. The left side of figure 2.4 shows the response of the monitor statistic to a simulated injected fault, as described above, with no additional smoothing. The right side of figure 2.4 shows the same scenario but with smoothing introduced via equation (2.23), for a time constant of τS = 4 s. The result of the smoothing is that the amplitude of the monitor statistic in relation to the threshold is increased in the presence of a fault (to 20 times the threshold, rather than just 5 times the threshold). However, the smoothed case takes longer to reach this maximum value (from about 2 seconds without smoothing to about 8 seconds with smoothing). Selecting the smoothing time constant directly impacts monitor performance, particularly when the fault is somewhat smaller than the case shown (e.g. when the fault-shifted monitor statistic is only slightly larger than the threshold). A more in-depth discussion of time constant selection is given in the appendix; for the remainder of this chapter it is assumed that τS = 4 s.

Figure 2.4: Response to an injected fault of 0.2 m/s² on SV #5. (a) Non-smoothed observables. (b) Smoothed observables, τS = 4 s. Inclusion of a smoothing stage enhances the sensitivity of the monitoring statistic to EA faults.

Figure 2.5: Acceleration residuals during nominal operations. (a) Case with no bias removal. (b) Case with bias removal. Inclusion of the bias estimator removes the drifting bias from the residuals.

2.4

Simulating Integrity Performance

One of the central challenges addressed in this chapter is a performance evaluation of the proposed fault-detection mechanism on real data. The aggressive integrity requirements for surface-movement users motivate a comprehensive analysis of monitor performance.

A simulation-based study offers the benefit that faults can be induced repeatably and their effects quantified in terms of variables that are not observable in physical systems, such as true range acceleration errors. In order to be representative, the simulation must resemble the real system as much as possible. To this end, the previously described algorithm implementation was recreated in simulation, fine-tuning its parameters such that its operating conditions were equivalent.


Figure 2.6: To find the most adequate value for σφ̈,sim, the carrier-acceleration noise is varied across simulation runs, and the simulated response that best matches the response of the data is selected.

2.4.1

Simulation Setup

The simulation setup was designed to serve as a data source, to replace the aforementioned CORS ZOA1 data with comparable randomly generated noise. The simulated pseudorange and carrier-phase measurements are generated from the broadcast navigation file, also available from CORS. The simulated receiver is assumed to be positioned at the location of the ZOA1 receiver. Under these circumstances, both the pseudorange and the carrier-phase measurements resemble the real measurements very closely, save for the level of random noise, which is a design parameter of the simulation. The noise level on the pseudorange measurements is set to σρ = 1 m. The carrier-phase measurements are adapted to resemble the noise level observed in the CORS data, assuming differential GBAS corrections are available to users.

To simplify the simulation, synthetic noise signals for all satellites were generated using the same Gaussian distribution, regardless of satellite elevation. To choose the equivalent standard deviation for the simulated noise, a value was selected to match the monitor response to an injected fault as shown in figure 2.6.


The different responses to the same fault, shown as thin solid lines in figure 2.6, are computed using different simulated values of σφ̈, thus altering the sensitivity of the monitor. The dotted line represents the response to injecting that same fault into the data, while the bold line represents the simulated response that best matches the data response, corresponding to σφ̈ = 1.4 · 10⁻³ m/s².

Fault scenarios were simulated by injecting a modeled clock acceleration error on a random satellite, in accordance with the threat model of equation (2.1), where the magnitude of the injected EA fault ranges from 0.01 m/s2 to 4 m/s2 . The threat model grows quadratically in time and is added to the random noise measurements coming from the faulted satellite.

Figure 2.7: Construction of the ranging error vs. PMD bound for the filtered carrier-acceleration-based method, at a τS of 4 s.

2.5

Simulation Results

Performance of the proposed CA-RAIM method is assessed in simulation, relative to conventional RAIM. In this comparison, the ranging error is computed as a function of the probability of missed detection for an EA fault. This type of analysis, sometimes referred to as “PMD vs. E” analysis, is currently the standard technique for evaluating monitor performance for GAST-D GBAS [3, 31]. In order to assess performance for CA-RAIM, and compare it to conventional RAIM, PMD vs. E curves were computed by varying the size of the fault, characterized by the fault magnitude \ddot{b}^{(k)}_{fault}.

An ideal monitor would have a low probability of failing to detect a fault (PMD), especially for cases with large ranging biases (b^{(m)}_{fault}). If two monitors are compared at a given level of PMD, the safer monitor will be the one with the smaller ranging error.

For the comparison between a conventional, pseudorange-based RAIM and the proposed carrier-acceleration-based approach it is important to keep in mind that while the position error, and with it mρ , grows with time, the acceleration error (and thus mφ¨) remains constant. The corresponding PM D behaves accordingly: for the conventional approach PM D decreases, while for the acceleration-based approach PM D remains constant, once the monitor statistic reaches its steady state level, as shown in figure 2.4. Thus, it is important to consider PM D in the context of time-to-alert. For the present analysis, the fault is simulated over 10 seconds, as that is the time-to-alert used for DCPS.

Running a number of different simulations for different fault magnitudes, it is possible to construct a family of PMD vs. E curves for both methods, conventional RAIM and CA-RAIM. Figure 2.7 shows this family of curves, where each curve is plotted as PMD (vertical axis) vs. error (E′, horizontal axis). Each value along the curve represents a result at a different moment in time after the fault occurred. The error E′ is the error for the case of Pmin = 1. For other values of Pmin the horizontal axis is effectively scaled, such that the error is somewhat larger as Pmin falls below unity, according to the relationship:

E' = \frac{E}{\sqrt{P_{min}}}    (2.26)


The dependence on Pmin is a direct consequence of the presence of the Pmin term in equation (2.16). From figure 2.7 it can be seen that the worst-case errors for each EA trial can be used as a bound for the PMD as a function of the ranging error of the system.

By looking at figure 2.8 it becomes apparent that this bound is below the bound defined by the conventional method, demonstrating that monitoring m_φ̈ is superior to monitoring m_ρ for detecting EA faults, if an adequate filter time constant is selected. The figure shows the behavior of the CA-RAIM approach, previously depicted in figure 2.7, in direct comparison with the behavior of the conventional RAIM approach. Comparing ranging errors at the same PMD, CA-RAIM exhibits ranging errors that are two orders of magnitude smaller than conventional RAIM.

Figure 2.8: Comparison between conventional RAIM and the proposed CA-RAIM, with Pmin = 1. In (a) the ranging error is plotted on a linear axis; in (b) the ranging error is plotted on a logarithmic axis, with only the worst-case bound shown for CA-RAIM.


2.6


Integrity Implications

The proposed CA-RAIM method represents a significant improvement in ranging error, compared to conventional RAIM, when looking at a particular value for the probability of missed detection, as shown in figure 2.8. Considering a particular point on the PMD scale illustrates the performance improvement that CA-RAIM represents. For instance, at PMD = 10⁻⁵, conventional RAIM has an associated ranging error (E′) of approximately 10 m (see figure 2.8), while for CA-RAIM E′ = 0.12 m, which represents an improvement of nearly two orders of magnitude. These ranging errors are, however, both degraded by a factor of 1/\sqrt{P_{min}}, which means that their ratio does not change with Pmin. The degradation of ranging error with Pmin is illustrated by the values presented in table 2.1.

The performance gain in CA-RAIM, relative to conventional RAIM, can be attributed to two essential factors. First, the use of carrier-phase signals allows for more precision in acceleration estimates than code-based measurements, and second, the use of acceleration residuals yields a monitor statistic with a faster response than position residuals. A fast-responding monitor is not in itself an advantage, considering the 10 s time-to-alert specified by DCPS, but the gain from a faster response is translated into a more sensitive monitor by smoothing (see figure 2.2).

Pmin    E′ @ PMD = 10⁻⁴    E′ @ PMD = 10⁻⁵
1.00    0.140 m            0.149 m
0.50    0.198 m            0.211 m
0.25    0.280 m            0.299 m

Table 2.1: Worst possible ranging error for CA-RAIM for different values of Pmin, at relevant values of the probability of missed detection.
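The rows of table 2.1 follow directly from the scaling of equation (2.26); the short check below reproduces them (to rounding) from the Pmin = 1 values. This is only an arithmetic illustration.

# Reproduce the scaling of table 2.1 from eq. (2.26): E' = E / sqrt(P_min).
e0_pmd4 = 0.140   # worst-case ranging error at P_MD = 1e-4 with P_min = 1 [m]
e0_pmd5 = 0.149   # same at P_MD = 1e-5 [m]
for p_min in (1.0, 0.5, 0.25):
    print(p_min, round(e0_pmd4 / p_min**0.5, 3), round(e0_pmd5 / p_min**0.5, 3))
# -> 0.140/0.198/0.280 m and 0.149/0.211/0.299 m, matching table 2.1 to rounding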

In order to assess the utility of CA-RAIM in comparison with conventional RAIM, we consider the application of supporting airport surface movement. For this application, the navigation system error (NSE) is required to stay below 1.5 m for 95% of the time, under poor visibility conditions [32]. In order to leave margin for nominal random error, the ranging error E should be somewhat smaller than 1.5 m at the critical probability level for missed detection (expected to be in the range of PMD from 10⁻⁴ to 10⁻⁵). For this case, the E value associated with conventional RAIM is approximately 8 m to 10 m (see figure 2.8), which is clearly much greater than 1.5 m.

The E′ value associated with CA-RAIM at the same PMD level is approximately 0.14 m to 0.15 m (table 2.1), which is clearly much smaller than 1.5 m. Thus, it is evident that the added sensitivity of CA-RAIM would be highly beneficial for airport surface movement, since conventional RAIM is not sufficiently sensitive to enable this application.

Not all observed clock faults have the structure assumed in equation (2.1). Of the faults shown in [20], one does appear to conform to such a quadratic structure, but another does not. It is important to note that, in this chapter, the quadratic fault is assumed to be the worst fault that might be encountered in real operations.

Finally, an important operational consideration must be pointed out. In the proposed scheme, no SV should be taken into account for calculating residuals if it is not covered by the GBAS update message. This guarantees integrity in case of a temporary satellite drop-out, e.g. due to occlusions. The intent of the proposed CA-RAIM monitor is to operate in coordination with conventional excessive acceleration monitoring at the ground station, in order to ensure continuous DCPS operation even during a brief communication outage. Ground-station monitors are expected to provide the first line of protection against any satellite acceleration fault. CA-RAIM becomes active only if the GBAS VHF data


broadcast (VDB) to a particular aircraft is interrupted (implying that alerts from the ground station's EA monitor are no longer available). Thus, only in the event of an anomalous sequence of missed VDB messages would the CA-RAIM monitor be allowed to issue alerts. Accordingly, both CA-RAIM and conventional EA monitoring would share the same risk allocations from the overall continuity and integrity risk budgets. Though CA-RAIM would only issue alerts during an anomalous series of missed messages, it is assumed that CA-RAIM would run continuously in the background, so that the detrending filters would remain fully converged.

2.7

Summary

The chapter describes a novel approach for detecting satellite clock excessive-acceleration faults, based on carrier-phase acceleration receiver autonomous integrity monitoring (CA-RAIM). A simulation setup is constructed to demonstrate the algorithm. The algorithm is tested both on real-world data collected from a high-end stationary receiver, to characterize nominal behavior, and with simulated injected errors, to analyze the response to excessive clock acceleration faults.

Performance predictions, based on the probability of missed detection as a function of error, show that the CA-RAIM scheme achieves ranging errors roughly two orders of magnitude smaller than conventional position-based RAIM at the same probability of missed detection.

Before deploying the proposed algorithm for safety-of-life applications, further investigations are needed. One important topic for future research is determining the cause of the biases observed in Figure 2.5. Although biases do not significantly degrade short-term monitor performance (detection of a new fault within the time-to-alert), a means for deterministically


removing biases would allow for detection of smaller faults over longer periods of time (detections over periods longer than τD , i.e. over hundreds of seconds). Another topic to address in future work will be the characterization of the method using representative aviation-grade hardware (antenna, receiver, and receiver clock). Moreover, the CA-RAIM algorithm might also have utility in detecting satellite fault modes aside from excessive acceleration. Future work might investigate, for instance, if CA-RAIM provides effective monitoring to detect unscheduled satellite maneuvers or other ephemeris faults.

2.8

Appendices

2.8.1

Fault Detectability: Pmin

Mapping errors into residuals occurs via the matrix P, defined in [18] as \mathbf{P} = \mathbf{I} - \mathbf{G}\left(\mathbf{G}^T\mathbf{G}\right)^{-1}\mathbf{G}^T and obeying \mathbf{r} = \mathbf{P}\boldsymbol{\varepsilon}. It is a transformation matrix that maps the error vector ε into the residuals r, and thus preserves its norm.

¨ (m) of Having assumed that EA faults happen on one satellite at a time, the fault vector b f ault equation (2.10) becomes ¨b(m) f ault

   0 =   ¨b

f ault

j 6= m

(2.27)

j=m

where j is the index for elements in a vector of length N , the number of visible satellites, and m is the index of the faulted satellite.

The EA fault maps into the norm of the residuals vector with a factor proportional to the mth diagonal element in P.


The value of the non-centrality parameter is \ddot{\mathbf{b}}^T_{fault}\,\mathbf{P}\,\ddot{\mathbf{b}}_{fault}, which for the structure given above is equal to

\lambda^{(k)} = P_{m,m}\,\ddot{b}^2_{fault}.    (2.28)

The worst case (least detectable case) happens when the non-centrality parameter is smallest; in other words, the worst case is when P_{m,m} is at a minimum. Hence, the worst-case satellite is covered if we consider a non-centrality parameter

\lambda_{worst} = \min\{P_{m,m}\}\,\ddot{b}^2_{fault}.    (2.29)

In data gathered over one typical day at ZOA1 (see figure 2.9), Pmin = min{P_{m,m}} varies between 0.15 and 0.58. The ratio of maximum to minimum Pmin over a day corresponds to approximately 4 and is thus not as great a concern as it would be if it were 10 or 100. It is important to note that Pmin is entirely geometry dependent and, thus, deterministic. Cases with unfavorable Pmin can be detected ahead of time and dealt with appropriately.
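A small sketch of the Pmin computation is given below. The construction of the geometry matrix G follows the usual line-of-sight-plus-clock form; the exact form used in this work follows [18], and the example geometry here is made up purely for illustration.

import numpy as np

def p_min(G):
    """Minimum diagonal element of P = I - G (G^T G)^-1 G^T for an
    N x 4 geometry matrix G (unit line-of-sight vectors plus a clock column)."""
    N = G.shape[0]
    P = np.eye(N) - G @ np.linalg.inv(G.T @ G) @ G.T
    return float(np.min(np.diag(P)))

# Example with a made-up 9-satellite geometry (unit vectors are illustrative only).
rng = np.random.default_rng(1)
los = rng.normal(size=(9, 3))
los /= np.linalg.norm(los, axis=1, keepdims=True)
G = np.hstack([los, np.ones((9, 1))])
print(p_min(G))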

Figure 2.9: The value of Pmin during one typical day at ZOA1: 2007-02-25


2.8.2


Selecting τS

In order to determine the best time constant for the smoothing filter τS , a “sweet spot” must be found. If τS is larger than the optimal value, the monitoring statistic is slow to respond; whereas a τS that is too small does not effectively remove the random noise components, thus reducing the sensitivity of the monitoring statistic. One way of resolving the trade-off is to measure the response of the monitoring statistic at 10 s, as dictated by the DCPS application scenario, and to pick the τS that yields the highest response.

Figure 2.10: Finding the ideal value for the smoothing filter time constant, based on the time response of the monitoring statistic

Different responses to a fault of magnitude \ddot{b}_{fault} = 0.2 m/s² can be seen in figure 2.10. For the data set used in this experiment, the best value of τS was found to be 4.1 s. This value was judged to be best because it yielded the strongest monitor response at 10 s, which is indicated with a bold line in figure 2.10. Note that the selected optimal τS may change, depending on sampling frequency and noise level, but is independent of fault magnitude.
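To illustrate why an intermediate τS maximizes the response at 10 s, the following simplified analytical proxy compares the first-order filter's step response to a constant fault against the reduction in filtered noise. It is only a sketch under assumed values (1 Hz sampling, the calibrated σφ̈ from this chapter); the actual optimum of 4.1 s reported above was found empirically from the data.

import numpy as np

def normalized_response_at_10s(tau_s, dt=1.0, fault=0.2, sigma=1.4e-3):
    """Ratio of the smoothed fault response at 10 s to the smoothed noise level,
    for one candidate smoothing time constant (a single-channel proxy for the
    normalized monitor statistic)."""
    a = dt / (tau_s + dt)
    n_steps = int(10.0 / dt)
    # Step response of the first-order filter to a constant fault after n_steps.
    fault_response = fault * (1.0 - (1.0 - a) ** n_steps)
    # Steady-state standard deviation of the filtered white noise.
    noise_after_smoothing = sigma * np.sqrt(a / (2.0 - a))
    return fault_response / noise_after_smoothing

for tau in (1.0, 2.0, 4.0, 8.0, 16.0, 32.0):
    print(tau, round(normalized_response_at_10s(tau), 1))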


2.8.3


Derivation of Range Acceleration Correction Term

The goal of this appendix is to derive equation (2.6), which is the key equation relating the carrier-phase acceleration measurement, the user acceleration estimate, and the known satellite ephemeris. To start, write the position vector from the user to the satellite. This vector will be described by the variable x_{s/u}, where the subscript denotes the tip of the vector (satellite, s) as well as the tail (user, u):

\mathbf{x}_{s/u} = r\,\mathbf{1}^{(k)}    (2.30)

Here the 1 vector is the unit pointing vector in 3D from the user to the satellite. The distance between the two is the range r. Although the unit pointing vector 1 has been described by a 3D ECEF coordinate system elsewhere in this chapter, it is useful to express it here in 2D planar coordinates. Specifically, we will consider the 2D plane defined by two vectors: the relative position vector xs/u and the corresponding relative velocity vector vs/u .

\mathbf{v}_{s/u} = \left.\frac{d}{dt}\mathbf{x}_{s/u}\right|_{ECEF}    (2.31)

For planar kinematics it is convenient to express vectors in terms of polar coordinates. For this purpose, we introduce a rotating reference frame with two orthogonal components. The first component is \hat{e}_r, the radial unit vector, which points from the user to the satellite. The second component is \hat{e}_\theta, the circumferential unit vector, which is perpendicular to the radial vector as illustrated in the figure. These vectors rotate continually as the satellite moves across the sky or as the user translates, such that \hat{e}_r always points directly between the two. Using these unit vectors, the relative position x_{s/u} can be written as follows:

\mathbf{x}_{s/u} = r\,\hat{e}_r    (2.32)


Figure 2.11: Definition of unit vectors for expressing satellite position and velocity in polar coordinates. The plane represented in the illustration is the plane that contains x_{s/u} and v_{s/u}; thus, the kinematics in this problem are two-dimensional.

As described in standard dynamics textbooks [33], the derivative of this vector is:

\mathbf{v}_{s/u} = \dot{r}\,\hat{e}_r + v_\theta\,\hat{e}_\theta    (2.33)

Here \dot{r} is the ranging velocity, which is related to the Doppler measurement, and v_\theta is the perpendicular (i.e. circumferential) velocity component. The relative acceleration between the satellite and the user (with respect to the ECEF frame) will be written as \mathbf{a}_{s/u}. This relative acceleration is the derivative of the relative velocity \mathbf{v}_{s/u} with respect to the ECEF frame. As described by standard dynamics textbooks [33], the 2D planar relative acceleration is given in polar coordinates by the following formula:

\mathbf{a}_{s/u} = \left(\ddot{r} - \frac{v_\theta^2}{r}\right)\hat{e}_r + \left(\alpha r + \frac{2\dot{r}\,v_\theta}{r}\right)\hat{e}_\theta    (2.34)

All variables in the expression have been previously defined, except for \ddot{r}, which is the second derivative of the distance between the satellite and the user, and α, which is the angular acceleration of the rotating unit vectors. In the CA-RAIM problem, we are only concerned with the first term of the above equation, the \hat{e}_r term, as this term is directly related to the observable (i.e. to the carrier-phase range acceleration). The carrier-phase range acceleration is equal to the second derivative of the range, plus the relative clock acceleration δ\ddot{b}, plus noise:

\ddot{\phi} = \ddot{r} + \delta\ddot{b} + \varepsilon_{\ddot{\phi}}    (2.35)

To obtain estimates of user acceleration, we must relate the user acceleration to this observable, accounting for known satellite kinematics. A first step in building this relationship is to obtain an expression for the range acceleration \ddot{r} in terms of the user and satellite acceleration terms. Such an equation can be obtained by isolating the first component of (2.34):

\ddot{r} - \frac{v_\theta^2}{r} = \hat{e}_r \cdot \mathbf{a}_{s/u}    (2.36)

Noting that vectors sum, the relative acceleration between the satellite and user (in the ECEF frame) can be written as the acceleration of each relative to the center of the earth (i.e., the origin of the ECEF frame).

as/u = as/O − au/O

(2.37)

Here \mathbf{a}_{s/O} refers to the relative acceleration between the satellite and the origin, and \mathbf{a}_{u/O} refers to the acceleration between the user and the origin. All accelerations in the above equation are relative accelerations taken in the rotating ECEF frame. Substituting (2.37) into (2.36) gives the following:

\ddot{r} - \frac{v_\theta^2}{r} = \left(\mathbf{a}_{s/O} - \mathbf{a}_{u/O}\right) \cdot \hat{e}_r    (2.38)


Note that the radial direction is aligned (in 3D coordinates) with the unit vector 1^{(k)} for a particular satellite k. Hence (2.38) can be written as follows:

\ddot{r} - \frac{v_\theta^2}{r} = \left(\mathbf{a}^{(k)} - \mathbf{a}_u\right) \cdot \mathbf{1}^{(k)}    (2.39)

In this equation, we have substituted the notation used earlier in this chapter. Specifically, we note that the relative acceleration of satellite k to the rotating Earth a(k) is equivalent to as/O and that the relative acceleration of the user to the rotating Earth au is equivalent to au/O .

As a final step, substitute (2.35) into (2.39):

\ddot{\phi} - \delta\ddot{b} - \varepsilon_{\ddot{\phi}} - \frac{v_\theta^2}{r} = \left(\mathbf{a}^{(k)} - \mathbf{a}_u\right)\cdot\mathbf{1}^{(k)}    (2.40)

In this equation, the primary quantities to be estimated are the user acceleration \mathbf{a}_{u/O} and the user component of the clock acceleration. Other terms can be determined from a measurement (\ddot{\phi}), directly from the satellite ephemeris data (\mathbf{a}^{(k)}), or indirectly from the ephemeris data coupled to user position and velocity estimates (v_\theta^2/r). Specifically, the \mathbf{a}^{(k)} term is computed from the satellite ephemeris in two steps.

The first step in obtaining \mathbf{a}^{(k)} is to compute the satellite acceleration at the time of transmission, where the time of transmission is labeled t^{(k)}_{tr} and is related to the time of reception t and the propagation time t^{(k)}_p:


t^{(k)}_{tr} = t - t^{(k)}_p    (2.41)

The propagation time for the kth satellite, t^{(k)}_p, is a variable obtained in computing the position solution. A three-point numerical approximation was used to compute the acceleration in the ECEF frame at the transmission time t^{(k)}_{tr}:

\mathbf{a}^{(k)}_{tr} = \frac{\mathbf{x}^{(k)}(t^{(k)}_{tr} + \Delta t) - 2\,\mathbf{x}^{(k)}(t^{(k)}_{tr}) + \mathbf{x}^{(k)}(t^{(k)}_{tr} - \Delta t)}{\Delta t^2}    (2.42)

The satellite position data (with respect to the rotating ECEF frame) can be obtained from the ephemeris parameters provided in the navigation message. These parameters are defined such that the satellite position can be evaluated at any time t. A time step resolution study was conducted to show that the value of ∆t did not have a significant impact on the acceleration approximation of (2.42) for values of ∆t between 10 s and 0.1 s. In this chapter, a ∆t of 1 s was used.
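A sketch of equations (2.41) and (2.42) is given below. The ephemeris evaluation is replaced by an arbitrary callable (a toy circular orbit in the example), and the ECEF rotation of equation (2.43) is omitted; names and orbit parameters are illustrative assumptions.

import numpy as np

def satellite_accel_at_transmission(sat_pos_ecef, t_rx, t_prop, dt=1.0):
    """Three-point estimate of a satellite's ECEF acceleration at the time of
    transmission (eqs. 2.41-2.42). `sat_pos_ecef` is any callable returning the
    ECEF position vector [m] at a given time; here it stands in for evaluating
    the broadcast ephemeris."""
    t_tr = t_rx - t_prop                                 # eq. (2.41)
    p_plus = sat_pos_ecef(t_tr + dt)
    p_0 = sat_pos_ecef(t_tr)
    p_minus = sat_pos_ecef(t_tr - dt)
    return (p_plus - 2.0 * p_0 + p_minus) / dt**2        # eq. (2.42)

# Example with a toy circular orbit in the equatorial plane (illustrative only).
def toy_orbit(t, r=26.56e6, omega=2 * np.pi / 43082.0):
    return np.array([r * np.cos(omega * t), r * np.sin(omega * t), 0.0])

print(satellite_accel_at_transmission(toy_orbit, t_rx=100.0, t_prop=0.075))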

The second step in obtaining \mathbf{a}^{(k)} is to account for the rotation of the ECEF reference frame during the signal propagation. The appropriate rotation matrix R is a function of the Earth's rotation rate ω_E and the propagation time t^{(k)}_p, and is identical to the rotation matrix used for the same purpose in the position solution:

\mathbf{a}^{(k)} = \mathbf{R}\!\left(\omega_E, t^{(k)}_p\right) \cdot \mathbf{a}^{(k)}_{tr}    (2.43)

The circumferential component of relative velocity must likewise be computed carefully. Recall that this component of velocity was defined in (2.33). Rewriting this equation to solve for v_\theta gives

v_\theta\,\hat{e}_\theta = \mathbf{v}_{s/u} - \dot{r}\,\hat{e}_r    (2.44)

Expanding vs/u in terms of the satellite and user velocities in the rotating ECEF frame, we can write the following.

vs/u = v(k) − vu

(2.45)

Note that the satellite velocity term \mathbf{v}^{(k)} can be computed from the ephemeris, but that the user velocity estimate \hat{\mathbf{v}}_u must be obtained from a GNSS velocity solution computed using standard methods [18]. Substituting (2.45) into (2.44), we can obtain the following expression for the magnitude of v_\theta:

v_\theta = \left\| \left(\mathbf{v}^{(k)} - \mathbf{v}_u\right) - \left[\left(\mathbf{v}^{(k)} - \mathbf{v}_u\right)\cdot\mathbf{1}^{(k)}\right]\mathbf{1}^{(k)} \right\|    (2.46)

To simplify the observation equation, it is useful to introduce the function f_\theta to represent the v_\theta^2/r term:

f_\theta = \frac{v_\theta^2}{r} = \frac{\left\| \left(\mathbf{v}^{(k)} - \mathbf{v}_u\right) - \left[\left(\mathbf{v}^{(k)} - \mathbf{v}_u\right)\cdot\mathbf{1}^{(k)}\right]\mathbf{1}^{(k)} \right\|^2}{\left\| \mathbf{x}^{(k)} - \mathbf{x}_u \right\|}    (2.47)
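The following sketch evaluates f_\theta of equation (2.47) from ECEF position and velocity vectors. The numerical values in the example are illustrative only and do not correspond to any data set used in this work.

import numpy as np

def f_theta(x_sat, x_usr, v_sat, v_usr):
    """Correction term f_theta = v_theta^2 / r of eq. (2.47), from ECEF satellite
    and user positions [m] and velocities [m/s] (all arrays of length 3)."""
    dx = x_sat - x_usr
    r = np.linalg.norm(dx)
    los = dx / r                                # unit pointing vector 1^(k)
    dv = v_sat - v_usr                          # relative velocity v_s/u
    v_theta_vec = dv - np.dot(dv, los) * los    # component perpendicular to the LOS
    return np.dot(v_theta_vec, v_theta_vec) / r

# Illustrative numbers only: a GPS-like satellite and a slow-moving user.
x_sat = np.array([26.56e6, 0.0, 0.0])
v_sat = np.array([0.0, 3.9e3, 0.0])
x_usr = np.array([6.378e6, 0.0, 0.0])
v_usr = np.array([0.0, 50.0, 0.0])
print(f_theta(x_sat, x_usr, v_sat, v_usr))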

Noting this definition for f_\theta, and also noting that δ\ddot{b} is equal to the clock acceleration of the user minus the clock acceleration of the satellite (δ\ddot{b} = δ\ddot{b}_u − δ\ddot{b}^{(k)}), the terms in (2.40) can be re-arranged to obtain the following:

\ddot{\phi} = \left(\mathbf{a}^{(k)} - \mathbf{a}_u\right)\cdot\mathbf{1}^{(k)} + \delta\ddot{b}_u - \delta\ddot{b}^{(k)} + f_\theta + \varepsilon_{\ddot{\phi}}    (2.48)


This equation is equivalent to the original observation equation presented previously as equation (2.6).

2.8.4

Influence of Clock Fault on the Estimation of vθ

The term fθ is vulnerable to clock faults, as it is computed from a combination of ephemeris data and user measurements. This section is devoted to showing that the relative error in fθ induced by an error in the estimated vθ is negligibly small.

For the purposes of this analysis, the quantities vθ and fθ will denote their corrupted values, while the prefix ∆ will denote their errors and the subscript nom will represent the nominal (un-corrupted) values. Using this notation, equation (2.7) becomes:

f_\theta = \frac{(v_{\theta,nom} + \Delta v_\theta)^2}{r} = \frac{v_{\theta,nom}^2 + 2\,v_{\theta,nom}\,\Delta v_\theta + \Delta v_\theta^2}{r}    (2.49)

From this equation, the error in fθ is:

\Delta f_\theta = f_\theta - f_{\theta,nom} = \frac{2\,v_{\theta,nom}\,\Delta v_\theta + \Delta v_\theta^2}{r}    (2.50)

Combining equations (2.49) and (2.50) yields the relative error in f_\theta:

\frac{\Delta f_\theta}{f_\theta} = \frac{2\,\Delta v_\theta + \Delta v_\theta^2 / v_{\theta,nom}}{v_{\theta,nom} + 2\,\Delta v_\theta + \Delta v_\theta^2 / v_{\theta,nom}}    (2.51)

In order to assign a bounding value to \Delta f_\theta / f_\theta, the terms v_{\theta,nom} and \Delta v_\theta must be bounded from


various considerations.

The maximum velocity error due to an undetected clock fault is bounded by conventional RAIM. From figure 2.8, conventional RAIM will ensure EA-induced ranging errors remain below 12 m with a PMD of 10⁻⁹, which corresponds to a constant acceleration of 0.24 m/s² over 10 s. Thus, the fault can grow maximally to a range rate error of 2.4 m/s.

In the worst case, the range rate error maps fully into the user velocity, leading to Δv_u ≤ 2.4 m/s. Now, from equation (2.46), v_\theta is the magnitude of the component of (\mathbf{v}^{(k)} − \mathbf{v}_u) perpendicular to 1^{(k)}, and therefore, again in the worst case, |Δv_\theta| ≤ |Δv_u|.

A bound on the worst-case perpendicular velocity (v_{\theta,bnd}) can be found from the following geometric considerations. Let θ be the angle between r^{(k)} (the vector from the center of the earth to the satellite) and 1^{(k)} (the pointing vector from the user to the satellite). At the same time, the satellite velocity (\mathbf{v}^{(k)}) is perpendicular to r^{(k)} and v_\theta is, by definition, perpendicular to 1^{(k)}, which means that the angle between \mathbf{v}^{(k)} and the direction of v_\theta is also θ.

Let el be the elevation angle of the satellite and R the radius of the earth. From the law of sines:

\frac{\sin(\theta)}{R} = \frac{\sin(el + 90°)}{r^{(k)}}    (2.52)

A lower bound on v_\theta, v_{\theta,bnd}, can be derived by finding the lowest possible satellite contribution, v_{\theta,sat,bnd}, and deducting the highest possible user contribution, v_{\theta,usr,bnd}. Both bounds are functions of elevation angle.


v_{\theta,bnd} = v_{\theta,sat,bnd} - v_{\theta,usr,bnd}    (2.53)

v_{\theta,sat,bnd} = |\mathbf{v}^{(k)}|\cos(\theta) = |\mathbf{v}^{(k)}|\cos\!\left(\sin^{-1}\!\left(\frac{R\sin(el + 90°)}{r^{(k)}}\right)\right)    (2.54)

If the user velocity is bounded to have a maximum vertical component of A and a maximum overall magnitude of B then:

vθ,usr,bnd = A + (B − A)sin(el)

(2.55)

The velocity of a landing aircraft should not significantly exceed B = 100 m/s. At the same time, GPS satellite minimum velocity is approximately |v(k) | = 3800 m/s.

According to (2.53), the lowest bounding value, over the range of non-negative elevation angles, occurs on the horizon (el = 0). Thus, equation (2.53) evaluates to vθ,bnd = 3500 m/s.

Given that the upper bound on Δv_\theta is 2.4 m/s and the lower bound on v_\theta is 3500 m/s, equation (2.51) indicates that the worst-case relative error Δf_\theta/f_\theta can be bounded as:

\left|\frac{\Delta f_\theta}{f_\theta}\right| \leq \frac{2\,\Delta v_\theta + \Delta v_\theta^2 / v_{\theta,bnd}}{v_{\theta,bnd} + 2\,\Delta v_\theta + \Delta v_\theta^2 / v_{\theta,bnd}} \leq 1.4 \times 10^{-3}    (2.56)
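The short calculation below reproduces this worst-case arithmetic at el = 0. The Earth radius, orbit radius, and user-velocity bounds A and B are assumed values where the text does not state them explicitly.

import numpy as np

# Reproduce the worst-case relative-error bound of eq. (2.56) at el = 0.
R = 6.378e6          # Earth radius [m]
r_k = 2.656e7        # nominal GPS orbit radius [m] (assumed value)
v_sat_min = 3800.0   # minimum GPS satellite speed [m/s]
A, B = 10.0, 100.0   # assumed max vertical / total user speed [m/s]
dv_theta = 2.4       # range-rate error bound from 0.24 m/s^2 over 10 s

el = 0.0
sin_theta = R * np.sin(el + np.pi / 2) / r_k             # eq. (2.52)
v_theta_sat = v_sat_min * np.cos(np.arcsin(sin_theta))   # eq. (2.54)
v_theta_usr = A + (B - A) * np.sin(el)                    # eq. (2.55)
v_theta_bnd = v_theta_sat - v_theta_usr                   # eq. (2.53)

rel_err = (2 * dv_theta + dv_theta**2 / v_theta_bnd) / \
          (v_theta_bnd + 2 * dv_theta + dv_theta**2 / v_theta_bnd)
print(v_theta_bnd, rel_err)
# roughly 3.7e3 m/s and 1.3e-3 with these assumed values; the more
# conservative 3500 m/s bound used in the text gives about 1.4e-3.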

Chapter 3

Collaborative Ionosphere Monitoring

This chapter presents a novel method for ionosphere anomaly monitoring, based on a network of collaborative airborne dual-frequency receivers. Each receiver measures the ionospheric delay for each satellite; measurements are shared across receivers and used to estimate the local ionosphere gradient, which may be different for each satellite in the case of an ionosphere storm. The monitor is able to detect anomalously large ionosphere gradients of 400 mm/km with a missed-detection probability better than 10⁻⁹, using a network of 20 aircraft; similar sensitivity to significantly smaller gradients is possible if more aircraft are involved in the scheme. The method is tested in simulation, and it is shown that the probability of missed detection is largely insensitive to radio frequency interference (RFI), even when receivers are disabled over a radius of 20 km, with collaborating aircraft uniformly spread over a radius of 50 km. From this result, precision landing (with ionosphere mitigation) is possible even when all but one ground-station reference receiver is incapacitated by RFI. Also, because the proposed algorithm detects anomalies over large baselines (tens of kilometers of separation), this algorithm could enable approach and landing at one airport using differential corrections broadcast from another airport in an urban metroplex. Such operations are not typical with standard ionosphere mitigation techniques because of tight restrictions on the allowed broadcast distance, called Dmax.

3.1

Introduction

Currently, the two most severe threats to Ground Based Augmentation Systems (GBAS) for navigation satellite systems are severe ionosphere storms and elevated levels of radio frequency interference (RFI) [34]. Ionospheric disturbances are structures in the ionosphere that cause unusually large spatial gradients of ionosphere delay in the GPS signals. They are a threat to the integrity of GBAS, because if the corrections broadcast by GBAS contain erroneous ionosphere information, a solution may appear valid when it is, in fact, not [8]. This chapter presents a novel approach to ionosphere monitoring for safety-of-life applications involving networked collaborators; the method is robust even when individual members of the network lose tracking due to RFI.

Conventional wisdom suggests that the threat posed by ionospheric disruptions can be reduced by introducing dual-frequency measurements [35]; however, dual-frequency user processing does not provide a full solution for the precision landing application. The ionospheric delay of a signal is inversely proportional to the square of its frequency, making it possible to estimate or remove the ionospheric delay using GNSS signals on two frequencies. Ionosphere-free processing can eliminate ionosphere delays, but at the expense of multiplying the size of nominal errors by a factor of nearly three [35], when compared to single-frequency processing. Thus ionosphere-free processing is not entirely desirable, as the increased random noise would significantly lower GBAS availability for precision landing [36]. To avoid this availability penalty, GBAS will likely use single-frequency processing (or divergence-free processing [35]), even when multiple frequencies are widely available. As such, some form of supplemental monitoring will be needed to mitigate the impact of severe storms [37].

It has been shown in [36] that the threefold increase in noise, associated with the ionosphere-free dual-frequency solution over the conventional single-frequency solution, is a factor which would degrade navigation availability severely enough to make GBAS unacceptable from an operational point of view; the increase in noise represents a drop in availability from 99.98% for single-frequency to 68.51% for dual-frequency [36].

Potential candidates for supplemental ionosphere-storm monitoring include a combination of geometry screening [38] and code-carrier divergence monitoring [39], the long baseline monitor [40], and the ionosphere gradient monitor (IGM) [41].

Cat-I operations were enabled, in large part, through the introduction of geometry-screening methods [38]. This approach inflates user protection levels by adjusting error models (e.g. sigma parameters) broadcast by the ground-station, such that users are rigorously protected against even worst-case ionosphere storm scenarios. The disadvantage of geometry screening is that the process of sigma-inflation reduces GBAS availability.

To mitigate availability penalties associated with geometry screening, many enhanced ionosphere monitors have been proposed for Cat-III GBAS (also known as GAST-D). The simplest of these monitors is a long-baseline monitor [40], which would monitor ionosphere gradients using a pair of ground-based receivers separated by ten kilometers or more. This approach is very difficult to implement at most airports, both because of the difficulty of siting and securing ground-based antennae off airport property and because of the difficulties


of siting off-shore monitor receivers for ocean-side airports.

A more compact but algorithmically complicated monitor, based on carrier-phase data, is called the Ionosphere Gradient Monitor (IGM) [41]. The IGM uses antennae spaced over short distances (hundreds of meters) to form single-frequency carrier-phase differences that can be used to detect threatening ionosphere gradients. Because the IGM can be located on the airport surface, it is likely to be incorporated in GAST-D systems. Some of the shortcomings of this approach are (1) that IGM algorithms place severe restrictions on antenna siting in order to achieve good observability of ionosphere fronts and (2) that IGM installations are susceptible to elevated RFI, since antennas must be located in close proximity, typically within hundreds of meters.

This chapter presents a novel method, originally introduced in [15], for ionosphere anomaly monitoring. It is based on a networked approach, where collaborating users share their estimates of ionosphere delays to collectively estimate the local ionosphere gradient. The magnitude of the gradient is used to detect anomalous conditions, when the ionospheric delay gradient becomes large enough to invalidate differential corrections broadcast by GBAS.

The proposed approach is shown to provide a useful tool in ensuring safe operation of GBAS, independent of satellite geometry. Accessing measurements from other receivers enables a user to compute a more reliable estimate of the state of the local ionosphere than a single-point measurement would. The proposed monitoring approach would enable high-availability dual-frequency monitoring for severe ionosphere storms that might impact the integrity of users computing single-frequency or divergence-free position solutions.


The chapter is structured as follows: Section 3.2 describes the architecture of the proposed system. Section 3.3 discusses the threat posed by ionospheric disruptions. The details of the method are discussed in section 3.4. Section 3.5 shows a simulation of how the method might work in the vicinity of a GBAS installation. The dependence of detection sensitivity on user density is addressed in section 3.6 and the influence of RFI in section 3.7. The effects of finite bandwidth on the method are studied in section 3.8. The implications of the results obtained in the previous sections are discussed in section 3.9. A summary of the work concludes the chapter in section 3.10.

3.2

System Architecture

The proposed system requires a set of aircraft equipped with dual-frequency receivers, as well as a GBAS ground station, also dual-frequency equipped. Using dual-frequency processing, each aircraft and the GBAS station are able to obtain measurements of ionospheric delay for each satellite. A data communications link is required for users to share the ionospheric delay measured for each satellite, as well as the position of the user receiver itself (see Figure 3.1). Using compiled ionosphere delays for an individual satellite as seen by all users, the gradient affecting that satellite can be estimated. The estimated gradient on each satellite is monitored for extreme values that indicate the presence of an anomaly. If such an anomaly is detected, the monitor triggers an alert and declares the satellite unusable. Computation of the gradient estimates can happen either centrally, with all measurements concentrated in one location (e.g. the GBAS ground station), or it can be decentralized, with every user receiver computing its own estimates.


Figure 3.1: The proposed system leverages a network of airborne receivers to probe the ionosphere at different locations and estimate local gradients. Arrows represent communication links. Aircraft engaged in critical maneuvers, such as precision approach or landing, share measurements of ionospheric delay with other aircraft in the vicinity, as well as with the local GBAS facility. Note that the method can benefit from aircraft not directly in communication with the GBAS facility.

3.3

Threat Model

The hazardous impact of severe ionosphere storms on GBAS has been a concern since moving ionosphere-delay fronts, featuring gradients as large as 400 mm/km on the L1 frequency, were identified by Datta-Barua et al. [8]. Typically, hazardous ionosphere storms are modeled as a traveling wedge; however, observations of anomalies with different structures have also been reported [42]. Figure 3.2 shows a set of different possible models for ionospheric anomalies; the wedge model is shown on the top, while the bottom two models correspond to so-called ionosphere “bubbles”. Note that bubbles represent a depletion of electrons [43], which is compatible with the wedge model, as the wedge can be defined to represent either


elevated Total Electron Content (TEC), as shown in the figure, or reduced TEC, depending on values chosen for the wedge parameters.

Figure 3.2: Three different categories of ionospheric anomalies, adapted from [42].

Using a network of dual-frequency receivers, ionospheric delay (I) can be measured by different receivers (i), at different locations (x_i, y_i), for different satellites (j), in order to compute a planar description of the local ionosphere using the thin-shell assumption (see [40]). The thin-shell assumption is justified in that all receivers (airborne and ground based) are far below the altitude of the ionosphere.

We assume that, during severe ionosphere storms, ionosphere anomalies introduce disturbances over regions of tens of kilometers across. As a corollary, we assume that anomalous storm activity will introduce salient gradients over local regions (defined as circles of 50 km in radius in this chapter). Those local gradients are modeled as approximately planar. No other assumptions regarding the structure of an ionosphere storm are made in this chapter. Because storm structure is modeled as being local in nature, a different ionosphere gradient should be computed for each particular satellite. Since the groups of pierce points corresponding to different satellites can be thousands of kilometers apart, no attempt is made to infer ionosphere structure between ionosphere pierce points (IPPs) for different satellites.

The ionosphere delay I for each satellite (j), as seen by user i, is estimated assuming a locally planar model for variations in ionosphere delay:

I_i^{(j)} = f(\mathbf{x}_i^{(j)}) = \left(\mathbf{g}^{(j)}\right)^T \mathbf{x}_i^{(j)} + c^{(j)},    (3.1)

For this purpose we define the position vector \mathbf{x}_i^{(j)} = \left[x_i^{(j)}\; y_i^{(j)}\right]^T and the vector of gradient parameters \mathbf{g}^{(j)} = \left[a^{(j)}\; b^{(j)}\right]^T. Here x_i^{(j)} and y_i^{(j)} are the east and north coordinates of the pierce point of satellite (j) seen from receiver i, in a coordinate system centered at the GBAS reference. The variable I_i^{(j)} is the measurement of ionospheric delay for the corresponding pierce point. Note that the inter-frequency bias for satellite j is assumed to be common for all users and is hence lumped into the parameter c^{(j)}.

3.4

Proposed Method

The networked ionosphere monitor collects measurements of ionospheric delay from at least three receivers and computes a local ionosphere gradient estimate \hat{\mathbf{g}}^{(j)} for each satellite:

\hat{\mathbf{g}}^{(j)} = \begin{bmatrix} \hat{a}^{(j)} \\ \hat{b}^{(j)} \end{bmatrix}    (3.2)

If the magnitude of the estimated gradient is larger than a threshold, an alarm can be issued. This section describes the detailed mathematical formulation for the proposed ionosphere monitor.

3.4.1

Local Gradient Estimate for Each Satellite

As a simplification, the gradient (3.2) is computed under the assumption of a locally planar ionosphere, as described in equation (3.1). The simultaneous and instantaneous measurements of ionosphere delay for all users can be compiled in the following matrix form:

\begin{bmatrix} \vdots \\ I^{(j)}(x_i, y_i) \\ \vdots \end{bmatrix} = \begin{bmatrix} \vdots & \vdots & \vdots \\ x_i^{(j)} & y_i^{(j)} & 1 \\ \vdots & \vdots & \vdots \end{bmatrix} \begin{bmatrix} a^{(j)} \\ b^{(j)} \\ c^{(j)} \end{bmatrix}    (3.3)


For simplicity of notation, the vector of measurements I^{(j)}(x_i, y_i) is denoted as \mathbf{y}^{(j)}. The matrix of (x_i^{(j)}, y_i^{(j)}) positions is named \mathbf{A}_g^{(j)}, and the vector of all ones is named \mathbf{A}_c^{(j)}. The estimated parameters \hat{a}^{(j)} and \hat{b}^{(j)} are grouped into the gradient vector \hat{\mathbf{g}}^{(j)}. Equation (3.3) can now be re-written as:

\mathbf{y}^{(j)} = \begin{bmatrix} \mathbf{A}_g^{(j)} & \mathbf{A}_c^{(j)} \end{bmatrix}\begin{bmatrix} \mathbf{g}^{(j)} \\ c^{(j)} \end{bmatrix} + \boldsymbol{\nu}^{(j)} = \mathbf{A}\begin{bmatrix} \mathbf{g}^{(j)} \\ c^{(j)} \end{bmatrix} + \boldsymbol{\nu}^{(j)}.    (3.4)

The local ionosphere gradient (or local planar fit for ionospheric delay) can be estimated via \mathbf{A}^{+}, the weighted pseudo-inverse of A:

\begin{bmatrix} \hat{\mathbf{g}}^{(j)} \\ \hat{c}^{(j)} \end{bmatrix} = \mathbf{A}^{+}\mathbf{y}^{(j)}    (3.5)

with \mathbf{A}^{+} = \left(\mathbf{A}^T\mathbf{W}\mathbf{A}\right)^{-1}\mathbf{A}^T\mathbf{W}, with ν being the measurement noise vector, and with \mathbf{W} = \mathbf{R}^{-1} = \left(E[\boldsymbol{\nu}\boldsymbol{\nu}^T]\right)^{-1} being the inverse of the sensor noise covariance, which accounts for thermal noise, multipath, and RFI-related disturbances. Note that we assume receiver inter-frequency biases can be calibrated in advance for all of the collaborating, aviation-grade receivers; this is an important assumption, since receiver inter-frequency bias calibration is a very challenging aspect of ionosphere modeling.
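A sketch of the weighted least-squares estimator of equation (3.5) is shown below. Function names, the assumed 300 mm/km synthetic gradient, and the random geometry are illustrative only.

import numpy as np

def estimate_gradient(xy, iono_delays, sigmas):
    """Weighted least-squares planar fit of eq. (3.5): returns the gradient
    estimate [a, b] (delay change per meter, east/north) and the offset c.
    xy : (n, 2) pierce-point coordinates [m]; iono_delays : (n,) delays [m];
    sigmas : (n,) per-receiver noise standard deviations [m]."""
    A = np.hstack([xy, np.ones((xy.shape[0], 1))])
    W = np.diag(1.0 / np.asarray(sigmas) ** 2)
    A_plus = np.linalg.inv(A.T @ W @ A) @ A.T @ W
    est = A_plus @ iono_delays
    return est[:2], est[2]

# Example: 19 receivers sampling a 300 mm/km east-west gradient plus noise.
rng = np.random.default_rng(2)
xy = rng.uniform(-50e3, 50e3, size=(19, 2))
truth = np.array([300e-6, 0.0])                        # 300 mm/km = 3e-4 m/m
y = xy @ truth + 5.0 + rng.normal(scale=1.0, size=19)
g_hat, c_hat = estimate_gradient(xy, y, np.ones(19))
print(g_hat * 1e6, c_hat)                              # gradient in mm/km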

Assuming the measurement noise distribution is zero-mean and Gaussian, the distribution of the gradient estimate \hat{\mathbf{g}} is also Gaussian. The general form of a multivariate Gaussian is

\Phi(\hat{\mathbf{g}}; \mathbf{g}, \mathbf{Q}) = \frac{1}{|2\pi\mathbf{Q}|^{1/2}}\exp\!\left(-\frac{1}{2}\left(\hat{\mathbf{g}}-\mathbf{g}\right)^T\mathbf{Q}^{-1}\left(\hat{\mathbf{g}}-\mathbf{g}\right)\right),    (3.6)

where g is the actual gradient and \hat{\mathbf{g}} the estimated gradient, so that \hat{\mathbf{g}} - \mathbf{g} is the estimation error; the distribution of the estimated gradient becomes:

p(\hat{\mathbf{g}}) = \Phi(\hat{\mathbf{g}}; \mathbf{g}, \mathbf{Q}_{est}).    (3.7)

The mean of the \hat{\mathbf{g}} distribution is the actual gradient vector g. The covariance of the estimate is \mathbf{Q}_{est}, defined as:

\mathbf{Q}_{est} = E\!\left[\mathbf{g}'^{(j)}\left(\mathbf{g}'^{(j)}\right)^T\right],    (3.8)

where \mathbf{g}'^{(j)} is the estimation error, defined as:

\mathbf{g}'^{(j)} = \mathbf{P}\mathbf{A}^{+}\boldsymbol{\nu}^{(j)}.    (3.9)

 1 0 0  Since c(j) is not used in the monitoring statistic, the projection matrix P =   is 0 1 0 needed to separate the terms of interest from the estimation equation (3.5). Thus, T

Qest = PA+ RA+ PT .

3.4.2

(3.10)

Monitoring Statistic and Threshold

The threat to GBAS is proportional to the gradient magnitude. Thus, in order to tie monitor performance to gradient magnitude, it is useful to make the monitoring statistic m equal to the squared magnitude of the estimated gradient:

m = \left\|\hat{\mathbf{g}}^{(j)}\right\|^2 = \left(\hat{\mathbf{g}}^{(j)}\right)^T\hat{\mathbf{g}}^{(j)}.    (3.11)


The distribution of the monitoring statistic is given by the cumulative distribution function (CDF) of the density defined in (3.6) to describe the distribution of the ionosphere gradient:

p(m) = \int_0^m \Phi\!\left(\hat{\mathbf{g}}; \mathbf{g}, \mathbf{Q}_{est}\right)\, d\hat{\mathbf{g}}    (3.12)

Since the estimation errors are not necessarily independent or identically distributed in both coordinate directions, the distribution of m is not χ² but generalized χ². The main difference between χ² and generalized χ² is that the former corresponds to a sum of squared independent, identically distributed Gaussian random variables, whereas the latter does not require them to be identically distributed. Designing thresholds for generalized χ² distributions and computing Minimum Detectable Errors (MDEs) requires the implementation of specialized numerical algorithms, as discussed in [44, 45]. This generalized χ² distribution can be used to evaluate the distribution of the monitoring statistic in the presence of a fault with a particular gradient g.

Whereas (3.12) is useful for characterizing fault scenarios, a slight reformulation is necessary in determining the monitor threshold, to account for nominal variations in g.

The matrix \mathbf{Q}_{env} can be assumed to be overbounded by a diagonal matrix, with all diagonal terms equal to the variance corresponding to the worst directional gradient, σ_I²:

\mathbf{Q}_{env} = \begin{bmatrix} \sigma_I^2 & 0 \\ 0 & \sigma_I^2 \end{bmatrix}.    (3.13)

The decision to overbound the nominal ionosphere disturbances is conservative. A more precise (and less conservative) assessment of monitor sensitivity might incorporate directional effects, but such an analysis is beyond the scope of the current chapter.


A model for the long-term distribution of the monitor statistic, which accounts for the variability of the ionosphere over periods of many hours, is also generalized χ², with the following distribution:

p_{long}(m) = \int_0^m \Phi\!\left(\hat{\mathbf{g}}; \mathbf{0}, \mathbf{Q}_{est} + \mathbf{Q}_{env}\right)\, d\hat{\mathbf{g}}    (3.14)

Under nominal operations, i.e. when no ionospheric anomaly is present, the monitor should issue as few warnings as possible, to maximize availability. The monitor threshold T can be computed to ensure that the integral of the long-term monitor distribution above T does not exceed the continuity allocation for the monitor (p_{FA,req}). The probability to be computed is that of triggering an alarm, given that no fault is present:

p_{long}(m > T \,|\, \text{fault-free}) \leq p_{FA,req}.    (3.15)
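As an illustration of how a threshold satisfying (3.15) might be obtained, the following Monte Carlo sketch takes the (1 − p_FA) quantile of m under the fault-free long-term distribution. It is only a stand-in: the continuity allocations of interest are far smaller than what Monte Carlo can resolve, and the actual thresholds would be computed with the generalized χ² methods cited above. The covariances and p_FA in the example are assumed values.

import numpy as np

def monte_carlo_threshold(q_est, q_env, p_fa, n_samples=200_000, seed=3):
    """Monte Carlo approximation of the threshold T in eq. (3.15): the
    (1 - p_fa) quantile of m = ||g_hat||^2 under N(0, Q_est + Q_env)."""
    rng = np.random.default_rng(seed)
    q = q_est + q_env
    g = rng.multivariate_normal(np.zeros(2), q, size=n_samples)
    m = np.sum(g**2, axis=1)
    return np.quantile(m, 1.0 - p_fa)

# Example with illustrative covariances (units of (m/m)^2).
q_est = np.diag([6e-5**2, 6e-5**2])
q_env = np.diag([4e-6**2, 4e-6**2])
T = monte_carlo_threshold(q_est, q_env, p_fa=1e-3)
print(np.sqrt(T) * 1e6, "mm/km")   # threshold expressed as a gradient magnitude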

3.4.3

Faulted operations

The monitoring statistic should minimize the probability of not detecting a hazardous ionospheric anomaly. To meet GBAS integrity requirements, the probability that no alarm is triggered when a fault is present should not exceed the maximum allowed probability of a missed detection (p_{MD,req}). Note that this is an instantaneous requirement, rather than a long-term average, and so the distribution in (3.12) is used to evaluate the missed-detection risk:

p(m \leq T \,|\, \text{fault present}) \leq p_{MD,req}.    (3.16)

In order to assess monitor sensitivity, we evaluate the left side of the above equation, defined as the missed-detection probability p_MD, as a function of gradient size ‖g‖. We will call this


a “pM D vs. G” analysis, to evoke the connection to the “pM D vs. E” approach, which has become a standard tool in the analysis of GBAS-related systems [3].

3.5

Simulation Scenario

A computer simulation was set up to demonstrate the method and study its benefits and potential shortcomings, as compared to existing or proposed methods. In this simulation, airborne receivers are placed uniformly at random in 2-D space, within a certain radius (currently set at 50 km) over San Francisco Bay. In concept we imagine that a GBAS ground facility is located at an airport at the center of this circle. Noisy measurements of the ionosphere are aggregated over a number of receivers to yield an estimate of the current ionosphere gradient, thus providing a capability to detect anomalies injected into the simulated ionosphere. A snapshot of one such set of 19 airborne receivers can be seen in Figure 3.3.
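A minimal sketch of the receiver placement used in this scenario is shown below: n receivers drawn uniformly at random over a disk (uniform areal density requires sampling the radius as the square root of a uniform variable). The function name and seed are illustrative assumptions.

import numpy as np

def random_receivers(n, radius_m=50e3, seed=4):
    """Place n collaborating receivers uniformly at random over a disk of the
    given radius, mimicking the simulation scenario described in the text."""
    rng = np.random.default_rng(seed)
    r = radius_m * np.sqrt(rng.uniform(size=n))
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

xy = random_receivers(19)    # one snapshot, as in Figure 3.3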

The ionosphere measurements (the entries of y^(j) in equation (3.4)) are assumed to have additive noise with standard deviation σ_i = 1 m. The noise on each receiver is assumed to be independent of the noise on other receivers, which leads to a diagonal sensor covariance matrix R. The nominal ionosphere is also assumed to have a diagonal covariance matrix Q_env as defined by (3.13), where σ_I = 4 × 10⁻⁶ = 4 mm/km, which is in accordance with [8].
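A minimal sketch of the measurement aggregation is shown below. It assumes the simple planar model y_j = x_j · g + c + noise for receiver j at 2-D position x_j relative to the ground facility (the exact measurement model is given by (3.4)); variable names such as X, g_true, and c are illustrative rather than thesis notation.

```python
# Sketch of the collaborative gradient estimate: a planar ionosphere gradient
# is fit by weighted least squares to noisy delay measurements from receivers
# scattered over a 50 km-radius disc. All names and values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
sigma_i = 1.0          # measurement noise, m (as in the simulation scenario)
radius_km = 50.0
n_rx = 19

# Receivers uniformly distributed over a disc of 50 km radius.
r = radius_km * np.sqrt(rng.uniform(size=n_rx))
theta = rng.uniform(0, 2 * np.pi, size=n_rx)
X = np.column_stack([r * np.cos(theta), r * np.sin(theta)])   # (n_rx, 2), km

g_true = np.array([0.34, 0.0])                 # injected 340 mm/km anomaly
y = X @ g_true + 2.5 + sigma_i * rng.standard_normal(n_rx)

# Weighted least squares with A = [x_east, x_north, 1] and R = sigma_i^2 * I.
A = np.column_stack([X, np.ones(n_rx)])
R_inv = np.eye(n_rx) / sigma_i**2
cov = np.linalg.inv(A.T @ R_inv @ A)
est = cov @ A.T @ R_inv @ y

g_hat = est[:2]
Q_est = cov[:2, :2]                            # gradient block of the covariance
print("g_hat [m/km]:", g_hat, "\nQ_est:\n", Q_est)
```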

Different user-receiver geometries directly impact the measurement covariance matrix Q_est, which in turn sets the detection threshold and the probability of misdetecting a fault as a function of fault magnitude (3.16). For instance, the p_MD vs. G curve for a set of 19 receivers randomly distributed over a circular area of 50 km in radius is shown in Figure 3.4. In the specific case of Figure 3.4, the detection threshold was set to 282 mm/km. This particular geometry achieves a p_MD of 10⁻⁹ when the anomalous gradient reaches 340 mm/km.

Figure 3.3: Network of 19 simulated airborne receivers over Oakland, CA. Note that a radius of 50 km around Oakland airport covers two further international airports: San Francisco International Airport and San Jose International Airport.

3.6 Dependence of Sensitivity on Receiver Density

An important feature of the proposed method is that increasing the number of participants improves the sensitivity. This happens because increasing the number of receivers both enhances geometric diversity and provides additional averaging of noisy signals, thus shrinking the error ellipse associated with the matrix Q_est. In the limit of a large number of user receivers, the ionospheric variability (Q_env) becomes the dominant term setting the threshold T according to (3.15).

Figure 3.4: p_MD vs. ‖g‖ behavior of the receiver distribution seen in Figure 3.3.

In order to study the sensitivity as a function of receiver density, different numbers of simulated receivers were distributed uniformly within the 50 km communications radius. For each configuration of receivers, the covariance matrix Q_est was calculated, and from it the detection threshold. Although a full integrity verification would consider a p_MD vs. G curve of the form of Figure 3.4, it is useful in this study to characterize sensitivity using a single number: the value on the p_MD vs. G curve that corresponds to a p_MD of 10⁻⁹. We label this point the minimum detectable gradient (MDG). The MDG is the smallest value of ‖g‖ that can reliably be detected. Using simulation, the MDG was computed for 72 randomly generated geometries for each of 20 different numbers of collaborators between 8 and 800. The results of these simulations are plotted in Figure 3.5. The graph indicates the mean MDG for each set size, together with the standard deviation about the mean. The variability observed in the plot can be attributed to the fact that the different geometries are generated at random. For the geometries simulated, this variability was small relative to the MDG, and so the mean values are a good representation of overall performance.
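The structure of that study can be sketched as follows. The MDG computation below is a simplified worst-direction approximation built from Gaussian tail multipliers rather than the generalized χ² evaluation used in the thesis, and only a subset of the 20 network sizes is looped over, so the absolute numbers are only indicative.

```python
# Sketch of the receiver-density study: for each network size, draw random
# geometries, compute the gradient-estimate covariance, and record an
# approximate MDG. The mdg() heuristic is an illustrative stand-in.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
sigma_i, sigma_I, radius_km = 1.0, 0.004, 50.0   # m, m/km, km

def random_geometry(n):
    r = radius_km * np.sqrt(rng.uniform(size=n))
    th = rng.uniform(0, 2 * np.pi, size=n)
    return np.column_stack([r * np.cos(th), r * np.sin(th)])

def q_est(X):
    A = np.column_stack([X, np.ones(len(X))])
    return (np.linalg.inv(A.T @ A) * sigma_i**2)[:2, :2]

def mdg(Q_est, p_fa=1e-7, p_md=1e-9):
    # Worst-direction 1-D approximation: fault-free sigma sets the threshold,
    # and the faulted sigma sets the additional missed-detection margin.
    s_ff = np.sqrt(np.max(np.linalg.eigvalsh(Q_est)) + sigma_I**2)
    s_f = np.sqrt(np.max(np.linalg.eigvalsh(Q_est)))
    return norm.isf(p_fa) * s_ff + norm.isf(p_md) * s_f

for n in [8, 19, 50, 200, 800]:
    vals = [mdg(q_est(random_geometry(n))) for _ in range(72)]
    print(f"N = {n:3d}: mean MDG = {np.mean(vals) * 1000:6.0f} mm/km "
          f"(std {np.std(vals) * 1000:4.0f})")
```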

Note that, in practice, the distribution of aircraft cannot realistically be assumed to be uniform. Air Traffic Controllers may line up aircraft on specific paths to maintain separation. Lining up receivers enhances the sensitivity of the monitor along that particular direction, compared to the uniform distribution. Presumably, when aircraft are lined up this happens in the direction of landing, which is the most critical direction for ionosphere gradient detection. Thus, lining up aircraft before landing will likely provide increased sensitivity along the most critical direction.

Figure 3.5: The sensitivity of the method depends on the density of collaborating receivers.

The graph in Figure 3.5 shows an approximately linear dependence between the logarithm of the MDG and the logarithm of the number of collaborating receivers. It is evident from the plot that increasing the number of users in the network increases the sensitivity without requiring longer baselines. When a large number of receivers collaborate (beyond the right side of Figure 3.5), the missed-detection probability asymptotes to a constant value determined only by the nominal ionosphere variations with covariance Q_env.

3.7 RFI Robustness of the Method

This section discusses the benefits of the proposed collaborative ionosphere monitor with regard to its robustness to ground-based radio-frequency interference (RFI).

In order to illustrate this point, a simulation experiment was conducted to study how the sensitivity of the proposed method would be impacted if users in the vicinity of the GBAS facility were excluded from the solution. In this scenario, it is assumed that differential corrections are available from another source, perhaps from a second GBAS reference station located in the same urban metroplex [46].

In the RFI simulation, 200 simulated users were placed uniformly within the 50 km communications radius. One receiver at the center of the circle represents the ground facility. A hypothetical RFI source (or personal privacy device, PPD) was also situated at the origin; this RFI source was assumed to deny service to any participating users within the interference radius. The simulations allow for jamming radii ranging from as little as 0 km to as much as 49 km. The denied users were simply not considered in calculating the covariance matrix Q_est. The resulting degradation in p_MD vs. G behavior for different interference radii is depicted in Figure 3.6. This RFI robustness analysis is similar to the annular deprivation


analysis developed for the Wide Area Augmentation System (WAAS) [47].
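The mechanics of that study can be sketched as follows: users within a hypothetical jamming radius are excluded before the gradient-estimate covariance is formed, and the worst-direction standard deviation of the remaining geometry is used as a simple stand-in for the full p_MD vs. G evaluation.

```python
# Sketch of the RFI-exclusion experiment. This illustrates the structure of
# the study, not the thesis' exact generalized chi-square analysis.
import numpy as np

rng = np.random.default_rng(3)
sigma_i, radius_km, n_users = 1.0, 50.0, 200

r = radius_km * np.sqrt(rng.uniform(size=n_users))
th = rng.uniform(0, 2 * np.pi, size=n_users)
X = np.column_stack([r * np.cos(th), r * np.sin(th)])      # user positions, km

def worst_gradient_sigma(X):
    """Largest 1-sigma uncertainty of the planar-gradient estimate (m/km)."""
    A = np.column_stack([X, np.ones(len(X))])
    Q_est = (np.linalg.inv(A.T @ A) * sigma_i**2)[:2, :2]
    return np.sqrt(np.max(np.linalg.eigvalsh(Q_est)))

for jam_radius in [0, 10, 20, 30, 45, 49]:
    keep = np.linalg.norm(X, axis=1) > jam_radius           # outside the RFI zone
    n_left = int(keep.sum())
    if n_left < 3:
        print(f"jam {jam_radius:2d} km: too few users remain")
        continue
    print(f"jam {jam_radius:2d} km: {n_left:3d} users remain, "
          f"worst-direction sigma = {worst_gradient_sigma(X[keep]) * 1000:.1f} mm/km")
```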

Figure 3.6 shows that the algorithm provides robust detection of ionosphere anomalies, with little change to sensitivity out to a jamming radius of 20 km. In fact, sensitivity does not degrade significantly until the jamming radius exceeds 45 km. It is important to keep in mind that users are at most 50 km away from the origin, which means that at 49 km nearly 96% of the coverage area has been disabled. This kind of situation is far more severe than the unintentional RFI scenarios described in [10]. Of course, the sensitivity results are somewhat dependent on the location of the jammer. Nonetheless, the main point of the analysis is that the proposed approach is significantly more robust to jamming than conventional GBAS, where all monitoring antennas are co-located on airport property (and hence vulnerable to simultaneous jamming).

3.8 Operation Under Restricted Bandwidth

As the RFI simulations show, the proposed algorithm performs robustly even when only a subset of the available measurements is used for monitoring. This characteristic suggests that the algorithm might also perform robustly in a bandwidth-limited communication scenario, in which the available communication channel cannot broadcast all user measurements for inclusion in the monitor statistic. This section considers this limited-bandwidth scenario in more detail, by proposing and analyzing a receiver-selection algorithm designed to maximize monitor sensitivity given practical communications constraints.

The analysis of sensitivity as a function of the number of users, presented in Section 3.6, does not take into consideration the restrictions potentially imposed by bandwidth limitations in the communications network. Under a scenario of limited bandwidth, it is no longer feasible to assume that data are available from all N users, and it becomes necessary to define a criterion for selecting a subset of k receivers when computing the monitoring statistic m. To simplify the analysis of limited-bandwidth communication, we assume here that all users view the same set of satellites.

Figure 3.6: Influence of interference radius on the MDG. The blue dot (at the left end) represents the nominal case of a 0 km interference radius, while the red dot (at the right end) represents a radius of 49 km.

Selecting an optimal subset of size k from the set Ω_N of all N users can be thought of as an optimization problem. Within Ω_N there are $\binom{N}{k}$ different subsets Ω_k^i, where i ∈ ℕ and 1 ≤ i ≤ $\binom{N}{k}$, and each Ω_k^i has k users (k ≤ N). The problem becomes one of selecting the subset Ω_k^* of receivers that will yield the smallest MDG. This can be formulated as an optimization problem:

$$\Omega_k^* = \underset{i}{\operatorname{argmin}}\left\{ \mathrm{MDG}(\Omega_k^i) \right\}. \qquad (3.17)$$

Given that there is a finite number of different Ω_k^i, a global optimum is guaranteed to exist. Finding this global optimum can, however, become computationally intensive as N and k grow. This combinatorial explosion may hinder the scheme from working in a real-time situation. In order to mitigate the combinatorial explosion, this section discusses two relaxations to the optimization problem that make it tractable to solve in real time. The resulting trade-off between the reduction of computing time and monitor sensitivity is subsequently discussed.

3.8.1 Relaxations to the Optimization Problem

This section proposes two relaxations to the optimization problem: 1) a greedy search algorithm to replace the exhaustive combinatorial search, and 2) an approximate cost function that is faster to compute than the MDG. The decrease in sensitivity caused by both relaxations is shown to be small relative to the large savings in computation time.

Greedy Search Algorithm

The greedy search is based on the assumption that a desirable, yet sub-optimal, set of k + 1 receivers (Ω_{k+1}^0) contains the previously found desirable set of k receivers (Ω_k^0). This relaxation eases the computational requirement involved in finding the optimal set of k receivers, as it reduces the number of different subsets of receivers that need to be evaluated for an adequate geometry to be found. In that sense, for some given initial number of participants, k_0, the desired set of receivers is defined recursively with

$$\Omega_{k+1}^0 = \underset{i}{\operatorname{argmin}}\left\{ \mathrm{MDG}(\Omega_{k+1}^i) \right\}, \quad \text{such that } \Omega_k^0 \subset \Omega_{k+1}^0 \qquad (3.18)$$

The initialization of the scheme requires the selection of an initial set of k_0 receivers. For a small k_0, it is computationally inexpensive to pick the initial set Ω_{k_0}^0 using an exhaustive search of the $\binom{N}{k_0}$ combinations of k_0 receivers using (3.17). Thus, Ω_{k_0}^0 = Ω_{k_0}^*.

Once an initial set of receivers has been determined, the greedy search algorithm adds receivers to the estimate one at a time. Every time a new receiver is to be added, all candidates are evaluated by adding them to a geometry matrix and computing either the MDG or a proxy cost function, described in the next section. In either case, the heuristic reduces the number of cost-function evaluations to N − k per added receiver, rather than $\binom{N}{k}$.

Proxy Cost Function

Computation of the MDG, as defined in equation (3.16) for p_MD = 10⁻⁹, is the most computationally expensive element of the algorithm. One way to speed up the calculation of the optimal subset of receivers is to find a metric that yields similar results but is faster to compute than the MDG.

Reducing the MDG is equivalent to increasing the geometric diversity of the set of receivers; intuitively this corresponds to enlarging the baselines between sensors. A different, surrogate metric for geometric diversity is the second moment (or statistical covariance) of the user receiver positions. This second moment may be visualized as an ellipse in which larger ellipse dimensions in any direction correspond to larger baselines in that direction. Since sensitivity is enhanced by larger baselines, it is desirable to maximize the principal axes of the second-moment ellipse. An equivalent problem is to minimize the negative of the smallest principal axis. Define $\Sigma_{A_g^{(j)}}$ as the central second moment (or covariance) of the set of user positions contained in the geometry matrix $A_g^{(j)}$. The cost function then becomes:

$$J = -\min\left(\operatorname{eig}\left(\Sigma_{A_g^{(j)}}\right)\right). \qquad (3.19)$$

The eigenvalues of $\Sigma_{A_g^{(j)}}$ indicate the principal axes of the error distribution; the estimate is most sensitive along the direction of the eigenvector associated with the largest eigenvalue. Thus, the cost function J penalizes small eigenvalues, which encourages the selection of Ω_k^* as the subset that is most sensitive along its least sensitive direction.

The combination of both relaxations to the optimization problem finally yields an expression for the recursive definition of the subset Ω_k^0, given an initial subset Ω_{k_0}^*. The recursion becomes:

$$\Omega_{k+1}^0 = \underset{i}{\operatorname{argmin}}\left\{ J\!\left(A_g^{(j)}(\Omega_{k+1}^i)\right) \right\}, \quad \text{such that } \Omega_k^0 \subset \Omega_{k+1}^0. \qquad (3.20)$$
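A compact sketch of the resulting algorithm follows: the initial k₀-subset is found by exhaustive search, and receivers are then added one at a time using the proxy cost J of (3.19). The 2-D receiver coordinates and the network size are illustrative, and the proxy cost is used for the initialization as well, which is a simplification of the scheme described above.

```python
# Sketch of the greedy receiver-selection heuristic of Section 3.8.1, using
# the proxy cost J = -min(eig(Sigma)) of (3.19) in place of the MDG.
import numpy as np
from itertools import combinations

def proxy_cost(positions):
    """J = -(smallest principal axis) of the second moment of the positions."""
    sigma = np.cov(positions.T)
    return -np.min(np.linalg.eigvalsh(sigma))

def greedy_subset(X, k, k0=3):
    """Return indices of a near-optimal subset of size k from the users in X."""
    n = len(X)
    # Exhaustive initialization: best k0-subset under the proxy cost.
    best = min(combinations(range(n), k0), key=lambda idx: proxy_cost(X[list(idx)]))
    chosen = list(best)
    # Greedy growth: keep the previous subset and add the best remaining user.
    while len(chosen) < k:
        remaining = [i for i in range(n) if i not in chosen]
        nxt = min(remaining, key=lambda i: proxy_cost(X[chosen + [i]]))
        chosen.append(nxt)
    return chosen

# Example with a random 50 km-radius geometry (illustrative values only).
rng = np.random.default_rng(4)
r = 50.0 * np.sqrt(rng.uniform(size=40))
th = rng.uniform(0, 2 * np.pi, size=40)
X = np.column_stack([r * np.cos(th), r * np.sin(th)])
print("selected users:", greedy_subset(X, k=10))
```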

3.8.2 Impact of the Relaxations

The objective of the two relaxations discussed above is to make the optimization problem tractable. The performance of the relaxed optimization method was analyzed in a simulation in which the two methods of selecting the desired subset of receivers were compared for different values of k. The subsets generated from the combination of greedy search and proxy cost function, Ω_k^0 as defined in equation (3.20), were compared to the true optimum Ω_k^*, generated from the exhaustive search for the minimization problem of equation (3.17).


The benefit of the greedy algorithm is that the number of evaluations of equation (3.16) can be drastically reduced for 3 < k < N − 3. Recall that the method is not defined for k < 3 and that the number of combinations $\binom{N}{k}$ peaks around k = N/2 but drops off as k approaches N. The number of evaluations $M_{eval}$ required for the exhaustive search of the most sensitive geometry is $\binom{N}{k}$, while the greedy search algorithm requires $M_{eval} = \binom{N}{k_0}$ for k = k_0 and

$$M_{eval} = \binom{N}{k_0} + \sum_{i=k_0+1}^{k} (N - i) \quad \text{for } k > k_0. \qquad (3.21)$$

A comparison of the computing time required for each scheme is shown in Figure 3.7. Considering that for k = 5 the baseline case requires up to 5 s to compute, while the heuristic takes less than 40 ms, it is evident from Figure 3.7 that the heuristic approach gains processing time, attributable to the two relaxations discussed above, except for the case k = N. Note that the case k = N is trivial, as there is only one possible Ω_N and no search is required.

An important aspect of the heuristic search algorithm is that its benefits increase as N increases. The complexity of the optimal search is combinatorial, while the complexity of the heuristic is linear in N. The results of Figure 3.7 were computed for a representative case, where the effect is noticeable even for only 12 users. In an equivalent graph over 200 users, the gain in computation time is significantly larger. As an illustration of the reduction in complexity due to the greedy search algorithm, consider that there are $\binom{200}{18} = 1.8913 \times 10^{25}$ different combinations of 18 receivers in a set of 200; by comparison, the number of evaluations required by the greedy algorithm is 19 orders of magnitude lower: $1.316 \times 10^6$.
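These counts can be checked directly, assuming the sum in (3.21) runs up to the target subset size k:

```python
# Quick check of the evaluation counts quoted above (N = 200, k0 = 3, k = 18).
from math import comb

N, k0, k = 200, 3, 18
exhaustive = comb(N, k)                                    # ~1.89e25 subsets
greedy = comb(N, k0) + sum(N - i for i in range(k0 + 1, k + 1))
print(f"exhaustive: {exhaustive:.4e}")                     # 1.8913e+25
print(f"greedy:     {greedy}  (~{greedy:.3e})")            # 1,316,235 ~ 1.316e6
```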

Figure 3.7: A comparison of the computing time required for calculating the optimal (Ω_k^*) and near-optimal (Ω_k^0) subsets of participants for different set sizes (k). For the plotted example, the heuristic search requires an initial computing time of about 36 ms for k = 3, with increasing computing times as k grows; this increase, however, is small compared to the first step and becomes progressively smaller. The increase from k = 3 to k = 4 is 6 ms, but only 0.3 ms from k = 9 to k = 10.

The proposed heuristics do not significantly degrade monitor sensitivity. The decrease in sensitivity that is traded for a gain in computation time is addressed in Figure 3.8. The data underlying this plot were generated from 10 repetitions of an experiment; for each repetition, the difference in MDG between the baseline and the heuristic is normalized by the baseline MDG. The plot shows that as k increases, the degradation of the MDG increases slightly before decreasing. Even at the worst k values, the MDG only degraded by 17% (mean) or approximately 29% (maximal value over 10 trials). Thus, the scheme works better for larger k, but even for an unfavorable k the mean MDG degradation is below 20%.


Figure 3.8: Percent degradation of MDG computed for the sets selected by the greedy algorithm, as compared to the set with best sensitivity for each k. Results computed over 10 different trials.

3.9 Discussion

This section discusses some of the benefits of distributed ionosphere monitoring over other methods; such advantages include the absence of siting requirements and robustness to RFI. The limitations of the method, in the form of communication requirements, are also discussed.

3.9.1 Benefits of the Method

A first benefit is that the networked ionosphere gradient monitor, unlike IGM, has no explicit siting constraint related to integer ambiguity. By comparison, the standard IGM method imposes strict siting requirements that require reference receivers to be positioned within a few hundred meters of one another [48].


A second benefit of networked ionosphere gradient monitoring is the possibility of relaxing the maximum coverage radius for a GBAS installation (D_max). The current value of D_max (approximately 10 km) means that services can only be offered to aircraft within that distance of the ground facility. The current restriction is derived from performance limitations of local ground-station-based ionosphere gradient monitors [49], rather than limitations on the range of the data broadcast. With a mechanism in place for detecting widespread ionospheric anomalies, a relaxation of D_max to the limits of reliable communication becomes possible. This means that, if RFI can be otherwise mitigated, the cost-benefit advantages of using one GBAS reference station to cover multiple airports in an urban metroplex might be achieved. Increasing the coverage area would be achieved by providing long-baseline monitoring across the entire region in which the GBAS VHF Data Broadcast can be received. By comparison, airport-based monitors typically cover only a limited region (10 km or less). Network-based monitoring has the potential to extend coverage out to the largest allowed coverage radius for GBAS (45 to 60 km). This effect is a compelling advantage of the proposed method, but it will not be discussed in detail in this chapter.

Conventional ionosphere monitors may be disabled by severe RFI conditions that corrupt one or more GBAS reference stations; carrier-phase IGM is particularly vulnerable to localized interference from PPDs, because IGM requires monitor antennas to be close together (within a few hundred meters) [48, 41]. In a distributed monitoring approach, by contrast, GBAS ionosphere monitoring would remain active (and GBAS service might remain available), so long as differential corrections could be broadcast from at least one ground station antenna or from an alternative source [50]. This compelling advantage was investigated using the simulations of Section 3.7.


Another important feature of the proposed monitor is that it would operate at a scale intermediate between the local scale of GBAS and the regional scale of Satellite-Based Augmentation Systems (SBAS); as such, it is able to capture ionosphere structures at a smaller scale than SBAS.

A potential non-aviation benefit of networked ionosphere monitoring is to support collaborative automotive applications. With vehicle-to-vehicle communications becoming more and more feasible, co-operative navigation schemes become more attractive. The proposed collaborative ionosphere gradient monitor would complement other collaborative methods for verifying GNSS integrity, such as CERIM [51]. Deploying an automobile-based variant of the proposed scheme would benefit from the fact that automobiles can operate in configurations that are significantly denser than those of aircraft.

3.9.2 Implementation Concepts for Network Communication

The single most prominent weakness of the method is its dependence on aircraft-to-aircraft data communication. While such systems have been discussed within the scope of the NextGen program, under System Wide Information Management (SWIM), there is no indication of this capability entering operation soon. Alternatively, a modified ADS-B message might be the quickest means for enabling the proposed networked ionosphere monitoring concept. In either case, ensuring secure communications will be critical to protect users from spoofing or other security threats.

While the ADS-B-based concept would enable a fundamentally decentralized approach, other architectures are conceivable. One example is a centralized approach, in which the GBAS ground


facility is tasked with gathering measurements and computing estimates. Another possible approach would be to adequately equip trusted unmanned aerial vehicles (UAVs) to continually monitor the ionosphere in the vicinity of a GBAS installation.

3.9.3 Future Work

In this chapter we have estimated a locally planar gradient as a means of detecting ionosphere anomalies; however, threatening ionosphere anomalies are not guaranteed to be locally planar. In order to provide better detection for anomalous but non-planar events, other approaches for estimating anomalies may be of interest, including methods previously explored for application to Satellite Based Augmentation Systems (SBAS). Such methods include Kriging [52] or the conical fit method [53].

A full continuity analysis is a subject for future work. Realistically, the monitor will have to be designed with margin to tolerate small changes in the aircraft geometry as well as sudden dropouts of signals from individual aircraft.

3.10 Summary

This chapter introduces a novel ionosphere monitor based on a collaborative network of airborne dual-frequency receivers. Ionosphere delay measurements are aggregated across multiple receivers to compute an estimate of the local ionosphere gradient for each satellite. These estimates are then used to form a monitoring statistic to detect anomalously large ionosphere gradients.


Simulations were used to assess the sensitivity of the proposed algorithm under several scenarios. In the first scenario, the influence of the number of receivers on the sensitivity was analyzed. A second scenario examined the robustness of the method when RFI removes a subset of collaborators, and the third scenario showed how the scheme can be implemented under limited bandwidth. The simulation scenarios show consistently that, with a sufficient number of collaborators, the scheme is able to detect ionospheric gradients of 300 mm/km with a probability of missed detection below 10⁻⁹.

Chapter 4

Direct Position-Domain Error Bounding

This chapter proposes a novel method to compute overbounds on position errors of Differential Global Navigation Satellite System (DGNSS) solutions. Prior work has suggested that a tighter protection level bound would be possible if conservative assumptions were applied only to the position-domain error distribution model rather than to each individual range-domain error distribution model.

It has previously been unclear, however, how to ensure conservatism in the position domain, since the dependence of position-domain error distributions on the geometry is more complex than that of the range-domain error distributions. This work introduces a novel approach for position-domain bounding that provides a practical means to account for geometry dependence in the error-model validation process.

The proposed process reduces the conservatism used in constructing error bounds (called protection levels) and thereby has the potential to improve the availability of the DGNSS signal for safety-of-life operations.

4.1 Introduction

Protection Levels (PL) are among the most important performance criteria for safety-of-life applications of a Differential Global Navigation Satellite System (DGNSS) installation. PLs represent the confidence interval for the position estimate, with a confidence level that depends on the specific application. These confidence levels are usually derived from a model distribution for a particular probability (e.g. the 99.99999% confidence level might be computed for a Gaussian noise model).

In order to determine whether an operation can be conducted safely, the PL is compared against a reference: the Alert Limit (AL). The AL is dictated for the particular operation being executed and is part of a standard procedure. If the computed PL exceeds the required AL, the operation (which may be landing an aircraft) cannot proceed. As requirements on vertical errors are often stricter than those on horizontal errors, it is useful to consider the numbers for the vertical direction. For applications related to automated landing of civil aircraft, the vertical AL is 35 m for LPV-200 approaches and 10 m for Category III. Should the vertical PL exceed the vertical AL, the DGNSS safety-of-life service becomes unavailable.

This chapter introduces a method that allows a reduction of the PL by modeling DGNSS position errors in the position domain, as opposed to the range domain, as usually done in practice [54, 55]. In this context, the traditional method of computing PLs (which are reported in the position domain) from range-domain considerations will be referred to as

Indirect Position-Domain Bounding or Indirect PDB; the proposed method provides PLs without regard to range-domain considerations and will therefore be referred to as Direct PDB.

The proposed Direct PDB differs from Indirect PDB in its treatment of the error models that underlie PLs. Figure 4.1 offers a comparison of the two methods. The traditional method uses range error measurement data to compute overbounding (conservative) error models that are then mapped into the position domain to produce PLs. The direct method takes position errors and applies an overbounding model directly to those data to produce PLs.

The remainder of the chapter is structured as follows: First the method currently used for modeling position domain errors is outlined and contrasted with the proposed method in the next section. The following section lays the foundations of the proposed direct method and is followed by a section that looks into the nature of the measurements required for Direct Position-Domain Bounding. Following that, a section with some preliminary results shows how the method could be implemented on real data and what these data may look like. The section after that is devoted to the discussion of results and insights.

4.2 Comparison of Direct and Indirect Position-Domain Bounding

In order to fully appreciate the benefits of Direct Position-Domain Bounding, it is helpful to establish the differences from the traditional method. This section presents the basic concepts behind both methods and an illustrative example of how excess conservatism is reduced by describing error models directly in the position domain.

Figure 4.1: Comparison of Direct and Indirect PDB. Indirect PDB (left branch): the range error data E_i are computed for each satellite individually and modeled with a conservative overbound, e.g. as N(0, σ_i). The resulting range-domain error model is transformed into the position domain and the components of the error models are summed in each direction (East, North, Up). Direct PDB (right branch): the position-domain error is computed from range error data and satellite geometry. The overbounding model is applied directly in the position domain, resulting in a different position-domain (E, N, U) error distribution.

4.2.1 Indirect Position-Domain Bounding

Traditionally, error bounding has been based on producing a position-domain error model that is constructed from the modeled error on each ranging measurement. The error is modeled separately for each satellite and mapped into the position domain. Because this procedure relies on conservative range-domain error models to construct the position-domain error model, the method will henceforth be referred to as “indirect” Position Domain Bounding (Indirect PDB).

The relationship between range-domain error and position-domain error is dictated by the range-to-position matrix S:

$$\begin{bmatrix} \varepsilon_x \\ \varepsilon_b \end{bmatrix} = S\,\varepsilon_\rho. \qquad (4.1)$$

Here ε_x is the 3-D position error, ε_b is the clock error, and ε_ρ is the measurement error on each pseudorange; the matrix S, used in obtaining the position solution, is a weighted pseudo-inverse computed from the geometry matrix G and a weighting matrix W:

$$S = \left(G^T W G\right)^{-1} G^T W. \qquad (4.2)$$

The weighting matrix W depends on the azimuth and elevation angles of each satellite [28].
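A short sketch of this range-to-position mapping is given below. The geometry-matrix convention (unit line-of-sight components plus a clock column) and the elevation-dependent weighting model are generic placeholders, not the specific models of [28].

```python
# Sketch of (4.1)-(4.2): build a geometry matrix G from satellite azimuth and
# elevation angles, form the weighted pseudo-inverse S, and map range-domain
# errors into the position domain. Values are illustrative only.
import numpy as np

def geometry_matrix(az_deg, el_deg):
    """Rows are [-east, -north, -up, 1] line-of-sight terms per satellite."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    e = np.cos(el) * np.sin(az)
    n = np.cos(el) * np.cos(az)
    u = np.sin(el)
    return np.column_stack([-e, -n, -u, np.ones_like(az)])

az = np.array([30.0, 120.0, 210.0, 300.0, 45.0, 180.0, 270.0])
el = np.array([60.0, 20.0, 45.0, 15.0, 75.0, 35.0, 25.0])

G = geometry_matrix(az, el)
sigma_range = 0.3 + 0.5 * np.exp(-np.radians(el) / np.radians(15.0))  # placeholder
W = np.diag(1.0 / sigma_range**2)

S = np.linalg.inv(G.T @ W @ G) @ G.T @ W       # equation (4.2)
eps_rho = 0.5 * np.random.default_rng(5).standard_normal(len(az))
eps = S @ eps_rho                              # [e, n, u, clock] errors, eq (4.1)
print("position/clock error [m]:", eps)
```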

The structure of (4.1) implies that the directional position errors, which are the three elements of ε_x, are each constructed as the sum of N random variables, where N is the number of satellites in view. To illustrate this relationship, consider one element of ε_x, the Vertical Position Error (VPE). Extracting the vertical row from (4.1) gives the following equation for the VPE:

$$VPE = \sum_{i=1}^{N} S_{v,i}\, E_i \qquad (4.3)$$

Here S_{v,i} refers to the element of the matrix S in the vertical row and the column corresponding to satellite i. The term E_i refers to the range error for that same satellite.

Indirect PDB uses equation (4.3), or equivalently equation (4.1), to construct a model of VPE by first constructing a set of models for each satellite error E_i, as shown on the left branch of Figure 4.1.

First, statistical range error data are obtained. Notionally, these are computed for a stationary reference antenna by differencing the differentially corrected pseudorange and the true range (based on a calibrated antenna location). For instance, one approach for computing the instantaneous range error is:

$$E_i = \left[\rho_{dc,i} - \frac{1}{N}\sum_{j=1}^{N}\rho_{dc,j}\right] - \left[r_i - \frac{1}{N}\sum_{j=1}^{N} r_j\right] \qquad (4.4)$$

In this example, ρdc,i indicates the differentially corrected pseudorange for satellite i, and ri indicates the true range for that satellite. Because the clock offset is not known, it is subtracted out by removing the average pseudorange and range values across all satellites.

The next step, labeled Overbounding in the left branch of Figure 4.1, compiles a large population of Ei samples in order to model the distribution for each satellite. Models are constructed to be conservative in that the model over-predicts the occurrence of large errors.

These conservative range-domain probability-density function (pdf) models are often called "overbounds". An overbar notation will be used in this document to identify overbounds. For instance, the true range error distribution p(E_i) is modeled by the overbound p̄(E_i).

The next step in Indirect PDB, labeled “Range-to-Position Mapping” in the left branch of Figure 4.1, converts the range error overbounds into a position error overbound. Assuming errors are independent, the true position error pdf is a convolution of the range error pdfs. This result is an extension of the fact that when two independent random variables (e.g. x and y) are summed, the result (e.g. z = x + y) has a distribution h(z) obtained by convolution as follows:

$$h(z) = \int_{-\infty}^{\infty} g_1(x)\, g_2(z - x)\, dx. \qquad (4.5)$$

Under certain conditions [56, 57, 9, 58], a conservative position error overbound can similarly be obtained from range-domain error bounds via repeated convolution. Such an overbound can be formulated as follows, where ḡ₁(x) overbounds g₁(x) and ḡ₂(y) overbounds g₂(y):

$$\bar{h}(z) = \int_{-\infty}^{\infty} \bar{g}_1(x)\, \bar{g}_2(z - x)\, dx. \qquad (4.6)$$

Typically, range errors are modeled as zero-mean Gaussian distributions, where the Gaussian variance is a function of satellite elevation. For Gaussian range error pdfs, the position error is also Gaussian, and its standard deviation may be computed in closed form:

$$\sigma_p = \sqrt{\sum_i S_{v,i}^2\, \sigma_i^2}. \qquad (4.7)$$

The final step in Indirect PDB is to construct a PL from the position-error overbounding pdf.


The PL is computed by integrating the tails of the position-error overbound and determining at what magnitude of error the integrated tail probability equals a specified integrity risk. Given a Gaussian distribution, for example, the PL can be expressed simply as a multiplier times the conservative position-domain sigma:

$$PL = k\,\sigma_p \qquad (4.8)$$
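Combining (4.2), (4.7), and (4.8) gives the familiar Indirect PDB computation, sketched below for the vertical direction. The five-satellite geometry, the elevation-dependent sigma model, and the integrity-risk value are placeholders, not values drawn from any GBAS or SBAS standard.

```python
# Sketch of the Indirect PDB chain of (4.7)-(4.8): elevation-dependent Gaussian
# range overbounds are mapped through the vertical row of S to a position-
# domain sigma, and the protection level is a multiplier on that sigma.
import numpy as np
from scipy.stats import norm

def s_matrix(G, sigma_range):
    W = np.diag(1.0 / sigma_range**2)
    return np.linalg.inv(G.T @ W @ G) @ G.T @ W             # equation (4.2)

def vertical_protection_level(G, sigma_range, integrity_risk=1e-7):
    """VPL = k * sigma_p, with sigma_p from (4.7) and a two-sided Gaussian tail."""
    S_v = s_matrix(G, sigma_range)[2, :]                     # vertical row of S
    sigma_p = np.sqrt(np.sum(S_v**2 * sigma_range**2))
    return norm.isf(integrity_risk / 2.0) * sigma_p

# Toy 5-satellite geometry: rows are [-e, -n, -u, 1] line-of-sight terms.
el = np.radians([60, 20, 45, 15, 75])
az = np.radians([30, 120, 210, 300, 45])
G = np.column_stack([-np.cos(el) * np.sin(az), -np.cos(el) * np.cos(az),
                     -np.sin(el), np.ones_like(el)])
sigma_range = 0.3 + 0.5 * np.exp(-el / np.radians(15))       # placeholder model, m
print(f"VPL = {vertical_protection_level(G, sigma_range):.2f} m")
```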

In the modeling step, the data are overbounded by a conservative model, which could be a zero-mean Gaussian, to facilitate integrity computations. The error data need to be collected over long enough periods of time, so as to support the required confidence in the models. The overbounding model of Ei can then be used in the computation of an overbounding position-domain error [56, 57, 9, 58].

In the context of error bounding, conservative models of errors bound actual error distributions. For a given level of probability, the magnitude of the error model is bigger than what can be found in the data. "Excess" conservatism refers to models where conservative assumptions have led to PLs that are provably larger than necessary. Using equation (4.6) introduces additional conservatism with each convolution. It seems plausible that, for non-Gaussian distributions, a tighter bound might be obtained by working with the real position-domain error distribution and only applying conservative modeling to the final result. This notion of a tighter bound is supported by prior efforts by several authors to develop real-time integrity monitoring of the GNSS position solution using a fixed-base reference antenna [59, 60, 61].


4.2.2 Direct Position-Domain Bounding

Direct PDB dispenses with the range-domain as an intermediary and, instead, constructs an overbound directly from position error samples. To understand this principle consider a stationary reference antenna, where each sample error Epos is computed by differencing the differentially-corrected position solution xdf from a surveyed ground truth xtrue .

$$E_{pos} = x_{df} - x_{true} \qquad (4.9)$$

Given a population of position error samples acquired for related satellite-geometry conditions, that population can be approximated conservatively with a position-domain overbound. This process is labeled "overbounding" on the right branch of Figure 4.1. The result is an overbounding model p̄(E_pos; G) = f(E_pos; G) that depends on the geometry of the visible satellites, as characterized by the geometry matrix G. Here f is used to represent an arbitrary (not necessarily Gaussian) function with the properties of a pdf (e.g. always positive, integrates to unity). Geometric dependence may reflect azimuth- and elevation-dependent noise for each individual satellite as well as the effects of satellite spatial distribution (i.e. geometric diversity) on the solution matrix S.

The PL can be computed in a final step, as shown in Figure 4.1, from the position-domain overbound. Most typically, the PL is evaluated in one dimension at a time. For instance, the Vertical PL is computed by finding the vertical error magnitude beyond which the integrated tail of f(E_pos; G) equals a specified integrity risk.

Although maintaining conservatism for all satellite-geometry scenarios is challenging, the Direct PDB approach offers certain benefits as well. As mentioned in the prior section, the Direct PDB process yields a reduction in excess conservatism by directly introducing a conservative model of the position-error distribution that does not require modeling in the range domain. Rather, in Direct PDB, conservatism and inflation are introduced only once, for the direct position-domain bounding function f(G), rather than N times for N satellites, as is the case in range-domain bounding. In other words, in Direct PDB there is no need to convolve different overbounding models, as there is only one model and it is applied directly to the distribution of interest.

4.2.3 Comparison of Overbound Tightness-of-Fit for a Synthetic Example

To better explain the potential benefits of Direct PDB in reducing overconservatism for representative, heavy-tailed error distributions, this section considers a simple simulated example. The simulation considers how errors combine into a final ("position domain") error. The final error is constructed from ten independent, identical heavy-tailed distributions (representative of "range domain" error sources). Figure 4.2 shows the results of this simulation experiment. A truncated exponential distribution was used as a model for a heavy-tailed distribution (labeled as range-domain errors in the figure). The range-domain errors are bounded by a Gaussian model; the model has a zero mean and the smallest standard deviation σ_RD for which all points on the distribution are bounded.

The "range domain" data are simulated as a truncated exponential, which is a representative model for heavy-tailed distributions. The probability density for the range-domain errors is:

$$p(E_i) = \begin{cases} \dfrac{1}{C}\, e^{-|E_i|} & |E_i| < T \\ 0 & |E_i| \geq T \end{cases}, \qquad (4.10)$$


Figure 4.2: The effect of summing 10 heavy-tailed pdfs: the top plot (a) shows a simulated range-domain error distribution and an overbounding Gaussian model. For the bottom plot (b), 10 of these distributions are convolved and the range-domain bound appropriately inflated to yield an indirect position-domain bound. In addition, a direct position-domain bound is shown to be less conservative, yet support the same level of integrity. Note that for these plots the independent variable is the quantile of an ideal Gaussian, while the dependent variable represents the actual quantile of the plotted function. In such a representation, a Gaussian distribution becomes a straight line, with an inclination that is inversely proportional to the standard deviation.


The truncation point is set to T = 3 for Figure 4.2, and:

$$C = \int_{-T}^{T} e^{-|E_i|}\, dE_i. \qquad (4.11)$$

The standard deviation of the exponential distribution (prior to truncation), which is labeled σ_e, is scaled to be one. Simulated range-domain error data are plotted as black markers in the "range domain" plot of Figure 4.2 (a).

The range-domain overbound p̄_RD(E_i) is a Gaussian model with zero mean and standard deviation σ_i:

$$\bar{p}_{RD}(E_i) = N(E_i;\, 0,\, \sigma_i). \qquad (4.12)$$

The overbound is plotted as a dashed line in the range-domain plot in Figure 4.2, and the standard deviation of the overbound is σ_i = 1.25 σ_e.

From Figure 4.2 it is evident that the cumulative distribution function of the range-domain overbound P̄_RD(E_i) and the cumulative distribution function of the range-domain error P(E_i) satisfy the following conditions:

$$\bar{P}_{RD}(E_i) > P(E_i) \;\; \forall\, E_i < 0, \qquad \bar{P}_{RD}(E_i) \leq P(E_i) \;\; \forall\, E_i \geq 0. \qquad (4.13)$$

In this chapter, a capital P is used to identify cdfs and distinguish them from pdfs (which are indicated by a lower-case p).

In the position-domain plot, because the error sources are assumed independent, the combined error is generated by convolving the range-domain data with itself ten times, which will be denoted as:

$$p(E_x) = p^{*10}(E_i), \qquad (4.14)$$

where E_x represents the error along one spatial dimension and the *10 operator indicates a ten-fold convolution of a function with itself. The position-domain error is plotted as black crosses in Figure 4.2 (b).

The traditional position-domain overbound, given by the indirect method, is computed from the ten-fold convolution of the range-domain overbound with itself:

$$\bar{p}_{iPDB}(E_x) = \bar{p}_{RD}(E_i)^{*10} = N\!\left(E_x;\, 0,\, \sqrt{10}\,\sigma_i\right), \qquad (4.15)$$

where $\sqrt{10}\,\sigma_i = 3.953\,\sigma_e$, which is plotted as a dashed line in Figure 4.2 (b).

The result of applying a Gaussian overbound directly to the position-domain error E_x will be denoted as p̄_dPDB(E_x). This overbound, plotted as a solid line in Figure 4.2, has a smaller standard deviation σ_d than the indirect overbound:

$$\bar{p}_{dPDB}(E_x) = N(E_x;\, 0,\, \sigma_d). \qquad (4.16)$$

The smallest σd that overbounds p(Ex ) is σd = 3σe , which is smaller than the 3.953σe associated with indirect PDB, yet provides a conservative bound for the position-domain error. The example shows that direct position-domain bounding can bring about a reduction in excess conservatism, in this case a reduction of σ of nearly 25%.
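The synthetic example can be reproduced numerically along the following lines. The grid resolution and scaling conventions differ slightly from those used for Figure 4.2, and the direct sigma here is obtained by matching deep right-tail quantiles rather than by bounding the entire cdf, so the printed reduction only approximates the roughly 25% quoted above.

```python
# Sketch of the Section 4.2.3 example: a truncated-exponential "range-domain"
# pdf is convolved with itself ten times, and the Gaussian sigma needed to
# cover the resulting deep tail directly is compared with the inflated
# indirect bound sqrt(10) * sigma_i. All constants are illustrative.
import numpy as np
from scipy.stats import norm

T, dx = 3.0, 0.002
x = np.arange(-T, T + dx, dx)
p = np.exp(-np.abs(x))
p /= p.sum() * dx                             # truncated exponential, eq (4.10)
sigma_e = np.sqrt(np.sum(x**2 * p) * dx)      # std of the (truncated) component

pdf = p.copy()
for _ in range(9):                            # ten-fold self-convolution, (4.14)
    pdf = np.convolve(pdf, p) * dx
x10 = np.arange(len(pdf)) * dx
x10 -= x10[len(x10) // 2]                     # recenter the support at zero

cdf = np.cumsum(pdf) * dx
cdf /= cdf[-1]

# Direct bound: Gaussian sigma matching the deep right-tail quantiles.
probs = np.array([1e-3, 1e-5, 1e-7])
q_true = np.array([x10[np.searchsorted(cdf, 1 - pr)] for pr in probs])
sigma_direct = np.max(q_true / norm.isf(probs))

sigma_indirect = np.sqrt(10) * 1.25 * sigma_e  # mirrors the inflation of (4.15)
print(f"indirect sigma = {sigma_indirect:.2f},  direct sigma = {sigma_direct:.2f}")
```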

4.2.4 Implications of Position-Domain Bounding

By reducing excess conservatism, PDB has the potential to enable many new applications for safety-of-life systems. One possible benefit would be improved availability for certain safety-of-life systems (e.g. precision approach and landing using augmented GPS). These operations are subject to regulations that allow landing only if the PL does not exceed the AL. Shrinking protection levels would thus enable precision operations under a greater range of conditions, particularly under degraded conditions.

Although Direct PDB offers the potential to reduce conservatism due to certain effects, including distribution heavy tails, it is important to note that implementing Direct PDB also introduces a great many complications. Direct PDB has not been studied extensively in the past largely because of these complications, which include: how to account for differences in the satellite sets viewed by different users, how to account for differing quality of receiver equipment, and how to account for differences in the algorithms used to compute the position solution. Addressing the limitations of Direct PDB (and in particular the question of how to account for differences in the satellite sets viewed by different users) is the key contribution of this chapter.

Another possible benefit might even be to enable augmentation systems to deliver new categories of service. Consider Satellite-Based Augmentation Systems (SBAS), for instance, which currently support LPV-200 approaches (very similar to Category I landings with a 200 ft decision altitude). If it were possible to shrink SBAS protection levels enough, it might be possible for an SBAS to also support Category III landings, proceeding all the way down to touchdown, enabling the sought after LPV-Zero capability. For a more detailed description of how this might be accomplished, see Appendix I.

4.3 Error Distribution Binning

The proposed approach to Direct PDB relies on computing an intermediate classification metric to cluster different error scenarios (with distinct but similar error distributions). In this chapter the classification method will be denoted as PDB = f(G). This function f is the classifier; it operates on the satellite geometry matrix G to create a position-domain bound PDB. Many different strategies exist to implement the classifier f. One approach is to invoke an Indirect PDB, as described by equations (4.7) and (4.8). Two Direct PDB alternatives for constructing f(G) will be considered in this section. In both Direct PDB cases, f(G) is constructed as a look-up table, where possible values of the position-domain bound are segmented into "bins" and the statistics are computed within each bin.

It should be noted that the look-up table implementations developed here are intended for classification of the errors in a differentially corrected positioning system. Differences in user receiver equipment are not specifically considered. Accounting for such differences is a topic left for future work.

4.3.1 Non-Stationary Statistical Process

A central challenge in obtaining a PDB from a look-up table is that the statistical properties of the position-domain error are non-stationary. Errors depend on a number of different influences that include, but are not limited to, overall constellation geometry, multipath sensitivity for individual satellites as a function of elevation, and time-varying ionosphere and troposphere delays. As such, the pdf of the position error is a function of time. Strictly speaking, it is not possible to characterize such a process statistically because each data sample acquired is drawn from a different distribution.


From an engineering point of view, however, it is practical and reasonable to construct a histogram for a non-stationary process that is changing slowly enough that many data points can be acquired from very similar distributions [62]. The difficulty in such an approach lies in the appropriate design of the number of bins in the histogram, as the design needs to reconcile a finely sampled model (which requires larger numbers of bins) with the time it takes to populate the bins (which requires smaller numbers of bins). The time to populate bins is an important consideration, as it competes with the time-variant nature of the probability distribution of the position-domain errors.

Managing the trade-off between a finely sampled model and well-populated data bins is challenging. The question is whether there is a way to define a limited number of bins for Direct PDB such that we can accumulate enough data in each bin to model the far tails with the required integrity levels, but at the same time guarantee that the pdf is stationary enough within each bin to allow a tight overbound to be constructed.

4.3.2 Indirect Position-Domain Bounding

Indirect PDB deals with the same trade-off as described above, but range data are binned for individual satellite ranging errors. It is widely agreed that the major effect driving the non-stationarity of ranging errors is elevation [63]. Typically, errors are combined over bins of 5 to 10 degrees of elevation (e.g. 0-10, 10-20, 20-30, and so forth). Given that approximately 10 elevation bins are used and that, on average, each bin is populated with N data points, the total number of data points collected to populate the distribution model is 10N. As the range-domain bounding approach is the standard method applied for validating


GNSS augmentation systems, this population size of 10N data points (needed to construct an elevation-dependent error model) stands as a baseline for what size of data collection campaign might be considered feasible.

4.3.3 Naïve Position-Domain Bounding

A Direct PDB method would be based on binning position-domain errors, obtained from (4.9). This section considers a first concept for how one might construct the look-up table f(G) for Direct PDB. We call this simple approach "Naive Position-Domain Bounding". In this approach, position error is binned as a function of the distinct azimuth and elevation angles for each satellite in view. The problem with such a scheme is the explosion in the number of bins. If the azimuth and elevation of each satellite were classified into one of 100 bins (a grid of 10 bins in each direction), and if each of 10 satellites were similarly characterized with 100 bins, then a total of 10²⁰ bins would be needed to characterize the geometry of the visible constellation. In order to provide the required integrity levels, for example 10⁻⁷, a sufficiently large number of data points, say 10^N, needs to be collected per bin. Such an approach would require collecting 10^(20+N) data points spread over the different geometries. Assuming data points are taken 300 s apart, to ensure independence, collecting this amount of data, even with 1000 different receiver stations, would take more than 10¹⁹ years.
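This back-of-the-envelope estimate can be checked directly under the stated assumptions (10 azimuth by 10 elevation bins per satellite, 10 satellites, 10⁷ samples per bin for a 10⁻⁷ integrity level, one sample per 300 s per station, 1000 stations):

```python
# Rough check of the bin-count explosion for the naive binning scheme.
bins = (10 * 10) ** 10                       # 1e20 geometry bins
samples = bins * 10**7                       # total data points needed
per_year = 1000 * (365 * 24 * 3600) // 300   # independent samples per year
print(f"bins = {bins:.1e}, years = {samples / per_year:.1e}")
```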

4.3.4 Figure-of-Merit-Based Direct Position-Domain Bounding

It is now evident that a different strategy is needed to construct the look-up table f(G); this strategy needs to collapse common geometric effects and elevation effects into a substantially smaller number of bins. To accomplish this goal, we propose introducing a classification metric called a Figure of Merit (FOM). The resulting FOM must classify data well enough to ensure that non-stationarity effects on the position error are significantly smaller than the maximum tolerable variability in the position error.

The fundamental problem still remains: even if an excellent FOM is found, non-stationarity effects will still be present in each bin. The main focus of this chapter is to obtain a conservative model for the worst-case distribution associated with a given FOM, given that we know a range of distributions will be represented in each bin.

The main purpose of the FOM is to compress the geometry matrix G into a much simpler metric that largely captures the influence of geometry on the position error. In this sense, equation (4.7), which derives a Gaussian position-error standard deviation σ_p from the geometry matrix, is a good example of a FOM.

In this case, how is Direct PDB different from Indirect PDB, where equation (4.7) was used to construct a PL? In Direct PDB, the notion is that σ_p (or some other FOM) would be used as a classifier, to define bins in which actual statistical samples would be collected. Protection Levels could then be computed directly from the statistical data, without relying on probabilistic modeling assumptions. By contrast, Indirect PDB attempts to construct a bound directly from equation (4.7) by assuming a Gaussian overbound.

Written as an equation, the form of the FOM-based look-up table would be the following. Here FOM is used generically to represent some function of G, such as equation (4.7) for example:

$$PDB = f(FOM(G)) \qquad (4.17)$$

$$f = \begin{cases} PDB_0 & 0 \leq FOM(G) \leq b_1 \\ PDB_1 & b_1 \leq FOM(G) \leq b_2 \\ \;\;\vdots & \\ PDB_M & b_{M-1} \leq FOM(G) \end{cases} \qquad (4.18)$$

This equation defines a look-up table with M bins. Each bin contains all scenarios in which the FOM is between an upper bound and a lower bound, where the parameters b_m define transitions between bins. A value of the position-domain bound PDB_m is assigned to each bin m. Thus the look-up table can be used in real time by assessing the FOM for the actual geometry matrix G and looking up a corresponding position-domain bound that serves as a Protection Level. The look-up table can be generated in advance by taking K data points, where each sample includes an actual error E_k and an associated geometry matrix G_k. Error values are assigned to the bins defined by equation (4.18), and the position-domain bounds for each bin are extracted from the histograms of those sampled error values. A key assumption is that, in this chapter, the problem is considered in a probabilistic sense (rather than a statistical one). This enables the study of non-stationary effects without any concerns about sampling effects (such as the noisy representation of empirical distribution tails based on limited available data). A key topic for further work is the study of statistical effects and the difficulty of inferring far tails of the position error distribution, given limited data.
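A minimal sketch of such a FOM-based table is shown below, with GDOP as the classifier. The bin edges, the quantile used to extract each per-bin bound, and the synthetic error model are illustrative choices only.

```python
# Sketch of the FOM-based look-up table of (4.17)-(4.18): position errors are
# binned by a figure of merit, and a per-bin bound is extracted from the
# stored samples. All constants below are illustrative.
import numpy as np

class FomLookupTable:
    def __init__(self, edges):
        self.edges = np.asarray(edges)                # e.g. [2.0, 5.0] -> 3 bins
        self.samples = [[] for _ in range(len(edges) + 1)]

    def _bin(self, fom):
        return int(np.searchsorted(self.edges, fom))

    def add(self, fom, vertical_error):
        """Store one error sample under the bin selected by its FOM value."""
        self.samples[self._bin(fom)].append(abs(vertical_error))

    def bound(self, fom, quantile=0.999):
        """Look up the position-domain bound PDB_m for the current FOM."""
        data = self.samples[self._bin(fom)]
        return np.quantile(data, quantile) if data else np.inf

# Toy usage with synthetic data: larger GDOP tends to produce larger errors.
rng = np.random.default_rng(6)
table = FomLookupTable(edges=[2.0, 5.0])
for _ in range(20_000):
    gdop = rng.uniform(1.0, 8.0)
    table.add(gdop, rng.normal(0.0, 0.5 * gdop))      # synthetic vertical error
print([round(table.bound(g), 2) for g in (1.5, 3.0, 6.0)])
```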

4.3.5 Theorem: Overbounding of Mixtures of Gaussians

Consider the special case of a non-stationary random process for which the error distribution is zero-mean Gaussian at any instant, but for which the variance of that Gaussian distribution is continuously changing. If several data points are sampled sequentially and combined to create an empirical distribution, that empirical distribution is best modeled as a mixture of Gaussians. For an integrity application, the important detail is that the worst instantaneous distribution must be identified from the mixture of Gaussians, since integrity is based on instantaneous risk (and not average risk over time).

The following theorem states that the worst instantaneous Gaussian can be identified from a mixture of Gaussian distributions; by implication, an empirical histogram of error data for a non-stationary process can be used to identify the worst instantaneous error distribution (at least so long as the instantaneous error distribution is always Gaussian). A mathematical statement of this theorem is given below; a proof of the theorem can be found in the appendix.

Theorem: For a mixture of zero-mean Gaussians M₁([σ₁, σ₂, . . . , σ_N]), where σ_N is the largest of the component standard deviations, the smallest σ_bnd for which N(0, σ_bnd) overbounds N(0, σ_N) also overbounds M₁.

Proof: See Appendix I.

4.4 Selection of a Figure of Merit

The selection of the FOM is a critical design choice. Once a FOM is chosen, the domain of the function must be split into different bins. The design of the binning strategy must reconcile competing interests: a large number of bins will provide more finely graded PLs, but the confidence levels will be lower. Conversely, with a small number of bins the confidence level increases, but the error model becomes coarser.


The FOM also needs to be such that the position-domain errors are well clustered. If this is the case, then the effects of non-stationary error distributions will be negligible within each bin and the position error will be quasi-stationary.

4.4.1 Dilution of Precision as a Figure of Merit

As a starting point we propose Dilution of Precision (DOP) as a FOM for the position error. The DOP matrix Q is given by the covariance of the GNSS position estimate and describes the position error in three spatial dimensions and time:

$$Q \equiv (G^T G)^{-1} = \begin{bmatrix} q_{xx} & q_{xy} & q_{xz} & q_{xt} \\ q_{xy} & q_{yy} & q_{yz} & q_{yt} \\ q_{xz} & q_{yz} & q_{zz} & q_{zt} \\ q_{xt} & q_{yt} & q_{zt} & q_{tt} \end{bmatrix}. \qquad (4.19)$$

For the purposes of this chapter, the data analysis was carried out based on Geometric DOP (GDOP):

$$GDOP = \sqrt{q_{xx} + q_{yy} + q_{zz} + q_{tt}}. \qquad (4.20)$$

As an alternative, Vertical DOP (VDOP) could be considered, where $VDOP = \sqrt{q_{zz}}$.

GDOP can easily be seen to reflect error behavior. In fact, GDOP is an exact predictor of position error magnitude when an unweighted GNSS solution is computed, if all errors are Gaussian, independent and identically distributed. When these assumptions are not strictly true, GDOP still provides a reasonable approximation of the quality of satellite geometry and hence serves as a good classifier for the look-up table defined by equations (4.17) and (4.18). Excellent GDOP values, for example GDOP ≤ 2 with 7 satellites in view, will likely


have smaller errors than 4 ≤ GDOP ≤ 8, as might occur with 4 satellites.
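For reference, computing GDOP and VDOP from a geometry matrix amounts to the following sketch; the toy satellite geometry is illustrative only.

```python
# Minimal sketch of the DOP figure of merit of (4.19)-(4.20): the unweighted
# covariance Q = (G^T G)^-1 is formed from the geometry matrix, and GDOP and
# VDOP are read off its trace and vertical element.
import numpy as np

def dop(G):
    Q = np.linalg.inv(G.T @ G)                       # equation (4.19)
    gdop = np.sqrt(np.trace(Q))                      # equation (4.20)
    vdop = np.sqrt(Q[2, 2])
    return gdop, vdop

el = np.radians([60, 20, 45, 15, 75, 35])
az = np.radians([30, 120, 210, 300, 45, 180])
G = np.column_stack([-np.cos(el) * np.sin(az), -np.cos(el) * np.cos(az),
                     -np.sin(el), np.ones_like(el)])
gdop, vdop = dop(G)
print(f"GDOP = {gdop:.2f}, VDOP = {vdop:.2f}")
```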

4.4.2 Weighted Dilution of Precision as a Figure of Merit

Although this chapter focuses on using DOP as a FOM for classification of errors, other metrics might also be used. For instance, the weighted DOP (WDOP) might be considered:

$$Q_W \equiv (S^T S)^{-1} = \begin{bmatrix} q_{w,xx} & q_{w,xy} & q_{w,xz} & q_{w,xt} \\ q_{w,xy} & q_{w,yy} & q_{w,yz} & q_{w,yt} \\ q_{w,xz} & q_{w,yz} & q_{w,zz} & q_{w,zt} \\ q_{w,xt} & q_{w,yt} & q_{w,zt} & q_{w,tt} \end{bmatrix}, \qquad (4.21)$$

where S is the range-to-position matrix of equation (4.1), and the weighted DOP becomes:

$$WDOP = \sqrt{q_{w,xx} + q_{w,yy} + q_{w,zz} + q_{w,tt}}. \qquad (4.22)$$

The WDOP provides an exact estimate of position-error standard deviation when a weighted GNSS solution is computed, if all errors are Gaussian, independent and identically distributed. In this chapter DOP was selected as a classifier over WDOP (a) because the weighting process provides little benefit in improving position accuracy for the data analyzed in the next section and (b) because, in general, weighting matrices might be different for different users so as to reflect varying quality of user receiver equipment.

4.5 Experimental Demonstration

As a first step in demonstrating how the proposed Direct PDB method would work, we applied DOP as a classification metric for a sample set of differentially corrected position solutions. The differential position errors were classified into different bins (or regimes) according to the corresponding GDOP value, and the data in each bin were analyzed to find an overbounding Gaussian. Several different permutations on the basic binning process are compared, including variations on how bin bounds (the variables b_i) are selected and on which FOM is selected as a classification metric.

4.5.1 Data Source

Data were collected from the Continuously Operating Reference Station (CORS) network [29], a service provided by the National Oceanic and Atmospheric Administration (NOAA). In this setup, data were collected for the entire day of January 1, 2013, sampled at 30 s intervals. The data for CORS station ZOA2 were used, together with the known position of the station, to compute local differential corrections; the differential corrections were applied to data from a nearby CORS station, P222, which is 5.96 km away from ZOA2. This in turn allowed for the computation of a differential position-domain error. In order to obtain a wide range of DOP readings, the data were artificially degraded by blacking out 9 different satellites over the entire day; this explains why the DOP values climb above 5, which is extremely rare under all-in-view conditions and should not occur more than once in the unaltered data set.
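The following self-contained sketch mimics this processing chain with purely synthetic numbers (hypothetical satellite positions, invented error magnitudes, and no real CORS data); it only illustrates how range-domain corrections from a reference station with a known position translate into a differential position-domain error at a nearby user:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical satellite positions (m) and true station positions (assumed values).
sats = rng.uniform(-2.0e7, 2.0e7, size=(8, 3)) + np.array([0.0, 0.0, 2.0e7])
ref_true = np.array([-2.7e6, -4.3e6, 3.85e6])              # reference station (known)
usr_true = ref_true + np.array([4000.0, 3000.0, 2000.0])   # user a few km away

def ranges(pos):
    return np.linalg.norm(sats - pos, axis=1)

# Simulated pseudoranges: common (spatially correlated) errors plus local noise.
common = rng.normal(0.0, 3.0, size=len(sats))              # e.g. clock/ephemeris errors
pr_ref = ranges(ref_true) + common + rng.normal(0.0, 0.5, len(sats))
pr_usr = ranges(usr_true) + common + rng.normal(0.0, 0.5, len(sats))

# Differential corrections computed at the reference station with known position.
corr = ranges(ref_true) - pr_ref

def solve(pr, x0):
    # Iterative least-squares fix for position and receiver clock bias.
    x, b = np.array(x0, dtype=float), 0.0
    for _ in range(10):
        rho = np.linalg.norm(sats - x, axis=1)
        G = np.hstack([(x - sats) / rho[:, None], np.ones((len(sats), 1))])
        dx = np.linalg.lstsq(G, pr - (rho + b), rcond=None)[0]
        x, b = x + dx[:3], b + dx[3]
    return x

est = solve(pr_usr + corr, ref_true)
print("differential position error (m):", est - usr_true)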

In this proof-of-concept analysis, no attempt was made to investigate all possible satellite subsets, as in [59]. The single degraded geometry considered was merely intended to provide representative error values for poor DOP conditions. Analyzing all valid satellite subsets is a topic for future investigation.

4.5.2 Baseline Binning Scheme

At each epoch we calculated a position-error at station P222 using equation (4.9). DOP was also computed at each epoch. An unweighted position solution was used, so DOP was simply a by-product of this position estimation process [18]. The error at each epoch was classified into a particular bin according to the DOP computed at the same epoch.

Multiple ways of setting the thresholds that define the bins were considered. A baseline method for defining bins is described in the remainder of this section. Alternatives are described in the following section.

A binning scheme will be most useful if it provides a clear way of separating favorable error scenarios from unfavorable ones. This can be assessed graphically, using quantile-quantile plots as in Figure 4.2. Plotting multiple distributions from different bins on a quantile-quantile plot allows straightforward visual assessment of whether one distribution overbounds another. Intuitively, a normal probability plot represents a Gaussian data set as a straight line, where the inclination of the line is inversely proportional to the standard deviation and the intercept is given by the mean.
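As an illustration of this kind of visual check, the sketch below (using synthetic, heavy-tailed errors rather than the CORS data) computes the quantile pairs that such a plot would display; if the sorted sample never exceeds the reference line in magnitude, the reference Gaussian is a plausible overbound over the plotted quantile range:

import numpy as np
from scipy import stats

def qq_against_gaussian(errors, sigma_ref):
    # Return standard-normal quantiles, sorted sample values, and the
    # corresponding reference line for a zero-mean Gaussian of sigma_ref.
    x = np.sort(np.asarray(errors, dtype=float))
    probs = (np.arange(1, len(x) + 1) - 0.5) / len(x)   # plotting positions
    theo = stats.norm.ppf(probs)
    return theo, x, sigma_ref * theo

# Example with synthetic, slightly heavy-tailed errors (Student's t, 5 dof).
rng = np.random.default_rng(1)
sample = 1.5 * rng.standard_t(df=5, size=5000)
theo, emp, ref = qq_against_gaussian(sample, sigma_ref=3.0)
print("largest exceedance of the reference line:",
      round(float(np.max(np.abs(emp) - np.abs(ref))), 2), "m")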

Several approaches were considered for defining the transition points bi between bins. As a baseline, we first considered defining two GDOP transitions (b1 = 2 and b2 = 5), because the data set contained very few GDOP readings in the immediate vicinity of these two values. Based on these bin definitions, histograms were created from all of the error values assigned to each bin.
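A minimal sketch of this classification step is shown below, using the baseline transitions b1 = 2 and b2 = 5; the data are synthetic stand-ins for the binned CORS errors, and the boundary handling of numpy's digitize differs trivially from the half-open intervals quoted above:

import numpy as np

def bin_errors_by_fom(errors, fom, edges=(2.0, 5.0)):
    # Classify position-domain errors into FOM bins using the baseline
    # transitions b1 = 2 and b2 = 5, giving the bins [0, 2], (2, 5], (5, inf).
    errors = np.asarray(errors, dtype=float)
    idx = np.digitize(np.asarray(fom, dtype=float), edges)
    return {i: errors[idx == i] for i in range(len(edges) + 1)}

# Illustrative use with synthetic data in which the error scale grows with FOM.
rng = np.random.default_rng(2)
fom = rng.uniform(1.0, 8.0, size=2000)
err = rng.normal(0.0, 0.5 * fom)
for k, e in bin_errors_by_fom(err, fom).items():
    print(f"bin {k}: n = {len(e):4d}, empirical sigma = {e.std():.2f} m")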

The position-domain error plotted in Figure 4.3 is binned as a function of Geometric DOP. The figure shows three empirical histograms for data broken up into three intervals: GDOP ∈ [0, 2], (2, 5], (5, ∞). The dashed guide lines indicate ideal Gaussian distributions in the Quantile-Quantile domain. This tool serves as a visual aid to assess the “heaviness of tails” of the actual error distributions, which is particularly useful in assessing the relative occurrence of rare events.

The first observation apparent from the plot is that for a GDOP below 2, the vertical position error is bounded by a Gaussian of standard deviation 2 m, and for GDOP ∈ (2, 5] this number grows to 3 m. The final empirical histogram, for GDOP ∈ (5, ∞), contains significantly larger errors than those observed for the other two bins. The plot is bounded by the σ = 4 m line on the right hand side, but approaches the σ = 7 m line on the left hand side of the plane. This asymmetry, as well as the strong non-linearity of the QQ plot for the final bin, is indicative of much more severe heavy-tail effects than observed in the first two bins.

It is important to note that each bin includes samples drawn from different distributions (since the position error is generated by a non-stationary process). For integrity purposes, it is not sufficient simply to bound an empirical histogram. Whereas the histogram represents a mixture of distributions, a conservative integrity model should overbound the instantaneous distribution. As such, the PDB should be constructed from the histogram in an appropriately conservative way. If the far tails of the distribution are relatively close to a Gaussian distribution (e.g. the empirical histogram is nearly linear in the quantile-quantile plot, at least in the far tails), then it is possible to justify a Gaussian overbound, where the worst-case instantaneous distribution is identified by a fit to the far-tail histogram. Such an instantaneous bound is reasonable for the first two bins seen in Figure 4.3, but not for the third bin (with DOP above 5), where heavy-tail effects are salient.

Figure 4.3: A simulated GBAS setup: Measurements gathered at CORS station ZOA2 are used to compute local differential corrections, using the position information provided by CORS as a reference. The differential corrections are then applied to data collected at station P222, which is 5.96 km away from ZOA2. The differential position errors are binned by GDOP and plotted in a quantile-quantile format, with Gaussians of indicated standard deviation plotted as a reference.

4.5.3 Sensitivity to Bin Selection

In the process of designing different thresholds for different FOMs it became evident that the design of the thresholds is critical to the success of dPDB. Badly designed thresholds lead to mis-classified position errors, which manifest themselves in the Quantile-Quantile (QQ) plot as curves that intersect each other at multiple points.

The baseline approach for defining the bin transition values, as described above, was somewhat ad hoc. As such, an important question is how much variations in binning affect the quality of the histogram obtained through the Direct Position-Domain Bounding process. To create a distinct (but still rational) set of bins for this sensitivity study, a FOM-driven approach was used. Specifically, we considered the full set of geometries observed over one day (for the degraded constellation) and identified bins based on the 90th and 95th percentiles of the associated set of DOP values. The result is shown in Figure 4.4.
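The FOM-driven thresholds can be obtained directly from the observed FOM sample, as in this sketch (the GDOP values here are synthetic placeholders for the degraded-constellation data):

import numpy as np

def percentile_edges(fom_values, percentiles=(90.0, 95.0)):
    # FOM-driven bin transitions at the 90th and 95th percentiles of the
    # observed GDOP values (the alternative binning behind Figure 4.4).
    return tuple(np.percentile(np.asarray(fom_values, dtype=float), percentiles))

# Example with synthetic GDOP values; the study itself used one day of
# 30 s samples (2880 epochs) from the degraded CORS constellation.
rng = np.random.default_rng(3)
gdop_values = 1.5 + rng.gamma(shape=2.0, scale=1.2, size=2880)
print("bin transitions:", [round(b, 2) for b in percentile_edges(gdop_values)])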

Inspecting Figure 4.4 and comparing it with Figure 4.3 shows that the FOM-based PDB is sensitive to the placement of binning thresholds. For example, the innermost bin (that with the best DOP values) of Figure 4.3 is very close to the Gaussian with σ = 1 m, but only contains about 30% of the data. By comparison, the innermost bin of Figure 4.4 contains 90% of the data but, in exchange for containing more data points, the position-domain bound is much wider (e.g., the GDOP ∈ [0, 4] bin of Figure 4.4 is reasonably well modeled by a Gaussian with σ = 2 m). For the outermost bin the behavior is very similar between the two figures. This can be attributed to the fact that the number of data points is similar and that the far tails of the distribution are dominated by few, rare events.

Figure 4.4: The outermost two bins overbound 90% of the data.

Another relevant question is how the number of bins impacts the resulting bound. To help study this question, a third binning was created using only a single bin transition (at the 90th percentile DOP value). The number of bins represents a trade-off between having fewer wide bins with more data, and having more finely spaced bins with less data. Results for the single-transition binning are plotted in Figure 4.5. Comparing Figures 4.4 and 4.5, it becomes apparent that the Gaussian that bounds the worst data bin in Figure 4.5 has a smaller standard deviation (6 m) than the Gaussian that bounds the worst bin in Figure 4.4 (7 m). Arguably, the empirical histogram model improved (became more linear) when more data became available, suggesting a possible reason to favor low bin resolution (few bins) over high bin resolution (more bins).

Another alternative approach to selecting bin transitions might be one based on operational considerations. In other words, it may be possible to determine the highest DOP that still ensures the navigation capability can be used. A bin transition could be mapped to this DOP level (or multiple bins to the DOP levels for multiple applications). This strategy would provide a rational and useful means to couple bin transition values to specific applications. As such, this approach to binning will be considered in more depth in the future.


Figure 4.5: Here the innermost bin contains 90% of the data, while the outermost bin contains 10%.

4.5.4 DOP vs. Weighted DOP

GNSS position estimates can often be refined with the inclusion of a weighting matrix in the least-squares equation. Typically, the weighting matrix is calculated from a range-domain error model, where the noise level on each satellite is modeled as a function of the elevation angle and where all satellite errors are modeled as independent [28]. To assess how well Weighted DOP (WDOP) serves as an alternative classifier, the above analysis procedure can be repeated with two modifications. First, errors should be based on a weighted position solution (presuming that position estimates are more accurate when the weighting is introduced). Second, WDOP should be used as the FOM classifier instead of DOP.

When applied to the CORS data analyzed here, WDOP classification was not evidently more effective than DOP classification. If position errors were made smaller by the weighting process, WDOP also became smaller, so the net effect on binning was negligible. Furthermore, DOP and WDOP values were quite similar in most cases (except extreme cases). This result is evidenced by Figure 4.6, which plots WDOP as a function of DOP.

Figure 4.6: A comparison of DOP and weighted DOP. Note that the two quantities are strongly proportional and well described by a linear model.

4.6 Discussion

Direct PDB is motivated by the fact that conservative range-domain error models do not guarantee conservative position-domain models. A trade-off is required between assuring the highest possible integrity and providing the highest possible availability.

Direct PDB offers the possibility of accommodating correlated and heavy-tailed satellite ranging-error distributions and thereby tightening PLs without modifying the integrity risk.


This observation makes Direct PDB particularly interesting for Satellite Based Augmentation Systems, where the difference between protection levels and actual error performance is much higher than in other navigation systems [64].

The experimental results presented in the previous section confirm the hypothesis that DOP is a reasonable classifier for position error. Under nominal, all-in-view conditions the DOP will not rise above 2 very often; on the particular day and at the particular location where the data set was acquired, DOP is above 2 only for about 3.32 % of the time. In this sense, Figures 4.3, 4.4, and 4.5 do not reflect realistic availability percentages.

Before implementing Direct PDB, many questions will need to be resolved. Accounting for user receiver equipment and for distance between stations are two examples of additional issues not resolved here. Differing user equipment should clearly impact the PDB, since better quality user equipment typically reduces errors. Distance between stations is relevant because ionosphere decorrelation, among other factors, will increase errors at larger distances (keeping in mind that the data from the previous section were compiled for two stations separated by a mere 5.96 km).

4.7 Summary

The chapter proposes a new method to provide DGNSS error bounds directly in the position domain, as an alternative to the traditional range-domain bounding implemented in conventional augmentation systems.


In the proposed methodology error behavior is characterized by a Figure of Merit (FOM), which reflects different error regimes. Based on preliminary data, we propose that DOP is a useful metric for classifying raw data into histograms from which protection levels can be constructed.

4.8 Appendix I

This appendix was added to prove that:

Theorem: For a mixture of zero-mean Gaussians M1([σ1, σ2, . . . , σN]), the smallest σbnd for which N(0, σbnd) overbounds N(0, σN) also overbounds M1.

Throughout the proof, the following notation convention will be observed:

1. A Gaussian probability distribution N(µ, σ) with mean µ and standard deviation σ will have an associated probability density function (pdf) p(x; µ, σ) and a cumulative distribution function (CDF) Φ(x; µ, σ).

2. The argument of a pdf or CDF (x) will be referred to as “error”.

3. A mixture of N zero-mean Gaussians will be denoted as
   M([σ1, σ2, . . . , σN]) = \frac{1}{M} \sum_{i=1}^{N} L_i \, N(0, σ_i), \quad with M = \sum_{i=1}^{N} L_i.

Definition 1: A CDF with zero mean Φ2 (x) overbounds another zero-mean CDF Φ1 (x) if for any given cumulative probability T , the magnitude of the associated error |x2 | corresponding to Φ2 is larger than or equal to that corresponding to Φ1 .

T = Φ2 (x2 ) = Φ1 (x1 ) ⇐⇒ |x2 | ≥ |x1 |


Henceforth, "A overbounds B" will be written as A ≻ B or B ≺ A.

Property 1: A zero-mean CDF Φ2 overbounds another zero-mean CDF Φ1 if they satisfy:

Φ2 ≤ Φ1 ∀x > 0

Φ2 ≥ Φ1 ∀x < 0

For any zero-mean CDF, Φ(0) = 1/2. CDFs are monotonically non-decreasing functions, as their derivatives are by definition greater than or equal to zero. For any x_2 > 0, having T_2 = Φ_2(x_2) below T_1 = Φ_1(x_2) requires that Φ_1(x_1) = T_2 be satisfied at some x_1 < x_2.

Proof: Let T_2 be a cumulative probability which the CDF Φ_2 satisfies at x = x_2; thus T_2 = Φ_2(x_2). Similarly, let x_1 be the point for which T_2 = Φ_1(x_1). If |x_2| ≥ |x_1|, then Φ_2 overbounds Φ_1, by Definition 1. But if the conditions of Property 1 hold, then at x_2, due to the monotonicity of CDFs, the probability value of Φ_1 must be greater than or equal to T_2 if x_2 > 0 and less than or equal to T_2 if x_2 < 0, which forces |x_2| ≥ |x_1|. □

Lemma 1: If σ2 ≥ σ1 > 0, then N(0, σ2) ≻ N(0, σ1).

Proof: For any zero-mean Gaussian, the CDF at zero error is Φ(0; 0, σ) = 0.5 for all σ > 0.


For any other error value x_0 > 0, consider the probability that the error exceeds x_0 in magnitude,

P(|x| > x_0; σ) = 1 - \int_{-x_0}^{x_0} \frac{1}{\sqrt{2\pi\sigma^2}} e^{-x^2/(2\sigma^2)}\,dx = 1 - \mathrm{erf}\!\left(\frac{x_0}{\sqrt{2\sigma^2}}\right),

where erf(a) = -erf(-a) has been used to combine the two limits of integration. The dependence of this quantity on σ is

\frac{\partial}{\partial\sigma}\left[1 - \mathrm{erf}\!\left(\frac{x_0}{\sqrt{2\sigma^2}}\right)\right] = \sqrt{\frac{2}{\pi}}\,\frac{x_0}{\sigma^2}\, e^{-x_0^2/(2\sigma^2)},

which is strictly positive for all x_0 > 0. For a zero-mean Gaussian, Φ(x_0; 0, σ) = 1 - \tfrac{1}{2} P(|x| > x_0; σ) for x_0 > 0 and Φ(x_0; 0, σ) = \tfrac{1}{2} P(|x| > |x_0|; σ) for x_0 < 0, so the growth of the tail probability with σ implies

Φ(x_0; 0, σ_2) ≤ Φ(x_0; 0, σ_1) for all x_0 > 0,
Φ(x_0; 0, σ_2) ≥ Φ(x_0; 0, σ_1) for all x_0 < 0,

whenever σ_2 ≥ σ_1, and so, by Property 1, N(0, σ_2) ≻ N(0, σ_1). □

Corollary: For the special case σ2 = σ1, N(0, σ2) ≻ N(0, σ1) and N(0, σ1) ≻ N(0, σ2).


Figure 4.7: Sketch of points used in the proof of Property 1.

Theorem 1: For a mixture of zero-mean Gaussians M1([σ1, σ2, . . . , σN]) with σ1 ≤ σ2 ≤ . . . ≤ σN, any N(0, σbnd) satisfying N(0, σbnd) ≻ N(0, σN) also satisfies N(0, σbnd) ≻ M1.

Proof: M1 is a weighted mean of Gaussians and no individual weight exceeds unity (Li/M ≤ 1), so its CDF is Φ_{M1}(x) = \sum_{i=1}^{N} (L_i/M)\, Φ(x; 0, σ_i). By Lemma 1, Φ(x; 0, σ_i) ≥ Φ(x; 0, σ_N) for x > 0 and Φ(x; 0, σ_i) ≤ Φ(x; 0, σ_N) for x < 0, and the same inequalities therefore hold between Φ_{M1} and Φ(·; 0, σ_N). Any Gaussian that overbounds the heaviest component N(0, σ_N) of the mixture satisfies the opposite inequalities with respect to Φ(·; 0, σ_N) (Property 1), and hence also overbounds M1. □
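A quick numerical spot-check of this result is sketched below; the component weights and standard deviations are arbitrary, and the script merely verifies the CDF inequalities of Property 1 on a grid rather than proving anything:

import numpy as np
from scipy import stats

# Gaussian mixture with widest component sigma_N = 2.0 (assumed values).
sigmas = np.array([0.5, 1.0, 2.0])       # component standard deviations
weights = np.array([0.6, 0.3, 0.1])      # L_i / M
sigma_bnd = 2.5                          # any sigma_bnd >= sigma_N qualifies

x = np.linspace(-10.0, 10.0, 2001)
mix_cdf = sum(w * stats.norm.cdf(x, scale=s) for w, s in zip(weights, sigmas))
bnd_cdf = stats.norm.cdf(x, scale=sigma_bnd)
ok = (np.all(bnd_cdf[x > 0] <= mix_cdf[x > 0]) and
      np.all(bnd_cdf[x < 0] >= mix_cdf[x < 0]))
print("Property 1 conditions satisfied:", bool(ok))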


Definition 2: Given a sample from a mixture of Gaussians M1([σ1, σ2, . . . , σN]), with empirical distribution M1(x), N(0, σbnd) ≻ M1 if for every entry xi in the sample:

Φ(xi; 0, σbnd) ≥ M1(xi) if xi < 0,

Φ(xi; 0, σbnd) ≤ M1(xi) if xi > 0.

Theorem 2: Given M1, there exists σbnd such that N(0, σbnd) ≻ M1.

Proof: For the first sample x1, define σ1 = min σ such that Φ(x1; 0, σ) satisfies the condition of Definition 2 at x1. For each subsequent xi in the sample, find the corresponding σi in the same way. Define σbnd := max_i(σi). By Lemma 1, widening a Gaussian only strengthens the condition of Definition 2 at every point, so N(0, σbnd) satisfies it at all xi and therefore N(0, σbnd) ≻ M1. □
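The construction in Theorem 2 translates almost directly into code; the sketch below applies it to a synthetic Gaussian sample (the empirical-CDF convention and the handling of near-median points are illustrative assumptions, not the dissertation's exact procedure):

import numpy as np
from scipy import stats

def sigma_bnd(samples):
    # For each sample point, find the smallest Gaussian sigma satisfying the
    # overbounding condition of Definition 2 at that point, then take the
    # maximum over all points. Points near the median, where no finite sigma
    # can satisfy the condition against a finite-sample empirical CDF, are
    # skipped in this sketch.
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    p_upper = np.arange(1, n + 1) / n     # conservative empirical CDF for x_i < 0
    p_lower = np.arange(0, n) / n         # conservative empirical CDF for x_i > 0
    candidates = []
    for xi, plo, phi in zip(x, p_lower, p_upper):
        if xi > 0 and plo > 0.5:
            candidates.append(xi / stats.norm.ppf(plo))   # Phi(xi; sigma) <= plo
        elif xi < 0 and phi < 0.5:
            candidates.append(xi / stats.norm.ppf(phi))   # Phi(xi; sigma) >= phi
    return max(candidates)

rng = np.random.default_rng(4)
data = rng.normal(0.0, 2.0, size=10000)
print(f"sigma_bnd = {sigma_bnd(data):.2f} m (sample sigma = {data.std():.2f} m)")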

Chapter 5

Summary and Vision

The work underlying this thesis was intended to address some of the shortcomings of aircraft navigation systems, with the aim of making more efficient use of the airspace. The challenges identified in chapter 1 concern issues that have not been solved elsewhere but have the potential to increase the capacity of the airspace; each solution on its own already represents a potential benefit. Considering the three contributions as part of a single architecture concept, much more complex than each of the systems proposed in chapters 2-4, yields insights that point to the potential synergy of combining the three methods developed.

The satellite clock monitor proposed in chapter 2 was specifically designed to overcome outages in the communication between taxiing aircraft and the GBAS ground facility. The monitor was fine-tuned to work under very specific circumstances, such as a time-to-alert of 10 s. The method reduces the ranging error by a factor of 100, compared to existing algorithms at the same probability of missed detection. The same monitoring principle could be applied to satellite clock faults during a precision approach or a landing, making the system more sensitive to smaller clock faults and thus tightening the protection levels by detecting clock anomalies earlier in their development, for example if the time-to-alert were 2 s. Furthermore, the principle of specialized autonomous monitoring could be extended to a variety of other “fast” faults that threaten the integrity of GBAS.

Mitigating the threat from ionosphere anomalies can be accomplished by a network of airborne multi-frequency GNSS receivers if they are able to share measurement data, as discussed in chapter 3. Such a collaborative monitor is more sensitive to ionospheric activity than one based at a single location or at a set of nearby locations. The collaborative approach is shown to detect gradients of 400 mm/km while meeting a 10−9 integrity requirement. Guaranteeing a higher sensitivity to ionosphere faults extends the range of validity of GBAS corrections, which could enable corrections computed at one airport to be used by aircraft landing at a different airport, or it could reduce the protection levels, allowing aircraft to fly closer together than currently possible. The monitoring method applies not only to ionosphere faults but to any other type of “slow” fault with a large component of spatial dependency, as such faults are detectable from different locations at the same time and can therefore be addressed with a collaborative approach.

Tightening protection levels by changing the error bounding process, as proposed in chapter 4, also has the potential to enable a larger area of validity for GBAS corrections and to reduce separation requirements. The results show that for a user at approximately 6 km from a GBAS facility, the vertical position error can be bounded by a zero-mean Gaussian model of σ = 3 m, 90% of the time. The main focus of this chapter is not to change the errors themselves, but to improve the quality of error estimates, thus allowing for more accurate modeling and ultimately tightening protection levels.


Since each method on its own has the potential to tighten the protection levels it seems like a logical next step to assess the impact of combining all three methods into a single system.

5.1 Vision for Future Work

The corrections computed by operational Satellite-Based Augmentation Systems (SBAS) consistently achieve a high quality [64] that has the potential to support category III (CatIII) operations without requiring a local ground facility. I envision a system that would use differential corrections computed by SBAS, but with enhanced fault detection provided by specialized autonomous monitors (such as [65]) and collaborative monitors (such as [15]). If feasible, such a system would have two major advantages over ground-based augmentation systems (GBAS): It would reduce the installation costs generally associated with GBAS, and it would be more robust to radio-frequency interference (RFI) than GBAS.

The approach proposed in this thesis would make do without a local ground facility. In this sense, the proposed approach is different from the notion of a Local Airport Monitor (LAM), which has previously been explored as a means to leverage SBAS corrections for precision landing applications [66]. In LAM, SBAS corrections were processed at a local ground station, checked for integrity with local ground-based monitors, and re-broadcast to users via the VDB. Instead, in the proposed system users would use SBAS-computed differential corrections and perform their own integrity checks on board, autonomously for fast faults, or via networked collaboration for slow faults.

The challenges involved in supporting Cat-III service with SBAS differential corrections stem from two requirements: Time-to-Alert (TTA) and Vertical Alert Limit (VAL). SBAS currently provides LPV-200 service, which operates with a TTA greater than 6 s, while Cat-III requires a 1 s time-to-alert. The Vertical Alert Limit for LPV-200 service is 35 m; by contrast, Cat-III service requires a VAL of 10 m. Availability would be abysmal if SBAS were to be used for Cat-III approaches without modification, as typical SBAS protection levels very often exceed 10 m.

The proposed method is based on two capabilities: local monitoring for off-nominal events and tight bounding of nominal errors. Local monitoring is essential to meet TTA requirements and to achieve higher spatial resolution (e.g. for ionosphere anomaly detection) than is possible with SBAS alone. Presuming that off-nominal events can be excluded through supplemental monitoring, a much tighter protection level for the SBAS-corrected position solution is possible, particularly if the position bound is formulated directly in the position domain. (By contrast, typical protection levels are based on root-sum-squared range-domain error bounds, an approach which introduces significant excess conservatism.)

The notion of achieving Cat-III using SBAS corrections is not unreasonable given actual SBAS performance under nominal conditions. According to [64], WAAS is able to provide a service with errors below 2 m 95% of the time. Assuming a truly Gaussian distribution and extrapolating out to a 10−9 integrity bound implies a protection level of approximately 6 m. Thus, margin exists relative to a 10 m VAL to cover inflation needed, for instance, to account for heavy distribution tails [59]. Though supported by operational SBAS data, the notion of satisfying a 10 m VAL seems very aggressive relative to the protection levels provided by current SBAS systems. In effect, the proposed method would remove conservatism in existing protection level calculations that is needed for the following reasons.

• To account for a long time-to-alert: SBAS-broadcast error models (σs) are inflated in part to provide integrity given a relatively long time-to-alert of 10 seconds. A significant reduction in TTA is required for Cat-III operations and will reduce the potential for faults to develop over time before detection. I contend that a fast time-to-alert may be possible using aircraft based fault monitoring methods, such as those discussed in [65].

• To account for ionosphere under-sampling: Another reason broadcast SBAS error models are inflated is to account for uncertainty in the ionosphere correction model, which must be interpolated between nodes spaced regularly on a latitude-longitude grid. Conservatism in SBAS broadcast sigmas, to account for this uncertainty, can be mitigated, potentially, by collaborative aircraft monitoring. For instance, a networked ionosphere monitor can provide sensitivity to anomalies on a smaller scale than SBAS monitoring networks [15].

• To account for range-domain over-conservatism: Yet another reason why SBAS broadcast error models are conservative is that they have been defined by Indirect PDB methods. As discussed in chapter 4, direct PDB methods have significant potential to reduce conservatism in order to obtain tighter protection levels (and potentially higher availability).

In short, with the addition of appropriate monitoring and formal bounding methods, it seems possible that SBAS might provide Cat-III service with high availability. A detailed feasibility assessment will require extending theories presented in this thesis and other papers cited above; also, a large data collection effort would be required. Though the effort would be large, the potential benefit of this concept is enticing: Cat-III service on a continental scale without requiring GAST-D ground facilities. This would represent a dramatic reduction in the costs involved in providing Cat-III service to airports within SBAS coverage.
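For orientation, the extrapolation quoted above can be reproduced with a few lines; this is a rough check under the stated Gaussian assumption, not a protection-level computation:

from scipy import stats

# Back-of-the-envelope version of the extrapolation discussed above, assuming
# the nominal vertical error is truly zero-mean Gaussian (which real SBAS
# errors are not guaranteed to be).
err_95 = 2.0                                     # m, 95th-percentile vertical error [64]
sigma = err_95 / stats.norm.ppf(1 - 0.05 / 2)    # two-sided 95%  ->  1.96 sigma
k = stats.norm.ppf(1 - 0.5e-9)                   # two-sided 1e-9 ->  about 6.11 sigma
print(f"implied sigma    = {sigma:.2f} m")
print(f"1e-9 level bound = {k * sigma:.2f} m (to be compared with a 10 m VAL)")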

5.2 Conclusions

The potential impact of providing Cat-III equivalent service (VPL ≤ 10 m, TTA = 2 s, p(HMI) = 10−9) using no further external infrastructure than what is already provided (e.g. GNSS, SBAS) would be a significant expansion of current airspace capacity and a reduction in operating costs. This must be considered in conjunction with current developments in Europe and the USA, where the entire concept of airspace management is under revision through projects like NextGen and SESAR. Both of these initiatives are long-term projects (10-15 year scope) that plan a complete overhaul of the airspace and aim at making it more efficient.

First results have already demonstrated how SBAS-based Cat-I approaches enable airlines to save fuel by allowing curved approaches at various pilot locations [67]. Further tightening protection levels, as could be achieved using the methods described in this thesis, would enhance this flexibility, which would in turn translate into greater fuel savings and a reduction in carbon emissions, by reducing the need for circling under adverse visibility and by enabling more direct flight routes.

Tighter protection levels would also yield favorable results in the potential reduction of the installation and maintenance costs associated with operating ILS installations. These installations currently require significant investments from airports, but may be rendered obsolete by DGNSS-based systems. In fact, DGNSS-based systems have become a common tool in automating the inspection of ILS installations.

In addition to the applications in civil aviation, safety-of-life services with such tight integrity bounds may have application in a variety of situations that have not been anticipated. Most notably, autonomous automobiles will likely benefit from aviation-grade navigation services becoming available on a large scale.


Bibliography

[1] Federal Aviation Administration. Aerospace Forecast Fiscal Years 2013-2033. Technical report, US Department of Transportation, 2013.
[2] Federal Aviation Administration. Review of Accident Data. 2012 Aviation Statistics. Online Data Set, 2013.
[3] Lee, Y. C. and Shively, C. A. Feasibility of GPS III integrated with an inertial system to provide CAT IIIb services. In Proc. of ION GNSS 17th International Technical Meeting of the Satellite Division, pages 267–284, 2007.
[4] Rife, J., Pullen, S., and Enge, P. Evaluating Fault-Mode Protection Levels at the Aircraft in Category III LAAS. In Proc. of ION GNSS 20th International Technical Meeting of the Satellite Division, 2007.
[5] Euiho Kim, Todd Walter, and J. David Powell. WAAS-Based Flight Inspection System. In Proc. ION ATM 2007, 2007.
[6] Pullen, S. and Rife, J. GNSS Applications and Methods, chapter 10: Aviation Applications, pages 245–265. Artech House, 2009.
[7] Young Shin Park, Sam Pullen, and Per Enge. Enabling the LAAS Differentially Corrected Positioning Service (DCPS): Design and Requirements Alternatives. In ION GNSS 2009, 2009.


[8] Seebany Datta-Barua, Jiyun Lee, Sam Pullen, Ming Luo, Alexandru Ene, Di Qiu, Godwin Zhang, and Per Enge. Ionospheric Threat Parameterization for Local Area Global-Positioning-System-Based Aircraft Landing Systems. Journal of Aircraft, 47:1141–1151, 2010.
[9] Jason Rife, Sam Pullen, Per Enge, and Boris Pervan. Paired Overbounding for Nonideal LAAS and WAAS Error Distributions. Aerospace and Electronic Systems, IEEE Transactions on, 42(4):1386–1395, 2006.
[10] Sam Pullen and Grace Gao. GNSS Jamming in the Name of Privacy. Inside GNSS, March/April 2012.
[11] Anja Grosch, Boubeker Belabbas, and Michael Meurer. Redundant Inertial-Aided GBAS for Civil Aviation. In Navitec, 2010.
[12] Seebany Datta-Barua, Patricia Doherty, Susan Delay, Thomas Dehel, and John A. Klobuchar. Ionospheric Scintillation Effects on Single and Dual Frequency GPS Positioning. In Proc. ION GPS, 2003.
[13] Osechas, O. GPS Satellite Clock Excessive Acceleration Detection for DCPS Users of GBAS. In Proc. ION GNSS 2011, 2011.
[14] Okuary Osechas, Pratap Misra, and Jason Rife. Carrier-Phase Acceleration RAIM for GNSS Satellite Clock Fault Detection. NAVIGATION, 59(3):221–235, Fall 2012.
[15] Okuary Osechas and Jason Rife. Distributed Ionosphere Monitoring by Collaborating Mobile Receivers. In Proc. ION GNSS, 2012.
[16] Okuary Osechas and Jason Rife. Distributed Ionosphere Monitoring by Collaborating Mobile Receivers [to appear]. Aerospace and Electronic Systems, IEEE Transactions on, 2014.


[17] Okuary Osechas and Jason Rife. Tightening DGNSS Protection Levels Using Direct Position-Domain Bounding. In ION GNSS, 2013.
[18] Misra, P. and Enge, P. Global positioning system: Signals, measurements and performance. Ganga-Jamuna Press, 2006.
[19] SC-159. Minimum Aviation System Performance Standards for the Local Area Augmentation System (LAAS). Technical report, RTCA, Inc., 2001.
[20] Pullen, S. and Rife, J. GNSS Applications and Methods, chapter 4: Fundamentals of GNSS III: Differential GNSS, pages 87–120. Artech House, 2009.
[21] Serrano, L., Kim, D., and Langley, R.B. A Single GPS Receiver as a Real-Time, Accurate Velocity and Acceleration Sensor. In Proc. of ION GNSS 17th International Technical Meeting of the Satellite Division, pages 2021–2034, 2004.
[22] S. L. Kennedy. Precise Acceleration Determination from Carrier-Phase Measurements. Navigation, 50(1):9–19, 2003.
[23] Misra, P., Pratt, M., Muchnik, R., and Manganis, B. A General RAIM Algorithm Based on Receiver Clock. In Proc. ION GPS-95, pages 1941–1948, 1995.
[24] Parkinson, B. W. and Spilker, J. J. Jr., editors. Global positioning system: theory and applications, chapter 5: Receiver Autonomous Integrity Monitoring. Progress in Aeronautics and Astronautics, 1996.
[25] Simon, D. Optimal State Estimation. Wiley, 2006.
[26] SC-159. Minimum Operational Performance Standards for Airborne Supplemental Navigation Equipment Using the Global Positioning System. Technical report, RTCA, Inc., 2001.
[27] The MathWorks Inc. Statistics Toolbox For Use With Matlab, 2011.


[28] Gleason, S. and Gebre-Egziabher, D. GNSS Applications and Methods, chapter 3: GNSS Navigation: Estimating Position, Velocity, and Time, pages 55–86. Artech House, 2009.
[29] National Geodetic Survey. ZOA1 Site Information Form. Website: http://www.ngs.noaa.gov/cgi-cors/corsage.prl?site=ZOA1, Aug. 2011.
[30] Rife, J. Overbounding Missed-Detection Probability for a Chi-Square Monitor. In Proc. ION GNSS 2012 - To appear, 2012.
[31] International Civil Aviation Organization Navigation Systems Panel (NSP). Preliminary ground system monitor validation results. Technical report, ICAO, NSP, 2009.
[32] SC-159. The Role of Global Navigation Satellite Systems (GNSS) in Supporting Airport Surface Operations. Technical report, RTCA, Inc., 1998.
[33] Beer, F. P. and Johnston, E. R. Jr. Vector Mechanics for Engineers. Dynamics. McGraw-Hill, 2003.
[34] Global Navigation Satellite System Program Office. GNSS Evolutionary Architecture Study. Phase I - Panel Report. Technical report, Federal Aviation Administration, 2008.
[35] Patrick Y. Hwang, Gary A. McGraw, and John R. Bader. Enhanced Differential GPS Carrier-Smoothed Code Processing Using Dual-Frequency Measurements. NAVIGATION: Journal of The Institute of Navigation, 46(2), 1999.
[36] Hiroyuki Konno. Dual-Frequency Smoothing for CAT III LAAS: Performance Assessment Considering Ionosphere Anomalies. In ION GNSS, 2007.
[37] Hiroyuki Konno, Sam Pullen, Jason Rife, and Per Enge. Evaluation of Two Types of Dual-Frequency Differential GPS Techniques under Anomalous Ionosphere Conditions. In ION ITM, 2006.


[38] Jiyun Lee, Jiwon Seo, Young Shin Park, Sam Pullen, and Per Enge. Ionospheric Threat Mitigation by Geometry Screening in Ground-Based Augmentation Systems. Journal of Aircraft, 48(4), 2011.
[39] D.V. Simili and B. Pervan. Code-Carrier Divergence Monitoring for the GPS Local Area Augmentation System. In Position, Location, And Navigation Symposium, 2006 IEEE/ION, pages 483–493, 2006.
[40] Ming Luo, Sam Pullen, Todd Walter, and Per Enge. Ionosphere Spatial Gradient Threat for LAAS: Mitigation and Tolerable Threat Space. In ION National Technical Meeting, 2004.
[41] Samer Khanafseh, Sam Pullen, and John Warburton. Carrier Phase Ionospheric Gradient Ground Monitor for GBAS with Experimental Validation. NAVIGATION: Journal of The Institute of Navigation, 59(1), 2012.
[42] Matt Harris and Tim Murphy. GAST D Ionosphere Anomaly Simulation Including Airplane Speed Change Effect on Monitors. Technical report, NSP July 09 CSG/Flimsy 2, July 2009.
[43] R. T. Tsunoda, R. C. Livingston, J. P. McClure, and W. B. Hanson. Equatorial plasma bubbles: Vertically elongated wedges from the bottomside F layer. Journal of Geophysical Research: Space Physics, 87(A11):9171–9180, 1982.
[44] Jason Rife. Overbounding Chi-square Probability Distributions. In Proc. ION GNSS, 2012.
[45] Mathieu Joerger, Fang-Cheng Chan, and Boris Pervan. Addressing the Limitations of Existing RAIM Methods by Fully Exploiting Measurement Redundancy. In Proc. ION GNSS, 2012.


[46] Michael Felux, Patrick Remi, and Simona Circiu. Potential for the Use of GBAS at Close-By Airports. In ION GNSS, 2013.
[47] L. Sparks, A. Komjathy, and A.J. Mannucci. The Dependence of WAAS Ionospheric Error Bounds upon the Spatial Distribution of GPS Measurements. In ION National Technical Meeting, 2003.
[48] Jing Jing, S. Khanafseh, Fang-Cheng Chan, S. Langel, and B. Pervan. Detecting ionospheric gradients for GBAS using a null space monitor. In Position Location and Navigation Symposium (PLANS), 2012 IEEE/ION, pages 1125–1133, April 2012.
[49] Ming Luo, Sam Pullen, Alexandru Ene, Di Qiu, Todd Walter, and Per Enge. Ionosphere Threat to LAAS: Updated Model, User Impact, and Mitigations. In ION GNSS, 2004.
[50] J. Rife. Collaborative Vision-Integrated Pseudorange Error Removal: Team-Estimated Differential GNSS Corrections with no Stationary Reference Receiver. Intelligent Transportation Systems, IEEE Transactions on, 13(1):15–24, March 2012.
[51] J. Rife. Collaboration-Enhanced Receiver Integrity Monitoring With Common Residual Estimation. In Proc. IEEE/ION Position, Location, and Navigation Symposium (PLANS 2012), 2012.
[52] Juan Blanch, Todd Walter, and Per Enge. A New Ionospheric Estimation Algorithm for SBAS Combining Kriging and Tomography. In ION NTM, 2004.
[53] Lawrence Sparks, Attila Komjathy, and Anthony J. Mannucci. Estimating SBAS Ionospheric Delays Without Grids: The Conical Domain Approach. In ION National Technical Meeting, 2004.
[54] SC-159. DO-229: Minimum Operational Performance Standards for Global Positioning System/Wide Area Augmentation System Airborne Equipment. Technical report, RTCA, Inc., 2001.


[55] SC-159. Minimum Operational Performance Standards for Airborne Supplemental Navigation Equipment Using the Global Positioning System. Technical report, RTCA, Inc., 2001.
[56] Bruce DeCleene. Defining Pseudorange Integrity - Overbounding. In Proc. ION GPS, 2000.
[57] H. Veerman, A. van Kleef, F. Wokke, B. Ober, C. Tiberius, S. Verhagen, A. Bos, and A. Mieremet. A Tool for GNSS Integrity Verification Based on Statistical Extreme Value Theory. In Proc. ION ITM, 2012.
[58] J. Rife and B. Pervan. Overbounding revisited: Discrete error-distribution modeling for safety-critical GPS navigation. Aerospace and Electronic Systems, IEEE Transactions on, 48(2):1537–1551, 2012.
[59] Jiyun Lee, S. Pullen, and P. Enge. Sigma Overbounding using a Position Domain Method for the Local Area Augmentation of GPS. Aerospace and Electronic Systems, IEEE Transactions on, 45(4):1262–1274, 2009.
[60] Ronald Braff. A Method for LAAS Fault-Free Error Overbounding Using A Position Domain Monitor. In Proc. ION NTM, 2003.
[61] Ronald Braff and Curtis Shively. A Method of Over Bounding Ground-Based Augmentation System (GBAS) Heavy Tail Error Distributions. In Proc. ION ITM, 2004.
[62] Irfan Sayim and Boris Pervan. LAAS Ranging Error Overbound for Non-Zero Mean and Non-Gaussian Multipath Error Distributions. In Proc. ION Annual Meeting, 2003.
[63] Tim Murphy, Matt Harris, Curtis Shively, Laurent Azoulai, and Mats Brenner. Fault Modeling for GBAS Airworthiness Assessments. NAVIGATION, 59(2):145–161, 2012.
[64] WAAS Test Team. Wide Area Augmentation System Performance Analysis Report #43. Technical report, Federal Aviation Administration, 2012.


[65] Okuary Osechas, Jason Rife, and Pratap Misra. Carrier-Phase Acceleration RAIM for GNSS Satellite Clock Fault Detection. NAVIGATION, 59(3):221–235, 2012.
[66] Jason Rife, Sam Pullen, Todd Walter, Eric Phelts, Boris Pervan, and Per Enge. WAAS-Based Threat Monitoring for a Local Airport Monitor that Supports Category I Precision Approaches. In Proc. ION PNT, 2006.
[67] Michael Huerta. Report on Status of Greener Skies Project. Technical report, Federal Aviation Administration, 2013.
