
Collaborative Vision-Integrated Pseudorange Error Removal: Team-Estimated Differential GNSS Corrections with no Stationary Reference Receiver

Rife, J.; Tufts Univ., Medford, MA, USA

This paper appears in: IEEE Trans. on Intelligent Transportation Systems, vol. 13, no. 1, pp. 15-24, March 2012. ISSN: 1524-9050. INSPEC Accession Number: 12557765. Digital Object Identifier: 10.1109/TITS.2011.2178832. Date of Published Version: 27 February 2012.

This is a pre-print. The final version is available via the DOI above.


Collaborative Vision-Integrated Pseudorange Error Removal: Team-Estimated Differential GNSS Corrections with no Stationary Reference Receiver

Jason Rife, Member, IEEE

Abstract—This paper presents an approach for generating GNSS differential corrections by distributing GNSS and geo-referenced vision measurements through a vehicle-to-vehicle (V2V) communications network. Conventionally, high-quality differential GNSS corrections are generated from a stationary reference receiver in close proximity to a set of mobile users. The proposed method, called Collaborative Vision Integrated Pseudorange Error Removal (C-VIPER), instead generates differential corrections using data from moving vehicles, thus eliminating the need for an infrastructure of stationary receivers. An important feature of the proposed algorithm is that individual differential corrections are computed for each satellite, so that corrections can be shared among users with different satellites in view. As demonstrated in simulation, measurement sharing significantly improves positioning accuracy both in the cross-track direction, where the quality of visual lane-boundary measurements is high, and in the along-track direction, where the quality is low. Furthermore, because measurements are shared among many vehicles, the networked solution is robust to vision-sensor dropouts that may occur for individual vehicles.

Index Terms—Collaborative Navigation, Distributed Vision, Geographic Information System (GIS), Global Positioning System (GPS), Global Navigation Satellite System (GNSS)

I. INTRODUCTION

ADVANCED Driver Assistance Systems (ADAS), such as collision and lane-departure warning systems, have great potential to enhance the safety and efficiency of automotive transportation. Implementing highly automated systems in mass-produced vehicles will require new navigation capabilities with higher levels of accuracy and reliability than currently possible using existing consumer-grade equipment. This paper proposes a novel navigation capability to enhance a vehicle's estimate of its geo-referenced position in order to support emerging ADAS technologies.

Manuscript received February XX, 2011. The author acknowledges Tufts University for providing internal seed funding for this project. The author also acknowledges Xuan Xiao and Courtney Mario, whose constructive comments and discussions improved this manuscript. J. Rife is with Tufts University, Medford, MA 02155 USA (e-mail: [email protected]).

More specifically, this paper proposes a low-cost positioning capability based on sharing visual estimates of road-boundary locations and Global Navigation Satellite System (GNSS) measurements over a Vehicle-to-Vehicle (V2V) network. GNSS receivers, designed to track U.S. Global Positioning System (GPS) satellites or Russian GLONASS satellites, are already common equipment in production vehicles. Camera systems are widely available, and V2V communications systems are poised to become standard equipment in the near future [1]. A system of these sensor components (GNSS receiver, camera, and V2V transceiver) would be orders of magnitude cheaper than many alternatives, such as scanning-laser based navigation [2]-[4].

GNSS and camera systems are highly complementary. GNSS offers high availability, in the sense that a GNSS position fix can nearly always be computed outdoors, even if a number of GNSS satellites are blocked by buildings or terrain features. By contrast, current vision-based positioning methods are subject to occasional dropouts [5], either because tracking quality is poor (e.g., in bad weather) or because measured features cannot be correlated to a geo-referenced position using a Geographic Information System (GIS) database. Fusing vision and GNSS measurements among multiple vehicles has the potential to achieve GNSS-like availability and vision-like accuracy, about 0.5 m (95%) for vision systems in contrast with 3 m (95%) for a GPS receiver aided by the Wide-Area Augmentation System (WAAS) [6].

Given the potential benefits of integrating vision and GNSS measurements, many single-user fusion methods have been proposed, both for automotive [7]-[10] and more general applications [11]-[12]. To date, little research has addressed collaborative sharing of GNSS and geo-referenced vision data among multiple vehicles. Moreover, existing collaborative fusion work has focused on integrating GNSS position outputs [13] rather than the more information-rich raw measurements (GNSS pseudoranges) acquired for each visible satellite. This paper proposes a novel approach for collaboratively sharing geo-referenced vision and GNSS pseudoranges to calibrate common-mode pseudorange errors for each satellite in view. Such a system would be similar to conventional Differential GNSS (DGNSS), but with no need for a

stationary reference antenna. Conventional DGNSS assesses pseudorange errors for each satellite using a stationary, surveyed reference antenna [14]-[15] and broadcasts error corrections to many users (who may each see a different set of satellites). Satellite-specific errors removed by DGNSS include clock calibration, ephemeris definition, ionosphere delays, and troposphere delays [16]. DGNSS corrections are more accurate when the user and reference antennas are in a local area, separated by only a short distance. Wide-area DGNSS networks, featuring only a few reference antennas spread over vast, continent-scale regions [17]-[18], are still of interest because of the prohibitive costs of deploying local-area antennas over a large coverage region. In essence, the proposed methodology aims to achieve the benefits of a local-area DGNSS system wherever cars congregate, thus avoiding the costs of deploying a large array of stationary, local-area DGNSS antennas.

A key challenge in generating DGNSS corrections using automotive vision sensors is that vision processing of road scenes is more effective in the lateral direction (cross-track) than in the car's direction of travel (along-track). Generally, for road scenes, the highest quality visual features are those aligned with the roadway. These line features include painted lines, rows of lane markers, and physical road edges [19]-[23]. An example of line features is illustrated in Fig. 1. Line features are particularly attractive for navigation because they are relatively easy to identify uniquely and to correspond with entries in a GIS database. (Such a database is assumed to exist, to geo-reference visual features for comparison with GNSS.) The disadvantage of line features is that the distance to a line can be computed in only one direction (cross-track), leaving some ambiguity in the position of a vehicle along the roadway [24]. Hence line features observed by a single vehicle do little to enhance along-track GNSS positioning.

The sensor fusion approach proposed here, called Collaborative Vision Integrated Pseudorange Error Removal (C-VIPER), is unique in that it exploits cross-track vision measurements from some vehicles to improve along-track positioning for others. Thus, all users benefit from joining the network, because GNSS pseudorange corrections become fully observable only when lateral vision measurements are combined for vehicles traveling along different compass headings. Moreover, performance scales with the number of networked vehicles, since estimation errors can be further mitigated by averaging over additional sensors. An additional benefit of C-VIPER is that it makes networked users more robust to vision-system dropouts, since, whenever onboard vision measurements are unavailable, enhanced cross-track accuracy is still possible using local GNSS corrections.

The remainder of the paper describes the C-VIPER algorithm in detail. First, the next section provides the mathematical formulation for the method. A subsequent section characterizes algorithm performance using a simulated scenario with 10 vehicles traveling in close proximity (< 1 km) through an urban road network. Results of the simulation indicate that geo-referenced error levels of 1 m one sigma (along-track) or better (cross-track) may be possible, even in the absence of external differential corrections.

Fig. 1. Lane Boundary Detection. In road scenes, along-track features are common and relatively easy to identify uniquely. In this scene, the centerline and left road edge are identified (thick dashed lines), but the right edge, which curves into an intersection, is not.

II. COLLABORATIVE POSITIONING ALGORITHM

This section details the proposed approach for fusion of GNSS and visual line-feature measurements. At its heart, the method is a nonlinear least-squares algorithm. The method generalizes our earlier work [24] by introducing a new algorithm with two important features: an ability to compute corrections for individual satellites and an ability to process measurements from vehicles tracking different sets of satellites. The approach might be considered collaborative navigation in the sense that the term has recently been used to refer to positioning algorithms that employ wireless networks to pass sensor data among multiple vehicles [25]-[29].

A. Measurement Models

The primary goal of the C-VIPER algorithm is to solve simultaneously for a set of geo-referenced vehicle locations. The accuracy of this algorithm is enhanced by solving for systematic sensor errors, and in particular for spatially correlated, satellite-specific GNSS errors. These satellite-specific errors are equivalent to differential corrections. In order to characterize these systematic errors quantitatively, it is useful to introduce models for two types of sensor data: GNSS pseudoranges and visual lane-boundary measurements.

GNSS receivers produce a set of measurements known as pseudoranges. Each pseudorange $\rho_n^{(k)}$ is a measurement of the time of flight of a GNSS signal from a particular satellite k to the receiver of a particular vehicle n. The pseudorange is much like a tape measure that reads the distance between the satellite and vehicle, where the "tape measure" is attached to a virtual cord of unknown length (representing the time bias between the user and GNSS clock references). The user must solve for this cord length (i.e., time) in order to also obtain position. In this sense, the pseudorange $\rho_n^{(k)}$ can be modeled as a true range between the satellite position $\mathbf{x}^{(k)}$ and the user position $\mathbf{x}_n$, subject to a receiver-specific time shift $b_n$ and additional errors [16].

$$\rho_n^{(k)} = \left\| \mathbf{x}^{(k)} - \mathbf{x}_n \right\| + b_n + a^{(k)} + \varepsilon_n^{(k)} \qquad (1)$$

Here two additional error sources corrupt pseudoranges: a satellite-specific error $a^{(k)}$ and a channel-specific error $\varepsilon_n^{(k)}$.

Neither error is estimated in stand-alone GNSS. By contrast, conventional differential GNSS systems directly measure the satellite-specific errors $a^{(k)}$, which affect all users in a local region viewing a particular satellite k. For this reason, this paper also refers to the satellite-specific error terms $a^{(k)}$ as differential corrections. Satellite-specific errors result from atmospheric effects (ionosphere and troposphere) or from ephemeris and clock parameter definition [15]. Channel-specific errors, such as thermal noise and multipath, are largely uncorrelated from one satellite and from one receiver to another [30]. It is not possible to estimate these errors, since the number of unknown $\varepsilon_n^{(k)}$ terms equals the number of measurement equations.

In position estimation, it is useful to linearize the nonlinear measurement equation about a nominal user position $\hat{\mathbf{x}}_n$.

$$\delta\rho_n^{(k)} = -\left(\mathbf{u}^{(k)}\right)^T \delta\mathbf{x}_n + b_n + a^{(k)} + \varepsilon_n^{(k)} \qquad (2)$$

This linearized equation depends on the unit vector $\mathbf{u}^{(k)}$, which points from the user toward satellite k.

$$\mathbf{u}^{(k)} = \left( \mathbf{x}^{(k)} - \hat{\mathbf{x}}_n \right) / \left\| \mathbf{x}^{(k)} - \hat{\mathbf{x}}_n \right\| \qquad (3)$$

The pointing vector to a given satellite k is essentially the same for all users in proximity (within tens of kilometers). In this work users are assumed distributed over a radius of 10 km or fewer; hence, the subscript n is dropped in defining the pointing vector $\mathbf{u}^{(k)}$. It is convenient to write the linearized measurement equations (2) in vector form, where $\delta\boldsymbol{\rho}_n$ is the pseudorange residual vector, $\hat{\mathbf{a}}$ is the estimated satellite-specific error vector, and $\boldsymbol{\varepsilon}_n$ is the random error vector.

$$\delta\boldsymbol{\rho}_n = \mathbf{G} \begin{bmatrix} \delta\hat{\mathbf{x}}_n \\ \hat{b}_n \end{bmatrix} + \hat{\mathbf{a}} + \boldsymbol{\varepsilon}_n \qquad (4)$$

In GNSS processing, the matrix G is often called the Geometry Matrix. Each row of G contains the pointing vector to satellite k followed by a one, which is the coefficient of the clock correction term $\hat{b}_n$.

$$\mathbf{G} = \begin{bmatrix} \vdots & \vdots \\ -\left(\mathbf{u}^{(k)}\right)^T & 1 \\ \vdots & \vdots \end{bmatrix} \qquad (5)$$

In the proposed algorithm, GNSS data is fused with camera measurements, which are assumed to identify geo-referenced line features in the road plane. Many vision processing methods have been proposed to provide camera-based measurements of road or lane boundaries [19]-[23]. This paper is not concerned with a particular vision algorithm, but instead models the camera sensor simply as a device capable of measuring the lateral distance (or more generally the angle and lateral distance [8]) of a particular car relative to its lane boundary. As a further simplification, it is assumed either that the camera is collocated with the GNSS receiver, or that vision measurements are calibrated so as to report distances of a geo-referenced lane boundary relative to the GNSS receiver. With this assumption, the lane-boundary sensor measurement $d_n$ can be related to the position of the GNSS receiver $\mathbf{x}_n$ through the following equation.

$$d_n = \mathbf{u}_t^T \left( \mathbf{x}_n - \mathbf{r}_n \right) + \varepsilon_n^{(lb)} \qquad (6)$$

This equation assumes a map database [31] that stores surveyed data for road lane boundaries in terms of linear segments, each characterized by a reference point $\mathbf{r}_n$, which lies somewhere along the length of the segment, and by a transverse unit vector $\mathbf{u}_t$, which lies perpendicular to the lane boundary in the plane of the road. Sensor measurement error is described by a random noise term $\varepsilon_n^{(lb)}$. To construct a state estimator, it is useful to linearize the nonlinear equation (6) about a nominal user position $\hat{\mathbf{x}}_n$.

$$\delta d_n = \mathbf{u}_t^T \, \delta\mathbf{x}_n + \varepsilon_n^{(lb)} \qquad (7)$$
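To make the two sensor models concrete, the short Python sketch below simulates a pseudorange per (1), a lane-boundary distance per (6), and the pointing vector of (3). The function names and default noise values are illustrative assumptions, not quantities specified by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pseudorange(x_sat, x_user, b_user, a_sat, sigma_chan=1.0):
    """Eq. (1): true range plus receiver clock bias, satellite-specific
    error, and channel-specific noise (thermal noise, multipath)."""
    true_range = np.linalg.norm(x_sat - x_user)
    return true_range + b_user + a_sat + rng.normal(0.0, sigma_chan)

def lane_boundary_distance(x_user, r_seg, u_t, sigma_lb=0.25):
    """Eq. (6): signed transverse distance from the mapped boundary segment
    (reference point r_seg, transverse unit vector u_t) to the receiver."""
    return float(u_t @ (x_user - r_seg)) + rng.normal(0.0, sigma_lb)

def pointing_vector(x_sat, x_hat):
    """Eq. (3): unit vector from the nominal user position toward satellite k."""
    d = x_sat - x_hat
    return d / np.linalg.norm(d)
```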

B. System States and Constraints

C-VIPER estimates the positions of all vehicles using shared GNSS pseudoranges and vision-based lane-boundary distances. As in conventional GNSS processing, the estimated states include both the position vector $\mathbf{x}_n$ and the time bias $b_n$ for each vehicle n. In C-VIPER, these states for each individual vehicle are coupled together by a set of additional states: the satellite-specific error terms $a^{(k)}$. In effect, the proposed algorithm solves for differential GNSS corrections, not by differencing individual pseudoranges from known true ranges, but rather by finding them through minimization of a set of least-squares residuals. More precisely, C-VIPER estimates all but two differential corrections, which are instead computed directly for reasons described later in this section. As such, the full state vector is:

$$\mathbf{x} = \begin{bmatrix} \mathbf{x}_1^T & b_1 & \mathbf{x}_2^T & b_2 & \cdots & \mathbf{x}_N^T & b_N & a^{(1)} & a^{(2)} & \cdots & a^{(K-2)} \end{bmatrix}^T \qquad (8)$$
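As a bookkeeping aid, a minimal indexing helper (hypothetical, not from the paper) shows how the state vector of (8) might be laid out in software:

```python
def state_slices(N, K):
    """Index the state vector of Eq. (8): N blocks of [position (3), clock (1)]
    followed by the K - 2 estimated differential corrections."""
    pos = [slice(4 * n, 4 * n + 3) for n in range(N)]  # vehicle positions
    clk = [4 * n + 3 for n in range(N)]                # receiver clock biases
    corr = slice(4 * N, 4 * N + K - 2)                 # corrections a_st
    return pos, clk, corr
```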

Here K is the total number of satellites viewed by one or more of the N collaborating vehicles.

The reason why two of the differential corrections are not estimated directly is that the full set of corrections is not observable. In this paper, line features are assumed to be extracted from a two-dimensional database which provides no information about the altitude of road segments. With this assumption, vision-based measurements do not pin down the differential corrections in the vertical direction, only in the North and East plane. Vision measurements, furthermore, do not provide any information to aid in identifying the mean value of the $a^{(k)}$ terms. This mean value is indistinguishable (i.e., not linearly independent) from the clock correction $b_n$. Reference [24] provides a more formal discussion of observability for this problem.

Since vertical and time corrections are not observable, they remain ambiguous unless constraints are applied to ensure a unique solution. To generate these constraints, two assumptions are made: the total contribution of all K differential corrections to (1) the vertical position and to (2) the time correction is zero for the single-user, all-satellites-in-view GNSS solution. Constraints are constructed from a Geometry matrix G that contains rows for all of the K satellites viewed by at least one of the collaborating vehicles. This all-in-view geometry matrix is decomposed into four columns: $\mathbf{v}_e$, $\mathbf{v}_n$, $\mathbf{v}_u$, and $\mathbf{v}_b$. Each column respectively maps the set of differential corrections into an east, north, up, or clock shift.

$$\mathbf{G} = \begin{bmatrix} \mathbf{v}_e \,|\, \mathbf{v}_n \,|\, \mathbf{v}_u \,|\, \mathbf{v}_b \end{bmatrix} \qquad (9)$$

A shift in the GNSS position and/or time solution results when differential corrections align with any of these column vectors. Nominally one constraint should set the mean (or equivalently total) time-shift contribution of the differential correction vector to zero. This is equivalent to setting to zero the dot product of the last column of the G matrix (all ones) and the estimated differential correction vector $\hat{\mathbf{a}}$.

$$\mathbf{v}_b \cdot \hat{\mathbf{a}} = 0 \qquad (10)$$

By analogy, a similar constraint on vertical position gives:

$$\mathbf{v}_u \cdot \hat{\mathbf{a}} = 0. \qquad (11)$$

Refinement of these constraints is necessary, however, because the columns of the G matrix are not mutually orthogonal. Differential corrections that cause a shift in one direction (e.g., East) may also cause a shift in another direction (e.g., up). To decouple the nominal constraints (10) and (11) from the East and North directions, such that the constraints do not actually impact the lateral contributions of the differential corrections, the constraints are modified slightly. Specifically, only the components of the time and up vectors which do not also contribute to a shift in one of the horizontal coordinates are set to zero. The orthogonal component of the time projection vector, dubbed $\bar{\mathbf{v}}_b$, is:

$$\bar{\mathbf{v}}_b = \left( \mathbf{I} - \frac{\mathbf{v}_e \mathbf{v}_e^T}{\|\mathbf{v}_e\|^2} - \frac{\mathbf{v}_n \mathbf{v}_n^T}{\|\mathbf{v}_n\|^2} \right) \mathbf{v}_b. \qquad (12)$$

Similarly, the modified upwards projection vector $\bar{\mathbf{v}}_u$ is defined to be orthogonal to the East, North, and time projection vectors.

$$\bar{\mathbf{v}}_u = \left( \mathbf{I} - \frac{\mathbf{v}_e \mathbf{v}_e^T}{\|\mathbf{v}_e\|^2} - \frac{\mathbf{v}_n \mathbf{v}_n^T}{\|\mathbf{v}_n\|^2} - \frac{\bar{\mathbf{v}}_b \bar{\mathbf{v}}_b^T}{\|\bar{\mathbf{v}}_b\|^2} \right) \mathbf{v}_u \qquad (13)$$

Rewriting the constraints in terms of these orthogonal vectors decouples the vertical and time constraints from the North and East directions. In compact form, the modified constraints can be written:

$$\mathbf{V} \hat{\mathbf{a}} = \mathbf{0}. \qquad (14)$$

Here the matrix V is defined such that $\mathbf{V} = [\bar{\mathbf{v}}_b \;\; \bar{\mathbf{v}}_u]^T$. Using constraint (14), it is possible to define a unique solution for the full set of satellite-specific errors, including the previously ambiguous terms $a^{(K-1)}$ and $a^{(K)}$, which were not specified in the state vector. A decomposition of (14) allows these terms to be computed directly. In the following decomposition, $\hat{\mathbf{a}}_{st}$ refers to the vector of K-2 differential corrections in the state vector x, and $\hat{\mathbf{a}}_{el}$ refers to the remaining two terms, $a^{(K-1)}$ and $a^{(K)}$. The matrices $\mathbf{V}_{st}$ and $\mathbf{V}_{el}$ refer to the block components of the V matrix that multiply $\hat{\mathbf{a}}_{st}$ and $\hat{\mathbf{a}}_{el}$, respectively.

$$\mathbf{V} \hat{\mathbf{a}} = \begin{bmatrix} \mathbf{V}_{st} & \mathbf{V}_{el} \end{bmatrix} \begin{bmatrix} \hat{\mathbf{a}}_{st} \\ \hat{\mathbf{a}}_{el} \end{bmatrix} = \mathbf{0} \qquad (15)$$

This equation is solved for $\hat{\mathbf{a}}_{el}$.

$$\hat{\mathbf{a}}_{el} = -\left( \mathbf{V}_{el} \right)^{-1} \mathbf{V}_{st} \, \hat{\mathbf{a}}_{st} \qquad (16)$$

The full satellite-specific error correction vector $\hat{\mathbf{a}}$ is

$$\hat{\mathbf{a}} = \mathbf{A} \hat{\mathbf{a}}_{st}, \qquad (17)$$

where A is defined to be

$$\mathbf{A} = \begin{bmatrix} \mathbf{I} \\ -\left( \mathbf{V}_{el} \right)^{-1} \mathbf{V}_{st} \end{bmatrix}. \qquad (18)$$
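To make the constraint construction concrete, the following Python sketch (function and variable names are assumed, not from the paper) builds the modified projection vectors of (12) and (13) and the elimination matrix A of (18) from the columns of an all-in-view geometry matrix:

```python
import numpy as np

def correction_projector(G):
    """Sketch of Eqs. (9)-(18): build A such that a_hat = A @ a_st.
    G is the K x 4 all-in-view geometry matrix with columns ve, vn, vu, vb.
    A minimal sketch; no handling of degenerate geometries."""
    ve, vn, vu, vb = G[:, 0], G[:, 1], G[:, 2], G[:, 3]
    K = G.shape[0]

    def proj(w):
        # Rank-one projector w w^T / |w|^2 used in Eqs. (12) and (13).
        return np.outer(w, w) / (w @ w)

    I = np.eye(K)
    vb_bar = (I - proj(ve) - proj(vn)) @ vb                  # Eq. (12)
    vu_bar = (I - proj(ve) - proj(vn) - proj(vb_bar)) @ vu   # Eq. (13)

    V = np.vstack([vb_bar, vu_bar])           # Eq. (14): V a_hat = 0
    V_st, V_el = V[:, :K - 2], V[:, K - 2:]   # Eq. (15): block split
    bottom = -np.linalg.solve(V_el, V_st)     # Eq. (16)
    return np.vstack([np.eye(K - 2), bottom])  # Eq. (18)
```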

Equation (17) is significant in that it expresses a full set of differential corrections $\hat{\mathbf{a}}$ in terms of the partial set of estimated corrections that are components of the state vector x as defined by (8). This set of corrections is very similar to those generated by a conventional DGNSS system, except that no stationary reference receiver is required.

C. Estimation Problem

To solve for the state x, it is necessary to invert the nonlinear equations h relating the states to the measurement y.

$$\mathbf{y} = \mathbf{h}(\mathbf{x}) + \boldsymbol{\varepsilon} \qquad (19)$$

Here the measurements vector y consists of the pseudoranges for each vehicle augmented by lane-boundary measurements.

$$\mathbf{y} = \begin{bmatrix} \cdots & \boldsymbol{\rho}_n^T & d_n & \cdots \end{bmatrix}^T \qquad (20)$$

The linearized form of the observation equation (19) is

$$\delta\mathbf{y} = \mathbf{H}(\hat{\mathbf{x}}) \, \delta\mathbf{x} + \boldsymbol{\varepsilon}. \qquad (21)$$

Once linearized, this equation can be solved in an iterative least-squares sense by the Newton-Raphson method. In the case of C-VIPER a weighted least-squares calculation is performed. The weighting matrix W is the inverse of the measurement covariance matrix. In practice, only one iteration is needed if a good initial estimate is provided (using a conventional standalone GNSS solution for each user).

$$\delta\hat{\mathbf{x}} = \left( \mathbf{H}^T \mathbf{W} \mathbf{H} \right)^{-1} \mathbf{H}^T \mathbf{W} \, \delta\mathbf{y} \qquad (22)$$

After each iteration, the state estimate is updated as $\hat{\mathbf{x}} \leftarrow \hat{\mathbf{x}} + \delta\hat{\mathbf{x}}$.

Fig. 2. Simulated Vehicle Locations and Orientations. Each dot represents a vehicle; axes represent local along-track and cross-track directions. Artwork based loosely on satellite imagery from [32].
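The weighted least-squares step of (22) is compact enough to sketch directly; the code below assumes H, W, and the measurement residual have already been assembled, and solves the normal equations rather than forming an explicit inverse:

```python
import numpy as np

def wls_update(H, W, dy):
    """One Newton-Raphson step per Eq. (22):
    dx = (H^T W H)^{-1} H^T W dy."""
    HtW = H.T @ W
    return np.linalg.solve(HtW @ H, HtW @ dy)

# Iteration: x_hat += wls_update(H, W, y - h(x_hat)); one pass usually
# suffices when initialized from each user's standalone GNSS fix.
```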

The H matrix is obtained from the linearized measurement models, (2) and (7), compiled in vector form in the order of the sensor variables given by (20).

$$\mathbf{H} = \begin{bmatrix} \mathbf{G}_1 & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{A}_1 \\ \mathbf{0} & \mathbf{G}_2 & \cdots & \mathbf{0} & \mathbf{A}_2 \\ \vdots & & \ddots & & \vdots \\ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{G}_N & \mathbf{A}_N \end{bmatrix} \qquad (23)$$

In the observation matrix H, the $\mathbf{G}_n$ and $\mathbf{A}_n$ matrices are closely related to the geometry and satellite-specific error projection matrices, G and A, previously defined by equations (5) and (18). The primary difference is that $\mathbf{G}_n$ and $\mathbf{A}_n$ are augmented with an additional row that includes the coefficients for the lane-boundary sensor, from (7), if a lane-boundary measurement is available. A secondary difference is that the $\mathbf{G}_n$ and $\mathbf{A}_n$ matrices only contain rows corresponding to the particular set of satellites in view of vehicle n. The set of visible satellites is, in general, different for each vehicle.

D. State Estimation Error

The accuracy of the position solution can be assessed by analytically computing the state covariance matrix P, where

$$\mathbf{P} = E\left[ \boldsymbol{\varepsilon}_x \boldsymbol{\varepsilon}_x^T \right], \qquad (24)$$

and where $\boldsymbol{\varepsilon}_x$ is the vector of errors for each element of the state vector x. Here it is assumed that the state-error vector is unbiased, which is true when the sensor errors are also unbiased. (The assumption is revisited in the following section.) The state error is related to the sensor error through the weighted pseudoinverse in (22).

$$\boldsymbol{\varepsilon}_x = \left( \mathbf{H}^T \mathbf{W} \mathbf{H} \right)^{-1} \mathbf{H}^T \mathbf{W} \boldsymbol{\varepsilon}_y \qquad (25)$$

Substituting this expression for the error into (24), and noting that the weighting matrix is the inverse of the symmetric sensor covariance matrix

$$\mathbf{W}^{-1} = E\left[ \boldsymbol{\varepsilon}_y \boldsymbol{\varepsilon}_y^T \right], \qquad (26)$$

it is straightforward to show that the state covariance matrix is

$$\mathbf{P} = \left( \mathbf{H}^T \mathbf{W} \mathbf{H} \right)^{-1}. \qquad (27)$$
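Equation (27), together with the North/East-to-track-frame rotation used later in Section III, can be sketched as follows; the rotation convention shown is an assumption for illustration, not specified by the paper:

```python
import numpy as np

def state_covariance(H, W):
    """Eq. (27): P = (H^T W H)^{-1}."""
    return np.linalg.inv(H.T @ W @ H)

def track_frame_sigmas(P_ne, heading_rad):
    """Rotate a 2x2 North/East position covariance into a vehicle's
    along-track/cross-track frame; returns the one-sigma values."""
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    R = np.array([[c, s], [-s, c]])    # assumed NE -> along/cross convention
    sigmas = np.sqrt(np.diag(R @ P_ne @ R.T))
    return sigmas[0], sigmas[1]        # (sigma_along, sigma_cross)
```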


This state covariance plays an essential role in assessing the quality of the collaborative positioning solution. If the weighted pseudo-inverse is poorly conditioned, then the covariance P will have large values on the diagonal, indicating that large errors are likely.

III. SIMULATION STUDIES

In this section, simulations are used to assess the nominal performance of the C-VIPER algorithm. Analyses show significant accuracy improvement in the along-track position (where vision sensors are less accurate) and also in the cross-track direction (for vehicles with vision-sensor dropouts). These benefits are realized only when collaborators are oriented in diverse compass directions, however.

A. Simulation Scenario

To analyze the performance of the proposed estimation algorithm, ten vehicles traveling on an urban road network were simulated (see Fig. 2). The network consists of roads in Medford and Somerville, MA, at one edge of the Tufts University campus (42.4060° N, 71.1161° W). The ten vehicle locations are shown on the map as small circles; each circle is accompanied by a pair of coordinate axes indicating local along-track and cross-track directions. The largest distance between vehicles (vehicles 1 and 4) is 734 m.

GNSS measurements were simulated for a particular satellite geometry, illustrated in Fig. 3. This sky plot shows each satellite as a circular marker, with the satellite PRN identifier at the center. The radial position of the marker corresponds to the satellite's elevation, with the center of the plot being directly overhead (90° elevation). The circumferential angle of the marker corresponds to the satellite's azimuth, with the top of the plot indicating North. This particular satellite geometry is a representative historical example from [33]. Simulations considered only a single instant, so car and satellite motion were not considered. Newton-Raphson iterations were initialized using the single-receiver position solutions for each vehicle; however, results were not highly dependent on the specific initialization states.

Each simulation run included a series of ten instances of solving the C-VIPER equations for different numbers of available vision measurements. In each case, the number of available vision measurements was increased by one, starting with vehicle 1 (see Fig. 2) and proceeding to vehicle 10, such that the tenth instance included vision measurements for all vehicles. This series of solutions permits analysis of C-VIPER's sensitivity to vision-sensor dropouts, which are expected operationally when vehicles enter an area without prominent lane-boundary markings (such as an intersection), when a sensing anomaly occurs [5], or when vehicles without onboard camera systems join the network.

In the simulation, it was assumed that not every vehicle tracked every GNSS satellite. A detailed list of the satellites tracked by each vehicle is given in Table 1. In the table, white boxes indicate satellites (columns) which are tracked by a particular vehicle (row). In the baseline scenario, every vehicle sees at least five of the eight satellites that are above the horizon. It is significant for algorithm validation that different vehicles see different satellites. The nature of the differential correction changes when all vehicles see the same set of satellites, since a simple position correction (rather than a set of pseudorange corrections) is sufficient for this case [24]. By contrast, individual pseudorange corrections, such as those produced by C-VIPER, are needed when users each see different sets of satellites [16].

B. Monte Carlo Simulation

A Monte Carlo simulation was used for initial verification. The Monte Carlo simulation consisted of 100 runs.

Fig. 3. Simulated Satellite Elevation (Radial Coordinate) and Azimuth (Circumferential Coordinate). The sky plot includes PRNs 17, 18, 19, 22, 25, 28, 29, and 31.

Table 1. Satellites in View. White boxes indicate a satellite (column) in view for a particular vehicle (row); gray boxes indicate occluded satellites. Rows correspond to Cars 1-10 and columns to PRNs 17, 18, 19, 22, 25, 28, 29, and 31.

Fig. 4. Representative Monte Carlo Simulation Results. Box plots of simulated errors in both the cross-track (left) and along-track (right) directions are shown for ten cases, as the number of vehicles producing lane-boundary (LB) measurements increases from one to ten (horizontal axis).

In each run, errors were sampled from random distributions and the position solution was computed for each of the ten instances of varying number of available vision measurements. Significantly, the Monte Carlo simulation enabled testing of the analytical covariance error model, given by equation (27).

Random errors for each run were sampled from independent Gaussian distributions. The lane-boundary sensor error was assumed to have a standard deviation (sigma) of 0.25 m. The pseudorange error terms $b_n$, $a^{(k)}$, and $\varepsilon_n^{(k)}$ were sampled from distributions with sigma values of 2 m, 10 m, and 1 m, respectively. Though this manuscript models all satellites with a uniform noise level (to simplify the interpretation of simulation results), it should be noted that the formulation of the weighting matrix W, as given by (26), generalizes to cases in which the noise level differs for each satellite, as is typical due to higher multipath for lower-elevation satellites [34] and for satellites obstructed in urban canyons [30]. All of the distributions were assumed unbiased, except the satellite-specific error $a^{(k)}$, which was drawn from a biased Gaussian distribution with a mean of 15 m. This bias was introduced as an approximate model for the ionosphere delay, which is always positive. In the C-VIPER formulation, as in conventional GPS, the bias on the satellite-specific errors $a^{(k)}$ is corrected by the clock shift $b_n$, and hence does not impact solution accuracy.

Indeed, simulations indicate that position estimates are unbiased, as indicated by Fig. 4. The figure illustrates the cross-track error (left) and along-track error (right) for one vehicle, vehicle number 5. A box plot of Monte Carlo errors (vertical axis) is shown for each instance, as the number LB of available lane-boundary measurements increases (horizontal axis). Boxes contain the middle two quartiles (50%) of the error samples, and the tails extend to contain 100% of the samples. The analytical standard deviation, computed from the square root of the diagonal of P, is expected to contain approximately 68% of sampled data, which is the case, as can be confirmed by visual inspection of the figure. Because the analytical model is a good representation of the Monte Carlo simulation, subsequent discussions of C-VIPER will focus on the P matrix, which can be computed deterministically, rather than on Monte Carlo simulations, which are stochastic and hence more processor intensive.

Another feature of interest in examining Fig. 4 is the rate at which the along-track and cross-track position estimation errors converge as a function of the number of available lane-boundary measurements. Unsurprisingly, the cross-track error drops dramatically for Vehicle 5 when the lane-boundary sensor for that vehicle is included in position estimation (the LB 5 instance). This dramatic drop in cross-track positioning error occurs because the vision sensor is significantly more accurate in the lateral direction than GNSS. Perhaps of greater interest is the sudden drop in along-track error that occurs when a sixth lane-boundary sensor is added (see right side of Fig. 4). This drop in the along-track error for Vehicle 5 is associated with the addition of a camera measurement for Vehicle 6 (the LB 6 instance). In this instance, the heading diversity of the lane-boundary measurements was low until the first six vehicles were considered together. This observation is confirmed by an examination of the street map of Fig. 2, in which it is evident that the along-track vectors for the first five cars were roughly aligned in the North-South direction. The sixth vehicle, by contrast, was aligned closer to the East-West axis and provided the necessary geometry to improve the position solution by approximately a factor of four. Eventually, with ten lane-boundary measurements available, the along-track position error was reduced to as low as 1.2 m (one sigma), significantly better than the case for stand-alone GNSS, with an along-track error of 7.1 m (one sigma), for the error distributions used in these simulations.
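The Monte Carlo sampling described above can be sketched as follows, using the stated one-sigma values; the structure of the check against sqrt(diag(P)) is an assumed implementation detail:

```python
import numpy as np

rng = np.random.default_rng(42)
N_RUNS, N_CARS, N_SATS = 100, 10, 8

def sample_run_errors():
    """Draw one run's sensor errors with the sigmas used in the paper."""
    a = rng.normal(15.0, 10.0, N_SATS)             # satellite-specific, 15 m bias
    b = rng.normal(0.0, 2.0, N_CARS)               # receiver clock errors
    eps = rng.normal(0.0, 1.0, (N_CARS, N_SATS))   # channel-specific noise
    lb = rng.normal(0.0, 0.25, N_CARS)             # lane-boundary sensor noise
    return a, b, eps, lb

# Verification idea: over N_RUNS runs, roughly 68% of each vehicle's position
# errors should fall within the analytical one-sigma bound sqrt(diag(P)).
```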

Fig. 5. Error Comparison for All Vehicles. Each plotted line corresponds to the analytical one-sigma error computed for a particular vehicle in the local cross-track direction (top) and along-track direction (bottom) as the number LB of lane-boundary sensors is varied.

C. Multi-Vehicle Comparison

This section compares the position solutions for all ten collaborating vehicles. To this end, position errors for each user were evaluated by extracting the two-element North and East covariance matrix for each vehicle from the analytical state covariance P and rotating these North/East covariance matrices into along-track/cross-track directions for each vehicle. The resulting one-sigma cross-track and along-track errors for all vehicles are plotted in Fig. 5 as a function of the number LB of available lane-boundary measurements.

In general, the accuracy of position estimation for all vehicles improved as the number of lane-boundary measurements increased, as is expected. The exception to this trend is the transition between the LB 1 and LB 2 instances, where the error actually increased for several vehicles when the second vision measurement was combined with the first and integrated into the C-VIPER solution. It should be noted that the LB 1 instance is never observable unless an additional constraint is added to the V matrix of (14), since a single measurement provides only information in the cross-track direction and not the along-track direction. As such, in the LB 1 case, the total all-in-view shift in the local along-track direction associated with the measurement was constrained to be zero. By contrast, the simulations used the standard two-constraint V matrix in all other cases with LB 2 or greater. Thus, if all the vision measurements were nearly parallel, a standard C-VIPER solution was still computed for the LB 2 case, whether or not the differential corrections were well observable. Based on this observation that performance suffers when the geometric diversity of vision measurements is poor, it is recommended that the two-constraint V matrix only be used in place of the three-constraint V matrix if the result would improve position accuracy. This is not expected to be an issue in nominal conditions, when a diverse set of vision measurements is expected to be available.

When all ten vision measurements become available (LB 10 instance), the along-track error for many vehicles approaches the code-noise and multipath limit, which is indicated with a dashed horizontal line labeled CNMP. The CNMP line represents the bounding case in which the differential corrections are perfect, and the satellite-specific errors $a^{(k)}$ are wholly removed. If the number of measurements grows sufficiently large, it seems reasonable to envision that the positioning errors will eventually converge to the CNMP line.

Not all vehicles experience the sudden dramatic drop in along-track error that was discussed in the previous section for Vehicle 5. Rather, the error for some vehicles shrinks only gradually. Consider, for instance, the along-track accuracy for Vehicles 6 and 8, as shown in Fig. 5. Why do accuracy jumps occur for some vehicles and not others? A relevant observation is that the two vehicles experiencing the slowest error

reduction in the along-track direction (Vehicles 6 and 8) were aligned similarly, along the same straight road. In this light, it seems likely that the occurrence of an accuracy jump for a particular vehicle is dependent on its direction relative to other collaborators. This hypothesis will be explored further in future work.

D. Lane-Boundary Measurement Sensitivity

C-VIPER performance depends not only on the number of camera measurements available, but also on the vehicles from which those measurements are obtained. To study the sensitivity of the algorithm to different sets of available vision measurements, the LB sequence was modified, and the simulation was run again. In contrast with the baseline case, in which LB instances were obtained by introducing additional measurements starting with Vehicle 1 and proceeding to Vehicle 10, the modified case added measurements starting with Vehicle 10 and proceeding downward to Vehicle 1. This simulation is labeled the reordered case. Sigma values for each vehicle in the reordered case are plotted in Fig. 6. In the reordered case, errors were extremely high when only two lane-boundary measurements were available.

Fig. 6. Error for Reordered Case. By comparison to the baseline case, one-sigma errors converge more quickly as LB measurements are added.

These high error levels were the result of poor heading diversity, as Vehicles 9 and 10 are essentially parallel (see Fig. 2). Errors dropped dramatically when a third vision measurement was added to the solution, because Vehicle 8 is headed in an approximately orthogonal direction to Vehicles 9 and 10. This case confirms that a good C-VIPER position solution is possible with a very small number of vehicles (three in this case), but that care should be taken in deciding whether or not to use C-VIPER when the number of available vision measurements is small, unless heading diversity among those measurements is sufficient.

As a final note, the one-sigma values for the reordered case converged to those for the baseline case (Fig. 5) at the right side of the plot, in the LB 10 instance. This behavior is expected, since the measurement sets are identical in each case (i.e., vision measurements for all ten vehicles were used in the LB 10 instance for both the baseline and reordered cases).

IV. CONCLUSION

This paper presented a novel approach for enhancing positioning accuracy for a group of collaborating users by sharing GNSS and visual lane-boundary measurements. The approach is called Collaborative Vision-Integrated Pseudorange Error Removal (C-VIPER). In this approach, vehicles use their onboard cameras to solve for differential GNSS corrections. A major benefit of C-VIPER is that it produces differential corrections without requiring a stationary reference antenna, as is the case for conventional differential GNSS systems.

C-VIPER performance exploits the complementary capabilities of vision and GNSS systems. Differential GNSS corrections produced by C-VIPER are not generally needed in the lateral (cross-track) direction, as onboard camera measurements are much more accurate than GNSS in the lateral direction. Network-generated GNSS corrections are highly valuable in the along-track direction, however, as the vehicle relies almost entirely on GNSS to determine its position along the length of a road. Moreover, the benefits of C-VIPER appear to increase with the number of collaborating vehicles, as simulations show continued improvement in position accuracy with an increasing number of available vision measurements obtained from different vehicles.

Importantly, C-VIPER provides a robust positioning capability, in that the algorithm functions well even when individual users suffer a vision-system dropout or when local topography blocks tracking of one or more GNSS satellites. In fact, the differential corrections produced by C-VIPER can significantly improve cross-track positions when a user has no onboard vision sensor or when that sensor fails to produce a valid measurement.

In the ten-vehicle simulation used for verification, the simulated cross-track accuracy reached the level of the visual lane-boundary sensor errors (assumed 0.25 m one sigma) and along-track accuracy approached the level of GNSS code-noise and multipath errors for an individual receiver (assumed to be approximately 1 m one sigma). These levels of accuracy are on the order of requirements needed to support automated lane keeping and other emerging ADAS capabilities.

Future work will include an experimental verification of the C-VIPER method using an automotive testbed under construction at Tufts University. This effort will assess the method's sensitivity to multipath and to other practical V2V issues, such as communications delays, packet losses, and processor limitations. Fusing other sensors (laser ranging sensors or odometry, for instance) into the C-VIPER solution is also a topic of interest for future work.

As a final note, C-VIPER may offer a compelling advantage over conventional DGNSS if its corrections approach zero error for a large number of teaming vehicles. Conventional DGNSS systems (such as NDGPS [35]) rely on information from the nearest reference receiver; applying differential corrections from this receiver removes common-mode errors at the expense of injecting added code-noise and multipath from the reference. By comparison, C-VIPER generates differential corrections using many "reference receivers," averaging over (and mitigating) random noise associated with each. Thus, evaluating C-VIPER performance for large numbers of vehicles will be another important topic of future work.

REFERENCES

[1] Intelligent Transportation Systems Joint Program Office, "Achieving the vision: From VII to IntelliDrive," RITA, U.S. DOT. [Online]. Available: http://www.its.dot.gov/press/2010/vii2intellidrive.htm, Apr. 2010.
[2] S. Thrun, M. Montemerlo, et al., "Stanley, the robot that won the DARPA Grand Challenge," J. Robotic Systems, vol. 23, pp. 661-692, 2006.
[3] C. Urmson, J. Anhalt, et al., "Autonomous driving in urban environments: Boss and the Urban Challenge," J. Field Robotics, vol. 25, pp. 425-466, 2008.
[4] M. Montemerlo, J. Becker, et al., "Junior: The Stanford entry in the Urban Challenge," J. Field Robotics, vol. 25, pp. 569-597, 2008.
[5] C. Mario and J. Rife, "Integrity monitoring of vision-based automotive lane detection methods," in Proc. ION Global Navigation Satellite Systems (ION-GNSS), 2010.
[6] FAA, "Wide area augmentation system performance analysis report: #32," William J. Hughes Technical Center, Atlantic City, NJ. [Online]. Available: http://www.nstb.tc.faa.gov/REPORTS/waaspan32.pdf, Apr. 2010.
[7] J. Goldbeck, B. Huertgen, et al., "Lane following combining vision and DGPS," Image and Vision Computing, vol. 18, pp. 425-433, 2000.
[8] J. Clanton, D. Bevly, and A. Hodel, "A low-cost solution for an integrated multisensor lane departure warning system," IEEE Trans. Intelligent Transportation Systems, vol. 10, pp. 47-59, 2009.
[9] L. Bai and Y. Wang, "A sensor fusion framework using multiple particle filters for video-based navigation," IEEE Trans. Intelligent Transportation Systems, vol. 11, no. 2, pp. 348-358, 2010.
[10] Y. Wang and L. Bai, "Using computer vision and GIS to improve GPS accuracy," in Proc. IEEE International Conference on Image Processing (ICIP), 2010.
[11] J. Beich and M. Veth, "Tightly-coupled image-aided inertial relative navigation using statistical predictive rendering (SPR) techniques and a priori world models," in Proc. IEEE/ION PLANS, 2010.
[12] A. Soloviev and D. Venable, "Integration of GPS and vision measurements for navigation in GPS challenged environments," in Proc. IEEE/ION PLANS, 2010.
[13] G. Challita, S. Mousset, et al., "An application of V2V communications: Cooperation of vehicles for a better car tracking using GPS and vision systems," in Proc. IEEE Vehicular Networking Conference (VNC), 2009.
[14] B. Parkinson and P. Enge, "Differential GPS," in Global Positioning System: Theory and Applications, Volume II, Chapter 1, B. Parkinson and J. Spilker, Eds., AIAA, 1996.
[15] S. Pullen and J. Rife, "Differential GNSS," in GNSS Applications and Methods, Chapter 4, S. Gleason and D. Gebre-Egziabher, Eds., Artech House, 2009.
[16] P. Misra and P. Enge, Global Positioning System: Signals, Measurements, and Performance, 2nd ed., Lincoln, MA: Ganga-Jamuna Press, 2006.
[17] P. Enge, T. Walter, et al., "Wide area augmentation of the global positioning system," Proc. of the IEEE, vol. 84, pp. 1063-1088, 1996.
[18] P. Enge, "Local area augmentation of GPS for precision approach of aircraft," Proc. of the IEEE, vol. 87, pp. 111-132, 1999.
[19] D. Grejner-Brzezinska, C. Toth, and Q. Xiao, "Real-time tracking of highway linear features," in Proc. ION Global Positioning System (ION-GPS), 2000.
[20] J. Lee, "A machine vision system for lane-departure detection," Computer Vision and Image Understanding, vol. 86, pp. 52-78, 2002.
[21] C. Jung and C. Kelber, "A lane departure warning system based on a linear-parabolic lane model," in Proc. IEEE Intelligent Vehicle Symposium, 2004, pp. 891-895.
[22] I. Gat, M. Benady, and A. Shashua, "A monocular vision advance warning system for the automotive aftermarket," in Proc. SAE World Congress and Exhibition, 2005.
[23] A. Huang, D. Moore, et al., "Finding multiple lanes in urban road networks with vision and lidar," Autonomous Robots, vol. 26, pp. 103-122, 2008.
[24] J. Rife and X. Xiao, "Estimation of spatially correlated errors in vehicular collaborative navigation with shared GNSS and road-boundary measurements," in Proc. ION Global Navigation Satellite Systems (ION-GNSS), 2010.
[25] S. Čapkun, M. Hamdi, and J.-P. Hubaux, "GPS-free positioning in mobile ad hoc networks," Cluster Computing, vol. 5, pp. 157-167, 2002.
[26] F. Berefelt, B. Boberg, et al., "Collaborative GPS/INS navigation in urban environment," in Proc. ION National Technical Meeting, 2004.
[27] N. Drawil and O. Basir, "Intervehicle-communication-assisted localization," IEEE Trans. Intelligent Transportation Systems, vol. 11, pp. 678-691, 2010.
[28] D. A. Grejner-Brzezinska, C. K. Toth, et al., "Positioning in GPS-challenged environments: Dynamic sensor network with distributed GPS aperture and inter-nodal ranging signals," in Proc. ION GNSS, 2009.
[29] S. Wu, J. Kaba, et al., "Distributed multi-sensor fusion for improved collaborative GPS-denied navigation," in Proc. ION International Technical Meeting, 2009.
[30] M. Spangenberg, V. Calmettes, et al., "Detection of variance changes and mean value jumps in measurement noise for multipath mitigation in urban navigation," NAVIGATION, vol. 57, pp. 35-52, 2010.
[31] D. Bétaille and R. Toledo-Moreo, "Creating enhanced maps for lane-level vehicle navigation," IEEE Trans. Intelligent Transportation Systems, vol. 11, pp. 786-798, 2010.
[32] Google Maps, Google. [Online]. Available: http://maps.google.com/, 2011.
[33] NGS, "Precise GPS Orbits," NOAA. [Online]. Available: http://www.ngs.noaa.gov/orbits/, 2010.
[34] J. Rife and S. Pullen, "The impact of measurement biases on availability for Category III LAAS," NAVIGATION, vol. 52, pp. 215-228.
[35] D. Wolfe, C. Judy, et al., "Implementing and engineering an NDGPS network in the United States," in Proc. ION GPS, 2000.

Jason Rife (M’01) received the B.S. degree in mechanical and aerospace engineering from Cornell University, Ithaca, NY, in 1996, and his M.S. and Ph.D. degrees in mechanical engineering from Stanford University, Stanford, CA, in 1999 and 2004. He is currently an Assistant Professor of Mechanical Engineering at Tufts University in Medford, Massachusetts. At Tufts, he directs the Automation Safety and Robotics Laboratory (ASAR), which applies theory and experiment to characterize the integrity of autonomous vehicle systems. Previously, after completion of his graduate studies, he worked as a researcher with the Stanford University GPS Laboratory, serving with the Local Area Augmentation System (LAAS) and Joint Precision Approach and Landing System (JPALS) teams.
