
Tracking and Phase Identification of Targets in Nonlinear Motion
Nicholas Roseveare
Department of Electrical & Computer Engineering
EE652: Filtering and Estimation Theory, Final Project Report

Abstract
Tracking moving targets by Kalman filtering of noise-contaminated measurements is a procedure that must be carefully tuned and requires an accurate model of the underlying process and noise levels. The procedure becomes even more complicated when the motion being modeled has nonlinear dynamics. The typical solution is to linearize the process about the current state, under the assumption that the process is well approximated by a linear model over a small enough time step; this is known as the Extended Kalman Filter (EKF). There are several problems with this assumption in many practical target tracking scenarios where the dynamical and observation equations are highly nonlinear and a linear approximation introduces substantial and avoidable error. Two different methods of correcting for these nonlinearities are experimentally tested in this project, namely the Unscented Kalman Filter (UKF) and the Interacting Multiple Model (IMM) method.

I. INTRODUCTION
Filtering and estimation are two of the most ubiquitous tools in electrical engineering and other disciplines. Whenever the state of a system must be estimated from noisy observations, some kind of state estimator is employed to fuse the data from different sensors and different time instances into an accurate estimate of the true system's current state. When the system dynamics and observation models are linear (and Gaussian), the minimum mean squared error (MMSE) estimate may be computed using the Kalman filter. However, in most applications of interest the system dynamics and observation equations are nonlinear and suitable


extensions to the Kalman filter have been sought. It is known that the optimal solution to the nonlinear filtering problem requires that an exhaustive description of the conditional probability density be maintained. Unfortunately, this exact description requires an excessive and inhibiting number of parameters, and several suboptimal approximations have been proposed [1].

A. Objectives
The purpose of this project is to explore some of these approximations and test their effectiveness, especially those which bypass some of the issues in nonlinear filtering and estimation. The objectives are to show the deficiencies of the single-dynamic-model Extended Kalman Filter for nonlinear state estimation and to propose either the Unscented Kalman Filter or the more heuristic IMM method (which uses multiple Kalman filters) as a viable and more accurate alternative. A review of the Extended Kalman filter and its shortcomings is therefore given, followed by an introduction to the Unscented Kalman filter and the Interacting Multiple Model (IMM) method. A secondary objective is to use the model probabilities of the IMM method to estimate a maneuvering target's phase of motion. A brief summary of the metrics by which comparisons will be made is also given. Finally, these methods are benchmarked on simulated underwater vehicle tracking scenarios where the targets move nonlinearly and the measurement data is a nonlinear transformation of the state variables.

II. BACKGROUND ON NONLINEAR MODELS AND THE EXTENDED KALMAN FILTER
Target tracking and many other interesting scenarios by nature involve systems with nonlinearly evolving dynamics and interactions, so the traditional Kalman filter cannot be applied to estimate the state of such a system. In these systems, either or both of the dynamics and the observation process can be nonlinear. The basic nonlinear state space model is of the form

x(k+1) = f(x(k), u(k+1), w(k+1), k)    (1)
z(k) = h(x(k), u(k), k) + v(k)    (2)

where x(k) is the d-dimensional state of the system at time instance k and u(k) is the input vector, which we will omit from here on out as it is not a part of the nonlinear state model for target tracking. The process noise vector w(k) is assumed to be due to disturbances and modeling


errors, z(k) is the observation vector, and v(k) is the measurement noise. Since the focus of this project is nonlinear state and parameter estimation, the usual simplifying assumptions about the noise processes are made: the vectors w(k) and v(k) are zero-mean, white, and mutually uncorrelated, i.e.,

E[w(i)w^T(j)] = δ_ij Q(i),   E[v(i)v^T(j)] = δ_ij R(i),   E[w(i)v^T(j)] = 0, ∀ i, j.    (3)

The Kalman filter is based on a prediction-correction algorithm structure and recursively propagates the first and second moments of the distribution of the state vector. Let x̂(n|r) be defined as the estimate of the state at time n based on the observations Z^r = [z(1) · · · z(r)] up to and including time r, for n > r. The error covariance of this estimate is P(n|r). Given an updated estimate x̂(k|k), the first step in filtering is to predict this estimate forward to the next time step based on the previous measurement data, which in MMSE form is

x̂(k+1|k) = E[f(x̂(k|k), u(k+1), w(k+1), k)]    (4)
P(k+1|k) = E[(x(k+1) − x̂(k+1|k))(x(k+1) − x̂(k+1|k))^T].    (5)

Unfortunately, when the functions f(·) and h(·) are nonlinear these expectations are not viable, because they are based on the statistics of x(k) conditioned on Z^k and are nearly impossible to find. The common simplification is to assume that x(k) is Gauss-Markov and parametrized only by a Gaussian mean and covariance, which fully convey the past and present statistics of the dynamic system. The prediction estimate is updated by a current measurement of the system. For a discretized state space this is a linear updating rule, even for nonlinear systems. It is given by

x̂(k+1|k+1) = x̂(k+1|k) + K(k+1) z̃(k+1)    (6)
P(k+1|k+1) = P(k+1|k) − K(k+1) S(k+1|k) K^T(k+1)    (7)
z̃(k+1) = z(k+1) − ẑ(k+1|k)    (8)
K(k+1) = C(k+1|k) S^{-1}(k+1|k)    (9)

where C(k+1) = E[x̂(k+1|k) ẑ^T(k+1|k)] is the cross covariance between x̂ and ẑ, and S(k+1|k) = E[z̃(k+1|k) z̃^T(k+1|k)] is the innovation covariance. In the linear case the gain reduces to K(k) = P(k|k−1) H^T(k) [H(k) P(k|k−1) H^T(k) + R(k)]^{-1}.
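To make the prediction-correction structure concrete, the short sketch below implements the generic correction step of Eqs. (6)-(9). It is an illustrative Python/NumPy fragment rather than anything from the original report: it assumes the predicted measurement, innovation covariance, and cross covariance have already been produced by whatever prediction scheme (linearized or sigma-point based) is in use, and the function and variable names are hypothetical.

```python
import numpy as np

def kf_update(x_pred, P_pred, z, z_pred, S, C):
    """Generic correction step of Eqs. (6)-(9).

    x_pred, P_pred : predicted state x̂(k+1|k) and covariance P(k+1|k)
    z, z_pred      : actual and predicted measurements
    S, C           : innovation covariance and state-measurement cross covariance
    """
    innovation = z - z_pred              # z̃(k+1), Eq. (8)
    K = np.linalg.solve(S.T, C.T).T      # K = C S^{-1}, Eq. (9)
    x_upd = x_pred + K @ innovation      # Eq. (6)
    P_upd = P_pred - K @ S @ K.T         # Eq. (7)
    return x_upd, P_upd
```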


The vital aspect of filtering for nonlinear systems is making accurate predictions of the first- and second-order moments of the system statistics. A possible criterion for making accurate predictions of these moments is set forth in [1]: transformation of the statistics of a distribution by an approximate prediction should yield consistent statistics, and ideally these statistics should also be efficient and unbiased. More discussion of consistent statistics, both as a prediction criterion and as a metric, is provided later.

A. Extended Kalman Filter
The most commonly used estimator for nonlinear systems is the extended Kalman filter (EKF) [2]. The EKF applies the Kalman filter to nonlinear systems by simply linearizing all of the nonlinear models so that the traditional linear Kalman filter equations can be applied. In practice, the use of the EKF has two significant drawbacks: i) linearization can produce highly unstable filters if the assumption of local linearity is violated, and ii) the derivation of the Jacobian matrices is nontrivial in most applications and often leads to significant implementation difficulties. An abbreviated derivation of the extended Kalman filter begins with a nonlinear mapping function

y = f(x),    (10)

and uses a Taylor series expansion to obtain approximations to the first- and second-order moments as

µ_y = f(µ_x)    (11)
P_yy = ∇f P_xx (∇f)^T,    (12)

where µ_x and µ_y are the means of the input and output statistics, respectively, and P_xx and P_yy are the corresponding covariances. It should be noted that these approximations are only accurate if second-order and higher terms in the mean and fourth-order and higher terms in the covariance are negligible [10], [7], [1]. Higher-order approximations can be made, capturing second-order statistics in the mean and fourth-order statistics in the covariance, by using the Hessian of the nonlinear transform as well as the Jacobian. First- and second-order EKF smoothers could also be implemented for better performance, but the issues with applying the EKF to very nonlinear systems still remain. The increased approximation error due to the absence of important higher-order statistics in nonlinear propagation is exactly why the Unscented Kalman Filter (UKF) is explored next.
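As a minimal sketch of the first-order propagation in Eqs. (11)-(12), the fragment below computes the linearized mean and covariance with a numerically evaluated Jacobian; the finite-difference step and function names are illustrative assumptions and not part of the report, which assumes analytically derived Jacobians.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x (stand-in for a hand-derived ∇f)."""
    fx = np.atleast_1d(f(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x, dtype=float)
        dx[i] = eps
        J[:, i] = (np.atleast_1d(f(x + dx)) - fx) / eps
    return J

def linearized_moments(f, mu_x, P_xx):
    """EKF-style approximation: mu_y = f(mu_x), P_yy = ∇f P_xx ∇f^T (Eqs. 11-12)."""
    J = numerical_jacobian(f, mu_x)
    return np.atleast_1d(f(mu_x)), J @ P_xx @ J.T
```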


III. UNSCENTED TRANSFORM
The unscented Kalman filter relies on the unscented transform to perform the nonlinear prediction, and still uses the standard linear updating rule. The unscented transformation rests on the premise that it is easier to approximate a Gaussian distribution than it is to approximate an arbitrary nonlinear function or transformation [3]. By virtue of this transformation, the algorithm has two great advantages over the EKF. First, it is able to predict the state of the system more accurately. Second, it is much less difficult to implement. The basic approach is to select a set of sigma points, according to some criterion, such that their sample mean and covariance match those of the current statistical distribution up to second order, i.e., µ_x and P_xx. The nonlinear function is then applied to each point to create a "cloud" of transformed points from which µ_y and P_yy are calculated. This might be considered a random Monte Carlo method were it not for the fact that the sigma points are deterministically selected [2]. The following summarizes the unscented transform.

Unscented Transform
1) Approximate the d-dimensional random variable x with mean µ_x and covariance P_xx by 2d + 1 weighted samples given by

X_0 = µ_x,   X_i = µ_x + (√((d + κ) P_xx))_i,   X_{i+d} = µ_x − (√((d + κ) P_xx))_i,    (13)

where κ ∈ R, (√((d + κ) P_xx))_i is the ith row or column of the matrix square root of (d + κ) P_xx, and W_0 = κ/(d + κ), W_i = W_{i+d} = 1/(2(d + κ)) are the respective weights of each sample.
2) Propagate each sample point through the nonlinear function to yield a set of transformed points,

Y_i = f(X_i).    (14)

3) The mean is given by the weighted average of the transformed points,

µ_y = Σ_{i=0}^{2d} W_i Y_i.    (15)

4) The covariance is the weighted outer product of the transformed points,

P_yy = Σ_{i=0}^{2d} W_i (Y_i − µ_y)(Y_i − µ_y)^T.    (16)
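The four steps above translate almost directly into code. The following sketch is an illustrative NumPy version (not the report's implementation); it uses a Cholesky factor as the matrix square root and the weights W_0 = κ/(d+κ), W_i = 1/(2(d+κ)).

```python
import numpy as np

def unscented_transform(f, mu_x, P_xx, kappa=0.0):
    """Propagate (mu_x, P_xx) through f using the 2d+1 sigma points of Eq. (13)."""
    d = mu_x.size
    S = np.linalg.cholesky((d + kappa) * P_xx)       # columns are sqrt((d+κ)P)_i
    sigma = np.column_stack([mu_x,
                             mu_x[:, None] + S,
                             mu_x[:, None] - S])      # Eq. (13)
    W = np.full(2 * d + 1, 1.0 / (2.0 * (d + kappa)))
    W[0] = kappa / (d + kappa)
    Y = np.column_stack([np.atleast_1d(f(sigma[:, i]))
                         for i in range(2 * d + 1)])  # Eq. (14)
    mu_y = Y @ W                                      # Eq. (15)
    dev = Y - mu_y[:, None]
    P_yy = (W * dev) @ dev.T                          # Eq. (16)
    return mu_y, P_yy
```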

Useful properties of the unscented transform [4], [5]:
1) The mean and covariance are represented exactly before the transformation, and after the nonlinear transformation the recalculated mean and covariance are correct up to second order, implying that the UT mean and covariance are more accurate than those of the EKF; the covariance is calculated to the same order of accuracy. The state vector and covariance can also be augmented to incorporate the process and measurement noise, so that this additional information is carried through the unscented transform.
2) The UKF is numerically efficient, requiring only simple vector and matrix operations. The sigma points capture the mean and covariance regardless of the choice of matrix square root. In addition, implementation is quick for a set of nonlinear models because derivation of Jacobians and Hessians is not necessary.

As an example of the second-order-statistic-preserving property of the unscented transform, consider a sensor which returns range and bearing information about a target [1]. This data is to be converted to Cartesian coordinates through the transformation

[x; y] = [r cos(θ); r sin(θ)]   with   ∇f = [cos(θ)  −r sin(θ); sin(θ)  r cos(θ)].    (17)

The actual location is (x, y) = (0, 1). The nonlinearities are compounded by the fact that in this example the range accuracy is much better than the angle accuracy. The large bearing uncertainty compared with the range uncertainty violates the assumption of local linearity in the approximation used by the EKF. The plot in Figure 1 illustrates the transformation of the mean and covariance. The covariance representation is the 1-σ level set of the distribution and is shown in Figure 1 as a circle. The solid line and × represent the original statistics, the thin dashed line and ◦ represent the first-order Taylor series approximation, and the thick dashed line and + represent the unscented transformation of the mean and covariance.
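The inconsistency described here is easy to reproduce numerically. The sketch below compares the first-order (EKF-style) mean and covariance of Eq. (17) against a Monte Carlo estimate of the true transformed statistics; the noise levels σ_r and σ_θ are assumed values chosen to mimic the tight-range/loose-bearing situation in the figure and are not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(0)
r0, th0 = 1.0, np.pi / 2                  # true target at (x, y) = (0, 1)
sig_r, sig_th = 0.02, np.deg2rad(15.0)    # assumed: tight range, loose bearing

# Monte Carlo "truth": sample polar measurements and convert to Cartesian
r = r0 + sig_r * rng.standard_normal(100_000)
th = th0 + sig_th * rng.standard_normal(100_000)
xy = np.column_stack([r * np.cos(th), r * np.sin(th)])
mc_mean, mc_cov = xy.mean(axis=0), np.cov(xy, rowvar=False)

# First-order propagation about the mean, using the Jacobian in Eq. (17)
J = np.array([[np.cos(th0), -r0 * np.sin(th0)],
              [np.sin(th0),  r0 * np.cos(th0)]])
lin_mean = np.array([r0 * np.cos(th0), r0 * np.sin(th0)])
lin_cov = J @ np.diag([sig_r**2, sig_th**2]) @ J.T

print("Monte Carlo mean:", mc_mean)    # the y component drops below 1 (curvature bias)
print("Linearized mean: ", lin_mean)   # sits exactly at (0, 1)
print("Covariance gap (true - linearized):\n", mc_cov - lin_cov)
```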


Fig. 1. The mean and covariance of the unscented and Taylor-approximation transforms applied to the range-bearing nonlinear measurement model.

As is plainly visible, the Taylor approximation is slightly biased, sitting at 1 m instead of 0.97 m, but more importantly its covariance is inconsistent and would underestimate the amount of uncertainty in a filtering process. The unscented-transformed covariance (thick dashed line) sits just inside the covariance of the original distribution, and its mean is not biased.

A. Unscented Kalman Filtering
The unscented Kalman filter is a simple extension of the standard Kalman filter. As mentioned before, only the prediction step must be modified to handle the nonlinearity. The algorithm thus takes the following steps.
• Prediction. Compute the predicted mean and error covariance using the matrix version of the UT sigma-point weighting:

X(k) = [x̂(k|k) · · · x̂(k|k)] + √c [0  √P(k|k)  −√P(k|k)]
X̂(k+1) = f(X(k), k)
x̂(k+1|k) = X̂(k+1) ω    (18)
P(k+1|k) = X̂(k+1) W [X̂(k+1)]^T + Q(k+1)    (19)

where c = d + κ is the scaling factor applied to the square root of P(k|k), ω = [W_0 · · · W_2d]^T, and W = (I − [ω · · · ω]) × diag(W_0 · · · W_2d) × (I − [ω · · · ω])^T.

• Correction. Compute the predicted measurement ẑ, the innovation covariance S, and the cross covariance C:

X(k+1|k) = [x̂(k+1|k) · · · x̂(k+1|k)] + √c [0  √P(k+1|k)  −√P(k+1|k)]
Y(k+1|k) = h(X(k+1|k), k)
ẑ(k+1|k) = Y(k+1|k) ω    (20)
S(k+1) = Y(k+1|k) W [Y(k+1|k)]^T + R(k)
C(k+1) = X(k+1|k) W [Y(k+1|k)]^T

The Kalman gain in (9) and the update in (6)-(7) are then computed from the correction quantities obtained here.

IV. INTERACTING MULTIPLE MODEL (IMM) METHOD
In practical scenarios it is reasonable to assume that the model of a system can change. The characteristics of a varying system are hard to describe with a single model, so in estimation there is a need to allow continuous, piecewise switching of the motion model. A set of n models is denoted M = {M^1, . . . , M^n}. It is assumed that for each model M^j there is a prior

probability µ_0^j = P{M_0^j}. The probability of switching from model i to model j at the next time instance is given by p_ij = P{M^j(k+1) | M^i(k)}. This can be seen as the transition probability matrix of a first-order Markov chain characterizing the mode transitions; hence, systems of this type are often called Markovian switching systems. The optimal approach to filtering the state under multiple possible models requires running optimal filters for every possible sequence of models,


so that for the kth measurement n^k filters must process the data. Thus, some kind of approximation must be made. A modified version of our system must now be given as

x(k+1) = f^j(x(k), u(k+1), w^j(k+1), k)
z(k) = h^j(x(k), u(k), k) + v^j(k),    (21)

where j is now the index of the model (or mode) which is active during time instance k. The IMM filter [6] is a computationally efficient, suboptimal estimation algorithm with good performance for Markovian switching systems of the type described previously. The IMM has three basic steps: interaction (mixing), filtering, and combination. In a single time step, the state estimates and covariance matrices from the multiple models are mixed according to mixing probabilities computed from the previous time step [7]. Each of the n filters is then run forward independently, as it normally would on its own. The last step is to compute a weighted combination of the updated filter statistics, yielding a final state estimate. The model probabilities have some default initialization value, but after evolving in time they stabilize, favoring the filter that has been updated with measurements of higher likelihood. The equations for each step of the IMM filter are as follows:

• Interaction
The mixing probabilities µ_k^{i|j} for each pair of models M^i and M^j are calculated as

c̄_j = Σ_{i=1}^{n} p_ij µ_k^i,    (22)
µ_k^{i|j} = (1/c̄_j) p_ij µ_k^i,    (23)

where µ_k^i is the probability of model M^i at time instance k and c̄_j is a normalizing factor. The mixed inputs to the filters are then computed as

x̂^{0j}(k) = Σ_{i=1}^{n} µ_k^{i|j} x̂^i(k|k),    (24)
P^{0j}(k) = Σ_{i=1}^{n} µ_k^{i|j} × { P^i(k|k) + [x̂^i(k|k) − x̂^{0j}(k)][x̂^i(k|k) − x̂^{0j}(k)]^T },    (25)

where x̂^i(k|k) and P^i(k|k) are the updated mean and covariance of model i at time instance k.
• Filtering
For each model M^i the filtering is done as

[x̂^i(k+1|k), P^i(k+1|k)] = KF_p(x̂^{0i}(k), P^{0i}(k), A^i(k), Q^i(k)),    (26)
[x̂^i(k+1|k+1), P^i(k+1|k+1)] = KF_u(x̂^i(k+1|k), P^i(k+1|k), z(k+1), H^i(k+1), R^i(k+1)),    (27)

where KF_p(·) and KF_u(·) are the Kalman filter prediction and update steps, respectively, applied with model i's dynamic and measurement models. The likelihood of the measurement for each filter is computed as

Λ^i(k) = N(z̃^i(k); 0, S^i(k)),    (28)

where z̃^i(k) is the measurement innovation and S^i(k) is the innovation covariance for model M^i in the KF update step. The probabilities of each model M^i at time step k are then calculated as

c = Σ_{i=1}^{n} Λ^i(k) c̄_i,    (29)
µ_k^i = (1/c) Λ^i(k) c̄_i,    (30)

where c is a normalizing factor.

• Combination
In the final stage of the algorithm the combined estimates of the state mean and covariance are computed as

x̂(k|k) = Σ_{i=1}^{n} µ_k^i x̂^i(k|k),    (31)
P(k|k) = Σ_{i=1}^{n} µ_k^i × { P^i(k|k) + [x̂^i(k|k) − x̂(k|k)][x̂^i(k|k) − x̂(k|k)]^T }.    (32)

IMM filtering can be used as a method of phase identification, i.e., identifying the type of movement which best describes the current motion of the target. For example, if the target is an aircraft, the model set could include left-turn, right-turn, straight-and-level, ascending, and descending models. If the model probabilities persistently favor (with larger weighting) the descending model, then additional processing could be initiated from this phase identification to determine whether the aircraft is landing.
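To make the three IMM stages concrete, the sketch below performs one full IMM cycle, Eqs. (22)-(32), given per-model prediction and update callables. It is an illustrative NumPy fragment rather than the report's code: the callable interface is an assumption, and for simplicity all models are assumed to share the same state dimension (in the scenarios here the 4-state NCV model would have to be padded to match the 5-state CT model).

```python
import numpy as np

def imm_step(x, P, mu, p_trans, predict, update, z):
    """One IMM cycle, Eqs. (22)-(32).

    x, P    : lists of per-model means and covariances from the previous step
    mu      : model probabilities µ_k (length n)
    p_trans : p_trans[i, j] = P{ M^j(k+1) | M^i(k) }
    predict : predict[j](x0, P0) -> (x_pred, P_pred)                (per-model KF_p)
    update  : update[j](x_pred, P_pred, z) -> (x_upd, P_upd, nu, S) (per-model KF_u)
    """
    n = len(x)
    # Interaction / mixing, Eqs. (22)-(25)
    c_bar = p_trans.T @ mu                              # Eq. (22)
    mix = (p_trans * mu[:, None]) / c_bar[None, :]      # mix[i, j] = µ_k^{i|j}, Eq. (23)
    x0 = [sum(mix[i, j] * x[i] for i in range(n)) for j in range(n)]    # Eq. (24)
    P0 = [sum(mix[i, j] * (P[i] + np.outer(x[i] - x0[j], x[i] - x0[j]))
              for i in range(n)) for j in range(n)]                      # Eq. (25)
    # Per-model filtering and likelihoods, Eqs. (26)-(28)
    x_new, P_new, like = [], [], np.zeros(n)
    for j in range(n):
        xp, Pp = predict[j](x0[j], P0[j])
        xu, Pu, nu, S = update[j](xp, Pp, z)
        x_new.append(xu)
        P_new.append(Pu)
        like[j] = np.exp(-0.5 * nu @ np.linalg.solve(S, nu)) / np.sqrt(
            (2 * np.pi) ** nu.size * np.linalg.det(S))                   # Eq. (28)
    # Model probability update, Eqs. (29)-(30)
    mu_new = like * c_bar
    mu_new /= mu_new.sum()
    # Combination, Eqs. (31)-(32)
    x_comb = sum(mu_new[j] * x_new[j] for j in range(n))
    P_comb = sum(mu_new[j] * (P_new[j] + np.outer(x_new[j] - x_comb,
                                                  x_new[j] - x_comb)) for j in range(n))
    return x_new, P_new, mu_new, x_comb, P_comb
```

The mode-probability vector returned here is the quantity plotted later for phase identification.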


V. DATA
A simple but realistic simulated dynamic and observation model for target tracking constitutes the data for this project. The estimation problem consists of four sensors measuring bearing only, each with the target always in its field of view. The target vehicle moves by switching between two different motion models: a nearly constant velocity (NCV) model, a.k.a. the constant-velocity Wiener process [8], and a coordinated turn (CT) model. The models for bearing-only estimation and for the NCV and CT dynamics are explained next. The bearing-only estimation for 2-D state filtering begins by defining the state vector as

x(k) = [x(k)  y(k)  ẋ(k)  ẏ(k)]^T.    (33)

Let us start by modeling the dynamics of the target with the nearly constant velocity (NCV), or constant-velocity Wiener process, model. The discrete state propagation model is then

x(k+1) = [1 0 ∆t 0; 0 1 0 ∆t; 0 0 1 0; 0 0 0 1] x(k) + w(k),    (34)

where w(k) is Gaussian process noise with zero mean and covariance

Q(k) = E[w(k)w^T(k)] = q [∆t^3/3  0  ∆t^2/2  0;  0  ∆t^3/3  0  ∆t^2/2;  ∆t^2/2  0  ∆t  0;  0  ∆t^2/2  0  ∆t],    (35)

where q is the spectral density of the process noise, which is set to 0.08 for the UKF and EKF and to 0.03 for this model inside the EKF-based IMM and the UKF-based IMM. The observation model for sensor β is defined as

θ^β(k) = arctan( (y(k) − s_y^β) / (x(k) − s_x^β) ) + r^β(k),    (36)

where (s_x^β, s_y^β) is the position of sensor β and r^β(k) ∼ N(0, σ_r^2), with σ_r^2 = 0.07 in the simulations below. For the case where two sensors observe the target, the first derivative of the observation model, which is needed for the EKF, is given by

H_x(x(k), k) = ∂h^β(x(k))/∂x = [ −(y(k) − s_y^1)/D_1   (x(k) − s_x^1)/D_1   0   0 ;  −(y(k) − s_y^2)/D_2   (x(k) − s_x^2)/D_2   0   0 ],   D_β = (x(k) − s_x^β)^2 + (y(k) − s_y^β)^2,   β = 1, 2.    (37)
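The NCV matrices of Eqs. (34)-(35) and the bearing measurement model of Eqs. (36)-(37) can be assembled as below. This is a sketch, not the report's simulation code; the two sensor positions in `sensors` are placeholder values used only to show the interface.

```python
import numpy as np

def ncv_matrices(dt, q):
    """Transition matrix of Eq. (34) and process noise covariance of Eq. (35)."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    Q = q * np.array([[dt**3 / 3, 0,         dt**2 / 2, 0        ],
                      [0,         dt**3 / 3, 0,         dt**2 / 2],
                      [dt**2 / 2, 0,         dt,        0        ],
                      [0,         dt**2 / 2, 0,         dt       ]])
    return F, Q

def bearings(x, sensors):
    """Stacked bearing measurements of Eq. (36), one per sensor (noise-free part)."""
    return np.array([np.arctan2(x[1] - sy, x[0] - sx) for sx, sy in sensors])

def bearings_jacobian(x, sensors):
    """Jacobian of Eq. (37); one row per sensor, zero columns for the velocities."""
    H = np.zeros((len(sensors), x.size))
    for b, (sx, sy) in enumerate(sensors):
        dx, dy = x[0] - sx, x[1] - sy
        r2 = dx**2 + dy**2
        H[b, 0] = -dy / r2
        H[b, 1] = dx / r2
    return H

F, Q = ncv_matrices(dt=0.1, q=0.08)
sensors = [(-1.0, -2.0), (11.0, -2.0)]      # placeholder sensor positions
H = bearings_jacobian(np.array([0.0, 0.0, 0.0, 1.0]), sensors)
```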

The models needed for simulating our target filtering process are almost complete. The last model is the coordinated turn model, which propagates the position of a target through a turn based on a simultaneously estimated turn-rate parameter. The state vector for this model is

x(k) = [x(k)  y(k)  ẋ(k)  ẏ(k)  ω(k)]^T,    (38)

where ω(k) is the current turn rate in radians per second. The dynamical model is formulated as

x(k+1) = [1  0  sin(ω(k)∆t)/ω(k)  (cos(ω(k)∆t) − 1)/ω(k)  0;
          0  1  (1 − cos(ω(k)∆t))/ω(k)  sin(ω(k)∆t)/ω(k)  0;
          0  0  cos(ω(k)∆t)  −sin(ω(k)∆t)  0;
          0  0  sin(ω(k)∆t)  cos(ω(k)∆t)  0;
          0  0  0  0  1] x(k) + [0; 0; 0; 0; 1] ν(k),    (39)

where ν(k) ∼ N(0, σ_ω^2) is the univariate Gaussian process noise driving the turn-rate parameter. This model is quite nonlinear, and an ongoing estimate of the turn parameter is also maintained. The derivative of this transition matrix is somewhat nontrivial; an adequate derivation can be found in [8], which is also part of the reason for using the unscented transform (UT).

A. Definition of Metrics
There are several unique outputs of the filtering algorithms in the previous sections from which insight can be drawn as to the performance and behavior of the EKF, UKF, and IMM algorithms. The first, and ultimately the most valuable, is the root mean squared error (RMSE), which is self explanatory. Another metric almost as important, if not more so in the context of data association, is covariance consistency. This is given by [1], [9]

P(k) − E[(x(k) − x̂(k))(x(k) − x̂(k))^T] ≥ 0,    (40)


which can also be rewritten more efficiently as

E[(x(k) − x̂(k))^T P^{-1}(k) (x(k) − x̂(k))] ≤ 1.    (41)
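In sample form, the expectation in Eq. (41) can be estimated by averaging the normalized squared error over Monte Carlo runs (or over time), as in the short sketch below; the array shapes are assumptions made for the illustration.

```python
import numpy as np

def covariance_consistency(x_true, x_est, P):
    """Sample estimate of E[(x - x̂)^T P^{-1} (x - x̂)] in Eq. (41).

    x_true, x_est : arrays of shape (runs, d)
    P             : array of shape (runs, d, d) of reported error covariances
    """
    errs = x_true - x_est
    nees = np.array([e @ np.linalg.solve(Pk, e) for e, Pk in zip(errs, P)])
    return nees.mean()   # values at or below one are read as consistent per Eq. (41)
```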

The metric in the results that follow therefore indicates an accurate error covariance when the covariance consistency is less than one; whenever it is greater than one, the covariance should express more uncertainty about the estimated location than it does. This is an extremely important measure of the fidelity of an estimator, especially in situations where data association must be performed before filtering can even occur [9]. Data association relies on the covariance of a track to determine whether or not it is reasonable that the measurement in question should be assigned to that filtered state. If the error covariance is too small (i.e., overly confident), the track will likely fail to associate and will eventually break off from the incoming measurements which truly belong to it. Finally, the last plots are the switching mode probabilities and the turn rate estimated by the IMM filters. The switching probabilities are the metric that makes it possible to perform phase detection and determine whether the object's motion is in a particular maneuver.

VI. RESULTS
Four types of filters were applied to the simulated target tracking scenarios: the Unscented Kalman Filter (UKF), the Extended Kalman Filter (EKF), the Interacting Multiple Model (IMM) filter based on EKFs, and an IMM with multiple UKFs. The time step is ∆t = 0.1. The initial mode probabilities for the IMMs using the NCV and CT models were 0.95 and 0.05, respectively. The transition matrix was

p_ij = [.99  .01; .01  .99].    (42)

Scenario 1

This scenario is 40 seconds long and has a single maneuvering target which executes the following actions at the specified times (a noise-free sketch of turning this schedule into a ground-truth trajectory follows the list):
• Initial velocity of 1 m/s going north, with initial position at (0, 0).
• At 6 s, turn left at 1 rad/s for 3 seconds.
• At 9 s, move straight for 6 seconds.
• At 17 s, turn right at −1 rad/s for 1.5 seconds.
• At 18.5 s, move straight for 3.5 seconds.
• At 22 s, turn left again at 1 rad/s for 5 seconds.
• At 27 s, move straight for 3 seconds.
• At 30 s, turn right again at −1 rad/s for 5 seconds.
• At 35 s, move straight for 5 seconds.
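The noise-free sketch below shows one way to turn the schedule above into a ground-truth trajectory by propagating the position and velocity states with the coordinated-turn kinematics of Eq. (39) (ω = 0 reduces to straight-line motion). It is illustrative only and is not the simulation code used in the report; the unspecified interval between 15 s and 17 s is assumed to be straight-line motion.

```python
import numpy as np

def ct_step(x, omega, dt):
    """Coordinated-turn kinematics of Eq. (39) for [x, y, xdot, ydot];
    omega = 0 falls back to constant-velocity (straight-line) motion."""
    px, py, vx, vy = x
    if abs(omega) < 1e-9:
        return np.array([px + vx * dt, py + vy * dt, vx, vy])
    s, c = np.sin(omega * dt), np.cos(omega * dt)
    return np.array([px + vx * s / omega + vy * (c - 1) / omega,
                     py + vx * (1 - c) / omega + vy * s / omega,
                     vx * c - vy * s,
                     vx * s + vy * c])

# (duration in seconds, turn rate in rad/s); the 15 s to 17 s gap is assumed straight
schedule = [(6.0, 0.0), (3.0, 1.0), (8.0, 0.0), (1.5, -1.0), (3.5, 0.0),
            (5.0, 1.0), (3.0, 0.0), (5.0, -1.0), (5.0, 0.0)]

dt = 0.1
x = np.array([0.0, 0.0, 0.0, 1.0])        # start at (0, 0) heading north at 1 m/s
truth = [x]
for duration, omega in schedule:
    for _ in range(int(round(duration / dt))):
        x = ct_step(x, omega, dt)
        truth.append(x)
truth = np.asarray(truth)                  # ground-truth positions in truth[:, :2]
```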

The path of the target in this scenario is shown in Figure 2(a), and the results of filtering for all four estimation methods are shown in Figure 2(b). At this level of detail, all of the methods appear to have very similar performance. Figure 3 plots the trajectories estimated by (a) the UKF and (b) the EKF-based IMM filter, together with the 1-σ covariance ellipses of the error over time. The IMM mode switching probabilities are shown in Figure 5(a). These are the results which could be used to identify which phase of flight the target in question was in. Notice there is a small delay before the IMM can completely switch over to the correct model; this possibly comes from the lagging estimate of the turn rate in Figure 5(b). Since this model is mismatched with the true turn rate, there is also going to be an increase in the actual error, while the reported covariance may not reflect it; this is where covariance consistency comes in. The covariance consistency in Figure 4(b) shows that while the EKF and UKF manage to stay moderately low even during the turns, the IMMs have a difficult time, and their actual accuracy is not what their covariances indicate. The UKF-based IMM reduces its covariance consistency considerably faster than the EKF-based IMM coming out of the turns. Notice that all of the methods have a very high covariance consistency during the coordinated turns and only begin to approach the desirable value of unity after the target has been in the NCV (linear-model) phase for a while. Finally, the RMS error for all four methods is shown in Figure 4(a). The error of the IMM filters is considerably lower at the beginning of the run, while later on the EKF-based IMM has the worst error at the turns, and the UKF-based IMM actually drops below the error of the other filters for short periods of time as the IMM filter leaves the CT model. The UKF and EKF perform similarly.

Poor choice of mode probabilities
The consequence of choosing bad mode transition probabilities is that the IMM filter blends in the updated state from the less likely filter and interferes with accurate prediction for the


Fig. 2. The trajectory of scenario 1 and the filtered estimates of the EKF, UKF, EKF-based IMM, and UKF-based IMM. (a) Scenario 1 trajectory, with sensor positions and starting position marked. (b) Filtered estimates.

Fig. 3. The filtered estimates and 1-σ covariance ellipses of the target track in scenario 1 for (a) the UKF and (b) the EKF-based IMM.

current most likely mode. Using scenario 1, the transition probability matrix was changed to

p_ij = [.8  .2; .2  .8].    (43)

The effects of this poor choice of values are shown in Figure 6.

Fig. 4. The RMSE and covariance consistency of the filtered estimates in scenario 1. (a) RMSE. (b) Covariance consistency.

Fig. 5. The IMM mode probabilities and turn rate estimates of the IMM-filtered estimates in scenario 1. (a) IMM mode probabilities. (b) Turn rate estimates.

The switching probabilities in Figure 6(a) hardly ever break out of a completely blended mode. The covariance consistency and RMS error of the IMM filters also suffer as a result of the ensuing model mismatch, as seen in Figure 6(b) and (d), and the estimates of the turn rate in Figure 6(c) are likewise degraded.

Fig. 6. The effect of choosing a poor set of IMM transition probabilities: the turn rate estimates, mode probabilities, and covariance consistency are all greatly affected. (a) IMM mode probabilities. (b) Covariance consistency. (c) Turn rate estimates. (d) RMSE.

Scenario 2

This scenario has a single, slightly maneuvering target. "Slightly maneuvering" means that the target's path uses very small turns to simulate uncertainties in its motion; effectively this keeps the scenario from being a trivial constant-velocity case. The target executes the following actions at the specified times:
• Initial velocity of .25 m/s toward the northwest, with initial position at (0, 0).
• At 2 s, make a slow turn right with rate −.2.
• At 5 s, move straight for 2 seconds.
• At 7 s, commence another turn, with rate .2.
• At 9 s, move straight for 1 second.
• Repeat the second through last actions until 30 seconds are reached.

Fig. 7. The trajectory of scenario 2 and the filtered estimates of the EKF, UKF, EKF-based IMM, and UKF-based IMM. (a) Scenario 2 trajectory, with sensor positions and starting position marked. (b) Filtered estimates.

The path of the target in this scenario is shown in Figure 7(a), and the results of filtering for all four estimation methods are shown in Figure 7(b). At this level of detail, all of the methods again appear to have very similar performance. Figure 8 plots the trajectories estimated by (a) the EKF and (b) the UKF-based IMM filter, together with the 1-σ covariance ellipses of the error over time. The IMM mode switching probabilities are shown in Figure 10(a) and demonstrate that the IMM filters are using the mixture of states to their advantage. The error and covariance consistency are visibly better for the IMM filters in Figure 9(a) and (b). There is again a lag in the estimate of the turn rate in Figure 10(b). Since this scenario is much smoother, the covariance consistency in Figure 9(b) shows that all four methods manage to stay moderately low. The UKF-based IMM reduces its covariance consistency considerably faster than the EKF-based IMM coming out of the turns. The error of the IMM filters is considerably lower throughout the run, and the UKF and EKF again perform similarly.

Fig. 8. The filtered estimates and 1-σ covariance ellipses of the target track in scenario 2 for (a) the EKF and (b) the UKF-based IMM.

Fig. 9. The RMSE and covariance consistency of the filtered estimates in scenario 2. (a) RMSE. (b) Covariance consistency.

Fig. 10. The IMM mode probabilities and turn rate estimates of the IMM-filtered estimates in scenario 2. (a) IMM mode probabilities. (b) Turn rate estimates.

VII. CONCLUSIONS
This project focused on testing different nonlinear prediction and filtering methods. The EKF is the standard filter for such problems, and two methods were implemented against it to gain improvements in RMS error and covariance consistency: the Unscented Kalman

Filter and the Interacting Multiple Model methods. The dynamical and observation models for the simulated data were reviewed, and two scenarios were used to benchmark the EKF, the UKF, the EKF-based IMM, and the UKF-based IMM. The results demonstrated that the IMM-filter methods generally held an advantage over the single-model methods when the model parameters were tuned correctly. The UKF and EKF performed similarly, likely because not much second-order information was being lost in the nonlinear bearing measurement transformation. The IMMs, however, held a distinct advantage in being able to switch to a more accurate model when the target changed dynamical modes. The UKF-based IMM had the best overall performance of all the methods tested on this simulated data.

REFERENCES
[1] S. J. Julier and J. K. Uhlmann, "A New Extension of the Kalman Filter to Nonlinear Systems," Proc. SPIE, vol. 3068, Signal Processing, Sensor Fusion, and Target Recognition VI, I. Kadar, Ed., pp. 182-193, July 1997.
[2] J. K. Uhlmann, "Algorithms for multiple target tracking," American Scientist, vol. 80, no. 2, pp. 121-141, 1992.
[3] J. K. Uhlmann, "Simultaneous map building and localization for real time applications," transfer thesis, University of Oxford, 1994.
[4] S. J. Julier and J. K. Uhlmann, "A General Method for Approximating Nonlinear Transformations of Probability Distributions," technical report, available from http://www.robots.ox.ac.uk/~siju, 1994.
[5] S. J. Julier, J. K. Uhlmann, and H. F. Durrant-Whyte, "A New Approach for the Nonlinear Transformation of Means and Covariances in Linear Filters," IEEE Trans. on Automatic Control, accepted for publication as a Technical Note, 1996.
[6] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation, Wiley-Interscience, 2001.
[7] S. Blackman and R. Popoli, Design and Analysis of Modern Tracking Systems, Artech House, 1994.
[8] J. Hartikainen and S. Särkkä, "Optimal Filtering with Kalman Filters and Smoothers: a Manual for Matlab," Department of Biomedical Engineering and Computational Science, Helsinki University of Technology, February 2008.
[9] O. E. Drummond, A. J. Perrella, Jr., and S. Waugh, "On target track covariance consistency," Proc. SPIE, vol. 6236, Signal and Data Processing of Small Targets, O. E. Drummond, Ed., June 2006.
[10] J. Mendel, Lessons in Estimation Theory for Signal Processing, Communications, and Control, Prentice Hall, New Jersey, 1995.
