Feature Following and Distributed Navigation Systems Development for a Small Unmanned Aerial Vehicle with Low-Cost Sensors

Deok-Jin Lee,1 Isaac Kaminer,2 Vladimir Dobrokhodov,3 and Kevin D. Jones4
Naval Postgraduate School, Monterey, CA, 93943

This paper presents the development of feature following control and distributed navigation algorithms for a small unmanned aerial vehicle equipped with a low-cost sensor unit. An efficient map-based feature generation and following control algorithm is developed. A distributed navigation system is designed for real-time attitude, position, and velocity estimation of the unmanned aircraft with a cascade filtering architecture, resulting in a fault-tolerant navigation system. The performance of the proposed feature following control and the cascaded navigation algorithm is demonstrated in both hardware-in-the-loop simulation and real flight tests with application to feature tracking with a stabilized gimbaled camera onboard a small unmanned aerial vehicle.

I. Introduction

UNMANNED aerial vehicles (UAVs) play a strategic role in a broad range of applications, including surveillance, reconnaissance, border control, and search missions [1]. Autonomy is a key technology for future high-performance and remote operation in these applications. Challenging tasks for successful autonomy of UAVs include flight path planning, real-time estimation and navigation, and path following control [2,3]. Feature tracking is a logical extension of the emerging view of the UAV as a flying sensor, combining a wireless communication network, advanced flight control algorithms, and the sensory capability of a camera onboard the unmanned aerial system (UAS) [4]. The feature following capability makes the UAV extremely useful for tactical unit support tasks, including surveillance and reconnaissance with real-time video and high-resolution imaging. Applying the feature following capability to the delivery of high-resolution (HR) video and imagery requires designating a feature as a desired sensor path on a geo-rectified digital map, either as discrete points or as a continuous path. The corresponding UAV/sensor flight path is generated on the ground and delivered to the UAV, where it is used simultaneously as the path to be followed by the footprint of the sensor and as an optimal flight path to be followed by the small UAV, as illustrated in Fig. 1. This approach results in a much smaller airspace footprint when compared to the conventional waypoint navigation solution, which cannot guarantee that the sensor continuously maintains the feature path in the center of the camera frame with the specified resolution regardless of the shape of the geographical feature. The majority of the commercial autopilots available for flight control of UAVs employ traditional waypoint (WP) navigation [4]. However, traditional WP navigation is not suited for aggressive and complex feature tracking applications, since it requires long straight-line segments for precise path following [5]. Vision-based road following and flight navigation applications have been demonstrated using an onboard camera [6,7]. These approaches require an efficient real-time vision control system that detects natural features and tracks the road using a real-time image processing strategy, which comes at an additional onboard computational cost. These shortcomings can be overcome by implementing new real-time path following control and enhanced navigation algorithms onboard the UAV. Previous work reported in [1] provides a comprehensive solution to the path following problem that includes real-time path generation and nonlinear path-following algorithms augmented by L1 adaptive control. In this paper, these algorithms are further extended to solve the feature following problem in a way that allows the user to select any geographical feature on a geo-referenced map. The corresponding UAV flight path is obtained using a mapping technique which extracts points of the target feature and produces a dynamically feasible path in a geographical coordinate system.

1 NRC Research Fellow, Dept. of Mech. & Astronautical Eng., Member AIAA.
2 Full Professor, Dept. of Mech. & Astronautical Eng., Member AIAA.
3 Research Assistant Professor, Dept. of Mech. & Astronautical Eng., Member AIAA.
4 Research Associate Professor, Dept. of Mech. & Astronautical Eng., Associate Fellow AIAA.



Figure 1 Concept for feature following with camera onboard a small UAV as a fly-the-sensor concept

The objectives of this paper are twofold. The first is to develop efficient feature generation and following control algorithms that guarantee the presence of the selected geographical feature in the center of the image frame at all times; the second is to develop a robust, high-rate navigation algorithm that provides the necessary navigation solution to the feature following system. The resulting integrated guidance, navigation, and control system enables an untrained operator to plan and execute a mission using a high-level fly-the-sensor, rather than fly-the-UAV, control concept. The schematic diagram of the overall feature following control system is shown in Fig. 2, where the control inputs from the feature following algorithm are fed back to the autopilot as an outer-loop control; for the inner-loop controller, an L1 adaptive control law can be applied to compensate for worst-case situations or to enhance the feature following performance [1]. The first objective is achieved by integrating a feature generation algorithm based on a mapping technique with the recently developed path-following algorithm [1]. The resulting feasible flight path is given by power polynomials defined using a virtual arc length; these are independent of time, which makes their real-time computation easy. The optimization technique used to generate these paths maximizes the field of view of either a fixed or gimbaled camera onboard the UAV subject to its bank angle constraints. This process maps a user-defined feature into a feasible UAV flight path that guarantees the presence of the feature in the image frame of the onboard camera and accounts for the dynamic constraints of the UAV. In the remainder of the paper this path is called a feature path. Once the feature path is generated it is sent to the onboard computer, where it is used by the integrated path-following and distributed navigation algorithms to control the UAV and the onboard camera. Clearly, a robust navigation solution is a critical component of this system. In this paper, an efficient distributed navigation system with a cascaded filtering architecture is developed to provide an accurate and robust navigation solution to the feature following algorithm. The proposed navigation algorithm relies on the extended Kalman filter framework to fuse the rate-gyro-based dead-reckoning (DR) navigation information with the GPS sensor measurements. It thus takes advantage of the short-term stability of the DR solution and accounts for the gap due to the slow update rate of the GPS measurements. The navigation filter architecture has a cascaded form in which the attitude estimation is carried out first, followed by position and velocity estimation; the latter integrates the estimated attitude with the speed from the GPS measurement. One of the advantages of the cascaded architecture lies in the fact that it is fault-tolerant to anomalous sensor information, and faults can be isolated by implementing a sensor fault detection (FD) algorithm [8]. If a sensor fault or anomalous sensor information is detected, the navigation system discards the faulty sensor data and skips the update; only the state and covariance predictions are performed until new sensor data are obtained.
The integration of the path-following algorithm [1], the feature path generation and control GUI, a reliable navigation algorithm, and camera stabilization based on remote networking capabilities provides a simple and effective solution to the complex feature tracking problem using high-resolution imagery. This allows non-trained personnel to utilize advanced features of modern airborne systems without in-depth knowledge of the UAV navigation and control systems.

Figure 2 Diagram for feature path generation and tracking control with distributed navigation system

The remainder of this paper is organized as follows. Section II describes the feature path generation algorithm, and Section III presents the feature tracking control algorithms. Section IV discusses the development of a distributed navigation system that provides position, velocity, and attitude estimates, and Section V summarizes the new third-generation rapid flight test prototyping system (RFTPS) developed in the Unmanned Systems Lab at the Naval Postgraduate School. Section VI presents hardware-in-the-loop and flight test results, and Section VII concludes the paper.

II. Optimal Feature Path Generation

A. Analytical Feature Path Generation
In this section, the generation of an analytical feature path is introduced. It is divided into three steps. In the first step, a feature path of a general road or border is extracted by using a simple point-and-click or mouse-dragging scribble operation on a geo-referenced digital map. In the second step, the extracted feature is smoothed by a smoothing algorithm to remove jumps caused by jitter in the mouse movement. Finally, the smoothed feature is used to obtain the analytical feature path, defined by a 3D spatial polynomial that is a function of a virtual arc length. This overall process is illustrated in Fig. 3.

Figure 3 Diagram for feature path generation

Fig. 4 shows a feature track extracted by using a simple point-and-click or mouse-dragging scribble operation on a digital map of the Camp Roberts area. After the feature track on the map is smoothed to remove the noise and jitter due to mouse movement, it is approximated by the polynomial path shown in Fig. 5. The polynomial representation of the feature path is used to determine a flight path for the UAV. This step is cast as an optimization problem detailed in the remainder of this section.
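As an illustration of this extraction-and-smoothing step, the following sketch applies a moving-average filter to a scribbled track and then fits the least-squares polynomial of Eq. (1) in a virtual arc-length parameter. The function names, window size, and polynomial degree are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: smooth a scribbled feature track and fit per-axis polynomials.
import numpy as np

def smooth_and_fit(points, window=5, degree=5):
    """points: (M, 2) array of clicked (x, y) map coordinates."""
    pts = np.asarray(points, dtype=float)
    # Moving-average smoothing to remove jumps from mouse jitter.
    kernel = np.ones(window) / window
    smoothed = np.column_stack(
        [np.convolve(pts[:, i], kernel, mode="same") for i in range(2)]
    )
    # Virtual arc length: cumulative chord length along the smoothed scribble.
    tau = np.concatenate(([0.0], np.cumsum(
        np.linalg.norm(np.diff(smoothed, axis=0), axis=1))))
    # Least-squares polynomial coefficients for each coordinate, as in Eq. (1).
    coeffs = [np.polyfit(tau, smoothed[:, i], degree) for i in range(2)]
    return tau, coeffs

# Example: a noisy diagonal scribble.
raw = np.column_stack([np.linspace(0, 100, 50),
                       np.linspace(0, 60, 50) + np.random.randn(50)])
tau, coeffs = smooth_and_fit(raw)
```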


Figure 4 Feature trajectory mapping by using point-and-click or mouse dragging

Figure 5 Polynomial feature trajectory reconstruction in geographical coordinates

Let $\mathbf{p}_c(\tau) = [x(\tau), y(\tau), z(\tau)]^T$ denote the feature path, where $\tau \in [0, \tau_f]$ and $\tau_f$ is the total virtual arc length. Each coordinate $x(\tau)$, $y(\tau)$, $z(\tau)$ is represented by an $N$-degree polynomial [9]

$$x_i(\tau) = \sum_{d=0}^{N} a_{i,d}\,\tau^d, \qquad i = 1, 2, 3 \tag{1}$$

where $x_1 = x$, $x_2 = y$, $x_3 = z$ for the sake of simplicity. The degree of the polynomials is determined by the number of initial and final boundary conditions. Let $N_0$ and $N_f$ represent the order of the highest derivative of $\mathbf{p}_c(0)$ and $\mathbf{p}_c(\tau_f)$; then the minimum degree $N^*$ of each polynomial is $N^* = N_0 + N_f + 1$. These in turn can be used to compute the polynomial coefficients in Eq. (1). For example, if $N_0 = N_f = 2$ then $N^* = 5$, and as a result Eq. (2) can be used to compute the coefficients of a 5th-degree polynomial path. (Details on this approach can be found in Ref. 9.)

$$
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 2 & 0 & 0 & 0 \\
1 & \tau_f & \tau_f^2 & \tau_f^3 & \tau_f^4 & \tau_f^5 \\
0 & 1 & 2\tau_f & 3\tau_f^2 & 4\tau_f^3 & 5\tau_f^4 \\
0 & 0 & 2 & 6\tau_f & 12\tau_f^2 & 20\tau_f^3
\end{bmatrix}
\begin{bmatrix} a_{i,0} \\ a_{i,1} \\ a_{i,2} \\ a_{i,3} \\ a_{i,4} \\ a_{i,5} \end{bmatrix}
=
\begin{bmatrix} x_i(0) \\ x_i'(0) \\ x_i''(0) \\ x_i(\tau_f) \\ x_i'(\tau_f) \\ x_i''(\tau_f) \end{bmatrix}
\tag{2}
$$
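The coefficients in Eq. (2) follow from a single linear solve per coordinate. The sketch below assembles the 6x6 boundary-condition matrix for the quintic case $N_0 = N_f = 2$ and solves it with NumPy; the boundary values in the example are placeholders.

```python
# Hedged sketch: solve Eq. (2) for one coordinate's quintic coefficients.
import numpy as np

def quintic_coefficients(bc0, bcf, tau_f):
    """bc0, bcf: (x, x', x'') at tau = 0 and tau = tau_f."""
    t = tau_f
    A = np.array([
        [1, 0, 0,    0,      0,        0],        # x(0)
        [0, 1, 0,    0,      0,        0],        # x'(0)
        [0, 0, 2,    0,      0,        0],        # x''(0)
        [1, t, t**2, t**3,   t**4,     t**5],     # x(tau_f)
        [0, 1, 2*t,  3*t**2, 4*t**3,   5*t**4],   # x'(tau_f)
        [0, 0, 2,    6*t,    12*t**2,  20*t**3],  # x''(tau_f)
    ])
    b = np.concatenate([bc0, bcf])
    return np.linalg.solve(A, b)  # [a_{i,0}, ..., a_{i,5}]

a = quintic_coefficients([0.0, 1.0, 0.0], [100.0, 1.0, 0.0], tau_f=120.0)
```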

The parameterization in Eq. (1) completely determines the 3-D spatial UAV path with all the boundary conditions satisfied. Furthermore, from Eq. (2) it is clear that the total virtual arc length $\tau_f$ can be used as an optimization parameter. Given the position vector in Eq. (1), the curvature of the feature path is calculated by [1]

$$\kappa = \frac{1}{r} = \left\| \frac{d\mathbf{T}}{d\tau} \right\| \frac{1}{\|\mathbf{p}'(\tau)\|} \tag{3}$$

where $r$ is a local radius and

$$\mathbf{T} = \frac{\mathbf{p}'(\tau)}{\|\mathbf{p}'(\tau)\|}, \qquad \mathbf{p}'(\tau) = \frac{d\mathbf{p}(\tau)}{d\tau} \tag{4}$$

In a typical feature following mission the UAV flies along the feature path with a constant velocity $v = \|\mathbf{v}\|$. In this case the bank angle of the UAV along the path, $\phi_{UAV}$, is given by

$$\phi_{UAV} = \tan^{-1}\left(\frac{v^2}{gr}\right) = \tan^{-1}\left(\frac{v^2}{g}\,\kappa\right) \tag{5}$$

Typical constraints on the bank angle and roll rate of the UAV are

$$0 < \phi_{UAV} \le 35^\circ, \qquad \dot{\phi}_{UAV} \le \dot{\phi}_{\max} \tag{6}$$

where the roll rate is computed by

$$\dot{\phi} = \frac{d\phi}{d\tau}\frac{d\tau}{dt} = \frac{d\phi}{d\tau}\,\frac{v}{\|\mathbf{p}'(\tau)\|} \tag{7}$$

For the case of constant velocity, the acceleration along the flight path must satisfy the following constraint

$$\|\mathbf{a}_p(\tau)\| \le a_{\max} \tag{8}$$

where

$$\mathbf{a}_p(\tau) = \frac{v_p^2}{\|\mathbf{p}'(\tau)\|^2}\left(\mathbf{I} - \frac{\mathbf{p}'(\tau)\left(\mathbf{p}'(\tau)\right)^T}{\|\mathbf{p}'(\tau)\|^2}\right)\mathbf{p}''(\tau) \tag{9}$$
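For concreteness, the constraint quantities in Eqs. (3)-(9) can be evaluated numerically along a polynomial path, as sketched below using the analytic derivatives of the fitted polynomials; the function names and the three-axis coefficient layout are illustrative assumptions.

```python
# Hedged sketch: curvature, bank angle, and normal acceleration along the path.
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def path_derivatives(coeffs, tau):
    """coeffs: list of 3 per-axis polynomial coefficient arrays (highest first)."""
    p   = np.array([np.polyval(c, tau) for c in coeffs])
    dp  = np.array([np.polyval(np.polyder(c, 1), tau) for c in coeffs])
    d2p = np.array([np.polyval(np.polyder(c, 2), tau) for c in coeffs])
    return p, dp, d2p

def curvature_and_bank(coeffs, tau, v):
    _, dp, d2p = path_derivatives(coeffs, tau)
    n = np.linalg.norm(dp)
    # Eq. (9): normal component of the acceleration at speed v.
    proj = np.eye(3) - np.outer(dp, dp) / n**2
    a_p = (v**2 / n**2) * proj @ d2p
    kappa = np.linalg.norm(a_p) / v**2       # |a_n| = kappa * v^2, Eq. (3)
    phi_uav = np.arctan(v**2 * kappa / g)    # Eq. (5)
    return kappa, phi_uav, np.linalg.norm(a_p)
```

Sweeping `tau` over $[0, \tau_f]$ with these helpers gives the worst-case bank angle and acceleration needed for the constraint checks in Eqs. (6) and (8).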

Figure 6 Geometry between bank angles and lateral distance errors

Next, using the problem geometry shown in Fig. 6, we obtain additional constraints on the UAV bank angle that guarantee that the user-specified feature remains in the center of the image frame of the onboard camera. Let $\phi_T$ denote the angle between the local horizon and the line connecting the origin of the UAV-fixed frame to the feature, and let $\mathbf{p}_T(\tau)$ denote the feature path. Then

$$\phi_T = \tan^{-1}\left(\frac{i_y}{i_z}\right) \tag{10}$$

where

$$\begin{bmatrix} i_x \\ i_y \\ i_z \end{bmatrix} = \frac{\mathbf{p}_{UAV}(\tau) - \mathbf{p}_T(\tau)}{\|\mathbf{p}_{UAV}(\tau) - \mathbf{p}_T(\tau)\|} \tag{11}$$

From Fig. 6 it follows that the desired feature will remain at the center of the image frame if

$$\phi_T = \frac{\pi}{2} - (\phi_{UAV} + \phi_G) \tag{12}$$

where $\phi_G$ is the gimbal camera angle between the UAV vertical frame and the line of sight to the feature. On the other hand, if

$$\left|\phi_T - \frac{\pi}{2} + (\phi_{UAV} + \phi_G)\right| \le \varepsilon < \vartheta/2 \tag{13}$$

where $\vartheta$ is the LOS (field-of-view) angle of the camera and $\varepsilon$ is a threshold value to be minimized, then the feature will remain in the image frame of the onboard camera. Therefore, determining a desired flight path for the UAV can be reduced to the following optimization problem:

$$\min_{v,\,\tau_f} \varepsilon \tag{14}$$

subject to the following constraints

$$0 < \phi_{UAV} \le \phi_{\max}, \qquad \dot{\phi}_{UAV} \le \dot{\phi}_{\max} \tag{15}$$

$$\left|\phi_T - \frac{\pi}{2} + (\phi_{UAV} + \phi_G)\right| \le \varepsilon < \vartheta/2 \tag{16}$$

$$v_{\min} \le v_p(\tau) \le v_{\max}, \qquad \|\mathbf{a}_p(\tau)\| \le a_{\max} \tag{17}$$
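A minimal way to attack the problem in Eqs. (14)-(17) is a coarse grid search over the two decision variables, as sketched below. The callbacks `eval_epsilon()` and `feasible()` stand in for the geometry of Eqs. (10)-(13) and the constraint checks of Eqs. (15) and (17); they are assumptions for illustration, and the paper does not state which optimizer was actually used.

```python
# Hedged sketch: grid search over speed v and total virtual arc length tau_f.
import numpy as np

def optimize_path(eval_epsilon, feasible, v_range, tauf_range, n_grid=30):
    """Return (epsilon, (v, tau_f)) for the best feasible candidate."""
    best = (np.inf, None)
    for v in np.linspace(*v_range, n_grid):
        for tau_f in np.linspace(*tauf_range, n_grid):
            if not feasible(v, tau_f):        # Eqs. (15) and (17)
                continue
            eps = eval_epsilon(v, tau_f)      # worst case of Eq. (16) along the path
            if eps < best[0]:
                best = (eps, (v, tau_f))
    return best                               # (inf, None) if nothing is feasible
```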

Fig. 7 illustrates a solution to the optimization problem in Eq. (14). It includes a reference feature path and a flight path for the UAV (the modified feature path) obtained by solving Eq. (14).

Figure 7 Optimized feature trajectory design (modified feature path vs. feature path on map, in xE-yN coordinates, m)

B. Graphical User Interface

Once a feasible flight path is generated, its geodetic coordinates are converted into a KML or KMZ format compatible with Google Earth, and it is displayed on the Google Earth 3D map as shown in Fig. 8.
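A minimal sketch of this export step is shown below, writing the path as a Google Earth LineString in plain XML. The paper does not state how its KML/KMZ files are generated, so the element layout here is an assumption; the coordinates in the example are placeholders.

```python
# Hedged sketch: write a flight path as a KML LineString for Google Earth.
def write_kml(path, coords):
    """coords: iterable of (lon_deg, lat_deg, alt_m) tuples."""
    coord_str = " ".join(f"{lon},{lat},{alt}" for lon, lat, alt in coords)
    kml = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document><Placemark>'
        "<name>UAV feature path</name><LineString>"
        "<altitudeMode>absolute</altitudeMode>"
        f"<coordinates>{coord_str}</coordinates>"
        "</LineString></Placemark></Document></kml>"
    )
    with open(path, "w") as f:
        f.write(kml)

write_kml("feature_path.kml",
          [(-120.75, 35.72, 550.0), (-120.74, 35.73, 550.0)])
```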


Figure 8 UAV flight path visualized on Google Earth

The feature-following concept allows an operator with no conventional training in UAV operation to specify the desired ground path of an airborne sensor, in this case a high-resolution camera, while the UAV flight path required to achieve this sensor path is computed autonomously. It is a true fly-the-sensor technology, whereby the end-user can specify the mission using a simple graphical user interface (GUI) with point-and-click or mouse-drag (scribble) operations on a map.

Figure 9 Graphical user interface for feature path generation and following control

The GUI control capability can be divided into two main functions: one for feature path generation and optimization using the scribble mapping technique, and the other for feature following control by the UAV. The buttons on the right-hand side of the GUI in Fig. 9 send the information produced by the feature generation step to the UAV, where it is used by the real-time path following algorithm [1].

III. Feature Tracking Control Algorithms

After a feature path is generated by the GUI-based mapping algorithm, the trajectory information is sent to the onboard computer, where a feature path tracking control algorithm is executed in real time. This section describes path following control algorithms in 3-D for real-time navigation. A path following algorithm is introduced in order to make the vehicle follow the spatial path generated by the path generation algorithm [1]. The kinematic equations of the small vehicle are used in order to take the pitch rate and yaw rate as virtual outer-loop control inputs.

The coordinate systems used are a Serret-Frenet frame F attached to a generic point moving along the desired 3D path and a wind frame W attached to the UAV with its $x$-axis aligned with the vehicle's velocity vector. Details are shown in Fig. 10.

Figure 10 Coordinate systems for feature tracking system

Here Q denotes the center of mass of the aircraft and P is an arbitrary point on the path that plays the role of the center of mass of a virtual aircraft to be followed. Q can be resolved in I as $\mathbf{q}_I = [x_I, y_I, z_I]^T$ or in F as $\mathbf{q}_F = [x_F, y_F, z_F]^T$. The angular velocity of the F frame with respect to the inertial frame I, resolved in F, is denoted by $\boldsymbol{\omega}_F^{FI}$. Let $\mathbf{p}_c(\tau)$ be the path to be followed by the UAV. Then the simplified UAV kinematic equations [1] are expressed by

$$
\begin{aligned}
\dot{x}_I &= \upsilon \cos\gamma \cos\psi \\
\dot{y}_I &= \upsilon \cos\gamma \sin\psi \\
\dot{z}_I &= -\upsilon \sin\gamma \\
\dot{\gamma} &= q \\
\dot{\psi} &= r/\cos\gamma
\end{aligned}
\tag{18}
$$

where $\upsilon$ is the magnitude of the UAV velocity, $\gamma$ is the flight path angle, $\psi$ is the heading angle, and $q$ and $r$ are the pitch rate and yaw rate in the $y$-axis and $z$-axis components resolved in the wind frame W. Following the standard nomenclature used in Ref. [1], let $l$ denote the path length along the desired path $\mathbf{p}_c(\tau)$. Then the equations for the path following error kinematics in the F frame are

$$
\begin{aligned}
\dot{x}_F(t) &= -\dot{l}(t)\left(1 - \kappa(l(t))\,y_F(t)\right) + \upsilon(t)\cos(\theta_e(t))\cos(\psi_e(t)) \\
\dot{y}_F(t) &= -\dot{l}(t)\left(\kappa(l(t))\,x_F(t) - \varsigma(l(t))\,z_F(t)\right) + \upsilon(t)\cos(\theta_e(t))\sin(\psi_e(t)) \\
\dot{z}_F(t) &= -\varsigma(l(t))\,\dot{l}(t)\,y_F(t) - \upsilon(t)\sin(\theta_e(t)) \\
\dot{\theta}_e(t) &= u_\theta(t) \\
\dot{\psi}_e(t) &= u_\psi(t) \\
\dot{l}(t) &= k_1 x_F(t) + \upsilon(t)\cos(\theta_e(t))\cos(\psi_e(t))
\end{aligned}
\tag{19}
$$

where $k_1 > 0$ is a constant and the speed profile of the UAV along the path is assumed to be bounded below, $\upsilon(t) \ge \upsilon_{\min} > 0$. The state vector $\mathbf{x}(t)$ for the error kinematics is defined as

$$\mathbf{x}(t) = \left[\, x_F(t) \;\; y_F(t) \;\; z_F(t) \;\; \theta_e(t) - \delta_\theta(t) \;\; \psi_e(t) - \delta_\psi(t) \,\right]^T \tag{20}$$

The feedback control laws are then computed by

$$
\begin{aligned}
u_{\theta_c}(t) &= -k_2\left(\theta_e(t) - \delta_\theta(t)\right) + \frac{\sin\theta_e(t) - \sin\delta_\theta(t)}{\theta_e(t) - \delta_\theta(t)}\,\frac{c_2}{c_1}\, z_F(t)\,\upsilon(t) + \dot{\delta}_\theta(t) \\
u_{\psi_c}(t) &= -k_3\left(\psi_e(t) - \delta_\psi(t)\right) - \frac{\sin\psi_e(t) + \sin\delta_\psi(t)}{\psi_e(t) - \delta_\psi(t)}\,\frac{c_2}{c_1}\, y_F(t)\,\upsilon(t)\cos\theta_e(t) + \dot{\delta}_\psi(t)
\end{aligned}
\tag{21}
$$

where $k_2 > 0$, $k_3 > 0$, $c_1 > 0$, and $c_2 > 0$. The terms $\delta_\theta$ and $\delta_\psi$ are defined by

$$
\delta_\theta(t) = \sin^{-1}\left(\frac{\theta_a\, z_F(t)}{|z_F(t)| + d_1}\right), \qquad
\delta_\psi(t) = \sin^{-1}\left(\frac{\psi_a\, y_F(t)}{|y_F(t)| + d_2}\right)
\tag{22}
$$

where $0 < \theta_a \le 1$, $0 < \psi_a \le 1$, $d_1 > 0$, and $d_2 > 0$.
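A sketch of Eqs. (21)-(22) as a feedback function is given below. The sinc-like ratios are guarded against small denominators, the delta-dot terms are assumed to be supplied externally (e.g., by numerical differentiation), and all gains are illustrative placeholders rather than flight-tested values.

```python
# Hedged sketch: outer-loop path-following control laws of Eqs. (21)-(22).
import numpy as np

k2, k3, c1, c2 = 1.0, 1.0, 1.0, 0.5          # illustrative gains
theta_a, psi_a, d1, d2 = 0.8, 0.8, 5.0, 5.0  # illustrative shaping parameters

def _safe_div(num, den, eps=1e-6):
    """Guard the ratio terms of Eq. (21) near a zero denominator."""
    if abs(den) < eps:
        den = eps if den >= 0 else -eps
    return num / den

def delta_angles(y_f, z_f):
    """Approach-angle shaping terms, Eq. (22)."""
    d_theta = np.arcsin(theta_a * z_f / (abs(z_f) + d1))
    d_psi = np.arcsin(psi_a * y_f / (abs(y_f) + d2))
    return d_theta, d_psi

def control_laws(y_f, z_f, theta_e, psi_e, v, d_theta_dot, d_psi_dot):
    """Return (u_theta_c, u_psi_c) from Eq. (21)."""
    d_theta, d_psi = delta_angles(y_f, z_f)
    u_theta = (-k2 * (theta_e - d_theta)
               + _safe_div(np.sin(theta_e) - np.sin(d_theta),
                           theta_e - d_theta)
               * (c2 / c1) * z_f * v + d_theta_dot)
    u_psi = (-k3 * (psi_e - d_psi)
             - _safe_div(np.sin(psi_e) + np.sin(d_psi),
                         psi_e - d_psi)
             * (c2 / c1) * y_f * v * np.cos(theta_e) + d_psi_dot)
    return u_theta, u_psi
```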

IV. Distributed Navigation Systems Design

The primary objective of this section is to design a multi-rate filter to provide precise estimates of the UAV position and velocity. This is done using a fault-tolerant, distributed, cascaded filtering architecture while keeping the computational workload as low as possible.

Figure 11 Diagram for distributed navigation system with cascade filtering structure

Fig. 11 shows the overall distributed navigation concept for attitude, position, and velocity estimation using the cascaded filtering architecture. First, a complementary filter [10] is used to estimate the attitude of the UAV. Second, an integrated dead-reckoning and GPS navigation system that combines low-cost strapdown inertial rate data with GPS measurements using the extended Kalman filter is designed. The navigation filter provides not only position estimation but also velocity estimation with respect to a local geographic navigation frame (North, East, Down). A simple dead-reckoning (DR) approach is used to predict the position and velocity between GPS updates. The proposed integrated DR/GPS navigation algorithm minimizes the onboard computational workload. One of the advantages of the distributed navigation system implemented is its robustness to sensor failures, achieved by identifying and isolating anomalous sensor signals. The architecture of the position and velocity estimation filter is shown in Fig. 12, where the navigation algorithms are designed by integrating the attitude information from the attitude complementary filter with the position and velocity from the GPS sensor. Next, a detailed derivation of this architecture is presented.
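Since the attitude stage is only cited here, the block below shows a generic scalar complementary filter in the spirit of [10]: the gyro-integrated angle is high-passed and the accelerometer gravity reference is low-passed. The blend gain and the per-axis scalar form are simplifying assumptions for illustration; the onboard filter estimates the full quaternion.

```python
# Hedged sketch: scalar complementary filter for roll or pitch.
import numpy as np

def complementary_update(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """angle: previous estimate (rad); gyro_rate: rad/s;
    accel_angle: roll or pitch inferred from the accelerometers (rad)."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Example: roll from accelerometers, phi_acc = atan2(a_y, a_z), at 20 Hz.
phi = 0.0
for a_y, a_z, p in [(0.1, 9.8, 0.01)] * 20:   # placeholder sensor samples
    phi = complementary_update(phi, p, np.arctan2(a_y, a_z), dt=0.05)
```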


Figure 12 Diagram for integrated position and velocity estimation

A. Position Estimation
Suppose the velocity components of the vehicle resolved in the body frame are given; then the equation of motion of the UAV in the navigation frame is computed by multiplying the direction cosine matrix provided by the attitude estimation subsystem with the body velocity vector as follows [11]

$$\dot{\mathbf{p}}^n = C_b^n\, \mathbf{v}^b \tag{23}$$

where $\mathbf{p}^n \in \Re^{3\times 1}$ represents the position vector expressed in the local geographic navigation frame defined by the directions of north, east, and down, $\mathbf{p}^n = [p_N \;\; p_E \;\; -h]^T$, and $\mathbf{v}^b \in \Re^{3\times 1}$ denotes the velocity vector in the body frame with directions out the nose, out the right wing, and out the belly, respectively, $\mathbf{v}^b \equiv [u \;\; v \;\; w]^T$. Since the attitude and navigation filtering structure has a cascade form, the direction cosine matrix $C_b^n \in \Re^{3\times 3}$ is available in advance from the attitude estimation. The continuous form of the position navigation equation can now be discretized as

$$\mathbf{p}^n_{k+1} = \mathbf{p}^n_k + \int_{t_k}^{t_{k+1}} C_b^n\, \mathbf{v}^b\, dt \approx \mathbf{p}^n_k + \Delta t\, C_b^n\, \mathbf{v}^b \tag{24}$$

Suppose the direction cosine matrix is parameterized in terms of the quaternion $\mathbf{q} \equiv [q_1 \;\; q_2 \;\; q_3 \;\; q_4]^T$; then the discrete position equation can be written in the form of a nonlinear function as $\mathbf{p}^n_{k+1} = \mathbf{f}_p(\mathbf{p}^n_k, \mathbf{q}_k, \mathbf{v}^b)$, where the nonlinear function $\mathbf{f}_p$ is a function of the position vector, the quaternion vector, and the body-frame velocity vector. The computation of the position can usually be made more efficient by replacing the body velocity components with the speed, based on the assumption of negligible sideslip and angle of attack. The forward velocity component $u$ can therefore be replaced with the speed of the UAV, $V$, and the others set to zero, i.e., $u \approx V$, $v = w \approx 0$, so the body velocity vector becomes $\mathbf{v}^b \equiv [V \;\; 0 \;\; 0]^T$. Note that the derivation of the extended Kalman filter is based on linearizing the state and observation models using a first-order Taylor series expansion. If there exists an estimate $\hat{\mathbf{p}}^n_k \in \Re^{3\times 1}$ at time $k$, then the predicted position estimate at time $k+1$ is approximated by

$$\hat{\mathbf{p}}^n_{k+1|k} = \mathbf{f}_p(\hat{\mathbf{p}}^n_k, \hat{\mathbf{q}}_{k|k}, \mathbf{v}^b) = \hat{\mathbf{p}}^n_{k|k} + \Delta t\, C_b^n(\hat{\mathbf{q}}_{k|k})\, \mathbf{v}^b \tag{25}$$

where the quaternion estimate $\hat{\mathbf{q}}_{k|k}$ is provided by the attitude complementary filter. The predicted state covariance is computed as the outer product of the predicted estimate error

$$P^p_{k+1|k} = E\left[\delta\mathbf{p}_{k+1|k}\, \delta\mathbf{p}^T_{k+1|k} \mid (\mathbf{y}_1, \ldots, \mathbf{y}_k)\right] \tag{26}$$

where $\delta\mathbf{p}_{k+1|k}$ is the predicted state estimate error, defined by subtracting the true state given in Eq. (24) from the predicted state given in Eq. (25): $\delta\mathbf{p}_{k+1|k} \equiv \mathbf{p}^n_{k+1} - \hat{\mathbf{p}}^n_{k+1|k}$. After expanding Eq. (25) using the first-order Taylor series about the estimate $\hat{\mathbf{p}}^n_{k|k}$, the predicted position estimate error is approximated by

$$\delta\mathbf{p}_{k+1|k} \approx \frac{\partial \mathbf{f}_p}{\partial \mathbf{p}^n_{k|k}} \left(\mathbf{p}^n_k - \hat{\mathbf{p}}^n_{k|k}\right) = F_{p,k}\, \delta\mathbf{p}_{k|k} \tag{27}$$

where $F_{p,k} \in \Re^{3\times 3}$ is the Jacobian matrix of $\mathbf{f}_p$ evaluated at $\mathbf{p}^n_{k|k} = \hat{\mathbf{p}}^n_{k|k}$. It should be noted that the position prediction given in Eq. (25) contains a nonlinear term, $\mathbf{f}_{p,2}(\hat{\mathbf{q}}_k, \mathbf{v}^b) \equiv \Delta t\, C_b^n(\mathbf{q}_k)\, \mathbf{v}^b$, which is a function of the estimated quaternion and the body velocity vector rather than the position estimate vector. This makes it difficult to derive the Jacobian matrix $F_{p,k}$ directly. However, the Jacobian can be derived by applying the chain rule under the assumption that the estimated quaternion and velocity information are fixed over the computational cycle $\Delta t = t_{k+1} - t_k$. Suppose the augmented state vector $\hat{\mathbf{u}}_k \equiv [\hat{\mathbf{q}}^T_k \;\; (\mathbf{v}^b)^T]^T$ is defined; then the Jacobian matrix can be approximated as

$$F_{p,k} = \frac{\partial \mathbf{f}_p}{\partial \mathbf{p}^n_{k|k}} = I_{3\times 3} + \frac{\partial \mathbf{f}_{p,2}}{\partial \hat{\mathbf{u}}_k}\, \frac{\partial \hat{\mathbf{u}}_k}{\partial \mathbf{p}^n_{k|k}} = I_{3\times 3} + F_{2,k}\, \frac{\partial \hat{\mathbf{u}}_k}{\partial \mathbf{p}^n_{k|k}} \tag{28}$$

Then the predicted position covariance is approximated by

$$P^p_{k+1|k} \approx P^p_{k|k} + E\left[F_{2,k}\, \delta\hat{\mathbf{u}}_k\, \delta\hat{\mathbf{u}}^T_k\, F^T_{2,k}\right] = P^p_{k|k} + F_{2,k}\, P^u_k\, F^T_{2,k} \tag{29}$$

where $P^u_k = E[\delta\hat{\mathbf{u}}_k\, \delta\hat{\mathbf{u}}^T_k] \in \Re^{7\times 7}$ can be divided into two parts: one is the covariance matrix associated with the quaternion and the other is associated with the velocity vector. Since the quaternion part of $\hat{\mathbf{u}}_k$ is the output of the attitude filter, which produces a steady-state attitude estimate with bounded estimation errors, the variance of $\hat{\mathbf{u}}_k$ can be assumed to take a steady-state value, i.e., a constant covariance matrix $P^u_k = P^u$ composed of two diagonal blocks:

$$P^u_k \equiv \begin{bmatrix} P^u_q & 0 \\ 0 & P^u_v \end{bmatrix} \tag{30}$$

This is a tuning variable that is specified based on the attitude sensor systems to make the position estimation accurate. Note that the position prediction is available at a higher rate, e.g., 20 Hz, depending on the rate gyro sampling rate. However, the position measurement from the GPS sensor in the navigation frame is available only at 1 Hz, so the measurement update is available only at 1 Hz. In this paper, pseudo-measurements that can be drawn at multiple sampling rates are therefore introduced. For example, the pseudo-measurements $\mathbf{y}_{k+i}$, $i = 1, \ldots, 10$, are obtained at 10 Hz between the 1-Hz GPS samples by setting the pseudo-measurement equal to the GPS measurement given at time $k$, $\mathbf{y}_{k+i} = \mathbf{p}^n_m$, where $m$ is the GPS measurement index; when the next GPS measurement becomes available at time $m+1$ (i.e., $i = 10$), the pseudo-measurements switch to the new GPS measurement, $\mathbf{y}_{k+10+i} = \mathbf{p}^n_{m+1}$, $i = 1, \ldots, 10$. The measurement equation is expressed in linear form by

$$\mathbf{y}_{p,k+i} = \mathbf{p}^n_{k+m}, \qquad i = 1, \ldots, \infty, \quad m = \mathrm{int}(i/d_s) \tag{31}$$

where the int operator produces an integer value, $d_s$ is the pseudo-measurement rate ratio (here $d_s = 10$ samples per GPS fix), the measurement matrix $H_{p,k}$ is diagonal, and the GPS sensor noise vector is assumed zero-mean with covariance $R_{p,k}$, $\mathbf{v}_{p,k} \sim N(0, R_{p,k})$. Suppose there exists a predicted position estimate $\hat{\mathbf{p}}^n_{k+1|k} \in \Re^{3\times 1}$ at time $k+1$; then the innovation vector $\boldsymbol{\upsilon}_{p,k+1}$, the difference between the sensor measurement and the predicted observation, is defined as $\boldsymbol{\upsilon}_{p,k+1} \equiv \mathbf{y}_{p,k+1} - \hat{\mathbf{y}}_{p,k+1} = \mathbf{y}_{p,k+1} - H_{p,k+1}\, \hat{\mathbf{p}}^n_{k+1|k}$. The covariance of the innovation vector is obtained by [12]

$$P^{\upsilon\upsilon}_{p,k+1} = H_{p,k+1}\, P^p_{k+1|k}\, H^T_{p,k+1} + R_{p,k} \tag{32}$$

The Kalman gain matrix is then computed by

$$K^p_{k+1} = P^p_{k+1|k}\, H^T_{p,k+1} \left(P^{\upsilon\upsilon}_{p,k+1}\right)^{-1} \tag{33}$$

where $P^{py}_{k+1} \equiv P^p_{k+1|k}\, H^T_{p,k+1}$ is the predicted cross-correlation matrix between the predicted position state and the measurement vector. Finally, the position estimate and covariance updates are obtained by

$$\hat{\mathbf{p}}^n_{k+1|k+1} = \hat{\mathbf{p}}^n_{k+1|k} + K^p_{k+1}\, \boldsymbol{\upsilon}_{p,k+1} \tag{34}$$

$$P^p_{k+1|k+1} = P^p_{k+1|k} - K^p_{k+1}\, P^{\upsilon\upsilon}_{k+1}\, (K^p_{k+1})^T \tag{35}$$
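Eqs. (25)-(35) amount to a compact predict/update loop. The sketch below condenses them into a small class: dead-reckoning prediction at the gyro rate using the attitude filter's direction cosine matrix and the GPS speed, and a Kalman update whenever a (pseudo-)measurement arrives. The interface and tuning matrices are assumptions for illustration, not the flight code.

```python
# Hedged sketch: the position filter of Eqs. (25)-(35).
import numpy as np

class PositionFilter:
    """Condensed position EKF; R and Pu are placeholder tuning matrices."""
    def __init__(self, p0, P0, R, Pu):
        self.p, self.P = np.asarray(p0, float), np.asarray(P0, float)
        self.R, self.Pu = np.asarray(R, float), np.asarray(Pu, float)

    def predict(self, C_bn, V, dt, F2):
        # Eq. (25): dead-reckoning step with the attitude DCM and GPS speed.
        self.p = self.p + dt * C_bn @ np.array([V, 0.0, 0.0])
        # Eq. (29): covariance growth; F2 is the 3x7 Jacobian block, Pu from Eq. (30).
        self.P = self.P + F2 @ self.Pu @ F2.T

    def update(self, y):
        H = np.eye(3)                          # Eq. (31): position measured directly
        S = H @ self.P @ H.T + self.R          # Eq. (32): innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)    # Eq. (33): Kalman gain
        nu = y - H @ self.p                    # innovation
        self.p = self.p + K @ nu               # Eq. (34)
        self.P = self.P - K @ S @ K.T          # Eq. (35)
        return nu, S                           # reused by the chi-square test below

# Pseudo-measurement scheduling of Eq. (31): at pseudo-sample i, reuse the GPS
# fix with index m = int(i / ds), e.g. ds = 10 pseudo-samples per 1-Hz GPS fix.
```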

B. Velocity Estimation
Before developing the velocity filtering algorithm, it is desirable to simplify the navigation equations to reduce the computational load on the onboard microcomputer, which has limited computational power. In general, the contributions of the Coriolis acceleration terms, the effect of the Earth's angular velocity, and the centripetal force are small for navigation in the vicinity of the Earth. After this simplification, the navigation equation of a UAV in the local geographic navigation frame is expressed by [13]

$$\dot{\mathbf{v}}^n = \mathbf{f}^n + \mathbf{g}^n \tag{36}$$

where $\mathbf{f}^n$ is the specific force vector measured by a triad of accelerometers resolved in the navigation frame, $\mathbf{f}^n = C_b^n\, \mathbf{f}^b$, and $\mathbf{g}^n$ is the gravitational acceleration contribution in the navigation frame, $\mathbf{g}^n = [0 \;\; 0 \;\; g]^T$. For real-time implementation, the continuous navigation equation is converted into the discrete-time form

$$\mathbf{v}^n_{k+1} = \mathbf{v}^n_k + \mathbf{u}^n_{f,k} + \mathbf{g}^n\, \Delta t \tag{37}$$

where $\mathbf{u}^n_{f,k}$ represents the sum of the velocity changes over each update and is a function of the vehicle body attitude:

$$\mathbf{u}^n_{f,k} = \int_{t_k}^{t_{k+1}} \mathbf{f}^n\, dt = \int_{t_k}^{t_{k+1}} C_b^n\, \mathbf{f}^b\, dt \tag{38}$$

The discrete-time velocity equation can now be written in the form of a nonlinear function as $\mathbf{v}^n_{k+1} = \mathbf{f}_v(\mathbf{v}^n_k, k)$. If there exists an estimate $\hat{\mathbf{v}}^n_{k|k} \in \Re^{3\times 1}$ at time $k$, then the predicted velocity estimate at time $k+1$ is approximated by

$$\hat{\mathbf{v}}^n_{k+1|k} = \hat{\mathbf{v}}^n_{k|k} + \mathbf{u}^n_{f,k} + \mathbf{g}^n\, \Delta t \tag{39}$$

The predicted velocity covariance is computed by

$$P^v_{k+1|k} = E\left[\delta\mathbf{v}_{k+1|k}\, \delta\mathbf{v}^T_{k+1|k} \mid (\mathbf{y}_{v,1}, \ldots, \mathbf{y}_{v,k})\right] \tag{40}$$

where $\delta\mathbf{v}_{k+1|k}$ is approximated by

$$\delta\mathbf{v}_{k+1|k} \approx \frac{\partial \mathbf{f}_v}{\partial \mathbf{v}^n_{k|k}} \left(\mathbf{v}^n_k - \hat{\mathbf{v}}^n_{k|k}\right) = F_{v,k}\, \delta\mathbf{v}_{k|k} \tag{41}$$

where $F_{v,k} \in \Re^{3\times 3}$ is the Jacobian matrix of $\mathbf{f}_v$ evaluated at $\mathbf{v}^n_k = \hat{\mathbf{v}}^n_{k|k}$. In order to compensate for the truncation errors, a weighted acceleration matrix is introduced by using the velocity change term $\mathbf{f}_{v,2}(\hat{\mathbf{q}}_k, \mathbf{f}^b) \equiv \mathbf{u}^n_{f,k} = \Delta t\, C_b^n(\mathbf{q}_k)\, \mathbf{f}^b$, which is a function of the estimated quaternion and the body acceleration vector. Suppose the augmented state vector $\hat{\mathbf{u}}_{v,k} \equiv [\hat{\mathbf{q}}^T_k \;\; (\mathbf{f}^b)^T]^T$ is defined; then the Jacobian matrix is approximated as

$$F_{v,k} = \frac{\partial \mathbf{f}_v}{\partial \mathbf{v}^n_{k|k}} = I_{3\times 3} + \frac{\partial \mathbf{f}_{v,2}}{\partial \hat{\mathbf{u}}_{v,k}}\, \frac{\partial \hat{\mathbf{u}}_{v,k}}{\partial \mathbf{v}^n_{k|k}} = I_{3\times 3} + F^v_{2,k}\, \frac{\partial \hat{\mathbf{u}}_{v,k}}{\partial \mathbf{v}^n_{k|k}} \tag{42}$$

where the Jacobian matrix $F^v_{2,k}$ is equal to the Jacobian matrix of the position part, i.e., $F^v_{2,k} = F^p_{2,k}$. The predicted velocity error covariance is approximated by

$$P^v_{k+1|k} \approx P^v_{k|k} + E\left[F^v_{2,k}\, \delta\hat{\mathbf{u}}_{v,k}\, \delta\hat{\mathbf{u}}^T_{v,k}\, (F^v_{2,k})^T\right] = P^v_{k|k} + F^v_{2,k}\, P^v_k\, (F^v_{2,k})^T \tag{43}$$

where $P^v_k = E[\delta\hat{\mathbf{u}}_{v,k}\, \delta\hat{\mathbf{u}}^T_{v,k}] \in \Re^{7\times 7}$ is a scaled weighting matrix composed of two diagonal blocks:

$$P^v_k \equiv \begin{bmatrix} P^v_q & 0 \\ 0 & P^v_a \end{bmatrix} \tag{44}$$

where the upper-left block is the covariance matrix associated with the quaternion and the lower-right block is related to the acceleration and can be adjusted to compensate for neglected accelerations. Since the first part is connected to the quaternion output from the attitude filter, which produces a steady-state attitude estimate, this diagonal block takes the same covariance value as in the position estimation, $P^v_q = P^p_q$. The second block, $P^v_a$, is a tunable weight matrix that is specified based on the accelerometer sensor systems to make the velocity estimation more accurate and to compensate for the neglected accelerations. Since the velocity vector $\mathbf{v}^n_k$ is the product of the direction cosine matrix $C_b^n$ and the body velocity (or the speed $V$) of the UAV, $\mathbf{v}^n_k = C_b^n\, \mathbf{v}^b$, the predicted covariance $P^v_{k|k}$ of the velocity term can be replaced by the second term of the position covariance:

$$P^v_{k|k} = \frac{1}{\Delta t}\, F_{2,k}\, P^u_k\, F^T_{2,k} \tag{45}$$

Thus, the predicted velocity error covariance equation can be simplified further as

$$P^v_{k+1|k} \approx \frac{1}{\Delta t}\, F_{2,k}\, P^u_k\, F^T_{2,k} + F^v_{2,k}\, P^v_k\, (F^v_{2,k})^T = \frac{1}{\Delta t}\, F_{2,k} \left[P^u_k + \Delta t\, P^v_k\right] F^T_{2,k} \tag{46}$$

where $F_{2,k} \equiv F^p_{2,k} = F^v_{2,k}$. The velocity measurement in the navigation frame is available from the DGPS sensor at 1 Hz; thus the following velocity pseudo-measurement equations are used

$$\mathbf{y}_{v,k+i} = \mathbf{v}^n_{k+m}, \qquad i = 1, \ldots, \infty, \quad m = \mathrm{int}(i/d_s) \tag{47}$$

where the measurement matrix $H_{v,k}$ is diagonal and the GPS velocity sensor noise vector is assumed zero-mean with covariance $R_{v,k}$, $\mathbf{v}_{v,k} \sim N(0, R_{v,k})$. The predicted velocity observation for the estimate $\hat{\mathbf{v}}^n_{k+1|k} \in \Re^{3\times 1}$ at time $k+1$ is $\hat{\mathbf{y}}_{v,k+1} = H_{v,k}\, \hat{\mathbf{v}}^n_{k+1|k}$. The velocity estimate and covariance updates are obtained by applying the Kalman update equations given in Eqs. (34)-(35), as in the position estimation.

C. Sensor Fault Detection
The distributed navigation system implemented is based on cascaded sensor information integration, which provides sensor failure detection and isolation capability by isolating any anomalous sensor data resulting from sensor failure or from corruption of the sensor signals [8],[14]. For sensor fault detection, the statistical chi-square ($\chi^2$) test is utilized [8]. First, it is assumed that the innovation covariance matrix is given by the navigation filter as

$$P^{\upsilon\upsilon}_{k+1} = H_{k+1}\, P_{k+1|k}\, H^T_{k+1} + R_k \tag{48}$$

The associated likelihood function for the innovation is defined by

$$L\{\boldsymbol{\upsilon}_{k+1}\} = \exp\left\{-\frac{1}{2}\, \boldsymbol{\upsilon}^T_{k+1} \left(P^{\upsilon\upsilon}_{k+1}\right)^{-1} \boldsymbol{\upsilon}_{k+1}\right\} \tag{49}$$

where $\boldsymbol{\upsilon}_{k+1}$ is the measurement innovation vector. The log-likelihood is then given by

$$\log L\{\boldsymbol{\upsilon}_{k+1}\} = -\frac{1}{2}\, \boldsymbol{\upsilon}^T_{k+1} \left(P^{\upsilon\upsilon}_{k+1}\right)^{-1} \boldsymbol{\upsilon}_{k+1} \tag{50}$$

The normalized innovation is expressed by

$$d^2_{k+1} \equiv \boldsymbol{\upsilon}^T_{k+1} \left(P^{\upsilon\upsilon}_{k+1}\right)^{-1} \boldsymbol{\upsilon}_{k+1} \tag{51}$$

The normalized innovation describes a quadratic ellipsoidal volume centered on the observation prediction, and if the innovations are zero-mean and white, then the normalized innovation is a $\chi^2$ random variable with $n_\upsilon$ degrees of freedom. If an observation falls within this volume, it is considered valid. The equivalent statistical $\chi^2$ test is expressed by

$$\chi^2 = \frac{\boldsymbol{\upsilon}^T_{k+1} \left(P^{\upsilon\upsilon}_{k+1}\right)^{-1} \boldsymbol{\upsilon}_{k+1}}{n_\upsilon} \ge 0 \tag{52}$$

where $n_\upsilon$ is the dimension of the innovation vector. This statistic is non-negative with a minimum value of zero; thus an upper threshold on $\chi^2$ can be used to detect anomalous sensor information. A threshold value $\chi^2_{\max}$ is chosen such that the sensor data are rejected when $\chi^2 > \chi^2_{\max}$.

V. UAV Flight Systems

For the feature tracking application, a Rascal UAV from the rapid flight test prototyping system (RFTPS) for small unmanned air vehicles (SUAVs) developed at the Unmanned Systems Lab of the Naval Postgraduate School is utilized for both hardware-in-the-loop simulation and real flight test experiments [4]. The new RFTPS integrates an avionics system architecture that includes all the principal components along with the Piccolo Plus autopilot [16]; an overhead view of the avionics is shown in Fig. 13.

Figure 13 Hardware integration and implementation architecture

The Piccolo Plus autopilot is used for primary flight control, with its dedicated 900 MHz serial data link. Two PC104 [17] computers are integrated into the flight system: one handles guidance, navigation, and control (GNC), and the other serves as a gateway computer bridging the onboard LAN and a wireless mesh network [18]. The latter is provided by a Motorola WMC6300 PCMCIA mesh card, which is built on Quadrature Division Multiple Access (QDMA) modulation and is optimized for mobile ad-hoc broadband networking; recently we moved to a Persistent Systems Wave Relay router [19]. A Pelco NET300T video server [20] is used to stream the analog video feed from the camera, and all network devices are linked through a Linksys 5-port hub. Analog imagery taken from a CCD camera is transmitted through the PelcoNet video server (NET300T) across the Ethernet network integrated in the SUAV system to the ground control center. The NET300T can display the video on a PC through any Web browser. The integrated avionics package weighs a total of 1 lb and requires about 25 W of power with all components active. Imagery is provided by a Sony FCB-IX11A color block camera [21], shown in Fig. 14, which features a 1/4-in CCD, 10x optical zoom, 1-9/16 x 1-13/16 x 2-5/8 in. dimensions, 1.5 W power consumption, and 3.5 oz weight, making it ideal for space-limited small UAVs. It provides online features such as zoom, auto focus, adjustable gain, white balancing, and tilting, available over the high-speed serial communication link, with TTL signal-level control for quick command processing.


Figure 14 Gimbaled camera unit with 10x zoom

The camera is mounted in a low-cost 2-axis gimbaled system housed in a 10-cm ball with its lower half exposed under the belly of the UAV, just behind the wings. The gimbal's pan and tilt channels are driven by two high-speed digital servos, actuated through a 12-bit serial-PWM controller connected to a RISC ATMEGA-169 8-bit microcontroller (8 MHz). This microcontroller is the central component of the architecture: it implements gimbal control through the integration of line-of-sight (LOS) rate measurements from a tri-axial sensor head with the gimbal reference commands sent from the onboard GNC SBC PC104 computer [4]. The LOS inertial stabilization technique is based on subtracting the UAV angular rates measured in inertial space from the gimbal reference commands. The schematic diagram of the gimbaled system is illustrated in Fig. 15.

Figure 15 Inertial stabilization scheme in gimbaled camera unit

Fig. 16 depicts the flight test architecture of the feature path tracking strategy. When the UAV receives a feature following command from the ground control center, the UAV and the camera automatically start following the polynomial feature path for surveillance and tracking of a target moving along the feature trajectory, using the information provided by the Piccolo ground control center and the onboard path following control algorithm. The integrated UAV/gimbal control algorithm in turn keeps the target in the center of the camera frame. The video imagery taken by the gimbaled camera is transmitted to the image processing computer over the analog 2.4 GHz link in real time. For the inner-loop controller, an L1 adaptive control law can be applied to compensate for worst-case situations or to enhance the feature following performance.
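The stabilization law described above can be sketched as follows. The small-angle pairing of pan with yaw rate and tilt with pitch rate is a simplifying assumption for illustration; the actual unit uses LOS rate measurements from the tri-axial sensor head and the full gimbal kinematics.

```python
# Hedged sketch: LOS rate stabilization by subtracting UAV body rates.
import numpy as np

def gimbal_rate_command(los_rate_ref, body_rates):
    """los_rate_ref: desired (pan_dot, tilt_dot) from the GNC computer, rad/s;
    body_rates: (p, q, r) UAV angular rates measured in inertial space, rad/s."""
    p, q, r = body_rates
    # Small-angle, zero-roll approximation: pan couples with yaw rate,
    # tilt with pitch rate.
    return np.asarray(los_rate_ref, dtype=float) - np.array([r, q])

cmd = gimbal_rate_command((0.05, -0.02), (0.01, 0.03, -0.04))
```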


Figure 16 Architecture for feature trajectory tracking

VI. Feature Following Results

A. Hardware-In-The-Loop Simulation Results
In this section, simulation results demonstrating the performance of the feature following and navigation algorithms are presented. Fig. 17 shows a feature path generated by the mapping algorithm, which is passed to the onboard feature following algorithm to make the UAV follow the path while keeping the feature in the image of the onboard camera. Initially the small UAV is orbiting at a predefined location near the ground control station (origin). When a feature following command is sent to the UAV from the ground control station, along with the information describing the target feature path, the onboard computer generates an initial bridge trajectory to bring the UAV from its current location to the beginning of the feature path. When the UAV reaches the target feature path, an automatic transition command is executed to switch the UAV from the bridge trajectory to the target feature path. At this point the UAV starts following the feature path with the camera pointing at the target feature.

Figure 17 Feature following results in North-East coordinates (m), with initial and final tracking points marked; blue line – commanded reference feature path, red line – UAV flight path

Figure 18 Feature following errors (m) versus time (sec); blue line – North, red line – East

The hardware-in-the-loop (HIL) test results are shown in Fig. 17, where the blue line represents the desired feature path and the red line the actual UAV route. When the feature following mission is completed, it is possible to design an additional surveillance mission by allowing the UAV to follow a return route connecting the final point of the feature path to the initial point; this surveillance mission is depicted as the returning route. As can be seen, the UAV followed the target feature fairly well. The feature following errors between the commanded and true UAV paths are shown in Fig. 18; the mean deviation errors remain within 30 m and the maximum deviation from the desired trajectory is about 40 m during the feature following application. These results show that precise feature following is achieved.

Figure 19 Navigation position errors (North, East, and altitude, m) from the distributed navigation filter

Figure 20 Navigation velocity errors (North, East, and altitude, m/s) from the distributed navigation filter

Figures 19 and 20 present the position and velocity estimates generated by the distributed navigation system and used as inputs to the feature following algorithm. As can be seen, the estimation errors with respect to the DGPS position and velocity are very small and converge to zero at steady state, with the maximum deviation of the position estimates less than 1 m and of the velocity estimates less than $10^{-3}$ m/s. Thus the proposed distributed navigation system provides the accurate navigation solution necessary for precise and reliable feature following capability.

B. Flight Test Results
This flight test was conducted in support of the quarterly joint U.S. Special Operations Command and Naval Postgraduate School field experiments, held at McMillan Field in Camp Roberts, California. In this section, sample results from several experiments are shown, providing some insight into the capabilities of the feature following system. Fig. 21 shows a desired feature path generated by the mapping algorithm, which is reconstructed using analytical polynomial equations as shown in Fig. 22. This polynomial path and its boundary conditions are passed to the onboard computer to make the UAV track the desired path while keeping the onboard camera targeted on the feature. The overall feature following scenario is illustrated in Fig. 23. Initially the small UAV is loitering at a constant radius at a predefined location near the ground control center. When a feature following command is sent to the UAV from the ground control station, along with the information regarding the target feature path, the onboard computer generates an initial bridge trajectory, which provides a route for the UAV to reach the target feature from its current location. When the UAV reaches the target feature path, an automatic transition command is executed to switch the UAV from the bridge trajectory to the target feature path. After the feature following is finished, the UAV is sent back to the original loitering circle, from where it can repeat the tracking mission.


Figure 21 Mapping for feature trajectory generation in geographical coordinates

Figure 22 Polynomial feature trajectory reconstruction in geographical coordinates


Figure 23 Feature following results; blue line – commanded reference path, red line – UAV flight path

Figure 24 Feature following errors in North and East coordinates


The feature following errors between the commanded and true UAV tracks are shown in Fig. 24; the mean deviation errors remain within 40 m and the maximum deviation from the desired trajectory is about 50 m during the feature following application. Figure 25 shows the mosaic of the user-selected features constructed from the images obtained by the onboard high-resolution camera, taken at 550 m AGL with a 30-degree FOV.

Figure 25 Mosaic obtained from images taken during the feature following application

VII. Conclusion

In this paper, a small UAV equipped with a low-cost, high-resolution, nadir imaging camera is utilized for feature following applications. Both hardware-in-the-loop and flight test results were presented. The newly integrated feature generation and following concept allows an operator with no prior training in UAV operations to specify the desired ground track to be followed by the airborne high-resolution camera, while the actual UAV flight path required to follow this sensor path is autonomously computed by the onboard computer. In addition, the proposed distributed navigation system provides very precise position and velocity estimates. The complete system thus combines feature generation and following algorithms with an advanced distributed navigation algorithm. This system was successfully used to capture imagery of discrete targets as well as continuous paths, where the feature following algorithm guaranteed that the sensor was pointing at the desired path. The developed feature-following algorithm, feature path generation and control GUI, reliable navigation algorithm, integrated dual-purpose HR sensor (still and video), and remote networking capabilities provide a simple and effective solution to the complex feature following problem with high-resolution imagery, allowing non-trained personnel to utilize advanced features of modern airborne systems without in-depth knowledge of the UAV navigation and control systems.

Acknowledgments

This work was supported by USSOCOM under the NPS-SOCOM TNT cooperative program. The research of the first author is supported by a National Research Council Associateship tenured at the Unmanned Systems Lab at the Naval Postgraduate School.


References

[1] Kaminer, I., Yakimenko, O., Dobrokhodov, V., Pascoal, A., Hovakimyan, N., Young, A., Cao, C., and Patel, V., "Coordinated Path Following for Time-Critical Missions of Multiple UAVs via L1 Adaptive Output Feedback Controllers," AIAA Guidance, Navigation, and Control Conference and Exhibit, Hilton Head, SC, Aug. 20-23, 2007.
[2] Campbell, M. E., Lee, J.-W., and Scholte, E., "Simulation and Flight Test of Autonomous Aircraft Estimation, Planning, and Control Algorithms," Journal of Guidance, Control, and Dynamics, Vol. 30, No. 6, Nov.-Dec. 2007, pp. 1597-1609.
[3] Kaminer, I., Pascoal, A., Hallberg, E., and Silvestre, C., "Trajectory Tracking for Autonomous Vehicles: An Integrated Approach to Guidance and Control," Journal of Guidance, Control, and Dynamics, Vol. 21, No. 1, 1998, pp. 29-38.
[4] Dobrokhodov, V., Yakimenko, O., Jones, K. D., Kaminer, I., Bourakov, E., and Kitsios, I., "New Generation of Rapid Flight Test Prototyping System for Small Unmanned Air Vehicles," AIAA Guidance, Navigation, and Control Conference and Exhibit, Hilton Head, SC, Aug. 20-23, 2007, AIAA 2007-6567.
[5] Rysdyk, R., "Unmanned Aerial Vehicle Path Following for Target Observation in Wind," Journal of Guidance, Control, and Dynamics, Vol. 29, No. 5, Sep.-Oct. 2006, pp. 1092-1100.
[6] Frew, E., McGee, T., Kim, Z., Xiao, X., Jackson, S., Morimoto, M., Rathinam, S., Padial, J., and Sengupta, R., "Vision-Based Road Following Using a Small Autonomous Aircraft," Proceedings of the IEEE Aerospace Conference, 2004, pp. 3006-3015.
[7] Egbert, J., and Beard, R. W., "Low Altitude Road Following Control Constraints Using Strap-down EO Cameras on Miniature Air Vehicles," Proceedings of the IEEE American Control Conference, 2007, pp. 353-358.
[8] Grewal, M. S., Andrews, A. P., and Weill, L. R., Global Positioning Systems, Inertial Navigation, and Integration, John Wiley & Sons, Inc., New York, 2007.
[9] Yakimenko, O., "Direct Method for Rapid Prototyping of Near-Optimal Aircraft Trajectories," Journal of Guidance, Control, and Dynamics, Vol. 23, No. 5, 2000, pp. 865-875.
[10] Brown, R. G., and Hwang, P. Y. C., Introduction to Random Signals and Applied Kalman Filtering, 3rd ed., John Wiley & Sons, Inc., New York, 1997.
[11] Kingston, D. B., and Beard, R. W., "Real-Time Attitude and Position Estimation for Small UAVs using Low-Cost Sensors," AIAA Unmanned Unlimited Systems Conference and Workshop, Chicago, IL, Sep. 2004, AIAA-2004-6533.
[12] van der Merwe, R., Wan, E. A., and Julier, S. J., "Sigma-Point Kalman Filters for Nonlinear Estimation and Sensor Fusion: Applications to Integrated Navigation," AIAA Guidance, Navigation, and Control Conference and Exhibit, Providence, RI, Aug. 2004, AIAA 2004-5120.
[13] Titterton, D. H., and Weston, J. L., Strapdown Inertial Navigation Technology, IEE Radar, Sonar, Navigation and Avionics Series 5, London, UK, 1997.
[14] Bar-Shalom, Y., Li, X.-R., and Kirubarajan, T., Estimation with Applications to Tracking and Navigation, John Wiley & Sons, Inc., New York, 2001.
[15] Cao, C., Patel, V., Hovakimyan, N., Kaminer, I., and Dobrokhodov, V., "Stabilization of Cascaded Systems via L1 Adaptive Controller with Application to a UAV Path Following Problem and Flight Test Results," American Control Conference, New York, July 11-13, 2007, WeC11.3.
[16] "Piccolo Documentation," http://www.cloudcaptech.com [cited 15 July 2008].
[17] "Microspace PC-104," http://www.adlogic-pc104.com [cited 25 June 2008].
[18] "Motorola WMC6300 Mesh Card," http://www.motorola.com/mesh [cited 15 July 2008].
[19] "Wave Relay QUAD Radio Router," http://www.persistentsystems.com/products/ [cited 20 July 2008].
[20] "PelcoNet Video Server," http://www.pelco.com/producets [cited 25 June 2008].
[21] "Sony FCB-IX11A Color Block-Camera," http://www.aegis-elec.com/producets [cited 10 July 2008].

