2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 25-30, 2011, San Francisco, CA, USA

A Nonlinear Observer Approach for Concurrent Estimation of Pose, IMU Bias and Camera-to-IMU Rotation

Glauco Garcia Scandaroli1, Pascal Morin1, Geraldo Silveira2

1 INRIA Sophia Antipolis-Méditerranée, 2004 route des Lucioles, 06902 Sophia Antipolis, France
2 CTI Renato-Archer – DRVC Division, Rod. Dom Pedro I, km 143,6, Amarais, CEP 13069-901, Campinas/SP, Brazil

[email protected], [email protected], [email protected]

Abstract— This paper concerns the problem of pose estimation for an inertial-visual sensor. It is well known that IMU biases and calibration errors between the camera and IMU frames can impair the achievement of high-quality estimates through the fusion of visual and inertial data. The main contribution of this work is the design of new observers to estimate pose, IMU bias and the camera-to-IMU rotation. The observer design relies on an extension of the so-called passive complementary filter on SO(3). Stability of the observers is established using Lyapunov functions under adequate observability conditions. Experimental results are presented to assess the approach.

Index Terms— Inertial Estimation, Inertial Vision, Nonlinear Observers, Lyapunov Function.

I. INTRODUCTION

Fusing information obtained from different sensors is ubiquitous in robotics. For some systems and applications, the fusion is necessary in order to obtain information that cannot be extracted from a single sensor. In other cases, data fusion aims at improving the information quality by exploiting the complementary characteristics of different sensors. The present work is dedicated to data fusion for an inertial-visual system. The objective is to develop observers that can provide high-quality pose estimates via the online estimation of the inertial measurement unit (IMU) biases and the self-calibration of the camera-to-IMU (C-to-IMU) frame rotation.

Inertial-visual fusion has been an active research topic for many years (see, e.g., [1], [2]). The objective is to combine pose measurements provided by a camera at a relatively low frequency with high-frequency measurements of the angular velocity and proper acceleration provided by an IMU. Pose measurements make it possible to limit the drift associated with the direct integration of the IMU data. On the other hand, IMU measurements provide incremental displacements over short time intervals that can be used to initialize vision algorithms, or to compensate for a momentary loss of vision. Unfortunately, the fusion process can be seriously impaired by measurement errors or uncertainties on the system dynamics. A first source of difficulties comes from the IMU measurement biases, which can be significant for most low-cost IMUs used in robotic applications. Since these biases may vary due to several factors (e.g., temperature, battery level), they should be permanently estimated.

(G. G. Scandaroli is funded by grants from the Conseil Régional Provence-Alpes-Côte d'Azur and by the DGCIS Rapace project led by Geocean.)


Another source of difficulties concerns the various parameters related to the use of different coordinate frames, e.g. the camera and IMU frames. Usually these parameters are constant and can be estimated in a preliminary calibration step. A possible solution is to proceed using the accelerometers as a measurement of gravity [3]. Nevertheless, this method assumes that the IMU bias has already been identified. Some authors have recently proposed the concurrent estimation of pose and IMU bias together with self-calibration [2], [4]. An underlying difficulty is that persistent motion conditions must be satisfied for the system to be completely observable. This property, together with the nonlinearities of the system's motion equations, makes this estimation problem challenging.

This study extends a previous work by the authors on nonlinear filter design for pose and IMU bias estimation. The present contribution is twofold. Firstly, we complement the nonlinear observer proposed in [5] with an estimation of the C-to-IMU rotation. This procedure is important because the fusion of visual and inertial data relies heavily on an accurate estimate of this rotation matrix. Secondly, we validate the proposed approach through experiments with a hand-held inertial-visual sensor.

The present approach is related to recent works on both nonlinear observer design and inertial-visual fusion. Concerning nonlinear observers, the main contribution of this paper is an extension of the passive complementary filter on SO(3) [6] to the case where the rotation and angular velocity measurements are obtained in different frames and the rotation between these frames is unknown. This paper is also strongly related to recent works on inertial-visual fusion, e.g. [2], [4], where concurrent pose estimation and self-calibration is also addressed. Compared to these works, the present paper is less general, as only the C-to-IMU rotation calibration is addressed, while the calibration of other parameters is also considered in [2], [4]. However, the present estimation result is stronger, since exponential stability of the observer is established under appropriate observability conditions. The extended Kalman filter and unscented Kalman filter approaches used in [4] and [2], respectively, do not provide such stability guarantees, especially under the fast motions necessary to ensure good observability properties for the parameter calibration.

Pose measurements are obtained by directly exploiting pixel intensities as proposed in [7], instead of using image features. That method has been chosen because of its efficiency and the possibility of using all image information, even from areas where no image features exist. Fusion with inertial data is instrumental in providing a good initialization of the vision algorithm and avoiding local minima. We remark that the proposed observer does not depend on any specific vision-based localization method.


Fig. 1. Representation of important frames (the world frame W, the body frame B and the camera frame C, with the transformations $\{R_C^B, p_C^B\}$, $\{R_B, p_B\}$ and $\{R_C, p_C\}$).

II. THEORETICAL BACKGROUND

A. Mathematical Notation and Identities

The unit vector with 1 in the i-th position is denoted $e_i \in \mathbb{R}^3$. The special orthogonal group is denoted $SO(3)$; its associated Lie algebra is the set of anti-symmetric matrices, denoted $so(3)$. The cross product can be represented by the matrix product $S(u)v = u \times v$, $\forall\, u, v \in \mathbb{R}^3$, where $S(\cdot) \in so(3)$. The inverse of the $S(\cdot)$ operator is denoted $\mathrm{vex}(\cdot)$, i.e., $\mathrm{vex}(S(u)) = u$ for $u \in \mathbb{R}^3$ and $S(\mathrm{vex}(A)) = A$ for $A \in so(3)$. For $A \in \mathbb{R}^{3\times 3}$, the symmetric and anti-symmetric operators are defined as $P_s(A) = \frac{A+A^T}{2}$ and $P_a(A) = \frac{A-A^T}{2}$. Let $u \in \mathbb{R}^3$ and $R \in SO(3)$; then the following properties hold:

$$S(Ru) = R\,S(u)\,R^T, \qquad R\,\mathrm{vex}\big(P_a(R)\big) = \mathrm{vex}\big(P_a(R)\big). \quad (1)$$
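These operators appear throughout the paper; as a point of reference, a direct transcription into code reads as follows (a minimal Python/NumPy sketch of our own, not part of the original implementation):

```python
import numpy as np

def S(u):
    """Skew-symmetric matrix of u, so that S(u) @ v == np.cross(u, v)."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def vex(A):
    """Inverse of S(.): vex(S(u)) == u for an anti-symmetric A."""
    return np.array([A[2, 1], A[0, 2], A[1, 0]])

def Ps(A):
    """Symmetric part of A."""
    return 0.5 * (A + A.T)

def Pa(A):
    """Anti-symmetric part of A."""
    return 0.5 * (A - A.T)
```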

Consider any parametrization $\Theta$ such that $R \approx I_3 + S(\Theta)$ at first order around the $3 \times 3$ identity matrix $I_3$. From the general rotation dynamics $\frac{d}{dt}R = R\,S(u)$, and around $R = I_3$:

$$\dot\Theta \approx u, \qquad \mathrm{vex}\big(P_a(R)\big) \approx \Theta. \quad (2)$$

The special Euclidean group is denoted $SE(3) = \big\{\big[\begin{smallmatrix} R & p \\ 0 & 1 \end{smallmatrix}\big] \mid R \in SO(3),\ p \in \mathbb{R}^3\big\}$, and its associated Lie algebra is the set of twist matrices $se(3) = \big\{\big[\begin{smallmatrix} S(u) & v \\ 0 & 0 \end{smallmatrix}\big] \mid u, v \in \mathbb{R}^3\big\}$. One can define $\Xi(x) \in se(3)$ for $x \in \mathbb{R}^6$, and one writes $T(x) = \exp\big(\Xi(x)\big) \in SE(3)$ from the properties of the exponential map.

B. System description

This work deals with the inertial-visual system depicted in Fig. 1. Denote by W an inertial world frame defined as appropriate. Similarly, let B and C denote the body and camera frames, both attached to the same rigid body. Measurements of angular velocity and linear acceleration are expressed in the former frame, while pose measurements are relative to the latter. $R_i^j \in SO(3)$ is the rotation matrix from frame i to frame j, and $p_i^j \in \mathbb{R}^3$ represents the translational displacement of frame i w.r.t. frame j. When j = W, the superscripts are omitted. The dynamics of the system, comprising the attitude and translational displacement of B with respect to W, is:

$$\dot R_B = R_B S(\omega_B), \qquad \dot p_B = v, \qquad \dot v = R_B a_B, \quad (3)$$

where $v \in \mathbb{R}^3$ is the body's translational velocity in world coordinates, and $\omega_B \in \mathbb{R}^3$ and $a_B \in \mathbb{R}^3$ are the rotational velocity and translational acceleration in B coordinates.

Assumption 1: There exist five positive constants $c_\omega$, $c_{\dot\omega}$, $c_{\ddot\omega}$, $c_{\dddot\omega}$, $c_a$ such that $\forall\, t \in [0, \infty)$: $|\omega_B(t)| \le c_\omega$, $|\dot\omega_B(t)| \le c_{\dot\omega}$, $|\ddot\omega_B(t)| \le c_{\ddot\omega}$, $|\dddot\omega_B(t)| \le c_{\dddot\omega}$, and $|a_B(t)| \le c_a$.

This technical assumption is made for the sake of stability analysis. Clearly, it is always satisfied in practice for physical systems. To measure $\omega_B$ and $a_B$, an IMU consisting of rate gyroscopes and accelerometers is employed. It is assumed that B and the IMU reference frame coincide; therefore the gyroscopes measure the angular velocity $\omega_B$ and the accelerometers measure the proper acceleration, i.e. the body's acceleration minus the gravitational field. Due to MEMS manufacturing, IMU measurements include offset biases that result in:

$$\omega = \omega_B + \omega_b, \qquad a = R_B^T(\dot v - g_0) + a_b, \qquad \dot\omega_b = 0, \qquad \dot a_b = 0, \quad (4)$$

where ω, a are the rate gyroscope and accelerometer measurements; $\omega_b$, $a_b$ denote the gyroscope and accelerometer biases, respectively; and $g_0$ is the gravitational field in W coordinates. It is important to notice that other sensor characteristics are neglected, such as limited bandwidth, bias variation w.r.t. temperature, scale errors and additive measurement noise. The experimental results show that highly accurate estimates are obtained by the proposed approach even though these effects have not been considered in the modeling.

As detailed in Section III, a visual algorithm is used to obtain rotation and translation measurements of the camera w.r.t. the world frame. These measurements are $(R_C, p_C)$ instead of $(R_B, p_B)$. In order to fuse visual and IMU measurements, the parameters $(R_C^B, p_C^B)$ of the transformation from camera frame to body frame are needed. Poor estimation of these parameters can strongly damage the fusion process. Since the IMU and the visual system have different sampling frequencies, for several periods of time only estimates obtained from IMU integration are available. If the estimates $R_B$ are biased, e.g. due to a bad estimation of $R_C^B$, the translational displacement integrates this bias twice. This misleading double integration leads to a biased position estimate. This can be problematic for the vision algorithm, especially for minimization-based algorithms that require a "good enough" initialization. Obtaining high-quality estimates of $R_C^B$ is one of the objectives of this paper.
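To make the measurement model (4) concrete, the sketch below produces synthetic gyro and accelerometer readings from a hypothetical ground-truth state; the function and variable names are ours, chosen for illustration:

```python
import numpy as np

g0 = np.array([0.0, 0.0, -9.81])  # gravitational field in W coordinates (assumed value)

def imu_measurements(R_B, omega_B, v_dot, omega_b, a_b):
    """IMU outputs according to the bias model of Eq. (4):
    gyro:  omega = omega_B + omega_b
    accel: a = R_B^T (v_dot - g0) + a_b  (proper acceleration plus bias)."""
    omega = omega_B + omega_b
    a = R_B.T @ (v_dot - g0) + a_b
    return omega, a
```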


C. Planar scenes

Denote by $C_i$ a reference frame attached to the optical center of a camera at an instant i, and let $p^{C_i} \in \mathbb{R}^3$ denote the coordinates of a point in space expressed in $C_i$ frame coordinates. Pinhole cameras [8] are based on the perspective projection model, where a point $p^{C_i}$ is projected onto the image plane $I_i$ with the transformation

$$\mathbf{p}^{C_i} = \frac{1}{e_3^T p^{C_i}}\,K p^{C_i} \in \mathbb{P}^2, \quad (5)$$

where $K \in \mathbb{R}^{3\times 3}$ represents the intrinsic parameters of the camera, i.e., focal length, skew factor and principal point. Now, consider two images $I_i$, $I_{i+1}$ obtained at instants i, i+1, with $C_i$, $C_{i+1}$ denoting the camera frames at these respective instants. Suppose that a point $p_j$ lies on a plane Π of the 3D space, let $p_j^{C_i}$ denote the coordinates of $p_j$ in $C_i$, and let $n^{C_i}$ be the scaled normal vector to Π such that $n^{C_i T} p_j^{C_i} = 1$. The coordinate transformation between the two camera frames for any $p_j \in \Pi$ yields:

$$p_j^{C_{i+1}} = \big(R_{C_i}^{C_{i+1}} + p_{C_i}^{C_{i+1}} n^{C_i T}\big)\,p_j^{C_i} \triangleq H_i^{i+1} p_j^{C_i}, \quad (6)$$

where $H \in GL(3)$ is called the Euclidean homography transformation. Using the point projection onto an image (5) and the planar 3D coordinate transformation (6) yields

$$\lambda\,\mathbf{p}_j^{C_{i+1}} = K H_i^{i+1} K^{-1}\,\mathbf{p}_j^{C_i} \triangleq G_i^{i+1}\,\mathbf{p}_j^{C_i}, \quad (7)$$

where $\mathbf{p}_j^{C_i}$, $\mathbf{p}_j^{C_{i+1}}$ are the projections of the point $p_j$ on the images $I_i$ and $I_{i+1}$, respectively, and $G \in GL(3)$ denotes the projective homography transformation. Note that the relation between the points holds up to a scale factor λ. This scalar can be suppressed by defining the group action $w(G, \mathbf{p}) = \big[\frac{e_1^T G\mathbf{p}}{e_3^T G\mathbf{p}},\ \frac{e_2^T G\mathbf{p}}{e_3^T G\mathbf{p}},\ 1\big]^T : GL(3)\times\mathbb{P}^2 \to \mathbb{P}^2$. The following properties hold for the group action w:

$$w(I_3, \mathbf{p}) = \mathbf{p}, \qquad w\big(G_a, w(G_b, \mathbf{p})\big) = w(G_a G_b, \mathbf{p}), \qquad w(G, \cdot)^{-1} = w(G^{-1}, \cdot). \quad (8)$$

An image $I_i$ consists of a matrix of intensities, and the function $I_i(\mathbf{p}) : \mathbb{P}^2 \to \mathbb{R}$ maps the point $\mathbf{p}$ to its respective intensity in $I_i$. From (8) it is not difficult to verify that for two images $I_1$, $I_2$:

$$I_1(\mathbf{p}) = I_2\big(w(G_1^2, \mathbf{p})\big). \quad (9)$$

III. VISUAL METHOD

The employed visual method is based on the direct use of the pixel intensities and the image flow. We consider in this work a known planar target, so that we can focus only on the localization aspects. For localization in unknown piecewise planar scenes, the reader is referred to [7]. The visual method aims at obtaining the pose $(R_C, p_C)$. This problem can be stated as finding $\hat{x}_{C_i} = \arg\min_x |r(x)|^2$, where $r(x)$ is the vector of residuals $r_j\big(G(x)\big)$ defined from Eq. (9), i.e.,

$$r_j(G) = I_i\big(w(G_0^i, \mathbf{p}_j)\big) - I_0(\mathbf{p}_j), \qquad \forall\,\mathbf{p}_j \in I_0, \quad (10)$$

where $I_0$ is the image obtained at a reference frame $C_0$. Recall from Eqs. (6) and (7) that $G_0^i$ is related to the rotation and translational displacement of two different camera frames. Let $T_{C_i}$ denote the current camera pose and $T_{C_0}$ the pose of the reference frame. Then the parametrization $x_{C_i} \in \mathbb{R}^6$: $T_{C_i} = T(x_{C_i}) = \exp\big(\Xi(x_{C_i})\big)$ is used, and $H(x) : \mathbb{R}^6 \to GL(3)$ and $G(x) : \mathbb{R}^6 \to GL(3)$ can be defined from (6), (7).

It is not a simple task to solve this problem analytically, and the solution is obtained by nonlinear iterative minimization. The solution is based on the Taylor expansion and a second-order approximation [9]. With this second-order method, Hessians are never computed explicitly, which yields the following approximation of $r(x)$ around $x = 0$:

$$r(x) = r(0) + \tfrac{1}{2}\big(J(0) + J(x)\big)x + O^3, \quad (11)$$

where $J(x) = \frac{\partial r(a)}{\partial a}\big|_{a=x}$ denotes the Jacobian of r, and $O^3$ denotes elements of order higher than two. Note that this approximation depends on the unknown x, which is the global minimum of the minimization problem. The computation of $J(x)x$ is possible owing to the known reference image $I_0$ and the Lie group parametrization. Ultimately, the solution of the minimization is given iteratively by the linear least squares

$$\tfrac{1}{2}\big(J(x_{C_0}) + J(\hat{x}_{C_i|n-1})\big)\,\Delta x_{C_i|n} = -r(\hat{x}_{C_i|n-1}), \quad (12)$$

where $\Delta x_{C_i|n}$ is the computed increment that updates the pose via $T(\hat{x}_{C_i|n}) = \exp\big(\Xi(\Delta x_{C_i|n})\big)\,T(\hat{x}_{C_i|n-1})$, until the resulting $|\Delta x_{C_i|n}|$ is "small enough".
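For illustration, one iteration of (12) can be sketched as below, assuming the residual vector and the Jacobians are supplied by the image-warping machinery of Eqs. (9)-(10), which is not reproduced here; this is a sketch of the update structure, not the authors' implementation:

```python
import numpy as np
from scipy.linalg import expm

def Xi(x):
    """Twist matrix in se(3) from x in R^6 (first three entries: rotation)."""
    u, v = x[:3], x[3:]
    M = np.zeros((4, 4))
    M[:3, :3] = np.array([[0.0, -u[2], u[1]],
                          [u[2], 0.0, -u[0]],
                          [-u[1], u[0], 0.0]])
    M[:3, 3] = v
    return M

def esm_update(T_hat, r, J_ref, J_cur):
    """One iteration of Eq. (12): solve (1/2)(J(x_C0) + J(x_hat)) dx = -r(x_hat)
    in the least-squares sense, then update the pose on SE(3)."""
    dx, *_ = np.linalg.lstsq(0.5 * (J_ref + J_cur), -r, rcond=None)
    return expm(Xi(dx)) @ T_hat, dx

# The iteration stops once |dx| is "small enough", e.g.:
#   T_hat, dx = esm_update(T_hat, residual(T_hat), J_ref, jacobian(T_hat))
#   if np.linalg.norm(dx) < 1e-6: stop
```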

IV. ESTIMATOR DESIGN

From Eqs. (3) and (4), the dynamics writes:

$$\dot R_B = R_B S(\omega - \omega_b), \qquad \dot\omega_b = 0, \qquad \dot R_C^B = 0, \quad (13)$$
$$\dot p_B = v, \qquad \dot v = R_B(a - a_b) + g_0, \qquad \dot a_b = 0. \quad (14)$$

Assuming that $p_C^B$ can be neglected, the visual system and the IMU together provide the following measurements: $y = (p_C, a, R_C, \omega) = (p_B, a, R_B R_C^B, \omega)$. The problem addressed in this section is the state estimation of System (13)-(14). The observers

$$\dot{\hat R}_B = \hat R_B S(\omega - \hat\omega_b + \alpha_R), \qquad \dot{\hat\omega}_b = \alpha_\omega, \qquad \dot{\hat R}_C^B = \hat R_C^B S(\alpha_C), \quad (15)$$
$$\dot{\hat p}_B = \hat v + \alpha_p, \qquad \dot{\hat v} = \hat R_B(a - \hat a_b) + g_0 + \alpha_v, \qquad \dot{\hat a}_b = \alpha_a \quad (16)$$

are defined for the attitude and position estimation, respectively, where $\alpha_R$, $\alpha_\omega$, $\alpha_C$, $\alpha_p$, $\alpha_v$, $\alpha_a$ are innovation terms defined further on. Denote the estimation errors as

$$\tilde R_B \triangleq R_B \hat R_B^T, \qquad \tilde\omega_b \triangleq \omega_b - \hat\omega_b, \qquad \tilde R_C^B \triangleq R_C^B (\hat R_C^B)^T, \quad (17)$$
$$\tilde p_B \triangleq p_B - \hat p_B, \qquad \tilde v \triangleq v - \hat v, \qquad \tilde a_b \triangleq a_b - \hat a_b. \quad (18)$$

The estimation error dynamics is thus given by

$$\dot{\tilde R}_B = \tilde R_B S(-\hat R_B \tilde\omega_b - \hat R_B \alpha_R), \qquad \dot{\tilde\omega}_b = -\alpha_\omega, \qquad \dot{\tilde R}_C^B = \tilde R_C^B S(-\hat R_C^B \alpha_C), \quad (19)$$
$$\dot{\tilde p}_B = \tilde v - \alpha_p, \qquad \dot{\tilde v} = -R_B \tilde a_b - (I - \tilde R_B)\hat R_B(a - \hat a_b) - \alpha_v, \qquad \dot{\tilde a}_b = -\alpha_a. \quad (20)$$

The objective is to define the innovation terms so that $(\tilde R_B, \tilde\omega_b, \tilde R_C^B, \tilde p_B, \tilde v, \tilde a_b) = (I_3, 0, I_3, 0, 0, 0)$ is an asymptotically stable equilibrium of the error dynamics. Since the rotation dynamics is independent of the translation dynamics, the estimation of the former is addressed first.
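Between vision updates, the observers (15)-(16) must be integrated numerically from the IMU data. A possible discrete-time sketch, with the innovation terms supplied externally and the exponential map keeping the rotation estimates on SO(3), is:

```python
import numpy as np
from scipy.linalg import expm

def S(u):
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def propagate(state, meas, innov, g0, dt):
    """One Euler/exponential step of the observers (15)-(16).
    state: (R_B_hat, omega_b_hat, R_CB_hat, p_hat, v_hat, a_b_hat)
    meas:  (omega, a) as in Eq. (4); innov: (aR, aw, aC, ap, av, aa)."""
    R, wb, Rcb, p, v, ab = state
    omega, a = meas
    aR, aw, aC, ap, av, aa = innov
    R_next = R @ expm(S(omega - wb + aR) * dt)   # Eq. (15), stays on SO(3)
    wb_next = wb + aw * dt
    Rcb_next = Rcb @ expm(S(aC) * dt)
    p_next = p + (v + ap) * dt                   # Eq. (16)
    v_next = v + (R @ (a - ab) + g0 + av) * dt
    ab_next = ab + aa * dt
    return (R_next, wb_next, Rcb_next, p_next, v_next, ab_next)
```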


A. Rate gyro bias and C-to-IMU rotation estimation

Let $\hat R_C = \hat R_B \hat R_C^B$ denote the estimate of $R_C$, as deduced from the estimates $\hat R_B$ and $\hat R_C^B$. With this notation, the main theoretical result of this paper is given next.

Theorem 1: Let

$$\begin{cases} \alpha_R = k_1 \hat R_B^T\,\mathrm{vex}\big(P_a(\tilde R_C)\big) - \hat R_C^B \alpha_C, \\ \alpha_\omega = -k_2 \hat R_B^T\,\mathrm{vex}\big(P_a(\tilde R_C)\big), \\ \alpha_C = k_3 (\hat R_C^B)^T \hat R_B^T P_a(\tilde R_C)\,\hat R_B(\omega - \hat\omega_b), \end{cases} \quad (21)$$

with $k_1, k_2, k_3 > 0$ and $\tilde R_C = R_C \hat R_C^T$. Assume that the following condition is satisfied:

$$\exists\,\delta > 0 : |\dot\omega_B(t) \times \ddot\omega_B(t)| > \delta, \quad \forall\,t. \quad (22)$$

Then $(\tilde R_B, \tilde\omega_b, \tilde R_C^B) = (I_3, 0, I_3)$ is a locally exponentially stable equilibrium point of the error dynamics (19).

Proof: See Appendix.

This result calls for several remarks. Relation (22) is a "persistent excitation" condition related to the system's observability properties. Indeed, System (13) with $R_C$ and ω as measurements is not observable for every input $\omega_B$. Trivially, it can be verified that the state is not observable when $\omega_B = 0$. One can show that (22) is a sufficient condition for the system's uniform observability. The system is still uniformly observable under slightly weaker conditions, but it is not observable when $\dot\omega_B(t) \times \ddot\omega_B(t) = 0$.

The proposed observer can be viewed as an extension of the passive complementary filter on SO(3) proposed in [6]. More precisely, setting $k_3 = 0$ in (21) and assuming that $R_C^B = \hat R_C^B = I_3$, the observer reduces to an attitude and rate gyro bias estimator. In this special case, it has been shown in [6] that this estimator is semi-globally exponentially stable, independently of $\omega_B$. Although semi-global exponential stability seems more difficult to prove for the present observer, simulation results suggest that its domain of attraction is also quite large.

In practice, Condition (22) will not always be satisfied, and some care is necessary in the implementation of the proposed observer. A first possibility consists in using the full observer in a preliminary calibration step with persistent motion, thus obtaining a good estimate of $R_C^B$, and then setting $k_3 = 0$ in order to use the observer, as explained above, as an attitude and rate gyro bias estimator. A second possibility consists in using the stability condition (22) so as to tune the gain $k_3$ as a function of the level of "motion excitation". Basically, this gain, which is associated with the estimation of the C-to-IMU rotation, should be non-zero only when the quantity $|\dot\omega_B(t) \times \ddot\omega_B(t)|$ is significantly larger than zero, so as to avoid a possible drift of $\hat R_C^B$ in case of weak motion excitation and measurement noise. Although this quantity is not directly measured, it can be estimated from the measurement of ω. This is detailed in Section V-A.
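For illustration, the innovation terms (21) translate directly into the following sketch (function and variable names are ours):

```python
import numpy as np

def Pa(A):
    return 0.5 * (A - A.T)

def vex(A):
    return np.array([A[2, 1], A[0, 2], A[1, 0]])

def attitude_innovations(R_C, R_B_hat, R_CB_hat, omega, omega_b_hat, k1, k2, k3):
    """Innovation terms of Eq. (21). R_C is the vision measurement;
    R_C_hat = R_B_hat @ R_CB_hat and R_C_tilde = R_C @ R_C_hat.T."""
    R_C_tilde = R_C @ (R_B_hat @ R_CB_hat).T
    e = vex(Pa(R_C_tilde))
    aC = k3 * R_CB_hat.T @ R_B_hat.T @ Pa(R_C_tilde) @ R_B_hat @ (omega - omega_b_hat)
    aR = k1 * R_B_hat.T @ e - R_CB_hat @ aC
    aw = -k2 * R_B_hat.T @ e
    return aR, aw, aC
```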

B. Accelerometer bias estimation

The authors proposed in a previous work [5] a position and accelerometer bias observer based on the dynamics (20):

Lemma 1 (Position and accelerometer bias observer [5]): Assume that $\hat R_B = R_B$ and $\hat\omega_b = \omega_b$. Let

$$\alpha_p = k_4 \tilde p_B, \qquad \alpha_v = k_5 \tilde p_B, \qquad \alpha_a = -k_6\big(I_3 + \tfrac{1}{k_4}S(\omega_B)\big)R_B^T \tilde p_B, \quad (23)$$

with $k_4, k_5, k_6 > 0$ such that $k_6 < k_4 k_5$. Then $(\tilde p_B, \tilde v, \tilde a_b) = (0, 0, 0)$ is a globally exponentially stable equilibrium point of the error dynamics (20).

Corollary 1: Consider the observers (15)-(16), i.e.

$$\dot{\hat R}_B = \hat R_B S(\omega - \hat\omega_b + \alpha_R), \qquad \dot{\hat\omega}_b = \alpha_\omega, \qquad \dot{\hat R}_C^B = \hat R_C^B S(\alpha_C), \quad (24)$$
$$\dot{\hat p}_B = \hat v + \alpha_p, \qquad \dot{\hat v} = \hat R_B(a - \hat a_b) + g_0 + \alpha_v, \qquad \dot{\hat a}_b = \alpha_a, \quad (25)$$

where

$$\alpha_p = k_4 \tilde p_B, \qquad \alpha_v = k_5 \tilde p_B, \qquad \alpha_a = -k_6\big(I_3 + \tfrac{1}{k_4}S(\omega - \hat\omega_b)\big)\hat R_B^T \tilde p_B, \quad (26)$$

and the innovation terms $\alpha_R$, $\alpha_\omega$, $\alpha_C$ are given by (21), with $k_1, \cdots, k_6 > 0$ and $k_6 < k_4 k_5$. Assume that condition (22) is satisfied. Then $(\tilde R_B, \tilde\omega_b, \tilde R_C^B, \tilde p_B, \tilde v, \tilde a_b) = (I_3, 0, I_3, 0, 0, 0)$ is a locally exponentially stable equilibrium point of the estimation error dynamics (19)-(20).

The proof is similar to the proof of [5, Corollary 2]. When $p_C^B$ is not negligible and an estimate $\hat p_C^B$ of this term is available, the term $\tilde p_B$ in (26) should be replaced either by $p_C - R_C(\hat R_C^B)^T\hat p_C^B - \hat p_B$ or by $p_C - \hat R_B \hat p_C^B - \hat p_B$.
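A corresponding sketch of the translational innovation terms (26), including the lever-arm correction discussed above (pass p_CB_hat = 0 when the C-to-IMU translation is neglected; the names are again ours):

```python
import numpy as np

def S(u):
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def translation_innovations(p_C, p_hat, R_B_hat, p_CB_hat,
                            omega, omega_b_hat, k4, k5, k6):
    """Innovation terms of Eq. (26), with p_tilde built from the vision
    measurement p_C as p_C - R_B_hat @ p_CB_hat - p_hat."""
    p_tilde = p_C - R_B_hat @ p_CB_hat - p_hat
    ap = k4 * p_tilde
    av = k5 * p_tilde
    aa = -k6 * (np.eye(3) + S(omega - omega_b_hat) / k4) @ R_B_hat.T @ p_tilde
    return ap, av, aa
```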

V. EXPERIMENTAL RESULTS

A. System setup

The proposed method is evaluated for the estimation of pose, IMU bias and C-to-IMU rotation for an inertial-visual sensor. We make use of the sensor depicted in Fig. 2(a), consisting of an xSens MTi-G IMU working at a frequency of 200 [Hz] and an AVT Stingray 125B camera that provides 20 images of 800 × 600 [pixel] resolution per second. The camera uses a Theia SY125M wide-angle lens, and a previous calibration of the intrinsic parameters resulted in focal lengths (448.85, 450.26) [pixel] and principal point (394.30, 292.82) [pixel]. A version of Fig. 2(b) is printed on a 376 × 282 [mm] sheet of paper to serve as a reference for the visual system. The target is placed over a surface parallel to the ground, and this configuration allows a direct calculation of the scaled normal vector $n^{C_0}$ and of the reference frame $C_0$, considering the reference image with 320 × 240 [pixel] resolution and the camera's intrinsic parameters.
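For reference, the calibration values reported above assemble into the intrinsic matrix K of Eq. (5) as follows (zero skew is assumed in this sketch):

```python
import numpy as np

fx, fy = 448.85, 450.26      # focal lengths [pixel]
cx, cy = 394.30, 292.82      # principal point [pixel]
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(p_Ci):
    """Perspective projection of Eq. (5): p = K p_Ci / (e3^T p_Ci)."""
    return (K @ p_Ci) / p_Ci[2]
```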


Fig. 2. Inertial-visual sensor used in the experiment and reference image used for visual feedback of pose. (a) Inertial-visual sensor. (b) Reference image.

Fig. 4. Estimated trajectory in 3D space. Position trajectory in gray, and components 1, 2, 3 in blue, green, and red for the W, B, and C frames.

Fig. 3. Evaluation of the filter for identification of the observability condition. From top to bottom: estimated angular acceleration $\hat{\dot\omega}_B$ [rad/s²], angular jerk $\hat{\ddot\omega}_B$ [rad/s³], the evaluation of the observability condition $|\hat{\dot\omega}_B \times \hat{\ddot\omega}_B|$, and the resulting $k_3$. The components 1, 2, 3 of $\hat{\dot\omega}_B$ and $\hat{\ddot\omega}_B$ are presented in blue, green and red.

Concerning the observability condition (22) obtained for the observer, we use a variable gain $k_3 = f(\delta(t))$, with $\delta(t) = |\dot\omega_B(t) \times \ddot\omega_B(t)|$. This choice applies the C-to-IMU innovation only if the observability condition is satisfied. However, the term $|\dot\omega_B(t) \times \ddot\omega_B(t)|$ cannot be directly measured. Hence, a secondary filter is designed in order to identify this condition from the gyro measurements. We consider an approximate model of constant angular jerk, i.e. $\dddot\omega_B = 0$, and a linear Kalman filter is used with $\hat\omega_B$, $\hat{\dot\omega}_B$ and $\hat{\ddot\omega}_B$ as states, and ω as measurement. The success of this filter relies on the fact that the bias $\omega_b$ is constant over short periods of time, and therefore should not influence the evaluation of $\hat{\dot\omega}_B$ and $\hat{\ddot\omega}_B$. Also, the goal is not to precisely estimate these variables, but to identify when $|\dot\omega_B \times \ddot\omega_B| > \delta$ for δ > 0. We use $k_3(\delta) = \bar{k}_3\big(1 + e^{-\frac{5}{3}(\delta - 10)}\big)^{-1}$.
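A per-axis sketch of this secondary filter and of the gain schedule is given below; the noise covariances are illustrative assumptions rather than the values used in the experiments:

```python
import numpy as np

dt = 1.0 / 200.0                      # gyro period of the MTi-G
F = np.array([[1.0, dt, 0.5 * dt**2], # constant angular jerk model:
              [0.0, 1.0, dt],         # state x = (w, w_dot, w_ddot)
              [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0]])       # only w is measured (gyro)
Q = np.diag([1e-6, 1e-4, 1e-2])       # assumed process noise
Rn = np.array([[1e-4]])               # assumed measurement noise

def kf_step(x, P, w_meas):
    """Standard Kalman predict/update for one gyro axis."""
    x, P = F @ x, F @ P @ F.T + Q
    y = np.array([w_meas]) - H @ x
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rn)
    return x + K @ y, (np.eye(3) - K @ H) @ P

def k3_gain(w_dot, w_ddot, k3_bar=1.0):
    """Gain schedule k3(delta) = k3_bar (1 + exp(-5/3 (delta - 10)))^-1."""
    delta = np.linalg.norm(np.cross(w_dot, w_ddot))
    return k3_bar / (1.0 + np.exp(-5.0 / 3.0 * (delta - 10.0)))
```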

B. Results and discussion

To evaluate the motion classification, the sensor is placed on a tripod and three different angular motions are performed. The first two are made around single axes, from 0.4 to 3 [s] and from 5.5 to 8 [s]. A third motion, satisfying the observability condition, is made from 9 to 13 [s]. The result obtained using the estimator is depicted in Fig. 3, where the angular acceleration $\hat{\dot\omega}_B$ [rad/s²], the angular jerk $\hat{\ddot\omega}_B$ [rad/s³], $|\hat{\dot\omega}_B \times \hat{\ddot\omega}_B|$ and the resulting $k_3(\delta)$ are displayed from top to bottom. It is visually clear that the estimated angular acceleration and jerk are mostly parallel for the first two motions. For the third motion, the angular acceleration and jerk are not parallel and $|\hat{\dot\omega}_B \times \hat{\ddot\omega}_B|$ is large. Note that at the end of this motion, around 13 [s], $|\hat{\dot\omega}_B \times \hat{\ddot\omega}_B|$ slowly decreases, but the gain function decreases faster. This behavior is essential for ignoring slow motions that would not contribute to the estimation process given the noise level of the gyro measurements.

Once able to detect when the observability condition is satisfied, we performed a hand-held experiment for the estimation of pose, IMU bias and C-to-IMU rotation. An initial guess for the biases is obtained after leaving the IMU on the same surface for a few seconds. Measuring the C-to-IMU distance, we obtain an approximate $p_C^B = (8, 0.5, 3)$ [cm], and the initialization of the frame rotation $R_C^B$ in Euler angles is performed manually, resulting in $\psi = (-90, 0, -90)$ [°]. The observer of Corollary 1 is employed using $k_1 = 3.33$, $k_2 = 1$, $k_4 = 7.85$, $k_5 = 15.14$, $k_6 = 4.11$, and the proposed variable gain function $k_3(\delta)$ with $\bar{k}_3 = 1$. This gain tuning follows the procedure described in [5] for settling times of 1, 9, 0.6, 1.2, and 8.5 [s] for $\tilde R_B$, $\tilde\omega_b$, $\tilde p_B$, $\tilde v$, $\tilde a_b$, respectively. Results obtained for this experiment are depicted in Figs. 4, 5, and 6. The supplemental multimedia material presents the image sequence together with the evolution of the trajectory. This supplemental material also shows a sample of the same sequence without C-to-IMU self-calibration, leading to the divergence of the visual algorithm.

Fig. 4 presents the estimated position trajectory during the experiment. The resulting trajectory presents regions with richer motion, which correspond to the self-calibration itself, and transitions between these regions. One can identify three regions with richer motion. Fig. 5 depicts four sample images obtained from the camera at 1.5, 9, 20, and 25 [s], where the red lines represent the last target measured by the visual system, yellow the initialization provided by the nonlinear observer, and cyan the current measurement. From these images, one can see the improvement yielded by the data fusion and self-calibration.



Fig. 5. Sample images obtained at 1.5, 9, 20, and 25 [s]. Red lines represent the previously measured target, yellow the visual method's initialization using the pose and C-to-IMU rotation estimates, and cyan the current measurement.

Firstly, at 1.5 [s], the previous measurement presents a medium-sized distance w.r.t. the current one, and the estimates obtained improve the visual method's initialization; however, some distance can still be noticed, mainly due to the initial IMU bias and C-to-IMU rotation errors. The other samples are obtained later in the experiment. Despite faster motion of the camera, the initialization provided by the estimator is closer to the solution achieved by the visual method. This corroborates the improvement obtained with the estimation of IMU bias and C-to-IMU rotation. Fig. 6 shows, from top to bottom, the body orientation and the C-to-IMU rotation evolution from the initial state, both in Euler angles [°], the gyro bias [rad/s], the accelerometer bias [m/s²], the gyro measurements [rad/s], and the innovation gain $k_3(\delta)$. From the evolution of the innovation gain, three periods can be identified. These periods correspond to larger angular velocities and to the regions with richer motion depicted in Fig. 4. Besides, note that during the self-calibration, from 1 to 5 [s], from 9 to 14 [s] and from 16 to 22 [s], the C-to-IMU rotation components in Euler angles move from an initial estimate of $\psi = (-90, 0, -90)$ [°] towards a steady state around $\psi = (-89.5, -2, -88)$ [°].

VI. CONCLUSION AND FUTURE WORK

This article proposes a new method for estimating pose, IMU bias and C-to-IMU rotation. An exponentially stable nonlinear observer is developed, together with its stability proof obtained under appropriate observability conditions. The satisfaction of this observability condition can be identified from gyroscope measurements through a method also described in this paper. Experimental results using an inertial-visual sensor support the effectiveness of the proposed method. Future work will consist in extending the approach to the estimation of other parameters, such as the C-to-IMU translational displacement and the scaled normal vector of the visual target.


Fig. 6. Resulting estimates of pose, IMU bias and C-to-IMU rotation. From top to bottom: estimates of the B orientation and of the C-to-IMU rotation from the initial state, both in Euler angles [°], gyro bias [rad/s], accelerometer bias [m/s²], gyro measurements [rad/s], and gain $k_3(t)$. The components 1, 2, and 3 of $\hat\Phi$, $\tilde\psi$, $\hat\omega_b$, $\hat a_b$ and ω are presented in blue, green and red.

ACKNOWLEDGMENTS

The authors thank Maxime Meilland and Daniele Pucci for the useful discussions during the development of this project.

REFERENCES

[1] P. Corke, J. Lobo, and J. Dias, "An introduction to inertial and visual sensing," Intl. J. of Robot. Res., vol. 26, pp. 519-535, 2007.
[2] J. Kelly and G. S. Sukhatme, "Visual-inertial sensor fusion: Localization, mapping and sensor-to-sensor self-calibration," Intl. J. of Robot. Res., vol. 30, pp. 56-79, 2011.
[3] J. Lobo and J. Dias, "Relative pose calibration between visual and inertial sensors," Intl. J. of Robot. Res., vol. 26, pp. 561-575, 2007.
[4] F. Mirzaei and S. Roumeliotis, "A Kalman filter-based algorithm for IMU-camera calibration: Observability analysis and performance evaluation," IEEE Trans. on Robot., vol. 24, pp. 1143-1156, 2008.
[5] G. G. Scandaroli and P. Morin, "Nonlinear filter design for pose and IMU bias estimation," in IEEE Intl. Conf. on Robot. and Autom., 2011.
[6] R. Mahony, T. Hamel, and J.-M. Pflimlin, "Nonlinear complementary filters on the special orthogonal group," IEEE Trans. on Autom. Control, vol. 53, pp. 1203-1218, 2008.
[7] G. Silveira, E. Malis, and P. Rives, "An efficient direct approach to visual SLAM," IEEE Trans. on Robot., vol. 24, pp. 969-979, 2008.
[8] Y. Ma, S. Soatto, J. Kosecka, and S. S. Sastry, An Invitation to 3-D Vision: From Images to Geometric Models. Springer-Verlag, 2003.
[9] E. Malis, "Improving vision-based control using efficient second-order minimization techniques," in IEEE Intl. Conf. on Robot. and Autom., 2004, pp. 1843-1848.
[10] H. K. Khalil, Nonlinear Systems, 3rd ed. Prentice Hall, 2002.
[11] M.-D. Hua, "Attitude estimation for accelerated vehicles using GPS/INS measurements," Control Eng. Pract., vol. 18, pp. 723-732, 2010.

APPENDIX: PROOF OF THEOREM 1

The error for the C frame orientation is defined as $\tilde R_C = R_C \hat R_C^T = R_B R_C^B (\hat R_C^B)^T \hat R_B^T = \tilde R_B \hat R_B \tilde R_C^B \hat R_B^T$. We remark that if $(\tilde R_C, \tilde\omega_b, \tilde R_C^B) = (I_3, 0, I_3)$ is a stable equilibrium point, then $(\tilde R_B, \tilde\omega_b, \tilde R_C^B) = (I_3, 0, I_3)$ is also a stable equilibrium point of the dynamics (19). Let $\tilde\Theta \in \mathbb{R}^3$: $\tilde R_C \approx I_3 + S(\tilde\Theta)$, $\tilde\Phi \in \mathbb{R}^3$: $\tilde R_B \approx I_3 + S(\tilde\Phi)$, $\tilde\psi \in \mathbb{R}^3$: $\tilde R_C^B \approx I_3 + S(\tilde\psi)$, and $\omega_B \approx \omega - \hat\omega_b$. Using these approximations and the expression $\tilde R_C = \tilde R_B \hat R_B \tilde R_C^B \hat R_B^T$ yields, around the equilibrium, $I_3 + S(\tilde\Theta) \approx (I_3 + S(\tilde\Phi))(I_3 + S(R_B\tilde\psi))$. Thus,

$$S(\dot{\tilde\Theta}) \approx S(\dot{\tilde\Phi}) + S\big(R_B S(\omega_B)\tilde\psi + R_B\dot{\tilde\psi}\big).$$

Using Eqs. (2), (17), (21) and neglecting the higher-order terms, one can write

$$\dot{\tilde\Theta} \approx -k_1\tilde\Theta - R_B\tilde\omega_b + R_B R_C^B \alpha_C + R_B S(\omega_B)\tilde\psi - R_B \hat R_C^B \alpha_C,$$

and the linearized system for $(\tilde R_C, \tilde R_C^B, \tilde\omega_b)$ writes

$$\begin{cases} \dot{\tilde\Theta} = -k_1\tilde\Theta - R_B\tilde\omega_b + R_B S(\tilde\psi)\omega_B, \\ \dot{\tilde\omega}_b = k_2 R_B^T\tilde\Theta, \\ \dot{\tilde\psi} = k_3 S(R_B^T\tilde\Theta)\omega_B. \end{cases} \quad (27)$$

Consider the variable change $\tilde\theta = R_B^T\tilde\Theta$; in the coordinates $(\tilde\theta, \tilde\omega_b, \tilde\psi)$, System (27) is thus given by

$$\begin{cases} \dot{\tilde\theta} = -\big(k_1 I_3 + S(\omega_B)\big)\tilde\theta - \tilde\omega_b - S(\omega_B)\tilde\psi, \\ \dot{\tilde\omega}_b = k_2\tilde\theta, \\ \dot{\tilde\psi} = -k_3 S(\omega_B)\tilde\theta. \end{cases} \quad (28)$$

Define the Lyapunov candidate function

$$L_d = \tfrac{1}{2}|\tilde\theta|^2 + \tfrac{1}{2k_2}|\tilde\omega_b|^2 + \tfrac{1}{2k_3}|\tilde\psi|^2. \quad (29)$$

Then, along the solutions of (28),

$$\dot L_d = -k_1|\tilde\theta|^2 - \tilde\theta^T\tilde\omega_b - \tilde\theta^T S(\omega_B)\tilde\psi + \tilde\omega_b^T\tilde\theta + \tilde\theta^T S(\omega_B)\tilde\psi = -k_1|\tilde\theta|^2 \le 0. \quad (30)$$

From this point, using Barbalat's Lemma [10, p. 323] and the observability condition (22), it is not difficult to show that $(\tilde\theta, \tilde\omega_b, \tilde\psi) = 0$ is an asymptotically stable equilibrium of the linearized system. However, this is not sufficient to prove local asymptotic stability of the original system. We prove below, via the design of a strict Lyapunov function, that the linearized system is uniformly exponentially stable. Then, local exponential stability of the original nonlinear system follows from [10, p. 161]. We proceed by defining the auxiliary variable γ and its derivative

$$\gamma = \tilde\omega_b + S(\omega_B)\tilde\psi, \qquad \dot\gamma = \big(k_2 I_3 - k_3 S(\omega_B)^2\big)\tilde\theta + S(\dot\omega_B)\tilde\psi, \quad (31)$$

and the secondary function

$$L_s = \varepsilon\,\tilde\theta^T\gamma - \varepsilon^2\,\tilde\psi^T S(\dot\omega_B)\gamma - \varepsilon^3\,\tilde\psi^T S(\ddot\omega_B)S(\dot\omega_B)\tilde\psi, \quad (32)$$

where $\varepsilon \in (0, 1)$. The derivative of each term results in

$$\tfrac{d}{dt}(\tilde\theta^T\gamma) = -|\gamma|^2 + \tilde\theta^T\big(W_1\tilde\theta - W_2\gamma + S(\dot\omega_B)\tilde\psi\big), \quad (33a)$$
$$\tfrac{d}{dt}(\tilde\psi^T S(\dot\omega_B)\gamma) = k_3\tilde\theta^T S(\omega_B)S(\dot\omega_B)\gamma + \tilde\psi^T S(\ddot\omega_B)\gamma + \tilde\psi^T S(\dot\omega_B)W_1\tilde\theta + \tilde\psi^T S(\dot\omega_B)^2\tilde\psi, \quad (33b)$$
$$\tfrac{d}{dt}(\tilde\psi^T S(\ddot\omega_B)S(\dot\omega_B)\tilde\psi) = \tilde\psi^T\big(S(\ddot\omega_B)^2 + S(\dddot\omega_B)S(\dot\omega_B)\big)\tilde\psi - k_3\tilde\psi^T\big(S(\ddot\omega_B)S(\dot\omega_B) + S(\dot\omega_B)S(\ddot\omega_B)\big)S(\omega_B)\tilde\theta, \quad (33c)$$

where $W_1 = k_2 I_3 - k_3 S(\omega_B)^2$ and $W_2 = k_1 I_3 + S(\omega_B)$. From Assumption 1, $|W_1\tilde\theta| \le k_0'|\tilde\theta|$ and $|W_2\tilde\theta| \le k_0''|\tilde\theta|$, with $k_0' = k_2 + k_3 c_\omega^2$ and $k_0'' = k_1 + c_\omega$. In order to continue the analysis, note the inequalities

$$|\delta u^T v| = |\delta^{1/4}u^T\,\delta^{3/4}v| \le \tfrac{1}{2}\big(\sqrt{\delta}\,|u|^2 + \delta^{3/2}|v|^2\big), \quad (34a)$$
$$k_a|\alpha_1 \times u|^2 + k_b|\alpha_2 \times u|^2 \ge \tfrac{k_a k_b}{k_a + k_b}|u|^2|\alpha_1 \times \alpha_2|^2, \quad (34b)$$

where $\delta \in (0, 1)$, $u, v \in \mathbb{R}^3$, $\alpha_1, \alpha_2 \in \mathbb{R}^3$ with $|\alpha_1| = |\alpha_2| = 1$, and $k_a, k_b > 0$. Eq. (34a) follows from the triangular inequality, and the proof of (34b) can be found in [11]. Denoting by $O(\varepsilon^k)$ any term locally bounded by $|\varepsilon|^k$ in the neighborhood of $\varepsilon = 0$, one can write from (33) and (34a):

$$\varepsilon\tfrac{d}{dt}(\tilde\theta^T\gamma) \le \tfrac{1}{2}(1 + O(\varepsilon))|\tilde\theta|^2 - \varepsilon(1 + O(\varepsilon))|\gamma|^2 + \tfrac{1}{2}\varepsilon^2|S(\dot\omega_B)\tilde\psi|^2, \quad (35a)$$
$$-\varepsilon^2\tfrac{d}{dt}(\tilde\psi^T S(\dot\omega_B)\gamma) \le O(\varepsilon)|\tilde\theta|^2 + \tfrac{1}{2}\varepsilon(1 + O(\varepsilon^2))|\gamma|^2 - \varepsilon^2(1 + O(\varepsilon))|S(\dot\omega_B)\tilde\psi|^2 + \tfrac{1}{2}\varepsilon^3|S(\ddot\omega_B)\tilde\psi|^2, \quad (35b)$$
$$-\varepsilon^3\tfrac{d}{dt}(\tilde\psi^T S(\ddot\omega_B)S(\dot\omega_B)\tilde\psi) \le O(\varepsilon^2)|\tilde\theta|^2 + \varepsilon^{7/2}|S(\dddot\omega_B)\tilde\psi|^2 - \varepsilon^3(1 + O(\varepsilon))|S(\ddot\omega_B)\tilde\psi|^2 + O(\varepsilon^{5/2})|S(\dot\omega_B)\tilde\psi|^2. \quad (35c)$$

Note the use of the following majorations: $\varepsilon\tilde\theta^T S(\dot\omega_B)\tilde\psi \le \tfrac{1}{2}\big(|\tilde\theta|^2 + \varepsilon^2|S(\dot\omega_B)\tilde\psi|^2\big)$ in (35a), and $\varepsilon^3\tilde\psi^T S(\dddot\omega_B)S(\dot\omega_B)\tilde\psi \le \tfrac{1}{2}\big(\varepsilon^{5/2}|S(\dot\omega_B)\tilde\psi|^2 + \varepsilon^{7/2}|S(\dddot\omega_B)\tilde\psi|^2\big)$ in (35c), where (34a) still holds with the right power terms. From (35):

$$\dot L_s \le \tfrac{1}{2}(1 + O(\varepsilon))|\tilde\theta|^2 - \tfrac{1}{2}\varepsilon(1 + O(\varepsilon))|\gamma|^2 - \tfrac{1}{2}\varepsilon^2(1 + O(\sqrt{\varepsilon}))|\dot\omega_B \times \tilde\psi|^2 - \tfrac{1}{2}\varepsilon^3(1 + O(\varepsilon))|\ddot\omega_B \times \tilde\psi|^2 + \varepsilon^{7/2}|\dddot\omega_B \times \tilde\psi|^2. \quad (36)$$

Using (34b) one obtains

$$\dot L_s \le \tfrac{1}{2}(1 + O(\varepsilon))|\tilde\theta|^2 - \tfrac{1}{2}\varepsilon(1 + O(\varepsilon))|\gamma|^2 - \tfrac{1}{2}\varepsilon^3(1 + O(\sqrt{\varepsilon}))\Big(\tfrac{|\dot\omega_B \times \ddot\omega_B|^2}{(c_{\dot\omega} + c_{\ddot\omega})^2} - \sqrt{\varepsilon}\,c_{\dddot\omega}^2\Big)|\tilde\psi|^2, \quad (37)$$

where the majoration of the last line is obtained by remarking that $\tfrac{|\dot\omega_B||\ddot\omega_B|}{|\dot\omega_B| + |\ddot\omega_B|} \to 0$ when $|\dot\omega_B|, |\ddot\omega_B| \to 0$ and, from Assumption 1, $c_{\dot\omega} + c_{\ddot\omega} > |\dot\omega_B| + |\ddot\omega_B|$. If $\exists\,\delta > 0 : |\dot\omega_B(t) \times \ddot\omega_B(t)|^2 > \delta$, it is direct to verify that for $\varepsilon > 0$ small enough,

$$\dot L_s \le \tfrac{1}{2}\big((1 + O(\varepsilon))|\tilde\theta|^2 - \varepsilon(1 + O(\varepsilon))|\gamma|^2 - \varepsilon^3(1 + O(\sqrt{\varepsilon}))\delta''|\tilde\psi|^2\big), \quad (38)$$

with $\delta'' > 0$. Finally, defining $L = L_d + k_1 L_s$, one verifies from (29), (30), (32), and (38) that there exists $\varepsilon_0 > 0$ such that for any $\varepsilon \in (0, \varepsilon_0)$,

$$\dot L \le -\eta(\varepsilon)L, \qquad \eta(\varepsilon) > 0,$$

since L is a positive definite function for $\varepsilon > 0$ small enough. This concludes the proof of uniform exponential stability of the origin $(\tilde\theta, \tilde\omega_b, \tilde\psi) = 0$ for System (28), and consequently for the linearized System (27). As the origin of (27) is uniformly exponentially stable, $(\tilde R_C, \tilde\omega_b, \tilde R_C^B) = (I_3, 0, I_3)$ is a locally exponentially stable equilibrium point of the nonlinear system, and so is $(\tilde R_B, \tilde\omega_b, \tilde R_C^B) = (I_3, 0, I_3)$ for the dynamics (19).
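The exponential convergence established above can be illustrated by integrating the linearized system (28) under a persistently exciting angular velocity; the gains and the excitation profile below are illustrative choices of ours:

```python
import numpy as np
from scipy.integrate import solve_ivp

def S(u):
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

k1, k2, k3 = 3.33, 1.0, 1.0

def omega_B(t):
    # Excitation for which w_dot x w_ddot is generically nonzero (condition (22)).
    return np.array([np.sin(t), np.cos(2.0 * t), np.sin(3.0 * t)])

def f(t, z):
    th, wb, ps = z[:3], z[3:6], z[6:]   # theta, omega_b, psi errors, Eq. (28)
    w = omega_B(t)
    dth = -(k1 * np.eye(3) + S(w)) @ th - wb - S(w) @ ps
    dwb = k2 * th
    dps = -k3 * S(w) @ th
    return np.concatenate([dth, dwb, dps])

sol = solve_ivp(f, (0.0, 60.0), 0.1 * np.ones(9), max_step=0.01)
print(np.linalg.norm(sol.y[:, -1]))  # decays toward zero under the excitation
```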

