A Closed Form Expression for the Uncertainty in Odometry Position Estimate of an Autonomous Vehicle

Josep M. Mirats Tur, José Luis Gordillo, Carlos Albores

Abstract — Using internal and external sensors to provide position estimates in a two-dimensional space is necessary to solve the localization and navigation problems for a robot or an autonomous vehicle. Usually, a unique source of position information is not enough, so researchers try to fuse data from different sensors using several methods, for example Kalman filtering. Those methods need an estimate of the uncertainty in the position estimates obtained from the sensory system. This uncertainty is expressed by a covariance matrix, which is usually obtained from experimental data assuming, by the nature of this matrix, general and unconstrained motion. We propose in this paper a closed-form expression for the uncertainty in the odometry position estimate of a mobile vehicle, using a covariance matrix whose form is derived from the kinematic model. We then particularize it for a non-holonomic, Ackerman-steered autonomous vehicle. Its kinematic model relates the two measures obtained from internal sensors: the velocity, translated into the instantaneous displacement, and the instantaneous steering angle. The proposed method is validated experimentally and compared against Kalman filtering.

Index Terms — robot positioning uncertainty, localization, navigation, non-holonomic constraints.

Manuscript first received June 14, 2004; revised Nov. 2, 2004; accepted Feb. 5, 2005. This work has been partially supported by the Consejo Nacional de Ciencia y Tecnología (CONACyT) under grant 35396, the Franco-Mexican Laboratory for Informatics (LaFMI), and the Spanish Superior Council for Scientific Research (CSIC). Josep M. Mirats Tur is currently a researcher at the Institut de Robòtica (UPC-CSIC), Parc Tecnològic de Barcelona, Edifici U, C. Llorens i Artigas, 4-6, 2a pl., 08028 Barcelona, Spain (phone: +34 93 4015791; fax: +34 93 4015750; e-mail: [email protected]). José L. Gordillo is an associate professor at the Center for Intelligent Systems (CSI), Tecnológico de Monterrey, 5º piso torre sur CETEC, Av. Eugenio Garza Sada 2501, 64849 Monterrey N.L., México (e-mail: [email protected]). Carlos Albores Borja is a PhD student at the CSI of the Tecnológico de Monterrey (e-mail: [email protected]).

I. INTRODUCTION

Mobile robots and autonomous vehicles (AVs) can assist or even substitute humans in tedious, monotonous, and dangerous activities, such as mining, civil engineering construction, rescue in catastrophes, or agricultural labors. To reliably perform a task while navigating in the real world it is necessary to localize those vehicles, that is, to know their position as well as its associated uncertainty. Usually, to accurately provide a position estimate, a unique source of position

information is not enough, so researchers try to fuse data from different sources. For instance, [1]-[3] perform indoor localization using data from different sensors, and [4]-[6] tackle self-localization and navigation; for outdoor applications, [7] studies a Kalman-based sensor data fusion algorithm for a robot campus tour guide, while [8] and [9] use data fusion for agriculture and mining respectively. When fusing information, Kalman filtering is frequently used, with the problem of determining the covariance matrices associated with the position uncertainty, since the errors of the sensory system affect the position estimate [10], [11]. These matrices are determined experimentally from data, and the algorithms used are very sensitive to this parameter determination. Furthermore, the computation of those matrices assumes general and unconstrained motion, ignoring phenomena such as movement constrained by a non-holonomic architecture, which leads to poor position uncertainty estimation. Different methods have been proposed in the literature to estimate the position uncertainty. For example, [12] employed a min/max error bound approach, and [13] used a scalar as position uncertainty measure, without reference to the orientation error. Uncertainty expressed by a covariance matrix is used in more recent works such as [14]-[17]. In [18] a method for determining this covariance matrix is given for the special case of circular arc motion with constant radius of curvature. In [19] the odometry error is modeled for a synchronous drive system depending on parameters characterizing both systematic and non-systematic components. None of these methods gives a closed and general way to compute the robot's position uncertainty. Their main limitation is, again, to consider general and unconstrained motion, ignoring possible motion constraints due to the architecture of the vehicle.
This paper proposes a method to compute a closed form for the uncertainty in the position estimate obtained from internal sensors. The position uncertainty is expressed using a covariance matrix and determined from the position estimation equations given by the kinematic model. Thus, the proposed method is general for any platform and set of sensors, taking directly into account the physical structure of the vehicle, which is translated into the control variables being manipulated and sensed. The next sections are devoted to the general description of the method and its application to a

particular AV used for mining operations. For this vehicle, a brief description of its kinematic model and the mathematical development required to derive the closed form for the covariance matrix representing the position estimate uncertainty are given. Finally, the proposed method is compared against the standard Extended Kalman Filter (EKF) in the computation of the uncertainty associated with the odometry position estimate.

II. POSITION ESTIMATE UNCERTAINTY

Consider a mobile robot or AV for which a kinematic model M(x, y, θ, r, s, t) is given. Model M is generally non-linear in x, y, θ (position of the vehicle on the plane), r (physical parameters of the vehicle), vector s (internal sensor measurements), and t (discrete time instants). We are concerned with the problem of determining the uncertainty associated with the robot's position computed from its internal sensors using the kinematic model. Suppose that at time t-1 (we write t-1 for t-Δt, t-2 for t-2Δt, and so on) the position of the vehicle and its associated uncertainty, P(t-1) = [x(t-1), y(t-1), θ(t-1)], Cov[P(t-1)], are known. At time t, after the vehicle has performed a certain movement, and the sensors on the robot have noisily measured it, the new position can be obtained using P(t) = P(t-1) + ΔP(t). The position increment is computed from the kinematic model and the current sensor measurements. Temporal variations of the physical parameters of the robot, and different sensor measurements for Δx, Δy and Δθ, are allowed so as to maintain generality in the model. Let f, g, h be general functions that model these increments for the components x, y, θ, respectively:

\Delta P_t = \begin{bmatrix} \Delta x_t \\ \Delta y_t \\ \Delta\theta_t \end{bmatrix}
= \begin{bmatrix} f(s(t), r(t), x(t-1), y(t-1), \theta(t-1)) \\
                  g(s(t), r(t), x(t-1), y(t-1), \theta(t-1)) \\
                  h(s(t), r(t), x(t-1), y(t-1), \theta(t-1)) \end{bmatrix}    (1)

Now, the uncertainty in the robot's position will depend on model inaccuracies, noise in the sensor measurements, and additive errors from the previous position estimate. It can be computed from the covariance matrix of the robot's position:

Cov[P_t] = Cov[P_{t-1} + \Delta P_t] = Cov[P_{t-1}] + Cov[\Delta P_t] + 2\,Cov[P_{t-1}, \Delta P_t]    (2)

The term Cov[P_{t-1}] is recursive and can be initialized to 0_{3\times 3} if the initial position of the robot is well known. As a first approach, we consider that the influence of the previous position P_{t-1} on the increment of run path ΔP_t is not meaningful. The term of interest is Cov[ΔP_t]:

Cov[\Delta P_t] = E[\Delta P_t \Delta P_t^T] - E[\Delta P_t]\,E[\Delta P_t^T]    (3)

E[\Delta P_t \Delta P_t^T] =
\begin{bmatrix} E[\Delta x_t \Delta x_t^T] & E[\Delta x_t \Delta y_t^T] & E[\Delta x_t \Delta\theta_t^T] \\
                E[\Delta y_t \Delta x_t^T] & E[\Delta y_t \Delta y_t^T] & E[\Delta y_t \Delta\theta_t^T] \\
                E[\Delta\theta_t \Delta x_t^T] & E[\Delta\theta_t \Delta y_t^T] & E[\Delta\theta_t \Delta\theta_t^T] \end{bmatrix}    (4)

E[\Delta P_t]\,E[\Delta P_t^T] =
\begin{bmatrix} E[\Delta x_t]E[\Delta x_t^T] & E[\Delta x_t]E[\Delta y_t^T] & E[\Delta x_t]E[\Delta\theta_t^T] \\
                E[\Delta y_t]E[\Delta x_t^T] & E[\Delta y_t]E[\Delta y_t^T] & E[\Delta y_t]E[\Delta\theta_t^T] \\
                E[\Delta\theta_t]E[\Delta x_t^T] & E[\Delta\theta_t]E[\Delta y_t^T] & E[\Delta\theta_t]E[\Delta\theta_t^T] \end{bmatrix}    (5)

Equations (3) to (5) express the uncertainty of the position determined from internal sensors while taking into account the physical architecture of the vehicle. In order to solve (4) and (5), a distribution for the sensor noise must be assumed, which will be determined from experimental characterization.

III. USED ROBOT PLATFORM

A standard vehicle for mining operations has been used in this study. Manufactured by Johnson's Industries, the vehicle is electrically powered and manually driven. The steering and driving wheels were automated in order to perform mine material transport operations outside the mine. Considering the bicycle approximation, the kinematic model of the AV being used [20] appears in Figure 1. There are two non-holonomic constraints, one for the steer wheel and the other for the rear wheel:

\dot{x}_f \sin(\theta + \phi) - \dot{y}_f \cos(\theta + \phi) = 0 ;\qquad \dot{x}\sin\theta - \dot{y}\cos\theta = 0

where (x_f, y_f) are the coordinates of the steer wheel. These equations establish that lateral displacement is null. Now, using the constraints given for a rigid body we have:

x_f = x + L\cos\theta ;\qquad y_f = y + L\sin\theta

Figure 1. Kinematic model for a common car-like vehicle: rear axle at (x, y), steer wheel at (x_f, y_f), wheelbase L, heading θ, steering angle φ, and turn radius R.

Substituting into the first constraint, and defining the radius of the curve that the vehicle describes through \tan\phi = L/R, the kinematic model for a car-like vehicle is:

\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \\ \dot{\phi} \end{bmatrix}
= v_1 \begin{bmatrix} \cos\theta \\ \sin\theta \\ \frac{1}{L}\tan\phi \\ 0 \end{bmatrix}
+ v_2 \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}

where v_1 and v_2 are the velocities of the vehicle and of the steering, respectively. A more complete model would include rotation angles for each wheel, or take into account the possible deformation of the wheels; however, the simple model proposed has the essential elements for the analysis and is sufficient for control purposes [21].
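As an illustration of how the car-like kinematic model can be integrated to dead-reckon a pose from v1 and v2, the following is a minimal sketch (not part of the original paper); the Euler integration scheme, the sampling period dt, and all numeric values are assumptions made for the example:

```python
import math

def step(x, y, theta, phi, v1, v2, L, dt):
    """One Euler step of the car-like kinematic model:
    x' = v1 cos(theta), y' = v1 sin(theta),
    theta' = (v1 / L) tan(phi), phi' = v2."""
    x += v1 * math.cos(theta) * dt
    y += v1 * math.sin(theta) * dt
    theta += (v1 / L) * math.tan(phi) * dt
    phi += v2 * dt
    return x, y, theta, phi

# Straight-line motion: zero steering angle keeps theta constant,
# so the pose advances only along x.
x, y, theta, phi = 0.0, 0.0, 0.0, 0.0
for _ in range(100):
    x, y, theta, phi = step(x, y, theta, phi, v1=1.0, v2=0.0, L=2.0, dt=0.01)
# After 1 s at 1 m/s the vehicle has advanced ~1 m along x.
```

A finer integration scheme (or a smaller dt) would be needed at larger steering rates, consistent with the small-Δθ assumption used later in the paper.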

IV. AV'S POSITION ESTIMATE UNCERTAINTY

We propose using the position estimation equations of the kinematic model to obtain the covariance matrix expressing the uncertainty of the odometry-estimated position. This uncertainty estimation takes directly into account the physical structure of the vehicle, which is translated into the control variables being manipulated and sensed. For the case at hand, the only inputs to the model, used to obtain the (x, y, θ) variables required to compute the position of the vehicle, are the two measures obtained from internal sensors: the velocity, translated into the instantaneous displacement \Delta d_t^{od}, and the instantaneous steering angle \phi_t. Hence a closed mathematical form for such a matrix will be obtained: for time t, given a position estimate P_t, we have an associated uncertainty in the estimation given by Cov[P_t]. The position estimate from odometry at time t is computed as P_t^{od} = P_{t-1} + \Delta P_t^{od}, corresponding to the sum of the robot position at a previous time t-1 plus the increments in position measured by the odometry system and computed using the equations of the kinematic model:

\Delta P_t^{od} = \begin{bmatrix} \Delta x_t^{od} \\ \Delta y_t^{od} \\ \Delta\theta_t^{od} \end{bmatrix}
= \begin{bmatrix} \Delta\hat{d}_t^{od}\cos(\theta_{t-1} + \tfrac{\Delta\theta_t}{2}) \\
                  \Delta\hat{d}_t^{od}\sin(\theta_{t-1} + \tfrac{\Delta\theta_t}{2}) \\
                  \tfrac{\Delta\hat{d}_t^{od}}{L}\tan(\hat{\phi}_t) \end{bmatrix}

Figure 2. Geometric diagram for the computation of Δx, Δy, and Δθ between consecutive poses (x_k, y_k) and (x_{k+1}, y_{k+1}) on a turn of radius R. A vector with an inclination of θ_{t-1} + Δθ/2 is used.

Measures obtained from both the odometers and the steer angle sensor are not error free. In order to compute the uncertainty in the position given by the odometry system, Cov[P_t^{od}], we consider that those errors follow a normal distribution:

\Delta\hat{d}_t^{od} = \Delta d_t^{od} + \varepsilon_{od} \quad\text{where}\quad \varepsilon_{od} \approx N(0, \sigma_{od}^2)
\hat{\phi}_t = \phi_t + \varepsilon_{\phi} \quad\text{where}\quad \varepsilon_{\phi} \approx N(0, \sigma_{\phi}^2)

In fact, to compute the errors from the odometers we considered the left and right odometers separately, obtaining \sigma_{od,r}^2 and \sigma_{od,l}^2 and computing 4\sigma_{od}^2 = \sigma_{od,r}^2 + \sigma_{od,l}^2.

So the uncertainty in the position P_t^{od} is given by:

Cov[P_t^{od}] = Cov[P_{t-1} + \Delta P_t^{od}] = Cov[P_{t-1}] + Cov[\Delta P_t^{od}] + 2\,Cov[P_{t-1}, \Delta P_t^{od}]

In order to solve (3) for our vehicle, it is necessary to calculate each of the elements of the matrices involved. We also assumed, in order to simplify the presented mathematical development and without loss of generality of the method, that \Delta\theta_t is small between two consecutive measures. This assumption is guaranteed because our AV moves at low speed and a high sampling frequency is used for the sensors. Hence, the mathematical development presented in this paper may not be valid for vehicles moving at high speed; to calculate the position uncertainty of a high-speed vehicle, one would only need to reformulate the given expressions without the small-\Delta\theta_t limitation. The first element in equation (4) is calculated as follows:

E[\Delta x_t^{od}\,\Delta x_t^{od}]
= E\big[\Delta\hat{d}_t^{od}\cos(\theta_{t-1}+\tfrac{\Delta\theta_t}{2})\;\Delta\hat{d}_t^{od}\cos(\theta_{t-1}+\tfrac{\Delta\theta_t}{2})\big]
= E\big[(\Delta\hat{d}_t^{od})^2\cos^2(\theta_{t-1}+\tfrac{\Delta\theta_t}{2})\big]
= \tfrac{1}{2}\,E\big[(\Delta\hat{d}_t^{od})^2\,(1 + \cos(2\theta_{t-1}) - \sin(2\theta_{t-1})\,\Delta\theta_t)\big]

where the following equivalences have been used: \cos^2\alpha = \tfrac{1}{2}(1+\cos 2\alpha); \cos(\alpha\pm\beta) = \cos\alpha\cos\beta \mp \sin\alpha\sin\beta. Going on with the calculation:

E[\Delta x_t^{od}\,\Delta x_t^{od}] = \tfrac{1}{2}E[(\Delta\hat{d}_t^{od})^2] + \tfrac{1}{2}E[(\Delta\hat{d}_t^{od})^2]\cos(2\theta_{t-1}) - \tfrac{1}{2}\sin(2\theta_{t-1})\,E[(\Delta\hat{d}_t^{od})^2\,\Delta\theta_t]

Particularizing each term of the previous expression:

\tfrac{1}{2}E[(\Delta\hat{d}_t^{od})^2] = \tfrac{1}{2}E[(\Delta d_t^{od}+\varepsilon_{od})(\Delta d_t^{od}+\varepsilon_{od})] = \tfrac{1}{2}\big((\Delta d_t^{od})^2 + \sigma_{od}^2\big)

\tfrac{1}{2}E[(\Delta\hat{d}_t^{od})^2]\cos(2\theta_{t-1}) = \big((\Delta d_t^{od})^2 + \sigma_{od}^2\big)\big(\cos^2(\theta_{t-1}) - \tfrac{1}{2}\big)

\tfrac{1}{2}\sin(2\theta_{t-1})\,E[(\Delta\hat{d}_t^{od})^2\,\Delta\theta_t]
= \sin(\theta_{t-1})\cos(\theta_{t-1})\,E[(\Delta\hat{d}_t^{od})^2\,\Delta\theta_t]
= \tfrac{1}{L}\sin(\theta_{t-1})\cos(\theta_{t-1})\,E[(\Delta\hat{d}_t^{od})^3\,\tan\hat{\phi}_t]
= \tfrac{1}{L}\sin(\theta_{t-1})\cos(\theta_{t-1})\,E[(\Delta\hat{d}_t^{od})^3]\,E[\tan\hat{\phi}_t]
= \tfrac{1}{L}\sin(\theta_{t-1})\cos(\theta_{t-1})\,\big((\Delta d_t^{od})^3 + 3\,\Delta d_t^{od}\,\sigma_{od}^2\big)\tan(\phi_t + \varepsilon_{max}^{\phi})

where the odometer and steer angle noises are taken as independent, and physical considerations on the steer angle measurement

have been taken into account in order to solve E[\tan\hat{\phi}_t]. After computing all the expectations, a closed form for Cov[\Delta P_t^{od}] is obtained as the difference of two symmetric 3×3 matrices, E[\Delta P_t^{od}\,\Delta P_t^{od\,T}] - E[\Delta P_t^{od}]\,E[\Delta P_t^{od\,T}]. Defining the constants k_1 and k_2 as:

k_1 = (\Delta d_t^{od})^2 + \sigma_{od}^2 ;\qquad
k_2 = \tfrac{1}{L}\big((\Delta d_t^{od})^3 + 3\,\Delta d_t^{od}\,\sigma_{od}^2\big)

the elements of the first matrix, given in (4), are:

c_{11} = k_1\cos^2(\theta_{t-1}) - k_2\sin(\theta_{t-1})\cos(\theta_{t-1})\tan(\phi_t+\varepsilon_{max}^{\phi})
c_{12} = c_{21} = \tfrac{k_1}{2}\sin(2\theta_{t-1}) + \tfrac{k_2}{2}\cos(2\theta_{t-1})\tan(\phi_t+\varepsilon_{max}^{\phi})
c_{13} = c_{31} = \tfrac{k_1}{L}\cos(\theta_{t-1})\tan(\phi_t+\varepsilon_{max}^{\phi}) - \tfrac{k_2}{2L}\sin(\theta_{t-1})\tan^2(\phi_t+\varepsilon_{max}^{\phi})
c_{22} = \tfrac{k_1}{2}(1-\cos(2\theta_{t-1})) + \tfrac{k_2}{2}\sin(2\theta_{t-1})\tan(\phi_t+\varepsilon_{max}^{\phi})
c_{23} = c_{32} = \tfrac{k_1}{L}\sin(\theta_{t-1})\tan(\phi_t+\varepsilon_{max}^{\phi}) + \tfrac{k_2}{2L}\cos(\theta_{t-1})\tan^2(\phi_t+\varepsilon_{max}^{\phi})
c_{33} = \tfrac{k_1}{L^2}\tan^2(\phi_t+\varepsilon_{max}^{\phi})

A complete development of all the elements of matrix (4) is given in the Appendix. The second matrix, (5), is computed from the vector product E[\Delta P_t^{od}]\,E[\Delta P_t^{od\,T}], where:

E[\Delta P_t^{od}] = \begin{bmatrix}
\Delta d_t^{od}\cos(\theta_{t-1}) - \tfrac{k_1}{2L}\sin(\theta_{t-1})\tan(\phi_t+\varepsilon_{max}^{\phi}) \\
\Delta d_t^{od}\sin(\theta_{t-1}) + \tfrac{k_1}{2L}\cos(\theta_{t-1})\tan(\phi_t+\varepsilon_{max}^{\phi}) \\
\tfrac{\Delta d_t^{od}}{L}\tan(\phi_t+\varepsilon_{max}^{\phi})
\end{bmatrix}

The full expression for this second matrix is not included here due to the length of the expressions involved; it is directly derived by multiplying E[\Delta P_t^{od}]\,E[\Delta P_t^{od\,T}]. Finally, the closed form for the uncertainty in the odometry position estimate is given by subtracting the two matrices just computed, E[\Delta P_t^{od}\,\Delta P_t^{od\,T}] - E[\Delta P_t^{od}]\,E[\Delta P_t^{od\,T}]. The mathematical expression for the final matrix is not meaningful by itself; it is much more practical to use computer code that computes both matrices separately and then subtracts them.

V. ASSESSMENT OF RESULTS

To assess the effectiveness of the proposed method to compute the uncertainty associated with the odometry position estimate of a mobile vehicle, an experiment was designed and run using the AV described in Section III. A squared path, 50 m per side, marked on the floor of the ITESM parking lot, was followed while manually driving the described AV. Data from the internal sensors (encoders and steer angle) were gathered during the run². Figure 3 shows the odometry data obtained for the coordinates (x, y). Then, the uncertainty of the odometry position estimates obtained with both an EKF and the proposed method was compared. Figures 4 and 5 show the obtained covariance, denoting the uncertainty of the position estimates for the x and y axes, using the proposed formulation and an EKF, respectively.

Figure 3. Odometry data obtained for the run path. The proposed path starts at the origin (0,0), advancing from left to right in the graph to complete a round trip.

Figure 4. Estimated position uncertainty for coordinates x, y using the presented formulation.

Figure 5. Estimated position uncertainty for coordinates x, y from an EKF. Note the linearity and the dependency of the measurement on the orientation with regard to the axes, given by the value steps.

² Maximum velocity was 8 km/h. Sampling frequency was 20 Hz.
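As the text notes, the final covariance is most practically obtained in code by building the two matrices separately and subtracting them. The following is a minimal sketch of that assembly (not the paper's original implementation); the symbol names mirror the text, and the numeric values are placeholders rather than the experiment's:

```python
import math
import numpy as np

def odometry_increment_cov(dd, phi, theta_prev, L, sigma_od, eps_phi_max):
    """Closed-form Cov[dP_t^od] = E[dP dP^T] - E[dP] E[dP]^T for the
    car-like vehicle, built from k1, k2 and the bounded tan(phi + eps) term."""
    k1 = dd**2 + sigma_od**2
    k2 = (dd**3 + 3.0 * dd * sigma_od**2) / L
    t = math.tan(phi + eps_phi_max)
    s, c = math.sin(theta_prev), math.cos(theta_prev)
    s2, c2 = math.sin(2 * theta_prev), math.cos(2 * theta_prev)
    # Second-moment matrix E[dP dP^T], equation (4), elements c_ij.
    E2 = np.array([
        [k1*c*c - k2*s*c*t,          k1/2*s2 + k2/2*c2*t,        k1/L*c*t - k2/(2*L)*s*t**2],
        [k1/2*s2 + k2/2*c2*t,        k1/2*(1-c2) + k2/2*s2*t,    k1/L*s*t + k2/(2*L)*c*t**2],
        [k1/L*c*t - k2/(2*L)*s*t**2, k1/L*s*t + k2/(2*L)*c*t**2, k1/L**2 * t**2],
    ])
    # First-moment vector E[dP], the expression preceding Figure 4.
    m = np.array([dd*c - k1/(2*L)*s*t,
                  dd*s + k1/(2*L)*c*t,
                  dd/L * t])
    return E2 - np.outer(m, m)

cov = odometry_increment_cov(dd=0.05, phi=0.1, theta_prev=0.3,
                             L=2.0, sigma_od=0.002, eps_phi_max=0.01)
# The result is a symmetric 3x3 covariance of the odometry increment.
```

Per equation (2), this per-step term is then accumulated with the previous Cov[P_{t-1}] along the trajectory.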

It can be seen from Figures 4 and 5 that the uncertainty obtained for the coordinates x, y using the method presented in this paper (about 10 m) is lower than that obtained with an EKF (about 17 m). It turns out that the real measured position error for the platform used in the experiment is about 10 m (which means a covariance uncertainty of about 100, as our method states).

VI. CONCLUSIONS AND FUTURE WORK

The position of a robot or an AV in a two-dimensional space can be represented by the three-component vector (x, y, θ), where x, y represent the position coordinates of the robot in the space and θ its orientation; thus the robot position at time t is denoted as P_t = (x_t, y_t, θ_t)^T. There is always an error associated with the movement of the autonomous platform. The imprecision in the position estimate is due to errors in the sensory system used to determine such estimates, as well as to unmodeled factors in the vehicle model. These errors are normally estimated from experimental data and then integrated in algorithms, for example Kalman filtering, which are highly sensitive to the obtained parameters. We have proposed in this paper a way to obtain a closed-form expression for the position uncertainty, by means of a covariance matrix, when position estimates are obtained using internal sensors. The proposed method is valid for any platform or set of sensors. We then particularized the formulation in order to obtain a closed form for the covariance matrix representing the measure of position uncertainty for a given kind of autonomous vehicle, with a non-holonomic Ackerman architecture.

For the platform used in this study, it is important to note that the instantaneous control variables are \Delta d_t^{od} and \phi_t, the displacement as a function of time and the steering angle respectively; these are the sole variables that the internal system can manipulate and sense. Even if a lot of research and experiments are left for the near future, the initial experimental evaluation demonstrates that the proposed method computes the uncertainty in position with higher accuracy than the standard EKF method. At the same time, the obtained data suggest that the proposed method is independent of the orientation of the vehicle with regard to the axes, in contrast to the EKF method, where the uncertainty grows with regard to the axis to which the vehicle is oriented. The authors are currently working on the integration of this matrix computation into a probabilistic frame (errors in the positioning sensors can be assumed to be random and can be modeled by a parametric probability distribution) that allows performing data fusion from different positioning systems; concretely, work is being done on the integration of odometry and GPS measures. We hope that the formulation presented in this paper will help to reduce errors when estimating the position of the vehicle using different sources of data.
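Closed-form expressions of this kind can also be cross-checked numerically, outside the reported experiments: sample the assumed sensor noise, form the odometry increments from the estimation equations, and compare the sample covariance of the increments with the analytic matrix. A hedged sketch of such a Monte-Carlo check (all numeric values are placeholders, and the small-Δθ form of the increment equations is used):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def increment(dd_meas, phi_meas, theta_prev, L):
    """Odometry increment (dx, dy, dtheta) from noisy measurements,
    following the increment equations of Section IV."""
    dth = dd_meas / L * math.tan(phi_meas)
    return np.array([dd_meas * math.cos(theta_prev + dth / 2),
                     dd_meas * math.sin(theta_prev + dth / 2),
                     dth])

# Nominal values and noise levels are illustrative, not the experiment's.
dd, phi, theta_prev, L = 0.05, 0.1, 0.3, 2.0
sigma_od, sigma_phi = 0.002, 0.005
samples = np.stack([
    increment(dd + rng.normal(0, sigma_od),
              phi + rng.normal(0, sigma_phi), theta_prev, L)
    for _ in range(20000)
])
cov_mc = np.cov(samples.T)  # 3x3 sample covariance of the increment
```

Agreement between cov_mc and the analytic Cov[ΔP_t^od] (up to the approximations made, notably the small-Δθ expansion and the treatment of E[tan φ̂]) supports the derivation.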

APPENDIX

A complete derivation of the terms of the first matrix expressed in (4) is given here. The first element in (4) was calculated in Section IV. Taking into account that the matrix is symmetric, and that \Delta\theta_t is small between two consecutive measures, the rest of the elements are calculated as follows:

c_{12} = c_{21} = E[\Delta x_t^{od}\,\Delta y_t^{od}]
= E\big[\Delta\hat{d}_t^{od}\cos(\theta_{t-1}+\tfrac{\Delta\theta_t}{2})\;\Delta\hat{d}_t^{od}\sin(\theta_{t-1}+\tfrac{\Delta\theta_t}{2})\big]
= \tfrac{1}{2}E\big[(\Delta\hat{d}_t^{od})^2\sin(2\theta_{t-1}+\Delta\theta_t)\big]
= \tfrac{1}{2}E\big[(\Delta\hat{d}_t^{od})^2(\sin(2\theta_{t-1})\cos(\Delta\theta_t) + \cos(2\theta_{t-1})\sin(\Delta\theta_t))\big]
= \tfrac{1}{2}E\big[(\Delta\hat{d}_t^{od})^2(\sin(2\theta_{t-1}) + \cos(2\theta_{t-1})\,\Delta\theta_t)\big]

In the last three equalities the following equivalences and assumptions have been used: \sin 2\alpha = 2\sin\alpha\cos\alpha; \sin(\alpha\pm\beta) = \sin\alpha\cos\beta \pm \cos\alpha\sin\beta. Going on with the calculation and using some of the results given in Section IV when computing c_{11}:

E[\Delta x_t^{od}\,\Delta y_t^{od}]
= \tfrac{1}{2}E[(\Delta\hat{d}_t^{od})^2]\sin(2\theta_{t-1}) + \tfrac{1}{2}E[(\Delta\hat{d}_t^{od})^2\cos(2\theta_{t-1})\,\Delta\theta_t]
= \cdots = \tfrac{k_1}{2}\sin(2\theta_{t-1}) + \tfrac{k_2}{2}\cos(2\theta_{t-1})\tan(\phi_t+\varepsilon_{max}^{\phi}) = c_{12} = c_{21}

c_{13} = c_{31} = E[\Delta x_t^{od}\,\Delta\theta_t^{od}]
= E\big[\Delta\hat{d}_t^{od}\cos(\theta_{t-1}+\tfrac{\Delta\theta_t}{2})\,\tfrac{\Delta\hat{d}_t^{od}}{L}\tan\hat{\phi}_t\big]
= \tfrac{1}{L}E\big[(\Delta\hat{d}_t^{od})^2\tan\hat{\phi}_t\,(\cos(\theta_{t-1})\cos(\tfrac{\Delta\theta_t}{2}) - \sin(\theta_{t-1})\sin(\tfrac{\Delta\theta_t}{2}))\big]
= \tfrac{1}{L}\cos(\theta_{t-1})\,E[(\Delta\hat{d}_t^{od})^2]\,E[\tan\hat{\phi}_t] - \tfrac{1}{2L^2}\sin(\theta_{t-1})\,E[(\Delta\hat{d}_t^{od})^3]\,E[\tan^2\hat{\phi}_t]
= \cdots = \tfrac{k_1}{L}\cos(\theta_{t-1})\tan(\phi_t+\varepsilon_{max}^{\phi}) - \tfrac{k_2}{2L}\sin(\theta_{t-1})\tan^2(\phi_t+\varepsilon_{max}^{\phi})

The equivalence \cos(\alpha\pm\beta) = \cos\alpha\cos\beta \mp \sin\alpha\sin\beta has been used.

c_{22} = E[\Delta y_t^{od}\,\Delta y_t^{od}]
= E\big[\Delta\hat{d}_t^{od}\sin(\theta_{t-1}+\tfrac{\Delta\theta_t}{2})\;\Delta\hat{d}_t^{od}\sin(\theta_{t-1}+\tfrac{\Delta\theta_t}{2})\big]
= \tfrac{1}{2}E\big[(\Delta\hat{d}_t^{od})^2(1-\cos(2\theta_{t-1}+\Delta\theta_t))\big]
= \tfrac{1}{2}E\big[(\Delta\hat{d}_t^{od})^2(1-\cos(2\theta_{t-1}) + \sin(2\theta_{t-1})\,\Delta\theta_t)\big]
= \tfrac{1}{2}E[(\Delta\hat{d}_t^{od})^2] - \tfrac{1}{2}\cos(2\theta_{t-1})\,E[(\Delta\hat{d}_t^{od})^2] + \tfrac{1}{2L}\sin(2\theta_{t-1})\,E[(\Delta\hat{d}_t^{od})^3]\,E[\tan\hat{\phi}_t]
= \cdots = \tfrac{k_1}{2}(1-\cos(2\theta_{t-1})) + \tfrac{k_2}{2}\sin(2\theta_{t-1})\tan(\phi_t+\varepsilon_{max}^{\phi})

where the following equivalences have been used: \sin^2\alpha = \tfrac{1}{2}(1-\cos 2\alpha); \cos(\alpha\pm\beta) = \cos\alpha\cos\beta \mp \sin\alpha\sin\beta.

c_{23} = c_{32} = E[\Delta y_t^{od}\,\Delta\theta_t^{od}]
= E\big[\Delta\hat{d}_t^{od}\sin(\theta_{t-1}+\tfrac{\Delta\theta_t}{2})\,\tfrac{\Delta\hat{d}_t^{od}}{L}\tan\hat{\phi}_t\big]
= \tfrac{1}{L}E\big[(\Delta\hat{d}_t^{od})^2(\sin(\theta_{t-1})\cos(\tfrac{\Delta\theta_t}{2}) + \cos(\theta_{t-1})\sin(\tfrac{\Delta\theta_t}{2}))\tan\hat{\phi}_t\big]
= \tfrac{1}{L}\sin(\theta_{t-1})\,E[(\Delta\hat{d}_t^{od})^2]\,E[\tan\hat{\phi}_t] + \tfrac{1}{2L^2}\cos(\theta_{t-1})\,E[(\Delta\hat{d}_t^{od})^3]\,E[\tan^2\hat{\phi}_t]
= \tfrac{k_1}{L}\sin(\theta_{t-1})\tan(\phi_t+\varepsilon_{max}^{\phi}) + \tfrac{k_2}{2L}\cos(\theta_{t-1})\tan^2(\phi_t+\varepsilon_{max}^{\phi})

where the equivalence \sin(\alpha\pm\beta) = \sin\alpha\cos\beta \pm \cos\alpha\sin\beta has been used.

c_{33} = E[\Delta\theta_t^{od}\,\Delta\theta_t^{od}]
= E\big[\tfrac{\Delta\hat{d}_t^{od}}{L}\tan\hat{\phi}_t\;\tfrac{\Delta\hat{d}_t^{od}}{L}\tan\hat{\phi}_t\big]
= \tfrac{1}{L^2}E[(\Delta\hat{d}_t^{od})^2]\,E[\tan^2\hat{\phi}_t] = \tfrac{k_1}{L^2}\tan^2(\phi_t+\varepsilon_{max}^{\phi})

ACKNOWLEDGEMENT

Josep M. Mirats Tur wishes to thank the Dept. of Computer Science of the Instituto Tecnológico de Monterrey (ITESM) for holding him as an invited professor during 2004. This work was possible thanks to the invitation from Juan Nolazco, Director of the mentioned department.

REFERENCES

[1] M.A. Salichs, J.M. Arminol, Moreno, and A. de la Escalera, "Localization System for Mobile Robots in Indoor Environments", Integrated Computer-Aided Engineering, Vol. 6, No. 4, pp. 303-318, 1999.
[2] J. Howell and B.R. Donald, "Practical Mobile Robot Self-Localization", Proc. IEEE Int. Conference on Robotics and Automation (ICRA), San Francisco, CA, April 2000, pp. 3485-3492.
[3] R.G. Brown and B.R. Donald, "Mobile Robot Self-Localization without Explicit Landmarks", Algorithmica, 26(3/4), pp. 515-559, 2000.
[4] S. Thrun, "Finding landmarks for mobile robot navigation", Proc. IEEE Int. Conference on Robotics and Automation (ICRA), 1998, pp. 958-963.
[5] J.R. Asensio, J.M. Montiel, and L. Montano, "Goal Directed Reactive Robot Navigation with Relocation Using Laser and Vision", Proc. IEEE Int. Conference on Robotics and Automation (ICRA), May 1999, pp. 2905-2910.
[6] H. Gonzalez-Banos and J.C. Latombe, "Navigation strategies for exploring indoor environments", Int. Journal of Robotics Research, 21(10-11), pp. 829-848, Oct.-Nov. 2002.
[7] R. Thrapp and C. Westbrook, "Robust localization algorithms for an autonomous campus tour guide", Proc. IEEE Int. Conference on Robotics and Automation (ICRA), Seoul, Korea, May 2001.
[8] T. Pilarsky, M. Happold, H. Pangels, M. Ollis, K. Fitzpatrick, and A. Stentz, "The demeter system for automated harvesting", Autonomous Robots, 13, pp. 9-20, 2002.
[9] S. Scheding, G. Dissanayake, E.M. Nebot, and H. Durrant-Whyte, "An experiment in autonomous navigation of an underground mining vehicle", IEEE Transactions on Robotics and Automation, Vol. 15, No. 1, pp. 85-95, 1999.
[10] H. Chung, L. Ojeda, and J. Borenstein, "Sensor fusion for mobile robot dead reckoning with a precision-calibrated fiber optic gyroscope", Proc. IEEE Int. Conference on Robotics and Automation (ICRA), Seoul, Korea, May 2001, pp. 3588-3593.
[11] S. Majumder, S. Scheding, and H.F. Durrant-Whyte, "Multi sensor data fusion for underwater navigation", Robotics and Autonomous Systems, Vol. 35, No. 1, pp. 97-108, 2001.
[12] R.A. Brooks, "Visual map making for a mobile robot", Proc. IEEE Int. Conference on Robotics and Automation (ICRA), St. Louis, Missouri, 1985, pp. 824-829.
[13] R. Chatila and J.P. Laumond, "Position referencing and consistent world modeling for mobile robots", Proc. IEEE Int. Conference on Robotics and Automation (ICRA), St. Louis, Missouri, 1985, pp. 138-145.
[14] R.C. Smith and P. Cheeseman, "On the representation and estimation of spatial uncertainty", Int. Journal of Robotics Research, 5, pp. 55-68, 1987.
[15] C.M. Wang, "Location estimation and uncertainty analysis for mobile robots", Proc. IEEE Int. Conference on Robotics and Automation (ICRA), 1988, pp. 1230-1235.
[16] A. Kelly, "General solution for linearized systematic error propagation in vehicle odometry", Proc. Int. Conference on Intelligent Robots and Systems (IROS), Maui, HI, Oct.-Nov. 2001, pp. 1938-1945.
[17] A. Pozo-Ruz, "Sistema sensorial para localización de vehículos en exteriores", Ph.D. dissertation, Dept. Elect. Eng., University of Málaga, Spain, ISBN 84-699-8140-4, 2001.
[18] K.S. Chong and L. Kleeman, "Accurate odometry and error modeling for a mobile robot", Proc. IEEE Int. Conference on Robotics and Automation (ICRA), 1997, pp. 2783-2788.
[19] A. Martinelli, "The odometry error of a mobile robot with a synchronous drive system", IEEE Transactions on Robotics and Automation, Vol. 18, pp. 399-405, Jun. 2002.
[20] G. Palacios, "Control de dirección de un vehículo autónomo", M.S. thesis, Instituto Tecnológico de Monterrey, México, 2000.
[21] J.P. Laumond, "Robot motion planning and control", Laboratoire d'Analyse et d'Architecture des Systèmes, LAAS Report 97438, 1998.
