Particle Filter Integrating Asynchronous Observations for Mobile Robot Position Tracking in Cooperative Environments

Andreu Corominas Murtra1, Josep M. Mirats Tur1, Alberto Sanfeliu1,2

Authors are with: 1 Institut de Robòtica i Informàtica Industrial, IRI (UPC-CSIC). Barcelona, Spain. www-iri.upc.es. 2 Universitat Politècnica de Catalunya, UPC. Barcelona, Spain. www.upc.edu. [acorominas, jmirats, asanfeliu]@iri.upc.edu. Research conducted at the Institut de Robòtica i Informàtica Industrial of the Universitat Politècnica de Catalunya and Consejo Superior de Investigaciones Científicas. Partially supported by Consolider Ingenio 2010, project CSD2007-00018, CICYT project DPI2007-61452, and project IST-045062 of the European Community Union.

Abstract— abstract ...

I. INTRODUCTION

Position tracking of mobile robots has been, and currently is, an active and large research field, given the basic and key interest of the task [1], [2]. For autonomous navigation purposes a mobile robot needs to be robustly localized, either in a local frame or in a global one. This position tracking needs to be done in real time, since the tracking output usually closes a control loop in charge of following a path to reach a given goal position. This role of the position tracking task in the navigation framework imposes real-time constraints on its computation. In most mobile robots, position tracking is computed by means of a filtering process that integrates observations coming from different perception subsystems onboard the robot, and even observations coming from remote observers not onboard the robot when the robot runs in a cooperative environment [3]. Whether these observations are onboard or offboard, they all have an observation moment that rarely coincides with the time at which the observation data is available at the filtering processing unit. Moreover, the filtering process has to set a moment for its output estimate and this moment, again, does not coincide with the observation moments.

The above time considerations become more serious in two cases of interest: cooperative robotics and fast vehicles. In the first case, the observations arriving at the filtering process may be made by remote observers (computers) connected with the tracking computer (usually onboard) by means of a wireless network. These observations suffer a considerable delay caused by acquisition, processing and communications. These delays can reach several hundreds of milliseconds, enough time for a mobile robot to perform non-negligible displacements. In the second case, even small delays, caused for instance by latencies of the onboard sensors, translate into large displacements due to the high speed of the vehicle. Previous works considering delayed observations for position tracking purposes are based on the Kalman filter framework [4], [5], but adapting particle

filters to integrate asynchronous observations remains a less explored area of interest. Moreover, these works only focus on the integration of delayed observations, but in real applications some observations arrive after the prior estimate, so the concept of an advanced observation arises. This occurs in filters that integrate multiple observations, spending a non-negligible time on each observation integration; therefore, new observations can arrive after the prior estimate moment.

This paper presents a modified framework for particle filter position tracking that integrates observations taking into account their time stamps. The filter does not propagate the particle set once per iteration as classical approaches do. The proposed algorithm propagates the particle set only when a new observation arrives with a time stamp greater than that of the last propagation. At that moment, the filter propagates with the kinematic model and the odometric observation and then integrates the observation as a delayed one. In order to integrate delayed observations, the proposed approach keeps a history of the last estimates and back-propagates particles to compute observation models at the positions where particles were expected to be at the observation time. The paper shows results of simulated experiments comparing the presented approach with a classic particle filter, clearly showing the improvements of the new approach. The paper is organized as follows: Section II describes a basic particle filter, used in this work to compare the performance of the proposed approach. Section III describes the proposed particle filter that integrates asynchronous observations...

II. BASIC PARTICLE FILTER

A particle filter is a recursive algorithm that estimates a probability density function of the state of a system given a set of observations and, optionally, a kinematic or dynamic model of that system. The density function is represented by a set of samples, vectors in the state space, each one having a weight related to the likelihood of the system state being at that point given the observations. The pair formed by a sample vector and a weight is called a particle. Further details on particle filters and their applications in mobile robotics can be found in [6]. Our implementation of the particle filter for mobile robot position tracking discretizes neither the state space nor the observation space. That is, the estimated state of the robot is a vector in a continuous space and both the real observations and the expected ones, computed to calculate the likelihoods, are also vectors in continuous spaces.

Let X_r^\tau = (x_r^\tau, y_r^\tau, \theta_r^\tau) be the robot true state at time \tau. This true state always remains unknown and is the target variable to estimate. The output of the t-th iteration of the position tracking process is the position estimate \hat{X}_r^t = (\hat{x}_r^t, \hat{y}_r^t, \hat{\theta}_r^t), the estimated covariance matrix \hat{C}_r^t and the time stamp of those estimates \tau^t. Please note that \tau refers to continuous time while t indicates an iteration index. Let X be the state space, a three-dimensional continuous space:

X_r^\tau, \hat{X}_r^t \in X = \{(x_{min}, x_{max}), (y_{min}, y_{max}), (-\pi, \pi]\}    (1)

That is, the robot is assumed to always be inside the working area. If we integrate data coming from N_B + 1 observers, the approximation made by the sample representation of the density function at iteration t can be written as:

p(X_r^\tau | \{o_k^\tau\}) \sim P^t = \{(X_i^t, w_i^t)\}, \quad \forall \tau, k    (2)

The above expression indicates that the probability density function is approximated with the set P^t formed by N_P particles (i = 1..N_P). The output estimate takes into account all the observations since the start of the filter execution, denoted as \{o_k^\tau\}, where o_k^\tau is a single observation with time stamp \tau made by the k-th observer. An observation o_k^\tau is assumed to lie inside the k-th observation space:

o_k^\tau \in O_k    (3)

When an observation is integrated at the t-th iteration, we denote it as o_k^t. In order to compute the likelihood of the i-th particle given the observation o_k^t, we compute:

p(X_i^t | o_k^t) = L_k(o_k^t, o_k^s(X_i^t))    (4)

where L_k is a likelihood function between two observations: the one made by the k-th observer, o_k^t, and the expected one, o_k^s(X_i^t), computed using the k-th observation model. The observation o_k^s(X_i^t) is also called synthetic, because it is computed artificially using a model. The likelihood function L_k() is defined as:

L_k : O_k^2 \rightarrow \mathbb{R} \in [0, 1]    (5)
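To make eqs. (4) and (5) concrete, the following is a minimal sketch of one possible bounded likelihood function; the Gaussian-shaped kernel and the names are illustrative assumptions, not the specific models used for each observer in this work.

```python
import numpy as np

def gaussian_likelihood(o_real, o_synthetic, sigma):
    """One possible L_k: compares a real observation o_k^t with the synthetic
    one o_k^s(X_i^t) predicted from a particle state (see eqs. (4) and (5)).
    Returns a value in [0, 1]: 1 for a perfect match, tending to 0 as the
    observation error grows."""
    err = np.asarray(o_real, dtype=float) - np.asarray(o_synthetic, dtype=float)
    # Gaussian kernel on the (possibly multi-dimensional) observation error.
    return float(np.exp(-0.5 * np.sum((err / sigma) ** 2)))
```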

We thus define a likelihood function for each observer (k = 1..N_B), whose output is always bounded to the [0, 1] interval. The index k = 0 is reserved for the odometric observation, which is integrated by means of a kinematic model instead of a likelihood function. Algorithm 1 summarizes an iteration of the implemented basic particle filter.

Algorithm 1 Basic particle filter iteration
INPUT: P^{t-1}, o_k^t ∀k
OUTPUT: P^t, (X̂^t, Ĉ^t, τ^t)
  P^t = propagate(P^{t-1}, o_0^t)
  for k = 1..N_B do
    for i = 1..N_P do
      p(X_i^t | o_k^t) = L_k(o_k^t, o_k^s(X_i^t))
      w_i^t = w_i^t · p(X_i^t | o_k^t)
    end for
  end for
  P^t = propagate(P^t, +o_0^t)
  (X̂^t, Ĉ^t, τ^t) = setEstimate(P^t)
  publish(X̂^t, Ĉ^t, τ^t)
  resampling(P^t)

In Algorithm 1, the propagate() function propagates the particle set with the last odometric increments (k = 0) and the kinematic model of the platform, f():

X_i^t = f(X_i^{t-1}, o_0^t), \quad \forall i = 1..N_P    (6)

After propagation, a correction() loop integrates the available observations without taking time considerations into account. This correction step can be formalized as:

w_i^{\prime t} = w_i^{t-1} \cdot \prod_{k=1}^{N_B} p(X_i^t | o_k^t), \quad \forall i = 1..N_P    (7)

Just after the weight update by correction, a normalization is performed to ensure that all weights sum to 1:

w_i^t = \frac{w_i^{\prime t}}{\sum_{j=1}^{N_P} w_j^{\prime t}}, \quad \forall i = 1..N_P    (8)

In order to avoid large latency delays due to the processing of the correction loop, we decided to add another propagation step just before computing the estimate to be published by the process; thus the filter publishes a prior. This prior is computed using the odometry accumulated since the last propagate() call, an odometric observation denoted as +o_0^t. This is also done in order to compare the basic filter with the proposed one under fairer conditions. The setEstimate() function parametrizes the particle set as a Gaussian density function. Please note that this Gaussian estimate is computed in order to publish (deliver to other processes) a compact result ready to be used by other processes (system integration purposes); the particle set remains the genuine output of the particle filter. The parameters of the Gaussian density function are:

\hat{x}_r^t = \sum_{i=1}^{N_P} x_i^t \, w_i^t, \qquad (\hat{\sigma}_x^t)^2 = \sum_{i=1}^{N_P} (x_i^t - \hat{x}_r^t)^2 \, w_i^t    (9)

\hat{y}_r^t = \sum_{i=1}^{N_P} y_i^t \, w_i^t, \qquad (\hat{\sigma}_y^t)^2 = \sum_{i=1}^{N_P} (y_i^t - \hat{y}_r^t)^2 \, w_i^t    (10)

\hat{\theta}_r^t = \mathrm{atan}\!\left( \frac{\sum_{i=1}^{N_P} \sin\theta_i^t \, w_i^t}{\sum_{i=1}^{N_P} \cos\theta_i^t \, w_i^t} \right)    (11)

(\hat{\sigma}_\theta^t)^2 = \sum_{i=1}^{N_P} \big( \mathrm{acos}(\cos(\theta_i^t - \hat{\theta}_r^t)) \big)^2 \, w_i^t    (12)

\hat{\sigma}_{xy}^t = \sum_{i=1}^{N_P} (x_i^t - \hat{x}_r^t)(y_i^t - \hat{y}_r^t) \, w_i^t, \qquad \hat{\sigma}_{x\theta}^t = \hat{\sigma}_{y\theta}^t = 0    (13)
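A minimal sketch of this setEstimate() parametrization, eqs. (9)-(13), is given below; the array names are illustrative and the weights are assumed to be already normalized. The heading is treated circularly, as in eqs. (11) and (12).

```python
import numpy as np

def set_estimate(x, y, theta, w):
    """Gaussian parametrization of a weighted particle set (eqs. (9)-(13)).
    x, y, theta, w: 1-D arrays of length N_P, with w normalized to sum to 1."""
    x_hat = np.sum(x * w)
    y_hat = np.sum(y * w)
    var_x = np.sum((x - x_hat) ** 2 * w)
    var_y = np.sum((y - y_hat) ** 2 * w)
    # Circular mean of the heading, eq. (11) (atan2 avoids quadrant ambiguity).
    theta_hat = np.arctan2(np.sum(np.sin(theta) * w), np.sum(np.cos(theta) * w))
    # Angular spread using the wrapped angular distance, eq. (12).
    ang_err = np.arccos(np.clip(np.cos(theta - theta_hat), -1.0, 1.0))
    var_theta = np.sum(ang_err ** 2 * w)
    # Cross terms, eq. (13): only the x-y covariance is kept.
    cov_xy = np.sum((x - x_hat) * (y - y_hat) * w)
    C = np.array([[var_x,  cov_xy, 0.0],
                  [cov_xy, var_y,  0.0],
                  [0.0,    0.0,    var_theta]])
    return np.array([x_hat, y_hat, theta_hat]), C
```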

The publish() function sends through an output TCP port the estimate computed by the filter. Processes requiring position data should connect to this port in order to receive position data in real-time.
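A hypothetical sketch of such a publisher is shown below; the socket setup is standard, but the message layout is invented here for illustration only (the actual software is described in [7]).

```python
import json
import socket

def publish(conn, X_hat, C_hat, tau):
    """Send the current estimate over an already-accepted TCP connection.
    The JSON layout is only an example, not the format used by the authors."""
    msg = {"x": X_hat[0], "y": X_hat[1], "theta": X_hat[2],
           "cov": C_hat, "stamp": tau}
    conn.sendall((json.dumps(msg) + "\n").encode())

# Server side: processes requiring position data connect to this port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 9000))   # the port number is arbitrary here
srv.listen(1)
```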

The resampling() step generates a new particle set by resampling the current one by means of the regularized resampling method [6]. When an old particle is chosen to be resampled, the new one resets its weight to 1/N_P and draws a new state vector following a normal distribution centered on the old particle state, with standard deviations derived from the platform size.

III. ASYNCHRONOUS PARTICLE FILTER

This section describes the proposed approach to take into account the moment of the observations when they are integrated in the particle filter. In order to outline the proposed algorithm, we have to introduce some definitions:
• Ω_k^t = (o_k^t, C_k^t, τ_k^t, s_k^t) is an observation o_k^t, with covariance matrix C_k^t, arriving at the computing unit at iteration t, coming from the k-th observer, made at continuous time τ_k^t and with status s_k^t.
• Ω^t is the set composed of the last observation received from each of the N_B observers. This set is not completely ready at the beginning of the filter iteration and changes dynamically while filtering advances, since data reception is done concurrently and asynchronously with filtering at the tracking processing unit.
• H^t = {(X̂^{t-∆}, Ĉ^{t-∆}, τ^{t-∆}), ..., (X̂^{t-1}, Ĉ^{t-1}, τ^{t-1}), (X̃^t, C̃^t, τ^t)} is a set keeping the filter history of the last ∆ posterior estimates and the last prior estimate made by the filter.

The t-th iteration of the proposed asynchronous particle filter, integrating observations coming from N_B observers, is outlined in Algorithm 2. ... here we explain the algorithm ....

IV. SIMULATION RESULTS: COMPARISON OF THE TWO FILTERS

In order to evaluate the performance of the proposed algorithm, we have executed an experiment consisting of a simulation of a mobile robot running in an environment of 10,000 m² at a speed of about 2 m/s, completing a path of about 300 m. The simulated robot is equipped with two laser scanners, a compass and a GPS. Moreover, a camera network is deployed in the environment, providing observations of the robot position. Table I summarizes the rates and the latencies of each observer. The values used by the simulator were set taking into account real devices and systems.

TABLE I
RATES AND LATENCIES OF THE OBSERVERS

          Observer             Rate (Hz)   Latency (ms)
  o_0^t   platform odometry       20           ~0
  o_1^t   front laser              4           50
  o_2^t   back laser               4           50
  o_3^t   compass                  5           20
  o_4^t   GPS                      1           20
  o_5^t   Camera Network           1          500
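As an illustration of what such an observer produces, below is a hypothetical sketch of the time-stamped observation record Ω_k^t = (o_k^t, C_k^t, τ_k^t, s_k^t) defined above, with the camera-network latency of Table I separating the observation time from its arrival time; the field names and numeric values are only examples.

```python
import time
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    """Time-stamped observation record, mirroring Omega_k = (o_k, C_k, tau_k, s_k).
    Names and fields are illustrative, not the authors' implementation."""
    o: List[float]        # observation vector in the observer space O_k
    C: List[List[float]]  # observation covariance matrix
    tau: float            # continuous time at which the observation was made [s]
    status: int           # observer status flag
    k: int                # index of the observer that produced it

# Example: a camera-network detection (last row of Table I) made now but only
# available to the filter about 500 ms later because of its latency.
made_at = time.time()
obs = Observation(o=[12.3, 45.6], C=[[0.04, 0.0], [0.0, 0.04]],
                  tau=made_at, status=0, k=5)
arrives_at = made_at + 0.5  # the filter must integrate it as a delayed observation
```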

The experimental testbench was composed of two computers. Computer 1 executed the simulator and both the basic and the asynchronous particle filters, while computer 2 executed the GUI and saved the frames used to produce the attached video.

Algorithm 2 Asynchronous particle filter iteration
INPUT: P^{t-1}, H^{t-1}, Ω^t
OUTPUT: P^t, (X̂^t, Ĉ^t, τ^t), H^t
  P^t = propagate(P^{t-1})
  (X̃^t, C̃^t, τ^t) = setEstimate(P^t)
  H^t.pushBackCurrentEstimate((X̃^t, C̃^t, τ^t))
  for k = 1..N_B do
    j = max ι ∈ {t−∆, ..., t} | τ^ι ≤ τ_k^t
    if j == t then
      P^t = propagate(P^t)
      (X̃^t, C̃^t, τ^t) = setEstimate(P^t)
      H^t.replaceLastEstimate((X̃^t, C̃^t, τ^t))
      j = t − 1
    end if
    α = (τ^{j+1} − τ_k^t) / (τ^{j+1} − τ^j)
    X̂^H = α X̂^j + (1 − α) X̂^{j+1}
    ∆X = X̃^t − X̂^H
    for i = 1..N_P do
      X_i^{H,t} = X_i^t − ∆X
      p(X_i^{H,t} | o_k^t) = L_k(o_k^t, o_k^s(X_i^{H,t}))
      w_i^t = w_i^t · p(X_i^{H,t} | o_k^t)
    end for
  end for
  (X̂^t, Ĉ^t, τ^t) = setEstimate(P^t)
  publish(X̂^t, Ĉ^t, τ^t)
  H^t.replaceLastEstimate((X̂^t, Ĉ^t, τ^t))
  resampling(P^t)
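The key step of Algorithm 2 is the interpolation of the stored history at the observation time stamp τ_k^t and the corresponding back-shift of the particles before evaluating the likelihood. The following is a rough sketch of that step under simplifying assumptions (plain linear interpolation of the stored estimates and no special handling of the heading wrap-around); it is not the authors' implementation.

```python
import numpy as np

def backshift_particles(particles, history, tau_k):
    """particles: (N_P, 3) array of states (x, y, theta).
    history: list of (X_hat, tau) pairs, oldest first, whose last entry is the
    current prior estimate (X_tilde, tau^t). Returns the particles shifted to
    where the filter believed the robot was at observation time tau_k."""
    taus = [tau for _, tau in history]
    # j = max index with tau^j <= tau_k (clamped so that entry j+1 exists).
    j = max(idx for idx, tau in enumerate(taus) if tau <= tau_k)
    j = min(j, len(history) - 2)
    (X_j, tau_j), (X_j1, tau_j1) = history[j], history[j + 1]
    # Interpolation weight: alpha = 1 selects the estimate at tau^j.
    alpha = (tau_j1 - tau_k) / (tau_j1 - tau_j)
    X_H = alpha * np.asarray(X_j) + (1.0 - alpha) * np.asarray(X_j1)
    # Displacement between the current prior and the interpolated past estimate.
    delta = np.asarray(history[-1][0]) - X_H
    # X_i^{H,t} = X_i^t - delta_X for every particle.
    return particles - delta
```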

This scenario allows the two filters to be compared in real time under identical conditions, since they run on the same simulation execution. For both filters, the number of particles was set to N_P = 100. Software details can be found in [7]. Using this testbench we present two experiments. Experiment A was done with the deployed camera network switched off, so both filters integrated only the observations provided by the onboard sensors. In experiment B the camera network was switched on, so both filters also integrated the remote observations provided by the camera network. This experimental strategy is meant to show the key role of the new approach when accurate but delayed observations are available, such as those provided by the camera network. In order to evaluate the performance of the filters we use the following error figure:

e_{xy} = \sqrt{(\hat{x}^t - x^\tau)^2 + (\hat{y}^t - y^\tau)^2}    (14)

To compute this error, the simulated ground truth is linearly interpolated at the exact moments where the estimates are computed, using the ground truth sample just before the estimate and the one just after it.
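A small sketch of this interpolation (variable names are illustrative):

```python
def interpolate_ground_truth(gt_before, gt_after, tau_est):
    """gt_before, gt_after: (tau, x, y) ground-truth samples bracketing the
    estimate time stamp tau_est. Returns the linearly interpolated (x, y)."""
    tau0, x0, y0 = gt_before
    tau1, x1, y1 = gt_after
    s = (tau_est - tau0) / (tau1 - tau0)
    return x0 + s * (x1 - x0), y0 + s * (y1 - y0)
```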

A. Experiment A: Camera Network Off

Figure 1 shows the error e_xy when the camera network is switched off.

Fig. 1. XY error of both filters when the camera network is switched off.

Fig. 3. e_x and the estimated standard deviation σ̂_x for the asynchronous filter.

Fig. 2. e_xy error of both filters when the camera network is switched on.

In this case the proposed filter performs only slightly better, since the observers do not provide data with large latencies and therefore the asynchronous filter cannot take advantage of its properties.

B. Experiment B: Camera Network On

When the camera network is switched on, we put into the scenario a very accurate observer that, however, provides observations with large latencies. In this scenario the proposed asynchronous filter performs much better than the basic one, as figure 2 shows. The asynchronous filter outperforms the basic one with the exception of a short passage, where both filters demonstrated a good recovery behaviour. This execution is recorded and presented in the attached video, where the particle sets of each filter can be seen together with the simulated ground truth position of the robot. For this second experiment the error of each variable is presented, each one accompanied by its estimated standard deviation.

Fig. 4. e_y and the estimated standard deviation σ̂_y for the asynchronous filter.

Figures 3, 4 and 5 show how the filter error remains inside the 1σ covariance bound most of the time.

C. Discussion

Table II summarizes the mean errors of both filters for experiments A and B. As expected, the proposed approach works much better when an accurate but delayed observer plays a role in the scene, as is the case when the camera network is on. We can also see that the θ estimate does not improve its performance, since it depends basically on the odometry and the compass, and these two observers are fast and have small latencies. In Table II the reader can also compare the asynchronous filter with and without the camera network and see that only a small improvement appears in terms of position estimate accuracy, but the filter gains in terms of robustness since another observer is integrated in it.


Fig. 5. e_θ and the estimated standard deviation σ̂_θ for the asynchronous filter.

TABLE II
MEAN ERRORS OF THE A & B EXPERIMENTS

                      Basic PF                    Asynchronous PF
               µ(e_xy) [m]  µ(e_θ) [rad]    µ(e_xy) [m]  µ(e_θ) [rad]
  CamNet OFF      0.36          0.013           0.28          0.012
  CamNet ON       1.05          0.013           0.26          0.012

From this point, we want to evaluate the feasibility of tracking the position of a robot with only the odometry and the camera network, in order to consider the proposed algorithm as a practical solution to be placed onboard cheap robots running in environments where a camera network is deployed. The following section presents results on this issue.

V. SIMULATION RESULTS: POSE TRACKING WITH ODOMETRY AND CAMERA NETWORK

Once the asynchronous filter has shown good properties integrating observations with high latencies, we want to investigate a fusion scheme using only odometry and the camera network. These two observations are very complementary, since odometry has a high rate, a small latency and good accuracy over short displacements, while the camera network provides absolute and accurate x-y observations with a large latency, but does not suffer from accumulated drift as odometry does. In this experiment we use the same testbench as in the previous ones, but we execute only the asynchronous filter.

VI. FUTURE WORKS

REFERENCES

[1] S. Thrun, D. Fox, W. Burgard, and F. Dellaert, "Robust Monte Carlo localization for mobile robots," Artificial Intelligence, vol. 128, pp. 99–141, 2001.
[2] J. Levinson, M. Montemerlo, and S. Thrun, "Map-Based Precision Vehicle Localization in Urban Environments," in Proceedings of the Robotics: Science and Systems Conference, Atlanta, USA, June 2007.
[3] A. Sanfeliu and J. Andrade-Cetto, "Ubiquitous networking robotics in urban settings," in Proceedings of the IEEE/RSJ IROS Workshop on Network Robot Systems, Beijing, China, October 2006.
[4] Y. Bar-Shalom, "Update with Out-of-Sequence Measurements in Tracking: Exact Solution," IEEE Transactions on Aerospace and Electronic Systems, no. 3, pp. 769–778, 2002.
[5] A. Ranganathan, M. Kaess, and F. Dellaert, "Fast 3D Pose Estimation With Out-of-Sequence Measurements," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Diego, California, October 2007.
[6] S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking," IEEE Transactions on Signal Processing, vol. 50, pp. 174–188, February 2002.
[7] A. Corominas Murtra, J. Mirats Tur, O. Sandoval, and A. Sanfeliu, "Real-time Software for Mobile Robot Simulation and Experimentation in Cooperative Environments," in Proceedings of Simulation, Modelling and Programming for Autonomous Robots (SIMPAR), Lecture Notes in Artificial Intelligence 5325, Springer, ISBN 978-3-540-89075-1, Venice, Italy, November 2008, pp. 135–146.
