Particle Filter Based Localization of the Nao Biped Robots

Ehsan Hashemi 1, Maani Ghaffari Jadid 2, Majid Lashgarian 3, Mostafa Yaghobi 4, Mohammad Shafiei R. N. 5

Faculty of Industrial and Mechanical Eng. and Faculty of Electrical, IT, and Computer Eng., Qazvin Branch, Islamic Azad University, Qazvin, Iran
{1e.hashemi, 2m.ghaffari}@qiau.ac.ir, {3m.lashgariyan, 4m.yaghobi, 5m.shafiei}@mrl-spl.ir

Abstract— Proper performance of biped robots in various indoor environments depends on reliable landmark detection and self-localization. Probabilistic approaches are among the most capable methods for providing a real-time and inclusive solution to biped robot localization. This paper focuses on a newly proposed odometry system implementing the kinematic model, a predictive estimator of the camera position, and particle filter methods such as Monte Carlo Localization (MCL) and Augmented MCL, using landmarks, lines, points, and optimized filtering parameters for robot state estimation. Moreover, kidnap scenarios, which cannot be handled by uni-modal Kalman filter-based techniques, are studied here for the Nao biped robots on the RoboCup Standard Platform League soccer field. Experimental data are employed to evaluate the effectiveness and performance of the proposed vision module and Augmented MCL scheme.

Keywords: particle filter; Monte Carlo localization; self-localization; biped robots

I. INTRODUCTION

Self-localization is an intricate task for biped robots on the RoboCup Standard Platform League (SPL) soccer field because of the nature and limitations of local vision, the small number of landmarks, and complicated motion characteristics. These limitations justify employing an adaptive head motion behavior and efficient methods for both perception and localization, so that all available lines and landmarks are used with lower noise. The Hough transform has been the most practical algorithm for extracting global features such as straight lines, circles, and ellipses, and has seen improvements in recent years [1-3], but it requires noticeable memory and processing time; its usage on the Nao platform is therefore computationally costly. Other line detection approaches were examined in [4] and [5] under the assumption that the line points extracted by the so-called "scan line" algorithm mostly belong to the field lines. This technique may result in undesirable, false selection of other line points when post-scan-line filtering skills such as "BanSector" and "NonLine Spots" are deficient. The drawback lies in the idealized assumption about line-point selection, which increases the method's sensitivity. The RANSAC algorithm [6-9] is executed in this research as an effective alternative. RANSAC is a sampling technique that generates candidate solutions using the minimum number of observations required to estimate the underlying model parameters. As noted in [6], unlike conventional sampling techniques that use as much of the data as

possible to obtain an initial solution and then proceed to remove outliers, RANSAC uses the smallest set possible and proceeds to enlarge it with consistent data points. Kalman filter-based approaches are extensively used both to generate velocities and to calculate object positions, as studied in [10-12]. These filters have proper calculation time and acceptable memory efficiency, but their disadvantages, such as the Gaussian density assumption in both measurement and state estimation and local tracking instead of global localization, lead to unacceptable position estimates, especially in disturbed environments with stochastic robot behavior. Another solution to biped robot navigation problems is the grid-based Markov localization method [13-15], which can represent arbitrarily complex probability densities, but the precision at which it represents the state has to be fixed in advance; furthermore, memory requirements and high computational costs are its main drawbacks. On the other hand, particle filters are appropriate for nonlinear and non-Gaussian problems because of their capability to handle a wide variety of intricate cases such as kidnapped robots [16, 17]. These filters suffer from a high computational load in real-time situations, which increases the estimation time of particle positions, but they can be modified to be applicable to nonlinear problems with reasonable speed, as described in [18]. Monte Carlo localization is an efficient and increasingly practiced method that is able to globally localize a robot by depicting multi-modal distributions [19, 20]. It has been utilized in several studies to determine a biped robot's position and orientation and is examined in this article for the Nao biped robots. The method requires both a motion and an observation model and has been studied and executed for wheeled and legged robots, as demonstrated in [21-24].
The motion model specifies the probability that a specific behavior moves the robot to a particular relative position, and the observation model specifies the probability of taking particular measurements at particular locations. Successive estimates of particle positions and orientations are acquired from perceived lines, edges, and fixed landmarks. Landmarks used in the measurement model include goal posts and the shapes formed by intersections of field lines, the center circle, and points on the circle. There are several works on object detection and vision-based self-localization for both wheeled and legged mobile robots [25-28], among which line-based localization is the most prominent [29, 30]

and is also employed in this article. This paper is divided into six sections. Section II describes the image processing routines, the calibration method that overcomes the distance error caused by faulty placement of the camera inside the robot's head, and the RANSAC line detection approach. Section III deals with the development of the odometry system and the kinematic equations of the Nao biped robots, alongside a camera position estimator that acts as a predictor to cope with undesired delays and joint errors. Section IV covers the implementation of particle filter (PF) methods such as MCL and Augmented MCL for biped robot navigation, with detailed mathematical description. Experimental results of the Augmented MCL are presented in Section V. Finally, Section VI provides conclusions and future work.


II. PERCEPTION METHOD

This section describes the employed perception algorithm and the landmark detection task for biped robots on an SPL field. Object recognition developed by the Mechatronics Research Laboratory (MRL) SPL perception group, which relies on the field having a set of distinct landmarks at known positions, is described in the MRL team description paper [31]. Since the goal posts are not always observable due to the limitations of the robot's field of view, other landmarks inside the field must be recognized, such as the so-called L-, T-, and X-shaped line intersections. The image processing loop starts by retrieving an image from the camera. The image data can be obtained in two ways, namely through the V4L2 API or through the Naoqi framework's libraries; here the former is used, which reduces dependency on Naoqi, the framework provided for the Nao by Aldebaran. Because of the robot's perspective view, the lower pixels of an image are closer together in metric space than the upper ones, so the lower pixels can be scanned with less density. Scanning roughly one fifth of the image pixels yields a satisfactory cycle time together with sufficient image data to extract the desired features from the field. This is attained by categorizing pixels based on their colors and checking green pixels with a skipping strategy, i.e., scanning rows at specified intervals. Thereafter, a binary search on the green color between the current pixel and the previously checked one selects the exact pixel representing the field border in that image column. The border position in each column and a convex hull over the border positions are used to specify the field borders. Afterward, a vertical "scan line" pass extracts white segments whose centers are used by the RANSAC algorithm to extract line segments.
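The skipped-scan-plus-binary-search border refinement described above can be sketched as follows. This is a minimal illustration, not the authors' code: the color thresholds and the `is_green` test are assumptions standing in for a calibrated color table.

```python
def is_green(image, x, y):
    """Placeholder color check (assumption): a real system would use
    a calibrated color lookup table instead of fixed thresholds."""
    r, g, b = image[y][x]
    return g > 100 and g > r + 30 and g > b + 30

def refine_border(image, col, y_prev, y_cur):
    """Binary-search the first green pixel in one image column,
    between the last non-green sample y_prev and the green sample y_cur."""
    lo, hi = y_prev, y_cur          # invariant: lo non-green, hi green
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_green(image, col, mid):
            hi = mid
        else:
            lo = mid
    return hi                        # first green row = field border
```

During the coarse scan, only every N-th row is classified; the binary search then recovers pixel-accurate borders at logarithmic cost per column.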
A. Error Reduction with Camera Position Calibration

This subsection focuses on camera calibration through correction of the camera position matrix. Fig. 1 demonstrates the effect of manual camera calibration on line perception performance. The misalignments of the perceived center circle and goal area lines in Fig. 1(a) are attributed to errors in the camera position and orientation matrices. The term "camera parameters" generally refers to two different sets: inherent (intrinsic) and external (extrinsic). Inherent parameters are essential camera characteristics such as lens

Figure 1. (a) Non-calibrated and (b) calibrated images of the perceived lines

distortion, focal length, etc., which can be determined easily by standard calibration routines as presented in [32], or may be given by the manufacturer. External parameters, on the other hand, express imposed constraints such as the real-time position of the head cameras relative to a specified reference point via transformation matrices. Biped walking characteristics and the tilt and pan motions of the head-mounted cameras cause undesirable uncertainty in the estimated relative positions of objects. Calibration of the external parameters such as tilt and pan is performed with precise kinematic chain extraction and coordinate-system transformations. Camera position errors are reduced in this work by manual regulation of the rotation matrices of the kinematic chains, which include the camera's tilt and roll angles as well as the body's tilt and roll angles. Equation (1) describes the projection of a field point with global coordinates onto the camera plane:

$$ s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_{inh}\; {}^{C}_{B}R\; M_{ext} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \qquad (1) $$

in which ${}^{C}_{B}R$ is a constant transform from the base to the camera reference frame; the inherent and external matrices $M$ are described as (2):

$$ M_{inh} = \begin{bmatrix} f/d_x & 0 & u_0 \\ 0 & f/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}; \qquad M_{ext} = {}^{B}_{C}T^{-1} \qquad (2) $$
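The projection in (1)-(2) can be sketched numerically. The intrinsic values below ($f$, $d_x$, $d_y$, $u_0$, $v_0$) are illustrative assumptions, not the Nao camera's actual calibration, and the extrinsic transform is passed in as a generic 4x4 homogeneous matrix.

```python
import numpy as np

# Illustrative intrinsic parameters (assumed, not the Nao's real values)
f, dx, dy, u0, v0 = 500.0, 1.0, 1.0, 320.0, 240.0
M_inh = np.array([[f / dx, 0.0, u0],
                  [0.0, f / dy, v0],
                  [0.0, 0.0, 1.0]])

def project(T_base_cam, p_world):
    """Project a 3D point (base frame) to pixel coordinates.
    T_base_cam: 4x4 homogeneous transform from base to camera frame."""
    p = np.append(p_world, 1.0)       # homogeneous world point
    pc = T_base_cam @ p               # point in camera coordinates
    uvw = M_inh @ pc[:3]              # apply intrinsics, cf. Eq. (1)
    return uvw[:2] / uvw[2]           # perspective divide by scale s
```

A point 2 m straight ahead of an identity-posed camera projects to the principal point (u0, v0), which is a quick sanity check on the matrices.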

where ${}^{B}_{C}T$ is the homogeneous transform from the base frame to the head camera. Two essential error sources, 1) disorientation of the mounted camera and 2) misalignment of the torso with the vertical axis due to indeterminate backlash in the drives, bring about positioning errors. The correction of camera misplacement is performed by simultaneously changing the above-mentioned angles, which must be optimized in two respects: the selection of the right orientations to regulate and the value of each orientation correction. As a consequence of implementing camera calibration in the presented study, false landmark and object position errors decrease significantly, as illustrated in Fig. 1(b). These errors should be studied precisely and removed by an automatic, reliable calibration skill, which is part of the authors' future work.

B. Landmark Detection

A general overview of the RANSAC line detection approach follows. RANSAC is a general parameter estimation approach designed to cope with a large proportion of outliers in the input data; here it uses the transformed white points to extract one line per call. The ratio of inlier points grows because one line is removed from the data on each RANSAC call. Furthermore, the minimum number of inliers must be known to calculate the number of iterations; this is obtained by assuming the inlier ratio $\epsilon$ to be one over the number of lines, $1/n$. A suitable number of iterations is calculated by (3), estimating the number of perceived lines in the worst case:

$$ k = \frac{\log(1-a)}{\log\!\left(1-\left(\tfrac{1}{n}\right)^{2}\right)} \qquad (3) $$

where $k$ is the number of iterations, $a$ is the accuracy (the desired probability of drawing at least one outlier-free sample) needed to separate the inliers, and $n$ is the number of lines in each RANSAC function call.
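A minimal sketch of one RANSAC call for line extraction, with the iteration count taken from Eq. (3) (two points per sample, inlier ratio assumed $1/n$). The tolerance and default parameters are illustrative assumptions, not values from the paper.

```python
import math
import random

def ransac_iterations(a, n):
    """Iteration count k from Eq. (3): two points define a line,
    and the inlier ratio is assumed to be 1/n for n expected lines."""
    return math.ceil(math.log(1 - a) / math.log(1 - (1.0 / n) ** 2))

def ransac_line(points, a=0.99, n=4, tol=2.0):
    """Fit one line to 2D points; returns the best point pair and its inliers."""
    best_inliers, best_pair = [], None
    for _ in range(ransac_iterations(a, n)):
        p1, p2 = random.sample(points, 2)
        dx, dy = p2[0] - p1[0], p2[1] - p1[1]
        norm = math.hypot(dx, dy)
        if norm == 0:
            continue  # degenerate sample: both points coincide
        # perpendicular distance of each point to the candidate line
        inliers = [p for p in points
                   if abs(dy * (p[0] - p1[0]) - dx * (p[1] - p1[1])) / norm <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers, best_pair = inliers, (p1, p2)
    return best_pair, best_inliers
```

In the perception pipeline, the inliers of each extracted line would be removed from the point set before the next call, so the effective inlier ratio of the remaining lines grows, as noted above.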

III. KINEMATIC MODELING AND POSE ESTIMATOR

Bipedal locomotion requires an accurate forward kinematics model to specify the desired center of mass (CoM) pose relative to the base and end-effector trajectories. The Nao robot has 21 degrees of freedom (DOF), including 6 DOF in each leg: ankle roll, ankle pitch, knee pitch, hip pitch, hip roll, and hip yaw-pitch. The yaw-pitch joints of the hips are physically coupled and driven by one servo motor. The DH frame assignment and related coordinate sequences are shown in Fig. 2, and the associated parameters are listed in Table I. The DH parameters for the right leg are the same as for the left, except for the sixth joint, whose angle offset is $\pi/4$. Kinematic and dynamic models of the robot are discussed in detail in [33] and [34].

TABLE I. LEFT LEG DH PARAMETERS

[Table I gives the DH parameters $(\alpha_i, a_i, d_i, \theta_i)$ of the left-leg joints $i = 1,\dots,6$: twist angles $\alpha_i \in \{\pm\pi/2, 0\}$, link lengths $a_i$ including the tibia and thigh lengths, $d_i = 0$, joint variables $\theta_2$ through $\theta_5$, an offset of $-3\pi/4$ on $\theta_6$, and a fixed base rotation $Rot_x(\pi/2)\,Rot_z(\pi/2)$.]

Figure 2. DH schematic of the legs for the Nao robots
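A link transform built from one row of Table I, and the chain composition used for the odometry transform, can be sketched as below. The DH parameter tuples are placeholders; the actual Nao link lengths and offsets would be substituted from Table I.

```python
import numpy as np

def dh_transform(alpha, a, d, theta):
    """Standard Denavit-Hartenberg link transform from frame i-1 to frame i."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0]])

def chain(dh_rows):
    """Compose link transforms along a kinematic chain, cf. Eq. (4)."""
    T = np.eye(4)
    for row in dh_rows:           # each row: (alpha_i, a_i, d_i, theta_i)
        T = T @ dh_transform(*row)
    return T
```

Multiplying the six leg transforms (plus the fixed base rotation and CoM translation) yields the homogeneous transform of Eq. (4).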

As shown in Fig. 2 and assumed in the following analysis, the base frame is located at the ankle joint and the hip joint is the location of the end effector. After determining the transform between B and E, the CoM pose is obtained simply with two fixed translations along the y and z directions. Equations (4) and (5) give the final homogeneous transform from the CoM to the base frame; they are used to determine the robot's odometry data, including position and orientation.

$$ {}^{B}_{CoM}T = {}^{B}_{0}T\,{}^{0}_{1}T\,{}^{1}_{2}T\,{}^{2}_{3}T\,{}^{3}_{4}T\,{}^{4}_{5}T\,{}^{5}_{6}T\,{}^{6}_{E}T\,{}^{E}_{CoM}T \qquad (4) $$

$$ {}^{B}_{CoM}T = \begin{bmatrix} r_{11} & r_{12} & r_{13} & x \\ r_{21} & r_{22} & r_{23} & y \\ r_{31} & r_{32} & r_{33} & z \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (5) $$

Computing the camera pose from the base frame involves 13 joints, 7 of which are pitch joints. In practice, in order to compensate for the time delay in reading these angles' values, a Kalman filter is employed as an estimator: the difference between the current and previous commanded angle is the estimator input, and the joint-angle sensor data provide the observation.
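The joint-angle estimator just described can be sketched as a scalar Kalman filter per joint, with the command delta as the control input and the sensed angle as the observation. The noise values here are illustrative assumptions, not the tuned values used on the robot.

```python
class JointAngleKF:
    """Scalar Kalman filter per joint: state = joint angle.
    Input u = difference between current and previous commanded angle;
    observation z = sensed joint angle. q and r are illustrative."""
    def __init__(self, q=1e-4, r=1e-2):
        self.x, self.p = 0.0, 1.0   # state estimate and its variance
        self.q, self.r = q, r       # process and measurement noise

    def predict(self, u):
        self.x += u                 # command delta drives the prediction
        self.p += self.q
        return self.x

    def update(self, z):
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x
```

Because the prediction runs on the commanded deltas, the estimate leads the delayed sensor readings, which is exactly the compensation the camera-pose chain needs.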

IV. MONTE-CARLO LOCALIZATION

Sections II and III laid out the practical aspects of perception and landmark detection, as well as the camera's kinematic chain. This section is devoted to the localization schemes implemented in this research. The localization procedure of this study is built around the Augmented MCL technique and subsequent modifications to it, particularly optimized re-sampling for kidnapping scenarios. A Kalman filter-based approach is fed with the last state estimate from the filter itself and with sensory information such as odometry; its filtered output provides relatively reliable location and orientation information, as well as speed estimates, which play a key role in biped robot self-localization. The robustness of KF methods is lower than that of grid-based Markov localization, but their accuracy is higher; a combined Markov-Kalman scheme is therefore a proper choice, as examined in [35]. Monte Carlo self-localization has become the most common choice in recent years for state estimation, especially for biped robots, and improvements on this

scheme result in computationally inexpensive navigation and a model that remains reliable in the presence of multi-modal distributions [36, 37]. It is a probabilistic algorithm that approximates a state and its variance by a set of samples, each consisting of one possible state and a weight representing the likelihood of that state. The method updates the density representation by the Monte Carlo scheme [38] and represents uncertainty by maintaining a set of samples drawn from the density instead of representing the probability density function itself. One enhancement of the general vision-based MCL algorithm, in the presence of errors produced by imperceptible landmarks, is to incorporate line observations into the probability updates by identifying lines as atomic entities instead of using individual line pixels, as comprehensively described in [39, 40]. Table II presents the details of the Augmented MCL algorithm employed in this paper. The final weight of a particle is the numerical product of the probabilities of the detected landmarks. The current particle set $X_{t-1}$ is propagated through the motion model according to the last performed action $u_t$, resulting in $\bar{X}_t$. A weight is then computed for each particle from the observation $z_t$. The parameters $w_{avg}$, $w_{slow}$, and $w_{fast}$ are estimated to make the algorithm adaptive to situations such as kidnapping. Re-sampling based on the particle distribution is done in lines 11 to 16. The positive parameters $\alpha_{slow}$ and $\alpha_{fast}$ must be tuned to achieve a satisfactory outcome. Re-sampling optimization is also a key factor for precise and fast localization and is under investigation in this research group.

TABLE II. AUGMENTED MCL METHOD

1:  Augmented_MCL(X_{t-1}, u_t, z_t, map):
2:      static w_slow, w_fast
3:      X̄_t = X_t = ∅
4:      for m = 1 to M do
5:          x_t[m] = sample_motion_model(u_t, x_{t-1}[m])
6:          w_t[m] = measurement_model(z_t, x_t[m], map)
7:          X̄_t = X̄_t + ⟨x_t[m], w_t[m]⟩
8:          w_avg = w_avg + w_t[m] / M
9:      endfor
10:     w_slow = w_slow + α_slow (w_avg − w_slow)
11:     w_fast = w_fast + α_fast (w_avg − w_fast)
12:     for m = 1 to M do
13:         with probability max(0.0, 1.0 − w_fast / w_slow) do
14:             add a random pose to X_t
15:         else
16:             draw i ∈ {1, …, M} with probability ∝ w_t[i] and add x_t[i] to X_t
17:         endwith
18:     endfor
19:     return X_t

Figure 3. Nao biped robots and the mounted colored pattern

Note that $\alpha_{fast}$ must be greater than $\alpha_{slow}$. In the measurement model, distance and angle errors are computed for each particle relative to its current pose, according to the landmark type. Since the goal posts are major orientation references and direction identifiers because of their unique color characteristics, a newly developed history model is utilized in the measurement: the last seen goal post is kept until a goal post is newly perceived or, in the case of zero odometry values, until the odometry data change. The output of the measurement function is a probability for each particle, computed as the product of the particle's probabilities from every detected landmark; the probabilities are drawn from a zero-mean normal probability density function. Different experiments have been carried out to confirm the applicability and precision of the approach. Pose extraction is performed in this research based on K-means clustering, after examination of related methods such as best particle, binning, and averaging presented in [36].
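The adaptive re-sampling step of Table II can be sketched as follows (after the Augmented MCL of Thrun et al. [20]). The particle representation, the `random_pose` generator, and the smoothing constants are illustrative assumptions; only the $w_{slow}$/$w_{fast}$ bookkeeping and the injection probability follow the algorithm above.

```python
import random

def augmented_mcl_resample(particles, weights, a_slow, a_fast,
                           w_slow, w_fast, random_pose):
    """Augmented MCL re-sampling: inject random particles with
    probability max(0, 1 - w_fast / w_slow). a_fast must exceed a_slow
    so that w_fast tracks sudden weight drops (e.g. kidnapping)."""
    w_avg = sum(weights) / len(weights)
    w_slow += a_slow * (w_avg - w_slow)      # long-term average weight
    w_fast += a_fast * (w_avg - w_fast)      # short-term average weight
    p_random = max(0.0, 1.0 - w_fast / w_slow)
    new_particles = []
    for _ in range(len(particles)):
        if random.random() < p_random:
            new_particles.append(random_pose())        # global recovery
        else:
            new_particles.append(random.choices(particles, weights)[0])
    return new_particles, w_slow, w_fast
```

While the robot is well localized, $w_{fast} \approx w_{slow}$ and no random particles are injected; a kidnapping collapses the weights, $w_{fast}$ drops faster than $w_{slow}$, and the filter repopulates part of the set with random poses.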


V. RESULTS AND DISCUSSION

Independent tests were carried out on the robot for three different trajectories to verify the proposed approach on both experimental and simulation data. Fig. 3 depicts a Nao biped robot with the mounted colored tracking pattern, which is used for ground-truth positioning of the robot in the field and for comparison with the perceived positions and orientations. The first set of analyses examined the performance of the proposed localization method for a polygon maneuver, which yields tangible pose data as illustrated in Fig. 4. These results are obtained with 70% of landmarks and 20% of goal posts deliberately missed, and with imposed errors of 40 cm and 10 degrees on landmark position and orientation and 10 cm and 5 degrees on point position and orientation. The positions estimated under these noise conditions show that the approach is dependable. The elliptic and circular shapes represent position variances in the horizontal and vertical directions. Fig. 5 presents the normal distance and orientation errors for the path shown in Fig. 4. As the fluctuations in Fig. 5 make clear, the large errors occur at the beginning of the motion, when the particles have not yet had time to concentrate at a point, and at sharp edges. For further analysis, a rectangular-like maneuver generated by the behavior control section is assigned to the robot, as shown in Fig. 6, in which odometry and Augmented MCL results are compared with the real robot positions to investigate the effect of odometry errors on the final pose.

Figure 4. Localization data for a direct path maneuver

Figure 8. Augmented MCL performance for the kidnapping scenario

A finding implied by Fig. 7 is that errors increase due to position divergence when the pose extraction relies purely on odometry, with a clear trend of particle concentration at turning points. The capability shown in Fig. 8 leads to prompt detection of kidnapping and moves the particles from the end of the left pathway to the new pose on the right trail. Fig. 9 shows that the maximum horizontal position error, up to 38 cm, occurs around the landing points of the kidnapped robot, owing to particle concentration. The larger orientation and vertical position errors are attributed to the kidnapping state, after which they return to practical error fluctuations averaging 11 degrees and 42 cm respectively.

Figure 5. Position and orientation errors of the proposed MCL approach

There is an acceptable consistency between the desired path and the augmented MCL localization output in Fig. 6.

Figure 6. Pose data comparison; Augmented MCL (red pluses), top camera localization (blue lines), and odometry data (black lines)

Figs. 7 and 8 depict the pose error variations of the perceived location by the Augmented MCL for the rectangular path and the performance of the Augmented MCL for kidnapped robots.

Figure 7. Localization errors for the rectangular pathway

Figure 9. Position and orientation errors for a kidnapped robot

Improper head motion has been identified as a major contributing factor, responsible for position errors of up to 32% through reduced landmark detection; this behavior is under revision.

VI. CONCLUSION

The presented approach was tested on the Nao biped robot and shows reasonable performance, with acceptable localization deviations, during a game test. The present findings also appear consistent with other research on PF and KF based approaches that considers the kidnapped robot scenario. Two main ideas are currently being adopted into the MCL method on biped robots in this research: one is exploiting the particular directions of landmarks such as the L type, which facilitates fast localization and reduces mismatching faults; the other is a history implementation of the significant landmarks such as the goals. Moreover, the relation between the produced errors and localization that depends purely on odometry is clearly supported by the current findings. Further work is needed to establish the correlation between the employed Augmented MCL and the noise model in various field zones, with adaptive head motion for precise and smooth localization, which is the authors' next research topic. Fluctuations of the position and orientation estimates could also be removed by appropriate filtering after zone finalization and a reliable pose extraction, such as a hybrid particle history, which is under study.

ACKNOWLEDGMENT

The authors gratefully acknowledge the technical support of the Mechatronics Research Lab. and the MRL-SPL team members.

REFERENCES

[1] M. Nakanishi and T. Ogura, "Real-time line extraction using a highly parallel Hough transform board," in Proc. Int. Conf. on Image Processing, Santa Barbara, CA, USA, 1997, vol. 2, pp. 582-585.
[2] T. T. Nguyen, X. D. Pham, and J. W. Jeon, "An improvement of the standard Hough transform to detect line segments," in IEEE Int. Conf. on Industrial Technology, Chengdu, 2008, pp. 1-6.
[3] T. T. Nguyen et al., "A test framework for the accuracy of line detection by Hough transforms," in 6th IEEE Int. Conf. on Industrial Informatics, Daejeon, 2008, pp. 1528-1533.
[4] T. Röfer et al., "B-Human team report and code release," University of Bremen, Bremen, Germany, Tech. Rep., 2011.
[5] S. Czarnetzki et al., "Nao Devils Dortmund team report 2010," Technical University Dortmund, Dortmund, Germany, 2010.
[6] M. A. Fischler and R. C. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381-395, 1981.
[7] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision. Cambridge: University Press, 2001.
[8] P. Torr and C. Davidson, "IMPSAC: A synthesis of importance sampling and random sample consensus to effect multi-scale image matching for small and wide baselines," in European Conf. on Computer Vision, pp. 819-833, 2000.
[9] P. Torr and A. Zisserman, "MLESAC: A new robust estimator with application to estimating image geometry," Computer Vision and Image Understanding, vol. 78, no. 1, pp. 138-156, 2000.
[10] P. S. Maybeck, "The Kalman filter: An introduction to concepts," in I. Cox and G. Wilfong, Eds., Autonomous Robot Vehicles, Springer-Verlag, pp. 194-204, 1990.
[11] J. Leonard and H. Durrant-Whyte, "Mobile robot localization by tracking geometric beacons," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 376-382, 1991.
[12] J. S. Gutmann, T. Weigel, and B. Nebel, "A fast, accurate, and robust method for self-localization in polygonal environments using laser range finders," Advanced Robotics Journal, vol. 14, no. 8, pp. 651-668, 2001.
[13] A. R. Cassandra, L. P. Kaelbling, and J. A. Kurien, "Acting under uncertainty: Discrete Bayesian models for mobile robot navigation," in Proc. IEEE Int. Conf. on Intelligent Robots and Systems, 1996.
[14] W. Burgard, D. Fox, D. Hennig, and T. Schmidt, "Estimating the absolute position of a mobile robot using position probability grids," in Proc. 14th National Conf. on Artificial Intelligence, 1996, pp. 896-901.
[15] D. Fox, W. Burgard, and S. Thrun, "Markov localization for mobile robots in dynamic environments," Journal of Artificial Intelligence Research, vol. 11, 1999.
[16] B. Ristić, S. Arulampalam, and N. Gordon, Beyond the Kalman Filter: Particle Filters for Tracking Applications. Norwell, MA: Artech House, 2004.
[17] P. M. Djurić, J. H. Kotecha, J. Zhang, Y. Huang, T. Ghirmai, M. F. Bugallo, and J. Míguez, "Particle filtering," IEEE Signal Processing Magazine, vol. 20, no. 5, pp. 19-38, 2003.
[18] J. Carpenter, P. Clifford, and P. Fearnhead, "An improved particle filter for nonlinear problems," in Proc. Inst. Elect. Eng., Radar Sonar Navigation, vol. 146, pp. 2-7, 1999.
[19] F. Dellaert, D. Fox, W. Burgard, and S. Thrun, "Monte Carlo localization for mobile robots," in Proc. IEEE Int. Conf. on Robotics and Automation, 1999.
[20] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. MIT Press, 2005.
[21] S. Thrun, D. Fox, W. Burgard, and F. Dellaert, "Robust Monte Carlo localization for mobile robots," Journal of Artificial Intelligence, 2001.
[22] T. Röfer and M. Jüngel, "Vision-based fast and reactive Monte Carlo localization," in Proc. IEEE Int. Conf. on Robotics and Automation, 2003.
[23] C. Kwok and D. Fox, "Reinforcement learning for sensing strategies," in Proc. IEEE Int. Conf. on Intelligent Robots and Systems, 2004.
[24] M. Sridharan, G. Kuhlmann, and P. Stone, "Practical vision-based Monte Carlo localization on a legged robot," in Proc. IEEE Int. Conf. on Robotics and Automation, 2005.
[25] T. Schmitt, R. Hanek, M. Beetz, S. Buck, and B. Radig, "Cooperative probabilistic state estimation for vision-based autonomous mobile robots," IEEE Transactions on Robotics and Automation, vol. 18, no. 5, pp. 670-684, 2002.
[26] D. Stronger and P. Stone, "A comparison of two approaches for vision and self-localization on a mobile robot," in Proc. IEEE Int. Conf. on Robotics and Automation, pp. 3915-3920, 2007.
[27] J. S. Gutmann, W. Burgard, D. Fox, and K. Konolige, "An experimental comparison of localization methods," in Proc. IEEE Int. Conf. on Intelligent Robots and Systems, 1998.
[28] J.-S. Gutmann and D. Fox, "An experimental comparison of localization methods continued," in Proc. IEEE Int. Conf. on Intelligent Robots and Systems, 2002.
[29] A. Bais, R. Sablatnig, and G. Novak, "Line-based landmark recognition for self-localization of soccer robots," in Proc. IEEE Int. Conf. on Emerging Technologies, pp. 132-137, 2005.
[30] T. Röfer, T. Laue, and D. Thomas, "Particle-filter-based self-localization using landmarks and directed lines," in RoboCup, ser. Lecture Notes in Computer Science, A. Bredenfeld, A. Jacoff, I. Noda, and Y. Takahashi, Eds., vol. 4020, pp. 608-615, Springer, 2005.
[31] E. Hashemi, O. Amir Ghiasvand, M. Ghaffari J., M. Lashgarian, H. RassamFard, and M. Shafiei, "MRL team description 2011 standard platform league," in RoboCup 2011, TDPs, Turkey, 2011.
[32] Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, 2000.
[33] M. Ghaffari Jadidi, E. Hashemi, M. A. Zakeri Harandi, and H. Sadjadian, "Kinematic modeling improvement and trajectory planning of the Nao biped robot," in Proc. Joint Int. Conf. on Multibody System Dynamics, Finland, 2010.
[34] E. Hashemi and M. Ghaffari Jadidi, "Dynamic modeling and control study of the Nao biped robot with improved trajectory planning," in Proc. Int. Conf. on Advanced Computational Engineering and Experimenting, France, 2010.
[35] J. S. Gutmann, "Markov-Kalman localization for mobile robots," in Proc. Int. Conf. on Pattern Recognition, 2002.
[36] T. Laue and T. Röfer, "Pose extraction from sample sets in robot self-localization: A comparison and a novel approach," in Proc. 4th European Conf. on Mobile Robots (ECMR'09), I. Petrović and A. J. Lilienthal, Eds., Mlini/Dubrovnik, Croatia, pp. 283-288, 2009.
[37] T. Laue and T. Röfer, "Particle filter-based state estimation in a competitive and uncertain environment," in Proc. 6th Int. Workshop on Embedded Systems, Finland, 2007.
[38] J. E. Handschin, "Monte Carlo techniques for prediction and filtering of non-linear stochastic processes," Automatica, vol. 6, pp. 555-563, 1970.
[39] T. Hester and P. Stone, "Negative information and line observations for Monte Carlo localization," in Proc. IEEE Int. Conf. on Robotics and Automation, 2008.
[40] J. Hoffmann, M. Spranger, D. Göhring, and M. Jüngel, "Exploiting the unexpected: Negative evidence modeling and proprioceptive motion modeling for improved Markov localization," in RoboCup, pp. 24-35, 2005.


Kalman Filter for Mobile Robot Localization
May 15, 2014 - Algorithm - This is all you need to get it done! while true do. // Reading robot's pose. PoseR = GET[Pose.x; Pose.y; Pose.th]. // Prediction step. ¯Σt = (Gt ∗ Σt−1 ∗ GT t )+(Vt ∗ Σ∆t ∗ V T t ) + Rt. // Update step featu

Probabilistic Multiple Cue Integration for Particle Filter ...
School of Computer Science, University of Adelaide, Adelaide, SA 5005, ..... in this procedure, no additional heavy computation is required to calculate the.

INTERACTING PARTICLE-BASED MODEL FOR ...
Our starting point for addressing properties of missing data is statistical and simulation model of missing data. The model takes into account not only patterns and frequencies of missing data in each stream, but also the mutual cross- correlations b

Image-Based Localization Using Context - Semantic Scholar
[1] Michael Donoser and Dieter Schmalstieg. Discriminative feature-to-point matching in image-based localization. [2] Ben Glocker, Jamie Shotton, Antonio Criminisi, and Shahram. Izadi. Real-time rgb-d camera relocalization via randomized ferns for ke

Microtubule-based localization of a synaptic calcium - Development
convenient tool with which to analyze the function of microtubules in biological .... visualization of protein subcellular localization and AWC asymmetry in ... 138 tir-1(tm3036lf); odr-3p::tir-1::GFP r. –. 0. 100. 0. 147 odr-3p::nsy-1(gf), L1 s. â

Hierarchical Dynamic Neighborhood Based Particle ...
Abstract— Particle Swarm Optimization (PSO) is arguably one of the most popular nature-inspired algorithms for real parameter optimization at present. In this article, we introduce a new variant of PSO referred to as Hierarchical D-LPSO (Dynamic. L

Particle-based Viscoelastic Fluid Simulation
and can animate splashing behavior at interactive framerates. Categories and ..... We can visualize how the combined effect of pressure and near-pressure can ...

Microtubule-based localization of a synaptic calcium - Semantic Scholar
NSY-5 gap junction network is required for the induction of AWC asymmetry (Chuang et al., 2007). Once AWC .... with a speed of seven frames per second and an exposure time of 140 mseconds. Movies were analyzed using .... To test directly the effect o

WhyCon: An Efficent, Marker-based Localization ... - University of Lincoln
landing of the UAV's on a slowly moving UGV [15]. The system was also used outside of the aerial robotics domain, e.g., to evaluate the accuracy of ground robot.

Nao se Apega, Nao - Isabela Freitas.pdf
Whoops! There was a problem loading more pages. Retrying... Nao se Apega, Nao - Isabela Freitas.pdf. Nao se Apega, Nao - Isabela Freitas.pdf. Open. Extract.

Fall Prediction of Legged Robots Based on Energy ...
Abstract—In this paper, we propose an Energy based Fall. Prediction (EFP) which observes the real-time balance status of a humanoid robot during standing.

Image-Based Localization Using Context (PDF Download Available)
the search space. We propose to create a new image-based lo-. calization approach based on reducing the search space by using. global descriptors to find candidate keyframes in the database then. search against the 3D points that are only seen from

fast wavelet-based single-particle reconstruction in cryo ...
The second idea allows for a computationally efficient im- plementation of the reconstruction procedure, using .... We will use the following definition for the Fourier transform of a D-dimensional function f(x) = f(x1,...,xD): ... the above definiti