Simultaneous Topological Map Prediction and Moving Object Trajectory Prediction in Unknown Environments

Shu Yun Chung, Han Pang Huang, Member, IEEE

Manuscript received March 1, 2008. This work is partially supported by the Industrial Development Bureau, Ministry of Economic Affairs of R.O.C. under grant 96-EC-17-A-04-S1-054. Shu Yun Chung is with the Department of Mechanical Engineering, National Taiwan University, Taipei, Taiwan (phone: 886-2-33664478; fax: 886-2-23676064; e-mail: [email protected]). Han Pang Huang is a professor in the Department of Mechanical Engineering, National Taiwan University, Taipei, Taiwan (e-mail: [email protected]). He is currently the chairman of the Department of Mechanical Engineering and the director of the Robotics Laboratory.

Abstract— To achieve a fully autonomous mobile robot in an unknown environment, a mapping and localization technique is required. In unknown dynamic environments, SLAMMOT (Simultaneous Localization, Mapping and Moving Object Tracking) has recently attracted extensive attention. In this paper, we extend the SLAMMOT problem to simultaneous map prediction and moving object trajectory prediction. The robot not only passively collects data and executes SLAMMOT, but also actively predicts the future scene. A recursive Bayesian formulation of SLAMMOT with scene prediction is derived for real-time operation. Preliminary results validate the proposed idea.

Index Terms— SLAM, SLAMMOT, Topological Map, Moving Object Tracking, Mobile Robot.

I. INTRODUCTION

Scene understanding is one of the most important capabilities of an autonomous robot. Accordingly, in the last two decades SLAM has become one of the most popular topics in the mobile robot field. Since the major concern of SLAM is decreasing the uncertainty of the map and the robot pose, most works have discussed SLAM in static environments, and information about moving objects was usually ignored or filtered out. The static map and the moving objects were treated as completely independent in traditional SLAM methods. However, a populated environment is usually constructed for certain purposes. For example, since a road is built for public transportation, its size reflects vehicle size and traffic flow; conversely, the motion of vehicles follows the road direction. Hence we believe that there are still "connections" between moving objects and the static map at a certain level.

Recently, some advanced SLAM research has been proposed. Wang [8] presented the concept of SLAMMOT (Simultaneous Localization, Mapping and Moving Object Tracking), which not only solves the SLAM problem but also estimates and tracks the motion of moving objects. Chang [2] demonstrated prediction-based SLAM (P-SLAM), which predicts the structure inside an unexplored region; the predicted map can be utilized as prior information in the Bayesian formulation and enhances the accuracy of SLAM.

This paper extends the concepts of SLAMMOT and P-SLAM to simultaneous map prediction and moving object trajectory prediction. By extracting the map structure, the robot can predict the motion of moving objects; conversely, observing the motion of moving objects helps to predict the map structure. This paper is organized as follows. Section II discusses the benefits of predicting the map and the moving object trajectories "simultaneously". Section III presents the methodologies of scene prediction. Section IV derives the recursive Bayesian formulation of SLAMMOT with map prediction. Section V presents preliminary experiments that validate the idea. Finally, conclusions are summarized in Section VI.

II. PROBLEM STATEMENT

Prediction is not exclusive to robotics or machine learning; people make predictions in their daily lives, which helps them avoid emergencies or accomplish work faster. The traditional SLAM problem focuses only on estimating the robot pose and the map by sensing the environment. If the information extracted from the SLAM module can be utilized to predict the unexplored area, the predicted map becomes useful information for robot motion planning and exploration. Moreover, the environment is often full of walking people, moving cars, and other moving objects; prediction can drastically decrease the probability of collision. The main reason to predict the map and the moving object trajectories "simultaneously" is that most of the time moving objects follow the road direction. Thus the robot is able to predict the trajectories of moving objects from the map structure; conversely, by observing the motion of moving objects, the map structure is also predictable. Previous research has demonstrated that estimating the motion of moving objects and the road curve simultaneously achieves better performance than estimating them separately [6]. Compared with previous SLAM research, SLAMMOT with scene prediction is more general: it not only keeps the characteristics of SLAMMOT and P-SLAM, but also focuses on estimating future information.

III. SCENE PREDICTION

A. Topological Map Prediction

In this paper, we adopt the GVG (Generalized Voronoi Graph) [4] as the topological map. Since the GVG is extracted from the environment structure, it is appropriate for predicting the map structure. During the exploration period, the topological map can be represented by three kinds of structures: incomplete road skeletons, complete road skeletons, and potential gateways. The following sections describe the structure prediction and the hypothesis modeling.

1) Road Map Prediction & Intersection Prediction

Fig. 1(a) shows an incomplete road skeleton and a complete road skeleton. One way to predict the unexplored area is to find the roadmap pattern in the explored area. Several previous works [5][6] have discussed modeling of the road shape. For a statistical model, we adopt weighted linear regression [1] to model the incomplete road skeleton, with the skeleton nodes serving as the data points. After the road modeling process, an intersection can be detected by checking whether two fitted road lines cross.
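As an illustration, the following Python sketch fits a weighted linear road model to GVG skeleton nodes and intersects two fitted roads. The weighting scheme, variance estimate, and function names are assumptions for illustration, not the exact implementation used in the paper.

import numpy as np

def fit_weighted_line(points, weights):
    """Weighted least-squares fit y = a*x + b to skeleton nodes.

    Returns slope a, intercept b, and a slope-variance estimate, which can
    serve as the direction uncertainty (sigma) of the road model.
    """
    x, y = points[:, 0], points[:, 1]
    w = weights / np.sum(weights)
    xm, ym = np.sum(w * x), np.sum(w * y)
    sxx = np.sum(w * (x - xm) ** 2)
    sxy = np.sum(w * (x - xm) * (y - ym))
    a = sxy / sxx
    b = ym - a * xm
    # Slope variance from the weighted residuals (illustrative estimate).
    residuals = y - (a * x + b)
    slope_var = np.sum(w * residuals ** 2) / sxx
    return a, b, slope_var

def predict_intersection(line1, line2):
    """Intersect two road models y = a*x + b; returns None if (nearly) parallel."""
    a1, b1, _ = line1
    a2, b2, _ = line2
    if abs(a1 - a2) < 1e-6:
        return None
    x = (b2 - b1) / (a1 - a2)
    return np.array([x, a1 * x + b1])

In practice, the weights could decay with distance from the frontier so that recently observed skeleton nodes dominate the fit; this is only one possible choice.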

2) Gateway Prediction

To efficiently predict the locations of potential gateways in a partially known environment, we utilize the concept of frontiers [9], a well-known motion strategy indicator in exploration tasks. A frontier is the boundary between explored space and unexplored space. The extracted frontiers are clustered into several groups. The gateway likelihood is modeled by the size of the gateway with a prior Gaussian distribution N(μ, σ), where the mean μ and variance σ can be set from prior knowledge of the environment. Once the gateway likelihood is higher than a threshold, the frontier group is marked as a potential gateway. Moreover, in this paper we assume that pedestrians always walk on the road. When a pedestrian walks in or out of a potential gateway, a hypothesis of a predicted road model is generated. To decrease the number of false detections, no gateway hypothesis is generated when the unexplored area surrounding the robot is too large.

3) Hypothesis Modeling

Once an intersection is detected, the corresponding hypothesis is generated. An intersection hypothesis is represented by two cases: in the first case the intersection consists of two road models; in the other it consists of one road model and one predicted road model. The intersection of two predicted road models does not create a hypothesis because of its large uncertainty. The probability modeling of an intersection hypothesis is introduced as follows.

a) Two Road Models

Two kinds of factors are utilized to represent the probability of a hypothesis. One is the distance weighting ω_i^d, which depends on the relative distance between the individual GVG meeting point and the predicted location of the intersection (see Fig. 1(b)). The other is the direction weighting ω_i^σ, which involves the slope variances of the road models estimated by weighted linear regression. The models of ω_i^d and ω_i^σ are given in eqs. (1)-(2):

D_j = 0,                if d_j^a ≤ d_j^b
D_j = d_j^a − d_j^b,    if d_j^a > d_j^b,     j = 1, 2        (1)

ω_i^d = exp(−c_1 (D_1 + D_2)),   ω_i^σ = exp(−c_2 (σ_1 · σ_2))        (2)

where c_1, c_2 are positive constants and σ_1, σ_2 are the slope variances of the two road models. The intersection hypothesis is modeled by eq. (3), where N is the number of hypotheses and ω_i is the weighting of hypothesis i:

ω_i = (ω_i^d · ω_i^σ) / Σ_{i=1}^{N} (ω_i^d · ω_i^σ)        (3)

b) One Road Model and One Predicted Road Model

Since a predicted road model does not contain enough GVG skeleton information to model the road uncertainty, the gateway likelihood is used instead. The parameters D_j and σ_j are replaced by D_p and σ_p (see eq. (4)); the weighting of the hypothesis still follows eq. (3). Here l(k) is the likelihood of a potential gateway at time step k:

D_p, σ_p ∝ l(k)^{-1}        (4)

Fig. 1 (a) Incomplete road skeleton and complete road skeleton (b) Intersection hypothesis modeling
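A minimal sketch of the hypothesis weighting in eqs. (1)-(3) is given below, assuming each detected intersection provides the distances d_j^a, d_j^b and the slope variances of its two road models; the constants c1, c2 and the data layout are illustrative assumptions.

import numpy as np

def hypothesis_weights(hypotheses, c1=1.0, c2=1.0):
    """Normalized intersection-hypothesis weights, following eqs. (1)-(3).

    Each hypothesis is a dict with:
      'd_a', 'd_b': length-2 arrays of distances for the two road models (eq. 1)
      'sigma':      length-2 array of slope variances of the two road models (eq. 2)
    """
    raw = []
    for h in hypotheses:
        d_a, d_b = np.asarray(h['d_a']), np.asarray(h['d_b'])
        # Eq. (1): D_j = d_j^a - d_j^b when positive, otherwise 0.
        D = np.maximum(d_a - d_b, 0.0)
        # Eq. (2): distance weighting and direction weighting.
        w_d = np.exp(-c1 * (D[0] + D[1]))
        w_s = np.exp(-c2 * (h['sigma'][0] * h['sigma'][1]))
        raw.append(w_d * w_s)
    raw = np.asarray(raw)
    # Eq. (3): normalize over the N hypotheses.
    return raw / np.sum(raw)

For an intersection involving a predicted road model, the same routine applies after substituting D_p and σ_p derived from the gateway likelihood, as in eq. (4).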

B. Moving Object Detection, Tracking, and Prediction

In this section, we present a more generalized tracking method. To detect moving objects with a laser range finder, an occupancy grid map is built incrementally by the robot. Once a laser measurement falls into grid cells that were previously unoccupied, it is regarded as a measurement from a moving object; a short sketch of this detection step is given below. The prediction module is divided into short term and long term prediction. Short term prediction follows a general motion model such as the constant velocity model or the constant acceleration model, while goal-directed trajectory prediction is utilized for long term prediction. Moreover, the structure of MHT (Multiple Hypothesis Tracking) is used to achieve robust data association. For brevity, a detailed discussion of MHT and short term prediction is omitted; the long term prediction and the generalized tracking formulation are introduced as follows.
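The following Python fragment sketches the detection rule described above: a scan endpoint that lands in a cell previously observed as free is flagged as a moving-object measurement. The grid representation, threshold, and helper names are assumptions for illustration only.

import numpy as np

FREE_THRESHOLD = 0.2   # cells with occupancy probability below this were observed free

def detect_moving_points(scan_points, grid, world_to_cell):
    """Return the scan endpoints that hit previously-free cells of the occupancy grid.

    scan_points   : (N, 2) array of laser endpoints in world coordinates
    grid          : 2-D array of occupancy probabilities built so far
    world_to_cell : function mapping a world point to integer grid indices
    """
    moving = []
    for p in scan_points:
        i, j = world_to_cell(p)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            if grid[i, j] < FREE_THRESHOLD:
                # The cell was previously free, so this return is attributed
                # to a moving object rather than to the static map.
                moving.append(p)
    return np.asarray(moving)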

1) Long Term Trajectory Prediction

First, several potential goals are extracted from the GVG skeleton and the frontier grid. The robot then estimates the goal weightings recursively by observing the motion of the moving object. An A* planner is utilized to predict the long term trajectory from the current position of the moving object to each goal. Finally, the future position of the moving object at different time steps is represented in the occupancy grid map.

a) Goal Extraction

Since pedestrians essentially attempt to follow the road, the potential goals can be extracted from the environment skeleton. To search for potential goals efficiently, we define a perception circle that surrounds the robot at a certain distance. The potential goals are extracted from the crossings of the GVG and the perception circle; the searching directions are shown as the green arrows in Fig. 2.

b) Goal Weighting Inheritance and Propagation

The locations of the potential goals change as the robot moves in the environment; in other words, the current goals should inherit the weightings of their parents. For instance, in Fig. 3(a) the child goal inherits the weighting of two parent goals as the robot moves toward the left side. The inheritance of weighting is given by eq. (5):

G_{k+1}^1 = G_k^1 + G_k^2        (5)

On the contrary, in Fig. 3(b) the parent goal propagates its weighting to its two child goals as the robot moves toward the right side. The weighting propagates as eq. (6):

G_{k+1}^1 = G_{k+1}^2 = G_k^1 / 2        (6)
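A minimal sketch of the inheritance and propagation rules in eqs. (5)-(6); the goal and weight bookkeeping is an illustrative assumption.

def merge_goals(parent_weights):
    """Eq. (5): a child goal inherits the summed weighting of its parent goals."""
    return sum(parent_weights)

def split_goal(parent_weight, n_children=2):
    """Eq. (6): a parent goal propagates its weighting evenly to its child goals."""
    return [parent_weight / n_children] * n_children

# Example: goals with weights 0.6 and 0.3 merge into one goal of weight 0.9;
# a goal of weight 0.8 that splits in two yields two goals of weight 0.4 each.
merged = merge_goals([0.6, 0.3])
children = split_goal(0.8)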

c) Goal Weighting Update

We assume here that a pedestrian is always directed by only one goal. Once a moving object is detected, the robot collects observations of the moving object and estimates the weightings of the potential goals sequentially. The goal weighting formulation is discussed later.

d) Path Planning and Probability Modeling

Instead of predicting precise trajectories, we model the predicted positions of moving objects as probability distributions in the time domain.

First, an A* planner queries trajectories from the current position of the moving object to the individual goals. The probability distribution of the moving object position is modeled by a mixture of Gaussian distributions; the means and variances depend on the velocity and steering of the moving object and are estimated from the observed data. Finally, the probability configuration-time space is produced by combining the individual Gaussian models with the goal weightings.

2) Formulation of the Generalized Tracking & Prediction

The tracking problem can be treated as estimating the posterior p(o_k | Z_k), where o_k is the state of a moving object at time step k and Z_k is the set of sequential observations. In this paper, observations at different time steps are viewed as independent. The generalized tracking problem can be represented by the DBNs (Dynamic Bayesian Networks) shown in Fig. 4. According to total probability and Bayes' rule, the posterior can be factorized as in eq. (7), where G_k^i is a potential goal and s_k^j a motion model of the moving object:

p(o_k | Z_k) = Σ_{i=1}^{m} Σ_{j=1}^{n} p(o_k | G_k^i, s_k^j, Z_k) p(G_k^i | Z_k) p(s_k^j | Z_k)
            ∝ Σ_{i=1}^{m} Σ_{j=1}^{n} p(z_k | G_k^i, s_k^j, o_k) p(o_k | G_k^i, s_k^j, Z_{k-1}) p(G_k^i | Z_k) p(s_k^j | Z_k)        (7)

where the first two factors in the second line are the update and prediction terms, the third term p(G_k^i | Z_k) is the goal weighting, and the last term p(s_k^j | Z_k) is the motion model weighting. The corresponding Bayesian factorization of the goal weighting is given in eq. (8):

p(G_k^i | Z_k) ∝ p(z_k | G_k^i, Z_{k-1}) p(G_k^i | Z_{k-1})        (8)

In a similar way, the motion model weighting p(s_k^j | Z_k) can be factorized as eq. (9):

p(s_k^j | Z_k) ∝ p(z_k | s_k^j, Z_{k-1}) p(s_k^j | Z_{k-1})        (9)
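A small sketch of the recursive weight updates in eqs. (8)-(9), assuming a measurement likelihood value is available for each goal or motion model hypothesis; the likelihood values and names are illustrative assumptions.

import numpy as np

def update_weights(prior_weights, likelihoods):
    """Recursive Bayesian weight update, as in eqs. (8)-(9).

    prior_weights : p(G_k^i | Z_{k-1}) (or p(s_k^j | Z_{k-1})) for each hypothesis
    likelihoods   : p(z_k | G_k^i, Z_{k-1}) (or p(z_k | s_k^j, Z_{k-1}))
    Returns the normalized posterior weights p(. | Z_k).
    """
    posterior = np.asarray(prior_weights) * np.asarray(likelihoods)
    return posterior / np.sum(posterior)

# Example: three goals with a uniform prior; the new observation fits the second goal best.
goal_weights = update_weights([1/3, 1/3, 1/3], likelihoods=[0.1, 0.7, 0.3])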

In the prediction process, we split the problem into short term and long term prediction. Short term prediction forecasts the next state of the moving object, while in long term prediction the future motion is directed by the goal. Considering the uncertainty of prediction, the motion after n time steps is modeled as a superposition of the individual goal models with different weights. The formulations of short term and long term prediction are given in eqs. (10) and (11):

p(o_{k+1} | o_k) = Σ_{j=1}^{n} p(o_{k+1} | s_k^j, o_k) p(s_k^j | o_k)        (10)

p(o_{k+n} | o_k) = Σ_{i=1}^{m} p(o_{k+n} | G_k^i, o_k) p(G_k^i | o_k)        (11)

where the first factor in each sum is the prediction term and the second factor is the motion model weighting (eq. (10)) or the goal weighting (eq. (11)).

Fig. 2 The searching directions of potential goals
Fig. 3 (a) Goal weighting inheritance (b) Goal weighting propagation
Fig. 4 DBNs for the generalized tracking problem
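To illustrate eq. (11), the sketch below mixes goal-conditioned predictions (here, positions along planned paths toward each goal) using the current goal weights; the path representation, constant speed, and Gaussian spread are assumptions for illustration, not the paper's exact model.

import numpy as np

def long_term_prediction(goal_paths, goal_weights, speed, n_steps, dt=1.0, sigma=0.5):
    """Eq. (11): weighted superposition of goal-conditioned predictions.

    goal_paths   : list of (L_i, 2) arrays, a planned path (e.g., from A*) per goal
    goal_weights : p(G_k^i | o_k) for each goal (sums to 1)
    Returns a list of (mean, covariance, weight) Gaussian components describing
    the predicted position of the moving object after n_steps.
    """
    components = []
    travel = speed * n_steps * dt
    for path, w in zip(goal_paths, goal_weights):
        # Walk along the path until the travelled distance is reached.
        seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
        cum = np.concatenate(([0.0], np.cumsum(seg)))
        idx = np.searchsorted(cum, min(travel, cum[-1]))
        mean = path[min(idx, len(path) - 1)]
        # Uncertainty grows with the prediction horizon (illustrative model).
        cov = (sigma * n_steps) ** 2 * np.eye(2)
        components.append((mean, cov, w))
    return components

Rasterizing these Gaussian components onto the occupancy grid at each predicted time step yields the probability configuration-time (C-T) space described above.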

IV. FORMULATION OF SCENE PREDICTION

The proposed SLAMMOT with scene prediction consists of several main modules: the SLAMMOT module, the MHT of the topological map, and the MHT of moving object tracking. The complete architecture is shown in Fig. 5.

Fig. 5 The architecture of SLAMMOT with scene prediction

A. Bayesian Formula of SLAM

We first review the Bayesian formulation of the SLAM algorithm; the SLAMMOT with scene prediction algorithm is based on these formulations and is explained in the next section. The complete trajectory of the robot, consisting of the robot pose at every time step, is denoted X_k, and the control input at time step k is written as u_k. The sets of all poses and control inputs are defined as

X_k ≜ {x_0, x_1, ..., x_k},   U_k ≜ {u_1, ..., u_k}        (12)

According to Bayes' rule and the Markov assumption, the SLAM problem can be modeled as the recursive Bayesian formula in eq. (13); the detailed derivation can be found in [7]. Here M denotes the environment map:

p(x_k, M | Z_k, U_k) ∝ p(z_k | x_k, M) ∫ p(x_k | x_{k-1}, u_k) p(x_{k-1}, M | Z_{k-1}, U_{k-1}) dx_{k-1}        (13)

where the left side is the posterior at time k, the first factor on the right is the update term, and the integral is the prediction from the posterior at time k-1.

B. SLAMMOT with Topological Map Prediction

In this section, the SLAMMOT algorithm is combined with topological map prediction. The DBNs of SLAMMOT with map prediction are shown in Fig. 6. A new topological hypothesis at time step k is denoted h_k, and the set of sequential hypotheses is expressed as H_k:

H_k ≜ {h_0, h_1, ..., h_k}        (14)

The posterior probability of SLAM with moving object tracking and map prediction can be modeled as

p(x_k, M, h_k, o_k | Z_k, U_k)        (15)

where o_k is the moving object state at time k. By Bayes' rule, the most recent measurement is factored out as

p(x_k, M, h_k, o_k | Z_k, U_k) ∝ p(z_k | x_k, M, h_k, o_k) p(x_k, M, h_k, o_k | Z_{k-1}, U_k)        (16)

Now we decompose the current observation z_k into three terms, z_k = z_k^u + z_k^e + z_k^o, where z_k^u is the observation from the unexplored static environment, z_k^e is the observation from the explored static environment, and z_k^o is the observation from moving objects. Moreover, we assume that z_k^u, z_k^e, and z_k^o are conditionally independent given the robot pose. Hence the first term on the right side of eq. (16) can be derived as

p(z_k | x_k, M, h_k, o_k) = p(z_k^e | x_k, M) p(z_k^u | x_k, h_k, M) p(z_k^o | x_k, o_k)        (17)

One thing we should notice is that even though z_k^u is observed from the real environment, z_k^u and M can still be considered independent in the online SLAM algorithm: since the mapping of the online SLAM process is incremental, it cannot predict the observation in the unexplored area. Hence the likelihood p(z_k^u | x_k, h_k, M) can be formulated as p(z_k^u | x_k, h_k). Moreover, by the chain rule, the second term on the right side of eq. (16) can be factorized into

p(x_k, M, h_k, o_k | Z_{k-1}, U_k) = p(x_k, M | Z_{k-1}, U_k) p(h_k | o_k, Z_{k-1}, U_k) p(o_k | Z_{k-1}, U_k)        (18)

By rearranging eqs. (16)-(18), the posterior becomes

p(x_k, M, h_k, o_k | Z_k, U_k) ∝ p(z_k^e | x_k, M) p(z_k^u | x_k, h_k) p(z_k^o | x_k, o_k) p(x_k, M | Z_{k-1}, U_k) p(h_k | o_k, Z_{k-1}, U_k) p(o_k | Z_{k-1}, U_k)        (19)

Finally, the posterior can be modeled as the recursive Bayesian formula

p(x_k, M, h_k, o_k | Z_k, U_k)
  ∝ p(z_k^u | x_k, h_k) Σ_{i=1}^{m} p(h_k | o_k, h_{k-1}^i) p(h_{k-1}^i | Z_{k-1}, U_{k-1})
  × p(z_k^o | x_k, o_k) ∫ p(o_k | o_{k-1}) p(o_{k-1} | Z_{k-1}, U_{k-1}) do_{k-1}
  × p(z_k^e | x_k, M) ∫ p(x_k | u_k, x_{k-1}) p(x_{k-1}, M | Z_{k-1}, U_{k-1}) dx_{k-1}        (20)

where each line consists of an update term followed by a prediction term, for the topological hypotheses, the moving objects, and the robot pose and map, respectively.

Fig. 6 DBNs of SLAMMOT with map prediction
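As a rough illustration only (not the paper's implementation), eq. (20) suggests an update loop in which the three factors are handled by separate estimators coupled through the shared robot pose; the class and method names below are assumptions.

def slammot_with_scene_prediction_step(slam, map_mht, object_mht, z_e, z_u, z_o, u):
    """One recursive update following the factorization of eq. (20).

    slam       : SLAM filter maintaining p(x_k, M | Z_k, U_k)
    map_mht    : MHT over topological hypotheses h_k
    object_mht : MHT / tracker over moving object states o_k
    z_e, z_u, z_o : observations of explored map, unexplored map, moving objects
    u          : control input at this time step
    All objects are assumed to expose predict()/update() methods.
    """
    # p(z_k^e | x_k, M) times the pose/map integral: standard SLAM prediction and update.
    slam.predict(u)
    pose = slam.update(z_e)

    # p(z_k^o | x_k, o_k) times the object integral: moving object prediction and update.
    object_mht.predict()
    objects = object_mht.update(z_o, robot_pose=pose)

    # p(z_k^u | x_k, h_k) times the hypothesis sum: topological hypothesis prediction and
    # update, where object motion feeds the hypothesis prior p(h_k | o_k, h_{k-1}^i).
    map_mht.predict(objects)
    hypotheses = map_mht.update(z_u, robot_pose=pose)

    return pose, hypotheses, objects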

V. EXPERIMENTAL RESULTS

The test environment is the B1 floor of the engineering building at NTU. Its total size is nearly 50 m x 100 m, and it consists of several corridor structures and a looped area. A hand-made map is shown in Fig. 7(b). The robot is equipped with a laser range finder and motor encoders, as shown in Fig. 7(a). We show the tracking performance and the scene prediction in individual experiments. The SLAM algorithm is based on our previous work [3].

A. Experiment I

In Experiment I, the robot is required to locate the pedestrian and itself from the laser data. At the same time, it needs to track the pedestrian and simultaneously predict the pedestrian's future location. Fig. 8 displays one of the prediction results while the robot is tracking human 1. The extracted goals and the corresponding predicted trajectories are shown in Fig. 8(a), and Fig. 8(b)-(d) show the predicted probability C-T space of the pedestrian at different time steps. Since Goals B and C have almost equally high likelihoods, the predicted position of the pedestrian tends toward Goals B and C separately after predicted time t = 5.

Moreover, the predicted results can also enhance the tracking performance. For example, in Fig. 9(a), human 1 turns left and enters the occluded area at time step t = 6; meanwhile, the weights of Goals A, B, and C are 0.00065, 0.855, and 0.144, respectively. The predicted results and the probability distribution at 13 seconds are displayed in Fig. 9(b). The total occlusion period is almost 7 seconds, after which two moving objects are detected (see Fig. 9(c)). To verify the effect of prediction, we compare two situations: TABLE I shows the hypotheses without any prediction, and TABLE II shows the results with prediction. Without prediction, the likelihood of the original hypothesis becomes too low and the observations are associated with two new moving objects at time step t = 13. On the contrary, with trajectory prediction the original hypothesis is still tracked at time step t = 13, and the three hypotheses listed in TABLE II are generated at t = 13. Once more observations are collected, the most likely hypothesis obtains the highest probability. Fig. 9(d) shows the tracking results with prediction; the hypothesis with the highest probability matches the ground truth.

B. Experiment II

In Experiment II, the robot explores the unknown environment. The completely explored area and its GVG skeleton are shown in Fig. 11(c)(d).

Fig. 7 (a) Robot platform (b) Test environment

TABLE I  MHT WITHOUT TRAJECTORY PREDICTION
Time | Hypotheses                                              | Probability
6    | H1: z1 → index 1                                        | p1: 1
13   | H1: z1 → index 2 (new index), z2 → index 3 (new index)  | p1: 1
17   | H1: z1 → index 2, z2 → index 3                          | p1: 1

TABLE II  MHT WITH TRAJECTORY PREDICTION
Time | Hypotheses                                              | Probability
6    | H1: z1 → index 1                                        | p1: 1
13   | H1: z1 → index 1, z2 → index 2 (new index)              | p1: 0.51
     | H2: z1 → index 2 (new index), z2 → index 1              | p2: 0.34
     | H3: z1 → index 2 (new index), z2 → index 3 (new index)  | p3: 0.15
17   | H1: z1 → index 1, z2 → index 2                          | p1: 0.78
     | H2: z1 → index 2, z2 → index 1                          | p2: 0.10
     | H3: z1 → index 2, z2 → index 3                          | p3: 0.12

Fig. 8 Predicted results (goal weightings A: 0.005, B: 0.4847, C: 0.5099): (a) t = 0 s (b) predicted t = 1 s (c) predicted t = 5 s (d) predicted t = 12 s

Fig. 9 Multiple hypothesis tracking with trajectory prediction: (a) t = 6 s (b) predicted t = 13 s (c) t = 13 s (d) t = 17 s

Fig. 11 shows the global view of the exploration at different time steps, and Fig. 10 shows the details. The robot moves from node A to J and returns to A at the end. In the beginning, the robot moves in corridor AB. Because of occlusion, two frontier groups are observed at laser scan 30 and three potential goals are extracted: one GVG goal and two frontier goals, marked as A, B, and C, respectively. At the same time, human 1 is tracked, and the goal weightings are 0.0142, 0.5063, and 0.4794 for Goals A, B, and C. One predicted road model is generated when human 1 walks into Goal B after laser scan 50. In a similar way, another predicted road model is generated because human 2 walks from Goal C to Goal B. Another case is that human 3 walks from Goal D to Goal E, as shown in Fig. 10(e); the weightings of Goals D and E are 0.12 and 0.879. The predicted trajectories and the probability C-T space are displayed in Fig. 10(e)(f). One predicted road is generated, as shown in Fig. 10(e), and at the same time one intersection hypothesis is generated in Fig. 11(a). This hypothesis is tracked and modified at laser scan 630 (Fig. 11(b)). Finally, the robot goes back to node B and verifies the hypothesis.

Fig. 10 Experiment II: trajectories and map prediction. (a) laser scan = 30 (b) predicted t = 6 s (c) laser scan = 50 (d) laser scan = 60 (e) laser scan = 520 (f) predicted t = 6 s

Fig. 11 Experiment II: SLAMMOT with scene prediction. (a) laser scan = 520 (b) laser scan = 630 (c)(d) complete explored area and GVG skeleton

VI. CONCLUSIONS

In this paper, we proposed the idea of simultaneous map prediction and moving object trajectory prediction. Compared with traditional SLAM methods, SLAMMOT with scene prediction not only passively builds the map and localizes the robot, but also actively predicts the looped structure and the motion of moving objects. With multiple hypothesis tracking, it helps to keep the topological map consistent and to maintain the data association of moving objects. The recursive Bayesian formulation is also presented in this paper. Moreover, by combining long term trajectory prediction, a new generalized framework for tracking and prediction of moving objects is proposed. Preliminary experimental results showed that the method can enhance the tracking performance and efficiently predict the map structure. In the future, we will focus on incorporating the prediction information into motion planning and on developing efficient planning methods in unknown dynamic environments.

REFERENCES

[1] R. J. Carroll and D. Ruppert, Transformation and Weighting in Regression, Chapman and Hall, New York, 1988.
[2] H. J. Chang, C. S. Lee, Y. H. Lu, and Y. Hu, "P-SLAM: Simultaneous Localization and Mapping with Environment-Structure Prediction," IEEE Trans. on Robotics, vol. 23, no. 2, pp. 281-293, 2007.
[3] S. Y. Chung and H. P. Huang, "Relative-Absolute Information for Simultaneous Localization and Mapping," IEEE Int. Conf. on Robotics and Biomimetics, pp. 1641-1646, 2007.
[4] H. Choset and J. Burdick, "Sensor Based Planning, Part I: The Generalized Voronoi Graph," IEEE Int. Conf. on Robotics and Automation, vol. 2, pp. 1649-1655, May 1995.
[5] E. D. Dickmanns and A. Zapp, "A Curvature-based Scheme for Improving Road Vehicle Guidance by Computer Vision," SPIE Conference on Mobile Robots, pp. 161-168, 1986.
[6] A. Eidehall and F. Gustafsson, "Combined Road Prediction and Target Tracking in Collision Avoidance," IEEE Intelligent Vehicles Symposium, pp. 619-624, 2004.
[7] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics, MIT Press, 2005.
[8] C. C. Wang, C. Thorpe, and S. Thrun, "Online Simultaneous Localization and Mapping with Detection and Tracking of Moving Objects: Theory and Results from a Ground Vehicle in Crowded Urban Areas," IEEE Int. Conf. on Robotics and Automation, vol. 1, pp. 842-849, 2003.
[9] B. Yamauchi, "A Frontier-Based Approach for Autonomous Exploration," IEEE Int. Sym. on Computational Intelligence in Robotics and Automation, pp. 146-151, 1997.
