Vision Based Tracking and Navigation of Mobile Robots in Pioneer-2 Platforms

Kuntal Roy, Amit Konar and Ajit K. Mandal
Electronics and Tele-communication Engineering Department, Jadavpur University
{ [email protected], [email protected], [email protected] }

Abstract - The paper addresses the classical problem of target tracking and provides a solution to the problem using two Pioneer-2 mobile robots. The mobile robots used in the proposed application include facilities for online control of the pan angle, tilt angle and zoom of the camera. These facilities together assist the tracker in localizing the moving target. A specialized software platform, ARIA [5], which supports the implementation of simple behaviors such as obstacle avoidance, motion planning and motor status checking, has been utilized to develop complex behaviors such as object localization, controlled path-planning and prediction of the target position. The tracking system has been designed and thoroughly tested to work in a factory environment.

Index Terms - Visual Tracking, Pioneer-2 Mobile Robots, Back-Propagation Neural Networks, Extended Kalman Filter.

I. INTRODUCTION

The paper provides a novel scheme for target tracking and interception by two mobile robots. The tracker robot identifies the location of the moving target robot by employing a color tracking scheme and online zoom control of the camera attached to the robot. The motion of the target robot is controlled online through a keyboard attached to a desktop computer. The desktop computer receives video and sonar packets from the moving tracker, and generates control commands for its motion planning. The proposed scheme employs a synergism of neural nets and the Extended Kalman filter [1] to generate a route for the tracker based on the predicted direction of motion of the target. In one of our previous works, it has already been reported that among the classical neural algorithms for machine learning, the back-propagation algorithm outperforms all the rest, at least in connection with the motion planning of the robot [4]. The back-propagation algorithm used in the present paper is thus a worthwhile choice. The Extended Kalman filter model provides an additional framework for online detection of the target position with very good accuracy.

The work reported in the paper consists of five major sections. In Section II, we present the methodology of determining the necessary shift (both turn and forward movement) of the tracker toward the target. One important experimental aspect of this section is determining the necessary forward movement of the tracker from the approximate camera-zoom change needed to keep the target area (in pixel² units) approximately constant. This is significant in particular because it facilitates target localization in the grabbed image frame. Section III of the paper is concerned with the prediction of the moving target position using Extended Kalman filtering. In Section IV, we present a scheme for path-planning of the tracker using the back-propagation algorithm. Section V provides the integration of the three schemes outlined earlier so as to design a complete target tracking system.

II. NAVIGATION OF TRACKER FROM ITS GRABBED IMAGE

The tracker robot grabs images successively with a sampling interval of about 400 ms (a sampling interval of less than 400 ms affects the accuracy of prediction by the Kalman filter) and attempts to locate the target robot in its grabbed image frame. The tracker detects any shift of the target robot in the grabbed image frame. The necessary shift in the camera position of the tracker, and also in the position of the tracker itself, can be determined from the known shift (Δx, Δy) of the center of mass of the target-robot, as described below. If the center of mass in the grabbed image is shifted in comparison to that of the previous image (Fig. 1), we can determine the shift of the current center of mass (xc, yc) from its last value (xp, yp) by the following definition:

Δx = xc - xp and Δy = yc - yp

Fig. 1: Change in center of mass between the previous and current location of the target-robot

We can easily observe from Fig. 1 that the center of mass of the target-robot shifts depending on the current position of the target-robot and on the area of the target-robot viewed by the tracker. This area shrinks as the target moves farther away from the tracker. The necessary changes in camera-pan and tilt of the tracker can now be determined from the above change in the x- and y-position of the center of mass. The scale factors relating the pan- and tilt-angle to the x- and y-shift of the center of mass have been determined offline experimentally. The experimental results show that for a 1° shift in camera pan-angle and tilt-angle, the x-shift and the y-shift of the center of mass of the target-robot between successive image frames are approximately 5 pixels and 4 pixels respectively. Consequently, the change in pan-angle is Δpan-angle = Δx/5, and the change in tilt-angle is Δtilt-angle = Δy/4.

Now, for efficient tracking, note that the area of the target-robot viewed by the tracker gradually shrinks as the tracker approaches the target. The speed of the tracker can thus be evaluated from the rate of shrinkage of the image area of the moving target. It is important to note that in the present application, we do not generate any command for tracker movement in case the target approaches the tracker, i.e., when the image area of the target increases. In order to get a measure of the target's distance from the tracker, we adopt the following scheme. First, we go on decreasing the camera zoom until the area of the target in the grabbed image of the current frame becomes equal to that of the previous frame. Typically the area of the target should be kept constant in the range 1400-1500 pixel². An empirical relation between the target-to-tracker distance (dist) and the camera-zoom (zoom) of the tracker has been developed experimentally as follows:

dist = 74 + 9.0 * (zoom/50) + 0.1 * (zoom/50)²

where dist is measured in centimeters (cm). For small values of zoom (< 250, where the camera-zoom range is 0-1024), we may approximate the equation as linear, and the change in distance between target and tracker with respect to the change in camera-zoom of the tracker is given by

d dist / d zoom = 9/50

It has been verified experimentally that for a 20-cm forward/backward shift of the target-robot toward the tracker-robot, the area of the target-robot viewed by the tracker increases/decreases by nearly 200 pixel².
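As an illustration of the relations above, the following minimal Python sketch converts a measured center-of-mass shift into pan/tilt corrections and a camera-zoom change into the tracker's forward displacement. The helper names (pan_tilt_correction, distance_from_zoom_change) are illustrative assumptions, not part of ARIA; the constants are the experimentally obtained values quoted above.

# A minimal sketch (assumed helper names, not the paper's code): image-plane
# shift -> pan/tilt correction, and zoom change -> forward displacement.

PIXELS_PER_DEG_PAN = 5.0     # ~5-pixel x-shift per 1 degree of camera pan
PIXELS_PER_DEG_TILT = 4.0    # ~4-pixel y-shift per 1 degree of camera tilt
DDIST_DZOOM = 9.0 / 50.0     # d(dist)/d(zoom) in cm per zoom unit (linear region, zoom < 250)

def pan_tilt_correction(prev_com, curr_com):
    """Return (delta_pan_deg, delta_tilt_deg) from the shift of the center of mass."""
    dx = curr_com[0] - prev_com[0]
    dy = curr_com[1] - prev_com[1]
    return dx / PIXELS_PER_DEG_PAN, dy / PIXELS_PER_DEG_TILT

def distance_from_zoom_change(delta_zoom):
    """Forward distance (cm) to traverse for a given zoom change:
    delta_distance = -(d dist / d zoom) * delta_zoom."""
    return -DDIST_DZOOM * delta_zoom

# Example: the target drifted 10 px right and 4 px up, and the zoom had to be
# reduced by 100 units to keep the target area near 1400-1500 pixel^2.
d_pan, d_tilt = pan_tilt_correction((160, 120), (170, 124))  # -> (2.0, 1.0) degrees
forward_cm = distance_from_zoom_change(-100)                 # -> 18.0 cm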

III. PREDICTING CAMERA-PAN AND TILT ANGLE OF TRACKER USING EXTENDED KALMAN FILTER

The most important aspect of target tracking is to predict the position of the target by the tracker. In order to predict the target's position, the tracker should be able to keep track of its movements. A video camera attached to the tracker was employed to capture video frames of the moving target. To eliminate the possibility of losing the target from the camera view, the next position of the camera should be correctly predicted from its present and past positions. The principles of Extended Kalman filtering [1] can be applied for prediction of the target's subsequent positions. Let x be the measurement vector describing the position of the target at time t. Thus

x = [r  θ  t]^T

where
r = the perpendicular distance of the target from the tracker at time t,
θ = the angular displacement of the target measured with respect to the midway point between sonars S4 and S5 of the tracker, i.e., the angular shift of the tracker with respect to the target from time t = 0 to the current time of observation,
t = the time elapsed, measured from the beginning of the observation phase.

Let f be the measurement equation describing the relationship between the Cartesian position of the target and its parabolic trajectory. Notationally, f is given by

f = [ at² + bt + c - r cos θ ]
    [ pt² + qt + s - r sin θ ]  = 0

where (x, y) = (r cos θ, r sin θ) denotes the position of the target at time t; c and s are the initial displacements of the target with respect to the midway point between sonars S4 and S5 of the tracker; b and q are the velocities in the corresponding directions; and a and p are the time rates of change of velocity in the corresponding directions. Let a be the estimator used to estimate the set of parameters describing the next position of the camera. The estimator a is given by a = [a b c p q s]^T. Given three successive positions of the moving target, we can determine its next position by evaluating the estimator a using the following steps of Kalman filtering.

Procedure Kalman (x1, x2, x3, a)
Begin
  Initialize the error covariance matrix S.
  Repeat
    i)   Update the estimator a with a new measurement vector xi.
    ii)  Update the Kalman filter gain Ki.
    iii) Update the error covariance matrix Si.
  Until Si approaches a NULL matrix
End.

So, the position (x, y) of the target can be determined by substituting the appropriate parameters of the estimator vector in the measurement equation f.
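As an illustration of this estimation loop, the following minimal Python sketch performs a Kalman-style parameter update for the estimator a = [a b c p q s]^T from (r, θ, t) measurements. It assumes the polar measurement is first converted to Cartesian coordinates, so that the measurement model becomes linear in the parameters, and it assumes an arbitrary measurement-noise covariance R; the function names, initial values and example measurements are illustrative, not the implementation used in the experiments.

import numpy as np

def measurement_jacobian(t):
    # h(a) = [a*t^2 + b*t + c,  p*t^2 + q*t + s]; the model is linear in the
    # parameters, so H = dh/da depends only on the measurement time t.
    return np.array([[t * t, t, 1.0, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, t * t, t, 1.0]])

def kalman_parameter_update(a_hat, S, r, theta, t, R=None):
    """One update of the trajectory-parameter estimate a = [a b c p q s]^T
    from a single (r, theta, t) measurement."""
    if R is None:
        R = 1e-2 * np.eye(2)                 # assumed measurement-noise covariance
    z = np.array([r * np.cos(theta),         # Cartesian position of the target
                  r * np.sin(theta)])
    H = measurement_jacobian(t)
    K = S @ H.T @ np.linalg.inv(H @ S @ H.T + R)   # Kalman filter gain
    a_hat = a_hat + K @ (z - H @ a_hat)            # estimator update
    S = (np.eye(6) - K @ H) @ S                    # error-covariance update
    return a_hat, S

# Three successive measurements refine the estimator; the predicted target
# position at a later time t then follows from x = a*t^2 + b*t + c and
# y = p*t^2 + q*t + s (the measurements below are made up for illustration).
a_hat, S = np.zeros(6), 10.0 * np.eye(6)
for (r, theta, t) in [(120.0, 0.05, 0.0), (118.0, 0.08, 0.4), (116.5, 0.12, 0.8)]:
    a_hat, S = kalman_parameter_update(a_hat, S, r, theta, t)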

IV. USE OF THE BACK-PROPAGATION NEURAL NET

After the current target position and the obstacle locations in the tracker's workspace are determined, we use a pre-trained back-propagation neural net for controlling the step-wise motion of the tracker. A three-layered feed-forward neural net has been employed in the present context for generating the control commands for motion [1-4] of the tracker. In our realization, we considered the readings from 8 sonar transducers; consequently, the neural net has 9 inputs, the first one being the predicted position of the target. The outputs of the neural net are the amplitude and direction of motion of the tracker robot. The proposed neural net is trained with 756 training instances for continuous obstacles and 854 training instances for discrete obstacles. In the application phase, the neural net requires just one forward pass; its response is therefore very fast, of the order of 0.5 ms on a 750-MHz Pentium III machine.
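A minimal sketch of such a three-layer feed-forward net is given below: 9 inputs (the predicted target position, represented here by a single scalar as in the 9-input description above, plus the 8 sonar readings) and 2 outputs (amplitude and direction of motion). The hidden-layer size, activation function, scaling and weights are assumptions for illustration; the paper does not report them, and training would be done offline with standard back-propagation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MotionPlannerNet:
    """Three-layer feed-forward net: 9 inputs -> hidden layer -> 2 outputs."""
    def __init__(self, n_hidden=12, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, 9))   # input -> hidden weights
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.1, size=(2, n_hidden))   # hidden -> output weights
        self.b2 = np.zeros(2)

    def forward(self, x):
        h = sigmoid(self.W1 @ x + self.b1)
        return sigmoid(self.W2 @ h + self.b2)   # [amplitude, direction], each in (0, 1)

# Application phase: one forward pass per control step (the fast path noted above).
# The first input is the predicted target position; the remaining eight are the
# normalized sonar readings.
net = MotionPlannerNet()
inputs = np.concatenate(([0.4], 0.01 * np.arange(1, 9)))
amplitude, direction = net.forward(inputs)   # would be rescaled to actual speed/heading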

V. ALGORITHMS

The algorithms used for target tracking are briefly outlined below.

Procedure target_tracking()
Begin
  Lock-camera;
  Repeat
    Set-camera-parameters;
    Predict-camera-next-position;   // by Kalman filter //
    Align-tracker;                  // with the current camera-pan position //
    Plan-tracker-next-position;
    If the target is out-of-view
      Then lock the camera by procedure Out-of-view-corrections;
  Until the tracker hits the target;
End.

Procedure Lock-camera()
Begin
  Initialize ∆pan-angle and ∆tilt-angle;
  Repeat
    For l = -2 to +2
      For k = -19 to +19
        pan-angle = pan-angle + k ∆pan-angle;
      End For;
      tilt-angle = tilt-angle + l ∆tilt-angle;
    End For;
  Until the object is visible;
End.

Procedure Set-camera-parameters (∆distance, initial center of mass, pan-angle(t+1), tilt-angle(t+1), ∆zoom)
Begin
  While the target is partially (totally) visible do
  Begin
    Compute the center of mass of the object;
    If the current center of mass (xc, yc) shifts from its previous location (xp, yp)
      Then determine the drift rates along the x- and y-directions by
        dx/dt = -r sin(θ) (dθ/dt) and dy/dt = r cos(θ) (dθ/dt),
        where θ = tan⁻¹((5/4) (yp - yc)/(xp - xc)) and r = √((xp - xc)²/25 + (yp - yc)²/16);
    Set pan-angle deviation rate (t+1) = pan-angle deviation rate (t) + dx/dt;
    Set tilt-angle deviation rate (t+1) = tilt-angle deviation rate (t) + dy/dt;
  End While
End.

Procedure Align-tracker()
Begin
  If a command for target direction setting is received
    Then align the tracker in the direction of the camera-pan and also align the camera of the tracker to its initial position;
End.

Procedure Plan-tracker-next-position()
Begin
  Vary the camera-zoom (∆zoom) so that the area of the target in the grabbed image of the tracker is more or less constant;
  Measure the distance to be traversed by the tracker by
    ∆distance = - (d dist / d zoom) * ∆zoom;
  // The back-propagation neural net is employed here for navigation //
  Again fix the camera-zoom of the tracker to the previous value (zoom-fix) and check that the area of the target in the grabbed image of the tracker is more or less constant;
End.

Procedure Out-of-view-corrections()
Begin
  Repeat
  Begin
    Move around the current position of the camera by a small angle;
    small-angle = small-angle + increment;
    zoom-current = zoom-previous + increment;
  End
  Until the target is within the view of the camera;
  ∆zoom = zoom-fix - zoom-current;
  // zoom-fix is the fixed camera-zoom value (= 200) of the tracker with which it should be able to track the target-robot //
  Compute the distance to be traversed by the tracker toward the target by
    ∆distance = - (d dist / d zoom) * ∆zoom;
End.
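As a concrete rendering of the drift-rate computation inside Procedure Set-camera-parameters, the sketch below derives θ, r, dx/dt and dy/dt from two successive centers of mass. The 0.4-s frame interval comes from Section II; the finite-difference approximation of dθ/dt and the use of the quadrant-aware atan2 (instead of a plain tan⁻¹ of the ratio) are assumptions made for illustration and are not spelled out in the procedure above.

import math

FRAME_DT = 0.4  # s, the ~400 ms sampling interval from Section II

def drift_rates(prev_com, curr_com, prev_theta, dt=FRAME_DT):
    """Return (dx_dt, dy_dt, theta) following Procedure Set-camera-parameters.
    dtheta/dt is approximated by a finite difference over one frame interval."""
    xp, yp = prev_com
    xc, yc = curr_com
    theta = math.atan2((5.0 / 4.0) * (yp - yc), (xp - xc))       # quadrant-aware form of tan^-1
    r = math.sqrt((xp - xc) ** 2 / 25.0 + (yp - yc) ** 2 / 16.0)
    dtheta_dt = (theta - prev_theta) / dt
    dx_dt = -r * math.sin(theta) * dtheta_dt
    dy_dt = r * math.cos(theta) * dtheta_dt
    return dx_dt, dy_dt, theta

# The pan/tilt deviation rates accumulate these drifts from frame to frame:
pan_rate, tilt_rate, theta_prev = 0.0, 0.0, 0.0
dx_dt, dy_dt, theta_prev = drift_rates((160, 120), (170, 124), theta_prev)
pan_rate += dx_dt
tilt_rate += dy_dt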

The experiments have been carried out on Pioneer-2 mobile robots. It is clear from our experiments that the tracker generates more accurate control commands in the presence of the extended Kalman filter.

References

1. Ayache, N., Artificial Vision for Mobile Robots: Stereo Vision and Multisensory Perception, MIT Press, 1991.
2. Vincze, M. and Hager, G. D. (Eds.), Robust Vision for Vision-Based Control of Motion, IEEE Press, 1995.
3. Kortenkamp, D., Bonasso, R. P. and Murphy, R., Artificial Intelligence and Mobile Robots: Case Studies of Successful Robot Systems, AAAI Press/MIT Press, 1998.
4. Nehmzow, U., Mobile Robotics: A Practical Introduction, Springer, 2000.
5. ARIA 1.1.7: ActivMedia Robotics Interface Application Manual, 2002.
