J Sign Process Syst DOI 10.1007/s11265-010-0504-7

Implementation of a Moving Target Tracking Algorithm Using Eye-RIS Vision System on a Mobile Robot

Fethullah Karabiber · Paolo Arena · Luigi Fortuna · Sebastiano De Fiore · Guido Vagliasindi · Sabri Arik

Received: 20 October 2008 / Revised: 24 June 2010 / Accepted: 24 June 2010 © Springer Science+Business Media, LLC 2010

Abstract A moving target tracking algorithm is proposed here and implemented on the AnaFocus Eye-RIS vision system, a compact and modular platform for developing real-time image processing applications. The algorithm combines moving-object detection with feature extraction in order to identify the specific target in the environment. The algorithm was tested in a mobile robotics experiment in which a robot, with the Eye-RIS mounted on it, pursued another robot representing the moving target, demonstrating its performance and capabilities.

Keywords Target tracking · Robotic · Analog system · Segmentation · Motion detection

F. Karabiber (*) · S. Arik
Computer Engineering Department, Istanbul University, Istanbul, Turkey
e-mail: [email protected]

P. Arena · L. Fortuna · S. De Fiore · G. Vagliasindi
Dipartimento di Ingegneria Elettrica Elettronica e dei Sistemi, Università degli Studi di Catania, Catania, Italy

1 Introduction

Moving target tracking is an important area among image processing applications such as robotics, video surveillance and traffic monitoring. In general, there are two different approaches to object tracking: recognition-based tracking and motion-based tracking [1]. In recognition-based tracking systems, objects are identified by extracting their features. Motion-based tracking approaches rely on motion detection. In both cases, the tracker should be able to detect all the new targets automatically in

a computationally simple way that can be implemented in real time. In the last decade, many approaches have been proposed in the literature for tracking moving objects. In [2, 3], moving-object detection and tracking in traffic monitoring and video surveillance applications are presented; in these applications, the images processed are taken from a stationary camera. Tracking objects in image sequences acquired from a mobile camera is more complicated, because the camera's own motion induces an apparent motion of the whole scene. A number of methods have been proposed for detecting moving targets using a mobile camera. Jung and Sukhatme [4] developed a real-time moving-object detection algorithm using a probabilistic approach and an adaptive particle filter for an outdoor robot carrying a single camera. A method using background motion estimation and the difference of two consecutive frames for detection and tracking of moving objects is proposed in [5]. Other techniques focusing on detecting and tracking moving objects have been proposed for different applications [6–8]: cars in front are tracked using a camera mounted on a moving vehicle in [7], and a single object in forward-looking infrared imagery taken from an airborne or moving platform is tracked using the approach presented in [8]. In [9], object tracking methods are reviewed and classified into different categories.

In this study, a system combining motion-based and recognition-based tracking is developed. The proposed algorithm is based on image-processing techniques such as segmentation, motion detection and feature extraction. Segmentation, which detects the objects in the image sequence, is the main part of the algorithm. The proposed segmentation algorithm is based on edge detection and morphologic operations. Motion detection is based on the difference of successive image frames.


Finally, tracking of the detected moving objects is carried out using their positional information in the image. Feature-extraction techniques are used to extract information about the segmented and moving objects, and a mobile robot then tracks the moving target using the centroid of the target. In order to implement the proposed algorithm in real time, we developed a computationally simplified version of the algorithm, exploiting the capabilities of the Eye-RIS vision system [10] to execute it in a very short time. The Eye-RIS vision system is designed for developing real-time vision applications. To evaluate the performance of the proposed approach in real time, we tested the algorithm on the Rover II robot [13], equipped with the Eye-RIS v1.2 visual system, tracking the Minihex robot [14]. The paper is organized as follows: in Section 2, general information about the Eye-RIS vision system is presented. The proposed moving-target tracking algorithm is described in Section 3, with segmentation, motion detection and feature extraction of the detected objects described in its subsections. In Section 4, experimental results and discussion are presented. Finally, concluding remarks are reported in Section 5.

2 Eye-RIS Vision System

The Eye-RIS vision systems are conceived to implement real-time image acquisition and processing in a single chip using CMOS technologies. A large variety of applications, such as video surveillance, industrial production systems, automotive and military systems, can be developed using the Eye-RIS. Details of the system are given in [10]; a brief description follows.

The Eye-RIS system employs a bio-inspired architecture. Indeed, a key component of the Eye-RIS vision system is a retina-like front-end which combines signal acquisition and processing embedded in the same physical structure. It is represented by the Q-Eye chip [10], an evolution of the previously adopted Analogic Cellular Engines (ACE) [11], the family of stand-alone chips developed in the last decade and capable of performing analogue and logic operations on the same architecture. The Q-Eye was devised to overcome the main drawbacks of ACE chips, such as lack of robustness and large power consumption.

The Eye-RIS vision system comprises three boards, or levels, each performing specific functions. The first level contains the Focal Plane Processor, which performs image acquisition and pre-processing tasks. The second level contains the digital microprocessor, program and data SDRAM memory, flash memory and I/O connectors. The third level includes the debugging and communications

circuitry. In general, only the first two boards are needed for running vision applications; the program can be stored in flash memory on the Nios II board. The Eye-RIS is a multiprocessor system with two different processors: AnaFocus' Q-Eye Focal Plane Processor on the first board and Altera's Nios II digital microprocessor on the second board.

The AnaFocus Q-Eye Focal Plane Processor (FPP) acts as an image coprocessor. It acquires and processes images, extracting the relevant information from the scene being analyzed, usually with no intervention of the Nios II processor. The Q-Eye is massively parallel, performing operations simultaneously in all of its cells in the analogue domain. Its basic analog processing operations among pixels are linear convolutions with programmable masks. The size of the acquired and processed image is the Quarter Common Intermediate Format (QCIF) standard, 176×144 pixels.

The Altera Nios II digital microprocessor is an FPGA-synthesizable digital microprocessor (a 32-bit RISC μP at 70 MHz realized on an FPGA). It controls the execution flow and processes the information provided by the FPP. Generally, this information is not an image but characteristics of the images analyzed by the Q-Eye; thus, no image transfers are usually needed in the Eye-RIS.

The Eye-RIS Application and Development Kit (ADK) is an Eclipse-based software development environment required to write, compile, execute and debug image-processing applications on the Eye-RIS vision system. The Eye-RIS ADK is integrated into the Altera Nios II Integrated Development Environment (Nios II IDE). A specific programming language, FPP code, was developed to program the Q-Eye, while the Nios II is programmed using the C++ programming language. In addition, the Eye-RIS ADK includes two function libraries to ease application development. The FPP Image Processing Library provides functions implementing basic image processing operations such as arithmetic, logic and morphologic operations, spatio-temporal filters and thresholding. The Eye-RIS Basic Library provides C/C++ functions to execute and debug FPP code and to display images. All of the above features allow the Eye-RIS vision system to process images at ultra-high speed while maintaining very low power consumption, offering a great opportunity to develop real-time vision applications.

3 Moving Target Tracking Algorithm

A block diagram of the algorithm can be seen in Fig. 1. The algorithm is mainly divided into three parts. The first part, which is the most important one, is segmentation. In the second part, a motion detection algorithm is performed to


Figure 1 Block diagram of the moving target tracking algorithm.

obtain the motion detection mask using a difference operation between successive frames. The third part of the algorithm merges the first two to obtain the centroid of the target, using feature extraction, for the robot action.

The images are acquired through the Sense function in the Eye-RIS ADK, which performs an optical acquisition in a linear integration way. The integration time is provided by the user as an input parameter; it is also possible to apply an analog gain to the sensed image. After the images are obtained, a Gaussian diffusion function is performed using the resistive grid module available in the Q-Eye to remove noise. The bandwidth of the filter is specified by means of the rgsigma parameter, whose value is related to the standard deviation of an equivalent Gaussian filter. An exemplary raw acquired image and the output of the Gaussian filter are shown in Fig. 2.
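Since the resistive-grid diffusion is equivalent to a Gaussian smoothing of the image, its effect can be illustrated with a plain digital sketch. The code below is an illustration only, not Eye-RIS FPP code: it applies a 3×3 binomial kernel (our stand-in for the rgsigma-controlled diffusion) to an 8-bit QCIF image.

```cpp
// Illustrative digital approximation of the Q-Eye resistive-grid diffusion:
// a 3x3 binomial (small-sigma Gaussian) smoothing pass over a QCIF image.
#include <algorithm>
#include <cstdint>
#include <vector>

constexpr int W = 176, H = 144;      // QCIF resolution of the Q-Eye
using Image = std::vector<uint8_t>;  // W*H pixels, row-major

Image gaussian3x3(const Image& in) {
    // Kernel {1,2,1; 2,4,2; 1,2,1} / 16.
    static const int k[3][3] = {{1, 2, 1}, {2, 4, 2}, {1, 2, 1}};
    Image out(in.size());
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            int acc = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    // Replicate edge pixels at the image borders.
                    int yy = std::min(std::max(y + dy, 0), H - 1);
                    int xx = std::min(std::max(x + dx, 0), W - 1);
                    acc += k[dy + 1][dx + 1] * in[yy * W + xx];
                }
            out[y * W + x] = static_cast<uint8_t>(acc / 16);
        }
    }
    return out;
}
```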

Figure 3 Sobel-based edge detection algorithm.

3.1 Segmentation

Segmentation is the process of dividing a digital image into multiple meaningful regions for easier analysis, and it is the most crucial part of the moving target tracking algorithm. A new segmentation algorithm using the capabilities of the Eye-RIS vision system was presented in [12]; a summary is given here. The segmentation algorithm is implemented mainly in three steps. In the first step, a Sobel-operator-based edge detection approach is implemented on the system. Then, morphologic operations are used to obtain the segmented image.

Figure 2 (a) Acquired image; (b) output of Gaussian filter.

Figure 4 (a) Output of Sobel vertical filter; (b) absolute difference; (c) threshold; (d) edge detection.

Figure 5 Block diagram of morphologic operations.

Figure 7 Segmentation results for (a) the acquired image; (b) an image loaded from the computer.

3.1.1 Edge Detection

Since an edge essentially demarcates two different regions, detecting edges is a very critical step for segmentation

algorithms. A Sobel-operator [1] based edge detection algorithm is implemented using the functions permitted by the hardware structure. The block diagram of the Sobel-based edge detection algorithm is shown in Fig. 3.

In the first step of the proposed Sobel-based edge detection algorithm, the Sobel convolution masks [1] are applied in different directions (horizontal (SFh), vertical (SFv), left-diagonal (SFld), right-diagonal (SFrd)) using the templates in Eq. 1. The output of the Sobel filter in the vertical direction is given in Fig. 4a.

$$SF_h = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix},\quad SF_v = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix},\quad SF_{ld} = \begin{bmatrix} -2 & -1 & 0 \\ -1 & 0 & 1 \\ 0 & 1 & 2 \end{bmatrix},\quad SF_{rd} = \begin{bmatrix} 0 & -1 & -2 \\ 1 & 0 & -1 \\ 2 & 1 & 0 \end{bmatrix} \tag{1}$$
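For illustration, a plain C++ sketch of applying one of these signed masks is given below; it is not the FPP implementation. The signed response is offset-coded around mid-gray (127), anticipating the analog zero convention described next; the division by 8 is our assumed scaling to keep the response in range.

```cpp
// Illustrative sketch (not Eye-RIS FPP code): applying a signed 3x3 Sobel mask.
// Reuses the Image type and QCIF dimensions W, H from the earlier sketch.
// The signed response is offset-coded around 127 so it fits an unsigned 8-bit
// image, mirroring the mid-gray-equals-zero convention used on the hardware.
#include <algorithm> // std::clamp (C++17)

Image sobel(const Image& in, const int m[3][3]) {
    Image out(in.size(), 127);  // 127 represents an analog value of zero
    for (int y = 1; y < H - 1; ++y)
        for (int x = 1; x < W - 1; ++x) {
            int acc = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    acc += m[dy + 1][dx + 1] * in[(y + dy) * W + (x + dx)];
            // Scale down and clamp into [0,255] around the 127 offset.
            out[y * W + x] =
                static_cast<uint8_t>(std::clamp(127 + acc / 8, 0, 255));
        }
    return out;
}

// The four directional masks of Eq. 1.
const int SFh[3][3]  = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
const int SFv[3][3]  = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};
const int SFld[3][3] = {{-2, -1, 0}, {-1, 0, 1}, {0, 1, 2}};
const int SFrd[3][3] = {{0, -1, -2}, {1, 0, -1}, {2, 1, 0}};
```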

After applying the Sobel filters, the absolute values of the filter outputs must be found. However, the hardware cannot compute the absolute value of an image. To overcome this problem, a gray image in which the value 127 corresponds to zero in the analog memory is created.
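Continuing the sketch above, this absolute-value workaround, the thresholding and the final OR can be emulated as follows. The threshold value t is left as a free parameter, and absDifference here is a plain C++ stand-in for the ADK's AbsDifference function, whose role in the pipeline is completed below.

```cpp
// Continuing the sketch: absolute value emulated as the difference from a flat
// mid-gray image (the workaround described above), followed by thresholding
// and a logic OR across the four directional edge maps.
Image flatGray(uint8_t v = 127) { return Image(W * H, v); }

Image absDifference(const Image& a, const Image& b) {
    Image out(a.size());
    for (size_t i = 0; i < a.size(); ++i)
        out[i] = static_cast<uint8_t>(a[i] > b[i] ? a[i] - b[i] : b[i] - a[i]);
    return out;
}

Image threshold(const Image& in, uint8_t t) {
    Image out(in.size());
    for (size_t i = 0; i < in.size(); ++i)
        out[i] = in[i] > t ? 255 : 0;  // binary image
    return out;
}

Image logicOr(const Image& a, const Image& b) {
    Image out(a.size());
    for (size_t i = 0; i < a.size(); ++i)
        out[i] = a[i] | b[i];
    return out;
}

// Edge map: OR of the thresholded absolute responses in the four directions.
Image edgeDetect(const Image& img, uint8_t t) {
    const Image gray = flatGray();
    Image edges(W * H, 0);
    for (auto mask : {SFh, SFv, SFld, SFrd})
        edges = logicOr(edges,
                        threshold(absDifference(sobel(img, mask), gray), t));
    return edges;
}
```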

Figure 6 Results of some applied morphologic operations: (a) point remove; (b) closing; (c) hole filler; (d) opening.

Figure 8 Detecting moving objects: (a) motion detection mask; (b) moving object.

Table 1 Features extracted from the objects in Fig. 6d.

Features          Object 1       Object 2
Object ID         0              1
Area              501            405
MajorAxisLength   32.53          26.05
MinorAxisLength   20.27          21.02
Eccentricity      0.78           0.59
EquivDiameter     25.26          22.71
Orientation       87.17          23.76
Extent            0.88           0.71
Centroid          41.35, 82.58   107.37, 77.75

Then, the AbsDifference function defined in the Eye-RIS ADK is applied between the Sobel filter outputs and the gray image to obtain the absolute values of the Sobel outputs. The output of the absolute difference operation between the horizontal-direction output and the created gray image is shown in Fig. 4b. A threshold operation is then executed to obtain binary images. Finally, a logic OR operation is executed to merge them into a single image. The output of the threshold operation for Fig. 4b and the edge detection result can be seen in Fig. 4c and d, respectively.

3.1.2 Morphologic Operations

After edge detection, mathematical morphology techniques are applied as post-processing in the segmentation algorithm. Mathematical morphology is an approach to image analysis based on set theory [15]. Dilation and erosion are the fundamental morphologic operations: dilation is used to thicken a region and to bridge gaps in a binary image, while erosion eliminates irrelevant details and shrinks the binary objects. A block diagram of the morphologic operations can be seen in Fig. 5. All morphologic operations are performed using the Image Processing Library developed for the Eye-RIS system.

The edge detection output first undergoes a skeletonization process to reduce all the contours to one-pixel-thin lines.

Then, isolated pixels not belonging to the edges are removed (Fig. 6a). The edge detection results may miss edge information due to low image quality. In order to complete missing points, a closing operation is performed. Closing is the dilation of a binary image through a specific structuring element, followed by erosion of the resulting image using the same structuring element; it also smoothes the contours of an object and eliminates small holes (Fig. 6b). After completing the missing points, a hole filler operation is applied to fill the holes in the objects. This operation is performed as follows: first, a dilate operation is applied to the input picture iteratively, followed by an iterative application of erosion and logic AND operations between the eroded image and the input image (Fig. 6c). As the last step of the morphologic operations in the segmentation algorithm, an opening operation is performed to remove small objects and to smooth the image. Opening, as opposed to closing, is erosion followed by dilation. The output of the morphologic operations is given in Fig. 6d.

In order to show the efficiency and accuracy of the segmentation algorithm, we merged the segmentation results with the input image. The output of the segmentation algorithm for an image acquired by the Eye-RIS system can be seen in Fig. 7a; Fig. 7b shows the outcome of segmentation for an image loaded from the local computer.

3.2 Moving Object Detection

Since the target is moving, a motion detection mask is computed to detect moving objects. First, the absolute difference of consecutive frames is obtained using the AbsDifference function. Then, a threshold operation is applied to obtain the changed pixels. Finally, an opening operation is performed to remove small objects and to smooth the motion detection mask. The resulting motion detection mask, which represents the moving parts, is shown in Fig. 8a.

Finally, a Recall operation is performed to obtain only the moving objects. In this operation, the input image is the output of segmentation and the mask is the motion detection mask. Dilation is performed first on the mask, followed by erosion and logic AND operations performed between the mask and the input iteratively. The result of the Recall operation is given in Fig. 8b, identifying the detected moving objects.
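As an illustration of the morphologic primitives of Section 3.1.2 and of this motion-detection chain, the sketch below continues the earlier plain C++ sketches. The 3×3 square structuring element and the threshold value are our assumptions.

```cpp
// Continuing the sketch: binary erosion/dilation with a 3x3 square structuring
// element, opening, and the frame-difference motion mask of Section 3.2.
Image erode(const Image& in) {
    Image out(in.size(), 0);
    for (int y = 1; y < H - 1; ++y)
        for (int x = 1; x < W - 1; ++x) {
            uint8_t v = 255;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    v = std::min(v, in[(y + dy) * W + (x + dx)]);
            out[y * W + x] = v;  // survives only if the whole 3x3 patch is set
        }
    return out;
}

Image dilate(const Image& in) {
    Image out(in.size(), 0);
    for (int y = 1; y < H - 1; ++y)
        for (int x = 1; x < W - 1; ++x) {
            uint8_t v = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    v = std::max(v, in[(y + dy) * W + (x + dx)]);
            out[y * W + x] = v;  // set if any neighbour in the 3x3 patch is set
        }
    return out;
}

// Opening = erosion followed by dilation: removes small objects, smooths the mask.
Image opening(const Image& in) { return dilate(erode(in)); }

// Motion mask: threshold of |frame_t - frame_{t-1}|, then opening.
Image motionMask(const Image& prev, const Image& curr, uint8_t t) {
    return opening(threshold(absDifference(curr, prev), t));
}
```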

Table 2 Time performance of the algorithm.

Operation               Execution time
Preprocessing           0.73 ms
Edge detection          3.94 ms
Morphologic operations  2.16 ms
Motion detection        1.68 ms
Feature extraction      8.54 ms
Total                   17.23 ms

Figure 9 Determining the robot action.


3.3 Feature Extraction

In order to extract features of the detected moving and segmented objects, the feature extraction routines of the InstantVision Signal and Image Processing Library [16] are used. The functions find the number of objects and extract various features of the objects (area, centroid coordinates, major axis length, minor axis length, orientation, etc.) from the input binary images. As an example, some features extracted from the objects of Fig. 6d are given in Table 1. Note that the InstantVision libraries provide a wide range of signal and image processing, classification and multi-target tracking functionalities, in an easy-to-use and safe software environment and data structures, along with various I/O utilities. They are written to be platform-independent and are portable to a number of different hardware platforms. A modified version of the InstantVision libraries runs on the Nios II processor in the Eye-RIS vision system [10].
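The sketch below illustrates the kind of per-object features Table 1 reports (area, centroid, extent) using a simple stack-based connected-component pass. It is a plain C++ approximation continuing the earlier sketches, not the InstantVision API.

```cpp
// Continuing the sketch: per-object features from a binary image via a
// 4-connected flood fill (illustrative stand-in for the InstantVision routines).
#include <vector>

struct ObjectFeatures {
    int id = 0;
    int area = 0;            // number of pixels in the component
    double cx = 0, cy = 0;   // centroid coordinates
    double extent = 0;       // area / bounding-box area
};

std::vector<ObjectFeatures> extractFeatures(const Image& bin) {
    std::vector<int> label(W * H, -1);
    std::vector<ObjectFeatures> objs;
    for (int start = 0; start < W * H; ++start) {
        if (bin[start] == 0 || label[start] != -1) continue;
        ObjectFeatures f;
        f.id = static_cast<int>(objs.size());
        int minX = W, maxX = 0, minY = H, maxY = 0;
        std::vector<int> stack{start};
        label[start] = f.id;
        while (!stack.empty()) {  // 4-connected flood fill
            int p = stack.back(); stack.pop_back();
            int x = p % W, y = p / W;
            f.area += 1; f.cx += x; f.cy += y;
            minX = std::min(minX, x); maxX = std::max(maxX, x);
            minY = std::min(minY, y); maxY = std::max(maxY, y);
            const int nb[4] = {p - 1, p + 1, p - W, p + W};
            const bool ok[4] = {x > 0, x < W - 1, y > 0, y < H - 1};
            for (int i = 0; i < 4; ++i)
                if (ok[i] && bin[nb[i]] != 0 && label[nb[i]] == -1) {
                    label[nb[i]] = f.id;
                    stack.push_back(nb[i]);
                }
        }
        f.cx /= f.area; f.cy /= f.area;
        f.extent = double(f.area) / ((maxX - minX + 1) * (maxY - minY + 1));
        objs.push_back(f);
    }
    return objs;
}
```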

3.4 Finding Target and Determining Robot Action

Figure 10 Experimental arena. In the foreground is the Rover II; in the background is the Minihex. Two other static obstacles are also present in the arena, on its two sides.

Figure 11 A sequence of frames depicting the pursuit of the Minihex (located in the top left of frame 1) by the Rover II (located in the low center of frame 1). While the Minihex is moving, the Rover II is turning and going towards it, and it is also able to distinguish the Minihex from the two additional static obstacles in the arena.

The last step of the moving target tracking algorithm is to find the centroid of the moving object; the robot can then track the target using this centroid. It is difficult to distinguish whether a target is moving along one of the radii departing from the robot, moving slowly, or not moving at all. Therefore, the centroids of the detected moving objects are computed first. When a single moving object is found, it is assumed to be the target. Otherwise, when no moving objects or more than one moving object are detected, features of the segmented objects, such as area, the ratio of major and minor axes and centroid, are used to choose the target. If a target cannot be detected, the robot moves around slowly to find the target.
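A sketch of this selection logic is given below, continuing the earlier sketches. The expected-area signature kExpectedArea and the ranking heuristic are our own placeholders for the area and axis-ratio comparison described above, not the authors' code.

```cpp
// Continuing the sketch: choose the target centroid. A unique moving object
// wins outright; otherwise segmented objects are ranked by how closely their
// area matches a hypothetical expected target signature.
#include <cmath>
#include <optional>
#include <utility>
#include <vector>

constexpr double kExpectedArea = 450.0;  // placeholder target signature

// Returns the target centroid, or nothing ("wander slowly to find the target").
std::optional<std::pair<double, double>>
selectTarget(const std::vector<ObjectFeatures>& moving,
             const std::vector<ObjectFeatures>& segmented) {
    if (moving.size() == 1)  // a single moving object is assumed to be the target
        return std::make_pair(moving[0].cx, moving[0].cy);
    const ObjectFeatures* best = nullptr;  // fall back to shape features
    double bestScore = 1e18;
    for (const auto& o : segmented) {
        double score = std::abs(o.area - kExpectedArea);
        if (score < bestScore) { bestScore = score; best = &o; }
    }
    if (best) return std::make_pair(best->cx, best->cy);
    return std::nullopt;  // no target detected
}
```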


After obtaining the centroid of the target, the action of the chasing robot must be determined. Therefore, a behavioral map is devised according to the position of the centroid of the target in the field of view of the robot, i.e. the scene acquired by the Q-Eye. The scene is divided into nine parts, each of which is linked to a specific action the robot performs if the centroid coordinates fall in that portion. Figure 9 depicts the nine regions together with the actions the robot performs accordingly. Since the specific task of the robot is to find the target and keep it in its center of view, the aim of the moving actions is to keep the target in part 5, which is the center of the image.
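For illustration, the region lookup can be sketched as follows, continuing the earlier sketches. The action names are placeholders; the actual mapping of regions to motor commands is the one of Fig. 9.

```cpp
// Continuing the sketch: the 176x144 view is split into a 3x3 grid and the
// centroid position selects the region (1..9, row-major, so region 5 is the
// image center). Actions here are placeholders for the robot's motor commands.
enum class Action { TurnLeft, Forward, TurnRight, Stop };

int regionOf(double cx, double cy) {
    int col = std::min(2, static_cast<int>(cx / (W / 3.0)));  // 0..2
    int row = std::min(2, static_cast<int>(cy / (H / 3.0)));  // 0..2
    return row * 3 + col + 1;  // regions numbered 1..9
}

// Example policy: steer so the centroid migrates toward region 5.
Action actionFor(int region) {
    switch (region) {
        case 5:  return Action::Forward;  // target centered: go straight
        case 1: case 4: case 7: return Action::TurnLeft;
        case 3: case 6: case 9: return Action::TurnRight;
        default: return Action::Forward;  // regions 2 and 8: keep moving
    }
}
```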

4 Experimental Results and Discussion

In this study, segmentation, motion detection and feature extraction were implemented to track moving targets using the Eye-RIS analog vision system. Because of the hardware structure of the system, we were limited in the functions we could execute and use in devising the algorithm. A timing analysis of the implemented algorithm is given in Table 2. All operations are completed in 17.23 ms, i.e. about 58 frames/s, an execution time sufficient for real-time applications.

The proposed algorithm was tested using the Eye-RIS vision system on the Rover II robot. The roving platform used for the navigation experiments is a modified version of the dual-drive Lynx Motion rover, called Rover II [13]. It is equipped with a Bluetooth telemetry module, four infrared short-distance sensors (Sharp GP2D120, detection range 3–80 cm), four infrared long-distance sensors (Sharp GP2Y0A02, maximum detection distance about 150 cm), a digital compass, a low-level target detection system, a hearing board for cricket chirp recognition and the Eye-RIS v1.2 visual system. The low-level control of the motors and the sensor handling is realized through an STR730 microcontroller. This choice optimizes the motor performance of the robot while keeping the high-level cognitive algorithms in the Eye-RIS visual system. Moreover, the Rover II can easily be interfaced with a PC through a Bluetooth module to perform preliminary tests, debugging the results directly on the PC.

The Minihex robot was used as the moving target. The Minihex is an application of a VLSI chip for locomotion control of a legged robot. The chip, which implements a CPG through a CNN-based structure, has already been used in different bio-inspired structures [14]. Moreover, the chip permits the integration of exteroceptive sensors that can be used to close the feedback loop with the environment, implementing simple navigation control strategies.

An experimental arena, shown in Fig. 10, was designed to test the proposed moving-target tracking algorithm. Rover II was able to track the target with a 93% success rate over different scenarios in the arena. An example scenario is shown in Fig. 11. The experimental results show the efficiency of the algorithm in a real environment and demonstrate the capability of the Eye-RIS vision system for real-time vision applications: the analog vision system can be used efficiently to implement vision applications in real time. Since the proposed algorithm is computationally simple and uses the capabilities of the Eye-RIS vision system efficiently, it offers a significant advantage in processing speed compared with the algorithms in the literature. The proposed algorithm is a promising way to tackle one of the most challenging problems of moving target tracking with a mobile camera.

5 Conclusion

A moving target tracking algorithm has been presented. The algorithm combines moving-object detection with feature extraction in order to identify the specific target in the environment. It was implemented on the Eye-RIS focal-plane processor, whose real-time processing capability allows all the single-frame processing to be performed in 17.23 ms. The algorithm was tested in a real environment using two robots: one, with the Eye-RIS mounted on it, representing the chaser, and the other representing the moving target. During the experiments, the chaser was able to identify the target, distinguishing it from the other obstacles, and to follow it in the environment.

Acknowledgment F. Karabiber was supported by the Scientific and Technical Research Council of Turkey under the Fellowship Program for PhD students. The work at the University of Catania was supported by the EU FP7 project SPARK II.

References

1. Acharya, T., & Ray, A. K. (2005). Image processing: Principles and applications. Wiley-Interscience.
2. Cucchiara, R., Grana, C., Neri, G., Piccardi, M., & Prati, A. (2001). The Sakbot system for moving object detection and tracking. Video-Based Surveillance Systems—Computer Vision and Distributed Processing, 145–157.
3. Rota, N., & Thonnat, M. (2000). Video sequence interpretation for visual surveillance. Proc. of Third IEEE International Workshop on Visual Surveillance, 59–68.
4. Jung, B., & Sukhatme, G. S. (2004). Detecting moving objects using a single camera on a mobile robot in an outdoor environment. The 8th Conference on Intelligent Autonomous Systems, 980–987, Amsterdam, The Netherlands.
5. Behrad, A., Shahrokni, A., & Motamedi, S. A. (2001). A robust vision-based moving target detection and tracking system. Proceedings of the Image and Vision Computing Conference, University of Otago, Dunedin, New Zealand.
6. Lozano, O. M., & Otsuka, K. (2008). Real-time visual tracker by stream processing. Journal of Signal Processing Systems, 57(2), 285–295.
7. Marinus, B., Leeuwen, V., & Groen, F. C. A. (2002). Motion interpretation for in-car vision systems. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, EPFL, Lausanne, Switzerland.
8. Yilmaz, A., Shafique, K., Lobo, N., Li, X., Olson, T., & Shah, M. A. (2001). Target-tracking in FLIR imagery using mean-shift and global motion compensation. Workshop on Computer Vision Beyond the Visible Spectrum, Kauai, Hawaii.
9. Yilmaz, A., Javed, O., & Shah, M. (2006). Object tracking: A survey. ACM Computing Surveys, 38(4), Article 13.
10. Rodríguez-Vázquez, A., Domínguez-Castro, R., Jiménez-Garrido, F., Morillas, S., Listán, J., Alba, L., et al. (2007). The Eye-RIS CMOS vision system. In Analog circuit design (pp. 15–32). Springer.
11. Roska, T., & Rodriguez-Vazquez, A. (2002). Towards visual microprocessors. Proceedings of the IEEE, 90(7), 1244–1257.
12. Karabiber, F., Arena, P., De Fiore, S., Vagliasindi, G., & Arik, S. (2009). A new segmentation algorithm implemented on the Eye-RIS CMOS vision system. Microtechnologies for the New Millennium: Bioengineered and Bioinspired Systems, Dresden, Germany.
13. Alba, L., Arena, P., De Fiore, S., & Patané, L. (2009). Robotic platforms and experiments. In Spatial temporal patterns for action-oriented perception in roving robots (pp. 399–422). Springer.
14. Arena, P., Fortuna, L., Frasca, M., Patanè, L., & Pollino, M. (2006). An autonomous mini-hexapod robot controlled through a CNN-based CPG VLSI chip. Proc. of CNNA, Istanbul.
15. Serra, J. (1982). Image analysis and mathematical morphology. London: Academic Press.
16. http://www.analogic-computers.com/Support/Documentation/

Fethullah Karabiber received the B.Eng. degree in Electronic and Communications Engineering from Istanbul Technical University in 2001, and the M.Sc. and Ph.D. degrees in Computer Engineering from Istanbul University in 2005 and 2009, respectively. He worked as a research and teaching assistant at the Computer Engineering Department of Istanbul University from 2001 to 2009, and was a visiting researcher at the University of Catania for 6 months in 2008. His research interests include image and signal processing and bioinformatics.

Paolo Arena received the degree in Electronic Engineering and the Ph.D. in Electrical Engineering from the University of Catania, Italy, in 1990 and 1994, respectively. He is currently Associate Professor of System Theory, Automatic Control and Biorobotics. He is co-author of more than 230 technical papers, five books and several industrial patents. His research interests include adaptive and learning systems, neural networks and optimisation algorithms, cellular neural networks, and collective behaviours in living and artificial neural systems for locomotion and perception control. He is a Senior Member of the IEEE, Chair-elect of the IEEE Circuits and Systems Society Chapter for Central and South Italy, and served as an Associate Editor of the IEEE Transactions on Circuits and Systems, Part I, in 2002-2003 and 2005. He has coordinated several national and international research projects and is currently the coordinator of the EU funded project SPARK II, "Spatial Temporal patterns for action-oriented perception in roving robots II: an insect brain computational model".

Luigi Fortuna is Full Professor of System Theory at the University of Catania. He currently teaches Complex Adaptive Systems and Robust Control. He has published more than 450 technical papers and is coauthor of ten scientific books, among which: Chua's Circuit Implementations (World Scientific, 2009), Bio-Inspired Emergent Control of Locomotion Systems (World Scientific, 2004), Soft Computing (Springer, 2001), Nonlinear Non-Integer Order Circuits and Systems (World Scientific, 2001), Cellular Neural Networks (Springer, 1999), Neural Networks in Multidimensional Domains (Springer, 1998), Model Order Reduction in Electrical Engineering (Springer, 1994) and Robust Control: An Introduction (Springer, 1993). His scientific interests include robust control, nonlinear science and complexity, chaos, cellular neural networks, soft-computing strategies for control, robotics, micro- and nano-sensors and smart devices for control, and nano-cellular neural network modelling. He has been an IEEE Fellow since 2000. He was IEEE CAS Chairman of the CNN Technical Committee, IEEE CAS Distinguished Lecturer 2001-2002 and Chairman of the IEEE CAS Chapter, Central-South

Italy. He was the coordinator of the courses in Electronic Engineering and head of the DIEES Department. Since 2005, he has been the Dean of the Engineering Faculty of the University of Catania.

Sebastiano De Fiore was born in Catania, Italy, in 1981. He received the Informatics Engineering degree from the University of Catania in 2005. He is now a PhD student in the Electrical, Electronic and System Control Engineering Department of the University of Catania. His research involves the study of locomotion and navigation control algorithms inspired by the basic principles of living systems. Sebastiano De Fiore participates, as a partner, in different EU funded projects.

Guido Vagliasindi received the Electrical Engineering degree and the Ph.D. in Electronic and Automation Engineering from the University of Catania in 2003 and in 2007, respectively. Since March 2007 he has been a contract researcher at the Department of Electric, Electronic and System Engineering (DIEES) of the University of Catania. His scientific interests include, but are not limited to, real-time image processing through Cellular Neural/Nonlinear Networks (CNN) with applications to mobile robotics, tokamak system monitoring and medical imaging; the development of bio-inspired robots based on the paradigm of the Central Pattern Generator (CPG) implemented on CNNs; and the development of soft-computing techniques for the prediction and classification of critical states in nuclear fusion plasmas.

Sabri Arik received the Dipl.-Ing. degree from Istanbul Technical University, Istanbul, Turkey, the Ph.D. degree from London South Bank University, London, UK, and the Habilitation degree from Istanbul University, Istanbul, Turkey. He is now with the Department of Computer Engineering, Istanbul University. His major research interests include cellular neural networks, nonlinear systems and matrix theory. He has authored and coauthored some 50 publications. Dr. Arik is a member of the IEEE Circuits and Systems Society Technical Committee of Cellular Neural Networks and Array Computing. He was the recipient of the Outstanding Young Scientist Award in 2002 from the Turkish Academy of Sciences, the Junior Science Award in 2005 from the Scientific and Technological Research Council of Turkey and the Frank May Prize (Best Paper Award) in 1996 from London South Bank University.
