A VISUAL SERVOING ARCHITECTURE FOR CONTROLLING ELECTROMECHANICAL SYSTEMS

R. Garrido, A. Soria, P. Castillo, I. Vásquez
CINVESTAV-IPN, Departamento de Control Automático, Av. IPN No. 2508, C.P. 07360, México, D.F., MEXICO. Fax: (52) 57 47 70 89. E-mail: [email protected]

Abstract -- Most of the architectures employed in visual servoing research use specialised hardware and software. The high cost of the specialised hardware and the engineering skills required to develop the software complicate the set-up of visually controlled systems. At the same time, the cost of the key elements for computer vision, such as cameras, frame grabbers and computers, continues to fall, making low-cost visual architectures feasible. In this paper we present a visual servoing architecture for controlling electromechanical systems based on standard off-the-shelf hardware and software. The proposed scheme allows a wide class of systems, including robot manipulators, to be controlled. The programming environment is based on MATLAB/Simulink®, which makes it possible to take advantage of the graphical programming facilities of Simulink®. Two experimental evaluations are presented: a linear motion cart controlled using direct visual servoing and a planar robot arm controlled in a look-and-move framework.

Index terms -- Direct visual servoing. Low-cost visual architecture. Electromechanical control system. Planar robot control.

I. INTRODUCTION

The sense of sight provides most of the information received by a human being, allowing him to interact with his environment and to survive. In the case of a robot manipulator, computer vision is a useful sensor since it mimics the human sense of sight and allows non-contact measurement of the environment, a feature which gives it more versatility than robots endowed only with optical encoders and limit switches. This feature could allow a robot to deal with problems related to flexibility and backlash in its transmission system and with partial knowledge of its kinematics. The task in visual control of robots is to use visual information to control the pose of the robot end-effector relative to a target object or a set of target features. The first works dealing with computer vision applied to robot manipulators appeared in the seventies [1], [5]. However, technical limitations at that time prevented an important development of this subject. During the eighties and nineties, the availability of more powerful processors and the development of digital electronics gave a strong impulse to the visual control of robots and mechanical systems. Examples of works involving visual control of robot manipulators are [3], [6], [7], [8], [10], [11], [12], [13].

For other mechanical systems, see for example [2], where visual control is applied to an inverted pendulum, and [9], where a flexible arm is controlled using a vision system. In most of the references cited above, the experiments were carried out using specialised hardware and software, which may increase the time required to set up an experiment. One of the first computer vision architectures was the BVV1 [2], which contained up to 15 Intel 8085A 8-bit microprocessors. In the same paper another architecture, the BVV2, was also proposed; the main difference between them is that the latter employs Intel 8086 16-bit processors. Almost all the software was written in machine code and the architectures were tailored to suit the computing needs of each experiment. Both systems work with a visual sampling period of 17 ms. Unfortunately, the authors do not mention whether other kinds of interfaces are available with these systems, e.g. analog-to-digital converters or interfaces for optical encoders. In the case of visual control of robot manipulators, Corke [1] proposed an architecture in which image processing is done using a Datacube card and visual control is performed through a VME-based computer. The visual sampling period was 20 ms and custom-made software, ARCL, was employed for programming. In some experiments the robot controller shares the control law execution. In Hashimoto and Kimura [4], vision processing was done using a Parsytec card mounted in a personal computer, and another personal computer hosting a transputer network was employed for control. The vision sampling period was 85 ms and the period for joint robot control was 1 ms. Papanikolopoulos and Khosla [8] use an architecture in which an IDAS/150 board performs image processing and six Texas Instruments TMS320 DSP processors control the joints of a CMU DDArm II robot. All the cards were connected through a VME bus hosted in a Sun workstation. The vision sampling period was 100 ms and the robot controller had a period of 3.33 ms. The whole system runs under the Chimera-II real-time software environment. Another interesting architecture is proposed in [13], where a custom-made board based on a Texas Instruments TMS320C15 fixed-point processor is employed for image processing. The sampling period for image acquisition and processing was 16.7 ms. Control at the visual level is executed using a network of RISC processors with a sampling period of 7.3 ms, and joint control is left to the robot controller. A serial protocol connects the robot to the personal computer hosting the RISC and image processors.

In [10], a personal computer hosts a Texas Instruments TMS320C31 DSP processor for joint control of a direct-drive two-degrees-of-freedom robot and a Data Translation DT3851-4 card for image processing. The sampling period was 2.5 ms for joint control and 50 ms for the visual feedback loop. From the above non-exhaustive review, it can be concluded that in most cases data processing is executed using specialised and sometimes high-cost boards, and that it is not always possible to modify the image processing algorithms, since in some boards these algorithms are implemented in hardware. The control part also relies on specialised hardware such as transputers and DSPs. It is also worth remarking that programming is done in machine code or in C. This may be adequate for researchers with good programming skills, but other users would need some time to become familiar with the system and to set up an experiment. Motivated by the remarks made above, in this work we propose a visual servoing architecture for controlling electromechanical systems based on personal computers and standard off-the-shelf hardware and software. The paper is organised as follows. Section II describes the visual servoing architecture. In Section III, the proposed architecture is tested through two experiments, namely direct visual servoing of a linear motion cart and look-and-move control of a two-degree-of-freedom robot arm. Finally, Section IV gives some concluding remarks and discusses future work.

II. VISUAL SERVOING ARCHITECTURE

A. Overview
From the review presented in the introduction, it is clear that in most visual servoing architectures the image processing and control tasks are executed on separate processors. This philosophy is reasonable given the computing burden associated with image processing; in some cases the visual servoing algorithms on the robot control side may also require significant computing resources. However, as pointed out above, specialised hardware is usually employed for these purposes. In order to benefit from the advantages of this philosophy while avoiding the excessive costs associated with highly specialised components, it is attractive to integrate off-the-shelf hardware and software into a single architecture. This provides, on the one hand, a user-friendly control algorithm programming environment and, on the other hand, performance comparable with that of architectures proposed in the visual servoing literature. The proposed architecture achieves these goals by separating the visual servoing task into three components, each with a specific function (see Figure 1):
• A programming, algorithm development and data logging environment.
• A control component that interacts with the vision component to fulfil the control goals.
• A vision component capable of perceiving the environment.

Figure 1. Block diagram of the proposed visual servoing architecture: the electromechanical system is interfaced through a Servotogo S8 data acquisition card (A/D, D/A, optical encoders) to the real-time control computer (Wincon Client), which receives visual measurements over RS-232 from the image acquisition and processing computer (National Instruments PCI-1408 frame grabber, RS-170 camera, Borland C) and is connected via Ethernet to the programming and data logging computer (Wincon Server, MATLAB/Simulink, MS Visual C++).

B. Programming, algorithm development and data logging environment
The computer devoted to programming, development and data logging, called the Server in the sequel, hosts MATLAB/Simulink® from The MathWorks Inc., Wincon® from Quanser Consulting Inc. and MS Visual C++®, all running under Windows 95®. Control algorithms are programmed graphically in Simulink®, and the graphical code is then compiled. Wincon® (Server part) downloads the compiled code to the real-time control computer, which we will call the Client. Once the code has been downloaded, the Server can start and stop the experiment, change controller parameters on the fly and log data from the Client. Interconnection between the Server and the Client is made through an Ethernet network. Further details can be found in [15]. In our set-up, the Server is a Pentium computer running at 200 MHz.

C. Real-time control
For the Client, we use a computer with a Pentium processor running at 350 MHz under Windows 95®. Wincon® (Client part) runs the code generated at the Server. Data acquisition is performed using a Servotogo S8 card, which handles optical encoders and analog voltage inputs and outputs. The achievable sampling time depends on the processing power of the computer; in the experiments we set the sampling frequency to 1 kHz.
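The real-time code executed on the Client is generated automatically from the Simulink diagram, so the user never writes this loop by hand. Purely to illustrate what the generated 1 kHz joint-level loop amounts to, the following C sketch reads an encoder, computes a PD action and writes a voltage to the D/A converter; the I/O routines, scaling factors and gains are hypothetical placeholders (stubbed so the sketch compiles on its own), not the Servotogo S8 driver API.

```c
/* Conceptual sketch of the 1 kHz joint-level loop executed on the Client.
 * On the real platform this loop is generated automatically from the
 * Simulink diagram and executed by Wincon; the I/O routines below are
 * hypothetical stand-ins for the Servotogo S8 driver, stubbed here so
 * the sketch compiles on its own. */

#define TS            0.001   /* sampling period, s (1 kHz)        */
#define COUNTS_TO_RAD 1.0e-3  /* encoder scaling, plant dependent  */

/* --- hypothetical hardware access, stubbed for illustration ------- */
static long s8_read_encoder(int ch)          { (void)ch; return 0; }
static void s8_write_dac(int ch, double v)   { (void)ch; (void)v; }
static void wait_for_tick(void)              { /* block until next 1 ms tick */ }

int main(void)
{
    double q, q_old = 0.0, qd, u;
    double q_ref = 0.5;           /* joint reference, rad (illustrative) */
    double kp = 50.0, kv = 1.0;   /* PD gains (illustrative values)      */
    int k;

    for (k = 0; k < 1000; k++) {              /* 1 s of control at 1 kHz  */
        wait_for_tick();
        q  = COUNTS_TO_RAD * s8_read_encoder(0);
        qd = (q - q_old) / TS;                /* crude velocity estimate  */
        q_old = q;
        u = kp * (q_ref - q) - kv * qd;       /* PD control law           */
        s8_write_dac(0, u);                   /* output to the amplifier  */
    }
    return 0;
}
```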

D. Image acquisition and processing
Image acquisition and processing is performed using a Pentium-based computer running at 450 MHz under Windows NT®. For image acquisition we use a Pulnix camera, model 9710, which outputs a video signal in the RS-170 standard. The image is converted into digital data using a National Instruments PCI-1408 frame grabber. Image processing was programmed in Borland C® and consists of image thresholding followed by detection of the object position through computation of its centroid. The object is the part of the electromechanical system that needs to be controlled, for example the tip of a robot arm. Once the centroid of the object of interest has been computed, it is transmitted to the Client via an RS-232 link at 115.2 kbaud. The visual information is then available as a block in a Simulink diagram, in the same way as an optical encoder or a digital-to-analog converter. It is worth noting that once the image acquisition and processing program is launched, it requires no further attention from the user. The visual sampling rate is 50 Hz (20 ms) and is determined by the time required for image acquisition (16.7 ms), processing (2.3 ms) and transmission of the data to the Client (1 ms).
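To make the thresholding-and-centroid step concrete, the following C sketch shows one way of computing the centroid of a dark target on a light background, assuming the frame grabber has already placed an 8-bit grey-level image in memory. The image size, buffer layout and threshold value are illustrative assumptions; the original Borland C program is not reproduced here.

```c
/* Minimal sketch of the thresholding and centroid computation described
 * in the text, assuming the frame grabber has already placed an 8-bit
 * grey-level image in a row-major buffer. Image size and threshold are
 * illustrative assumptions. */
#include <stdio.h>

#define WIDTH     640
#define HEIGHT    480
#define THRESHOLD 128   /* grey level separating object from background */

/* Returns 1 and writes the centroid (in pixels) if an object was found. */
static int centroid(const unsigned char *img, double *cx, double *cy)
{
    long sum_x = 0, sum_y = 0, count = 0;
    int x, y;

    for (y = 0; y < HEIGHT; y++) {
        for (x = 0; x < WIDTH; x++) {
            /* binarise: pixels darker than the threshold belong to the
             * black target attached to the mechanism */
            if (img[y * WIDTH + x] < THRESHOLD) {
                sum_x += x;
                sum_y += y;
                count++;
            }
        }
    }
    if (count == 0)
        return 0;
    *cx = (double)sum_x / count;
    *cy = (double)sum_y / count;
    return 1;
}

int main(void)
{
    static unsigned char image[WIDTH * HEIGHT]; /* filled by the grabber */
    double cx, cy;

    if (centroid(image, &cx, &cy))
        printf("centroid: (%.1f, %.1f) px\n", cx, cy);
    /* the coordinates would then be sent to the Client over RS-232 */
    return 0;
}
```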

E. Platform development
Table 1 summarises the key elements of the architecture, divided into hardware, standard software and non-standard software. Here, non-standard software refers to the drivers or programs developed to integrate the standard hardware and software elements into the proposed visual servoing scheme.

Table 1. Key elements of the proposed architecture.

Development-Data Logging:
• Hardware: PC, Ethernet card.
• Standard software: MATLAB/Simulink®, Wincon Server, C/C++ compiler (MS Visual C++).
• Non-standard software: I/O RTW driver for the Servotogo card, RS-232 RTW port driver.

Vision:
• Hardware: PC, camera, frame grabber, RS-232 port.
• Standard software: frame grabber set-up software, C/C++ compiler.
• Non-standard software: RS-232 communication routines in C/C++, image processing algorithm in C/C++.

Real-Time Control:
• Hardware: PC, I/O card, Ethernet card.
• Standard software: Wincon Client.

III. EXPERIMENTAL EVALUATION

Two experimental evaluations were made to test the platform. In the first experiment, a linear motion cart is controlled under the direct visual servoing philosophy [12], meaning that the measurement used for feedback comes only from the vision subsystem. In the second experiment, a two-degree-of-freedom robot is controlled using the look-and-move philosophy [12]; in this case the control law uses measurements from both the vision subsystem and the robot optical encoders. In both cases we employed image-based controllers, which means that position measurements are expressed in pixels, thus avoiding calibration problems; see [1] for further details on image-based control. Programming the control laws took less than an hour in each case. Moreover, tuning was easy thanks to the on-the-fly parameter changes offered by Wincon®.

Figure 2. Block diagram for the experiment with the linear motion cart.

A. Linear motion cart
The prototype consists of a solid aluminium cart driven by a DC motor. The cart slides along a stainless steel shaft on linear bearings. The prototype was covered in white and a black circle of 7 cm diameter was attached to the cart. In this experiment we used a discrete PID controller in a direct visual servoing scheme (see Figure 2); the only information used for feedback comes from the vision system. The experimental result is depicted in Figure 3. The desired output along the x axis is a square-wave signal of 30 pixels amplitude and 0.05 Hz frequency. The result shows several features typical of electromechanical systems controlled using visual information. First, there is an overshoot in the response. This behaviour is due to the fact that the simple PID controller employed in the experiment does not take into account the time delay introduced by the visual measurement; increasing the proportional gain increases the overshoot, whereas decreasing it avoids overshoot but increases the steady-state error. This problem may be alleviated with integral action, but high integral gains produce oscillatory behaviour. Another factor affecting the steady-state position error is the quantization introduced by the camera, which may be coarser than that of an optical encoder.
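As an illustration of the direct visual servoing loop just described, the following C sketch implements a discrete PID acting on the pixel error at the 50 Hz visual rate. The gains, the reference and the measurement and actuation routines are illustrative placeholders; they are not the values or drivers used in the reported experiment.

```c
/* Minimal sketch of a discrete PID acting on the pixel error, of the kind
 * used for the direct visual servoing of the cart. It runs at the 50 Hz
 * visual sampling rate; gains, reference and the I/O calls are illustrative
 * placeholders rather than the values used in the reported experiment. */
#include <stdio.h>

#define TS 0.02                 /* visual sampling period, s (50 Hz) */

static double read_centroid_x(void)      { return 0.0; } /* stub: from vision PC */
static void   write_motor_volts(double v){ (void)v; }    /* stub: via D/A        */

int main(void)
{
    double kp = 0.05, ki = 0.01, kd = 0.02;  /* illustrative gains */
    double x_ref = 30.0;                     /* reference, pixels  */
    double e, e_old = 0.0, integral = 0.0, u = 0.0;
    int k;

    for (k = 0; k < 500; k++) {              /* 10 s at 50 Hz           */
        e = x_ref - read_centroid_x();       /* error in pixels         */
        integral += TS * e;
        u = kp * e + ki * integral + kd * (e - e_old) / TS;
        e_old = e;
        write_motor_volts(u);                /* command to the DC motor */
    }
    printf("final control: %.3f V\n", u);
    return 0;
}
```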

Figure 3. Experimental result for the linear motion cart.

B. Robot arm
The robotic system considered in this experiment consists of an in-house-built planar robot manipulator with two degrees of freedom moving in the vertical plane. The vision system provides the position of the object centroid measured directly on the image plane in pixels. In order to obtain a good binary image, a metallic circle at the robot tip was painted black and the rest of the robot was painted white. Figure 4 depicts a block diagram of the closed-loop control system used for the robot. In this case a look-and-move structure is used, since optical encoders measure the joint position $q$, from which the joint velocity $\dot{q}$ is estimated numerically using a high-pass filter. The PD plus gravity compensation control law is:

$$\tau = J^{T}(q)\, K_{p}\, R(\theta)\, \tilde{x}_{s} - K_{v}\, \dot{q} + g(q) \qquad (1)$$

where $q$ is the vector of joint displacements, $\tau$ is the $n \times 1$ vector of applied joint torques, $J(q)$ is the Jacobian matrix, $K_{p}$ and $K_{v}$ are $2 \times 2$ symmetric positive definite gain matrices, $R(\theta)$ is the rotation matrix generated by rotating the camera about its optical axis by $\theta$ radians, $\tilde{x}_{s} = x_{s}^{*} - x_{s}$ is the error vector in the image plane in pixels, and $g(q)$ is the vector of gravitational torques. Further details about this algorithm can be found in [4]. In the experimental set-up, the centre of the first robot axis coincides with the origin of the image plane and the camera was aligned so that $\theta = 0$; hence $R(\theta)$ may be taken as the identity matrix.
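As a concrete illustration of how (1) can be evaluated for a two-degree-of-freedom planar arm, the C sketch below maps the image-space proportional action through the Jacobian transpose, adds joint-space damping and compensates gravity, with $R(\theta) = I$ as in the experiment. The link lengths, masses and gains are assumed values for illustration only, the links are modelled as point masses at their distal ends, and the joint velocity is assumed to be available from the encoder-based estimate; none of these are the parameters of the actual robot.

```c
/* Sketch of the evaluation of control law (1) for a two-DOF planar arm:
 * tau = J^T(q) Kp x_tilde - Kv qdot + g(q), with R(theta) = I as in the
 * experiment. Link lengths, masses and gains are illustrative assumptions;
 * the joint velocity qdot is assumed to be available from the encoder-based
 * estimate described in the text. */
#include <math.h>
#include <stdio.h>

#define L1 0.3   /* link lengths, m (assumed) */
#define L2 0.3
#define M1 1.0   /* link masses, kg (assumed) */
#define M2 1.0
#define G  9.81

static void control_law(const double q[2], const double qdot[2],
                        const double x_tilde[2], double tau[2])
{
    double kp = 0.02, kv = 1.5;  /* illustrative scalar gains: Kp = kp*I, Kv = kv*I */

    /* Jacobian of the planar two-link arm */
    double j11 = -L1 * sin(q[0]) - L2 * sin(q[0] + q[1]);
    double j12 = -L2 * sin(q[0] + q[1]);
    double j21 =  L1 * cos(q[0]) + L2 * cos(q[0] + q[1]);
    double j22 =  L2 * cos(q[0] + q[1]);

    /* gravity torques, links modelled as point masses at the link ends */
    double g1 = (M1 + M2) * G * L1 * cos(q[0]) + M2 * G * L2 * cos(q[0] + q[1]);
    double g2 = M2 * G * L2 * cos(q[0] + q[1]);

    /* proportional action on the image error (pixels) mapped through J^T */
    double fx = kp * x_tilde[0];
    double fy = kp * x_tilde[1];

    tau[0] = j11 * fx + j21 * fy - kv * qdot[0] + g1;
    tau[1] = j12 * fx + j22 * fy - kv * qdot[1] + g2;
}

int main(void)
{
    double q[2] = {0.5, 0.3}, qdot[2] = {0.0, 0.0};
    double x_tilde[2] = {30.0, 0.0};   /* image error in pixels */
    double tau[2];

    control_law(q, qdot, x_tilde, tau);
    printf("tau = [%.3f, %.3f] Nm\n", tau[0], tau[1]);
    return 0;
}
```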

Two experiments were performed. In the first, the set point was $x_{s}^{*} = [\,r_{1} \;\; 0\,]^{T}$, with $r_{1}$ a square-wave signal of 30 pixels amplitude centred at 85 pixels and with a frequency of 0.05 Hz; the robot response along the x axis is depicted in Figure 5. In the second experiment, the reference was set to $x_{s}^{*} = [\,0 \;\; r_{2}\,]^{T}$, with $r_{2}$ a square-wave signal of the same frequency as $r_{1}$ and 20 pixels amplitude centred at 0 pixels. In the first experiment the set point was reached in 2 s; in the second, the settling time was longer. These results point out the nonlinear nature of the closed-loop system. The gains were set so that the responses do not exhibit overshoot; as in the cart experiment, higher gains produce overshoot, a behaviour essentially due to the time delay introduced by the vision system. Note also that the response along the y axis exhibits small oscillations; this phenomenon is due to the interlaced nature of the RS-170 standard.

Figure 5. Experimental results for the robot arm.

Figure 4. Block diagram for the experiment with the robot arm

IV. CONCLUDING REMARKS

In this work, a visual servoing architecture for controlling electromechanical systems based on off-the-shelf hardware and software has been proposed. Programming, development and data logging are performed through MATLAB/Simulink®, providing a user-friendly interface and performance comparable with other visual servoing architectures. Modularity is a key characteristic of the proposed platform: in order to incorporate new hardware, it is only necessary to write the corresponding drivers for the MATLAB/Simulink environment. Moreover, if more computing power is needed, personal computers with increased capabilities may be employed without changing the software. Two experiments were presented to test the capabilities of the architecture. Future work includes adding a second frame grabber and another video camera to perform stereo vision, as well as the inclusion of a motorised platform.

References
[1] Corke, P., Visual Control of Robots: High Performance Visual Servoing. Taunton, Somerset, England: Research Studies Press, 1996.
[2] Dickmanns, E. & Graefe, V., "Applications of Dynamic Monocular Machine Vision", Machine Vision and Applications, vol. 1, pp. 241-261.
[3] Feddema, J. & Lee, C., "Adaptive image feature prediction and control for visual tracking with a hand-eye coordinated camera", IEEE Trans. on Systems, Man and Cybernetics, vol. 20, no. 5, 1990, pp. 1172-1183.
[4] Hashimoto, K. & Kimura, H., "LQR optimal and non-linear approaches to visual servoing", in Hashimoto, K. (Ed.), Visual Servoing, Singapore: World Scientific, 1993, pp. 165-198.
[5] Hutchinson, S., Hager, G. & Corke, P., "A Tutorial on Visual Servo Control", IEEE Trans. on Robotics and Automation, vol. 12, no. 5, October 1996, pp. 651-670.
[6] Kelly, R., "Robust Asymptotically Stable Visual Servoing of Planar Robots", IEEE Trans. on Robotics and Automation, vol. 12, no. 5, October 1996, pp. 759-766.
[7] Maruyama, A. & Fujita, M., "Robust Control for Planar Manipulators with Image Feature Parameter Potential", Advanced Robotics, vol. 12, no. 1, 1998, pp. 67-80.
[8] Papanikolopoulos, N. & Khosla, P., "Adaptive Robotic Visual Tracking: Theory and Experiments", IEEE Trans. on Automatic Control, vol. 38, no. 3, March 1993, pp. 429-444.
[9] Tang, P., Wang, H. & Lu, S., "A vision-based position control system for a one-link flexible arm", J. of the Chinese Institute of Engineers, vol. 18, no. 4, 1995, pp. 565-573.
[10] Reyes, F. & Kelly, R., "Experimental Evaluation of Fixed-Camera Direct Visual Controllers on a Direct-Drive Robot", Proc. 1998 IEEE International Conference on Robotics and Automation (Leuven, Belgium, May 16-20), New York: IEEE, 1998, pp. 2327-2332.
[11] Richards, C. & Papanikolopoulos, N., "Detecting and Tracking for Robotic Visual Servoing Systems", Robotics & Computer-Integrated Manufacturing, vol. 13, no. 2, 1997, pp. 101-120.
[12] Weiss, L., Sanderson, A. & Neuman, C., "Dynamic Sensor-Based Control of Robots with Visual Feedback", IEEE J. of Robotics and Automation, vol. RA-3, October 1987, pp. 404-417.
[13] Wilson, W., Williams Hulls, C. & Bell, G., "Relative End-Effector Control Using Cartesian Position Based Visual Servoing", IEEE Trans. on Robotics and Automation, vol. 12, no. 5, October 1996, pp. 684-696.
[14] Castillo-García, P., Plataforma de control visual para servomecanismos, M.Sc. Thesis, Departamento de Control Automático, CINVESTAV-IPN, August 2000.
[15] Quanser Consulting, Wincon 3.0.2a Manual.
