Stable Visual Servoing of an Overactuated Planar Parallel Robot

M. A. Trujano, R. Garrido, A. Soria
Departamento de Control Automático, CINVESTAV-IPN, México D.F., México

Abstract – This work presents an image-based visual servoing scheme applied to a class of overactuated planar parallel robots with revolute joints. The objective is to move the robot end-effector to a desired constant image position. A Proportional Derivative algorithm computes the torques for the robot active joints; both the proportional and derivative actions operate at the visual level. A linear filter allows obtaining velocity estimates from the position measurements supplied by a vision system. Lyapunov stability theory allows concluding closed-loop stability without invoking the LaSalle-Krasovskii invariance principle. Experiments on a laboratory prototype permit evaluating the performance of the closed-loop system.

Keywords – Visual servoing, overactuated planar parallel robot, PD control law, velocity estimation, Lyapunov stability.

I. INTRODUCTION

Forward kinematics is a key component of most published control schemes for parallel robots, since it allows determining the position and orientation of the robot end-effector [1], [2], [3], [4], [5]. However, calibration errors in the forward kinematics parameters lead to position and orientation errors. Moreover, solving the forward kinematics of a parallel robot in real time may become difficult as the number of degrees of freedom increases [6], [7], [8], [9]. An alternative to forward kinematics for obtaining the end-effector coordinates is a vision system, which dispenses with exact knowledge of the robot forward kinematics. This paper focuses on redundant planar parallel robots of the RRR type (see Fig. 1) studied in [4] and [10]. An aspect worth noting is that overactuation reduces or even eliminates some kinds of singularities and improves Cartesian stiffness in the robot workspace [10], [11].

Visual servoing of serial manipulators has long been an active research topic in robotics [12], [13], [14], [15], [16], [17], [18], [19], [20]. The objective of visual robot control is to use visual information to control the pose of the robot end-effector relative to a target object or a set of target features. In contrast, visual servoing of parallel robots is an emergent area, and only recently have papers reported research in this direction. Visual information on the robot legs allows controlling a Gough-Stewart parallel robot in [21] and [22]. Reference [23] reports another approach, in which the authors show how to control an I4R parallel robot using only visual feedback; simulation results using a realistic robot model give satisfactory closed-loop performance. The approach in [24] allows controlling a Gough-Stewart platform experimentally in a camera-to-hand framework. References [25] and [26] present a visual control scheme applied to the delta robot "RoboTenis". This approach uses the robot native joint controller as an inner loop, and the camera, which rests on the robot end-effector, closes an outer control loop; the authors show uniform ultimate boundedness of the tracking error and validate the approach experimentally. The aforementioned approaches do not explicitly take into account the robot dynamics; their foundations are the standard kinematic visual servoing techniques. References on visual control of parallel robots that do consider the robot dynamic model are [27] and [28]. The methodology presented in [27] takes into account the robot dynamics using an end-effector-mounted camera; the proposed control law solves a set-point regulation problem using fuzzy logic, and numerical simulations validate the approach. Assuming knowledge of the robot dynamic and kinematic parameters, the approach presented in [28] uses a Cartesian computed-torque control law to obtain linear robot dynamics.

The goal of this work is to present an image-based visual servoing Proportional Derivative (PD) controller for set-point control of a class of overactuated planar parallel manipulators. The proposed control law uses only visual and joint position measurements. Moreover, it does not need explicit knowledge of the robot dynamic parameters. Lyapunov theory takes into account the robot dynamic model and allows concluding asymptotic closed-loop stability without invoking the LaSalle-Krasovskii invariance theorem. Experiments show that the closed-loop system is robust in the face of kinematic and camera pose uncertainty.

Throughout this paper, the norm of a vector $x \in \mathbb{R}^n$ is defined as $\|x\| = \sqrt{x^T x}$. For a positive definite matrix $A \in \mathbb{R}^{n \times n}$, $\lambda_m\{A\}$ and $\lambda_M\{A\}$ stand for its smallest and largest eigenvalues, and the matrix norm is the corresponding induced norm $\|A\| = \sqrt{\lambda_M\{A^T A\}}$.

The paper layout is as follows. Section II presents the modeling of the robot and the vision system. Section III describes the proposed control law and its corresponding stability analysis. Section IV shows experiments on a laboratory prototype. Finally, Section V gives some concluding remarks.

II. MODELING ISSUES

A. Modeling of the overactuated planar parallel robot

In accordance with [10] and [11], the model of the planar parallel robot with revolute joints shown in Fig. 1 can be obtained through an equivalent open-chain mechanism; Fig. 2 depicts this equivalent open-chain form. The well-known Euler-Lagrange formalism [29] allows modeling each branch of the open-chain form.

Assuming that the robot moves in the horizontal plane, the following equations model the equivalent open-chain mechanism:

$$M_i \begin{bmatrix} \ddot{\theta}_i \\ \ddot{\beta}_i \end{bmatrix} + C_i \begin{bmatrix} \dot{\theta}_i \\ \dot{\beta}_i \end{bmatrix} = \begin{bmatrix} \tau_{ai} \\ \tau_{pi} \end{bmatrix}, \quad i = 1, 2, 3$$

$$M_i = \begin{bmatrix} M_{i11} & M_{i12} \\ M_{i21} & M_{i22} \end{bmatrix} = \begin{bmatrix} \alpha_i + 2\delta_i\cos\beta_i & \varepsilon_i + \delta_i\cos\beta_i \\ \varepsilon_i + \delta_i\cos\beta_i & \varepsilon_i \end{bmatrix}$$

$$C_i = \begin{bmatrix} C_{i11} & C_{i12} \\ C_{i21} & C_{i22} \end{bmatrix} = \begin{bmatrix} -\delta_i\dot{\beta}_i\sin\beta_i & -\delta_i(\dot{\theta}_i + \dot{\beta}_i)\sin\beta_i \\ \delta_i\dot{\theta}_i\sin\beta_i & 0 \end{bmatrix}$$

$$\alpha_i = m_{i1}r_{i1}^2 + I_{i1} + m_{i2}(a_i^2 + r_{i2}^2) + I_{i2}, \qquad \varepsilon_i = m_{i2}r_{i2}^2 + I_{i2}, \qquad \delta_i = m_{i2}a_i r_{i2}.$$

The above model assumes that all links have the same length, i.e., $L = a_i = b_i$. Parameters $I_{ij}$, $m_{ij}$ and $r_{ij}$ correspond to the inertia, mass, and center of mass of each link. Combining the equations described above yields

$$M\ddot{q} + C\dot{q} = \tau \qquad (1)$$

with

$$q = \begin{bmatrix} q_a \\ q_p \end{bmatrix}, \qquad \tau = \begin{bmatrix} \tau_a \\ \tau_p \end{bmatrix} \qquad (2)$$

$$M = \begin{bmatrix} M_{111} & 0 & 0 & M_{112} & 0 & 0 \\ 0 & M_{211} & 0 & 0 & M_{212} & 0 \\ 0 & 0 & M_{311} & 0 & 0 & M_{312} \\ M_{112} & 0 & 0 & M_{122} & 0 & 0 \\ 0 & M_{212} & 0 & 0 & M_{222} & 0 \\ 0 & 0 & M_{312} & 0 & 0 & M_{322} \end{bmatrix}, \qquad
C = \begin{bmatrix} C_{111} & 0 & 0 & C_{112} & 0 & 0 \\ 0 & C_{211} & 0 & 0 & C_{212} & 0 \\ 0 & 0 & C_{311} & 0 & 0 & C_{312} \\ C_{121} & 0 & 0 & 0 & 0 & 0 \\ 0 & C_{221} & 0 & 0 & 0 & 0 \\ 0 & 0 & C_{321} & 0 & 0 & 0 \end{bmatrix}.$$

The vector $q_a = [\theta_1\ \theta_2\ \theta_3]^T \in \mathbb{R}^m$ stands for the angles of the active (motorized) joints and $q_p = [\beta_1\ \beta_2\ \beta_3]^T \in \mathbb{R}^{n-m}$ for the angles of the passive (non-motorized) joints. In the same way, $\tau_a = [\tau_{a1}\ \tau_{a2}\ \tau_{a3}]^T \in \mathbb{R}^m$ and $\tau_p = [\tau_{p1}\ \tau_{p2}\ \tau_{p3}]^T \in \mathbb{R}^{n-m}$ correspond to the torques in the active and passive joints, respectively. Neglecting friction in the passive joints allows setting $\tau_p = 0$.

The following equations [10] describe the forward kinematics:

$$X = \begin{bmatrix} x \\ y \end{bmatrix} = f(q) \qquad (3)$$

$$x = \frac{1}{D}\left[ X_{B1}^2(y_{B2} - y_{B3}) + X_{B2}^2(y_{B3} - y_{B1}) + X_{B3}^2(y_{B1} - y_{B2}) \right]$$

$$y = \frac{1}{D}\left[ X_{B1}^2(x_{B3} - x_{B2}) + X_{B2}^2(x_{B1} - x_{B3}) + X_{B3}^2(x_{B2} - x_{B1}) \right]$$

$$D = 2\left[ x_{B1}(y_{B2} - y_{B3}) + x_{B2}(y_{B3} - y_{B1}) + x_{B3}(y_{B1} - y_{B2}) \right]$$

$$x_{Bi} = x_{Oi} + L\cos\theta_i, \qquad y_{Bi} = y_{Oi} + L\sin\theta_i, \qquad X_{Bi}^2 = x_{Bi}^2 + y_{Bi}^2, \quad i = 1, 2, 3.$$

The entries of the vector $X \in \Omega \subset \mathbb{R}^2$ are the coordinates of the robot end-effector; the set $\Omega$ corresponds to the robot workspace.

Fig. 1. Overactuated planar parallel robot.

Fig. 2. Equivalent open chain mechanism.

The next key relationships represent the robot differential kinematics:

$$\dot{q} = W\dot{X}, \qquad W = \begin{bmatrix} S \\ H \end{bmatrix} \qquad (4)$$

$$\dot{q}_a = S\dot{X} \qquad (5)$$

$$S = \begin{bmatrix} \dfrac{\cos(\theta_1 + \beta_1)}{a_1\sin\beta_1} & \dfrac{\sin(\theta_1 + \beta_1)}{a_1\sin\beta_1} \\ \dfrac{\cos(\theta_2 + \beta_2)}{a_2\sin\beta_2} & \dfrac{\sin(\theta_2 + \beta_2)}{a_2\sin\beta_2} \\ \dfrac{\cos(\theta_3 + \beta_3)}{a_3\sin\beta_3} & \dfrac{\sin(\theta_3 + \beta_3)}{a_3\sin\beta_3} \end{bmatrix} \qquad (6)$$

$$H = \begin{bmatrix} -\dfrac{d_{1y}}{a_1 b_1\sin\beta_1} & \dfrac{d_{1x}}{a_1 b_1\sin\beta_1} \\ -\dfrac{d_{2y}}{a_2 b_2\sin\beta_2} & \dfrac{d_{2x}}{a_2 b_2\sin\beta_2} \\ -\dfrac{d_{3y}}{a_3 b_3\sin\beta_3} & \dfrac{d_{3x}}{a_3 b_3\sin\beta_3} \end{bmatrix} \qquad (7)$$

$$d_{ix} = L\left[\cos\theta_i + \cos(\theta_i + \beta_i)\right], \qquad d_{iy} = L\left[\sin\theta_i + \sin(\theta_i + \beta_i)\right], \quad i = 1, 2, 3. \qquad (8)$$

The following expression gives an important relationship between $\tau$ and $\tau_a$:

$$W^T\tau = S^T\tau_a. \qquad (9)$$
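The forward kinematics (3) amounts to computing the circumcenter of the three distal points $B_i$. The following minimal numeric sketch (Python; the sample points are hypothetical, not taken from the prototype) illustrates the computation:

```python
import numpy as np

def forward_kinematics(xB, yB):
    """Circumcenter of the three distal points B_i, as in eq. (3).

    xB, yB: length-3 arrays with the coordinates x_Bi, y_Bi.
    """
    XB2 = xB**2 + yB**2                        # X_Bi^2 = x_Bi^2 + y_Bi^2
    D = 2.0 * (xB[0]*(yB[1] - yB[2])
             + xB[1]*(yB[2] - yB[0])
             + xB[2]*(yB[0] - yB[1]))          # vanishes iff the B_i are collinear
    x = (XB2[0]*(yB[1] - yB[2]) + XB2[1]*(yB[2] - yB[0])
         + XB2[2]*(yB[0] - yB[1])) / D
    y = (XB2[0]*(xB[2] - xB[1]) + XB2[1]*(xB[0] - xB[2])
         + XB2[2]*(xB[1] - xB[0])) / D
    return np.array([x, y])

# Sanity check: three points on a circle of radius 0.15 m centered at (0.1, 0.2)
ang = np.array([0.3, 2.1, 4.0])
xB = 0.1 + 0.15*np.cos(ang)
yB = 0.2 + 0.15*np.sin(ang)
print(forward_kinematics(xB, yB))   # recovers approximately [0.1, 0.2]
```

Since the points $B_i$ all lie at distance $L = b_i$ from the end-effector, the end-effector position is exactly this circumcenter, which is why the formula needs no trigonometric inversion.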

Using (1) and (9), it is possible to write the robot dynamics (1) in terms of the robot end-effector coordinates:

$$\bar{M}\ddot{X} + \bar{C}\dot{X} = S^T\tau_a \qquad (10)$$

where

$$\bar{M} = W^T M W, \qquad \bar{C} = W^T M \dot{W} + W^T C W.$$

Note that (10) relates the active joint torques $\tau_a$ and the end-effector coordinates $X$. Matrices $\bar{M}$ and $\bar{C}$ have the following structural properties as long as the matrix $W$ has full rank [11], [30]:

Property 1. $\bar{M}$ is a symmetric positive definite matrix.

Property 2. $\dot{\bar{M}} - 2\bar{C}$ is a skew-symmetric matrix.

Property 3. There exists a positive constant $k_C$ such that $\|\bar{C}\| \le k_C\|\dot{X}\|$.

B. Modeling of the vision system

Figure 3 shows the robot and camera configuration. The robot moves on the horizontal coordinate plane $x_R$-$y_R$, known as the robot coordinate plane. The camera optical center is located at a distance $z$ from the robot coordinate plane, and the optical axis intersects this plane at $O = [O_x\ O_y]^T$. The image coordinate frame $x_i$-$y_i$ is parallel to the robot coordinate frame. The visual feature of interest is the robot end-effector position $X_i = [x_i\ y_i]^T$ in the image coordinate frame.

Fig. 3. Fixed camera configuration and screen coordinate system.

Consider a perspective transformation under an ideal pinhole camera model [14]. The following expression gives the position of the end-effector in the screen coordinate frame:

$$X_i = \alpha h R(\phi)\left[X - O\right] + C \qquad (11)$$

where $C = [C_x\ C_y]^T$ is the image center,

$$R(\phi) = \begin{bmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{bmatrix}$$

is the rotation matrix generated by clockwise rotating the camera about its optical axis by $\phi$ radians, $\alpha$ is the scale factor in pixels/m, and $h$ is the magnification factor defined as

$$h = \frac{\lambda}{\lambda - z} \qquad (12)$$

where $\lambda$ is the focal length. The angle $\phi$ fulfills the inequality $-\pi/2 < \phi < \pi/2$. Taking the time derivative of (11) yields

$$\dot{X}_i = \alpha h R(\phi)\dot{X}. \qquad (13)$$

III. PROPOSED CONTROL LAW

Substituting (3) into (11) yields

$$X_i = \alpha h R(\phi)\left[f(q) - O\right] + C. \qquad (14)$$

Using this last relationship permits defining the desired position in the image coordinate frame, $X_i^* = [x_i^*\ y_i^*]^T$, as follows:

$$X_i^* = \alpha h R(\phi)\left[f(q^*) - O\right] + C. \qquad (15)$$

In this case, $q^*$ is a robot joint configuration corresponding to the desired position $X^*$. Define the image position error $\tilde{X}_i$ as the visual distance between the desired and measured end-effector positions:

$$\tilde{X}_i = X_i^* - X_i = \begin{bmatrix} x_i^* \\ y_i^* \end{bmatrix} - \begin{bmatrix} x_i \\ y_i \end{bmatrix}. \qquad (16)$$

Therefore, substituting (14) and (15) into (16) leads to

$$\tilde{X}_i = \alpha h R(\phi)\left[f(q^*) - f(q)\right]. \qquad (17)$$
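The projection model (11) and the error identity (17) can be checked numerically: the image center $C$ and the axis intersection $O$ cancel out of the image error. In the sketch below (Python), all camera parameter values are hypothetical, and the sign convention chosen for the clockwise rotation matrix is an assumption:

```python
import numpy as np

# Hypothetical camera parameters (not the prototype's values)
alpha, lam, z = 72000.0, 0.008, 1.2        # pixels/m, focal length (m), distance to plane (m)
h = lam / (lam - z)                        # magnification factor, eq. (12)
phi = 0.1                                  # camera roll about the optical axis (rad)
R = np.array([[ np.cos(phi), np.sin(phi)],  # clockwise rotation (sign convention assumed)
              [-np.sin(phi), np.cos(phi)]])
O = np.array([0.05, 0.02])                 # optical-axis intersection with the robot plane
C = np.array([64.0, 64.0])                 # image center (pixels)

def to_image(X):
    """Pinhole projection of a robot-plane point, eq. (11)."""
    return alpha*h*(R @ (X - O)) + C

X_des, X_cur = np.array([0.10, 0.20]), np.array([0.12, 0.17])
err = to_image(X_des) - to_image(X_cur)        # image error, eq. (16)
err_direct = alpha*h*(R @ (X_des - X_cur))     # eq. (17): O and C have cancelled
print(np.allclose(err, err_direct))            # True
```

This cancellation is what makes the scheme insensitive to the exact camera placement: only the scale $\alpha h$ and the roll angle $\phi$ relate the Cartesian error to the image error.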

Therefore, the control problem can be stated as that of designing a control law for the actuator torques $\tau_a$ such that the robot end-effector reaches the desired position; in other words, the control aim is achieved if $\lim_{t\to\infty}\tilde{X}_i(t) = 0$. In order to solve this problem, it is assumed that a control law $u$ defined in terms of the end-effector coordinates drives the robot dynamics (10), so the torques $\tau_a$ are the solutions of the following equation:

$$S^T\tau_a = u. \qquad (18)$$

The approach employed in [11] uses the Moore-Penrose pseudoinverse of $S^T$, $(S^T)^\dagger = S(S^T S)^{-1}$, which is equivalent to solving (18) in a least-squares sense. Hence, $\tau_a$ is computed as

$$\tau_a = (S^T)^\dagger u. \qquad (19)$$

Solution (19) makes sense only if the pseudoinverse $(S^T)^\dagger$ is well defined, i.e., if $S$ is full rank. Consider the following PD control law:

$$\tau_a = (S^T)^\dagger\left(K_P Y + K_D \vartheta\right) \qquad (20)$$

$$\dot{\vartheta} = -a^2\vartheta + a\dot{Y}. \qquad (21)$$

The terms $K_P = a k_1 I_2$ and $K_D = a k_2 I_2$ are the proportional and derivative gain matrices with $a, k_1, k_2 > 0$, $I_2 \in \mathbb{R}^{2\times 2}$ is the identity matrix, and $Y = R(\phi)^T\tilde{X}_i$. Equation (21) is a high-pass filter introduced in [33] for joint set-point regulation of robot manipulators; this filter produces the velocity estimate $\vartheta$ without differentiating the visual measurements, since with the state $w = \vartheta - aY$ it can be realized as $\dot{w} = -a^2(w + aY)$, $\vartheta = w + aY$.

Substituting the control law (20) into the robot dynamic model (10) produces the closed-loop system dynamics

$$\frac{d}{dt}\begin{bmatrix} \dot{\tilde{X}} \\ Y \\ \vartheta \end{bmatrix} = \begin{bmatrix} -\bar{M}^{-1}\left[K_P Y + K_D\vartheta + \bar{C}\dot{\tilde{X}}\right] \\ \alpha h\dot{\tilde{X}} \\ -a^2\vartheta + a\alpha h\dot{\tilde{X}} \end{bmatrix} \qquad (22)$$

where $\tilde{X} = X^* - X$, so that $Y = \alpha h\tilde{X}$ and $\dot{Y} = \alpha h\dot{\tilde{X}}$.
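One step of the torque computation (19)-(21) can be sketched in discrete time as follows (Python; the gains, the placeholder matrix S, and the forward-Euler update are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

# Illustrative values: with a = 300, k1 and k2 are chosen so that
# K_P = a*k1*I and K_D = a*k2*I match gains of the order used in Section IV.
a, k1, k2 = 300.0, 0.4/300.0, 0.2771/300.0
dt = 5e-4                                 # control period (s)

def control_step(S, Y, w):
    """PD visual servo law (20) with the velocity filter (21).

    S : 3x2 differential-kinematics matrix of eq. (6), assumed full rank
    Y : rotated image error R(phi)^T X~_i
    w : filter state; thetahat = w + a*Y realizes thetahat' = -a^2*thetahat + a*Y'
    """
    thetahat = w + a*Y                            # velocity-estimate signal
    u = a*k1*Y + a*k2*thetahat                    # K_P Y + K_D thetahat
    tau_a = S @ np.linalg.solve(S.T @ S, u)       # (S^T)^dagger u = S (S^T S)^{-1} u
    w_next = w + dt*(-a*a*thetahat)               # Euler update of w' = -a^2*thetahat
    return tau_a, w_next

S = np.array([[1.0, 0.2], [0.1, 1.1], [-0.5, 0.7]])   # placeholder full-rank S
Y = np.array([3.0, -2.0])
tau, w = control_step(S, Y, np.zeros(2))
print(np.allclose(S.T @ tau, a*k1*Y + a*k2*(a*Y)))    # True: S^T tau_a = u exactly
```

The last line checks the defining property of the distribution (19): the three motor torques reproduce the two-dimensional Cartesian control force exactly, with the pseudoinverse selecting the minimum-norm torque vector among the infinitely many solutions admitted by the actuation redundancy.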

The objective is now to provide conditions on the controller gains guaranteeing asymptotic stability. To carry out the stability analysis, consider the following Lyapunov function candidate:

$$V = \frac{1}{2a^2}\dot{\tilde{X}}^T\bar{M}\dot{\tilde{X}} - \frac{k_1}{a^4 k_2}Y^T\bar{M}\dot{\tilde{X}} + \frac{k_1}{a^4 k_2}\vartheta^T\bar{M}\dot{\tilde{X}} + \frac{k_1}{2a\alpha h}Y^T Y + \frac{k_2}{2a^2\alpha h}\vartheta^T\vartheta. \qquad (23)$$

Let

$$Z_1 = \frac{1}{2a}\dot{\tilde{X}} + \frac{k_1}{a^3 k_2}\vartheta, \qquad Z_2 = \frac{1}{2a}\dot{\tilde{X}} - \frac{k_1}{a^3 k_2}Y.$$

These definitions permit rewriting (23) as

$$V = Z_1^T\bar{M}Z_1 + Z_2^T\bar{M}Z_2 + \frac{k_1}{2a\alpha h}Y^T Y - \frac{k_1^2}{a^6 k_2^2}Y^T\bar{M}Y + \frac{k_2}{2a^2\alpha h}\vartheta^T\vartheta - \frac{k_1^2}{a^6 k_2^2}\vartheta^T\bar{M}\vartheta. \qquad (24)$$

The term $Z_2^T\bar{M}Z_2$ is a nonnegative function of $\dot{\tilde{X}}$ and $Y$, while $Z_1^T\bar{M}Z_1$ is a nonnegative function of $\dot{\tilde{X}}$ and $\vartheta$. The remaining terms yield a positive definite function with respect to $Y$ and $\vartheta$ if the controller gains fulfill the following inequalities:

$$k_2^2 > \frac{2\alpha h k_1\lambda_M\{\bar{M}\}}{a^5}, \qquad k_2^3 > \frac{2\alpha h k_1^2\lambda_M\{\bar{M}\}}{a^4}. \qquad (25)$$

After some simplifications and using Property 2, the time derivative of the Lyapunov function candidate (23) along the trajectories of the closed-loop system (22) can be written as

$$\dot{V} = -\frac{\alpha h k_1}{a^3 k_2}\dot{\tilde{X}}^T\bar{M}\dot{\tilde{X}} + \frac{\alpha h k_1^2}{a^4 k_2^2}\dot{\tilde{X}}^T\bar{M}\dot{\tilde{X}} - \frac{k_2}{a^2\alpha h}\vartheta^T\vartheta + \frac{k_1^3}{a^3 k_2^3}\vartheta^T\vartheta - \frac{k_1^2}{a^4 k_2^2}Y^T\bar{C}^T\dot{\tilde{X}} + \frac{k_1}{a^4 k_2}\vartheta^T\bar{C}^T\dot{\tilde{X}} - \frac{k_1}{a^2 k_2}\vartheta^T\bar{M}\dot{\tilde{X}} - \frac{2k_1^3}{a^3 k_2^3}Y^T Y. \qquad (26)$$

Using Property 3 yields

$$-\frac{k_1^2}{a^4 k_2^2}Y^T\bar{C}^T\dot{\tilde{X}} + \frac{k_1}{a^4 k_2}\vartheta^T\bar{C}^T\dot{\tilde{X}} \le \frac{k_1^2 k_C}{a^4 k_2^2}\|Y\|\,\|\dot{\tilde{X}}\|^2 + \frac{k_1 k_C}{a^4 k_2}\|\vartheta\|\,\|\dot{\tilde{X}}\|^2.$$

On the other hand, note that

$$-\frac{k_1}{a^2 k_2}\vartheta^T\bar{M}\dot{\tilde{X}} \le \frac{k_1\lambda_M\{\bar{M}\}}{2a^2 k_2}\left(\mu\|\dot{\tilde{X}}\|^2 + \frac{1}{\mu}\|\vartheta\|^2\right)$$

where $\mu = \alpha h k_1/(a^2 k_2)$ is a positive constant. Hence, the time derivative (26) satisfies

$$\dot{V} \le -Z^T A Z + \eta(\dot{\tilde{X}}, Y, \vartheta) \qquad (27)$$

$$Z^T A Z = a_1\|\dot{\tilde{X}}\|^2 + a_2\|Y\|^2 + a_3\|\vartheta\|^2, \qquad \eta(\dot{\tilde{X}}, Y, \vartheta) = \frac{k_1 k_C}{a^4 k_2}\left(\frac{k_1}{k_2}\|Y\| + \|\vartheta\|\right)\|\dot{\tilde{X}}\|^2$$

$$Z = \begin{bmatrix}\dot{\tilde{X}}^T & Y^T & \vartheta^T\end{bmatrix}^T, \qquad A = \mathrm{diag}\{a_1, a_2, a_3\}$$

$$a_1 = \frac{\alpha h k_1}{a^3 k_2}\left[\lambda_m\{\bar{M}\} - \frac{3k_1}{2a k_2}\lambda_M\{\bar{M}\}\right], \qquad a_2 = \frac{2k_1^3}{a^3 k_2^3}, \qquad a_3 = \frac{k_2}{a^2\alpha h} - \frac{k_1^3}{a^3 k_2^3} - \frac{\lambda_M\{\bar{M}\}}{2\alpha h}.$$

Note that $A$ is positive definite for $k_2$ and $a$ high enough. If inequalities (25) hold, then it follows from (23) and (24) that

$$\|Y\|^2 \le \frac{2a\alpha h}{k_1}V, \qquad \|\vartheta\|^2 \le \frac{2a^2\alpha h}{k_2}V. \qquad (28)$$

Moreover, from the definition of the vector $Z$ in (27), the following inequalities hold:

$$\|\dot{\tilde{X}}\| \le \|Z\|, \qquad \|Y\| \le \|Z\|, \qquad \|\vartheta\| \le \|Z\|. \qquad (29)$$

Note that the quotient b1 /a4 decreases when a grows. The above inequality permits writing the following upper bound for V in (27) 2 2 b V #4Z T AZ + 14 V 1/2 Z #4"(t ) Z a (31) b "(t ) *#min 2 A34 14 V 1/2. a Choose an initial value V (0) of the Lyapunov function and set a value of a high enough such that " (0) 5 0 . Under this last condition, assume that V grows; since V is continuous there exists a time interval [0,t1 ], where " (t ) 5 0 . However, according to (31) V decreases which is a contradiction; therefore, the Lyapunov function V decreases. Finally, using this fact permits concluding that the closed loop system is asymptotically stable. It is interesting to point out that the region of attraction increases by increasing the parameter a. IV. EXPERIMENTAL RESULTS Experiments conducted on a laboratory prototype (Fig.4) display the performance of the proposed control law. The nominal link lengths of the prototype are L*15 cm . Brushed servomotors from Moog, model C34L80W40 drive the active joints. Incremental optical encoders attached to the motors provide position measurements corresponding to the reference qa . These motors steer the active joints through timing belts with a 3.6:1 ratio. Pulse width modulation digital amplifiers from Copley Controls, model Junus 90 and working in current mode, drive the motors. Absolute optical encoders from US Digital, model A2, with 4096 pulses per turn, supply measurements of the robot active and passive joints angles $i and "i that allows computing ( S T )† . Two computers compose the control architecture; which is an update of the architecture presented in [34]. The first computer, called the vision computer and endowed with an Intel Core2 processor running at 2.4 GHz, executes image acquisition; a Dalsa Camera, model CA-1D-128A, is connected to this computer by means of a National Instruments card, model NI-1422. Image processing is performed using Visual C++ and the DIAS software [35]. 
The second computer, called the control computer and endowed with an Intel Pentium 4 processor running at 3.0 GHz, executes the control algorithm and performs data logging. This computer receives data from the vision computer through an RS-232 port at 115 kbaud. Data acquisition is carried out through a data card from Quanser Consulting, model MultiQ-3. This card reads signals from the optical incremental encoders attached to the motors and supplies control voltages to the power amplifiers. Optical absolute

encoders connect to the control computer through an RS-232 link using an AD2-B adapter from US Digital. The algorithms are coded in Matlab/Simulink 5.2 under the WinCon 3.02 real-time environment. A counter in the MultiQ-3 card sets a sampling period of T_ie = 0.5 ms, which corresponds to the master clock of the closed-loop system; this sampling period also sets the sampling time for reading the active-joint incremental optical encoders. The image sampling period is T_im = 5 ms; during this time interval, the vision computer executes data acquisition and processing, including the time required to send the centroid of the robot end-effector to the control computer through the RS-232 link. It is worth mentioning that T_im corresponds to the time delay introduced in the visual measurements. The absolute encoder measurements are sampled every T_ab = 15 ms. The sampling times for the visual and absolute encoder measurements are synchronized with the master clock. The numerical method chosen in Simulink was the ODE45 Dormand-Prince algorithm. The gains of the proposed controller were set to K_P = diag{0.4, 0.4}, K_D = diag{0.2771, 0.2771}, a = 300. The reference x_i* is a square wave with an amplitude of 8 pixels and a frequency of 0.3 Hz, plus a constant offset of 80 pixels. The following linear filter smooths both references:

$$G(s) = \frac{20}{s + 20} \qquad (32)$$

Figure 5 depicts the closed-loop response under control law (20), (21), and Fig. 6 shows the corresponding position errors.

V. CONCLUSIONS

This paper presents an image-based visual servoing Proportional Derivative regulator applied to an overactuated planar parallel robot. A key feature of this regulator is the use of end-effector velocity estimates obtained from measurements of the image position error in conjunction with a linear filter. The stability proof for the closed-loop system does not use the LaSalle-Krasovskii invariance principle.
A practical implementation shows good performance of the closed-loop system.

Fig. 4. Laboratory prototype.

Fig. 5. Closed-loop response using the proposed control law.

Fig. 6. Position errors under the proposed control law.

ACKNOWLEDGMENT

The authors would like to thank Gerardo Castro and Jesus Meza for their support. The authors also appreciate the comments of the reviewers.

REFERENCES

[1] S. Kock, W. Schumacher. A parallel x-y manipulator with actuation redundancy for high-speed and active stiffness application. Proc. of the Int. Conf. on Robotics and Automation, Leuven, Belgium, 1998.
[2] S. Kock, W. Schumacher. A mixed elastic and rigid-body dynamic model of an actuation redundant parallel robot with high-reduction gears. Proc. of the Int. Conf. on Robotics and Automation, San Francisco, U.S.A., 2000.
[3] S. Kock, W. Schumacher. Control of a fast parallel robot with a redundant chain and gearboxes: Experimental results. Proc. of the Int. Conf. on Robotics and Automation, San Francisco, U.S.A., 2000.
[4] G. Liu, Z. Li. A unified geometric approach to modeling and control of constrained mechanical systems. IEEE Trans. on Robotics and Automation, Vol. 18, No. 4, 2002.
[5] L. Ren, J. K. Mills, D. Sun. Experimental comparison of control approaches on trajectory tracking control of a 3-DOF parallel robot. IEEE Trans. on Control Systems Technology, Vol. 15, No. 5, pp. 982-988, 2007.
[6] D. H. Kim, J. Y. Kang, K. I. Lee. Robust tracking control design of a 6-DOF parallel manipulator. Journal of Robotic Systems, Vol. 17, No. 10, pp. 527-547, 2000.
[7] S. Tadokoro. Control of parallel mechanisms. Advanced Robotics, Vol. 8, No. 6, pp. 559-571, 1994.
[8] J.-P. Merlet. Parallel Robots. Kluwer Academic Publishers, 2000.
[9] L.-W. Tsai. Robot Analysis. John Wiley and Sons Inc., 1999.
[10] H. Cheng. Dynamics and control of parallel manipulators with actuation redundancy. M.Sc. Thesis, Department of Electrical and Electronic Engineering, The Hong Kong University of Science and Technology, 2001.
[11] H. Cheng, Y. K. Yiu, Z. Li. Dynamics and control of redundantly actuated parallel manipulators. IEEE/ASME Trans. on Mechatronics, Vol. 8, No. 4, pp. 483-491, 2003.
[12] P. Corke. Visual Control of Robots: High Performance Visual Servoing. Research Studies Press, Taunton, Somerset, England, 1996.
[13] S. Hutchinson, G. D. Hager, P. I. Corke. A tutorial on visual servo control. IEEE Trans. on Robotics and Automation, Vol. 12, No. 5, pp. 651-670, 1996.
[14] R. Kelly. Robust asymptotically stable visual servoing of planar robots. IEEE Trans. on Robotics and Automation, Vol. 12, No. 5, pp. 759-766, 1996.
[15] N. Papanikolopoulos, P. Khosla. Adaptive robotic visual tracking: theory and experiments. IEEE Trans. on Automatic Control, Vol. 38, No. 3, pp. 429-444, March 1993.
[16] L. Weiss, A. Sanderson, C. Newman. Dynamic sensor-based control of robots with visual feedback. IEEE Journal of Robotics and Automation, Vol. RA-3, pp. 404-417, October 1987.
[17] W. Wilson, C. Williams Hulls, G. Bell. Relative end-effector control using Cartesian position-based visual servoing. IEEE Trans. on Robotics and Automation, Vol. 12, No. 5, pp. 684-696, October 1996.
[18] F. Chaumette, S. Hutchinson. Visual servo control, Part I: Basic approaches. IEEE Robotics & Automation Magazine, December 2006.
[19] F. Chaumette, S. Hutchinson. Visual servo control, Part II: Advanced approaches. IEEE Robotics & Automation Magazine, March 2007.
[20] D. Kragic, H. I. Christensen. Survey on visual servoing for manipulation. Technical Report ISRN KTH/NA/P-02/01-SE, Department of Numerical Analysis and Computing Science, University of Stockholm, Sweden, 2005.
[21] N. Andreff, P. Martinet. Unifying kinematic modeling, identification, and control of a Gough-Stewart parallel robot into a vision-based framework. IEEE Trans. on Robotics, Vol. 22, No. 6, 2006.
[22] N. Andreff, T. Dallej, P. Martinet. Image-based visual servoing of a Gough-Stewart parallel manipulator using leg observations. The International Journal of Robotics Research, Vol. 26, pp. 667-687, 2007.
[23] T. Dallej, N. Andreff, P. Martinet. Image-based visual servoing of the I4R parallel robot without proprioceptive sensors. Proc. of the Int. Conf. on Robotics and Automation, Roma, Italy, 2007.
[24] N. Andreff, P. Martinet. Vision-based self-calibration and control of parallel kinematic mechanisms without proprioceptive sensing. Intelligent Service Robotics, Vol. 2, pp. 71-80, 2009.
[25] L. Angel, A. Traslosheros, J. M. Sebastian, L. Pari, R. Carelli, F. Roberti. Vision-based control of the RoboTenis system. Recent Progress in Robotics, LNCIS 370, Springer Verlag, pp. 229-240, 2008.
[26] J. M. Sebastian, A. Traslosheros, L. Angel, F. Roberti, R. Carelli. Parallel robot high speed object tracking. M. Kamel and A. Campilho (Eds.): ICIAR 2007, LNCS 4633, pp. 295-306, 2007.
[27] Z. Qi, J. E. McInroy. Nonlinear image based visual servoing using parallel robots. Proc. of the Int. Conf. on Robotics and Automation, Roma, Italy, April 2007.
[28] F. Paccot, P. Lemoine, N. Andreff, D. Chablat, P. Martinet. A vision-based computed torque control for parallel kinematic machines. Proc. of the Int. Conf. on Robotics and Automation, Pasadena, CA, U.S.A., 2008.
[29] M. W. Spong, M. Vidyasagar. Robot Dynamics and Control. Wiley, New York, 1989.
[30] Y. K. Yiu, H. Cheng, Z. H. Xiong, G. F. Liu, Z. X. Li. On the dynamics of parallel manipulators. Proc. of the Int. Conf. on Robotics and Automation, Seoul, Korea, 2001.
[31] P. Lancaster, M. Tismenetsky. The Theory of Matrices. Academic Press, Orlando, 1985.
[32] J. Gangloff, M. de Mathelin. High-speed visual servoing of a 6-DOF manipulator using multivariable predictive control. Advanced Robotics, Vol. 17, No. 10, 2003.
[33] H. Berghuis, H. Nijmeijer. Global regulation of robots using only position measurements. Systems & Control Letters, Vol. 21, No. 4, 1993.
[34] A. Soria, R. Garrido, I. Vásquez, R. Vázquez. Architecture for rapid prototyping of visual controllers. Robotics and Autonomous Systems, Vol. 54, pp. 486-495, 2006.
[35] K. Voss, W. Ortmann, H. Suesse. DIAS - Interactive Image Processing System, V 5.0. Friedrich-Schiller-University Jena, Germany, 1998.
