ASME 2012 5th Annual Dynamic Systems and Control Conference joint with the JSME 2012 11th Motion and Vibration Conference DSCC2012-MOVIC2012 October 17-19, 2012, Fort Lauderdale, Florida, USA

DSCC2012-MOVIC2012-8778

EXPERIMENTAL RESULTS FOR MOVING OBJECT STRUCTURE ESTIMATION USING AN UNKNOWN INPUT OBSERVER APPROACH

Sujin Jang, Ashwin P. Dani, Carl D. Crane III and Warren E. Dixon∗ Department of Mechanical and Aerospace Engineering University of Florida Gainesville, Florida, 32611 Email: [email protected];[email protected];[email protected];[email protected]

∗Address all correspondence to this author.

ABSTRACT
An application and experimental verification of an online structure from motion (SFM) method is presented to estimate the position of a moving object using a moving camera. An unknown input observer is implemented for the position estimation of a moving object attached to a two-link robot observed by a moving camera attached to a PUMA robot. The velocity of the object is considered as an unknown input to the perspective dynamical system. A series of experiments is performed with different camera and object motions. The method is used to estimate the position of a static object as well as a moving object. The position estimates are compared with ground-truth data computed using the forward kinematics of the PUMA and the two-link robot. The observer gain design problem is formulated as a convex optimization problem to obtain an optimal observer gain.

INTRODUCTION
Recovering the structure of a moving object using a moving camera has been well studied in the literature over the past decade [1-8]. In [6], a batch algorithm is developed by approximating the trajectories of a moving object using a linear combination of discrete cosine transform (DCT) basis vectors. Batch algorithms use an algebraic relationship between the 3D coordinates of points in the camera coordinate frame and the corresponding 2D projections on the image frame collected over n images to estimate the structure. Hence, batch algorithms are not useful in real-time control algorithms. For visual servo control or video-based surveillance tasks, online structure estimation algorithms are required. Recently, a causal algorithm is presented in [7] to estimate the structure and motion of objects moving with constant linear velocities observed by a moving camera with known camera motions. A new method based on an unknown input observer (UIO) is developed in [8] to estimate the structure of an object moving with time-varying velocities using a moving camera with known velocities.

The contribution of this work is to experimentally verify the unknown input observer in [8] for structure estimation of a moving object. A series of experiments is conducted on a PUMA 560 and a two-link robot. A camera is attached to the PUMA, and the target is attached to the moving two-link robot. The camera images are processed to track a feature point, while the camera velocities are measured using the joint encoders. To obtain the ground-truth data, the distance between the origin of the PUMA and the origin of the two-link robot is measured, and the positions of the camera and the moving object with respect to the respective origins are obtained using the forward kinematics of the robots. The estimated position of the object is compared with the ground-truth data. The experiments are conducted to estimate the structure of a static as well as a moving object while keeping the same observer structure. The experiments demonstrate the advantage of the observer in the sense that a priori knowledge of the object state (static or moving) is not required. A convex optimization problem is solved to compute an observer gain that reduces the effects of noise and disturbances.

PERSPECTIVE CAMERA MODEL
In this section, the kinematic relationship between the moving camera and the object, and the geometric relationship of image formation, are briefly described.




Kinematic Modeling
Considering a moving camera observing an object, define an inertially fixed reference frame, F : {o; E_x, E_y, E_z}, and a reference frame fixed to the camera, C : {o_c; e_x, e_y, e_z}, as shown in Fig. 1. The position of a point p relative to the point o is denoted by r_p. The position of o_c (the origin of C) relative to the point o (the origin of F) is denoted by r_{o_c}. In the following development, every vector and tensor is expressed in terms of the basis {e_x, e_y, e_z} fixed in C.¹ The position of p measured relative to the point o_c is expressed as

r_{p/o_c} = r_p - r_{o_c} = \begin{bmatrix} X(t) & Y(t) & Z(t) \end{bmatrix}^T    (1)

where X(t), Y(t) and Z(t) ∈ R. The linear velocities of the object and the camera as viewed by an observer in the inertial reference frame are given by

{}^F V_p = \frac{d}{dt}(r_p) = \begin{bmatrix} v_{px} & v_{py} & v_{pz} \end{bmatrix}^T \in V_p \subset R^3,    (2)

{}^F V_{o_c} = \frac{d}{dt}(r_{o_c}) = \begin{bmatrix} v_{cx} & v_{cy} & v_{cz} \end{bmatrix}^T \in V_c \subset R^3.    (3)

Using Eqs. (1)-(3), the velocity of p as viewed by an observer in C is given by

{}^C V_{p/o_c} = \frac{d}{dt}(r_p - r_{o_c}) = \begin{bmatrix} \dot{X}(t) & \dot{Y}(t) & \dot{Z}(t) \end{bmatrix}^T = {}^C V_p - {}^C V_{o_c} = {}^F V_p - {}^F V_{o_c} + {}^C w_F \times (r_p - r_{o_c})    (4)

where {}^C w_F denotes the angular velocity of F as viewed by an observer in C. The angular velocity of the camera relative to F is expressed as {}^F w_C = [ω_x ω_y ω_z]^T. Since {}^C w_F and {}^F w_C are related as {}^C w_F = -{}^F w_C,

{}^C w_F = \begin{bmatrix} -\omega_x & -\omega_y & -\omega_z \end{bmatrix}^T.    (5)

Substituting Eqs. (1)-(3) into Eq. (4) yields

\begin{bmatrix} \dot{X}(t) \\ \dot{Y}(t) \\ \dot{Z}(t) \end{bmatrix} = \begin{bmatrix} v_{px} - v_{cx} \\ v_{py} - v_{cy} \\ v_{pz} - v_{cz} \end{bmatrix} + \begin{bmatrix} 0 & \omega_z & -\omega_y \\ -\omega_z & 0 & \omega_x \\ \omega_y & -\omega_x & 0 \end{bmatrix} \begin{bmatrix} X(t) \\ Y(t) \\ Z(t) \end{bmatrix} = \begin{bmatrix} v_{px} - v_{cx} + \omega_z Y(t) - \omega_y Z(t) \\ v_{py} - v_{cy} + \omega_x Z(t) - \omega_z X(t) \\ v_{pz} - v_{cz} + \omega_y X(t) - \omega_x Y(t) \end{bmatrix}    (6)

Figure 1. A PERSPECTIVE PROJECTION AND KINEMATIC CAMERA MODEL.

The inhomogeneous coordinates of Eq. (1), \bar{m}(t) = [\bar{m}_1(t) \; \bar{m}_2(t) \; 1]^T ∈ R^3, are defined as

\bar{m}(t) = \begin{bmatrix} \frac{X(t)}{Z(t)} & \frac{Y(t)}{Z(t)} & 1 \end{bmatrix}^T.    (7)

Considering the subsequent development, the state vector x(t) = [x_1(t) \; x_2(t) \; x_3(t)]^T ∈ Y ⊂ R^3 is defined as

x(t) = \begin{bmatrix} \frac{X(t)}{Z(t)} & \frac{Y(t)}{Z(t)} & \frac{1}{Z(t)} \end{bmatrix}^T.    (8)

Using Eqs. (6) and (8), the dynamics of the state x(t) can be expressed as

\dot{x}_1 = \Omega_1 + f_1 + v_{px} x_3 - x_1 v_{pz} x_3,
\dot{x}_2 = \Omega_2 + f_2 + v_{py} x_3 - x_2 v_{pz} x_3,
\dot{x}_3 = v_{cz} x_3^2 + (x_2 \omega_x - x_1 \omega_y) x_3 - v_{pz} x_3^2,
y = \begin{bmatrix} x_1 & x_2 \end{bmatrix}^T    (9)

where Ω_1(u, y), Ω_2(u, y), f_1(u, x), f_2(u, x), f_3(u, x) ∈ R are defined as

\Omega_1(u, y) \triangleq x_1 x_2 \omega_x - \omega_y - x_1^2 \omega_y + x_2 \omega_z,
\Omega_2(u, y) \triangleq \omega_x + x_2^2 \omega_x - x_1 x_2 \omega_y - x_1 \omega_z,
f_1(u, x) \triangleq (x_1 v_{cz} - v_{cx}) x_3,
f_2(u, x) \triangleq (x_2 v_{cz} - v_{cy}) x_3,
f_3(u, x) \triangleq v_{cz} x_3^2 + (x_2 \omega_x - x_1 \omega_y) x_3.

Assumption 1: The velocities of the camera and the object are assumed to be upper and lower bounded by constants.
Assumption 2: Since the states x_1(t) and x_2(t) are equivalent to the pixel coordinates in the image plane, and the size of the image plane is bounded by known constants, it can be assumed that x_1(t) and x_2(t) are bounded by \underline{W} ≤ x_1(t) ≤ \overline{W} and \underline{H} ≤ x_2(t) ≤ \overline{H}, where \underline{W}, \overline{W}, \underline{H} and \overline{H} are obtained using the width and height of the image plane.

¹In the entire development, it is assumed that the subscript {·}_e is omitted in column-vector representations, where {·}_e denotes the representation of a vector in the basis {e_x, e_y, e_z}, i.e., {}^F V_p = \{{}^F V_p\}_e.
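As a worked check of how Eq. (9) follows from Eqs. (6) and (8), consider the first state; the remaining states follow the same pattern:

\dot{x}_1 = \frac{d}{dt}\left(\frac{X}{Z}\right) = \frac{\dot{X}}{Z} - \frac{X\dot{Z}}{Z^2} = \dot{X}x_3 - x_1 \dot{Z} x_3
         = \left(v_{px} - v_{cx} + \omega_z Y - \omega_y Z\right) x_3 - x_1 \left(v_{pz} - v_{cz} + \omega_y X - \omega_x Y\right) x_3
         = \underbrace{x_1 x_2 \omega_x - \omega_y - x_1^2 \omega_y + x_2 \omega_z}_{\Omega_1} + \underbrace{\left(x_1 v_{cz} - v_{cx}\right) x_3}_{f_1} + v_{px} x_3 - x_1 v_{pz} x_3.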

where γ1 ∈ R+ . A full row rank of matrix C ∈ R2×3 is selected as

Camera Model and Geometric Image Formation In order to describe the image formation process, the geometric perspective projection is commonly used as depicted in Fig. 1. The projection model consists of an image plane Π, a center of projection oc , a center of image plane oI and the distance between Π and oc (focal length). Two-dimensional pixel coordinates of the projected point q in the image plane Π is given [ ]T by m(t) ˜ = u(t) v(t) 1 ∈ I ⊂ R3 . The three-dimensional coordinates m(t) ¯ are related to the pixel coordinates m(t) ˜ by the following relationship [9]

3×1 is chosen as D = and a] full [column [ ]T rank of D ∈ R T 1 0 0 or 0 1 0 . The system in Eq. (11) can be written in the following form :

[ ] 100 C= , 010

x˙ = Ax + f¯(x, u) + g(y, u) + Dd y = Cx

where f¯(x, u) = f (x, u) − Ax and A ∈ R3×3 . The function f¯(x, u) satisfies the Lipschitz condition [10, 11] ∥ f (x, u) − f (x, ˆ u) − A (x − x)∥ ˆ ≤ (γ1 + γ2 ) ∥x − x∥ ˆ

m(t) ˜ = Kc m(t) ¯

where γ2 ∈ R+ . A nonlinear unknown input observer for Eq. (12) is designed as z˙ = Nz + Ly + M f¯(x, ˆ u) + Mg(y, u)



xˆ = z − Ey

f 0 cx Kc =  0 α f cy  0 0 1

(14)

where x(t) ˆ ∈ R3 is an estimate of the unknown state x(t), z(t) ∈ 3 R is an auxiliary signal, the matrices N ∈ R3×3 , L ∈ R3×2 , E ∈ R3×2 , M ∈ R3×3 are designed as [12]

where α is the image aspect ratio, f is the focal length and (cx , cy ) denotes the optical center oI expressed in pixel coordinates. To simplify the derivation of the perspective projection matrix, the projection center is assumed to coincide with the origin of the camera reference frame (Assumption 3) and the optical axis is aligned with the z-axis of the coordinate system fixed in the camera (Assumption 4).

M = I3 + EC N = MA − KC L = K (I2 +CE) − MAE ( ) E = −D(CD)+ +Y I2 − (CD)(CD)†

(15)

where (CD)† denotes the generalized pseudo inverse of the matrix CD. The gain matrix K ∈ R3×2 and matrix Y ∈ R3×2 are selected such that
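As a concrete reading of Eq. (15), the sketch below builds E, M, N and L from A, C, D and given gains K and Y. It is illustrative only: Python/numpy stands in for the MATLAB implementation used in the paper, with numpy's pinv playing the role of the generalized pseudo-inverse, and the function name is hypothetical.

import numpy as np

def uio_matrices(A, C, D, K, Y):
    """Observer matrices of Eq. (15) for given gains K and Y."""
    n = A.shape[0]                 # state dimension (3)
    m = C.shape[0]                 # output dimension (2)
    CD_pinv = np.linalg.pinv(C @ D)
    # E = -D(CD)^+ + Y(I2 - (CD)(CD)^+)
    E = -D @ CD_pinv + Y @ (np.eye(m) - (C @ D) @ CD_pinv)
    M = np.eye(n) + E @ C          # M = I3 + EC
    N = M @ A - K @ C              # N = MA - KC
    L = K @ (np.eye(m) + C @ E) - M @ A @ E   # L = K(I2 + CE) - MAE
    return E, M, N, L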

The gain matrix K ∈ R^{3×2} and the matrix Y ∈ R^{3×2} are selected such that

Q \triangleq N^T P + PN + \left(\gamma_1^2 + \gamma_2^2\right) PMM^T P + 2I_3 < 0    (16)

where P ∈ R^{3×3} is a positive definite, symmetric matrix. The stability result of the observer is stated in the following theorem.

Theorem: The nonlinear unknown input observer in Eq. (14) is exponentially stable if Eq. (16) is satisfied.
Proof: See Theorem 1 of [8].

To find E, K and P, Eq. (16) is reformulated in terms of a linear matrix inequality (LMI) as [8]

\begin{bmatrix} X_{11} & \beta X_{12} \\ \beta X_{12}^T & -I_3 \end{bmatrix} < 0    (17)

where

X_{11} = A^T (I_3 + FC)^T P + P (I_3 + FC) A + A^T C^T G^T P_Y^T + P_Y GCA - C^T P_K^T - P_K C + 2I_3,
X_{12} = P + PFC + P_Y GC,
P_Y = PY, \quad P_K = PK, \quad \beta = \sqrt{\gamma_1^2 + \gamma_2^2},

with F \triangleq -D(CD)^\dagger and G \triangleq I_2 - (CD)(CD)^\dagger, so that E = F + YG in Eq. (15). To solve the LMI feasibility problem in Eq. (17), the CVX toolbox in MATLAB is used [13]. Using P, P_K and P_Y obtained from Eq. (17), K and Y are computed as K = P^{-1} P_K and Y = P^{-1} P_Y.

If the number of outputs n_y is equal to the number of unknown inputs n_d, then the unknown disturbance can be represented as Dd(t) = D_1 d_1(t) + D_2 d_2(t), where d_1(t) includes (n_d - 1) of the unknown inputs, d_2(t) includes the remaining unknown input, and D_1, D_2 ∈ R^{3×1} are full column rank. In this case, the estimation error will be uniformly ultimately bounded, with the bound proportional to the norm of the disturbance d_2(t). The gains K and Y can be selected optimally to minimize the effects of sensor noise and the external disturbance by solving the following LMI:

\min \; \delta \gamma^2 - (1 - \delta)\lambda_{\min}(P), \quad \text{subject to} \quad P > 0, \; \gamma \ge 0,

\begin{bmatrix} X_{11} & \beta X_{12} & X_{13} \\ \beta X_{12}^T & -I_3 & 0 \\ X_{13}^T & 0 & -\gamma^2 \end{bmatrix} < 0    (18)

where X_{13} = PD_2, δ ∈ [0, 1], λ_min denotes the minimum eigenvalue of a matrix, and γ denotes the L_2 norm of the estimation error with respect to the disturbance d_2(t).
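To illustrate this synthesis step, the following sketch poses the feasibility LMI of Eq. (17) in CVXPY as a stand-in for the MATLAB CVX toolbox actually used in the paper. The numeric A, C, D are the Experiment I values given below; the Lipschitz bounds gamma1 and gamma2 are assumed placeholders, and the strict inequalities are enforced with a small margin eps. Whether the problem is feasible depends on the assumed Lipschitz bounds; the optimization form in Eq. (18) adds the X13 = P D2 column and the scalarized objective in the same framework.

import numpy as np
import cvxpy as cp

A = np.array([[0.00, -0.05, -1.0],
              [0.05,  0.00,  0.0],
              [0.00,  0.00,  0.0]])
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
D = np.array([[0.0], [1.0], [0.0]])
gamma1, gamma2 = 1.0, 1.0                    # assumed Lipschitz bounds
beta = np.sqrt(gamma1**2 + gamma2**2)

CD_pinv = np.linalg.pinv(C @ D)
F = -D @ CD_pinv                             # E = F + Y G, cf. Eq. (15)
G = np.eye(2) - (C @ D) @ CD_pinv
M0 = np.eye(3) + F @ C                       # M = M0 + Y G C

P  = cp.Variable((3, 3), symmetric=True)
PK = cp.Variable((3, 2))                     # PK = P K
PY = cp.Variable((3, 2))                     # PY = P Y

X11 = ((M0 @ A).T @ P + P @ (M0 @ A)
       + (G @ C @ A).T @ PY.T + PY @ (G @ C @ A)
       - C.T @ PK.T - PK @ C + 2 * np.eye(3))
X12 = P @ M0 + PY @ (G @ C)                  # X12 = P M

LMI = cp.bmat([[X11, beta * X12],
               [beta * X12.T, -np.eye(3)]])
LMI = 0.5 * (LMI + LMI.T)                    # symmetrize for the PSD constraint

eps = 1e-6                                   # margin for the strict inequalities
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(3), LMI << -eps * np.eye(6)])
prob.solve()

K = np.linalg.solve(P.value, PK.value)       # K = P^{-1} PK
Y = np.linalg.solve(P.value, PY.value)       # Y = P^{-1} PY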

Figure 2. AN OVERVIEW OF THE EXPERIMENTAL CONFIGURATION.

Figure 3. EXPERIMENTAL PLATFORMS.

EXPERIMENTS AND RESULTS
To verify the designed unknown input observer [8] for real-time implementation, two sets of experiments are conducted on a PUMA 560 serial manipulator and a two-link planar robot. The first set is performed for relative position estimation of a static object using a moving camera. The second set is performed for position estimation of a moving object. A schematic overview of the experimental configuration is illustrated in Fig. 2.

Testbed Setup
The testbed consists of five components: (1) robot manipulators, (2) camera, (3) image processing workstation (main), (4) robot control workstations (PUMA and two-link), and (5) serial communication. Figure 3 shows the experimental platforms. A camera is rigidly fixed to the end-effector of the PUMA 560. The PUMA and the two-link robot are rigidly attached to a work table. Experiments are conducted to estimate the position of a static as well as a moving object. A fiduciary marker is used as the object in all the experiments. For experiments involving a static object, the object is fixed to the work table. For experiments involving a moving object, the object is fixed to the end-effector of the two-link robot, which follows a desired trajectory. The PUMA 560 is used to move the camera while observing the static or moving object. An mvBlueFox-120a color USB camera is used to capture images. The camera is calibrated using the MATLAB camera calibration toolbox [14]. A Core2-Duo 2.53 GHz laptop (main workstation) operating under Windows 7 is used to carry out the image processing and to store data transmitted from the PUMA workstation. The image processing algorithms are written in C/C++ and developed in Microsoft Visual Studio 2008. The OpenCV and MATRIX-VISION API libraries are used to capture the images and to implement a KLT feature point tracker [15, 16]. The sub-workstations (PUMA and two-link) are two Pentium 2.8 GHz PCs operating under QNX. These two computers are used to host the control algorithms for the PUMA 560 and the two-link robot via Qmotor 3.0 [17]. A PID controller is employed to control the six joints of the PUMA 560. A RISE-based controller [18] is applied to control the two-link robot. Control implementation and data acquisition for the two robots run at 1.0 kHz using a ServoToGo I/O board. The forward velocity kinematics [19, 20] are used to obtain the position and velocity of the camera and the tracked point. The camera velocities computed on the PUMA workstation are transmitted to the main workstation via serial communication at 30 Hz. The pose (position and orientation) of the tracked point and the camera are computed and stored on the sub-workstations at 1.0 kHz. The positions of the camera and the point are used to compute the ground-truth distance between the camera and the object as

\{r_{obj/cam}\}_e = \left(\{R\}_e^E\right)^{-1} \{r_{obj} - r_{cam}\}_E

where \{R\}_e^E ∈ R^{3×3} is the rotation matrix of the camera with respect to the inertial reference frame.
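A minimal numpy sketch of this ground-truth computation follows; the function name and argument conventions are assumptions, not the paper's code.

import numpy as np

def ground_truth_relative_position(r_obj_E, r_cam_E, R_cam_E):
    """Express r_obj - r_cam (inertial frame) in the camera basis.

    r_obj_E, r_cam_E : (3,) object/camera positions in the inertial frame
    R_cam_E          : (3,3) rotation of the camera w.r.t. the inertial frame
    """
    # inv() mirrors the formula above; R_cam_E.T is equivalent for a rotation matrix.
    return np.linalg.inv(R_cam_E) @ (r_obj_E - r_cam_E)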

Experiment I : Moving camera with a static point
In this section, the structure estimation algorithm is implemented for a static object observed using a moving camera. A tracked point on the static object is observed by a downward-looking camera. Since v_px, v_py and v_pz are zero for a static object, the unmeasurable disturbance input d(t) is zero. The experiment set is designed to test the observer with time-varying camera velocities in the Y and Z directions. Figures 4 and 5 show the angular and linear camera velocities, respectively.

Figure 4. CAMERA ANGULAR VELOCITY (wx, wy, wz in rad/sec versus time in sec).

Figure 5. CAMERA LINEAR VELOCITY (vcx, vcy, vcz in m/sec versus time in sec).

The matrices A, C and D are selected to be

A = \begin{bmatrix} 0.00 & -0.05 & -1.0 \\ 0.05 & 0.00 & 0.0 \\ 0.00 & 0.00 & 0.00 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad D = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}.

The matrix Y and gain matrix K are computed using the CVX toolbox in MATLAB [13] as

K = \begin{bmatrix} 1.3149 & 0.0000 \\ 0.0000 & 1.3149 \\ 0.0590 & -0.1256 \end{bmatrix}, \quad Y = \begin{bmatrix} -1.0000 & 0.0000 \\ 0.0000 & 0.0000 \\ 2.5125 & 0.0000 \end{bmatrix}.

The estimation result is illustrated in Figs. 6 and 7. The steady-state RMS errors of the position estimates in the X, Y and Z coordinates are 0.0029 m, 0.0210 m and 0.0412 m, respectively.

Figure 6. COMPARISON OF THE ACTUAL (DASHED) AND ESTIMATED (SOLID) POSITION OF A STATIC POINT WITH RESPECT TO A MOVING CAMERA (x, y, z in m versus time in sec).
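To indicate how Eqs. (11), (12) and (14) combine in a real-time loop, the sketch below propagates the observer with a forward-Euler step at the measurement rate. It is a hypothetical Python reconstruction, not the paper's C/C++ implementation; A, N, L, M and E are assumed to come from the design steps above, u = [vcx, vcy, vcz, wx, wy, wz] is the measured camera velocity, and y = [x1, x2] is the tracked feature point in normalized image coordinates.

import numpy as np

def f_vec(x, u):
    # f(x, u) = [f1, f2, f3]^T from Eq. (11)
    x1, x2, x3 = x
    vcx, vcy, vcz, wx, wy, wz = u
    return np.array([(x1 * vcz - vcx) * x3,
                     (x2 * vcz - vcy) * x3,
                     vcz * x3**2 + (x2 * wx - x1 * wy) * x3])

def g_vec(y, u):
    # g(y, u) = [Omega1, Omega2, 0]^T from Eq. (11)
    x1, x2 = y
    vcx, vcy, vcz, wx, wy, wz = u
    return np.array([x1 * x2 * wx - wy - x1**2 * wy + x2 * wz,
                     wx + x2**2 * wx - x1 * x2 * wy - x1 * wz,
                     0.0])

def observer_step(z, y, u, dt, A, N, L, M, E):
    # One Euler step of Eq. (14): zdot = N z + L y + M fbar(xhat, u) + M g(y, u)
    x_hat = z - E @ y
    f_bar = f_vec(x_hat, u) - A @ x_hat          # fbar = f - A x, Eq. (12)
    z_dot = N @ z + L @ y + M @ f_bar + M @ g_vec(y, u)
    z_next = z + dt * z_dot
    return z_next, z_next - E @ y                # updated (z, x_hat)

By Eq. (8), the Euclidean position is then recovered from the estimate as X = x̂1/x̂3, Y = x̂2/x̂3 and Z = 1/x̂3.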


Figure 7. POSITION ESTIMATION ERROR FOR A STATIC POINT (ex, ey, ez in m versus time in sec).

Figure 8. CAMERA ANGULAR VELOCITY (wx, wy, wz in rad/sec versus time in sec).

Experiment II : Moving camera with a moving object

In this section, the observer is used to estimate the position of a moving object using a moving camera. A downward-looking camera observes a moving point fixed to the moving two-link robot arm. In this case, the object is moving in the X-Y plane with unknown velocities v_px(t) and v_py(t). The estimation results using the observer gain obtained from the method in [8] and from the convex optimization method are compared. In experiment Set 2, two components of the camera linear velocity are time-varying to test the observer with a more general trajectory of the moving camera.

Set 1
In this experiment set, the observer is tested with a time-varying linear camera velocity along the X direction. The camera velocities are shown in Figs. 8 and 9.

Figure 9. CAMERA LINEAR VELOCITY (vcx, vcy, vcz in m/sec versus time in sec).

The matrices A, C and D are given by

A = \begin{bmatrix} 0.00 & -0.05 & 0.00 \\ 0.05 & 0.00 & -0.30 \\ 0.00 & 0.00 & 0.00 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad D = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}.

The matrix Y and gain matrix K are computed using the CVX toolbox in MATLAB and are given as

K = \begin{bmatrix} 1.2204 & 0.0000 \\ 0.0000 & 1.2204 \\ 0.3731 & 0.0000 \end{bmatrix}, \quad Y = \begin{bmatrix} 0.0000 & 0.0000 \\ 0.0000 & -1.0000 \\ 0.0000 & 7.4618 \end{bmatrix}.

The estimation result is illustrated in Figs. 10 and 11. The RMS errors and peak errors of the estimated position in steady state are given in Tab. 1.

Figure 10. COMPARISON OF THE ACTUAL (DASHED) AND ESTIMATED (SOLID) POSITION OF A MOVING POINT WITH RESPECT TO A MOVING CAMERA (x, y, z in m versus time in sec).


Figure 11. POSITION ESTIMATION ERROR FOR A MOVING POINT (ex, ey, ez in m versus time in sec).

Set 1 with convex optimization approach
In this experiment, a comparison between the gain computations using the LMIs in Eqs. (17) and (18) is given. The matrix Y and gain matrix K for the convex optimization approach are computed using the CVX toolbox as

K = \begin{bmatrix} 0.0003 & 0.0000 \\ 0.0000 & 0.3886 \\ 0.0000 & 0.0000 \end{bmatrix}, \quad Y = \begin{bmatrix} 0.0000 & 0.0000 \\ 0.0000 & -1.0000 \\ 0.0000 & 3.0000 \end{bmatrix}.

The estimation result is illustrated in Figs. 12 and 13. Table 1 shows the steady-state RMS errors and peak errors of the position estimates using the convex optimization approach.

Figure 12. COMPARISON OF THE ACTUAL (DASHED) AND ESTIMATED (SOLID) POSITION OF A MOVING POINT WITH RESPECT TO A MOVING CAMERA (x, y, z in m versus time in sec).

Figure 13. POSITION ESTIMATION ERROR FOR A MOVING POINT (ex, ey, ez in m versus time in sec).

Table 1. POSITION ESTIMATION ERRORS.

                         w/o optimization    w/ optimization
RMS in x (m)             0.0345              0.0227
RMS in y (m)             0.0031              0.0016
RMS in z (m)             0.0740              0.0416
peak error in x (m)      0.1273              0.0566
peak error in y (m)      0.0093              0.0036
peak error in z (m)      0.2385              0.0857

Set 2
In this experiment set, the linear camera velocities along the X and Y directions are time-varying, and the camera angular velocity is constant. The camera velocities are depicted in Figs. 14 and 15. The matrices A, C and D are given by

A = \begin{bmatrix} 0.00 & -0.05 & -1.00 \\ 0.05 & 0.00 & -0.30 \\ 0.00 & 0.00 & 0.00 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad D = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}.

The matrix Y and gain matrix K are computed as

K = \begin{bmatrix} 1.2892 & 0.0000 \\ 0.0000 & 1.2892 \\ 0.2313 & 0.0000 \end{bmatrix}, \quad Y = \begin{bmatrix} 0.0000 & 0.0000 \\ 0.0000 & -1.0000 \\ 0.0000 & 4.6261 \end{bmatrix}.

The estimation result is depicted in Figs. 16 and 17. The steady-state RMS errors and peak errors of the position estimates in the X, Y and Z coordinates are given in Tab. 2.


Figure 14. CAMERA ANGULAR VELOCITY (wx, wy, wz in rad/sec versus time in sec).

Figure 15. CAMERA LINEAR VELOCITY (vcx, vcy, vcz in m/sec versus time in sec).

Figure 16. COMPARISON OF THE ACTUAL (DASHED) AND ESTIMATED (SOLID) POSITION OF A MOVING POINT WITH RESPECT TO A MOVING CAMERA (x, y, z in m versus time in sec).

Figure 17. POSITION ESTIMATION ERROR FOR A MOVING POINT (ex, ey, ez in m versus time in sec).

Set 2 with convex optimization approach
In this experiment, a comparison between the gain computations using the LMIs in Eqs. (17) and (18) is given. The matrix Y and gain matrix K for the convex optimization approach are computed using the CVX toolbox as

K = \begin{bmatrix} 0.0003 & 0.0000 \\ 0.0000 & 0.3886 \\ 0.0000 & 0.0000 \end{bmatrix}, \quad Y = \begin{bmatrix} 0.0000 & 0.0000 \\ 0.0000 & -1.0000 \\ 0.0000 & 3.0000 \end{bmatrix}.

The estimation result is illustrated in Figs. 18 and 19. Table 2 shows the RMS errors and peak errors of the position estimates using the convex optimization approach.

Figure 18. COMPARISON OF THE ACTUAL (DASHED) AND ESTIMATED (SOLID) POSITION OF A MOVING POINT WITH RESPECT TO A MOVING CAMERA (x, y, z in m versus time in sec).
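For completeness, the steady-state RMS and peak errors reported in Tables 1 and 2 can be computed as in the sketch below, assuming err is an (N, 3) array of position-estimation errors [ex, ey, ez] with matching time stamps t; the steady-state window start t_ss is an assumed parameter, as the paper does not state the exact window.

import numpy as np

def steady_state_errors(t, err, t_ss):
    """Per-axis RMS and peak of the estimation errors over t >= t_ss.

    t   : (N,) time stamps in seconds
    err : (N, 3) errors [ex, ey, ez] in meters
    """
    window = err[t >= t_ss]
    rms = np.sqrt(np.mean(window**2, axis=0))   # per-axis RMS, as in Tables 1-2
    peak = np.max(np.abs(window), axis=0)       # per-axis peak error
    return rms, peak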



[13] Grant, M., and Boyd, S. CVX: Matlab software for disciplined convex programming. On the WWW. URL http://cvxr.com/cvx/.
[14] Bouguet, J., 2010. Camera calibration toolbox for Matlab. On the WWW. URL http://www.vision.caltech.edu/bouguetj/.
[15] Tomasi, C., and Kanade, T., 1991. Detection and tracking of point features. Tech. rep., Carnegie Mellon University.
[16] Shi, J., and Tomasi, C., 1994. "Good features to track". In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 593-600.
[17] Loffler, M., Costescu, N., and Dawson, D., 2002. "Qmotor 3.0 and the Qmotor robotic toolkit - an advanced PC-based real-time control platform". IEEE Contr. Syst. Mag., 22(3), pp. 12-26.
[18] Patre, P. M., Dupree, K., MacKunis, W., and Dixon, W. E., 2008. "A new class of modular adaptive controllers, part II: Neural network extension for non-LP systems". In Proc. Am. Control Conf., pp. 1214-1219.
[19] Spong, M., and Vidyasagar, M., 1989. Robot Dynamics and Control. John Wiley & Sons Inc., New York.
[20] Crane, C. D., and Duffy, J., 1998. Kinematic Analysis of Robot Manipulators. Cambridge University Press.

