A Model-free Redundancy Resolution Technique for Visual Motor Coordination of a 6 DOF Robot Manipulator

Swagat Kumar∗, Amit Shukla†, Ashish Dutta‡ and Laxmidhar Behera§
Department of Electrical Engineering, Indian Institute of Technology Kanpur, U.P., India, Pin - 208016
∗ [email protected]; † [email protected]; ‡ [email protected]; § [email protected]

Abstract— In this paper, visual motor coordination of a 6 DOF robot manipulator is considered. It is difficult to analytically derive inverse kinematic relationships for such manipulators. The problem becomes more challenging owing to the presence of multiple solutions for the inverse-kinematic relationship between robot end-effector position and joint angle vector. Many of the current redundancy resolution techniques necessitate explicit orientation information which cannot be obtained from visual feedback. Hence such techniques cannot be used for visual motor coordination of redundant manipulators. In this paper, it is demonstrated that a feasible inverse kinematic solution may be obtained by using input-output space clustering along with the KSOM algorithm. The method is innovative in the sense that it does not require any orientation information for resolving redundancy and it is model-independent. The efficacy of the proposed method is illustrated through simulations on a 6 DOF PUMA 560 manipulator model.

I. INTRODUCTION

Visual motor coordination is the task of guiding a robot manipulator to reach a target point in its workspace using visual feedback. It primarily involves two tasks: extracting the coordinates of the robot end-effector and the target point from camera images, and then using the inverse kinematic relationship to find the joint angle movement necessary to reach the target point. The first task involves camera calibration, which finds an approximate model of the camera so that a mapping from pixel coordinates to real-world coordinates is obtained. Tsai's algorithm [1] is one of the most widely used methods for this purpose. Since it is difficult to derive the inverse kinematic relationship of a robot manipulator analytically, learning paradigms based on neural networks have been used for a long time. Kuperstein's model [2], based on topology-conserving maps, is one of the earliest learning models for visual motor coordination. Later, Martinetz et al. [3] provided a more general model based on Kohonen's self-organizing map [4], which makes use of a three-dimensional network topology to match the workspace. This work was later extended by many authors, including Walter and Schulten [5], who also introduced a "neural-gas" network architecture for this task. While these approaches were model-free, Behera et al. [6] showed that it is possible to increase the accuracy and reduce the training time by using a hybrid approach, where the network model is trained off-line using camera and manipulator models and then fine-tuned on the actual system.

The above techniques have been used successfully to solve the inverse kinematics problem of a 3 DOF manipulator. Their success in the 3 DOF case lies in the fact that there exists a unique inverse kinematic relationship between the end-effector position and the three joint angles. However, things become difficult for a 6 DOF manipulator, where we have a 6-dimensional joint angle vector and a 3-dimensional output position vector. The inverse kinematic relationship between the 3-dimensional position vector and the 6-dimensional joint angle vector is not unique; in other words, different sets of joint angles may lead to the same end-effector position, giving rise to redundancy. A number of techniques have been reported in the literature to resolve this redundancy. Depending on the application requirements and the choice of controller, redundancy can be resolved at the position, velocity or acceleration level. In most control schemes, the control input is expressed in the form of a reference velocity or acceleration [7][8][9]; therefore, most redundancy resolution techniques are proposed at the velocity and acceleration levels. However, these techniques make use of the manipulator dynamics and hence are model-dependent. This paper considers redundancy resolution at the position level, and we make use of neither the manipulator kinematics nor its dynamics, making the scheme model-independent. At the position level, redundancy may be resolved by imposing extra constraints on one or more joints, for instance by assigning an additional task or by explicitly specifying the orientation at the target point. Since it is not easy to extract orientation information from camera images, many of these methods cannot be used for visual motor coordination. A method called "input-output space clustering" is suggested for obtaining a feasible solution to the inverse kinematic problem. This is an extension of the KSOM based algorithm proposed by Martinetz et al. [3], in which the weight vector associated with each neuron is extended by combining the joint angle vector with the Cartesian position vector during the training phase. In earlier approaches, the clustering was carried out only in the 3-dimensional Cartesian space. During the training phase, it is always possible to generate random joint angle values and obtain the corresponding end-effector positions. Hence, this additional information (knowledge of the joint angles) can be used to perform clustering both in the Cartesian space and in the joint angle space.

Once clustering is completed, each 3-dimensional Cartesian input vector selects a winner neuron based on the proximity of its weight vector (only the 3-dimensional Cartesian part). The remaining part of the extended weight vector provides a rough estimate of the joint angles for the winner neuron. Hence, a local update of the weight as well as the angle vector is sufficient to reach the target point. The efficacy of this method is demonstrated later in the simulation section. Visual motor coordination using input-output space clustering was earlier used by Kumar and Behera [10] to demonstrate its efficiency in terms of computational requirements. However, that work dealt with a 3 DOF robot manipulator and did not consider redundancy resolution, whereas this paper considers redundancy resolution for a 6 DOF manipulator. Note that the inverse kinematic solution obtained through this technique is not necessarily optimal; it only gives a feasible solution, as shown later in this paper. The following section describes the system model for visual motor coordination, followed by the learning algorithm in section III. The simulation results are provided in section IV, followed by the conclusion in section V.

II. SYSTEM MODEL

The visual motor coordination setup is shown in Figure 1. It consists of a stereo-camera system, a robot manipulator and a processing unit for running the algorithm. The image processing part is used to extract the 4-dimensional image coordinate vector p = [u1, u2, u3, u4]T for the current robot position and for the target point to be reached in the workspace. The neural network module implements Martinetz's KSOM algorithm [3] to find the necessary joint angle vector θ, which in turn drives the robot manipulator through a joint-servo module.

The state of the manipulator is specified by the three Cartesian coordinates [x y z]T and the three orientations [θx θy θz] of its end-effector. To put the end-effector of the manipulator at a specified target position ut, the transformation Θ(ut) that maps the input vector ut = [x y z]T to the corresponding joint angle vector θ = [θ1 θ2 θ3 θ4 θ5 θ6]T is required. Since we have a 3-dimensional task space and a 6-dimensional joint space, this is a one-to-many mapping that may not give a unique solution, and hence we face the challenge of redundancy resolution.

In this paper, we show that it is possible to obtain a feasible joint angle vector θ directly from the 3-dimensional Cartesian position vector using input-output space clustering. We also call it a model-independent method of redundancy resolution, as it does not make use of the manipulator kinematics in obtaining the joint angle vector for a given target point. It is to be noted that the training data for simulation is generated using the forward kinematic model of the robot manipulator, as shown below. In an actual experiment, however, one need not use any forward kinematic model: during training, random joint angle values are applied to the robot manipulator and the corresponding end-effector positions are obtained from the stereo-camera system. Thus, the algorithm is model-independent.


Note that the problem of one-to-many mapping may be overcome by explicitly specifying the orientation at the target point as well. In such a case, it is possible to find a transformation Θ(ut) that uniquely maps a 6-dimensional input vector ut = [x y z θx θy θz]T to a 6-dimensional output joint angle vector θ. However, one still needs to choose a suitable orientation out of the many feasible orientations at the target point. The problem becomes more severe in the case of visual motor coordination, owing to the fact that the orientation information may not be obtained easily through visual feedback.

We have considered the Unimation PUMA 560 robot model for simulation. It is a six-degree-of-freedom robot with all revolute joints (a 6R mechanism). Its D-H parameters [11] are given in Table I, where a2 = 0.4318 m, a3 = 0.2032 m, d3 = 0.12446 m and d4 = 0.4318 m.

Fig. 1. Visual Motor Coordination Setup

In this paper, we demonstrate our algorithm by directly working with 3-dimensional Cartesian coordinate vectors for the sake of simplicity, and later show that it works equally well for 4-dimensional image coordinate vectors, where we do not use any real-world information about the robot end-effector and target positions.

TABLE I
D-H PARAMETERS OF PUMA 560

 i | αi−1 (deg) | ai−1 | di | θi
 1 |      0     |  0   | 0  | θ1
 2 |    −90     |  0   | 0  | θ2
 3 |      0     |  a2  | d3 | θ3
 4 |    −90     |  a3  | d4 | θ4
 5 |     90     |  0   | 0  | θ5
 6 |    −90     |  0   | 0  | θ6

The joint limits are given below:

−170.0° < θ1 < 170.0°
−225.0° < θ2 < 45.0°
−250.0° < θ3 < 75.0°
−135.0° < θ4 < 135.0°
−100.0° < θ5 < 100.0°
−180.0° < θ6 < 180.0°   (1)
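As described later in section III-A, the training data is generated by drawing joint angles uniformly within these limits. A minimal sketch of such a sampling routine is given below; the function and variable names are our own illustrative choices, not part of the paper.

```python
import numpy as np

# Joint limits of the PUMA 560 in degrees, from equation (1).
JOINT_LIMITS_DEG = np.array([
    [-170.0, 170.0],   # theta_1
    [-225.0,  45.0],   # theta_2
    [-250.0,  75.0],   # theta_3
    [-135.0, 135.0],   # theta_4
    [-100.0, 100.0],   # theta_5
    [-180.0, 180.0],   # theta_6
])

def sample_joint_angles(rng=np.random):
    """Draw one 6-dimensional joint angle vector uniformly within the limits."""
    return rng.uniform(JOINT_LIMITS_DEG[:, 0], JOINT_LIMITS_DEG[:, 1])
```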

The orientation of the end-effector is computed as

r13 = −c1 (c23 c4 s5 + s23 c5) − s1 s4 s5
r23 = −s1 (c23 c4 s5 + s23 c5) + c1 s4 s5
r33 = s23 c4 s5 − c23 c5   (2)

The position of the end-effector is computed as

px = c1 [a2 c2 + a3 c23 − d4 s23] − d3 s1
py = s1 [a2 c2 + a3 c23 − d4 s23] + d3 c1
pz = −a3 s23 − a2 s2 − d4 c23   (3)

where ci = cos θi, si = sin θi, cij = cos(θi + θj) and sij = sin(θi + θj), for i, j = 1, 2, . . . , 6.
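A direct transcription of the position equations (3) into code might look as follows. This is our own sketch, not code from the paper; the default link values follow the numbers quoted in section II, mapped onto the d3, d4 symbols that appear in Table I and equation (3).

```python
import numpy as np

def puma560_position(theta, a2=0.4318, a3=0.2032, d3=0.12446, d4=0.4318):
    """End-effector position per equation (3); theta is in radians."""
    t1, t2, t3 = theta[0], theta[1], theta[2]
    c1, s1 = np.cos(t1), np.sin(t1)
    c2, s2 = np.cos(t2), np.sin(t2)
    c23, s23 = np.cos(t2 + t3), np.sin(t2 + t3)   # c23 = cos(theta2 + theta3)
    px = c1 * (a2 * c2 + a3 * c23 - d4 * s23) - d3 * s1
    py = s1 * (a2 * c2 + a3 * c23 - d4 * s23) + d3 * c1
    pz = -a3 * s23 - a2 * s2 - d4 * c23
    return np.array([px, py, pz])
```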

III. VISUAL MOTOR COORDINATION USING KSOM

In this paper, a model-free approach to resolving the redundancy problem is considered; that is, we do not make use of the manipulator geometry in learning the inverse kinematic relationship. We use the same Kohonen SOM based neural architecture as used by Martinetz [3], Walter and Schulten [5] and Behera et al. [6]. Each node is associated with a weight vector wγ, a Jacobian matrix Aγ and a joint angle vector θγ. Given a position target ut, a winner neuron µ is selected based on the Euclidean distance in the workspace: the neuron whose weight vector is closest to the target is declared the winner. The arm is given a coarse movement θ0out, which is the initial output of the network, given by

θ0out = s^−1 Σ_γ h^mix_γ (θγ + Aγ (ut − wγ))   (4)

where s = Σ_γ h^mix_γ, h^mix_γ = exp(−||µ − γ||² / (2σ²mix)) and µ represents the winner neuron. θ0out gives a coarse movement to the arm such that the end-effector reaches a position v0. A correcting fine movement θ1out is evaluated as follows:

θ1out = θ0out + s^−1 Σ_γ h^mix_γ Aγ (ut − v0)   (5)

This corrective movement results in a final movement of the end-effector to v1. Although one could use several such corrective movements to increase the tracking accuracy, we use just one corrective movement and still achieve good tracking accuracy.
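For concreteness, equations (4) and (5) can be computed as in the sketch below, where w, theta and A hold wγ, θγ and Aγ for all N lattice neurons and h_mix holds the weights h^mix_γ; all names are our own.

```python
import numpy as np

def network_output(u_t, w, theta, A, h_mix):
    """Coarse movement per equation (4).

    w: (N, 3) weight vectors, theta: (N, 6) joint angle vectors,
    A: (N, 6, 3) Jacobians, h_mix: (N,) neighbourhood weights."""
    s = h_mix.sum()
    contrib = theta + np.einsum('nij,nj->ni', A, u_t - w)
    return (h_mix[:, None] * contrib).sum(axis=0) / s

def corrective_output(theta0_out, u_t, v0, A, h_mix):
    """One fine (corrective) movement per equation (5); v0 is the
    end-effector position reached after the coarse movement."""
    s = h_mix.sum()
    correction = np.einsum('nij,j->ni', A, u_t - v0)
    return theta0_out + (h_mix[:, None] * correction).sum(axis=0) / s
```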

The learning scheme used in the present work runs as follows [10]:

∆v = v1 − v0   (6)
∆θout = θ1out − θ0out   (7)
∆θγ = θ0out − θγ − Aγ (v0 − wγ)   (8)
∆Aµ = ||∆v||^−2 (∆θout − Aµ ∆v) ∆v^T   (9)
wγ ← wγ + ε h_µγ (ut − wγ)   (10)
θγ ← θγ + ε′ h′_µγ ∆θγ   (11)
Aγ ← Aγ + ε′ h′_µγ ∆Aγ   (12)

The functions h_µγ and h′_µγ are defined as

h_µγ = exp(−||µ − γ||² / (2σ²))   (13)
h′_µγ = exp(−||µ − γ||² / (2σ′²))   (14)

Here, µ represents the index of the winner neuron and γ represents the lattice index. The parameters ε, ε′, σ, σ′ and σmix vary during training depending on the current iteration and can be expressed as

η = η_initial (η_final / η_initial)^(t/t_max)

In the above equation, η ∈ {ε, ε′, σ, σ′}, t is the current iteration and t_max is the total number of iterations to be performed by the network.

A. Redundancy resolution through input-output space clustering

While generating training data, joint angles are randomly selected within their limits (θL ≤ θi ≤ θU, equation (1)) and the corresponding end-effector positions are obtained, in simulation, from the robot forward kinematics model. In an actual experiment, the forward kinematic model is not needed: one simply moves the robot through different joint angles and obtains the end-effector positions through suitable means such as a stereo-camera system. In the earlier approaches [3][5][6], the clusters are formed only in the Cartesian input space. This approach has been used to solve the inverse kinematic problem for a 3 DOF manipulator. However, it fails for a 6 DOF manipulator, where we have a 3-dimensional Cartesian input space and a 6-dimensional joint angle space. Some approaches solve this problem by providing explicit orientation information along with the target position, thus equalizing the dimensions of the input and output spaces. In visual motor coordination, the positions of the target point and the manipulator end-effector are obtained from stereo-vision cameras. Since the orientation information cannot be obtained through visual feedback, many of the current methods cannot be applied to resolve the redundancy of a 6 DOF manipulator. In this subsection, we demonstrate how

one can obtain a feasible inverse kinematic solution using only the position information obtained from the vision cameras. In this technique, we concatenate the weight vector of each node with its joint angle vector during the training phase, thus making a 9-dimensional weight vector. In input-output space clustering, the various KSOM parameters have the following dimensions:

Wγ = [wγ | θγ]T : 9 × 1
Ut = [ut | θJ]T : 9 × 1
wγ : 3 × 1
θγ : 6 × 1
ut : 3 × 1
θJ : 6 × 1
Aγ : 6 × 3
v0, v1 : 3 × 1

where Wγ and Ut are the extended weight and target input vectors respectively, θJ is the manipulator joint angle vector generated during training, θγ, wγ and Aγ are the quantities associated with each lattice neuron represented by its index γ, and v0, v1 are the visual feedback vectors obtained from the cameras, representing the current end-effector position in the workspace. Our task is to learn the linear mapping

Θ(ut) = θγ + Aγ (ut − wγ)   (15)
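A minimal sketch of the extended vectors and of winner selection might look like this (names are our own): during training the winner is chosen on the full 9-dimensional extended weight, while during testing only its first three (Cartesian) components are compared against ut.

```python
import numpy as np

def extended_input(x_d, theta_J):
    """Extended training vector Ut = [xd | thetaJ]^T, shape (9,)."""
    return np.concatenate([x_d, theta_J])

def winner(W, u, cartesian_only=False):
    """Index of the neuron whose weight is closest to u.

    W: (N, 9) extended weights. With cartesian_only=True, u is a
    3-dimensional target and only w_gamma (the first 3 components of
    each extended weight) is used, as in the testing phase."""
    ref = W[:, :3] if cartesian_only else W
    return int(np.argmin(np.linalg.norm(ref - u, axis=1)))
```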

One can take a 6 × 6 Jacobian matrix Aγ for this task. However, that would necessitate a 6-dimensional target vector ut comprising both position and orientation. Since the orientation information is difficult to obtain through vision cameras, we aim to obtain a feasible solution for the inverse kinematics without using any orientation information. Note that the 9-dimensional weight vector Wγ is used for clustering only during the training phase. Once the clustering is done, a 3-dimensional input vector automatically selects a winner based on the proximity of its 3-dimensional Cartesian weight part wγ. This winner neuron in turn provides a rough estimate of the joint angles necessary at that location, obtained from the remaining part of the weight vector Wγ. Hence, only a local update of θ and w around the winner neuron is enough to obtain good accuracy in reaching the target. The input-output space clustering technique consists of the following two steps (a consolidated sketch of the training phase follows this list):

• Training phase:
  – Generate a joint angle vector θJ randomly within the bounds (1) and obtain the corresponding end-effector position xd from the forward kinematics.
  – Form a 9-dimensional input vector combining the Cartesian position and the joint angles together as shown below:
    Ut = [xd | θJ]T
  – Input-output space clustering: update the 9-dimensional weight vector using equation (10) so as to minimize ||Ut − Wγ||.
  – The joint angle vector θγ associated with each node is not updated separately; it is directly assigned the values of the last 6 elements of the weight vector Wγ.
  – The Jacobian matrix Aγ is updated as per equations (9) and (12). Note that the increment ∆Aµ depends on the 3-dimensional visual feedback vectors v0 and v1.
• Testing phase:
  – A 3-dimensional target point ut in the Cartesian space is presented to the network.
  – The network selects a winner neuron µ with minimum ||ut − wµ||.
  – The parameters wγ, θγ and Aγ are updated as per equations (10), (11) and (12) respectively.
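Putting the pieces together, one iteration of the training phase could be sketched as follows, reusing the helpers from the earlier sketches (sample_joint_angles, extended_input, winner). The Jacobian update of equations (9) and (12) is omitted for brevity, and forward_kinematics stands for the real robot or, in simulation, equation (3).

```python
import numpy as np

def anneal(eta_i, eta_f, t, t_max):
    """Annealing schedule eta = eta_i * (eta_f / eta_i) ** (t / t_max)."""
    return eta_i * (eta_f / eta_i) ** (t / t_max)

def train_step(W, lattice_coords, t, t_max, forward_kinematics):
    """One iteration of input-output space clustering (training phase).

    W: (N, 9) extended weights; lattice_coords: (N, 3) indices of the
    10 x 10 x 10 neural lattice, used to evaluate the neighbourhood."""
    theta_J = np.deg2rad(sample_joint_angles())   # random joints within eq. (1)
    x_d = forward_kinematics(theta_J)             # end-effector position
    U_t = extended_input(x_d, theta_J)            # 9-dimensional input vector

    mu = winner(W, U_t)                           # proximity in full 9-D space
    eps = anneal(0.9, 0.1, t, t_max)              # epsilon schedule (section IV)
    sigma = anneal(2.5, 0.01, t, t_max)           # neighbourhood width schedule
    d2 = np.sum((lattice_coords - lattice_coords[mu]) ** 2, axis=1)
    h = np.exp(-d2 / (2.0 * sigma ** 2))          # neighbourhood function (13)
    W += eps * h[:, None] * (U_t - W)             # weight update, equation (10)
```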

IV. SIMULATION AND RESULTS

The position as well as joint angle vectors are normalized to the range −1 to +1. The training data is generated using the forward kinematic equations (2)-(3). A 3-dimensional neural lattice with 10 × 10 × 10 nodes is selected for this task. The various learning parameters are: εi = 0.9, εf = 0.1, ε′i = 0.9, ε′f = 0.1, σ′i = σmix,i = 2.5, σ′f = σmix,f = 0.01. The subscripts i and f denote the initial and final values of the corresponding parameters. The network is trained for 30000 iterations. After training, the lattice neurons spread out to topologically distribute themselves in the 9-dimensional input space, as shown in Figures 2-5. Figure 2 shows that after training the neurons distribute themselves so as to capture the 3-dimensional Cartesian workspace. Figure 3 is the top view of the manipulator workspace. Figures 4 and 5 show the distribution of neurons in the joint-angle space. During the testing phase, the network is presented with the following trajectory:

x = 0.24 sin θ
y = 0.24
z = 0.24 cos θ   (16)

where θ varies from 0 to 360°. The above equations represent a circle in the x−z plane.
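The test trajectory of equation (16) can be generated as follows (our own sketch):

```python
import numpy as np

def circle_trajectory(n_points=360):
    """Targets on the test circle of equation (16): radius 0.24 in the
    x-z plane at constant y = 0.24, for theta in [0, 2*pi)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    x = 0.24 * np.sin(theta)
    y = np.full_like(theta, 0.24)
    z = 0.24 * np.cos(theta)
    return np.stack([x, y, z], axis=1)    # shape (n_points, 3)
```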

Similar trajectories may be taken in other planes as well. However, in some cases the joint angles might exceed their limits, indicating that such trajectories cannot be traversed by the manipulator. The tracking achieved during the testing phase is shown in Figure 6, and the trajectory traversed in the joint-angle space is shown in Figure 7. Only one corrective step is used to reach each target point. As can be seen, the joint angles stay within their normalized range of ±1; hence, we are able to get a feasible solution for the inverse kinematic problem. It is seen in Figure 6 that the algorithm does not track the test trajectory immediately. After training, the robot lies at some point far away from the test trajectory and thus requires more than one corrective step to reach the target point. Since we are using only one corrective step, it initially fails to reach the target point. As the network parameters are also updated during the testing phase, after a few iterations the algorithm is able to reach the target point in one corrective step and then follows the trajectory. If the network parameters were not updated during the testing phase, one would need more than one corrective step to track the trajectory accurately.

Fig. 2. 3-dimensional robot workspace

Fig. 3. Top view of the workspace of the robot manipulator

Fig. 4. Clustering in joint angle space: first 3 angles

Fig. 5. Clustering in joint angle space: last 3 angles

Fig. 6. Tracking in the Cartesian space

Fig. 7. Joint angles during tracking

The algorithm described in the previous section assumes the availability of the 3-dimensional Cartesian coordinate vectors for the robot end-effector and target positions. However, in the actual visual motor coordination scheme, the position vectors are obtained from two cameras which give a 4-dimensional image coordinate vector. The corresponding tracking in the image plane is shown in Figure 8; the two tracking trajectories correspond to the two cameras. The tracking error in the image plane as well as the Cartesian plane is shown in Figure 9. It is to be noted that the errors are normalized and hence it is possible to show them on the same scale.

Fig. 8. Tracking in the Image Plane

Fig. 9. Tracking error in the Image Plane and the Cartesian Plane

V. CONCLUSION

In this paper, Martinetz's KSOM algorithm [3] is modified to resolve redundancy for a 6 DOF robot manipulator. By performing clustering in the input-output space, it is possible to estimate the coarse joint angle values necessary for reaching a target point in the Cartesian space. Simple fine-tuning around these coarse values leads to good tracking accuracy without exceeding the joint angle limits. This approach is model-free in the sense that neither the manipulator geometry nor the forward kinematic relationship is needed while finding the joint angle vector for a given 3-dimensional target point. Many conventional techniques of redundancy resolution depend on explicit orientation information, which may not be easily available through visual feedback. The technique presented in this paper does not make use of any orientation information and still produces a feasible solution. However, the solution is not guaranteed to be an optimal one.

REFERENCES

[1] R. Y. Tsai. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation, RA-3(4):323-344, August 1987.
[2] M. Kuperstein. Adaptive visual-motor coordination in multijoint robots using parallel architecture. Proc. IEEE International Conference on Robotics and Automation, 4:1595-1602, March 1987.

[3] T. M. Martinetz, H. J. Ritter, and K. J. Schulten. Three-dimensional neural net for learning visual motor coordination of a robot arm. IEEE Transactions on Neural Networks, 1(1):131-136, March 1990.
[4] T. Kohonen. Self-Organization and Associative Memory. Springer-Verlag, 1984.
[5] J. A. Walter and K. J. Schulten. Implementation of self-organizing neural networks for visual-motor control of an industrial robot. IEEE Transactions on Neural Networks, 4(1):86-95, January 1993.
[6] L. Behera and N. Kirubanandan. A hybrid neural control scheme for visual-motor coordination. IEEE Control Systems Magazine, 19(4):34-41, 1999.
[7] R. V. Patel and F. Shadpey. Control of Redundant Manipulators. Springer, 2005.
[8] H. Seraji. Configuration control of redundant manipulators: theory and implementation. IEEE Transactions on Robotics and Automation, 5(4):472-490, 1989.
[9] H. Seraji. Task options for redundancy resolution using configuration control. In Proc. 30th IEEE Conference on Decision and Control, pages 2793-2798, Brighton, UK, December 1991.
[10] N. Kumar and L. Behera. Visual motor coordination using a quantum clustering based neural control scheme. Neural Processing Letters, 20:11-22, 2004.
[11] J. J. Craig. Introduction to Robotics. Pearson Education, Inc., 1989.
