Journal of Intelligent and Robotic Systems (2005) 43: 1–32

© Springer 2005

Fusion Strategies for Minimizing Sensing-Level Uncertainty in Manipulator Control

G. C. NANDI
Indian Institute of Information Technology, Allahabad, India 211 012; e-mail: [email protected]

DEBJANI MITRA
Department of Electronics Engineering, Indian School of Mines, Dhanbad, India

(Received: 30 December 2003; in final form: 11 January 2005)

Abstract. Humanoid robotic applications require a robot to act and behave like a human being. Following a soft-computing-like approach, a human being can think, decide and control himself in unstructured, dynamic surroundings, where a great degree of uncertainty exists in the information obtained through the sensory organs. In the robotics domain too, one of the key issues in extracting useful knowledge from sensory data is that of coping with information uncertainty as well as sensory uncertainty at various levels. In this paper a generalized fusion-based hybrid classifier (ANN-FDD-FFA) has been developed and validated both on synthetic data generated from an observation model and on data from a real hardware robot. The fusion goal selected here is primarily to minimize uncertainties in robotic manipulation tasks that are based on internal (joint sensors) as well as external (vision camera) sensory information. The effectiveness of the present methodology has been studied extensively with a specially configured experimental robot having five degrees of freedom and with a simulated model of a vision-guided manipulator. The main uncertainty-handling approach in the present investigation includes weighted-parameter selection (of geometric fusion) by a trained neural network, which is not available in standard manipulator robot controller designs. These approaches, in hybrid configuration, have significantly reduced the uncertainty at different levels for faster and more accurate manipulator control, as demonstrated here through rigorous simulations and experiments.

Key words: sensor fusion, FDD, FFA, ANN, soft computing, manipulators, repeatability, accuracy, covariance matrix, uncertainty, uncertainty ellipsoid.

1. Introduction

A large variety of robotic applications (industrial, military, scientific, medical, welfare, household and amusement) are emerging with recent progress, in which a robot has to operate in a large, unstructured environment [3, 12, 15]. In most cases, knowledge of how the surroundings change at every instant is fundamentally important for optimal control of robot motions. Mobile robots, in particular, have to navigate and operate in very large, unstructured, dynamic surroundings and deal with significant uncertainty [1, 9, 19]. Whenever a robot operates in a natural, nondeterministic environment, there always exists some degree of uncertainty in the conditions under which a given job will be done. These conditions may, at times, vary while a given operation is being carried


out. The major causes of the uncertainty are the discrepancies arising in the robot motion parameters and in the various task-defining information. The amount by which these differ from the values called for in the process specifications may not always be insignificant. The deviations may be due to inaccuracies in the analytical design or in the reproduction of programmed motions, or to deterministic as well as random errors in the algorithms, measurement data, data transmission links, and other factors. Changes in the status of the robot, such as malfunctions, failures, shifts in the frame of reference, etc., also lead to uncertainty in its operating conditions. The presence of substantial uncertainty significantly affects the robot in sensing the state of a task, in adapting to changes through the control system, and in reasoning to select the actions needed to achieve a goal. Indeed, one of the key issues in extracting useful knowledge from data is coping with uncertainty at all levels, especially at the sensing level. Along with the quantity of the observed sensory measurements, their quality also needs to be investigated in terms of the residual uncertainty it propagates to the desired sensory information. In the robotics domain, the uncertainty problem at the sensory interpretation level is crucial for specific tasks, such as robotized space-structure manipulation and robotized surgery, where both a high level of machine precision and human-like prehension are needed. The key problem in the sensing process is making the connection between the signal outputs of all the sensors and the attributes of the three-dimensional world. One recent trend is to solve this problem through sensor fusion, and there are numerous fusion techniques covering a very broad spectrum of application areas [10, 13].
Against the backdrop of these research works, it was felt that there is a great need for a generalized and easily apprehensible soft-computing-based sensor fusion strategy (a humanoid approach) for multiple sensory systems. The humanoid approach makes it available for versatile applications, and its easily apprehensible character makes it particularly suitable for processing the complex, highly nonlinear functional relationships between low-level sensory data and high-level information. The fusion strategies are most suitable for distributed fusion architectures, as they can effectively minimize the uncertainties at any desired level. A review of papers on uncertainty analysis in the context of manipulator control [4, 14, 16, 20, 23] shows that a common step in all these systems is the interpretation of identical information acquired through multiple sensory units. The fused information needs to be represented with minimized uncertainty, and the level of this minimization depends on the task-specific application. The research described in this paper focuses on this objective in the context of sensory-guided robotic manipulation. As a token application, the challenge of improving the repeatability of a very ordinary RCS-type robot has been undertaken. Real-world systems are stochastic in nature, with nonlinearity and uncertainty in their behavior, and hence humanoid approaches are often the only acceptable

one in many such tasks. For multivariable input–output systems, the effects of such nonlinearity and uncertainty are significant and need to be addressed properly in order to control them effectively. Take, for example, advanced robotic systems (manipulating robots with redundant degrees of freedom, or mobile robots with redundant sensory systems, would fall into this category). These systems require various kinds of sensors to respond intelligently to a dynamic environment. They may be equipped with external sensors such as force–torque sensors, range sensors, proximity sensors, ultrasonic and infrared sensors, tactile arrays and other touch sensors, overhead or eye-in-hand vision sensors, and cross-fire, overload and slip sensing devices. In addition, there are various internal state sensors such as encoders, tachometers, resolvers and others. The more sensors there are, the greater the computational complexity of controlling the system, and the higher its intelligence level. Since recent industrial as well as non-industrial applications need robotic systems with a high level of intelligence, the associated complexity has to be addressed properly. For this purpose, systems equipped with multiple sensors having different ranges of uncertainty have been taken up for study here. Information obtained from different sensors is inherently uncertain, imprecise and inconsistent. Occasionally it may also be incomplete or partial, spurious or incorrect, and at times it is geographically or geometrically incompatible among the different sensor views. Our knowledge of the spatial relationships among objects is also inherently uncertain. Take the example of a man-made object: it may not match its geometric model exactly because of manufacturing tolerances, human/machine errors and other natural uncertainties.
Even if it does (at the macro level), a sensor cannot measure the geometric features and locate the object exactly because of measurement errors. Even if it can (within a certain bounded tolerance limit), a robot using the sensor may not manipulate the object exactly as intended, because of all the cumulative errors added to the end-effector positioning errors. For some tasks these errors can be reduced very significantly by re-engineering the solution, structuring the working environment and using specially suited high-precision equipment, but at great cost in time and equipment [20]. An alternative is to develop sensor fusion strategies that can minimize the uncertainties of an engineering system to a desired level, at much lower cost, while incorporating all inherent uncertainties. In this paper we focus on developing an FDD-FFA-ANN based hybrid sensor fusion strategy. The paper is organized as follows. Section 2 outlines the computational steps through which the overall fusion algorithm has been formulated and developed. These developments and propositions are applied in Section 3 for validation on synthetic data from an observation model. Section 4 is dedicated to applying the developed hybrid fusion strategies to improving the repeatability of a hardware robot manipulator. Their effectiveness has been studied extensively with a specially configured RCS-type experimental robot having five degrees of freedom. A neural network formulation of the fusion algorithm is also


presented. Finally, in Section 5 the significant results and inferences are listed.

2. Formulation of the Fusion Algorithm Structure

The fusion algorithm structure consists of the following computational steps:
(i) The uncertainties in the information derived through processing of multiple noisy sensory data are represented by individual uncertainty ellipsoids.
(ii) The uncertainty ellipsoids are merged so as to minimize the volume of the fused uncertainty ellipsoid by proper assignment of optimal weighting matrices.
(iii) Fusion in the Differential Domain (FDD) has been developed to further reduce the uncertainty of the fused information at finer resolutions through an iterative process that predicts correction terms for all the sensory information. These terms are then fused and applied to the fused information to increase its precision.
(iv) The Fission Fusion Approach (FFA) is used to minimize uncertainties significantly for some specific sensor models where the covariance matrix of the sensory information can be "fissioned" and information from multiple measurements of the same set of sensors is available for fusion.
(v) An ANN model of the manipulator has been developed for initial estimation of the uncertainties (mean square error) of the joint sensors, which can be further minimized by the fusion processes (FDD, FFA).

The fusion methods represented by steps (i) and (ii) give a physical, or rather geometric, insight into the complicated information processing, as they involve fusing the uncertainty ellipsoids of each individual piece of sensory information. Given a set of uncertainty ellipsoids, one associated with each sensor, the problem is to assign a weighting matrix W_i to each sensory system so as to geometrically minimize the volume of the fused uncertainty ellipsoid [17]. The parameter representing the information X_i ∈ R^n is usually determined from a set of sensory observational data, D_i ∈ R^{m_i}.
Here, R^n denotes the general n-dimensional Euclidean space, i denotes the ith sensor, m_i is the number of independent measurements, and n is the dimension of the information (i = 1, ..., N, N being the total number of sensor units). X_i and D_i are related through a known nonlinear vector function,

    X_i = f_i(D_i)   or   D_i = g_i(X_i).                                  (1)

The fused information X_f is then formed as the linear combination

    X_f = Σ_{i=1}^{N} W_i X_i,   W_i ∈ R^{n×n}.                            (2)
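As a rough illustration of steps (i) and (ii), the following Python sketch (our own variable names and values, not code from the paper) shows that the volume of a 2-D uncertainty ellipsoid scales with √det C, and that inverse-variance fusion of two diagonal covariances yields a fused ellipsoid of smaller volume than either input:

```python
import math

# Sketch (illustrative, not the authors' code): the "volume" of the
# 1-sigma uncertainty ellipse for a 2-D covariance matrix C is
# proportional to sqrt(det C); geometric fusion seeks weights that
# shrink this volume.

def det2(C):
    return C[0][0] * C[1][1] - C[0][1] * C[1][0]

def ellipsoid_volume(C):
    # area of the 1-sigma ellipse in 2-D: pi * sqrt(det C)
    return math.pi * math.sqrt(det2(C))

# two sensors observing the same 2-D quantity, with diagonal covariances
C1 = [[0.04, 0.0], [0.0, 0.09]]
C2 = [[0.09, 0.0], [0.0, 0.04]]

# inverse-covariance (information) fusion for diagonal matrices:
# fused variance per axis = 1 / (1/v1 + 1/v2)
Cf = [[1.0 / (1.0 / C1[i][i] + 1.0 / C2[i][i]) if i == j else 0.0
       for j in range(2)] for i in range(2)]

assert ellipsoid_volume(Cf) < min(ellipsoid_volume(C1), ellipsoid_volume(C2))
```

The diagonal case keeps the arithmetic transparent; the matrix-weighted form of Equations (2) and (3) generalizes the same inverse-covariance idea.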


Using Lagrangian optimization, the weighting matrices for the geometrically optimized fusion are

    W_i = [ Σ_{i=1}^{N} (J_i C_i J_i^T)^{-1} ]^{-1} (J_i C_i J_i^T)^{-1}.  (3)

Here J_i(D_i) ∈ R^{n×m_i} is the Jacobian matrix of f_i with respect to D_i, and C_i ∈ R^{m_i×m_i} is the covariance matrix of D_i, for the ith sensor.

2.1. REVIEW OF RELATED WORKS AND SPECIALTY OF OUR UNCERTAINTY MINIMIZATION APPROACH BASED ON FDD (FUSION IN THE DIFFERENTIAL DOMAIN)

A number of currently available uncertainty minimization approaches are based either on classical Bayesian decision theory [21], on the information-theoretic approach [7], or on other soft computing approaches such as fuzzy logic, rough set theory, artificial neural networks, or hybrids of these. Most uncertainty-handling classifiers developed from Bayesian decision theory work on a centralized architecture, where data from the different sensors are sent to a single location, i.e. the central fusion node, from which the fused data are distributed to the various users. These architectures are theoretically optimal and conceptually simpler, and the fusion node is able to make decisions based on all the system information [10]. However, they present several drawbacks, such as the high computational load of the fusion node, the high communication bandwidth needed to send all the sensor data, and inflexibility to fusion-node failure and to system or sensor changes, which makes such systems especially unsuitable for some robotic applications. In hierarchical architectures, as is well known, several fusion nodes are arranged in a hierarchy, with the lowest nodes processing sensory data and sending results to higher-level nodes. The higher-level nodes may also provide some feedback. Often data are communicated to the higher levels at a rate slower than the sensory observation rate, or for other reasons a higher-level node may collect processing results only periodically. In such cases the architecture allows significant savings in communication. The computational load in this architecture can also be reduced, as each node may be implemented on a different processor.
In a fully distributed architecture, there are multiple fusion nodes with no fixed superior/subordinate relationship. Each node processes the data provided by its corresponding sensor and can communicate with any other node whose results are to be used in its fusion process. The different topologies for the distributed architecture are defined by the connectivity pattern of the fusion nodes. The hierarchical and distributed architectures have the following advantages:
• Modularity.
• Robustness to the failure of nodes.
• Flexibility and extensibility.


• No need to maintain a large centralized database.
• Easy user access to the fusion results.
Motivated by these advantages, we have formulated our fusion architecture based on ANN-FDD & FFA, which work synergistically and take the advantages of both. The nature of the architecture is illustrated with the help of one fusion module in Figure 1.
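The hierarchical/distributed idea can be sketched as follows. This is an illustrative Python mock-up under the simplifying assumption of scalar estimates fused by inverse-variance weighting; the class and its fields are our own, not from the paper:

```python
# Sketch (assumed structure, not the authors' implementation): each
# fusion node holds a local estimate with a scalar variance; a parent
# node fuses its children by inverse-variance weighting, so the
# hierarchy never needs a central store of raw sensor data.

class FusionNode:
    def __init__(self, estimate, variance, children=None):
        self.estimate = estimate
        self.variance = variance
        self.children = children or []

    def fuse(self):
        # collect the local result plus recursively fused child results
        pairs = [(self.estimate, self.variance)]
        pairs += [child.fuse() for child in self.children]
        inv = [1.0 / v for (_, v) in pairs]
        var_f = 1.0 / sum(inv)
        est_f = var_f * sum(x / v for (x, v) in pairs)
        return est_f, var_f

leaf1 = FusionNode(1.02, 0.04)
leaf2 = FusionNode(0.97, 0.09)
root = FusionNode(1.00, 0.25, children=[leaf1, leaf2])
est, var = root.fuse()
assert var < min(0.04, 0.09, 0.25)  # the fused variance shrinks
```

A node failure here only removes one `(estimate, variance)` pair from the fusion, which is the robustness property listed above.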

Figure 1. Simplified diagram of one module of generalized hybrid fusion algorithm.


In the FDD approach, the absence of dynamic uncertainties in the differential domain has been assumed, since for fine manipulations the sensory data are expected to give less erroneous information. Intuitively, we can also experience this with our own visual perception: when we try to estimate an object a hundred meters away, the error associated with the estimate is much larger than when we estimate the same object from a couple of centimeters away (here we say the sensor is operating in the differential domain).

Mathematical Formulation

Let X_df be the residual consensus error, or uncertainty, that remains in the sensory information after geometric fusion through the optimal weighting matrices. If the original error function is redefined in the neighborhood of the optimized error, it should be possible to find another X_df that would increase and/or decrease around the error function. It is logical to expect that a sensor in the neighborhood of its goal point will yield more accurate and less erroneous information. Let us represent the sensory information, sensory data and noise in the differential domain, for the ith sensory data (i = 1, ..., S, S being the total number of sensory data), by X_di, D_di and n_di, respectively. The noise, as random measurement error, is taken to be additive to the mapping known through Equation (1):

    D_di = g_i(X_di) + n_di.                                               (4)

The noise n_di is assumed to be a multivariate random vector with an S × S positive definite covariance matrix Q_di. Treating X_di as an unknown non-random vector and n_di as having zero mean and a Gaussian distribution, the conditional probability density function p(D_di | X_di) is maximized by determining the maximum likelihood estimator, and X_df is predicted through the following derived iteration [6]:

    X_df = X_do + (G^T Q_di^{-1} G)^{-1} G^T Q_di^{-1} [D_di − g_i(X_do)],  (5)

where the vector X_do, taken as an initial estimate of X_di, is determined from the preliminary fusion results. The value of X_do can also be obtained if a previous iteration of some other estimation procedure has been followed or some a priori information is available. In the analysis here it is assumed that X_do is sufficiently close to X_di, and

        [ ∂g_1/∂X_d1  ...  ∂g_1/∂X_dn ]
    G = [     ...     ...      ...     ]                                   (6)
        [ ∂g_S/∂X_d1  ...  ∂g_S/∂X_dn ]

By performing singular value decomposition of the fused information, the residual uncertainty that remains in the fused information is predicted in both magnitude


and direction. This uncertainty, or error, taken as the initial estimate X_do in the iteration process described by (5), should in principle yield uncertainty values that closely encompass the original ones. The initial values of D_di in that case are obtained through (4) by taking X_di = X_do for all the sensors. The amount of noise n_di in the differential domain that actually leads to the given uncertainty in the sensory information is found by random manipulations. This step is critical for getting precise sensory information in the two-tier information processing system by repeatedly applying Equation (5). It manipulates the optimized sensory information from the coarse stage towards the fine stage in such a manner that the variance in the differential domain varies about the variance that existed before the onset of the first iteration. Subsequently, the iteration process not only predicts a reduced uncertainty, or variance, of the fused sensory information in the differential domain, but also gives a measure of the corresponding manipulative noise in the sensory data. Using the Jacobian matrices J_i, the corresponding correction terms for each piece of sensory information are obtained. These terms, after fusion with the original weighting matrices, can be applied to the fused information, which then represents the information with reduced uncertainty. This strategy was developed in [18].
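A one-dimensional sketch of the iteration in Equation (5) may make it concrete (illustrative Python, not the authors' implementation; the sensor mapping g and the constant 0.4 are borrowed from the Sensor 3 model used later in Section 3). With a single scalar observation d = g(x) + n of variance q, the update collapses to a Gauss-Newton step with G = dg/dx evaluated at the current estimate:

```python
import math

# Scalar sketch of Equation (5): for one observation, the matrix update
# X_df = X_do + (G^T Q^-1 G)^-1 G^T Q^-1 [d - g(X_do)] reduces to
# x <- x + (d - g(x)) / G, with G = dg/dx.

def g(x):
    return x + math.sin(0.4 * x)      # Sensor-3-style mapping, cf. (10)

def dg(x):
    return 1.0 + 0.4 * math.cos(0.4 * x)

def fdd_iterate(d, x0, steps=10):
    x = x0
    for _ in range(steps):
        G = dg(x)
        x = x + (d - g(x)) / G        # scalar form of Equation (5)
    return x

x_true = 2.0
d = g(x_true)                          # noise-free data for the sketch
x_est = fdd_iterate(d, x0=1.5)
assert abs(x_est - x_true) < 1e-8
```

Since dg(x) stays between 0.6 and 1.4, the mapping is monotone and the iteration converges quickly from any nearby initial estimate, which matches the assumption that X_do is sufficiently close to X_di.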

2.2. THE FISSION FUSION APPROACH (FFA)

While working with many sensor models, it is found that for multidimensional information the fusion algorithm based on geometric optimization becomes difficult to apply, especially when the covariance matrix of the sensory information is close to singular. Even otherwise, when too much data is present, the entirety of the data cannot be processed directly, since in most cases it may be conflicting in nature and has to be filtered before or after employment in the data processing algorithms. Further, for a number of redundant sensors (similar or dissimilar) measuring the same attribute or information X_i ∈ R^n (n being the dimension of the information), the noise in the low-level sensory data propagates to the individual dimensions of X_i in different ways, depending on the actual mapping between the lower- and higher-level sensory information. The higher the intelligence level and degree of redundancy of the system, the more complex this difference becomes. Hence a strategy has to be found, wherever possible, for synthesizing W_i by fusing the disintegrated (granular) values of X_i, i.e. considering each dimension of X_i separately. This amounts to a "fission" of the covariance matrix. This approach, named FFA, is found to be appropriate and useful in sensor models where multiple measurements are available and the sensory characteristics are more or less known a priori. Sensory information in general can be more appropriately characterized by the contaminated Gaussian model [7], which includes a small fraction ε of a second probability measure. This model is used here for characterizing each dimension


of the sensory information separately, by representing the probability density function (pdf) as

    P_i(X) = [(1 − ε)/(√(2π) σ_i1)] e^{−(X−X_m)²/(2σ_i1²)} + [ε/(√(2π) σ_i2)] e^{−(X−X_m)²/(2σ_i2²)},   (7)

where X_m represents the mean value of the sensory information, with σ_i2² ≫ σ_i1², and ε is usually limited to between 0.03 and 0.1. The spirit of this distribution is that most of the time the variance of the concerned information is σ_i1², but occasionally it may have the large variance σ_i2² due to miscellaneous causes. Especially for autonomous robots operating in a dynamically changing environment, the common types of sensors used (vision, tactile, range, proximity, etc.) can normally be expected to provide quite accurate observations within a specific range, but at times may give spurious measurements in specific situations such as miscalibration, improper illumination, software failure and the like. If the fusion structure is interpreted in the context of intra-sensor fusion based on N repeated measurements of the same attribute X, any two readings, say X_i and X_j, can in general be characterized by the distributions P_i(X) and P_j(X) of (7), respectively. These readings will have different variance measures, since (J_i C_i J_i^T) ≠ (J_j C_j J_j^T) and hence σ_i² ≠ σ_j² in most cases. This makes the conditional probability function P_i(X | X_i) different from P_j(X | X_j). Hence, if fusion is done on the basis of the individual dimensions of the sensory information X, better fusion results with reduced uncertainty are possible.

3. Fusion to Improve Sensory Information

In multisensor fusion systems with redundant and/or complementary sensors, each sensor can be considered an individual source of uncertain information, able to communicate, co-operate and co-ordinate with the other members of the sensing group. Based on this structure, Durrant-Whyte [7] has presented sensor models described as a probabilistic function of state and of decisions communicated from other information sources.
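Returning to the contaminated Gaussian model of Equation (7), a minimal Python sketch (our own parameter values, chosen only for illustration) shows its two ingredients: a tight component of variance σ_i1² and a rare, broad component of variance σ_i2²:

```python
import math
import random

# Sketch of the contaminated Gaussian model (7): with probability
# 1 - eps a reading has standard deviation sigma1, and with the small
# probability eps it has the much larger sigma2 (a spurious reading).

def contaminated_pdf(x, xm, sigma1, sigma2, eps):
    def normal(x, m, s):
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (math.sqrt(2 * math.pi) * s)
    return (1 - eps) * normal(x, xm, sigma1) + eps * normal(x, xm, sigma2)

def sample(xm, sigma1, sigma2, eps, rng):
    s = sigma2 if rng.random() < eps else sigma1
    return rng.gauss(xm, s)

rng = random.Random(0)
readings = [sample(10.0, 0.05, 1.0, 0.05, rng) for _ in range(10000)]
# most readings are tight around the mean; a small fraction are outliers
outliers = sum(1 for r in readings if abs(r - 10.0) > 0.5)
assert 0.01 < outliers / len(readings) < 0.10
```

The outlier fraction tracks ε, which is exactly the behavior described above for sensors that are normally accurate but occasionally spurious.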
They treat three components of this sensor model: the observation model, which describes the measurement characteristics; the dependency model, which describes the sensor's dependence on other information sources; and the state model, which characterizes the sensor's dependence on its location and internal state. Of these, the observation model, which is basically the model of sensor noise and error, needs to be studied most extensively. The noise that is present has to be evaluated and reduced to the maximum possible extent. Fusion strategies are shown to be effective for this purpose by considering the combination of information from four different sensors, each with a different mathematical model of its characteristics. If the nonlinearity characteristic of each sensor [5] is different, the propagation of noise to the sensory reading will also be different, and


fusion can suitably reduce the overall uncertainty. Here it has been assumed that the characteristics of the four different sensors are known, with the following mathematical relations between the sensory data readings y and their actual values x:

    S1. Sensor 1:  y_1 = k_1 x,                                            (8)
    S2. Sensor 2:  y_2 = x^{k_2},                                          (9)
    S3. Sensor 3:  y_3 = x + sin(k_3 x),                                   (10)
    S4. Sensor 4:  y_4 = x e^{−k_4 x},                                     (11)

where k_1, k_2, k_3, k_4 are known constants. Though these mathematical relations may be known quite accurately, the sensors may exhibit uncertainty or inconsistency at any time. The presence of random additive noise in the sensory data is represented as

    y_i = ŷ_i + Δy_i,                                                      (12)

where ŷ_i and Δy_i are the undisturbed sensory data and the Gaussian disturbance, respectively, for the four sensors (i = 1, ..., 4). Thus there is now uncertainty associated with all the x's computed through the mappings defined by (8) to (11). This simply implies that the x's obtained from the different sensors are different, depending on the random imprecision in a given surrounding or condition. Interpreting x as the sensory information and y as the sensory data, the four sensor observation models are jointly stated as

    x_i = f_i(y_i),                                                        (13)

for i = 1, ..., 4, corresponding to the four sensors, where the respective f_i represent the four relations specified by (8) to (11). The fusion strategies are now applied to this simulation in steps. For the results shown, the following values of the constants are assumed: k_1 = 1.5, k_2 = 1.04, k_3 = 0.4, k_4 = 0.008. The expected inaccuracies in the sensory data, Δy_i in (12), have been simulated through random number generators, using the MATLAB function randn, such that NS * randn(1, 1) represents white Gaussian noise of power NS². Henceforth, NS is referred to as the strength of the noise generator. Figure 2 shows the characteristics of the four different sensors for a particular value of NS. No particular units have been assigned in the plots, since the data have been created synthetically and the main purpose of the plots is to illustrate the induced nonlinearity of the four sensors and the different ways in which the net error propagates from y to x. The net error incorporated in x more or less retains the random nature of the additive noise induced in y and is independent of the value of x. But the uncertainty that propagates to x does not retain this nature. The variance of x is

    V[x_i] = (∂x_i/∂y_i)² σ_i².                                            (14)
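The variance propagation of (14), together with the weighted fusion described by (15)-(17) later in this section, can be sketched for the four sensor models (8)-(11). This is illustrative Python using the constants quoted in the text; the helper names are ours, and the fused variance is computed with the diagonal of the propagated variances V[x_i]:

```python
import math

# Sketch of Equations (14)-(17) for the four sensor models (8)-(11):
# propagate the data variance sigma^2 to the information x via
# V[x_i] = (dx_i/dy_i)^2 sigma_i^2 = sigma_i^2 / (dy_i/dx)^2,
# then fuse with inverse-variance weights.

k1, k2, k3, k4 = 1.5, 1.04, 0.4, 0.008
sigma2 = 0.0017                      # variance of the sensory data y_i

def dy_dx(i, x):
    derivs = [
        lambda x: k1,                                   # d/dx of (8)
        lambda x: k2 * x ** (k2 - 1.0),                 # d/dx of (9)
        lambda x: 1.0 + k3 * math.cos(k3 * x),          # d/dx of (10)
        lambda x: math.exp(-k4 * x) * (1.0 - k4 * x),   # d/dx of (11)
    ]
    return derivs[i](x)

def fused_variance(x):
    # Equation (14), using dx/dy = 1 / (dy/dx)
    V = [sigma2 / dy_dx(i, x) ** 2 for i in range(4)]
    inv = [1.0 / v for v in V]
    W = [iv / sum(inv) for iv in inv]               # Equation (15)
    Vf = sum(W[i] ** 2 * V[i] for i in range(4))    # (17), diagonal case
    return V, W, Vf

V, W, Vf = fused_variance(2.0)
assert abs(sum(W) - 1.0) < 1e-12     # the weights are normalized
assert Vf <= min(V)                  # fusion never increases the variance
```

For the linear Sensor 1, V[x_1] = σ²/k_1² = 0.0017/2.25 ≈ 7.56 × 10⁻⁴ independently of x, which matches the constant S1 variance reported in Table I below.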


Figure 2. Characteristics of four different sensors chosen for fusion.

Here σ_i² represents the variance of the sensory data readings y_i, and ∂x_i/∂y_i is known from (13). In the simulation, for the different measurements of x, the σ_i² are computed from about 100 generated error values corresponding to a given noise power. Figure 3 shows how the variance of x depends on the value of x for all four sensors. Sensor 3, because of its sinusoidal term, exhibits the maximum change of variance over the given range of x. Sensor 1, being linear, shows the minimum change of variance. The overall variance of x is optimized by assigning suitable weightage parameters W_i, which using (3) are

    W_i = [ Σ_{i=1}^{4} (V[x_i])^{-1} ]^{-1} (V[x_i])^{-1},                (15)

and the fused information is

    x_fused = Σ_{i=1}^{4} W_i x_i.                                         (16)

The minimized variance of the fused information is

    V[x_f] = W_f C_f W_f^T,                                                (17)

where W_f = (W_1 W_2 W_3 W_4) ∈ R^{1×4} and C_f = diag(σ_1², σ_2², σ_3², σ_4²) ∈ R^{4×4}. This minimized variance is shown by the dotted lines in Figure 3. It is to be noted that by using the geometric fusion approach the effect of random uncertainty


Figure 3. The variance of individual and fused information.

Table I. Individual and fused information readings of the four sensors and their variances (variances, in brackets, are ×10^-4)

    Actual x   S1: x1            S2: x2             S3: x3             S4: x4              x_fused
    0.1        0.0988 (7.5556)   0.1016 (18.8725)   0.1001 (8.6774)    0.0985 (20.4040)    0.0996 (2.8605)
    0.5        0.4883 (7.5556)   0.4983 (17.0000)   0.4833 (8.7665)    0.5247 (22.4126)    0.4931 (2.8379)
    0.6        0.6359 (7.5556)   0.5807 (16.4160)   0.5872 (8.8112)    0.6090 (22.1407)    0.6072 (2.8415)
    2          1.9737 (7.5556)   1.9081 (14.9257)   2.0639 (10.5192)   1.9237 (27.3239)    1.9808 (3.0210)
    8          7.7256 (7.5556)   8.3523 (13.2629)   7.9679 (47.1569)   8.0127 (72.385)     7.9577 (4.1191)
    10         9.5047 (7.5556)   9.7240 (13.1025)   9.5624 (35.7253)   9.3930 (90.2744)    9.5738 (4.0364)
    12         11.6917 (7.5556)  12.1179 (12.8738)  11.7668 (17.0777)  11.7341 (131.2938)  11.8286 (3.6205)
    18         18.3032 (7.5556)  18.4938 (12.4457)  17.7270 (10.433)   18.1099 (364.1508)  18.1732 (3.2124)
    20         20.6224 (7.5556)  20.7864 (12.3299)  19.5863 (16.7381)  18.6802 (398.9300)  20.4284 (3.6270)
    25         25.6313 (7.5556)  25.4238 (12.1329)  24.9068 (39.4443)  24.1365 (955.0958)  25.4778 (4.1464)


Table II. Optimized weightage parameters of fusion for the four sensors

    Actual x   W1       W2       W3       W4
    0.1        0.3786   0.1516   0.3296   0.1402
    0.5        0.3756   0.1708   0.3237   0.1299
    0.6        0.3761   0.1731   0.3225   0.1283
    2          0.3998   0.2024   0.2872   0.1106
    8          0.5452   0.3106   0.0873   0.0569
    10         0.5342   0.3081   0.1130   0.0447
    12         0.4792   0.2812   0.2120   0.0276
    18         0.4252   0.2581   0.3079   0.0088
    20         0.4800   0.2942   0.2167   0.0091
    25         0.5488   0.3417   0.1051   0.0043

reduces drastically, as we assign more weightage to the less uncertain information and less weightage to the more uncertain information. This is the basic idea behind the geometric fusion methodology. The fusion results are also shown in Tables I and II for different values of x, assuming the variance of the sensory data to be the same for all four sensors, σ_1² = σ_2² = σ_3² = σ_4² = 0.0017. Table I records the sensory information readings for the four individual sensors and also the fused ones; the figures in brackets indicate the respective variances. The corresponding values of W_1, W_2, W_3 and W_4 are tabulated in Table II. As observed from the tables and figures, the above fusion methodology minimizes the uncertainty in the sensory information significantly. However, our architecture can perform much better, in the sense of reducing the residual inaccuracies further, by redefining the error function within the bounded region and minimizing it recursively, as described in detail next. The FDD approach is tested on this problem to see if there is scope for further improvement of precision. For this, the process described by Equation (5) is applied recursively to this multiple sensory system, interpreting the G matrix as

    G = [ ∂y_1/∂x  ∂y_2/∂x  ∂y_3/∂x  ∂y_4/∂x ]^T.                          (18)

The result of applying Equation (5) is shown for two particular values of x in Figures 4(a)–(c). The solid lines in these figures represent the variance of the information before applying FDD, and the rest of the curves show the same for each iteration. In this equation, the initial estimate X_do is substituted with √V[x_f], and [D_di − g_i(X_do)] is substituted with manipulative random noise whose covariance matrix Q_di is taken to be of very small magnitude (as it represents noise in the differential domain). Several runs are performed repetitively for different


Figure 4. (a) and (b) show the iterative FDD process for x = 1; (c) shows the iterative FDD process for x = 5.



values of x. The results are shown in Figures 4(a) and (b) for x = 1 and in Figure 4(c) for x = 5. From the plots it is noted that for these runs the noise can be manipulated so as to keep the variance close to the variance of the information that existed before the onset of FDD. For example, the 15th iteration in Figure 4(a), the 10th iteration in Figure 4(b), and the 8th iteration in Figure 4(c) can be selected as suitable ones. The noise introduced at the sensory data level for these particular iteration numbers is then noted in order to evaluate the correction terms. The correction terms are obtained by multiplying the noise data by the respective Jacobian derivatives obtained from (13). The correction terms obtained from one of the runs of Figure 4, for all four sensors, are shown in Figures 5(a)–(d). Their values for a particular iteration, read from the plots, are 0.0040 for S1, 0.0024 for S2, −0.0006 for S3 and 0.0061 for S4. The variance of the fused information obtained earlier was 3.6948 × 10^-4, using the weightage parameters W_1 = 0.4890, W_2 = 0.2668, W_3 = 0.1596 and W_4 = 0.0845. The same parameters are used to fuse the four correction terms and apply the fused correction term (3.0354 × 10^-3) to the fused information. The latter is then represented with a reduced variance, predicted from the iteration curves to be 3.3186 × 10^-4. This corresponds to about a 10% reduction in the variance of the fused information. The above strategy may be

16

G. C. NANDI AND D. MITRA

Figure 5. The correction term characteristics for a particular information reading for four sensors.

repeated for further improving the precision depending on the specific application where the sensory information is to be used. In this manner the FDD approach ensures the improvement of the overall sensory characteristics in a multisensor system considerably.
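The inverse-variance weighting that underlies the geometric fusion step can be sketched as follows; the readings and variances below are illustrative values, not the calibrated ones used in the experiments:

```python
import numpy as np

def inverse_variance_fusion(readings, variances):
    """Geometric fusion of scalar sensor readings: each sensor is
    weighted in inverse proportion to its variance, so the less
    uncertain information receives more weightage."""
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances
    W = w / w.sum()                              # weights sum to 1
    fused = float(np.dot(W, readings))
    fused_var = float(np.sum(W**2 * variances))  # = 1 / sum(1/sigma_i^2)
    return fused, fused_var, W

# Four hypothetical sensors observing the same quantity x
readings  = [1.02, 0.98, 1.05, 0.91]
variances = [0.0020, 0.0030, 0.0050, 0.0100]

fused, fused_var, W = inverse_variance_fusion(readings, variances)

# The fused variance is never worse than the best individual sensor's
assert fused_var <= min(variances)
```

For equal variances the weights reduce to 1/4 each; with unequal variances the most precise sensor dominates, which is the qualitative behaviour tabulated for W1–W4 in Table II.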


4. Experimental Verification of Hybrid (Neuro-FFA-FDD) Approaches for Handling Uncertainties in Improving Repeatability of Robotic Manipulator

In robotic manipulations there are a number of sources of uncertainty: (i) uncertainties associated with sensors, (ii) uncertainties associated with actuators, and (iii) uncertainties associated with modeling. In the present investigation, attention is focussed on uncertainties associated with sensors and their minimization. Most industrial robots execute simple repetitive tasks by playing back prerecorded or preprogrammed sequences of motions that have previously been guided or taught by a user. For this type of performance the robots do not need any information about their working environment; external sensors are not that important here, as the manipulators simply have to move to goal points that have been taught. A "taught" point is one to which the manipulator is moved physically, the corresponding joint position sensor readings are taken, and the joint-angle values stored. Subsequently, when the robot is next commanded to return to the same point in space, each joint is moved to the stored value. The precision with which a manipulator can return to a "taught" point is specified by the factor "repeatability of the manipulator". An indispensable capability for most manipulator applications is to provide a high-speed and high-precision trajectory. In such applications the repeatability of these manipulators needs to be quantified as accurately as possible. For this, the analytical description of the spatial displacement of the robot as a function of time is primarily required. This, in particular, depends on the functional relation between the joint angle variables and the position and orientation (with respect to a reference co-ordinate frame) of the robot arm end-effector. There are three representations of this position and orientation: descriptions in Cartesian space, joint space and actuator space.
However, here, mainly the mapping between joint space and Cartesian space (also called task-oriented or operational space) has been considered. The manner of propagation of uncertainty from joint space to Cartesian space has been studied by working out the forward kinematics of an experimental robot specially configured for the purpose. A robot with very poor repeatability has been chosen deliberately to demonstrate how the developed strategy can improve the robot's performance. The repeatability of the system, considering the effects of additive random noise in the joint angle measurements, has been evaluated and minimized. The general motivation for repeatability analyses comes from the fact that the most predominant form of industrial robot application is repetitive task performance, which is mainly done through preprogrammed functional operations. However, in most applications, to endow the machines with relevant information about the working space and impart greater degrees of intelligence to them for flexible interaction with the environment, the robots are equipped with external state sensors and need to calculate the joint space through complex inverse kinematics. This is in
contrast to the "teach and playback" mode, where the goal point is never specified in Cartesian co-ordinates and hence the inverse kinematics problem never arises. Systems which allow goals to be described in Cartesian terms are capable of moving the manipulator to points in the workspace which were never taught and to which it has perhaps never gone before. Such points are called "computed points", in contrast to the "taught points". The precision with which a computed point can be attained, i.e. the accuracy of the manipulator, lower bounded by the repeatability, is affected by the precision of the parameters appearing in the kinematic equations of the robot. Errors in the knowledge of the D-H parameters will result in joint angle computations (through the inverse kinematic equations) that will also be in error. Minimizing these errors necessitates the application of the proposed fusion strategies in a recursive manner to the stochastic part of the manipulator sensory error. Suitably devised calibration techniques can improve the accuracy of a manipulator to a large extent through estimation of that particular manipulator's kinematic parameters [11]. This global calibration error, being deterministic, can be compensated beforehand, but such compensation does not cover the local stochastic disturbance or uncertainty introduced as additive random noise to the low-level sensory measurement data. This uncertainty propagates to the higher-level information depending on the mapping between the two levels and needs to be analyzed. For most such manipulation tasks involving computed points, redundant or complementary sensor data are simultaneously used and hence the uncertainty can very well be analyzed in the framework of sensor fusion.
Extensive research and development is being carried out in areas like robot-assisted surgery and the manipulation of objects in the space shuttle cargo bay, where the robots require high-level path-planning and execution strategies. A number of redundant sensors are a must for increasing their interaction with the complex working domain. The proposed sensor fusion strategies are especially suitable for such applications. For ease of analysis, while the fusion strategies have been illustrated experimentally in the next section for improving the repeatability characteristics of an experimental robot, the application of the same has also been demonstrated for the accuracy characteristics of a two-link simulated robot manipulator having a redundant vision sensor. A neural network formulation of the latter has been presented here which works synergistically with FDD-FFA and gives very good results, as described in detail in Section 4.2.

4.1. APPLICATION OF FFA TO IMPROVE ROBOT'S REPEATABILITY

Design of Experiment

A revolute joint manipulator with five degrees of freedom (excluding the gripper) was specially configured for the experimental purposes. The range of the servomotor rotation in each joint was from −90° to +90° (corresponding to motor positions of −1400 units to +1400 units), and the configuration was made to extract the maximum volume of the working envelope of the robot. As mentioned earlier


Figure 6. One cycle of repeatability testing experiment with four different positions of the robot.

the considered robot has coarse repeatability and accuracy. This type of robot has been selected specifically to demonstrate how the developed hybrid classifier can improve repeatability even for such robots. The experimental set-up, with one cycle of experimentation, has been shown in Figures 6(a)–(e). The link parameters of this robot, as determined, are given in Table III.


Table III. Link parameters of the configured robot

i    αi−1     li−1           di             θi
1    0        0              0              θ1
2    0        l1 = 8.6 cm    0              θ2
3    −90°     l2 = 7.0 cm    d3 = 4.5 cm    θ3
4    0        l3 = 4.0 cm    0              θ4
5    +90°     l4 ≈ 0         d5 = 1.0 cm    θ5

Having established the D-H parameters and frame systems for each of the links, defined according to the convention mentioned above, a homogeneous transformation matrix is next developed to compute the link transformations describing the relationship between the assigned frames, and the position (pX, pY, pZ) of the end-effector location with respect to the base of the manipulator is obtained as follows:

pX = c1[c2(c3 s4 d5 + c3 l3 + s3 c4 d5 + l2) − s2 d3 + l1]
     − s1[s2(c3 s4 d5 + c3 l3 + s3 c4 d5 + l2) + c2 d3]
   = c1 l1 − d3(c1 s2 + s1 c2) + (c3 s4 d5 + c3 l3 + s3 c4 d5 + l2)(c1 c2 − s1 s2)
   = c1 l1 + c12 l2 + c12 c3 l3 + c12 s34 d5 − d3 s12,                      (19)

pY = s1[c2(c3 s4 d5 + c3 l3 + s3 c4 d5 + l2) − s2 d3 + l1]
     + c1[s2(c3 s4 d5 + c3 l3 + s3 c4 d5 + l2) + c2 d3]
   = s1 l1 + d3(c1 c2 − s1 s2) + (c3 s4 d5 + c3 l3 + s3 c4 d5 + l2)(s1 c2 + c1 s2)
   = s1 l1 + s12 l2 + s12 c3 l3 + s12 s34 d5 + d3 c12,                      (20)

pZ = −s3 s4 d5 − s3 l3 + c3 c4 d5 + D
   = c34 d5 − s3 l3 + D.                                                    (21)

Here c1 stands for cos θ1, s1 for sin θ1, and c12 for cos(θ1 + θ2); D was noted as 10 cm.

Analyses of Experiment

To analyze and minimize the error in the system repeatability of the configured robot, in the approach of geometric fusion, (pX, pY, pZ), as obtained from the developed kinematic equations, is treated as fissioned sensory information dependent on the joint angle sensory data, and the following were determined for the positional information: (i) the uncertainty propagation from the joint space (θi, i = 1, . . . , 4) to the Cartesian space as net error, (ii) its covariance matrix, and (iii) the principal axes of its uncertainty ellipsoid.
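As a numerical check on Equations (19)–(21), the closed-form position can be evaluated directly; this sketch assumes the Table III link values (in cm), the noted base offset D = 10 cm, and angles in radians:

```python
import numpy as np

# Link parameters from Table III (cm) and the noted base offset D
l1, l2, l3, d3, d5, D = 8.6, 7.0, 4.0, 4.5, 1.0, 10.0

def forward_position(t1, t2, t3, t4):
    """End-effector position (pX, pY, pZ) from Equations (19)-(21)."""
    c1, s1 = np.cos(t1), np.sin(t1)
    c3, s3 = np.cos(t3), np.sin(t3)
    c12, s12 = np.cos(t1 + t2), np.sin(t1 + t2)
    s34, c34 = np.sin(t3 + t4), np.cos(t3 + t4)
    pX = c1*l1 + c12*l2 + c12*c3*l3 + c12*s34*d5 - d3*s12
    pY = s1*l1 + s12*l2 + s12*c3*l3 + s12*s34*d5 + d3*c12
    pZ = c34*d5 - s3*l3 + D
    return np.array([pX, pY, pZ])

# Sanity checks: all joints at zero, and a 90-degree base rotation
assert np.allclose(forward_position(0, 0, 0, 0), [19.6, 4.5, 11.0])
assert np.allclose(forward_position(np.pi/2, 0, 0, 0), [-4.5, 19.6, 11.0])
```

At the zero pose the arm is fully extended along X (l1 + l2 + l3 = 19.6 cm), offset by d3 along Y, with pZ = D + d5; rotating the base by 90° swaps these roles, which the second assertion confirms.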


Table IV. Joint angle values for the four positions of the configured robot

Position number   θ1 (degrees)   θ2 (degrees)   θ3 (degrees)   θ4 (degrees)
Pos 1                3.21           48.60          −64.02          51.68
Pos 2              −36.00           57.08          −49.88          69.42
Pos 3               64.28           30.21           50.65          35.67
Pos 4               25.71           32.14          −38.57          51.42

A number of end-effector positions were marked on a piece of graph paper placed on the table. Those points could be reached by different combinations of motor positions. For one such point the robot is allowed to have four intermediate positions. To reach those plotted positions the motor positions were selected at random through the teach-pendant mode. The motor positions were then changed by about 0 to ±200 units (corresponding to a change of about 0 to ±12.9° in the joint angle values) from their initial position, and additive relative errors of −3% in θ1, 2.5% in θ2, −4.6% in θ3, and 3.2% in θ4 were found to be incorporated in them. The joint angles corresponding to the servomotor positions in the four different positions of the robot have been tabulated in Table IV. The net error propagating to the positional X, Y and Z coordinates for incremental changes in the motor positions is shown in Figures 7(a)–(c). The characteristics show, for four different positions of the robot, how the uncertainty or net error propagates to the pX, pY and pZ information respectively, as derived above. The plots clearly indicate that: (i) for a given position, the net error propagates in a different manner for each of the pX, pY and pZ coordinates; (ii) for any given coordinate, the magnitude of the error is affected by the initial position occupied by the robot. For example, under the same conditions, the net errors of Figure 7(b) are markedly different for positions 2 and 3. The same trend is observed in the other plots also. It is hence felt that, to improve the repeatability characteristic, the covariance matrix of the positional information Qp and the ellipsoidal representation of its uncertainty can be used as a useful tool.
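Given a joint-angle covariance such as the C of Equation (22) below, the Cartesian covariance Qp and the ellipsoid axes can be computed as follows; the 3 × 4 Jacobian J here is a hypothetical placeholder for the Jacobian of the actual kinematics at the pose of interest:

```python
import numpy as np

# Joint-angle covariance matrix, as in Equation (22)
C = np.array([[ 0.0020, -0.0005,  0.0006,  0.0000],
              [-0.0005,  0.0034, -0.0005, -0.0008],
              [ 0.0006, -0.0005,  0.0025, -0.0003],
              [ 0.0000, -0.0008, -0.0003,  0.0024]])

# Hypothetical Jacobian of (pX, pY, pZ) w.r.t. the four joint angles
J = np.array([[-4.5,  1.2,  0.8,  0.3],
              [19.0,  2.1, -0.5,  0.9],
              [ 0.0,  0.0, -3.9, -0.6]])

Qp = J @ C @ J.T                      # first-order covariance propagation
U, s, _ = np.linalg.svd(Qp)           # columns of U: principal directions

semi_axes = np.sqrt(s)                # lengths of the semi-principal axes
volume = 4.0/3.0 * np.pi * semi_axes.prod()   # uncertainty ellipsoid volume

assert s[0] >= s[1] >= s[2] >= 0      # SVD returns sorted singular values
```

The product of the semi-axes (and hence the volume) is the scalar summary of total three-dimensional uncertainty used in the analysis that follows.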
For evaluating these, random noise in the joint angle values for a particular strength of the noise generator is simulated, and the covariance matrix C of θi is obtained as

       0.0020  −0.0005   0.0006   0.0000
C  =  −0.0005   0.0034  −0.0005  −0.0008        (22)
       0.0006  −0.0005   0.0025  −0.0003
       0.0000  −0.0008  −0.0003   0.0024

Through the singular value decomposition (SVD) of Qp, the three principal axes of the uncertainty ellipsoid are obtained for each of the positions of Table IV.

Figure 7. (a), (b) and (c) show how uncertainty propagates along three orthogonal directions for four different positions of the robot.

The
nature of variation of these axes has been shown in Figures 8(a)–(c). Here, axis-1 corresponds to the longest principal axis, while axis-3 corresponds to the shortest one for each position. The product of the three semi-principal axes as obtained from the plots of Figure 8 is proportional to the magnitude of the total uncertainty, i.e. the volume of
the uncertainty ellipsoid. For a particular position, the ellipsoid volume is seen to show a small variation in response to changes in the joint angles. It is, however, seen to be quite different for different positions. Hence, the three-dimensional uncertainty associated with the repeated movement of the configured robot to different locations has been tested and evaluated for further analysis. Arbitrarily chosen taught points are taken and, for each position, the manipulator joint angle values are noted. Table V records the motor positions (P1 to P5) and the respective joint angle values (θ1 to θ5) for ten such location points. For each of the locations, the manipulator is next programmed to move by the same joint angles from its home or some other position. The above task was performed for three different runs for many such location points, and the positions to which the robot moved were recorded in all three runs. Corresponding to the three runs of the experimental robot reaching a particular location (pX, pY, pZ), the deviations from the actual values were converted to an approximate equivalent error in the joint angle values. The joint angle disturbances were numerically simulated from a set of 100 random error values to correspond as closely as possible to the deviations that manifest in pX, pY and pZ. The strength of the noise generator and the corresponding covariance matrix of the disturbance in the joint angle sensory data were noted to be slightly different for the different taught point positions. The experimental measurements of Table V were converted in this manner into equivalent simulated noisy joint angle values. The relative error, RE = (θactual − θerror)/θactual in percentage, that closely approximates the (pX, pY, pZ) values was recorded in each of the three runs and has been tabulated for the same 10 location points in Table VI.

Figure 8. (a), (b) and (c) show three axes of uncertainty ellipsoids for four different positions of the robot.

Table V. Motor position and joint angle values for different taught points

Taught    P1 (θ1 in      P2 (θ2 in        P3 (θ3 in       P4 (θ4 in        P5 (θ5 in
point     degrees)       degrees)         degrees)        degrees)         degrees)
1           81 (5.21)     −204 (−13.11)     447 (28.74)    −186 (−11.96)    −555 (−35.68)
2          515 (33.11)    −512 (−32.91)     237 (15.24)     125 (8.04)      −763 (−49.05)
3          −78 (−5.01)    −360 (−23.14)     885 (56.89)     555 (35.68)     −608 (−39.09)
4          832 (53.49)   −1202 (−77.27)     434 (27.90)     124 (7.97)      −657 (−42.23)
5          360 (23.14)     124 (7.97)       744 (47.83)     800 (51.43)     −640 (−41.14)
6         1251 (80.42)   −1360 (−87.43)     879 (56.51)    1104 (70.97)     −524 (−33.68)
7          651 (41.85)   −1291 (−82.99)     519 (33.36)    1024 (65.83)      −44 (−2.83)
8         1391 (89.42)   −1290 (−82.93)    −555 (−35.68)    645 (41.46)     −885 (−56.89)
9          −89 (−5.72)     320 (20.57)      315 (20.25)    −595 (−38.25)   −1245 (−80.04)
10        1204 (77.40)   −1360 (−87.43)      40 (2.57)    −1115 (−71.68)    −725 (−46.61)

The magnitude and direction specifications (with respect to the reference frame attached to the ground link) of the uncertainty ellipsoids in each of the three runs were noted for all the position points and then fused using the optimized weighting matrices obtained from FDD and FFA.

Figure 9. Volume of the uncertainty ellipsoids in three runs for ten different taught points.

Figure 9 shows the volume of the uncertainty ellipsoids of the position information obtained in each of the three runs for the same 10 points, compared with that of the fused one. The minimized volume of the latter is very clearly observed from the plots. For example, for position number 5, the uncertainty ellipsoids (in magnitude and orientation) for the three runs were as recorded in Table VII. The volume of the fused uncertainty ellipsoid was obtained as 2.0326 × 10−4 units. The columns of the orientation matrix represent unit vectors corresponding to the directions of the three principal axes of the uncertainty ellipsoid. The optimized weighting matrices used in the fusion were

W1 =   0.4142   0.0859  −0.0901
      −0.0568   0.2703   0.0637  ;
       0.0126   0.0130   0.3130

W2 =   0.3461   0.0099  −0.0099
      −0.0403   0.2930   0.0405
      −0.0207  −0.0213   0.3694

and

W3 =   0.2397  −0.0959   0.1000
       0.0971   0.4366  −0.1043  ,
       0.0081   0.0083   0.3175

satisfying the constraint of their sum being an identity matrix.
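As a numerical check, the position-5 weighting matrices quoted above do sum to the identity to the printed precision. The covariance combination rule below (for three independent runs) is our assumed reading of the matrix-weighted fusion, not a formula spelled out in the text, and the run covariances Q1–Q3 are hypothetical:

```python
import numpy as np

# Optimized weighting matrices for position 5, as quoted in the text
W1 = np.array([[ 0.4142,  0.0859, -0.0901],
               [-0.0568,  0.2703,  0.0637],
               [ 0.0126,  0.0130,  0.3130]])
W2 = np.array([[ 0.3461,  0.0099, -0.0099],
               [-0.0403,  0.2930,  0.0405],
               [-0.0207, -0.0213,  0.3694]])
W3 = np.array([[ 0.2397, -0.0959,  0.1000],
               [ 0.0971,  0.4366, -0.1043],
               [ 0.0081,  0.0083,  0.3175]])

# The constraint stated in the text: the weights sum to the identity
assert np.allclose(W1 + W2 + W3, np.eye(3), atol=5e-4)

def fuse(Qs, Ws):
    """Assumed matrix-weighted fusion of independent run covariances."""
    return sum(W @ Q @ W.T for Q, W in zip(Qs, Ws))

# Hypothetical covariances of the three runs
Q = [np.diag([0.9, 1.1, 1.0]), np.diag([1.2, 0.8, 1.0]), np.diag([1.0, 1.0, 1.1])]
Qf = fuse(Q, [W1, W2, W3])

# The fused uncertainty is smaller than any single run's along each axis
assert all(Qf[i, i] < min(q[i, i] for q in Q) for i in range(3))
```

The identity constraint keeps the fused estimate unbiased, while the matrix (rather than scalar) weights let each run contribute differently along each principal direction.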


Table VI. Simulated noise in joint angle values for the different taught points (three rows per taught point, one for each of the three runs)

Taught   pX        pY        pZ        RE (%)   RE (%)   RE (%)   RE (%)   RE (%)
point    (cm)      (cm)      (cm)      in θ1    in θ2    in θ3    in θ4    in θ5
1        19.8648    3.7808    8.9902    2.57     2.74    −2.37    −1.59     1.29
         19.9050    3.7198    9.0691    1.06    −0.52     1.82    −0.14     0.57
         19.8899    3.7913    9.0712    2.74     3.44     2.01     2.58     1.46
2        18.2891    9.5732    9.8919   −2.23     1.46     1.98     3.65     0.39
         18.5688    8.9898    9.8112    1.30    −1.75    −4.82    −3.17    −0.96
         18.5505    9.0168    9.8191    1.10    −1.66    −4.35     1.53     1.19
3        19.5873   −1.7728    6.5158   −4.20    −3.39    −2.51    −1.66    −2.90
         19.6607   −1.4507    6.5716    3.46     1.44    −1.01    −0.18     1.02
         19.7585   −1.4050    6.6698    4.08     3.28     2.07    −0.16    −0.11
4        17.1906    6.7884    8.9673    0.91     2.37     1.57    −3.38    −0.84
         17.1484    6.8374    8.8477    0.685    2.38    −4.36    −4.32     0.95
         16.9654    6.6211    8.9803   −2.42    −1.95     2.04     1.01    −0.81
5        14.7536   12.8555    6.9376   −1.61     3.29     3.05    −3.54     0.78
         14.6749   12.7422    6.8389    0.29    −2.83    −1.58     1.41     2.46
         14.6379   12.9745    6.9602   −3.23     0.86     2.90    −0.47    −0.23
6        11.8473   11.5393    5.9881   −0.42    −1.75    −1.45    −2.62    −2.35
         12.1919   11.1799    6.0003    1.73    −1.73    −1.89    −0.02    −2.19
         11.7627   11.9713    6.0945   −1.15     0.30     2.01    −2.16     0.98
7        17.9060    1.9226    7.7014   −1.64     0.20     0.35     4.59     1.14
         17.9303    1.5681    7.6050    1.40     0.34    −1.27    −0.31    −1.51
         18.0895    1.9217    7.6119    2.09     3.01    −1.48     0.77     1.22
8        10.9172   13.9706    8.0551   −0.29    −2.79     3.70     9.71    −1.89
         11.6622   13.5160    7.9759    3.93    −0.95     1.29     7.49    −9.84
         10.2814   14.6836    7.8784   −1.63     0.46    −2.24     6.79    −4.75
9        17.1975    6.5425    9.4145    1.64   −11.12   −12.69     1.03    −5.79
         17.7800    5.7037    9.4690   −7.99     9.79    −7.97    −0.38     4.17
         17.8823    5.6522    9.6950   −3.65    13.85    10.57    −0.23     3.54
10       12.4180   10.8955   10.1422   −1.73    −2.96    −3.99    −2.51     3.29
         12.6113   10.7109   10.0952   −0.15    −2.48     9.92    −8.03    −6.66
         13.1499   11.0056   10.2578    4.97     4.86    −3.65     7.48    −7.43

The weighting matrices determined were found to be different for different points. For position number 10, they were

W1 =   0.1062  −0.0953   0.0460
      −0.4142   0.5416  −0.1067  ;
       0.0095   0.0045   0.3340


Table VII. Shape and size of the uncertainty ellipsoid in three test runs

Test run   Volume (units)     Orientation matrix
First      10.6131 × 10−4      0.744  −0.431   0.511
                              −0.667  −0.526   0.527
                               0.042  −0.733  −0.679
Second     10.4495 × 10−4      0.745  −0.448   0.495
                              −0.666  −0.545   0.510
                               0.041  −0.709  −0.704
Third      10.6825 × 10−4      0.749  −0.425   0.508
                              −0.661  −0.528   0.533
                               0.042  −0.735  −0.676

W2 =   0.4275   0.0404  −0.0324
      −0.1823   0.2406   0.0542
      −0.0109  −0.0049   0.3381

and

W3 =   0.4662  −0.0550  −0.0136
      −0.2319   0.2179   0.0525  .
       0.0014   0.0004   0.3279

The parameters of the weighting matrices that minimize the variance of the fused information are observed to be strong functions of the joint angles, which vary in different positions. In a dynamic environment, manipulators mostly have to work with "computed points" based on external sensory information that is inherently uncertain. A neural network model that can learn the complex mapping between the weighting parameters and the sensory observational data (joint angles and others that may be present) would hence be highly efficient in minimizing the sensory uncertainties of manipulators and improving their accuracy.

4.2. NEURAL NETWORK MODEL FOR VISION GUIDED MANIPULATION

Autonomous systems deployed in unstructured, complex and dynamic environments necessarily have to involve the utilization of multisensor inputs. This is indeed a challenging research task requiring the resolution of several subtasks. Amongst these, one of the most important problems is to analyze and integrate the (in)dependent information from different sensor modalities. In this section, as a specific illustration, we consider the fusion of vision sensory information into the kinematic model of a manipulator for the precise guiding of the end-effector to its desired position and orientation. One common approach to the problem of vision-guided positioning involves "camera calibration" prior to the maneuver with the help of calibration points and estimation of the parameters of the model relating the three-dimensional physical space to the two-dimensional image space [22]. Chen et al. [2] discuss some inherent difficulties associated with such position control methods based on pre-maneuver calibration and present some experimental results in support of "camera space manipulation", which is more advantageous in many respects.
Figure 10. (a) and (b) show testing performance of fusion of synthetic data of vision sensor models.

It is felt that an important aspect relevant to all these types of
approaches is the uncertainty propagation to the terminal end-effector position from the errors in the parameter estimates of the camera model and the internal joint sensor model. For a specified Cartesian point, this will be a measure of the accuracy. This uncertainty has been evaluated and minimized using the developed fusion strategies. Corresponding to a fixed number of arbitrary positions in the workspace, faulty or erroneous sensory readings of the joint angle
sensor and camera sensor are generated by simulation using a vision camera model. Fifty measurements are repeated for each of the positions, giving rise to the input data set, with the 8 elements of W1 and W2 generated in the fusion process as the output or target data set. Hence, the architecture is implemented with four neurons in the input layer and eight neurons in the output layer. The number of hidden layers is determined after some trial and error as two, each with five neurons. The transfer functions for the neurons in the two hidden layers are taken to be tan-sigmoid, while the last output layer neurons are assigned linear transfer functions. This structure gives fairly good convergence for the given data set using different types of backpropagation training algorithms. Two-thirds of the generated data set are used to train the network while the remainder are used for testing the trained network. All the inputs in the training set are placed in one matrix, and the weights and biases of the network are updated only after the entire training set is applied to the network (batch training). The "Training" and "Testing" performance of the neural network has been shown in Figures 10(a) and (b), respectively, for the Gradient Descent training algorithm available in the Neural Network Toolbox of MATLAB. The "Training" performance shows the Mean Square Error (MSE) obtained in each epoch of the training process. The inputs of a testing data set are applied to the trained network and the outputs of the latter are compared with those in the testing set. The mean of the square of the errors for all the eight outputs of the network has been noted as the "mean square error of the net output". This is computed for each of the testing data sets and recorded as the "Testing" performance. The final MSE of the trained network near convergence is close to 0.0175 in about 900 epochs.
Since the outputs of the network are to denote the elements of the weighting matrices of the fusion process, the order of the MSE obtained in the training process can be stated to be sufficiently good for the given problem. This indicates that the training process is quite effective and that a trained network fed with noisy sensory data can predict the weighting matrices of the fusion algorithm with reasonable accuracy.
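A minimal numpy sketch of such a 4-5-5-8 batch-trained network is given below; the training data here are random stand-ins, not the actual fusion data set, and the plain batch gradient descent mirrors the MATLAB algorithm only in spirit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 4 noisy sensory inputs -> 8 weighting elements
X = rng.normal(size=(100, 4))
T = rng.normal(scale=0.1, size=(100, 8))

# 4-5-5-8 network: two tan-sigmoid hidden layers, linear output layer
sizes = [4, 5, 5, 8]
Ws = [rng.normal(scale=0.5, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(n) for n in sizes[1:]]

lr, losses = 0.1, []
for epoch in range(500):                       # batch gradient descent
    a1 = np.tanh(X @ Ws[0] + bs[0])            # hidden layer 1
    a2 = np.tanh(a1 @ Ws[1] + bs[1])           # hidden layer 2
    y = a2 @ Ws[2] + bs[2]                     # linear output layer
    err = y - T
    losses.append((err**2).mean())             # MSE over the whole batch
    # Backpropagation (derivative of tanh is 1 - tanh^2)
    d3 = 2.0 * err / err.size
    d2 = (d3 @ Ws[2].T) * (1.0 - a2**2)
    d1 = (d2 @ Ws[1].T) * (1.0 - a1**2)
    for i, (a, d) in enumerate([(X, d1), (a1, d2), (a2, d3)]):
        Ws[i] -= lr * (a.T @ d)                # one update per full batch
        bs[i] -= lr * d.sum(axis=0)

assert losses[-1] < losses[0]                  # training error decreases
```

Updating the weights only once per pass over the full input matrix is what distinguishes this batch scheme from incremental (per-sample) training.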

5. Conclusions

Future humanoid robots will have to work in a multisensor framework, where the fused information needs to be represented with minimized uncertainty. The level up to which the minimization would be significant again depends on the specific application and the sophistication of the classifier handling the fused information. This paper has proposed and developed a hybrid sensor fusion classifier which consists of three levels of fusion – Geometric Fusion, the Fission Fusion Approach (FFA) and Fusion in the Differential Domain (FDD). These are directed towards the objective of minimizing uncertainties associated with any type of information that has been acquired through multiple sensors or sensory units. The essence of the FFA technique is based on "fissioning" (dimensionality reduction of) the covariance matrix and considering each dimension of information separately for fusion.


The FDD processing technique uses feedback from the fusion processor (i.e. the fused information) to each of the individual sensory data streams. The formulation for this has been presented through a recursive iteration process that predicts correction terms for the different sensory information. By fusing these terms in the differential domain using the same weighting matrices, the fused information is represented with minimized uncertainty in a recursive way. FDD has been applied to improve the sensory characteristics of a set of sensors having different degrees of nonlinearity. The results presented are significant because, for the meaningful interpretation of information from multiple sensors, each of the individual sensor models has to be accurately developed with respect to its uncertainty characterization. Only then can conventional fusion tools be applied to fuse information with quantifiable accuracy. The repeatability characteristics of a configured robot have been experimentally analyzed and improved through the application of FFA. This enables the joint space to be redefined more precisely. Also, the weighting matrices that minimize the variance of the fused information are observed to be strong functions of the joint angles. These observations motivated the attempt to formulate a neural network architecture for the fusion algorithm applied to manipulation tasks of a robot guided by an external vision sensor. The trained network is able to predict the components of the optimal fusion weighting matrices directly from the noisy sensory data applied as inputs. The "Training" and "Testing" performances have been shown for a particular batch-training algorithm and the results have proved very effective. As future work, fuzzy-rough set tools are being incorporated in the fusion classifier developed here for handling large bodies of sensory information having redundancy and contradictions.

References

1. Barshan, B. and Durrant-Whyte, H. F.: Inertial navigation systems for mobile robots, IEEE Trans. Robotics Automat. 11(3) (1995), 328–342.
2. Chen, W. Z., Korde, U. A., and Skaar, S. B.: Position control experiments using vision, Internat. J. Robotics Res. 13(3) (1994), 199–208.
3. Dallaway, J. L., Jackson, R. D., and Gosine, R. G.: An interactive robot control environment for rehabilitation applications, Robotica 11 (1993), 541–551.
4. Di, X., Ghosh, B. K., Ning, X., and Tzyh, J. T.: Intelligent robotic manipulation with hybrid position/force control in an uncalibrated workspace, in: Proc. IEEE Internat. Conf. on Robotics and Automation, May 1998, pp. 1671–1676.
5. Dodd, T., Bailey, A., and Harris, C. J.: A data driven approach to sensor modeling, estimation, tracking and data fusion, in: M. Bedworth and J. O'Brien (eds), EuroFusion 98, Internat. Conf. on Data Fusion, 1998, pp. 103–111.
6. Duda, R. O., Hart, P. E., and Stork, D. G.: Pattern Classification, 2nd edn, Wiley, New York, 2001.
7. Durrant-Whyte, H. F.: Sensor models and multisensor integration, Internat. J. Robotics Res. 7(6) (1988), 97–113.
8. Feng, L. and Brandt, R. D.: An optimal control approach to robust control of robot manipulators, in: IEEE Internat. Conf. on Control Applications, September 1996, pp. 31–36.
9. Golfarelli, M., Maio, D., and Rizzi, S.: Correction of dead-reckoning errors in map building for mobile robots, IEEE Trans. Robotics Automat. 17(1) (2001), 37–47.
10. Hall, D. L. and Llinas, J.: An introduction to multisensor data fusion, Proc. IEEE 85(1) (1997), 6–23.
11. Hayati, S.: Robot arm geometric link parameter estimation, in: Proc. of the 22nd IEEE Conf. on Decision and Control, December 1983.
12. Hirzinger, G. et al.: Advances in robotics: The DLR experience, Internat. J. Robotics Res. 18(11) (1999), 1064–1087.
13. Klein, L. A.: Sensor and Data Fusion Concepts and Applications, SPIE Publications, San Jose, CA, USA, 1993.
14. Langlois, D., Elliott, J., and Croft, E. A.: Sensor uncertainty management for an encapsulated logical device architecture. Part II: A control policy for sensor uncertainty, in: American Control Conference, June 2001, pp. 4288–4293.
15. Lopes, L. S. and Connell, J. H.: Sentience in robots: Application and challenges, IEEE Intelligent Systems 16(5) (2001), 66–69.
16. Mao-Lin, N. and Meng, J. E.: Decentralized control of robot manipulators with couplings and uncertainties, in: American Control Conference, 28 June 2000, pp. 3326–3330.
17. Nakamura, Y.: Advanced Robotics – Redundancy and Optimization, Addison-Wesley, Reading, MA, 1990.
18. Nandi, G. C. and Mitra, D.: Development of a sensor fusion strategy for robotic application based on geometric optimization, J. Intelligent Robotic Systems 35 (2002), 171–191.
19. Pomerleau, D. A.: Neural Network for Mobile Robot Guidance, Kluwer Academic, Boston, 1993.
20. Smith, R., Self, M., and Cheeseman, P.: Estimating uncertain spatial relationships in robotics, in: Autonomous Robot Vehicles, Springer, Berlin, 1990, pp. 167–193.
21. Sung-Bae, C.: Pattern classification for biological data mining, in: Ghosh and Pal (eds), Soft Computing Approach to Pattern Recognition and Image Processing, Series in Machine Perception and Artificial Intelligence, Vol. 53, 2002.
22. Tsai, R. Y.: Synopsis of recent progress on camera calibration for 3-D machine vision, in: O. Khatib et al. (eds), The Robotics Review, MIT Press, Cambridge, MA, 1989, pp. 146–159.
23. Von Collani, Y., Ferch, M., Zhang, J., and Knoll, A.: A general learning approach to multisensor based control using statistic indices, in: Proc. of IEEE Internat. Conf. on Robotics and Automation, April 2000, pp. 3221–3226.
24. Yager, R. R.: A general approach to the fusion of imprecise information, Internat. J. Intelligent Systems 12 (1997), 1–29.


GOODMAN'S - Springer Link
relation (evidential support) in “grue” contexts, not a logical relation (the ...... Fitelson, B.: The paradox of confirmation, Philosophy Compass, in B. Weatherson.

Bubo bubo - Springer Link
a local spatial-scale analysis. Joaquın Ortego Æ Pedro J. Cordero. Received: 16 March 2009 / Accepted: 17 August 2009 / Published online: 4 September 2009. Ó Springer Science+Business Media B.V. 2009. Abstract Knowledge of the factors influencing

Quantum Programming - Springer Link
Abstract. In this paper a programming language, qGCL, is presented for the expression of quantum algorithms. It contains the features re- quired to program a 'universal' quantum computer (including initiali- sation and observation), has a formal sema

BMC Bioinformatics - Springer Link
Apr 11, 2008 - Abstract. Background: This paper describes the design of an event ontology being developed for application in the machine understanding of infectious disease-related events reported in natural language text. This event ontology is desi

Isoperimetric inequalities for submanifolds with ... - Springer Link
Jul 23, 2011 - if ωn is the volume of a unit ball in Rn, then. nnωnVol(D)n−1 ≤ Vol(∂D)n and equality holds if and only if D is a ball. As an extension of the above classical isoperimetric inequality, it is conjectured that any n-dimensional c

Probabilities for new theories - Springer Link
where between 0 and r, where r is the prior probability that none of the existing theories is ..... theorist's internal programming language"(Dorling 1991, p. 199).

A Process Semantics for BPMN - Springer Link
Business Process Modelling Notation (BPMN), developed by the Business ..... In this paper we call both sequence flows and exception flows 'transitions'; states are linked ...... International Conference on Integrated Formal Methods, pp. 77–96 ...

Unsupervised Learning for Graph Matching - Springer Link
Apr 14, 2011 - Springer Science+Business Media, LLC 2011. Abstract Graph .... tion as an integer quadratic program (Leordeanu and Hebert. 2006; Cour and Shi ... computer vision applications such as: discovering texture regularity (Hays et al. .... fo

Candidate quality - Springer Link
didate quality when the campaigning costs are sufficiently high. Keywords Politicians' competence . Career concerns . Campaigning costs . Rewards for elected ...

Mathematical Biology - Springer Link
Here φ is the general form of free energy density. ... surfaces. γ is the edge energy density on the boundary. ..... According to the conventional Green theorem.

Artificial Emotions - Springer Link
Department of Computer Engineering and Industrial Automation. School of ... researchers in Computer Science and Artificial Intelligence (AI). It is believed that ...

Property Specifications for Workflow Modelling - Springer Link
workflow systems precisely, and permit the application of model checking to auto- ... diate interactions between the traveller who wants to buy airline tickets and the ... also a specification language, and to the best of our knowledge there is ...