
2005 International Conference on Control and Automation (ICCA2005) June 27-29, 2005, Budapest, Hungary

Bayesian Control Limits for Statistical Process Monitoring

Tao Chen, Julian Morris and Elaine Martin

Abstract— This paper presents a Bayesian approach, based on infinite Gaussian mixtures, to the calculation of control limits for a multivariate statistical process control scheme. Traditional approaches to calculating control limits are based on the assumption that the process data follow a Gaussian distribution. However, this assumption is not necessarily satisfied in complex dynamic processes. A novel probability density estimation method, the infinite Gaussian mixture model (GMM), is proposed to address the limitations of the existing approaches. The infinite GMM is introduced as an extension of the finite GMM under a Bayesian framework, and it can be efficiently implemented using the Markov chain Monte Carlo method. Given the estimated probability density, control limits are calculated using the bootstrap algorithm. The proposed approach is demonstrated through its use for the monitoring of a simulated continuous chemical process.

I. INTRODUCTION

The on-line monitoring of the performance of manufacturing processes is necessary to help ensure process safety and the delivery of high-quality, consistent product. In recent years a number of multivariate statistical projection techniques, including principal component analysis (PCA) and partial least squares (PLS), have been proposed for the development of statistical process performance monitoring schemes [3], [8], [10]. It is assumed that the normal behavior of the process can be characterised by the first few principal components or latent variables extracted from historical data collected when the process was operating under normal conditions. Hence the definition of the normal region is critical to the successful implementation of a process monitoring scheme.

Traditionally the control limits, that is, the thresholds used to define the normal operating region, are constructed on the assumption that the process data are drawn from some well characterised probability distribution function (pdf), such as the Gaussian distribution. (Strictly speaking, the paper estimates the pdf of the principal components or latent variables, not of the original process data.) This is the fundamental assumption when developing the limits for Hotelling's T² [7], and thus the pdf of the nominal data is restricted to the form of a simple parametric probability function, which may not be a valid approximation for data generated from complex manufacturing processes. Semi-parametric or non-parametric probability density estimation may give rise to statistically more appropriate control limits.

The authors are with the Centre for Process Analytics and Control Technology, School of Chemical Engineering and Advanced Materials, University of Newcastle, Newcastle upon Tyne, NE1 7RU, United Kingdom. Emails: [email protected], [email protected], [email protected]



A number of probability density estimation techniques have been reported in the literature for the calculation of control limits, for example kernel density estimation (KDE) [1], [8]. The performance of KDE depends on the smoothing parameter of the kernel functions, which can be estimated from the data using cross-validation. However, KDE is not suitable for modeling high-dimensional data because of the so-called curse of dimensionality: with increasing dimensionality, the data points become more sparsely distributed in the data space. A semi-parametric model may alleviate this problem, and in this paper mixtures of basic probability functions, such as the Gaussian mixture model (GMM), are considered. Recently the GMM has been successfully applied in process monitoring and fault detection [2].

This paper increases the number of mixtures in a GMM to infinity, motivated by the theoretical fact that an infinite GMM is capable of approximating any probability density function to any accuracy. The infinite GMM removes one of the obstacles to applying the GMM to practical problems, namely the selection of the number of mixtures. Furthermore, with the rapid increase in computational power, the infinite GMM can be efficiently implemented using the Markov chain Monte Carlo (MCMC) method under a Bayesian framework. In addition, MCMC avoids the local-optimum problem associated with maximum likelihood parameter estimation. After the probability density function has been estimated, the control limits are obtained using the bootstrap. The proposed approach is demonstrated through its application to the monitoring of a simulated continuous chemical process.

II. GAUSSIAN MIXTURE MODEL

A brief introduction to the finite Gaussian mixture model, whose mixing weights are given a Dirichlet prior, is first presented, prior to deriving the infinite Gaussian mixture model obtained when the number of mixtures tends to infinity.

A. Finite Gaussian Mixture Model

The probability density function of data, x = {x_1, ..., x_n}, can be modeled by a finite mixture of Gaussian distributions with k components:

p(x \mid \mu, s, \pi) = \sum_{j=1}^{k} \pi_j \, G(\mu_j, s_j^{-1})    (1)

where μ = {μ_1, ..., μ_k} are the means, s = {s_1, ..., s_k} are the precisions (inverse variances), π = {π_1, ..., π_k} are the mixing weights (which must be positive and sum to one), and G is a Gaussian distribution. For simplicity, the data are assumed to be scalar; the extension to the multivariate case is discussed later.
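For illustration, eq. (1) is straightforward to evaluate numerically. The following Python sketch is our own (the function name and the example parameter values are hypothetical, not from the paper):

```python
import numpy as np

def gmm_density(x, mu, s, pi):
    """Finite Gaussian mixture density of eq. (1) for scalar data.

    mu -- component means, s -- component precisions (inverse variances),
    pi -- mixing weights (positive, summing to one).
    """
    x = np.atleast_1d(np.asarray(x, dtype=float))[:, None]   # shape (n, 1)
    mu = np.asarray(mu, dtype=float)[None, :]                # shape (1, k)
    s = np.asarray(s, dtype=float)[None, :]
    comp = np.sqrt(s / (2 * np.pi)) * np.exp(-0.5 * s * (x - mu) ** 2)
    return comp @ np.asarray(pi, dtype=float)                # weighted sum over k

# e.g. a two-component mixture evaluated at a few points
print(gmm_density([0.0, 1.0], mu=[0.0, 3.0], s=[1.0, 4.0], pi=[0.7, 0.3]))
```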

The classical approach to estimating the GMM parameters, (μ, s, π), is to maximize the data likelihood using the expectation-maximization (EM) algorithm [4]. The EM algorithm is guaranteed to converge to a local maximum, with the quality of that maximum being heavily dependent on the random initialization of the algorithm. In contrast, the Bayesian approach defines prior distributions over the GMM parameters, and inference is performed with respect to the posterior probability of the parameters. Rather than seeking a single "optimal" estimate of the parameters, Bayesian inference uses the Monte Carlo method to generate samples from the posterior distribution, and by averaging over the Monte Carlo samples the problem of local maxima can be overcome.

This paper discusses mixture models under a Bayesian framework, mainly following the formulation given in [11]. The priors and conditional posterior distributions are defined for the component means, the precisions, and the mixing weights. In general, the priors are specified via "hyper-parameters", which are themselves given higher-level priors. This apparently complex hierarchical structure has been justified in the literature on Bayesian inference [9], [11], [13].

Component Means

The component means are given Gaussian priors:

p(\mu_j \mid \lambda, r) \sim G(\lambda, r^{-1})    (2)

where the prior mean, λ, and prior precision, r, are hyper-parameters common to all components. The hyper-parameters are themselves given vague Gaussian and Gamma hyper-priors:

p(\lambda) \sim G(\mu_x, \sigma_x^2)    (3)

p(r) \sim Ga(1, \sigma_x^{-2}) \propto r^{-1/2} \exp(-r \sigma_x^2 / 2)    (4)

where μ_x and σ_x² are the mean and variance of the data points. The shape parameter of the Gamma prior is set to unity, corresponding to a very vague distribution. These conjugate priors guarantee that the posteriors are also Gaussian, from which Monte Carlo samples can easily be drawn. To make inferences with respect to the component means, the conditional posterior distribution for μ_j is obtained by multiplying the likelihood (eq. (1)) by the prior (eq. (2)), resulting in a Gaussian distribution:

p(\mu_j \mid c, x, s_j, \lambda, r) \sim G\left( \frac{\bar{x}_j n_j s_j + \lambda r}{n_j s_j + r}, \; \frac{1}{n_j s_j + r} \right)    (5)

where x̄_j and n_j are the mean and the number of the data points belonging to mixture j, respectively. Similarly, the conditional posteriors for λ and r can be obtained by multiplying their likelihoods and hyper-priors, to enable Monte Carlo sampling.

Component Precisions

The component precisions are given Gamma priors:

p(s_j \mid \beta, \omega) \sim Ga(\beta, \omega^{-1})    (6)

where β and ω are again hyper-parameters, with priors given by:

p(\beta^{-1}) \sim Ga(1, 1)    (7)

p(\omega) \sim Ga(1, \sigma_x^2)    (8)

The conditional posterior precisions are obtained by multiplying likelihood and prior:

p(s_j \mid c, x, \mu_j, \beta, \omega) \sim Ga\left( \beta + n_j, \; \frac{\beta + n_j}{\omega\beta + \sum_{i: c_i = j} (x_i - \mu_j)^2} \right)    (9)

Here c = {c_i, i = 1, ..., n} is introduced to indicate that data point x_i belongs to mixture c_i. The conditional posteriors for the hyper-parameters, β and ω, can also be obtained by multiplying the respective likelihoods and hyper-priors.

Mixing Weights

As in the general case of mixture models [9], [13], the mixing weights are given a Dirichlet prior [6] with concentration parameter α/k:

p(\pi_1, \cdots, \pi_k \mid \alpha) \sim \mathrm{Dirichlet}(\alpha/k, \cdots, \alpha/k)    (10)

Sampling of the mixing weights can be realized indirectly by sampling the indicators, whose probability is conditional on the mixing weights:

p(c_1, \cdots, c_n \mid \pi_1, \cdots, \pi_k) = \prod_{j=1}^{k} \pi_j^{n_j}    (11)

By integrating out the mixing weights, using the properties of the Dirichlet integral, the prior for the indicators depends only on α. Furthermore, to use Gibbs sampling for the discrete indicators, c_i, the conditional prior for a single indicator given all the other indicators is required, and can be obtained as follows:

p(c_i = j \mid c_{-i}, \alpha) = \frac{n_{-i,j} + \alpha/k}{n - 1 + \alpha}    (12)

where the subscript −i indicates all indices except i, and n_{-i,j} is the number of data points, excluding x_i, that belong to mixture j. The posteriors are given by the multiplication of the likelihood and the prior:

p(c_i = j \mid c_{-i}, \mu_j, s_j, \alpha) \propto \frac{n_{-i,j} + \alpha/k}{n - 1 + \alpha} \, s_j^{1/2} \exp(-s_j (x_i - \mu_j)^2 / 2)    (13)

Finally, the concentration parameter of the Dirichlet distribution, α, is given an inverse Gamma prior, p(α^{-1}) ∼ Ga(1, 1).
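The conditional posteriors above translate directly into Gibbs updates. The sketch below is our own illustration for scalar data (all names are hypothetical); note that it reads Ga(a, m) in eq. (9) as a Gamma with shape a and mean m, an assumption on our part, converted to numpy's shape/scale convention:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_means(x, c, s, lam, r, k):
    """Draw each component mean from its conditional posterior, eq. (5)."""
    mu = np.empty(k)
    for j in range(k):
        xj = x[c == j]
        prec = len(xj) * s[j] + r                  # posterior precision n_j s_j + r
        mean = (xj.sum() * s[j] + lam * r) / prec  # (xbar_j n_j s_j + lam r)/(n_j s_j + r)
        mu[j] = rng.normal(mean, np.sqrt(1.0 / prec))
    return mu

def sample_precisions(x, c, mu, beta, omega, k):
    """Draw each component precision from eq. (9).

    Ga(a, m) is read as shape a with mean m (our assumption), so the
    numpy scale parameter is m / a.
    """
    s = np.empty(k)
    for j in range(k):
        xj = x[c == j]
        a = beta + len(xj)
        m = a / (omega * beta + np.sum((xj - mu[j]) ** 2))
        s[j] = rng.gamma(a, m / a)
    return s

def sample_indicators(x, c, mu, s, alpha, k):
    """Draw each indicator from eq. (13); the factor 1/(n-1+alpha) cancels."""
    for i in range(len(x)):
        counts = np.bincount(np.delete(c, i), minlength=k)       # n_{-i,j}
        lik = np.sqrt(s) * np.exp(-0.5 * s * (x[i] - mu) ** 2)   # per-component likelihood
        p = (counts + alpha / k) * lik
        c[i] = rng.choice(k, p=p / p.sum())
    return c
```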

Monte Carlo Sampling

The posteriors for the parameters and hyper-parameters were defined in the preceding section, and so Markov chain Monte Carlo samples can be generated iteratively to approximate these posteriors. For a finite mixture of Gaussians, Gibbs sampling proceeds by updating the hyper-parameters and parameters iteratively as follows:

1) Sample α; sample c given the new α.
2) Sample λ and r; sample μ given the new λ and r.
3) Sample β and ω; sample s given the new β and ω.
4) Repeat steps 1–3 until convergence, or until the maximum number of iterations is reached.

Convergence can be diagnosed by examining the auto-correlation of the MCMC samples over the iterations. After convergence, "pseudo-independent" samples (that is, samples with low auto-correlation coefficients) are obtained by retaining only one sample in every few iterations.

B. Infinite Gaussian Mixtures

The discussion so far has been restricted to a finite number of mixtures. However, the selection of the number of mixtures is a real issue in practical applications. The likelihood of the data is maximized when the number of mixtures equals the number of training data points, which results in "over-fitting". One solution is to use validation or cross-validation, which selects the number of mixtures by maximizing the likelihood over the training and validation data sets simultaneously. Bayesian methodology pushes this problem to the limit: inference is performed with an infinite number of mixtures.

The computation with infinite mixtures remains finite through the use of "represented" and "unrepresented" mixtures. Represented mixtures are those that have training data associated with them, whilst unrepresented mixtures, of which there are infinitely many, have no training data associated with them. By using unrepresented mixtures, the task of selecting the number of mixtures is avoided. With the exception of the indicators, the conditional posteriors in the infinite limit are obtained, for all the other model parameters and hyper-parameters, by substituting k_rep, the number of represented mixtures, for k in the above equations. For the indicators, letting k → ∞ in eq. (12), the conditional prior reaches the limits:

n_{-i,j} > 0: \quad p(c_i = j \mid c_{-i}, \alpha) = \frac{n_{-i,j}}{n - 1 + \alpha}
\text{other}: \quad p(c_i = \text{other} \mid c_{-i}, \alpha) = \frac{\alpha}{n - 1 + \alpha}    (14)
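To make eq. (14) concrete, the following sketch (our own illustration, anticipating the treatment of unrepresented mixtures described next, in the style of Neal's auxiliary-component scheme [9]; all names and the Gamma convention are our assumptions) updates a single indicator, letting a few prior draws stand in for the infinitely many unrepresented mixtures:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_indicator_infinite(i, x, c, mu, s, alpha, lam, r, beta, omega, m=3):
    """Gibbs update of indicator c[i] under the infinite model.

    Represented mixtures get prior weight n_{-i,j} (eq. (14)); the
    unrepresented ones, with collective weight alpha, are stood in for by
    m auxiliary components drawn from the priors of eqs. (2) and (6).
    """
    others = np.delete(c, i)
    labels, counts = np.unique(others, return_counts=True)  # represented mixtures
    mu_aux = rng.normal(lam, np.sqrt(1.0 / r), size=m)       # prior draws, eq. (2)
    s_aux = rng.gamma(beta, (1.0 / omega) / beta, size=m)    # eq. (6), shape/mean reading
    mu_all = np.concatenate([mu[labels], mu_aux])
    s_all = np.concatenate([s[labels], s_aux])
    w = np.concatenate([counts.astype(float), np.full(m, alpha / m)])
    lik = np.sqrt(s_all) * np.exp(-0.5 * s_all * (x[i] - mu_all) ** 2)
    p = w * lik
    j = rng.choice(len(p), p=p / p.sum())
    if j < len(labels):
        return labels[j], None            # joins an existing represented mixture
    a = j - len(labels)                   # an auxiliary component is accepted;
    return -1, (mu_aux[a], s_aux[a])      # the caller adds it to the represented set
```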

These priors allow the indicators to become associated with unrepresented mixtures, so that there is a finite number of represented mixtures and an infinite number of unrepresented mixtures. Since they have no training data, the infinitely many unrepresented mixtures have identical distributions, as defined by the prior. There is therefore no need to differentiate between them, and they can be approximated by a finite number of mixtures sampled from the prior. As for the finite mixtures, the posteriors for the indicators are given by:

n_{-i,j} > 0: \quad p(c_i = j \mid c_{-i}, \mu_j, s_j, \alpha) \propto \frac{n_{-i,j}}{n - 1 + \alpha} \, s_j^{1/2} \exp(-s_j (x_i - \mu_j)^2 / 2)
\text{other}: \quad p(c_i = \text{other} \mid c_{-i}, \lambda, r, \beta, \omega, \alpha) \propto \frac{\alpha}{n - 1 + \alpha} \int p(x_i \mid \mu_j, s_j) \, p(\mu_j, s_j \mid \lambda, r, \beta, \omega) \, d\mu_j \, ds_j    (15)

The likelihood with respect to the unrepresented mixtures is an integral over the prior for the mixture parameters. This integral is not analytically tractable, but Neal [9] proposed an efficient Monte Carlo sampling strategy to approximate it, allowing the number of represented mixtures to vary with the data over the MCMC iterations. The complete sampling procedure for the infinite mixture of Gaussians is therefore similar to that for finite mixtures, except for the sampling of the indicators.

C. Multivariate Generalization

The extension to multivariate observations is straightforward. The means and precisions become vectors and matrices respectively, and their prior and posterior distributions become multivariate Gaussian and Wishart. Similar modifications apply to the hyper-parameters and their priors. Alternatively, diagonal covariance matrices can be chosen for the Gaussian mixtures. These ignore the correlation between the variables, a limitation that can be largely overcome by using more mixtures than would be required with full covariance matrices. The use of diagonal covariance matrices considerably simplifies the training of the mixture models and reduces the number of parameters: for D-dimensional data, a full covariance matrix introduces D(D + 1)/2 free parameters, whereas a diagonal one requires only D parameters. Since selecting the appropriate number of mixtures is not an issue for infinite Gaussian mixtures, diagonal covariance matrices are adopted in this study.

III. CONTROL LIMITS

The objective of statistical process performance monitoring is to detect changes in the behavior of a manufacturing process. Once a probability distribution has been developed that reflects normal operation, control limits are required to detect any departure of the process from its standard behavior. For example, a 99% control limit defines a region that encompasses 99% of the nominal process data; a process is classified as abnormal when new data fall outside this region. When the probability distribution p(x | μ, s, π) is available, the 99% control limit can be expressed as a likelihood threshold, t, satisfying the following integral:

\int_{x: p(x) > t} p(x \mid \mu, s, \pi) \, dx = 0.99    (16)

With a Gaussian mixture model this integral is not analytically tractable, and therefore the threshold cannot be obtained directly. One possible solution is to approximate the integral by generating Monte Carlo samples from the estimated probability distribution. The problem with this method is that, because the mixture parameters are averaged over a number of MCMC iterations, the resulting probability density is quite smooth and has a heavy tail; confidence limits based on such samples may fail to identify some abnormal process data. A more reasonable approach is to generate samples from the nominal process data via the bootstrap [8]. For each set of bootstrap samples, a control limit is calculated following the algorithm described above. This procedure is repeated a number of times to obtain averaged control limits.
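The bootstrap procedure just described might be sketched as follows. This is our own illustration: `fit_density` stands in for fitting the infinite GMM of Section II to a resample and returning its density function, and the empirical quantile is one plausible way of approximating the threshold t of eq. (16) from a fitted density:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_control_limit(x_nominal, fit_density, n_boot=50, coverage=0.99):
    """Averaged likelihood threshold t of eq. (16) via the bootstrap.

    fit_density(sample) is assumed to return a callable p(.) estimating the
    pdf of the resampled nominal data. For each resample, t is approximated
    by the empirical (1 - coverage) quantile of the fitted density evaluated
    at the resample itself, so that roughly `coverage` of the nominal points
    satisfy p(x) > t.
    """
    n = len(x_nominal)
    limits = []
    for _ in range(n_boot):
        sample = x_nominal[rng.integers(0, n, size=n)]  # resample with replacement
        p = fit_density(sample)
        limits.append(np.quantile(p(sample), 1.0 - coverage))
    return float(np.mean(limits))
```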

IV. CASE STUDY

This section applies the proposed approach to the monitoring of the Tennessee Eastman process, presented in [5] as a benchmark for developing advanced process control and monitoring techniques. The process consists of a set of unit operations (reactor, separator, stripper and compressor) with two simultaneous gas-liquid exothermic reactions and two by-product reactions. It can be operated under different modes according to production requirements. In this study the simulation software is run with a decentralized control strategy [12]. The process has 12 manipulated variables and 41 measurements; however, some of the quality measurements, such as product concentrations, are only available infrequently in an industrial-scale plant. Hence only 22 measurements, plus the 12 manipulated variables, were used to build the process model. The sampling interval was set to 0.02 h.

Initially the process is run for 20 hours under normal operating conditions, giving 1000 data points, of which 500 were randomly selected to define the nominal operating region. The remaining 500 data points were reserved for investigating the false alarm rate. A number of faults, summarized in Table I, are then introduced into the process. According to initial investigations, fault "IDV(1)" is relatively easy to identify, whereas "IDV(12+15)" is the most challenging. For each fault scenario, 300 data points were generated.

TABLE I
PROCESS FAULTS.

Case        | Fault                                      | Type
IDV(1)      | A/C feed ratio                             | Step
IDV(10)     | C feed temperature                         | Random variation
IDV(14)     | Reactor cooling water valve                | Sticking
IDV(12)     | Condenser cooling water inlet temperature  | Random variation
IDV(15)     | Condenser cooling water valve              | Sticking
IDV(12+15)  | The combination of IDV(12) and IDV(15)     | —

PCA is performed on the nominal data set, reducing the 34 variables to 5 principal components, which explain 44.4% of the total variance. The test data sets, under both normal and faulty operation, are then projected onto these principal components to obtain the score vectors. Intuitively it seems that the nominal data are collected while the process is run with small variations about steady state,

which may justify the Gaussian assumption in Hotelling's T². However, experiments show that this assumption is problematic.

100 MCMC iterations were used to sample the infinite GMM. Fig. 1 shows that the MCMC samples reach equilibrium after approximately 10 iterations, resulting in of the order of 25–35 represented mixtures. The predictive probability of the test data is calculated by averaging over 20 MCMC samples taken from the final 60 iterations. The bootstrap procedure is repeated 50 times to obtain a robust estimate of the 99% control limit.

Fig. 1. Number of represented mixtures over MCMC iterations.

As an example, Fig. 2 shows the monitoring results when the fault "IDV(12+15)" is introduced at sample point 500. It can be seen that Hotelling's T² is not sensitive to this fault in its initial stage: the process is still identified as normal before sample point 560, which is equivalent to 0.4 h after the fault occurs. In contrast, the likelihood-based control limit using the infinite GMM is capable of detecting the fault much earlier.

TABLE II
NUMBER OF IDENTIFIED FAULTY DATA POINTS OVER TIME, STARTING FROM WHEN THE FAULT "IDV(12+15)" IS INTRODUCED.

Model          | ≤ 0.5h | ≤ 1h | ≤ 2h | ≤ 3h | ≤ 6h
Hotelling's T² | 5      | 5    | 24   | 71   | 176
Inf. GMM       | 10     | 17   | 55   | 105  | 229

Table II provides a clear illustration that the infinite GMM can correctly identify more faulty data points.
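For concreteness, the monitoring statistic plotted in the lower chart of Fig. 2 can be formed as in the sketch below (our own illustration; it reuses the hypothetical gmm_density function from the earlier sketch, and for the infinite model we assume mixing weights proportional to the occupation numbers of the represented mixtures):

```python
import numpy as np

def monitoring_statistic(x_new, mcmc_samples):
    """Negative log predictive likelihood of new observations.

    mcmc_samples -- list of (mu, s, pi) parameter tuples retained after
    burn-in and thinning; the predictive density averages eq. (1) over
    these draws. Assumes gmm_density from the earlier sketch is in scope.
    """
    dens = np.mean([gmm_density(x_new, mu, s, pi)
                    for (mu, s, pi) in mcmc_samples], axis=0)
    return -np.log(dens)

# A point is flagged as abnormal when the statistic exceeds the limit, i.e.
# when monitoring_statistic(x_new, samples) > -np.log(t_99), with t_99 the
# bootstrap threshold of Section III.
```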

Fig. 2. Process monitoring charts. Top: Hotelling's T² with its 99% limit; below: negative log likelihood under the infinite GMM with its 99% limit.

The unsatisfactory performance of the T² statistic is due to its Gaussian assumption, which may be a poor approximation even when the nominal process data are assumed to arise from small random variations around steady state. Table III examines the two types of error in process monitoring, that is, the false alarm rate and the missed error rate, under the various fault scenarios. For both Hotelling's T² and the infinite GMM, the false alarm rates are negligible. The missed error rate for IDV(1), which is easy to identify, is also low for both methods. However, when tested on the more challenging faults, the infinite GMM, which does not rely on the Gaussian assumption, is consistently superior to Hotelling's T² in terms of missed error rates.

TABLE III
MONITORING ERROR RATES (%) IN VARIOUS FAULT SCENARIOS.

Model          | False Alarm | Missed Error: IDV(1) | IDV(10) | IDV(14) | IDV(12+15)
Hotelling's T² | 0.6         | 2.3                  | 27.7    | 17.7    | 41.3
Inf. GMM       | 1.0         | 1.0                  | 21.7    | 10.3    | 23.7

V. CONCLUSIONS

This paper introduces infinite Gaussian mixture modeling as a tool for calculating control limits for multivariate statistical process monitoring. Although intensive research has focused on extracting information from multivariate process data for performance monitoring, many algorithms still rely on the Gaussian assumption to build the traditional Hotelling's T² control limits from the extracted principal components or latent variables. Without the Gaussian assumption, the infinite Gaussian mixture model provides a Bayesian approach to estimating the probability density function of the nominal process data, thereby enabling the calculation of control limits based on the bootstrap. The infinite Gaussian mixture model can be efficiently implemented using Markov chain Monte Carlo sampling. The proposed framework was evaluated on a simulated industrial continuous process, with promising results.

One advantage of the infinite Gaussian mixture model is that the number of represented mixtures is inferred automatically from the training data set. Because of the use of "unrepresented" mixtures, as well as the averaging of parameters over a number of Monte Carlo samples, the resulting probability density function is quite smooth. This smoothness reflects the uncertainty in the underlying probability function given limited data, and enables the model to adapt to new data rapidly. On-going work is focused on the on-line adaptation of the infinite Gaussian mixture model.

VI. ACKNOWLEDGMENT

The authors acknowledge the constructive suggestions from Radford Neal on infinite Gaussian mixtures. T. Chen would like to acknowledge the financial support of the EPSRC projects KNOW-HOW (GR/R19366/01) and Chemicals Behaving Badly II (GR/R43853/01), and of the UK ORS Award, for his PhD study.

REFERENCES

[1] Q. Chen, U. Kruger, M. Meronk, and A. Y. T. Leung, "Synthesis of T² and Q statistics for process monitoring," Control Engineering Practice, vol. 12, pp. 745-755, 2004.
[2] S. W. Choi, J. H. Park, and I.-B. Lee, "Process monitoring using a Gaussian mixture model via principal component analysis and discriminant analysis," Computers and Chemical Engineering, vol. 28, pp. 1377-1387, 2004.
[3] A. Cinar and C. Undey, "Statistical process and controller performance monitoring," in American Control Conference, 1999, pp. 2625-2639.
[4] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society B, vol. 39, pp. 1-38, 1977.
[5] J. J. Downs and E. F. Vogel, "A plant-wide industrial process control problem," Computers and Chemical Engineering, vol. 17, pp. 245-255, 1993.
[6] T. S. Ferguson, "A Bayesian analysis of some nonparametric problems," Annals of Statistics, vol. 1, pp. 209-230, 1973.
[7] H. Hotelling, "Multivariate quality control," in Techniques of Statistical Analysis, C. Eisenhart, M. W. Hastay, and W. A. Wallis, Eds. New York: McGraw-Hill, 1947.
[8] E. B. Martin and A. J. Morris, "Non-parametric confidence bounds for process performance monitoring charts," Journal of Process Control, vol. 6, pp. 349-358, 1996.
[9] R. M. Neal, "Markov chain sampling methods for Dirichlet process mixture models," Department of Statistics, University of Toronto, Canada, Tech. Rep. No. 9815, 1998.
[10] P. Nomikos and J. F. MacGregor, "Monitoring batch processes using multiway principal component analysis," AIChE Journal, vol. 40, pp. 1361-1375, 1994.
[11] C. E. Rasmussen, "The infinite Gaussian mixture model," in Advances in Neural Information Processing Systems 12, S. A. Solla, T. K. Leen, and K.-R. Müller, Eds. MIT Press, 2000.
[12] N. L. Ricker, "Decentralized control of the Tennessee Eastman challenge process," Journal of Process Control, vol. 6, pp. 205-221, 1996.
[13] M. West, P. Müller, and M. D. Escobar, "Hierarchical priors and mixture models, with applications in regression and density estimation," in Aspects of Uncertainty, P. R. Freeman and A. F. M. Smith, Eds. John Wiley, 1994, pp. 363-386.
