Chaotic Time Series Approximation Using Iterative Wavelet-Networks

E. S. Garcia-Treviño, V. Alarcon-Aquino
Departamento de Ingeniería Electrónica, Universidad de las Américas Puebla, Sta. Catarina Mártir, Cholula, Puebla, C.P. 72820, MEXICO
E-mail: [email protected], [email protected]

Abstract

This paper presents a wavelet neural network for learning and approximation of chaotic time series. Wavelet networks are inspired by both feed-forward neural networks and the theory underlying wavelet decompositions. They are a class of neural networks that take advantage of the good localization properties of multiresolution analysis and combine them with the approximation abilities of neural networks. This kind of network uses wavelets as activation functions in the hidden layer, and a backpropagation-type algorithm is used for its learning. Comparisons are made between a wavelet network and typical feed-forward networks trained with the back-propagation algorithm. The results reported in this paper show that wavelet networks have better approximation properties than comparable backpropagation networks.

1. Introduction

Wavelet neural networks are a novel and powerful class of neural networks that incorporate the most important advantages of the multiresolution analysis introduced by Mallat in 1989 [6]. Zhang and Benveniste [8] found a link between wavelet decomposition theory and neural networks. Their approach uses a basic backpropagation wavelet-network learning algorithm. These networks preserve all the features of common neural networks, such as universal approximation properties, but in addition present an explicit link between the network coefficients and an appropriate transform. In neural networks, two types of activation functions are commonly used: global, as in backpropagation networks (BPN), and local, as in radial basis function networks (RBFN). Both networks have different approximation properties, and, given enough nodes, both are capable of approximating any continuous function with arbitrary accuracy [7]. With global activation functions, adaptation and incremental learning are slow due to the interaction of many nodes, and convergence is not guaranteed. Also, global functions do not allow local learning or local manipulation of the network. These problems are overcome in neural networks with local activation functions.

In recent years, several researchers have looked for better ways to design neural networks. For this purpose they have analyzed the relationship between neural networks, approximation theory and functional analysis. In functional analysis, any continuous function can be represented as a weighted sum of orthogonal basis functions. Such expansions can easily be represented as neural networks, which can be designed for the desired error rate using the properties of orthonormal expansions [7]. Unfortunately, most orthogonal functions are global approximators and suffer from the disadvantages mentioned above. In order to take full advantage of the orthonormality of basis functions and of localized learning, we need a set of basis functions which are local and orthogonal. Wavelets are functions with these features. In wavelet theory we can build simple orthonormal bases with good localization properties. Wavelets are a family of basis functions that combine powerful properties such as orthogonality, compact support, localization in time and frequency, and fast algorithms. Wavelets have generated tremendous interest in both theoretical and applied areas, especially over the past few years [10]. Wavelet-based methods have been used in approximation theory, pattern recognition, compression, numerical analysis, computer science, electrical engineering, physics, etc.

Wavelet networks are a class of neural networks that employ wavelets as activation functions. They have recently been investigated as an alternative approach to traditional neural networks with sigmoidal activation functions, and have attracted great interest because of their advantages over other network schemes (see, e.g., [9]). There are two main research groups that combine wavelet theory and neural networks, both with a comprehensive framework. The first is based on the work of Zhang and Benveniste [8], [11], which introduces a $(1+\frac{1}{2})$-layer neural network based on wavelets. This approach uses simple wavelets, and wavelet-network learning is performed by a standard back-propagation type algorithm, as in traditional neural networks. The second is based on the work initiated by Bakshi and Stephanopoulos [7], in which wavelets are used in a structure similar to radial basis function networks, but a non-iterative and hierarchical method is used for learning [5]. We base this work on the first approach, combining the ideas of feed-forward neural networks and wavelet decompositions. The basic neurons of this kind of wavelet network are multidimensional wavelets, and the neuron parameters are the dilation and translation coefficients as well as the weights.

Function approximation involves estimating the underlying relationship from a given finite input-output data set, and it has been a fundamental problem for a variety of applications in pattern classification, data mining, signal reconstruction, and system identification [4]. Recently, feed-forward neural networks such as multilayer perceptrons and radial basis function networks have been widely used as an alternative approach to function approximation, since they provide a generic black-box functional representation. Furthermore, these networks have been shown to be capable of approximating any continuous function defined on a compact set in $\mathbb{R}^n$, where $\mathbb{R}$ denotes the real numbers, with arbitrary accuracy. The main goal of this paper is to show the general performance of wavelet networks in a typical application of neural networks. In particular, we present wavelet networks applied to the approximation of chaotic time series. For this purpose, comparisons are made between a wavelet network, tested with the "Gaussian-Laplacian" wavelet and a stochastic gradient type algorithm for learning, and the typical feed-forward network trained with the backpropagation algorithm [3]. The results reported in this work show clearly that wavelet networks have better approximation properties than comparable backpropagation networks. The reason for this is the firm theoretical foundation of wavelet networks, which combines the mathematical methods and tools of multiresolution analysis with the neural network framework.

The remainder of this paper is organized as follows. Section 2 briefly reviews wavelet theory. Section 3 describes Zhang and Benveniste's wavelet-network structure. In Section 4 the chaotic time series are presented. In Section 5 comparisons between wavelet networks and backpropagation networks are made and discussed. Finally, Section 6 presents the conclusions of this work.

2. Review of wavelet theory

Wavelet transforms involve representing a general function in terms of simple, fixed building blocks at different scales and positions. These building blocks are generated from a single fixed function, called the mother wavelet, by translation and dilation operations. The continuous wavelet transform considers the family

$\psi_{a,b}(x) = \frac{1}{\sqrt{|a|}}\,\psi\!\left(\frac{x-b}{a}\right)$,  (1)

where $a \in \mathbb{R}$, $b \in \mathbb{R}$, with $a \neq 0$, and $\psi(\cdot)$ satisfies the admissibility condition. For discrete wavelets, the scale (or dilation) and translation parameters in (1) are chosen such that at level $m$ the wavelet $a_0^{m}\psi(a_0^{-m}x)$ is $a_0^{m}$ times the width of $\psi(x)$. That is, the scale parameter is $\{a = a_0^{m} : m \in \mathbb{Z}\}$ and the translation parameter is $\{b = k b_0 a_0^{m} : m, k \in \mathbb{Z}\}$. This family of wavelets is thus given by

$\psi_{m,k}(x) = a_0^{-m/2}\,\psi(a_0^{-m}x - k b_0)$,  (2)

so the discrete version of the wavelet transform is

$d_{m,k} = \langle g(x), \psi_{m,k}(x)\rangle = a_0^{-m/2}\int_{-\infty}^{\infty} g(x)\,\psi(a_0^{-m}x - k b_0)\,dx$,  (3)

where $\langle\cdot,\cdot\rangle$ denotes the $L^2$-inner product. To recover $g(x)$ from the coefficients $\{d_{m,k}\}$, the following stability condition should hold [2]:

$A\,\|g(x)\|^2 \;\le\; \sum_{m\in\mathbb{Z}}\sum_{k\in\mathbb{Z}} \left|\langle g(x), \psi_{m,k}(x)\rangle\right|^2 \;\le\; B\,\|g(x)\|^2$,  (4)

with $A > 0$ and $B < \infty$ for all signals $g(x)$ in $L^2(\mathbb{R})$, where $A$ and $B$ denote the frame bounds. These frame bounds can be computed from $a_0$, $b_0$ and $\psi(x)$ [12]. The reconstruction formula is thus given by

$g(x) \approx \frac{2}{A+B}\sum_{m\in\mathbb{Z}}\sum_{k\in\mathbb{Z}} \langle g(x), \psi_{m,k}(x)\rangle\,\psi_{m,k}(x)$.  (5)

Note that the closer $A$ and $B$, the more accurate the reconstruction. When $A = B = 1$, the family of wavelets forms an orthonormal basis [12].

2.1 Orthonormal bases and multiresolution analysis

The mother wavelet function $\psi(x)$ and the scaling $a_0$ and translation $b_0$ parameters are specifically chosen such that the $\psi_{m,k}(x)$ constitute orthonormal bases for $L^2(\mathbb{R})$ [6], [12]. To form orthonormal bases with good time-frequency localisation properties, the time-scale parameters $(b, a)$ are sampled on a so-called dyadic grid in the time-scale plane, namely $a_0 = 2$ and $b_0 = 1$ [6], [12]. Thus, substituting these values in Eq. (2), we have a family of orthonormal bases,

$\psi_{m,k}(x) = 2^{-m/2}\,\psi(2^{-m}x - k)$.  (6)

Using Eq. (3), the orthonormal wavelet transform is thus given by

$d_{m,k} = \langle g(x), \psi_{m,k}(x)\rangle = 2^{-m/2}\int_{-\infty}^{\infty} g(x)\,\psi(2^{-m}x - k)\,dx$,  (7)

and the reconstruction formula is obtained from Eq. (5). A formal approach to constructing orthonormal bases is provided by multiresolution analysis (MRA) [6]. The idea of MRA is to write a function $g(x)$ as a limit of successive approximations, each of which is a smoother version of $g(x)$. The successive approximations thus correspond to different resolutions [6].

2.2 Discrete wavelet transform: decomposition and reconstruction

Since the idea of multiresolution analysis is to write a signal $g(x)$ as a limit of successive approximations, the differences between two successive smooth approximations at resolutions $2^{m-1}$ and $2^{m}$ give the detail signal at resolution $2^{m}$. In other words, after choosing an initial resolution $L$, any signal $g(x) \in L^2(\mathbb{R})$ can be expressed as [6], [12]:

$g(x) = \sum_{k\in\mathbb{Z}} c_{L,k}\,\phi_{L,k}(x) + \sum_{m=-\infty}^{L}\sum_{k\in\mathbb{Z}} d_{m,k}\,\psi_{m,k}(x)$,  (8)

where the detail or wavelet coefficients $\{d_{m,k}\}$ are given by Eq. (7), while the approximation or scaling coefficients $\{c_{L,k}\}$ are defined by

$c_{L,k} = 2^{-L/2}\int_{-\infty}^{\infty} g(x)\,\phi(2^{-L}x - k)\,dx$.  (9)

Equations (7) and (9) express that a signal $g(x)$ is decomposed in details $\{d_{m,k}\}$ and approximations $\{c_{L,k}\}$ to form a multiresolution analysis of the signal [6].

3. Description of wavelet-networks

As stated previously, we base this work on the wavelet network approach introduced by Zhang and Benveniste [8]. This wavelet-network structure has two layers, organized in the form of the so-called $(1+\frac{1}{2})$-layer neural network shown in Figure 1, where $x$ is the input, $g$ is the output, the $b$'s are the biases of the neurons, $\sigma$ is the activation function and, finally, the $a$'s and $\omega$'s are the first-layer and second-layer ($\frac{1}{2}$-layer) coefficients, respectively.

Figure 1. The so-called $(1+\frac{1}{2})$-layer neural network (input $x$, biases $b_1,\dots,b_N$, first-layer coefficients $a_1,\dots,a_N$, activations $\sigma$, second-layer weights $\omega_1,\dots,\omega_N$, output $g$).

For this approach, we note that if $\sigma(\cdot)$ is a continuous discriminatory function, then finite sums of the form

$g(x) = \sum_{i=1}^{N} \omega_i\,\sigma(a_i^{T}x + b_i)$  (10)

are dense in the space of continuous functions, where $\omega_i, b_i \in \mathbb{R}$ and $a_i \in \mathbb{R}^n$. In other words, given any continuous function $f$ defined on $[0,1]^n$ and any $\varepsilon > 0$, there is a sum $g(x)$ of the form (10) for which $|g(x) - f(x)| < \varepsilon$ for all $x \in [0,1]^n$. Based on the so-called $(1+\frac{1}{2})$-layer neural network, Zhang and Benveniste [8] propose a wavelet-network structure of the form

$g(x) = \sum_{i=1}^{N} \omega_i\,\psi[D_i(x - t_i)] + \bar{g}$,  (11)

where $\bar{g}$ is an additional and redundant parameter, introduced to help deal with nonzero-mean functions on finite domains, since the wavelet $\psi(x)$ has zero mean; the dilation matrices $D_i$ are diagonal matrices built from dilation vectors. This network structure is illustrated in Figure 2.
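To make the structure (11) concrete, the following minimal Python sketch (illustrative only, not code from the paper) evaluates the network output for diagonal dilation matrices. The product form of the multidimensional wavelet and the specific "Gaussian-Laplacian" wavelet $\psi(x) = -x\,e^{-x^2/2}$ (the one used later in Section 5) are assumptions here.

```python
import numpy as np

def psi(x):
    # Assumed "Gaussian-Laplacian" wavelet: psi(x) = -x * exp(-x^2 / 2)
    return -x * np.exp(-0.5 * x ** 2)

def wavenet_output(x, omega, D, t, g_bar):
    """Output of the (1+1/2)-layer wavelet network of Eq. (11):
        g(x) = sum_i omega_i * psi[D_i (x - t_i)] + g_bar.
    x is an input vector (n,), D holds the diagonals of the dilation matrices
    D_i (N, n), t the translation vectors (N, n), omega the output weights (N,),
    and g_bar the mean offset.  The multidimensional wavelet is taken here as a
    product of scalar wavelets per coordinate (an assumption)."""
    z = D * (x - t)                                   # (N, n): D_i (x - t_i), D_i diagonal
    return float(omega @ np.prod(psi(z), axis=1) + g_bar)

# Tiny usage example: one input dimension, three wavelet neurons
x = np.array([0.3])
omega = np.array([0.5, -0.2, 0.1])
D = np.ones((3, 1))
t = np.array([[0.2], [0.5], [0.8]])
print(wavenet_output(x, omega, D, t, g_bar=0.0))
```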

Figure 2. $(1+\frac{1}{2})$-layer wavelet neural network (input $x$, dilations $D_1,\dots,D_N$, translations $t_1,\dots,t_N$, wavelet nodes $\psi$, weights $\omega_1,\dots,\omega_N$, output $g$).

In this kind of wavelet network, each neuron is replaced by a wavelet. This approach is different from the wavelet decomposition of functions, because the translations and dilations in the new structure are iteratively adjusted according to the given function to be approximated.

3.1 Learning algorithm

As mentioned above, these wavelet networks use a stochastic gradient type algorithm to adjust the network parameters. If we denote by $\theta$ the vector collecting all the parameters of the network, the network output is denoted by $g_\theta(x)$. The learning algorithm should minimize the following criterion:

$C(\theta) = \frac{1}{2}\,E\{[g_\theta(x) - y]^2\}$.  (12)

This stochastic gradient algorithm recursively minimizes the criterion (12) using input/output observations. The algorithm modifies the vector $\theta$ after each measurement $(x_k, y_k)$ in the opposite direction of the gradient of

$c(\theta, x_k, y_k) = \frac{1}{2}\,[g_\theta(x_k) - y_k]^2$.  (13)
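As an illustration of how one step of this stochastic gradient algorithm might look, the sketch below (an assumption, not the authors' implementation) updates the parameters of a scalar-input network with scalar dilations $d_i$, using the "Gaussian-Laplacian" wavelet of Section 5 and its analytical derivative. The projection onto the constraints described in Section 3.2 below is noted but omitted.

```python
import numpy as np

def psi(z):
    # Assumed wavelet ("Gaussian-Laplacian"): psi(z) = -z * exp(-z^2 / 2)
    return -z * np.exp(-0.5 * z ** 2)

def dpsi(z):
    # Its derivative: psi'(z) = (z^2 - 1) * exp(-z^2 / 2)
    return (z ** 2 - 1.0) * np.exp(-0.5 * z ** 2)

def sgd_step(theta, x_k, y_k, lr=0.01):
    """One stochastic-gradient step on c(theta) = 0.5 * (g_theta(x_k) - y_k)^2
    for a scalar-input network g(x) = sum_i w_i * psi(d_i * (x - t_i)) + g_bar.
    theta is a dict with arrays 'w', 'd', 't' (length N) and scalar 'g_bar'.
    Illustrative sketch: the projection enforcing the constraints (14)-(16)
    after each step is omitted here."""
    w, d, t = theta['w'], theta['d'], theta['t']
    z = d * (x_k - t)
    e = float(w @ psi(z) + theta['g_bar'] - y_k)   # prediction error g_theta(x_k) - y_k
    theta['w']     -= lr * e * psi(z)
    theta['t']     -= lr * e * w * dpsi(z) * (-d)
    theta['d']     -= lr * e * w * dpsi(z) * (x_k - t)
    theta['g_bar'] -= lr * e
    return theta, 0.5 * e ** 2                      # updated parameters, instantaneous cost
```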

3.2 Implementation

The parameters of network (11) are difficult to learn because of its structural nonlinearity. Local minima in neural network learning have been discussed in detail in the neural network literature. Because this kind of wavelet network uses a gradient-type algorithm, we also suspect the existence of undesirable local minima in wavelet-network learning. Therefore, some additional processing is needed to avoid divergence or poor convergence. Such processing involves both constraints on the adjustable parameters and a procedure to initialize the recursive learning algorithm.

Parameter constraints. Let $f : W \to \mathbb{R}$ be the function to be approximated, where $W \subset \mathbb{R}^n$ is a closed domain. The constraints on the parameters are the following [8]:

1. To restrict the wavelets to lie in or near the domain $W$, select another domain $\mathcal{X}$ such that $\mathcal{X} \subset W \subset \mathbb{R}^n$, and require

$t_i \in \mathcal{X}$, $\quad i = 0, 1, \dots, N-1$.  (14)

2. To avoid each wavelet being too compressed, select $\varepsilon > 0$ and require

$D_i^{-1} > \varepsilon I$, $\quad i = 0, 1, \dots, N-1$.  (15)

3. To prevent the total volume of the wavelet supports from being too small, select $V > 0$ and require

$\sum_{i=0}^{N-1} (\det D_i)^{-1} > V$.  (16)

So we have a constrained minimization problem. These constraints are used in each iteration: after the modification of the parameter vector with the stochastic gradient, the parameter vector is projected onto the restricted domain.

Network initialization. The idea of the initialization is somewhat similar to the wavelet decomposition. More precisely, the initial values for the network parameters are derived by using the decomposition formula (8). Using noisy input/output measurements $\{x, f(x)\}$, we can get a rough estimate of the coefficients in (8) with an appropriate finite number of observations. For the case of a one-dimensional approximation problem, assume that $f(x)$ is the function to be approximated over the domain $[a, b]$ by a network of the form

$g(x) = \sum_{i=1}^{N} \omega_i\,\psi\!\left(\frac{x - t_i}{s_i}\right) + \bar{g}$.  (17)

The wavelet-network initialization then consists in the evaluation of the parameters $\bar{g}$, $\omega_i$, $t_i$ and $s_i$ for $i = 1, 2, \dots, N$. To initialize $\bar{g}$, we estimate the mean of the function $f(x)$ from its available observations and set $\bar{g}$ to this estimated mean. The $\omega_i$'s are simply set to zero. To initialize $t_i$ and $s_i$, select a point $p$ between $a$ and $b$: $a < p < b$. Then we set

$t_1 = p$, $\quad s_1 = \xi(b - a)$,  (18)

where $\xi > 0$ is a properly selected constant (the typical value of $\xi$ is 0.5). Then the interval $[a, b]$ is divided into two parts by the point $p$. In each sub-interval, we recursively repeat the same procedure, which will initialize $t_2, s_2$ and $t_3, s_3$, and so on, until all the wavelets are initialized. This procedure applies exactly in this form when the number of neurons used is a power of 2. When the number of neurons in the network is not a power of 2, the recursive procedure is applied as long as possible, and the remaining $t_i$ are then initialized at random values at the finest remaining scale. For the selection of the point $p$ inside $[a, b]$ (and recursively of the points which divide all the sub-intervals), we use the following equation:

$\rho(x) = \frac{j(x)}{\int_a^b j(x)\,dx}$,  (19)

where

$j(x) = \left|\frac{df(x)}{dx}\right|$.

Here $\rho(x)$ is a "density function" that must be estimated from the available noisy input/output observations $\{x, f(x)\}$; a very rough estimate of $j(x)$ is used for this purpose. Then we take the point $p$ to be the center of gravity of $[a, b]$,

$p = \int_a^b x\,\rho(x)\,dx$.  (20)
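A loose Python sketch of this recursive initialization for the one-dimensional network (17) is given below. It assumes samples of the target function sorted by $x$; the handling of the recursion budget and of the non-power-of-two case is a simplification of the procedure above, not the authors' code.

```python
import numpy as np

def initialize_wavenet(x, fx, n_wavelets, xi=0.5):
    """Sketch of the recursive initialization (Eqs. (18)-(20)).
    Returns translations t, dilations s, weights w (set to zero, as in the paper)
    and the mean offset g_bar (estimated mean of the observations)."""
    grad = np.abs(np.gradient(fx, x))        # rough estimate of j(x) = |df/dx|
    t_list, s_list = [], []

    def split(lo, hi, budget):
        # lo, hi are index bounds of the current sub-interval [a, b]
        if budget == 0 or hi - lo < 2:
            return
        g = grad[lo:hi]
        if g.sum() <= 0:
            p_idx = (lo + hi) // 2                         # flat segment: take midpoint
        else:
            rho = g / np.trapz(g, x[lo:hi])                # Eq. (19)
            p = np.trapz(x[lo:hi] * rho, x[lo:hi])         # Eq. (20): centre of gravity
            p_idx = lo + int(np.argmin(np.abs(x[lo:hi] - p)))
        t_list.append(x[p_idx])
        s_list.append(xi * (x[hi - 1] - x[lo]))            # Eq. (18)
        left = (budget - 1) // 2                           # share the remaining budget
        split(lo, p_idx, left)
        split(p_idx, hi, budget - 1 - left)

    split(0, len(x), n_wavelets)
    # If the recursion stops early, fill the remaining translations at random
    while len(t_list) < n_wavelets:
        t_list.append(np.random.uniform(x[0], x[-1]))
        s_list.append(min(s_list) if s_list else xi * (x[-1] - x[0]))
    return (np.array(t_list), np.array(s_list),
            np.zeros(n_wavelets), float(np.mean(fx)))      # t, s, w, g_bar
```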

4. Chaotic time series

Chaos is the mathematical term for the behavior of a system that is inherently unpredictable. Unpredictable phenomena are readily apparent in all areas of life [1]. Many systems in the natural world are now known to exhibit chaos or non-linear behavior, the complexity of which is so great that they were previously considered random. The unraveling of these systems has been aided by the discovery, mostly in this century, of mathematical expressions that exhibit similar tendencies [2]. Chaos can occur even in systems that have few degrees of freedom. The critical ingredient in many chaotic systems is what mathematicians call sensitive dependence on initial conditions: if one makes even the slightest change in the initial configuration of the system, the resulting behavior may be dramatically different [1]. Chaos is part of an even grander subject known as dynamics. Whenever dynamical chaos is found, it is accompanied by nonlinearity. Naturally, an uncountable variety of nonlinear relations is possible, depending perhaps on a multitude of parameters. These nonlinear relations are frequently encountered in the form of difference equations, mappings, differential equations, partial differential equations, integral equations, or sometimes combinations of these. We note that, for each differential equation, the specific parameters were selected because they are the most representative values for chaotic behavior and are also the most commonly used in the literature [1], [2], [14], [15]. In this section we briefly present the two chaotic time series used in this work.

4.1 Lorenz equation

The Lorenz equation was introduced by E. N. Lorenz in 1963. It was derived from a simplified model of atmospheric interactions. The system is most commonly expressed as three coupled non-linear differential equations:

$\frac{dx}{dt} = a(y - x)$, $\quad \frac{dy}{dt} = x(b - z) - y$, $\quad \frac{dz}{dt} = xy - cz$,  (21)

where $a = 10$, $b = 28$, $c = 8/3$.

4.2 Mackey-Glass equation

The Mackey-Glass equation was first advanced as a model of white blood cell production. It is a time-delay differential equation described by

$\frac{dx}{dt} = \frac{a\,x(t-\tau)}{1 + x^{c}(t-\tau)} - b\,x(t)$,  (22)

where $a = 0.2$, $b = 0.1$, $c = 10$.

The chaotic series described previously were considered for this work because they are used as benchmarks and also because they illustrate how complex behavior can easily be produced by simple equations with non-linear elements and feedback.
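For reproducibility, a small Python sketch that generates the two series with the parameter values given above is shown below. The initial conditions, integration scheme and internal step sizes are assumptions, since the paper does not state them.

```python
import numpy as np

def lorenz_series(n, dt=0.01, a=10.0, b=28.0, c=8.0 / 3.0, x0=(1.0, 1.0, 1.0)):
    """Euler integration of the Lorenz system (21); returns the x-coordinate.
    The sampling step dt = 0.01 follows the paper; the initial condition is assumed."""
    x, y, z = x0
    out = np.empty(n)
    for i in range(n):
        dx = a * (y - x)
        dy = x * (b - z) - y
        dz = x * y - c * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[i] = x
    return out

def mackey_glass_series(n, tau=17.0, a=0.2, b=0.1, c=10.0, sample=1.0, dt=0.1, x0=1.2):
    """Euler integration of the Mackey-Glass delay equation (22) with tau = 17,
    sampled every 1.0 time units as in Section 5; x0, the internal step dt and
    the constant initial history are assumptions."""
    delay = int(round(tau / dt))
    steps = int(round(n * sample / dt))
    x = np.full(steps + delay + 1, x0)
    for i in range(delay, steps + delay):
        x_tau = x[i - delay]
        x[i + 1] = x[i] + dt * (a * x_tau / (1.0 + x_tau ** c) - b * x[i])
    stride = int(round(sample / dt))
    return x[delay::stride][:n]
```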

5. Simulation results

This work presents wavelet networks applied to the approximation of two of the best-known chaotic time series: the Lorenz and Mackey-Glass equations. In particular, this section presents comparisons between a wavelet network, tested with the "Gaussian-Laplacian" wavelet and a stochastic gradient type algorithm for learning, and the typical feed-forward network trained with the backpropagation algorithm. The Lorenz equation was obtained from [16] and was sampled every 0.01 time steps. For the Mackey-Glass equation, $\tau = 17$ with a step size of 1.0. For appropriate learning, and taking into account the features of the activation functions used by the networks analyzed, the two series were normalized to the range 0-1. For the basis functions of the wavelet-network approach (wavenet) we used the so-called "Gaussian-Laplacian" wavelet,

$\psi(x) = -x\,e^{-\frac{1}{2}x^2}$.

For the back-propagation network (BPN), we used tangent-sigmoidal activation functions, and the network was trained with a learning rate of 0.1 and a momentum term of 0.07. The architectures used for the backpropagation network were 1-10-1 (one input, ten hidden units and one output unit) and 1-15-1 (one input, fifteen hidden units and one output unit).

Table 1 shows that wavelet networks outperform the BPN in terms of MSE (mean square error) in both cases, with ten and with fifteen neurons (see also Figure 3). This is due to the fact that neural networks based on conventional single-resolution schemes cannot learn difficult time series with abrupt changes in their behavior; consequently, the training process often cannot converge or converges very slowly, and the trained network may not generalize well. It is also clear from Table 1 that wavelet networks require a smaller number of iterations to perform time series prediction. We can see in Figure 3 that, for the time series analyzed, the wavelet networks show a better approximation than the comparable backpropagation network. For the case of the Mackey-Glass equation, in which we work with 500 samples for training, we can see that, with 15 neurons, the backpropagation network can only learn the first 100 samples of the mapping. This is due to the fact that the learning of these networks is based on a single-resolution scheme.

Table 1. Simulation results

Lorenz equation
method   neurons  iterations  MSE
wavenet  10       40          0.0186
wavenet  15       40          0.0045
BPN      10       5000        0.0645
BPN      15       5000        0.0583

Mackey-Glass equation
method   neurons  iterations  MSE
wavenet  10       40          0.0148
wavenet  15       40          0.0041
BPN      10       5000        0.0655
BPN      15       5000        0.0564

Figure 3. Mackey-Glass equation approximation with 15 neurons: (a) with BPN, (b) with wavenet. [Plots show network output (range 0-1.2) versus sample index (0-500).]
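To make the evaluation protocol explicit, a small sketch of the normalization and error measure used in the comparison follows. The train/test arrangement indicated in the comments is an assumption, not a detail taken from the paper.

```python
import numpy as np

def normalize01(s):
    # Both series are normalized to the range [0, 1] before training, as stated above
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def mse(y_pred, y_true):
    # Mean square error, the comparison metric reported in Table 1
    y_pred, y_true = np.asarray(y_pred), np.asarray(y_true)
    return float(np.mean((y_pred - y_true) ** 2))

# Hypothetical usage (the exact input/output mapping used in the paper is not
# spelled out; approximating the normalized series as a function of normalized
# time, as Figure 3 suggests, is an assumption):
#
#   series  = normalize01(mackey_glass_series(500))   # sketch from Section 4
#   time_in = np.linspace(0.0, 1.0, len(series))
#   ... train the wavenet / BPN on (time_in, series) and report
#       mse(model_predictions, series)
```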

6. Conclusions

The wavelet network described in this paper can be used for black-box identification of general non-linear systems. The method is inspired by both neural networks and the wavelet decomposition. The basic idea is to replace the neurons by more powerful computing units obtained by cascading an affine transform and a wavelet. The results reported in this paper show clearly that wavelet networks have better approximation properties than comparable backpropagation networks. The reason for this is that wavelets, in addition to forming an orthogonal basis, have the capability to explicitly represent the behavior of a function at different resolutions of the input variables. Finally, it is worth noting that, for a comparable number of neurons, the complexity of the input/output mapping realized by the wavelet decomposition is higher than its counterpart realized by backpropagation networks. A more detailed study of the chaotic features of the series presented here, and an analysis of the performance of this kind of network applied to real data, will be reported in a forthcoming paper.

7. References

[1] R. L. Devaney, "Chaotic Explosions in Simple Dynamical Systems", The Ubiquity of Chaos, Edited by Saul Krasner. American Association for the Advancement of Science. Washington D. C., U.S.A. 1990


[2] J. Pritchard, "The Chaos CookBook: A practical programming guide", Part of Reed International Books. Oxford. Great Britain. 1992.


[3] S. Y. Kung, "Digital Neural Networks", Department of Electrical Engineering, Princeton University; Prentice Hall, New Jersey, U.S.A., 1993.

[4] S. Sitharama Iyengar, E. C. Cho, Vir V. Phoha, "Foundations of Wavelet Networks and Applications", Chapman & Hall/CRC, U.S.A., 2002.

[5] A. A. Safavi, H. Gunes, J. A. Romagnoli, "On the Learning Algorithms for Wave-Nets", Department of Chemical Engineering, The University of Sydney, Australia, 2002.

[6] S. G. Mallat, "A Theory for Multiresolution Signal Decomposition: The Wavelet Representation", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 11, No. 7, July 1989.

[7] B. R. Bakshi, G. Stephanopoulos, "Wavelets as Basis Functions for Localized Learning in a Multiresolution Hierarchy", Laboratory for Intelligent Systems in Process Engineering, Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 1992.

[8] Q. Zhang, A. Benveniste, "Wavelet Networks", IEEE Transactions on Neural Networks, Vol. 3, No. 6, July 1992.

[9] E. A. Rying, Griff L. Bilbro, and Jye-Chyi Lu, "Focused Local Learning With Wavelet Neural Networks", IEEE Transactions on Neural Networks, Vol. 13, No. 2, March 2002.

[10] B. Jawerth, W. Sweldens, "An Overview of Wavelet Based Multiresolution Analyses", SIAM, 1994.

[11] Xeiping Gao, Fen Xiao, Jun Zhang, Chunhong Cao, "Short-term Prediction of Chaotic Time Series by Wavelet Networks", WCICA, Fifth World Congress on Intelligent Control and Automation, 2004.

[12] I. Daubechies, "Ten Lectures on Wavelets", SIAM, New York, 1992.

[13] A. A. Safavi, J. A. Romagnoli, "Application of Wavenets to Modelling and Optimisation of a Chemical Process", Proceedings, IEEE International Conference on Neural Networks, 1995.

[14] S. H. Strogatz, "Nonlinear Dynamics and Chaos", Addison Wesley Publishing Company, USA, 1994.

[15] S. N. Rasband, "Chaotic Dynamics of Nonlinear Systems", John Wiley & Sons Inc., USA, 1990.

[16] E. R. Weeks, http://www.physics.emory.edu/~weeks/
