Evolving Signal Processing Algorithms by Genetic Programming

Ken C. Sharman, Anna I. Esparcia Alcázar & Y. Li
Department of Electronics & Electrical Engineering, The University of Glasgow, G12 8LT, Scotland
e-mail: [email protected], [email protected]

Abstract - We introduce a novel genetic programming (GP) technique to evolve both the structure and parameters of adaptive digital signal processing algorithms. This is accomplished by defining a set of node functions and terminals to implement the basic operations commonly used in a large class of DSP algorithms. In addition, we show how simulated annealing may be employed to assist the GP in optimising the numerical parameters of expression trees. The concepts are illustrated by using GP to evolve high performance algorithms for detecting binary data sequences at the output of a noisy, non-linear communications channel.

1. Introduction

Genetic Programming (GP) is concerned with the evolution of executable computer programs. This is in contrast to standard genetic algorithms (GAs), whose aim is to evolve data (usually in the form of strings of bits or other symbols). References [1] and [2] provide a good overview of some of the application areas where GP has proved successful. In this paper, we show how GP may be used to develop high performance algorithms for a large class of digital signal processing (DSP) applications.

Currently, most signal processing algorithms have a fixed structure - only the numerical parameters of the algorithm are variable and adapted to the observed data. This is restrictive in the sense that the structure of the algorithm must be chosen a priori, leading to poor performance when the algorithm is not well matched to the problem.

In this paper, we use GP techniques to evolve both the structure and parameters of a class of DSP algorithms. The aim is to yield systems that are optimally matched to the problem at hand, thereby providing high performance in the underlying signal processing task. This is accomplished by extending the basic set of GP operators to include ones which implement the fundamental functions and processing tasks required by typical DSP systems. We also present some enhancements to standard GP, including a simulated annealing learning algorithm which assists the adaptation process.

To summarise, we are using two evolutionary methods to design DSP algorithms:

• Genetic Programming evolves the structure of the system.
• Simulated Annealing evolves the numerical parameters of the system.

The paper is organised as follows: Section 2 provides a brief overview of Genetic Programming. In section 3 we introduce methods for representing general purpose difference equations as functional expression trees, and we describe some novel GP processing nodes for implementing DSP systems. These include nodes which provide time-delays, recursion, and non-linear processing. In section 4 we develop a simulated annealing learning algorithm for optimising the numerical parameters of an expression tree. Finally, in section 5, we present some experimental results in applying the methods to the problem of evolving a DSP algorithm for detecting binary data at the output of a noisy non-linear channel.

2. Genetic Programming

Genetic Programming (GP) is a subset of the class of Genetic Algorithms (GAs). The main difference is that GAs deal with fixed-length strings over some finite alphabet,


(usually binary), whereas in GP the individuals which constitute a population are symbol strings that code for executable computer programs. These programs are coded as functions written in Polish notation, representing expression trees. For example, the function,

f(a, b, c) = a + b * c                                  (1)

is written in Polish notation as,

(+ a (* b c))

or as the expression tree,

      +
     / \
    a   *
       / \
      b   c

The elements of trees are called nodes. According to the function they represent (i.e. their position in the tree), nodes can be classified into:

• Functions: processing nodes that consume one or more input values and produce a single output value (e.g. + and * in the above example). These provide the internal cells of expression trees.

• Terminals: nodes which represent external inputs, constants and zero-argument functions. These are the leaves of the expression tree.

The string representation of the Polish notation is used as a genotype which codes for the phenotype function induced by the associated expression tree. A fitness value is associated with each phenotype; it measures the performance of the function coded by the expression tree in solving the problem at hand. The evolution of tree structures proceeds in a similar manner to the standard GA: an initial population of trees is generated at random and is then evolved by means of the following genetic operators:

• Selection: couples of parent trees are selected for reproduction on the basis of their fitness. The most common selection mechanisms are fitness-proportionate reproduction and tournament selection.

• Crossover: performed by selecting a random node in each of the parent trees and exchanging the associated subtrees. The procedure produces a pair of child expression trees that each inherit different characteristics from both parents.

• Mutation: a minor operator which involves the random selection of a node and either the replacement of its associated subtree with a randomly generated one or the changing of its type.

More sophisticated implementations of GP use genotype strings that code for multiple expression trees. These consist of a main tree and a set of auxiliary function trees (sometimes called automatically defined functions, or ADFs) which are called by the main tree as subroutines. Terminal nodes in the auxiliary trees provide arguments to the encoded subroutine functions. This is useful in problems where the overall task can be decomposed into modular units, or has inherent symmetry.

3. Representing Signal Processing Algorithms as Expression Trees

General Structure

We will now show how a class of general purpose discrete-time signal processing systems can be implemented as expression trees, and therefore coded as strings to be evolved using GP [3].

Consider the single-input single-output system described by the following difference equation,

y_n = F_n(X_n, Y_{n-1})                                (2)

where y_n is the system output at time n, X_n is a vector of the present and most recent previous inputs to the system, Y_{n-1} represents the most recent outputs from the system, and F_n() is the (time-variant) function describing how the system output is related to its inputs and previous outputs. Note that the system function may change over time. This is particularly true in adaptive systems which attempt to track time-varying trends in the input data.

Delay Nodes

The class of systems described in (2) can be represented as expression trees by introducing terminal nodes which return the values of the previous inputs to and outputs from the tree. These will be labelled xN and yN respectively, where N is an integer index indicating the delay time (see table 1). The xN nodes provide inputs to the tree, while the yN nodes yield time-recursion by allowing previous outputs from the tree to be used as current inputs. We also introduce a single-argument function node labelled Z which implements a single sample delay. This node set provides a rich base for constructing many types of signal processing systems. For example, the second order digital filter described by the difference equation, [4],

y_n = c0 x_n + c1 x_{n-1} + c2 x_{n-2} + c3 y_{n-1} + c4 y_{n-2}    (3)

can be represented by the following tree,

(+ (+ (+ (+ (* c3 y1) (* c4 y2)) (* c2 x2)) (* c1 x1)) (* c0 x0))

The mapping of a signal processing system to a tree structure is far from unique - the digital filter in (3) could equally well be implemented by the tree,

(+ (+ (+ (* c0 x0) (* c1 (Z x0))) (+ (* c2 (Z (Z x0))) (* c3 y0))) (* c4 (Z y0)))

In fact, there are infinitely many different tree structures for a particular system function. We see this as a very positive point in the application of GP to algorithmic evolution, because there will be a great many genotypes for each system algorithm. Hence, it will be easier to find a suitable genotype during evolution than in the case where the mapping from genotype to algorithm phenotype is one-to-one.

Nodes for Local Time-Recursion

In addition to the time-recursion implemented by the yN node type, we have also developed a method for obtaining local time-recursion within an expression tree. This is done using a special terminal node type which returns the previous time sample value from any node in the tree. When combined with the Z (delay) function, we can then access the value at any node in the tree at any previous time instant. This is important for developing modular solutions to certain problems. For example, the biquadratic digital filter section in canonical form is described by the coupled equations, [4],

p_n = x_n + c1 p_{n-1} + c2 p_{n-2}
y_n = p_n + c3 p_{n-1} + c4 p_{n-2}                    (4)

The most straightforward way of implementing local recursion is to add data storage to keep track of the previous output values at each and every one of the tree's nodes. A set of terminal nodes could then be defined to access these values. This is, however, impractical: we would need to add data storage to every node, even if its previous value is not required, and, more importantly, we would also need to provide a means for the accessor nodes to address the internal nodes of interest, which might be difficult in large expression trees. A much more practical method is to keep track of the output values only at certain nodes of interest. This is accomplished using the node type labelled psh. The psh node is a single-argument function which stores the value of its argument at the top of a global stack memory object and then passes this value to its output. The stkN terminal node is the complement of psh and retrieves values from the stack memory; the N value indicates the memory location from which the data is retrieved. Two stack objects are used simultaneously - one by the psh nodes and the other by the stkN nodes. This is done so that, during execution of the expression tree, the current output values at certain nodes can be saved for reuse at the next time instant, and at the same time, the stored values from the previous time instant can be retrieved and used as input data to the tree.

To illustrate the application of the psh and stkN nodes, a possible genotype coding for the modular digital filtering algorithm in (4) is,

(+ (psh {+ x0 (+ (* c1 stk0) (* c2 (Z stk0)))}) (+ (* c3 stk0) (* c4 (Z stk0))))

In this expression tree, the sub-tree enclosed in braces evaluates the term p_n and pushes this value onto the stack memory ready for the next cycle. The stk0 node therefore returns the value p_{n-1}, which can be delayed by the Z node to get p_{n-2}.

Time-varying and Adaptive Systems

So far, we have only considered expression trees whose structure is fixed over time. Hence they cannot implement the time-variation inherent in the general system described by (2). One way of producing a time-varying expression tree is to introduce a terminal node which returns the current time index, n. This would enable the tree to modify its function according to the current time, therefore producing a time-varying function. This approach is, however, considered infeasible, since it will generally lead to massive tree sizes which are difficult to evolve within reasonable space and time bounds.
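The double-stack psh/stk mechanism described earlier can be checked with a short simulation. The following Python sketch is our own illustration (not the authors' code): it implements the biquad of (4) both by direct evaluation of the coupled equations and via a psh/stk-style buffer pair, where stk0 reads the value pushed on the previous cycle and a Z state provides one further sample of delay.

```python
def biquad_direct(x, c1, c2, c3, c4):
    """Direct evaluation of the coupled equations (4)."""
    y, p1, p2 = [], 0.0, 0.0          # p1 = p_{n-1}, p2 = p_{n-2}
    for xn in x:
        pn = xn + c1 * p1 + c2 * p2
        y.append(pn + c3 * p1 + c4 * p2)
        p1, p2 = pn, p1
    return y

def biquad_stacks(x, c1, c2, c3, c4):
    """Same filter via psh/stkN semantics: psh writes the current p_n,
    stk0 reads the value pushed on the previous cycle, and the Z node
    supplies one extra sample of delay."""
    y = []
    read_stack = [0.0]                # value pushed by psh last cycle
    z_state = 0.0                     # memory of the Z node applied to stk0
    for xn in x:
        stk0 = read_stack[0]                    # p_{n-1}
        z_out = z_state                         # (Z stk0) -> p_{n-2}
        pn = xn + c1 * stk0 + c2 * z_out        # psh sub-tree of the genotype
        y.append(pn + c3 * stk0 + c4 * z_out)   # whole-tree output y_n
        z_state = stk0                          # Z node stores its input
        read_stack = [pn]                       # stacks swap for next cycle
    return y
```

Both routines perform identical arithmetic in the same order, so their outputs agree exactly.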

Symbol  Arity     Description
+       2         Addition
-       2         Subtraction
*       2         Multiplication
/       2         Division - if the second argument is 0, the node output is set to a large maximum value
+1      1         Increment
-1      1         Decrement
*2      1         Multiply by two
/2      1         Divide by two
Z       1         Unit sample time delay
nlN     1         Non-linear transfer function; N indicates the amount of non-linearity
psh     1         Push the argument value onto the stack
stkN    0         Retrieve the Nth item from the stack
xN      0         System input data; N indicates the delay (e.g. x2 returns x_{n-2})
yN      0         Previous output from the expression tree; the index N indicates 1 + the delay factor (e.g. y2 returns y_{n-3})
cN      0         Constant value; N is an index to a table of constants whose values may be predefined or chosen at random
argN    0         The Nth argument to an ADF
fN      variable  Execute the Nth ADF function tree
AvgN    N         The average of its N arguments

Note: the suffix N that appears in many of the node symbols is an integer in the range 0...255.

Table 1 - The Function & Terminal Node Set for Evolving DSP Algorithms
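To make the node semantics concrete, the following toy interpreter (our own illustrative sketch, covering only the +, *, cN, xN and yN entries of table 1) executes a Polish-notation genotype as a difference equation over an input stream. The yN convention used here follows the table: yN returns y_{n-N-1}, so y0 is the previous output.

```python
def tokenize(s):
    return s.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Parse a Polish-notation genotype into a nested list."""
    tok = tokens.pop(0)
    if tok == "(":
        node = []
        while tokens[0] != ")":
            node.append(parse(tokens))
        tokens.pop(0)                 # drop the closing ")"
        return node
    return tok

def evaluate(node, x_hist, y_hist, consts):
    """x_hist[k] = x_{n-k}; y_hist[k] = y_{n-1-k} (table 1 convention)."""
    if isinstance(node, str):
        if node.startswith("x"):
            return x_hist[int(node[1:])]
        if node.startswith("y"):
            return y_hist[int(node[1:])]
        return consts[node]           # cN constant terminal
    op, args = node[0], [evaluate(a, x_hist, y_hist, consts) for a in node[1:]]
    if op == "+":
        return args[0] + args[1]
    if op == "*":
        return args[0] * args[1]
    raise ValueError("unknown node: " + op)

def run_filter(genotype, xs, consts, order=2):
    """Run the genotype as a recursive filter over the input stream xs."""
    tree = parse(tokenize(genotype))
    x_hist, y_hist, out = [0.0] * (order + 1), [0.0] * order, []
    for xn in xs:
        x_hist = [xn] + x_hist[:-1]   # shift in the new input sample
        yn = evaluate(tree, x_hist, y_hist, consts)
        y_hist = [yn] + y_hist[:-1]   # record the output for recursion
        out.append(yn)
    return out
```

For example, the genotype `(+ x0 (* c0 y0))` with c0 = 0.5 implements y_n = x_n + 0.5 y_{n-1}, whose impulse response is 1, 0.5, 0.25, ...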

An alternative approach for producing timevarying signal processing systems is to note that, in many applications, the required timevariation of the system is relatively slow with respect to the sampling rate of the input data. In other words, over short time intervals (relative to the duration of the input data), the system may be considered as time-invariant. Thus, we can use time-invariant expression trees which are replaced at regular intervals. This is easily implemented using a timevariant fitness function, and allowing the evolution of tree structures by GP to continue as new input data is observed.

Non-linear Transfer Functions

It is well known that many important signal processing problems are non-linear. However, most of the current generation of signal processing algorithms use linear systems, mainly because the techniques of optimisation and on-line adaptation (e.g. gradient descent and least squares methods) have been thoroughly developed for linear systems only. In this work, we take an alternative view of the optimisation problem, and linearity constraints do not apply to GP evolution. To develop non-linear systems, we use a special single-argument function node (nlN) which has a non-linear transfer function. The N attached to a non-linear node is an integer label indicating the amount of non-linearity in the transfer function of the node. The actual transfer function of the nlN class of nodes is similar to the sigmoid transfer function commonly employed in neural networks [6]. It is defined as,

g(x) = (1 - e^(-βx)) / (1 + e^(-βx))                   (5)

where the parameter β is a linear function of the N parameter of the nlN node. This parameter is under direct genetic control and, consequently, can be evolved along with the tree structure, providing an adaptive amount of non-linear processing.

Example - a Recurrent Neural Network

It has been shown that a 3-node recurrent neural network can be used as an efficient signal processor for equalisation of noisy non-linear communications channels, [5]. The system architecture of such a neural network is shown in figure 1. Using the node definitions given in table 1, one way of expressing this as a GP genotype is to use ADFs as follows,

y  = (+ f1 (* c0 (+ f2 f3)))
f1 = (psh nl1 avg x0 stk0 stk1 stk2)
f2 = (psh nl2 avg x0 stk0 stk1 stk2)
f3 = (psh nl3 avg x0 stk0 stk1 stk2)

Here, each cell in the network is represented by its own ADF expression tree (f1, f2 and f3), which are invoked by the main tree, y. The main tree executes each ADF tree, but only returns the value of the output node, represented by ADF f1 (if c0 = 0). The psh nodes in each ADF store the computed cell outputs on the stack, and these are accessed from the previous cycle using the stkN nodes. (NB: In section 4 we show how to represent the connection weights in a neural network.)
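The nlN transfer function of (5) and a single network cell update can be sketched as follows. This is our own illustration: the mapping from the node label N to β is assumed linear as the text states, and the cell uses unit connection strengths for simplicity (the weighted case is covered in section 4).

```python
import math

def nl(x, beta):
    """Eq. (5): bipolar sigmoid. beta controls the amount of
    non-linearity (the node label N maps linearly to beta)."""
    return (1.0 - math.exp(-beta * x)) / (1.0 + math.exp(-beta * x))

def cell_step(x, prev_outputs, beta):
    """One update of a network cell: the sigmoid of the average of the
    external input and all cells' previous outputs (cf. figure 1, with
    unit connection strengths assumed for this sketch)."""
    vals = [x] + list(prev_outputs)
    return nl(sum(vals) / len(vals), beta)
```

Note that g(x) is odd, saturates at ±1, and is algebraically identical to tanh(βx/2).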

Figure 1 - A Fully Recurrent Neural Network. Each processing cell labelled P1-3 implements a sigmoidal transfer function on the average of the cell's input values. The connecting links have independent strengths labelled wij. The system output is taken from cell P3 and the input is applied to each cell simultaneously.

Thus, the basic set of GP node definitions shown in table 1 is able to code for recurrent neural network architectures. This leads to the possibility that neural networks and other such systems can be evolved by GP.

Fitness Functions

The classical way of optimising signal processing algorithms has been to use a minimum mean squared error (mse) criterion, i.e.,

(1/N) Σ_{n=1}^{N} (y(n) - ŷ(n))²                       (6)

where ŷ(n) is the actual output of the system at time n and y(n) is the desired output. This performance measure can be evaluated for each phenotype expression in a population and used to define the fitness of each individual (e.g. 1/(1+mse)). The main reason the least squares criterion is so popular is that the theory of least squares optimisation is well developed and many classical optimisation algorithms are available for carrying out the task. However, minimum mean squared error may not be the best criterion in certain applications (e.g. in non-linear systems, or where non-Gaussian noise is present).

Because GP evolution proceeds by assigning a fitness value to each tree structure, it is straightforward to incorporate different optimisation criteria according to the problem at hand. For instance, it is trivial to use the minimum absolute error,

(1/N) Σ_{n=1}^{N} |y(n) - ŷ(n)|                        (7)

or other powers of the error between the desired and actual system outputs.

4. Learning by Annealing

The Numerical Values Problem in GP

We have found that one of the main problems in applying standard GP to generating signal processing algorithms is that it is difficult to evolve accurate values for the numerical parameters of an algorithm. The standard way of doing this has been to use a set of terminal nodes cN which return constant numbers. If the algorithm requires numerical values that are not present in this terminal set (which will almost certainly be the case), then they can be generated by operations within the expression tree. For example, the tree,

(+ (/2 c1) (/2 (/2 c1)))

where the node c1 returns the value 1, is one way of representing the number 0.75. Although this method can in theory yield any rational number, the resulting trees can become very large, especially when high precision is required. This places a significant burden on the evolutionary algorithm.

Node Gains

As an alternative, we have developed special tree nodes that allow scaling of data as it flows through the tree. This significantly reduces the number of numerical parameters to be evolved by the GP. Each link between a pair of nodes in a tree has a gain value that modifies the data as it passes from node to node through the tree. These gain values are optimised using a simulated annealing algorithm.
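The gain-optimisation loop detailed in figure 2 is a conventional Metropolis-style annealing scheme. As a preview, here is a minimal Python sketch of such a loop; the fitness function, perturbation scale and cooling schedule are our own illustrative choices, not the authors' implementation.

```python
import math
import random

def anneal_gains(gains, fitness, steps=200, t0=1.0, cool=0.95, scale=0.1):
    """Figure-2-style annealing: perturb the gain vector, keep the
    perturbation if fitness improves, otherwise accept it with
    probability exp((f' - f) / T); then cool the temperature."""
    f, t = fitness(gains), t0
    for _ in range(steps):
        trial = [g + random.gauss(0.0, scale) for g in gains]   # 1) perturb
        ft = fitness(trial)                                     # 2) evaluate
        if ft > f or random.random() < math.exp((ft - f) / t):  # 3)/4) accept
            gains, f = trial, ft
        t = max(t * cool, 1e-9)                                 # 5) cool
    return gains, f
```

With a smooth fitness surface, the loop first explores (high T accepts some worse moves) and then behaves as a hill-climber as T shrinks.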

Consider the link between the output of a node labelled P and the input to a following node labelled Q.

        x            y
   P ----→ [α_pq] ----→ Q

The link has a strength α_pq (a real number), and the relationship between the value at the output of node P, (x), and the input to node Q, (y), is

y = α_pq x                                             (8)

The Simulated Annealing Algorithm

Fitness maximisation with respect to the node gains is accomplished using the following annealing algorithm, which is applied after a new tree has been produced by crossover or mutation during evolution. Let {α_pq(i)} be the values of all the node gains in a tree at iteration i during the annealing process, and let f(i) be the fitness of the tree using this set of node gains. The annealing proceeds as follows,

while( not terminated ) do
  1) Perturb {α_pq(i)} to get {α'_pq(i)}.
  2) Evaluate the fitness f'(i) using these perturbed parameters.
  3) If f'(i) > f(i), accept the perturbation, α_pq(i+1) = α'_pq(i), and continue.
  4) Else accept the perturbation with probability e^((f'(i) - f(i))/T) and continue.
  5) Reduce the temperature T according to the annealing schedule.
}

Figure 2 - The Annealing Algorithm for Adapting the Node Gains

5. An Example: The Channel Equalisation Problem

The problem

To demonstrate the concepts presented in this paper, we show how GP can evolve a high performance algorithm for a commonly encountered signal processing task. Although we do not claim to present an in-depth evaluation of the GP approach to evolving DSP algorithms, we aim to illustrate the power of the method and its potential for further development.

The task we examine is the so-called channel equalisation problem, a brief description of which now follows. A sequence of symbols (a signal)*, s, is transmitted through a noisy non-linear communications channel to provide a set of observations, o,

o = C(s) + n                                           (9)

where n is a vector of noise samples, and C() is the function that describes the mapping from the channel input to its output. The problem consists of finding a system that approximately restores s from the observations o. Thus,

ŝ = F(o)                                               (10)

where ŝ is the estimate of the signal, and F() is the system function that is to be found (commonly called the equalising filter). The classical approach to this problem involves sending an initial known signal and finding the filter that minimises the mean square output error,

‖ŝ - s‖²                                               (11)

This is known as trained adaptation (see figure 3).

Figure 3 - The Channel Equalisation Problem. The unobservable signal, s, is distorted by the channel and corrupted by the additive noise, n. The objective of the adaptive filter is to restore s from the noisy observations, o. The dashed line indicates a signal path used during trained adaptation of the restoring filter.

* Bold lower case letters indicate vectors whose elements are successive time samples of a data sequence.

The Recursive Least Squares Algorithm

The recursive least squares (RLS) algorithm is currently a popular choice for solving the channel equalisation problem and others like it, [5]. We have employed this algorithm for comparison with the algorithms evolved using GP. The RLS algorithm attempts to minimise the mean squared error between the estimated signal at the equaliser output and the training signal by adjusting the coefficients of a linear


digital filter. The equations controlling the adaptation of the filter are shown in figure 4.
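The recursion in figure 4 is the standard exponentially-weighted RLS update. A compact real-valued sketch follows; this is our own illustration (conjugates are omitted since the data here are real), not the paper's implementation.

```python
def rls_step(c, P, y_vec, x_target, w=0.99):
    """One update of the exponentially-weighted RLS recursion (figure 4)
    for real-valued data. c: coefficient list; P: inverse correlation
    matrix (list of lists); y_vec: regressor [y_t, ..., y_{t-N+1}];
    x_target: training sample; w: forgetting factor (0 < w < 1)."""
    N = len(c)
    x_hat = sum(ck * yk for ck, yk in zip(c, y_vec))        # filter output
    e = x_target - x_hat                                    # a-priori error
    Py = [sum(P[i][j] * y_vec[j] for j in range(N)) for i in range(N)]
    denom = w + sum(y_vec[i] * Py[i] for i in range(N))
    k = [v / denom for v in Py]                             # Kalman gain
    yP = [sum(y_vec[i] * P[i][j] for i in range(N)) for j in range(N)]
    P = [[(P[i][j] - k[i] * yP[j]) / w for j in range(N)] for i in range(N)]
    c = [ci + ki * e for ci, ki in zip(c, k)]               # coefficient update
    return c, P, e
```

Initialising P to a large multiple of the identity (a weak prior on the coefficients) and iterating over noiseless data drives the coefficients to the exact least-squares solution.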

x̂_t = Σ_{k=0}^{N-1} c_k ŷ_{t-k} = c^T ŷ_{e,t}

where c = [c_0 c_1 ... c_{N-1}]^T and ŷ_{e,t} = [ŷ_t ŷ_{t-1} ... ŷ_{t-N+1}]^T.

During the adaptation period, at every time instant t, the filter's error e_t = x_t - x̂_t is calculated along with the Kalman gain vector k_t and the inverse of the correlation matrix, P_t, via the recursive equations:

k_t = P_{t-1} ŷ*_{e,t} / (ω + ŷ^T_{e,t} P_{t-1} ŷ*_{e,t})

P_t = (1/ω) (P_{t-1} - k_t ŷ^T_{e,t} P_{t-1})

where 0 < ω < 1 is the forgetting factor, "*" denotes here the complex conjugate and "T" denotes the transpose of a vector. Finally, the filter's coefficients are updated via:

c_t = c_{t-1} + k_t e_t

Figure 4 - The Recursive Least Squares Adaptive Filtering Algorithm

Experimental Procedure

We used a channel model consisting of a linear filter followed by a non-linear function. The equations relating the channel input sequence, {s_n} (a pseudo-random binary sequence), and the observations, {o_n}, were,

c_n = 0.7 c_{n-1} + s_n
o_n = c_n + 0.5 c_n² + n_n                             (12)

where {n_n} is the additive Gaussian noise, and n is the discrete time sample index. The variance of the noise was chosen such that the resulting observation-to-noise variance ratio was 20 dB. 500 samples of the observations were generated and used to train the RLS adaptive filter. The same data set was used to provide a fitness measure for the GP evolution. The parameters of the GP population were as shown in figure 5. After adaptation, a further 1000 data samples with a different noise realisation were generated and processed by both the filter obtained by the RLS algorithm and the highest-fitness expression tree from the GP evolution. We then counted the number of misclassified symbols at each system's output to provide a measure of the average bit-error-rate (BER).

Population size = 250
Tournament size = 10
Maximum gene size = 100
Mutation probability = 0.001
Max. no. of ADFs = 2
Max. no. of arguments per ADF = 1

Figure 5 - Parameters for GP Evolution

Results

A solution obtained after a GP run was the following tree:

[Main]
([0.738831]nl136 ([-0.0683594]* ([-4.85413]psh [2.22839]c86) ([1.91559]-1 ([-1.38428]* ([-4.07288]z ([1.94122]+1 [-1.70074]stk0)) ([-1.42334]/ ([2.61475]psh ([1.35864]f1 ([4.99847]f1 ([1.91559]-1 ([-1.38428]* ([4.07288]z ([-1.94122]+1 [-1.70074]stk0)) [1.59943]x0))))) ([-1.01593]f1 ([-0.463257]f1 ([2.54028]f1 ([-2.99438]/ [0.379944]x0 [2.41058]x0)))))))))

[f1]
([1.64276]- ([-1.69617]- [2.11823]arg0 ([-4.03778]/ [2.80853]arg0 [-2.38647]arg0)) [1.92596]arg0)

This solution uses only one ADF, f1, with one argument, and one constant, c86, whose value is -9.9097. The numerical values shown in square brackets are the node gains.

The bit-error-rate was found to be zero. Note that, owing to the thresholding process involved in the calculation of the BER, this does not imply a normalised fitness value close to one.

A comparison between the original signal and the outputs of both the GP filter and the RLS filter is made in figure 6. Fifty values are shown, after the first 200 were discarded. It can be seen that the shape (though not the scale) of the original signal and that of the output of the GP filter are exactly the same.
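The channel model of (12) and the thresholded BER measure can be reproduced with a short simulation. This is our own sketch; the noise level shown is illustrative rather than the paper's exact 20 dB setting.

```python
import random

def simulate_channel(n_samples, noise_std=0.1, seed=42):
    """Generate the test problem of eq. (12): a ±1 pseudo-random binary
    sequence through the recursive linear filter and square-law
    non-linearity, plus additive Gaussian noise."""
    rng = random.Random(seed)
    s = [rng.choice([-1.0, 1.0]) for _ in range(n_samples)]
    o, c_prev = [], 0.0
    for sn in s:
        c = 0.7 * c_prev + sn                 # linear (recursive) part
        o.append(c + 0.5 * c * c + rng.gauss(0.0, noise_std))
        c_prev = c
    return s, o

def bit_error_rate(s, s_hat):
    """BER after thresholding the equaliser output at zero."""
    errors = sum(1 for a, b in zip(s, s_hat) if (a > 0) != (b > 0))
    return errors / len(s)
```

Any candidate equaliser (RLS-trained or GP-evolved) can then be scored by feeding it the observations o and passing its output to bit_error_rate against the known training sequence s.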

6. Conclusions

We have shown that it is possible to use GP and simulated annealing techniques to evolve digital signal processing algorithms. This was accomplished by defining a set of tree nodes consisting of the basic operations commonly used in DSP algorithms. These nodes allowed non-linear and time-recursive processing algorithms to be evolved by GP. We also introduced nodes whose output gains were optimised by a simulated annealing algorithm.

Our preliminary results are very encouraging, and we have observed many instances where the algorithms produced by GP significantly outperform existing systems. We stress, however, that there is still much to investigate in this domain. For instance, we are currently interested in evaluating the statistical properties of the GP technique so that we may determine confidence measures for its evolved products. In addition, we point out that the GP method is very computationally expensive and currently only suited to simulation, off-line batch processing, or applications where the data sampling frequency is very low.

Figure 6: Top: Training signal (50 samples). Middle: Output of the GP-evolved equalising filter. Bottom: Output of a ten-tap adaptive filter optimised by the RLS algorithm.

References

[1] J. Koza, "Genetic Programming: On the Programming of Computers by Means of Natural Selection". The MIT Press, 1992.

[2] J. Koza, "Genetic Programming II: Automatic Discovery of Reusable Programs". The MIT Press, 1994.

[3] K.C. Sharman & A.I. Esparcia Alcázar, "Genetic Evolution of Symbolic Signal Models". 2nd IEE/IEEE Workshop on Natural Algorithms in Signal Processing, 1993.

[4] J.G. Proakis & D.G. Manolakis, "Digital Signal Processing: Principles, Algorithms and Applications". Macmillan, 1992.

[5] G. Kechriotis, E. Zervas & E.S. Manolakos, "Using Recurrent Neural Networks for Adaptive Communication Channel Equalization". IEEE Transactions on Neural Networks, vol. 5, pp. 267-278, 1994.

[6] P.J. Angeline, G.M. Saunders & J.B. Pollack, "An Evolutionary Algorithm that Constructs Recurrent Neural Networks". IEEE Transactions on Neural Networks, vol. 5, pp. 54-65, 1994.

[7] K. Kinnear (ed.), "Advances in Genetic Programming". MIT Press, 1994.
