A New Approach in Synchronization of Uncertain Chaos Systems Through Particle Swarm Optimization

Fei Gao

Ju-Jang Lee

Department of Electrical Engineering and Computer Science, Korea Advanced Institute of Science and Technology, Daejeon, 305–701, Republic of Korea. Email: [email protected]

Abstract—The topic of uncertain chaos synchronization has drawn considerable attention in recent decades. To investigate it in a novel way, an application of particle swarm optimization (PSO), which simulates swarm intelligence, is proposed. With two novel techniques, a boundary restriction strategy for PSO and the transformation of the chaos synchronization problem into a series of multimodal nonlinear optimization problems, the synchronization of Hénon systems is discussed for the cases of identical, different and uncertain system parameters, each from different initial states. Numerical experiments with PSO demonstrate the effectiveness and efficiency of the proposed ideas.

I. INTRODUCTION

In recent years, the fields of nature, science and technology have increasingly exhibited intrinsic, not fabricated, chaotic phenomena and fractal characteristics [1], [2]. Growing interest from physics, chemistry, biology and various other fields has stimulated studies of chaos control, synchronization and optimization [3]–[6]. Many techniques for chaos control [5], [6] have been proposed since the pioneering work of Hubler [7], Ott [4] and others [2] in the 1990s. Among them, the OGY method [4] is the best known; it exploits the exponential sensitivity of chaotic systems to tiny perturbations to direct the system toward a desired target in a short time.

In the last decade, the subject of uncertain chaos synchronization has received considerable attention. Since the study of chaos synchronization by Pecora and Carroll [8], many other works on controlling chaotic systems into synchronization have been published [9], [10], covering cases where the system parameters are either known or uncertain. The latter case is particularly important because complete information about the parameters is hard to obtain in applications. With the techniques in [11], [12], parameter identification and synchronization of chaotic systems can be achieved simultaneously.

Nature has created many miracles, and evolutionary algorithms (EA) are modern intelligent algorithms that take ideas from natural evolution as key elements in their design and implementation [13], [14]. Although simplistic from a biologist's viewpoint, these algorithms are sufficiently complex to provide robust and powerful adaptive search mechanisms [14]. Many representative EA have been proposed, such as simulated annealing [15], evolution strategies [16], particle swarm optimization (PSO) [17]–[19] and the differential evolution algorithm [19]–[21].

PSO is a relatively new EA-type method, related to artificial neural nets, fuzzy logic and evolutionary computing, developed by Eberhart and Kennedy in 1995 [17] and inspired by the social behavior of bird flocking and fish schooling. In the past several years, PSO has been used successfully across a wide range of application fields, for two main reasons. First, it has been demonstrated that PSO obtains better results faster and more cheaply than other methods. Second, PSO has few parameters to adjust: one version, with slight variations, works well in a wide variety of applications [17], [18]. At the IEEE World Congress on Computational Intelligence 2006 [22], fuzzy systems, neural networks and EA were regarded as the three main research areas of computational intelligence, and PSO was considered one of the most popular EA topics.

In this paper, PSO with a novel boundary restriction is applied to the synchronization of chaotic systems, which is formulated as a series of multimodal numerical optimization problems. Simulations based on the Hénon system, with identical, different and uncertain system parameters and different initial states, demonstrate the effectiveness of the proposed ideas.

The paper is organized as follows. In Section 2, the synchronization of chaotic systems is formulated as a numerical optimization problem. Section 3 provides a brief review of PSO and proposes a novel boundary restriction (BR) strategy. Simulation results and analysis are provided in Section 4. Finally, Section 5 summarizes the concluding remarks.

978-1-4244-2171-8/08/$25.00 © 2008 IEEE


II. CHAOS SYNCHRONIZATION

Consider the following discrete chaotic system:

x(k + 1) = f(x(k)), k = 1, 2, . . . , N    (1)

where the state x(k) ∈ R^n and f : R^n → R^n is continuously differentiable. Let

y(k + 1) = g(y(k))    (2)

and (1) be two given chaotic systems with different initial states x(0) = x0 ≠ y(0) = y0.

The problem tackled in this paper is the synchronization of systems (1) and (2) in the drive–response configuration. That is, if system (1) is regarded as the drive system, a suitable response system with a control force should be constructed to synchronize with the drive system.

A. Systems with Identical Parameters

First we consider the case g ≡ f in (2), i.e., system (2) is identical to system (1) and differs only in its initial state. To synchronize systems (1) and (2), the following response system is considered:

x(k + 1) = f(x(k))
y(k + 1) = f(y(k)) + K(k)(y(k) − x(k))    (3)

Chaotic synchronization by feedback is to select the feedback matrix K(k) such that ‖x(N) − y(N)‖ → 0. To achieve this, following the idea of Ref. [23], assume that the feedback acts only on the first component, i.e., all components of K(k) except K11(k) are zero. For convenience, denote K(k) := K11(k). Then

min ‖x(N) − y(N)‖²
s.t.  x(k + 1) = f(x(k))
      y1(k + 1) = f1(y(k)) + K(k)(y1(k) − x1(k))
      yi(k + 1) = fi(y(k)), i = 2, . . . , n
      |K(k)| ≤ κ
      x(0) ≠ y(0)    (4)

This is a multi-dimensional constrained numerical optimization problem, on which Newton-type methods perform poorly because it is multimodal. So we choose the objective function

f(K) = ‖x(N) − y(N)‖²    (5)

and select PSO to solve for (K(0), K(1), . . . , K(N − 1)). When N is large, the sensitivity of the chaotic system makes the objective (5) hard to minimize, so we synchronize online with a short horizon, for instance N = 3 in every cycle. That is, for those k′ in each cycle such that

‖x(N) − y(N)‖² > δ = 0.03,    (6)

let the new x(0) = x(k′), y(0) = y(k′).
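As an illustrative reconstruction of the objective (5), the cost of a given feedback sequence can be sketched in Python as below. The classic Hénon map stands in here for f (the simulations later use the rotation form (17)); the function names `henon_like` and `sync_cost` are ours, not from the paper.

```python
import numpy as np

def henon_like(x, a=1.4, b=0.3):
    # Classic Hénon map as a stand-in for f in (1); any smooth map works.
    return np.array([1.0 - a * x[0] ** 2 + x[1], b * x[0]])

def sync_cost(K, x0, y0, N=3, f=henon_like):
    """Objective (5): ||x(N) - y(N)||^2 for a feedback sequence
    K = (K(0), ..., K(N-1)) acting on the first component only, as in (4)."""
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    for k in range(N):
        fx, fy = f(x), f(y)
        fy[0] += K[k] * (y[0] - x[0])   # feedback on the first component
        x, y = fx, fy
    return float(np.sum((x - y) ** 2))
```

With identical initial states the cost is exactly zero under zero feedback; with distinct initial states it is positive, which is what the optimizer drives down.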

B. Systems with Different and Uncertain Parameters

Second, we take the case g ≠ f in (2) into account. Because this synchronization is difficult, only chaotic systems with the same structure but different system parameters are considered. To synchronize systems (1) and (2), now in 2 dimensions, the following response system is considered:

x(k + 1) = f(x(k))
y1(k + 1) = f1(y(k)) + K1(k)(y1(k) − x1(k)) + K2(k)(y2(k) − x2(k))
y2(k + 1) = f2(y(k))
|K(k)| ≤ κ
x(0) ≠ y(0)    (7)

Third, the case where g in (2) is uncertain is taken into account: g has the same structure as f, but its system parameter is uncertain, and the corresponding response system is similar to (7).

III. PARTICLE SWARM OPTIMIZATION

PSO belongs to the category of Swarm Intelligence methods, closely related to the methods of Evolutionary Computation, which consist of algorithms motivated by biological genetics and natural selection. A common characteristic of all these algorithms is the exploitation of a population of search points that probe the search space simultaneously [17]–[19]. PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GA): the system is initialized with a population of random solutions and searches for optima by updating generations [17]. Unlike GA, however, PSO has no evolution operators such as crossover and mutation. The dynamics of the population in PSO resembles the collective behavior and self-organization of socially intelligent organisms [18], [19].

At step k, each particle Xi(k) = (xi,1(k), . . . , xi,D(k)) keeps track of the coordinates in the problem space associated with the best solution (fitness) it has achieved so far; the fitness value is also stored. This position is called pbest, Pi(k) = (pi,1(k), . . . , pi,D(k)). Another "best" value tracked by the particle swarm optimizer is the best position obtained so far by any particle in the neighborhood of the particle, called lbest, Li(k) = (li,1(k), . . . , li,D(k)). When a particle takes the whole population as its topological neighbors, the best value is a global best, called gbest, Qg(k) = (qg,1(k), . . . , qg,D(k)).

At each time step, the particle swarm optimization concept consists of changing the velocity

Vi(k) = (vi,1(k), . . . , vi,D(k))    (8)

of each particle toward its pbest and gbest locations (PSO without a neighborhood model). Acceleration is weighted by a random term, with separate random numbers generated for acceleration toward the pbest and gbest locations [17]. That is,

Ai,d(k) = rand(0, c1) · [pi,d(k) − xi,d(k)]
Bi,d(k) = rand(0, c2) · [qg,d(k) − xi,d(k)]
vi,d(k + 1) = w · vi,d(k) + Ai,d(k) + Bi,d(k)
xi,d(k + 1) = χ[xi,d(k) + vi,d(k + 1)]    (9)
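As a concrete sketch, the gbest update of Eq. (9) can be written in Python as below. This is not the authors' code: it is a minimal gbest PSO under the stated parameter names, and for a well-behaved position update the constriction factor χ is applied to the velocity, as in Eq. (13), rather than to the position. Simple clamping handles the bounds here (the BR strategy of Eq. (16) is an alternative).

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(f, dim, lb, ub, M=40, T=300, c1=2.0, c2=2.0, w=0.9, chi=0.9):
    """Minimal gbest PSO; A and B are the cognitive and social terms of Eq. (9)."""
    vmax = 0.5 * (ub - lb)                      # velocity limit Vmax
    X = rng.uniform(lb, ub, (M, dim))           # positions X_i
    V = rng.uniform(-vmax, vmax, (M, dim))      # velocities V_i
    P = X.copy()                                # pbest positions P_i
    pf = np.apply_along_axis(f, 1, X)           # pbest fitness values
    g = int(pf.argmin())                        # gbest index
    for _ in range(T):
        A = rng.uniform(0, c1, (M, dim)) * (P - X)     # toward pbest
        B = rng.uniform(0, c2, (M, dim)) * (P[g] - X)  # toward gbest
        V = np.clip(chi * (w * V + A + B), -vmax, vmax)
        X = np.clip(X + V, lb, ub)              # plain clamping at the bounds
        fx = np.apply_along_axis(f, 1, X)
        better = fx < pf
        P[better], pf[better] = X[better], fx[better]
        g = int(pf.argmin())
    return P[g], float(pf[g])
```

For example, minimizing a shifted quadratic over [−5, 5]² drives the best fitness close to zero within a few hundred generations.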


where w is the velocity weight, defined as

w := w − k · (ws − we)/T    (10)

Here ws and we are the initial and final weights respectively, T is the evolution generation, c1 is called the Cognition Acceleration Constant, c2 is called the Social Acceleration Constant, and χ is the constriction factor, normally χ = 0.9. The cognitive parameter c1 determines the effect on a particle's velocity of the distance between its current position and its best previous position Pi. The social parameter c2 plays a similar role, but concerns the best previous position Pgi attained by any particle in the neighborhood. rand(a, b) denotes a random number in [a, b]; in this way randomness is introduced into PSO. Vi(k) is limited by a maximum velocity Vmax.

Though PSO without a neighborhood model converges fast, it sometimes relapses easily into a local optimum. So an improved version of PSO with a circular neighborhood model is also considered, which ameliorates convergence by maintaining more attractors. Let Ni = {Xi−r, . . . , Xi−1, Xi, Xi+1, . . . , Xi+r} be a neighborhood of radius r of the i-th particle Xi (local variant). Then lbest Li(k) is defined via the index gi of the best particle in the neighborhood of Xi, i.e.,

f(Pgi) ≤ f(Pj), j = i − r, . . . , i + r.    (11)

The neighborhood topology is usually cyclic, i.e., the first particle X1 is assumed to follow after the last particle XN. The updating mechanism is then given by

Ci,d(k) = rand(0, c3) · [li,d(k) − xi,d(k)]
vi,d(k + 1) = w · vi,d(k) + Ai,d(k) + Bi,d(k) + Ci,d(k)    (12)

where c3 is called the Neighborhood Acceleration Constant and the other parameters are the same as those in (9). Sometimes Vi(k) is modified [18] as

vi,d(k + 1) = χ[w · vi,d(k) + Ai,d(k) + Bi,d(k) + Ci,d(k)]    (13)

where χ is the constriction factor, normally χ = 0.9 or determined by

χ = 2κ / |2 − φ − √(φ² − 4φ)|    (14)

for φ > 4, where φ = c1 + c2 and κ = 1. A complete theoretical analysis of the derivation of Eq. (14) can be found in [17], [18].

Example. Banana function [13]:

f1(x) = Σ_{i=1}^{N−1} [100(xi+1 − xi²)² + (xi − 1)²], x ∈ [−500, 500]^50    (15)

The Banana function (15) is difficult to optimize; we choose PSO without the neighborhood model to seek its minimum. The termination criterion is an evolution generation of T = 5000 or an objective value below 10^−10; the population size is M = 40, the cognitive and social accelerations are c1 = c2 = 2, the constriction factor is χ = 0.9, and the initial and final velocity weights in (10) are ws = 0.95 and we = 0.2. The experiments were run 50 times independently. The results suggest that boundary crossing takes place in the last few dimensions. Fig. 1 shows normal PSO's results for the last 3 dimensions (n = 48, 49, 50) in minimizing the Banana function in 50 dimensions.

Fig. 1. Normal PSO

Fig. 2. Normal PSO with BR

When xi,d(k + 1) > ubd, the individual has a tendency to cross the upper bound ubd in dimension d. But if this tendency appears in more than one dimension and throughout the evolution process, it may render PSO ineffective. To maintain the tendency to some degree, a novel strategy, Boundary Restriction (BR), is proposed as Eq. (16) below:
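The lbest selection of Eq. (11) over a cyclic neighborhood can be sketched as follows; this is an illustrative Python fragment and the function name `lbest_indices` is ours.

```python
def lbest_indices(pbest_fitness, r=1):
    """For each particle i, return the index g_i of the best pbest in the
    cyclic neighborhood {i-r, ..., i+r} of Eq. (11); L_i(k) = P_{g_i}(k)
    then enters the neighborhood term C_{i,d}(k) of Eq. (12)."""
    n = len(pbest_fitness)
    return [min(((i + j) % n for j in range(-r, r + 1)),
                key=lambda j: pbest_fitness[j])
            for i in range(n)]
```

For a minimization fitness vector [3.0, 1.0, 2.0, 0.5] with r = 1, particle 0's neighbors are {3, 0, 1}, so its lbest index is 3; the full result is [3, 1, 3, 3].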


xi,d(k + 1) =
  xi,d(k + 1),                           if lbd ≤ xi,d(k + 1) ≤ ubd;
  ubd − br · rand(0, 1) · (ubd − lbd),   if xi,d(k + 1) > ubd;
  lbd + br · rand(0, 1) · (ubd − lbd),   if xi,d(k + 1) < lbd.    (16)

where ub = (ub1, ub2, . . . , ubD) is the upper boundary, lb = (lb1, lb2, . . . , lbD) is the lower boundary, and br ∈ [0.1, 0.5]. In this way the population cannot cross the boundaries. Fig. 2 shows the results of normal PSO with BR (where br = 0.1 and the other parameters are the same as above) for the last 3 dimensions in minimizing the Banana function in 50 dimensions. From Fig. 1 and Fig. 2 we can conclude that the BR strategy eliminates the phenomenon of boundary crossing.
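Under our reading of Eq. (16), the BR strategy can be implemented as below; this is a sketch, and the vectorized form and the name `boundary_restrict` are our own.

```python
import numpy as np

rng = np.random.default_rng(1)

def boundary_restrict(x, lb, ub, br=0.3):
    """Boundary Restriction, Eq. (16): instead of clamping, an out-of-bound
    coordinate is re-injected at most br*(ub - lb) inside the violated bound."""
    x = np.array(x, dtype=float)
    width = ub - lb
    u = rng.uniform(0.0, 1.0, x.shape)      # rand(0, 1), drawn per coordinate
    x = np.where(x > ub, ub - br * u * width, x)
    x = np.where(x < lb, lb + br * u * width, x)
    return x
```

Unlike plain clamping, a violating coordinate lands a random distance inside the bound, which preserves some of the crossing tendency while keeping the population feasible (for br ≤ 0.5 the re-injected value cannot overshoot the opposite bound).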

Fig. 3. (x1 − y1) online

IV. SIMULATIONS

Choose one form of the famous discrete chaotic Hénon system [24], [25]:

Φ(X) = ϕ(α) · (x1, x2 − x1²)ᵀ,   ϕ(α) = [cos α  −sin α; sin α  cos α]    (17)

where α = arccos(0.24). Construct the first response system, related to system (3), as below:

Φ(Y) = ϕ(α) · (y1, y2 − y1²)ᵀ + K1(k)(y1(k) − x1(k))
let K(k) = K1(k), |K(k)| ≤ 1, y(0) = (0.486, 0.014)    (18)

Choose the initial x(0) = (0.2, 0.3) in (17); we select PSO with the BR strategy (where br = 0.3) to realize chaos synchronization through systems (17) and (18). In all the simulations, PSO is run 100 times independently. In the simulation we set N = 5, δ = 0.03. For PSO we choose an evolution generation of T = 1500 in each cycle, a population of size M = 40 in [−2, 2], cognitive and social accelerations c1 = c2 = 2, constriction factor χ = 0.9, initial and final velocity weights ws = 0.95 and we = 0.4, with the velocity weight w varying as in Eq. (10). With the parameters set above, PSO succeeds with probability 98%. We select one of the successful cases: Fig. 3 shows the synchronization process of the Hénon system (17), with the corresponding K (Fig. 4) found by PSO.

Construct the second response system, related to system (7), as below:

Φ(Y) = ϕ(β) · (y1, y2 − y1²)ᵀ + (K1  K2) · (y1 − x1, y2 − x2)ᵀ    (19)

where β = arccos(0.22), with initials x(0) = (0.8, 0.5), y(0) = (−0.8, 0.388), N = 3, δ = 0.03, and the PSO parameters the same as above except T = 500 in each cycle. With these parameters PSO succeeds with probability 98%, and we
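For concreteness, the drive map (17) and the response system (18) can be iterated as below; this is an illustrative Python sketch, and the names `drive` and `response` are ours.

```python
import numpy as np

ALPHA = np.arccos(0.24)
PHI = np.array([[np.cos(ALPHA), -np.sin(ALPHA)],
                [np.sin(ALPHA),  np.cos(ALPHA)]])   # rotation matrix phi(alpha)

def drive(x):
    # Hénon map in rotation form, Eq. (17): Phi(X) = phi(alpha) (x1, x2 - x1^2)^T
    return PHI @ np.array([x[0], x[1] - x[0] ** 2])

def response(y, x, K):
    # Response system (18): feedback K(k)(y1(k) - x1(k)) on the first component
    out = PHI @ np.array([y[0], y[1] - y[0] ** 2])
    out[0] += K * (y[0] - x[0])
    return out
```

When y(k) = x(k), the feedback term vanishes and the response reduces to the drive map, which is the synchronized regime the optimizer aims for.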

Fig. 4. Correspondent K

select one of the successful cases as below: Fig. 5 shows the synchronization process of the Hénon system with the corresponding K1 (Fig. 6) and K2 (Fig. 7) found by PSO.

The last response system is similar to (19), but with β = arccos[0.22 + 0.4 · rand(0, 1)] and initials x(0) = (0.8, 0.5), y(0) = (−0.8, 0.388). The PSO parameters are the same as in the former experiment. With these parameters, PSO succeeds with probability 96%. We select one of the successful cases as below: Fig. 8 shows the synchronization process with the corresponding K1 (Fig. 9) and K2 (Fig. 10) found by PSO. As the figures above show, PSO is an efficient method for uncertain chaos feedback synchronization.

V. CONCLUSION

From the viewpoint of optimization, the synchronization of uncertain chaotic systems was formulated as a multi-dimensional numerical optimization problem, and PSO with the BR strategy was applied to this topic. To the best of our knowledge, this is the first report of applying PSO


Fig. 5. (x1 − y1) online

Fig. 6. K1

Fig. 7. K2

Fig. 8. (x1 − y1) online

Fig. 9. K1

Fig. 10. K2

to deal with the problem of chaos synchronization. Simulation results based on the Hénon system demonstrated the effectiveness and efficiency of PSO; they also illustrate the simplicity and easy implementation of this kind of EA in applications.

ACKNOWLEDGMENTS

We thank the anonymous reviewers for their constructive remarks and comments. This work is supported by the Brain Korea (BK) 21 project of the Korean government and by NNSFC Grant No. 10647141 from China.

REFERENCES

[1] G. Chen and X. Dong, From Chaos to Order: Methodologies, Perspectives, and Applications. Singapore: World Scientific, 1998.
[2] R. J. P. de Figueiredo and G. Chen, Nonlinear Feedback Control Systems: An Operator Theory Approach. New York: Academic Press, 1993.
[3] R. Caponetto, L. Fortuna, S. Fazzino, et al., "Chaotic sequences to improve the performance of evolutionary algorithms," IEEE Trans. Evolutionary Computation 7(3) (2003) 289–304.
[4] E. Ott, C. Grebogi, and J. A. Yorke, "Controlling chaos," Phys. Rev. Lett. 64 (1990) 1196–1199.
[5] T. Kapitaniak, "Continuous control and synchronization in chaotic systems," Chaos, Solitons & Fractals 7 (1995) 237–244.
[6] L. Pecora and T. Carroll, "Synchronization in chaotic systems," Phys. Rev. Lett. 64 (1990) 821–824.
[7] A. W. Hubler, "Adaptive control of chaotic system," Helv. Phys. Acta 62 (1989) 343–346.
[8] L. M. Pecora and T. L. Carroll, "Driving systems with chaotic signals," Phys. Rev. A 44 (1991) 2374–2383.
[9] T. Ushio, "Synthesis of synchronized chaotic systems based on observers," Int. J. Bifurc. Chaos 9 (1999) 541–546.
[10] Y. J. Cao and S. J. Cheng, "Controlling chaotic system via phase space reconstruction technique," Int. J. Bifurc. Chaos 13 (2003) 467–471.
[11] S. H. Chen and J. H. Lü, "Synchronization of an uncertain unified chaotic system via adaptive control," Chaos, Solitons & Fractals 14 (2002) 643–647.
[12] S. H. Chen and J. H. Lü, "Parameters identification and synchronization of chaotic systems based upon adaptive control," Phys. Lett. A 299 (2002) 353–358.
[13] Z. Michalewicz and D. B. Fogel, How to Solve It: Modern Heuristics. Berlin: Springer-Verlag, 2000.
[14] D. Whitley, "An overview of evolutionary algorithms: practical issues and common pitfalls," Information and Software Technology 43(14) (2001) 817–831.
[15] A. Dekkers and E. Aarts, "Global optimization and simulated annealing," Mathematical Programming 50 (1991) 367–393.
[16] H.-G. Beyer and H.-P. Schwefel, "Evolution strategies: a comprehensive introduction," Natural Computing 1 (2002) 35–52.
[17] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proc. IEEE Int. Conf. on Neural Networks, Perth, Australia, 1995, 1942–1948.
[18] I. C. Trelea, "The particle swarm optimization algorithm: convergence analysis and parameter selection," Inf. Process. Lett. 85(6) (2003) 317–325.
[19] D. Corne, M. Dorigo, and F. Glover, New Ideas in Optimisation (Advanced Topics in Computer Science). McGraw-Hill Education, 1999.
[20] R. Storn and K. Price, "Differential evolution - a simple and efficient adaptive scheme for global optimization over continuous spaces," Journal of Global Optimization 11 (1997) 341–359.
[21] F. Gao and H. Q. Tong, "Control a novel discrete chaotic system through particle swarm optimization," in Proc. 6th World Congress on Intelligent Control and Automation, Dalian, China, 2006, 3330–3334.
[22] IEEE, 2006 IEEE World Congress on Computational Intelligence, http://www.wcci2006.org/
[23] B. Liu, L. Wang, Y. H. Jin, et al., "Directing orbits of chaotic systems by particle swarm optimization," Chaos, Solitons & Fractals 29 (2006) 454–461.
[24] Wolfram Research Inc., Hénon map, http://mathworld.wolfram.com/HenonMap.html
[25] M. Hénon, "A two-dimensional mapping with a strange attractor," Comm. Math. Phys. 50 (1976) 69–77.
