Computational Intelligence Optimization:
Particle Swarm Optimization and Ant Colony Optimization: A Gentle Introduction
Hyeong Soo Chang
Department of Computer Science and Engineering, Sogang University
System Modeling & Optimization Lab., Sogang University
Intelligence
Its definition is still under debate.
Dictionary definition: the ability to understand and profit from experience, having the capacity for thought and reason.
In 1950, Turing believed that by the year 2000 it would be possible for a computer with 10^9 bits of storage to pass a 5-minute version of his test for computer intelligence with probability 0.7. Has his belief come true?
Intelligent agent: a minimal decision-making unit
Computational Intelligence (CI)
The study of adaptive mechanisms to enable or facilitate “intelligent” (decision making) behavior in complex and changing environments based on mathematical models of biological systems.
A sub-branch of AI, Automatic Control, Operations Research, and Computational Biology
System Modeling & Optimization Lab., Sogang University
3
CI Main Paradigms
Neural Networks (NN), Evolutionary Computing (EC), Swarm Intelligence (SI), Fuzzy Systems (FS), hybrids of these, etc.
NN models biological neural systems.
EC models natural evolution (e.g., genetic algorithms).
FS originated from studies of how organisms interact with their environment.
FS are based on fuzzy sets and fuzzy logic (approximate reasoning).
In a sense, FS model common sense in human reasoning.
Swarm Intelligence (SI)
Originated from the study of colonies, or swarms of social organisms
Social behavior increases the ability of an individual to adapt.
(Collective) Intelligence arises from interactions among individuals having simple behavioral intelligence.
Each individual in a swarm behaves in a distributed way with a certain information exchange protocol.
The Internet is, in a sense, a swarm intelligence!
A young, hot, interdisciplinary research field.
Much attention is being paid to SI for designing decentralized control systems (multi-agent systems).
Plenary talks were given by Y. C. Ho at the 2004 ACC and IEEE CDC.
SI-based Optimization
This talk gently introduces two recently proposed SI-based optimization techniques:
Particle Swarm Optimization (PSO)
An optimization technique designed for continuous optimization problems
Based on a model of the social behavior of bird flocks
Ant Colony Optimization (ACO)
A general-purpose meta-heuristic for combinatorial optimization problems (COPs)
Based on a model of the social behavior of ant colonies
Optimization Problem
Let S be the set of solutions for a given problem.
An objective (fitness) function f : S → ℜ is given.
The optimization problem is to find arg max_{s∈S} f(s) or max_{s∈S} f(s).
Continuous search space: S = ℝ^n
Discrete search space: S = {0,1}^n (binary strings of length n) (e.g., 0-1 Knapsack, TSP, etc.)
Commonly called a COP.
For a general-purpose meta-heuristic, the given problem domain needs to be mapped onto the search space by a suitable mapping.
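To make the (S, f) formulation concrete, here is a minimal sketch in Python with a made-up fitness function over the tiny discrete search space S = {0,1}^n, solved by brute-force enumeration (only viable for very small n).

```python
# Minimal sketch: an optimization problem is a pair (S, f).
# The fitness function below is hypothetical, chosen only for illustration.
from itertools import product

def f(s):
    # reward 1-bits, penalize adjacent pairs of 1-bits
    return sum(s) - sum(a and b for a, b in zip(s, s[1:]))

n = 4
S = list(product((0, 1), repeat=n))   # the whole search space {0,1}^n
best = max(S, key=f)                  # arg max_{s in S} f(s), by brute force
print(best, f(best))
```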
Why SI for Optimization?
Continuous Optimization Problem
Nondifferentiable, highly nonlinear and complex, with many local maxima
Search speed
Discrete Optimization Problem
An algorithm is regarded as efficient or “good” if there exists a polynomial P(n) such that the time required for solving any problem instance of size n is bounded above by P(n).
NP-Complete problems: nobody has so far found a good algorithm for any problem in this class.
It has been proved that if a good algorithm exists for some problem in this class, then a good algorithm exists for all NP-Complete problems.
Note: a continuous problem can be approximated by discretization.
What do we do then?
Particle Swarm Optimization (PSO)
Developed by Kennedy and Eberhart
J. Kennedy, R. C. Eberhart, Particle Swarm Optimization, Proc. of the IEEE Int. Conf. on Neural Networks, Vol. 4, pp. 1942-1948, 1995.
A population-based search algorithm based on the simulation of the social behavior of birds within a flock.
The initial intent of the particle swarm concept was to graphically simulate the graceful and unpredictable choreography of a bird flock, with the aim of discovering the patterns that govern the ability of birds to fly synchronously and to suddenly change direction while regrouping in an optimal formation.
From the initial objective, the concept evolved into a simple and efficient optimization algorithm.
Particle Swarm Optimization (PSO)
Solution space: the n-dimensional Euclidean space ℝ^n
Fitness function f : ℝ^n → ℝ
Swarm: a set of particles, denoted by P
Particle: a potential solution P_i (i = 1, …, m)
Particle P_i consists of the following components:
Position x_i(t) = (x_1, x_2, ..., x_n), x_j ∈ ℝ, at step t
Velocity v_i(t) = (v_1, v_2, ..., v_n), v_j ∈ ℝ
gbest is the position x_i(t) such that f(x_i(t)) ≥ f(x_j(t)) for all x_j(t), P_j ∈ P.
x* is a position such that f(x*) ≥ f(y) for all y ∈ ℝ^n; we call x* an optimal solution position.
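A minimal sketch of the particle representation just defined, assuming NumPy; the field names are illustrative rather than taken from the slides.

```python
# One particle P_i: position x_i(t), velocity v_i(t), and its personal best (pbest).
import numpy as np

class Particle:
    def __init__(self, n, lower, upper, rng):
        self.x = rng.uniform(lower, upper, size=n)  # position x_i(t) in R^n
        self.v = np.zeros(n)                        # velocity v_i(t), initialized to 0
        self.pbest_x = self.x.copy()                # best position visited so far
        self.pbest_f = -np.inf                      # fitness at that position (maximization)
```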
Particle Swarm Optimization (PSO)
Basic Concepts of PSO
[Figure: a fitness landscape over the solution space, showing the fitness of positions x_1 and x_2 of particles P_1 and P_2, the velocity v_1 of particle P_1 at position x_1, the current gbest, the maximum fitness, and the optimal solution position x*.]
Particle Swarm Optimization (PSO)
1. Initialize the swarm in the solution space
At t = 0, each particle P_i ∈ P(t) is given a random position x_i(t) within the solution space and velocity v_i(t) = (0, ..., 0).
[Figure: particles P_1, P_2, P_3 at random positions x_1(t), x_2(t), x_3(t) in the solution space, together with the optimal position x*.]
Particle Swarm Optimization (PSO)
2. Evaluate the fitness of individual particles
Fitness of P_i = f(x_i(t))
[Figure: the fitness values of P_1, P_2, P_3 at their positions x_1(t), x_2(t), x_3(t), compared with the optimum x*.]
Particle Swarm Optimization (PSO)
3. Modify gbest, pbest, and velocity
(1) If f(x_i(t)) > pbest_i: (a) pbest_i = f(x_i(t)), (b) x_pbest_i = x_i(t)
(2) If f(x_i(t)) > gbest: (a) gbest = f(x_i(t)), (b) x_gbest = x_i(t)
(3) v_i(t) = v_i(t−1) + ρ_1 (x_pbest_i − x_i(t)) + ρ_2 (x_gbest − x_i(t)), where ρ_1, ρ_2 are random variables
[Figure: particles P_1, P_2, P_3 with updated velocities v_1(t) and v_2(t); the current gbest and the optimum x* are marked.]
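A sketch of step 3 for a single particle, following equations (1)-(3) above; gbest is kept in a small dict, and ρ_1, ρ_2 are drawn uniformly per dimension (the weighting by the acceleration constants c_1, c_2 is discussed later in the “other details” slides).

```python
# Step 3 for one particle (maximization), following (1)-(3) above.
import numpy as np

def modify_bests_and_velocity(p, f, gbest, rng):
    fit = f(p.x)
    if fit > p.pbest_f:                               # (1) update the personal best
        p.pbest_f, p.pbest_x = fit, p.x.copy()
    if fit > gbest["f"]:                              # (2) update the global best
        gbest["f"], gbest["x"] = fit, p.x.copy()
    rho1 = rng.random(p.x.size)                       # rho_1, rho_2: random per dimension
    rho2 = rng.random(p.x.size)
    p.v = p.v + rho1 * (p.pbest_x - p.x) + rho2 * (gbest["x"] - p.x)   # (3)
```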
Particle Swarm Optimization (PSO)
v_i(t) = v_i(t−1) + ρ_1 (x_pbest_i − x_i(t)) + ρ_2 (x_gbest − x_i(t)), where ρ_1, ρ_2 are random variables
[Figure: vector composition in a 2-dimensional solution space. The new velocity v_1(t) of P_1 is the sum of the previous velocity v_1(t−1), the pbest attraction ρ_1 (x_pbest_1 − x_1(t)), and the gbest attraction ρ_2 (x_gbest − x_1(t)).]
Particle Swarm Optimization (PSO)
4. Move each particle to a new position
x_i(t) = x_i(t−1) + v_i(t), then t ← t + 1
[Figure: P_1, P_2, P_3 move with velocities v_1(t), v_2(t) to new positions x_1(t), x_2(t), x_3(t); P_1's new position improves on pbest_1, the current gbest is at x_3(t), and the optimum is x*.]
Particle Swarm Optimization (PSO)
5. Repeat until convergence or a stopping condition is met
[Figures for t = 1, 2, 3, 4: as the iterations proceed, particles P_1, P_2, P_3 move closer together and gbest improves; by t = 4, x* = x_2(t).]
[Figure for t = *: at convergence, P_1 = P_2 = P_3 and x* = x_1(t) = x_2(t) = x_3(t).]
Particle Swarm Optimization (PSO)
Basic flow of PSO
1. Initialize the swarm from the solution space.
2. Evaluate the fitness of individual particles.
3. Modify gbest, pbest, and velocity.
4. Move each particle to a new position.
5. Go to step 2, and repeat until convergence or a stopping condition is satisfied.
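Putting steps 1-5 together, a minimal gbest-PSO sketch for maximizing a fitness function over a box in ℝ^n (assuming NumPy). The choices c_1 = c_2 = 2 and the per-dimension velocity limit follow the “other details” slides below; the stopping rule here is simply a fixed number of iterations.

```python
# Minimal gbest-PSO sketch (maximization over [lower, upper]^n).
import numpy as np

def pso(f, n, lower, upper, m=30, iters=200, c1=2.0, c2=2.0, seed=0):
    rng = np.random.default_rng(seed)
    vmax = 0.5 * (upper - lower)                   # velocity limit from the domain range
    x = rng.uniform(lower, upper, size=(m, n))     # 1. random positions
    v = np.zeros((m, n))                           #    zero initial velocities
    pbest_x, pbest_f = x.copy(), np.full(m, -np.inf)
    gbest_x, gbest_f = x[0].copy(), -np.inf
    for _ in range(iters):                         # 5. repeat
        fit = np.apply_along_axis(f, 1, x)         # 2. evaluate fitness
        improved = fit > pbest_f                   # 3. update pbest and gbest
        pbest_x[improved], pbest_f[improved] = x[improved], fit[improved]
        k = int(np.argmax(pbest_f))
        if pbest_f[k] > gbest_f:
            gbest_f, gbest_x = pbest_f[k], pbest_x[k].copy()
        r1, r2 = rng.random((m, n)), rng.random((m, n))
        v = v + c1 * r1 * (pbest_x - x) + c2 * r2 * (gbest_x - x)
        v = np.clip(v, -vmax, vmax)
        x = x + v                                  # 4. move the particles
    return gbest_x, gbest_f

# Example: maximize f(x) = -||x||^2 over [-5, 5]^2 (optimum at the origin).
print(pso(lambda x: -np.sum(x**2), n=2, lower=-5.0, upper=5.0))
```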
Particle Swarm Optimization (PSO)
Other details
ρ_1 = r_1 c_1, ρ_2 = r_2 c_2, with r_1, r_2 ~ U(0,1),
where c_1 and c_2 are positive acceleration constants with c_1 + c_2 ≤ 4.
If c_1 + c_2 > 4, velocities and positions tend to explode toward infinity.
J. Kennedy, The Behavior of Particles, in V. W. Porto, N. Saravanan, D. Waagen (eds), Proc. of the 7th Int. Conf. on Evolutionary Programming, 1998, pp. 581-589.
lbest (in the local best version of PSO)
While lbest is slower to converge than gbest, lbest results in much better solutions.
R. C. Eberhart, R. W. Dobbins, and P. Simpson, Computational Intelligence PC Tools, Academic Press, 1996.
Particle Swarm Optimization (PSO)
Other details cont.
c_1 and c_2 are usually set to 2.
Usually, an upper limit is placed on the velocity in each dimension; the limit depends on the range of the domain.
The convergence of PSO has not been proved yet.
A binary PSO for discrete optimization problems has been developed:
J. Kennedy, R. C. Eberhart, A Discrete Binary Version of the Particle Swarm Algorithm, Proc. of the Conf. on Systems, Man, and Cybernetics, 1997, pp. 4104-4109.
Particle Swarm Optimization (PSO)
Some applications
Y. Shi, R. C. Eberhart, Empirical Study of Particle Swarm Optimization, Proc. of the IEEE Congress on Evolutionary Computation, Vol. 4, 1999, pp. 1945-1950. (study on benchmark functions)
B. Zhao, C. X. Guo, Y. J. Cao, A multiagent-based particle swarm optimization approach for optimal reactive power dispatch, IEEE Transactions on Power Systems, Vol. 20, Issue 2, May 2005, pp. 1070-1078.
Zwe-Lee Gaing, A particle swarm optimization approach for optimum design of PID controller in AVR system, IEEE Transactions on Energy Conversion, Vol. 19, Issue 2, June 2004, pp. 384-391.
G. Ciuprina, D. Ioan, I. Munteanu, Use of intelligent-particle swarm optimization in electromagnetics, IEEE Transactions on Magnetics, Vol. 38, Issue 2, Part 1, March 2002, pp. 1037-1040.
W. H. Slade, H. W. Ressom, M. T. Musavi, R. L. Miller, Inversion of ocean color observations using particle swarm optimization, IEEE Transactions on Geoscience and Remote Sensing, Vol. 42, Issue 9, Sept. 2004, pp. 1915-1923.
D. W. Boeringer, D. H. Werner, Efficiency-Constrained Particle Swarm Optimization of a Modified Bernstein Polynomial for Conformal Array Excitation Amplitude Synthesis, IEEE Transactions on Antennas and Propagation, Vol. 53, Issue 8, Part 2, Aug. 2005, pp. 2662-2673.
Ant Colony Optimization (ACO)
PSO is based on a model of the choreography of relatively small swarms, where all individuals have the same behavior and characteristics.
The next technique considers swarms that consist of large numbers of individuals, where individuals typically have different morphological structures and tasks, but all contribute to a common goal.
Such swarms model distributed systems whose components are capable of distributed operation.
Roughly 10^8 living organisms inhabit the earth; only 2% of all insects are social insects, living in swarms where social interaction is the most important aspect of ensuring survival; of these social insects, 50% are ants!
Ant colonies consist of anywhere from 30 to millions of individuals.
Real Ant Behavior
A single ant has limited capability, but an ant colony is highly efficient, capable of finding the shortest path between food and the nest (demonstrated in real experiments).
Communication is through a chemical substance, pheromone, which accumulates and also evaporates.
Real Ant Behavior
[Figure: initial route (t = 0).]
[Figure: the environment is changed (t = 1); branch points labeled A, C, B.]
[Figure: converging behavior to the shorter path.]
Ant Colony Optimization (ACO)
Originally developed by Dorigo: M. Dorigo (1992), Optimization, Learning and Natural Algorithms, Ph.D. Thesis, Politecnico di Milano, Italy (in Italian).
Concepts of ACO
Solution space represented by a “construction graph”: a graph G = (V, E), where V is a set of vertices and E is a set of arcs (i, j) with i, j ∈ V; each arc (i, j) is associated with a pheromone intensity and a visibility value.
Solution: a sequence of vertices constructed by an artificial ant's random walk.
Artificial ant: has a small amount of memory; its random walk depends on pheromone intensities and visibility values.
Ant Colony Optimization (ACO)
Concepts of ACO cont.
Pheromone intensity
A value assigned to each arc in the construction graph
A social or collective “opinion” about each arc, combined/determined from the pheromone (individual opinion) deposited by each ant with respect to the solution that ant generated
Visibility value
The value of a so-called “greedy function” used to incorporate heuristic knowledge about the problem
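A minimal sketch of these two quantities for a symmetric, TSP-style construction graph: the pheromone intensity τ_ij is stored per arc, and the visibility is the greedy heuristic η_ij = 1/d_ij (the function and names are illustrative).

```python
# Per-arc pheromone intensity and visibility value on a construction graph.
def build_graph(dist, tau0=1.0):
    """dist: {(i, j): d_ij}; returns per-arc pheromone tau and visibility eta."""
    tau, eta = {}, {}
    for (i, j), d in dist.items():
        for arc in ((i, j), (j, i)):        # symmetric arcs
            tau[arc] = tau0                 # collective "opinion" about the arc
            eta[arc] = 1.0 / d              # greedy heuristic: shorter is more visible
    return tau, eta
```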
Ant Colony Optimization (ACO)
Example: Traveling Salesman Problem (TSP)
Let G = (V, E) be a connected graph with n vertices.
A Hamiltonian cycle is a round-trip path along n edges of G that visits every vertex once and returns to its starting vertex.
Let V be a set of cities and E a set of links between cities i, j ∈ V, where each link (i, j) has a static distance d_ij. The TSP is to find the minimum-distance Hamiltonian cycle.
An NP-Complete problem.
Ant Colony Optimization (ACO)
1. Representation of a problem by a construction graph
[Figure: a given 6-city problem and its construction graph, with vertices 1-6 and edge distances d12 = 1, d23 = 1, d34 = 1, d45 = 1, d56 = 1, d61 = 1, d14 = 5, d25 = 5, d36 = 5.]
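The 6-city instance from the figure above, written out as a distance table so that it can be fed to the sketches in this section (the symmetric closure is added explicitly).

```python
# Edge distances d_ij of the construction graph in the figure (6 cities).
DIST = {(1, 2): 1, (2, 3): 1, (3, 4): 1, (4, 5): 1, (5, 6): 1, (6, 1): 1,
        (1, 4): 5, (2, 5): 5, (3, 6): 5}
DIST.update({(j, i): d for (i, j), d in list(DIST.items())})   # symmetric closure
```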
Ant Colony Optimization (ACO)
2. Initialization of ACO parameters
Set the number of ants m (= n = 6). Set the pheromone intensity τ_ij(0) = τ_0 for all (i, j) ∈ E. Set the starting city of each ant.
[Figure: the construction graph with ants 1-6 placed on their starting cities.]
Ant Colony Optimization (ACO)
3. Solution generation by the ant's random walk
Ant k's transition probability from node i to node j for its random walk:
p_ij^k(t) = [τ_ij(t)]^α [η_ij]^β / Σ_{l ∈ N_i^k} [τ_il(t)]^α [η_il]^β  if j ∈ N_i^k, and 0 otherwise
t: time step; N_i^k: admissible neighborhood of node i for ant k; η_ij: visibility value (= 1/d_ij); α, β: weight factors
[Figure: ant k on the construction graph, choosing its next city at random among nodes 1-6.]
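A sketch of this transition rule: ant k at city i picks the next city j from its admissible neighborhood N_i^k (here, the unvisited cities adjacent to i) with probability proportional to [τ_ij]^α [η_ij]^β. The parameter values are illustrative.

```python
# Random transition of ant k from city i, weighted by pheromone and visibility.
import random

def choose_next(i, admissible, tau, eta, alpha=1.0, beta=2.0, rng=random):
    neighbors = list(admissible)                                  # N_i^k
    weights = [(tau[(i, j)] ** alpha) * (eta[(i, j)] ** beta) for j in neighbors]
    return rng.choices(neighbors, weights=weights, k=1)[0]        # p_ij^k proportional to weight
```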
Ant Colony Optimization (ACO)
4. Update pheromone intensities
At the end of traveling, for the arcs included in the solutions:
τ_ij(t + 1) = (1 − ρ) · τ_ij(t) + ρ · Δτ
ρ ∈ (0, 1]: evaporation factor; Δτ: the sum of the pheromone deposits by the ants that traversed (i, j) at step t.
The amount of pheromone deposited by ant k for arc (i, j) depends on the quality of the solution generated by ant k at step t, so a reinforcement occurs toward the better solution components (arcs).
[Figure: the construction graph with pheromone reinforced on the arcs of the better tours.]
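A sketch of this update, assuming the common Ant-System-style deposit of Q / (tour length) on every arc of an ant's tour, so that shorter tours reinforce their arcs more; Q and the deposit rule are assumptions, not from the slides.

```python
# Evaporate pheromone, then reinforce the arcs used by the ants' tours.
def update_pheromone(tau, tours, lengths, rho=0.1, Q=1.0):
    deposit = {arc: 0.0 for arc in tau}
    for tour, L in zip(tours, lengths):                 # tour: list of cities (closed cycle)
        for i, j in zip(tour, tour[1:] + tour[:1]):
            deposit[(i, j)] += Q / L                    # better (shorter) tours deposit more
            deposit[(j, i)] += Q / L
    for arc in tau:                                     # tau_ij(t+1) = (1 - rho)*tau_ij(t) + rho*dtau
        tau[arc] = (1.0 - rho) * tau[arc] + rho * deposit[arc]
```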
Ant Colony Optimization (ACO)
5. Repeat until convergence or a stopping condition is satisfied
Recall that p_ij^k(t) = [τ_ij(t)]^α [η_ij]^β / Σ_{l ∈ N_i^k} [τ_il(t)]^α [η_il]^β if j ∈ N_i^k, and 0 otherwise: an ant is more likely to select an arc with more pheromone while constructing a random solution.
[Figure: the construction graph after several iterations, with pheromone concentrated along the best tour.]
Ant Colony Optimization (ACO)
Basic flow of ACO
1. Represent the solution space by a construction graph.
2. Initialize ACO parameters.
3. Generate random solutions from each ant's random walk.
4. Update pheromone intensities.
5. Go to step 3, and repeat until convergence or a stopping condition is satisfied.
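Putting steps 1-5 together, a minimal Ant-System-style sketch for the 6-city instance above. It reuses DIST, build_graph, choose_next, and update_pheromone from the earlier sketches; walks that hit a dead end or cannot close the cycle are simply discarded, and the stopping rule is a fixed number of iterations.

```python
# Minimal ACO sketch for the small TSP instance above (shortest Hamiltonian cycle).
import random

def tour_length(tour, dist):
    return sum(dist[(i, j)] for i, j in zip(tour, tour[1:] + tour[:1]))

def aco_tsp(dist, n_cities=6, n_ants=6, iters=100, seed=0):
    rng = random.Random(seed)
    tau, eta = build_graph(dist)                           # 2. initialize pheromone, visibility
    best_tour, best_len = None, float("inf")
    for _ in range(iters):                                 # 5. repeat
        tours, lengths = [], []
        for k in range(n_ants):                            # 3. one random walk per ant
            start = k % n_cities + 1
            tour = [start]
            unvisited = set(range(1, n_cities + 1)) - {start}
            while unvisited:
                admissible = [j for j in unvisited if (tour[-1], j) in tau]   # N_i^k
                if not admissible:
                    break                                  # dead end: discard this walk
                j = choose_next(tour[-1], admissible, tau, eta, rng=rng)
                tour.append(j)
                unvisited.remove(j)
            if unvisited or (tour[-1], start) not in tau:
                continue                                   # not a feasible Hamiltonian cycle
            tours.append(tour)
            lengths.append(tour_length(tour, dist))
        if tours:
            update_pheromone(tau, tours, lengths)          # 4. evaporate and reinforce
            for t, L in zip(tours, lengths):
                if L < best_len:
                    best_tour, best_len = t, L
    return best_tour, best_len

print(aco_tsp(DIST))   # expected to settle on the length-6 cycle 1-2-3-4-5-6
```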
Ant Colony Optimization (ACO)
Some applications
Kwang Mong Sim, Weng Hong Sun, Ant colony optimization for routing and load-balancing: survey and new directions, IEEE Transactions on Systems, Man and Cybernetics, Part A, Vol. 33, Issue 5, Sept. 2003, pp. 560-572.
S. L. Ho, Shiyou Yang, H. C. Wong, K. W. E. Cheng, Guangzheng Ni, An improved ant colony optimization algorithm and its application to electromagnetic devices designs, IEEE Transactions on Magnetics, Vol. 41, Issue 5, May 2005, pp. 1764-1767.
S. Alupoaei, S. Katkoori, Ant colony system application to macrocell overlap removal, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 12, Issue 10, Oct. 2004, pp. 1118-1123.
R. S. Parpinelli, H. S. Lopes, A. A. Freitas, Data mining with an ant colony optimization algorithm, IEEE Transactions on Evolutionary Computation, Vol. 6, Issue 4, Aug. 2002, pp. 321-332.
M. Dorigo, L. M. Gambardella (1997), Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem, IEEE Transactions on Evolutionary Computation, Vol. 1, No. 1, pp. 53-66.
Ant Colony Optimization (ACO)
The convergence of ACO has been proved.
W.J. Gutjahr, A graph-based ant system and its convergence, Future Generation Computer Systems, 2000, vol. 16, pp. 873-888. Somewhat different formulation from Dorigo’s.
Directed construction graph G = (V, E). In G, a unique node is marked as the so-called start node. Every ant starts its random walk from the start node and updates its pheromone while backtracking along the path it traversed. Any solution generated by any ant's random walk is a feasible solution.
Others
“Optimization Based on Bacterial Chemotaxis,” IEEE Trans. on Evolutionary Computation, Vol. 6, pp. 16-29, 2002.
“An Artificial Immune System Approach With Secondary Response for Misbehavior Detection in Mobile Ad Hoc Networks,” IEEE Transactions on Neural Networks, Vol. 16, pp. 1076-1087.