Proceedings of the International Conference on Sensing, Computing and Automation. Copyright © 2006 Watam Press

A Novel Optimal PID Tuning and On-line Tuning Based on Particle Swarm Optimization

F. Gao and H. Q. Tong
Department of Mathematics, School of Science, Wuhan University of Technology, 122 Luoshi Road, Wuhan 430070, Hubei, China
[email protected]

Abstract - The PID controller is an extremely important type of controller. Though many methods have been proposed to determine optimal PID parameters, their performance is highly sensitive to the initial guess of the solution. A novel particle swarm optimization with an established deflection technique is applied to PID tuning and on-line tuning as a new technique for optimum adaptive control, by transforming the problems of PID controller design into corresponding optimization problems. The details of applying the proposed method are given, and the experiments show that the proposed strategy is effective and robust.

I. INTRODUCTION

The PID (proportional-integral-derivative) algorithm was devised in the 1940s and remains remarkably useful and applicable over a large range of process challenges. PID controllers are used to control process variables ranging from fluid flow, level, and pressure to temperature, consistency, and density. PID is a robust, easily understood algorithm that can provide excellent control performance despite the varied dynamic characteristics of process plants [1]. Normally PID algorithms execute on programmable logic controllers, distributed control systems, or single-loop stand-alone controllers, and PID is also the basis for many advanced control algorithms and strategies [2]. In order for control loops to work properly, the PID loop must be properly tuned. Standard methods for tuning loops, and criteria for judging the loop tuning, have been used for many years, but should be reevaluated for use on modern digital control systems [1, 2]. While the basic algorithms have been unchanged for many years and are used in all distributed control systems, the actual digital implementation of the algorithm has changed and differs from one system to another and from commercial equipment to academia. For many years a variety of methods have been used to determine optimal PID parameters, such as hill-climbing, gradient methods, simplex methods, and expert systems. Though these methods perform well in optimization, they have disadvantages such as sensitivity to initial values, convergence to local optima, and difficulty in dealing with knowledge data mining. Evolutionary algorithms (EAs) is an umbrella term for computer-based problem-solving systems that use computational models of known mechanisms of evolution as key elements in their design and implementation. Although simplistic from a biologist's

viewpoint, these algorithms are sufficiently complex to provide robust and powerful adaptive search mechanisms [3-6]. Particle swarm optimization (PSO) is a relatively new computational intelligence tool, related to artificial neural networks, fuzzy logic, and EAs, developed by Eberhart and Kennedy in 1995 [3] and inspired by the social behavior of bird flocking and fish schooling. In the past several years, PSO has been successfully used across a wide range of application fields, as well as in applications focused on specific requirements, for the two following reasons. First, it has been demonstrated that PSO obtains better results in a faster, cheaper way compared with other methods. Second, PSO is attractive because there are few parameters to adjust: one version, with slight variations, works well in a wide variety of applications [3, 7, 8]. In this paper, a novel PID controller tuning and on-line tuning approach based on PSO is proposed to design robust PID parameters by transforming the problems of PID controller design into corresponding optimization problems. The rest of this paper is organized as follows. In Section II, the main concepts of the PID controller and the transformation are introduced. Section III gives the main idea of PSO and some techniques to improve it. Details of applying PSO to PID control and the experimental results are reported and analyzed in Section IV. The paper concludes with Section V.

II. OPTIMAL PID TUNING AND ON-LINE TUNING

A. The Main Concept of PID

The PID controller response combines three response mechanisms: a proportional response, proportional to the gap between the reading and the set point; an integral response, proportional to the integral of the differences between past and present readings and the set point; and a derivative response, proportional to the rate of change of the reading. By adjusting the weights of the three responses, one can almost always ensure stable, fast-reacting control dynamics [1].
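As a minimal illustration (not from the paper), the three response mechanisms just described can be sketched as a discrete-time controller; the class and method names are our own, and the sampling-period discretization of the integral and derivative terms is a standard assumption.

```python
# Hedged sketch of a discrete three-term (PID) controller; `PID` and
# `step` are illustrative names, not the authors' implementation.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulates the integral response
        self.prev_error = 0.0    # remembered for the derivative response

    def step(self, setpoint, measurement):
        error = setpoint - measurement               # gap to the set point
        self.integral += error * self.dt             # rectangle-rule integral
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

controller = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
u = controller.step(setpoint=1.0, measurement=0.0)
```

In a closed loop, `step` would be called once per sampling period with the latest sensor reading, and `u` applied to the plant.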
The PID controller is a three-term linear controller. It acts on the control error error(t) = rin(t) − yout(t) between the desired input value rin(t) and the actual output yout(t):

u(t) = K_p ( error(t) + (1/T_I) ∫_0^t error(τ) dτ + T_D · d error(t)/dt )    (1)

or, in transfer-function form,

G(s) = U(s)/E(s) = K_p ( 1 + 1/(T_I s) + T_D s )    (2)

where K_p is the proportional gain, K_I = K_p/T_I the integral gain, and K_d = K_p · T_D the derivative gain. The signal u is sent to the plant (the system to be controlled), and a new output yout is obtained. This new output yout(t) is sent back to the sensor to find the new error signal error(t); the PID controller takes this new error signal and computes its derivative and its integral again, and the process goes on. The proportional gain K_p has the effect of reducing the rise time; it reduces, but never eliminates, the steady-state error. The integral gain K_I eliminates the steady-state error, but may worsen the transient response. The derivative gain K_d increases the stability of the system, reduces the overshoot, and improves the transient response. The effects of K_p, K_d and K_I depend on each other [2].

B. PID Controller Tuning

Let the system to be controlled have the transfer function

G(s) = 400 / (s² + 50 s)    (3)

The goal of PID tuning is to choose K_p, K_d and K_I so as to obtain a fast rise time, minimum overshoot, and no steady-state error. To obtain satisfactory dynamic properties of the process, we choose the time integral of the absolute value of the error signal error(t) as the objective function to be minimized; and to avoid excessive control action, we add the square of the control signal to the objective function. That is,

J = ∫_0^∞ ( w1 |e(t)| + w2 u²(t) ) dt + w3 · t_u    (4)

where w1 , w2 , w3 is the weight value, u (t ) is control signal output, tu is the rise time. To avoid overshoot ( ey (t ) < 0 ), we take a penalty in the objective function when the overshoot becomes. Then (4) is limited as below: J =∫



0

(w

1

e(t ) + w2 u 2 (t ) + w4 ey (t ) ) dt + w3 ⋅ tu

(5)

where w4 is a weight value subject to w4 << w1, ey(t) = y(t) − y(t − 1), and y(t) is the output of the controlled system. By minimizing the objective function J, a good combination of K_p, K_d and K_I is obtained for better control.

C. PID Controller On-line Tuning

The main concept of on-line PID tuning is to tune the controller parameters at each sampling time. Taking system (3) for instance, let error(i) be the error of parameter combination i at time k, and de(i) the rate of change of the tracking error for combination i. The objective function to be optimized is:

J(i) = α_P × error(i) + β_P × de(i)    (6)

where α_P and β_P are weight values. To avoid overshoot (error(i) < 0), a penalty is added to the objective function when overshoot occurs. Then (6) is modified as:

J(i) = J(i) + 100 · |error(i)|    (7)

The problem of on-line PID tuning is thus transformed into the minimization of a function. To reduce the blindness of the initial optimization, the computational effort, and the bounds of the parameters to be optimized, an initial set of k_p, k_d is selected from experience.

III. THE MAIN CONCEPT OF PARTICLE SWARM OPTIMIZATION

PSO belongs to the category of swarm intelligence methods, closely related to the methods of evolutionary computation, which consist of algorithms motivated by biological genetics and natural selection. A common characteristic of all these algorithms is the exploitation of a population of search points that probe the search space simultaneously [3, 8]. PSO shares many similarities with evolutionary computation techniques such as genetic algorithms (GA): the system is initialized with a population of random solutions and searches for optima by updating generations [9]. However, unlike GA, PSO has no evolution operators such as crossover and mutation. The dynamics of the population in PSO resemble the collective behavior and self-organization of socially intelligent organisms: the individuals of the population (called particles) exchange information and benefit from their own discoveries, as well as the discoveries of their companions, while exploring promising areas of the search space [10]. At step k, each particle X_i(k) = (x_{i,1}(k), …, x_{i,D}(k)) keeps track of the coordinates in the problem space associated with the best solution (fitness) it has achieved so far (the fitness value is also stored). This value is called pbest, P_i(k) = (p_{i,1}(k), …, p_{i,D}(k)).
Another "best" value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the neighborhood of the particle, called lbest, L_i(k) = (l_{i,1}(k), …, l_{i,D}(k)). When a particle takes the whole population as its topological neighborhood, the best value is a global best, called gbest, Q_g(k) = (q_{g,1}(k), …, q_{g,D}(k)). The particle swarm optimization concept consists of, at each time step, changing the velocity V_i(k) = (v_{i,1}(k), …, v_{i,D}(k)) of each particle toward its pbest and gbest locations (PSO without a neighborhood model). Acceleration is weighted by a random term, with separate random numbers generated for acceleration toward the pbest and gbest locations [9]. That is:

A_{i,d}(k) = rand(0, c1) · [p_{i,d}(k) − x_{i,d}(k)]
B_{i,d}(k) = rand(0, c2) · [q_{g,d}(k) − x_{i,d}(k)]
v_{i,d}(k + 1) = w · v_{i,d}(k) + A_{i,d}(k) + B_{i,d}(k)    (8)
x_{i,d}(k + 1) = x_{i,d}(k) + v_{i,d}(k + 1)

where w is called the inertia weight, c1 the cognition acceleration constant, and c2 the social acceleration constant.

The cognitive parameter c1 determines the effect on a particle's velocity of the distance between its current position and its best previous position P_i. The social parameter c2 plays a similar role, but concerns the best previous position P_{g_i} attained by any particle in the neighborhood. rand(a, b) denotes a random number in [a, b]; in this way, randomness is introduced into PSO. V_i(k) is limited by a maximum velocity V_max as follows:

v_{ij} = v_{ij},     if |v_{ij}| ≤ V_max,
v_{ij} = −V_max,  if v_{ij} < −V_max,    (9)
v_{ij} = V_max,    if v_{ij} > V_max.
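The velocity update (8) with the clamp (9) can be sketched as a single gbest-PSO step; this is an illustrative sketch, not the authors' code, and the function name and default parameter values are our assumptions.

```python
import random

# Minimal gbest-PSO step: implements the velocity update (8) followed
# by the velocity clamp (9). `x`, `v`, `pbest`, `gbest` are lists of
# equal dimension D.
def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, vmax=1.0):
    new_x, new_v = [], []
    for d in range(len(x)):
        a = random.uniform(0, c1) * (pbest[d] - x[d])   # cognitive pull
        b = random.uniform(0, c2) * (gbest[d] - x[d])   # social pull
        vd = w * v[d] + a + b
        vd = max(-vmax, min(vmax, vd))                  # clamp per (9)
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

A full optimizer would repeat this step for every particle, re-evaluating the objective and updating pbest/gbest after each move.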

Though PSO without a neighborhood model converges fast, it sometimes falls into local optima easily. So an improved version of PSO with a circular neighborhood model is also used to improve convergence by maintaining more attractors. Let N_i = {X_{i−r}, …, X_{i−1}, X_i, X_{i+1}, …, X_{i+r}} be a neighborhood of radius r of the i-th particle X_i (local variant). Then lbest L_i(k) is defined by the index g_i of the best particle in the neighborhood of X_i, i.e.,

f(P_{g_i}) ≤ f(P_j),   j = i − r, …, i + r.    (10)
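A minimal sketch of the lbest selection rule (10), assuming a cyclic (ring) topology so the neighborhood wraps around the population; the function name is our own.

```python
# Hedged sketch of selecting lbest by (10): for particle i, scan the
# neighborhood of radius r on a ring of n particles and return the
# index whose personal-best objective value is lowest.
def lbest_index(i, r, pbest_values):
    n = len(pbest_values)
    # modulo wrap realizes the cyclic topology (particle 1 follows particle N)
    neighborhood = [(i + offset) % n for offset in range(-r, r + 1)]
    return min(neighborhood, key=lambda j: pbest_values[j])
```

With r equal to half the population size, this degenerates to the gbest model.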

The neighborhood's topology is usually cyclic, i.e., the first particle X_1 is assumed to follow after the last particle X_N. The updating mechanism is now:

A_{i,d}(k) = rand(0, c1) · [p_{i,d}(k) − x_{i,d}(k)]
B_{i,d}(k) = rand(0, c2) · [q_{g,d}(k) − x_{i,d}(k)]
C_{i,d}(k) = rand(0, c3) · [l_{i,d}(k) − x_{i,d}(k)]
E_{i,d}(k) = A_{i,d}(k) + B_{i,d}(k) + C_{i,d}(k)    (11)
v_{i,d}(k + 1) = w · v_{i,d}(k) + E_{i,d}(k)
x_{i,d}(k + 1) = x_{i,d}(k) + v_{i,d}(k + 1)

where c3 is called the neighborhood acceleration constant; the other parameters are the same as those in (8). Sometimes V_i(k) is modified [11] as

v_{i,d}(k + 1) = χ [ w · v_{i,d}(k) + E_{i,d}(k) ]    (12)

where χ is the constriction factor, normally χ = 0.9, given by

χ = 2κ / | 2 − φ − √(φ² − 4φ) |    (13)

for φ > 4, where φ = c1 + c2 and κ = 1. A complete theoretical analysis of the derivation of (13) can be found in [11, 12].

It is well known that no single algorithm fits all problems. When the objective function f(x) is full of local optima and more than one minimizer is needed, established techniques such as deflection and stretching are often combined with PSO to guarantee the detection of different minimizers. Given the objective function f(x), we use the deflection technique [13] to generate the new objective function F(x):

F(x) = ∏_{i=1}^{k} [ tanh( λ_i ‖x − x_i*‖ ) ]^{−1} f(x)    (14)

where x_i* (i = 1, 2, …, k) are the k minimizers found so far and λ_i ∈ (0, 1).

IV. SIMULATIONS

In this section, the operation of the proposed technique as an optimization method for PID controller tuning and on-line tuning of system (3) is illustrated with the objective functions (4)/(5) and (6)/(7), respectively. The optimization of the functions for PID controller tuning above is difficult, so we choose PSO without the neighborhood model to find the optima. For the problems discussed, PSO was run 100 times independently. The termination criterion is the evolution generation T = 100; the population size is M = 30, the cognitive and social accelerations c1 = c2 = 2, the constriction factor χ = 0.9, and the initial and final velocity weights w_s = 0.95 and w_e = 0.2, with the velocity weight w varying as

w = w_s − k (w_s − w_e) / T    (15)

For the objective function (4), k_P ∈ [0, 20], k_i, k_d ∈ [0, 1], w1 = 0.999, w2 = 0.001, w3 = 2.0, w4 = 100. The simulations succeed with probability 99% over 100 independent experiments. The process of optimizing the objective function (4), and the step response of the PID controller with the obtained parameters k_P = 19.9997975239538, k_i = 0.253798161751639, k_d = 0.253798161751639 (objective value 24.0163557397992), are shown in Figs. 1-2.

Fig. 1. Process of optimizing the objective function (4) (best J vs. generation)
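The deflection transform used to generate F(x) can be sketched as a higher-order function; this is an illustrative reading of the technique, and the function names are our own.

```python
import math

# Hedged sketch of deflection: each already-found minimizer x_i* is
# "deflected" so that F blows up near it, repelling the swarm on
# subsequent runs while leaving f essentially unchanged far away.
def deflect(f, minimizers, lambdas):
    def F(x):
        val = f(x)
        for x_star, lam in zip(minimizers, lambdas):
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, x_star)))
            # multiply by [tanh(lambda * ||x - x*||)]^(-1)
            val /= math.tanh(lam * dist)
        return val
    return F
```

Since tanh(λ·d) → 1 as d grows (for λ in (0, 1) this happens slowly), F approaches f away from the deflected points, so new minimizers of f remain discoverable.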


Fig. 2. The step response of the PID controller

Fig. 4. The control signal u(k)'s changes

For the objective function (6), the maximum number of iterations per generation was T = 30, with parameters α_P = 0.95, β_P = 0.05, k_P ∈ [9.0, 12.0], k_d ∈ [0.2, 0.3], population size M = 40, cognitive acceleration c1 = c2 = 2, constriction factor χ = 0.9, and initial and final velocity weights (15) w_s = 0.95 and w_e = 0.2 in PSO. The individuals were constrained to the corresponding region for each test problem. The simulations succeed with probability 98% over 100 independent experiments. The step response of the on-line tuned PID controller, and the changes of the control signal u(k), k_P and k_d over time, are shown separately in Figs. 3-6.
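The per-candidate cost evaluated in this on-line experiment, i.e. (6) with the overshoot penalty (7), can be sketched as follows; treating error and de as absolute values in (6) is our reading of the text, and the defaults are the weights reported here.

```python
# Hedged sketch of the on-line cost (6) plus penalty (7).
# error: tracking error of a candidate parameter set at the current step
# d_error: rate of change of that tracking error
def online_cost(error, d_error, alpha_p=0.95, beta_p=0.05):
    J = alpha_p * abs(error) + beta_p * abs(d_error)
    if error < 0:                   # overshoot: output above the setpoint
        J += 100.0 * abs(error)     # penalty term of (7)
    return J
```

At each sampling time, the swarm would evaluate this cost for every candidate (k_p, k_d) pair and move toward the cheapest one.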


Fig. 3. The step response of PID controller tuning on-line

Fig. 5. k_P's changes

Fig. 6. k_d's changes

From the figures above, we can conclude: in the initial control process (error less than about 0.5), k_P rises and k_d declines; when the error is larger than 0.5, k_P declines and k_d rises to prevent the error from varying too fast; and when the

overshoot occurs, k_P rises and k_d declines again to reduce the error.

V. CONCLUSIONS

An application of PSO to PID tuning and on-line tuning has been proposed. From the simulations above, we conclude that PSO is efficient and robust for PID tuning and on-line tuning. Though the experiments were done on system (3), the approach can easily be carried over to other systems.

ACKNOWLEDGMENT

We thank the anonymous reviewers for their constructive remarks and comments. This work is partially supported by Science Foundation Grant No. 02C26214200218 for Technology Creative Research from the Ministry of Science and Technology of China, Chinese NSF Grant No. 30570611 to H. Q. Tong, Foundation Grant No. XJJ2004113 (Project of educational research), and UIRT Project Grants No. A156 and No. A157 from Wuhan University of Technology, China.

REFERENCES

[1] D&G Sciences, "The PID control algorithm," http://www.dgsciences.com/acs54/pid.htm, 2005.
[2] Regents of the University of Michigan, "PID Tutorial," http://www.engin.umich.edu/group/ctm/PID/PID.html, 2005.

[3] R. C. Eberhart, Y. Shi, "Comparing inertia weights and constriction factors in particle swarm optimization," Proceedings of the 2000 Congress on Evolutionary Computation, IEEE Service Center, Piscataway, NJ, pp. 84-88, 2000.
[4] D. Whitley, "An overview of evolutionary algorithms: Practical issues and common pitfalls," Information and Software Technology, vol. 43, no. 14, pp. 817-831, 2001.
[5] F. Gao, H. Q. Tong, "Computing two linchpins of topological degree by a novel differential evolution algorithm," International Journal of Computational Intelligence and Applications, vol. 5, no. 3, pp. 335-350, 2005.
[6] F. Gao, "Computing unstable period orbits of discrete chaotic systems through differential evolution algorithms based on elite subspace," Xitong Gongcheng Lilun yu Shijian/System Engineering Theory and Practice, vol. 25, no. 4, pp. 96-102, 2005.
[7] P. Pomeroy, "An introduction to particle swarm optimization," http://www.adaptiveview.com/articles/ipsop1.html, 2003.
[8] J. F. Schutte, J. A. Reinbolt, B. J. Fregly et al., "Parallel global optimization with the particle swarm algorithm," Int. J. Numer. Meth. Engng., vol. 61, pp. 2296-2315, 2004.
[9] X. H. Hu, "Particle swarm optimization," http://www.swarm-intelligence.org/, 2002.
[10] Ch. Skokos et al., "Particle swarm optimization: An efficient method for tracing periodic orbits in 3D galactic potentials," Mon. Not. Roy. Astron. Soc., vol. 359, pp. 251-260, 2005.
[11] I. Trelea, "The particle swarm optimization algorithm: convergence analysis and parameter selection," Inf. Process. Lett., vol. 85, no. 6, pp. 317-325, 2003.
[12] D. Corne, M. Dorigo, F. Glover, New Ideas in Optimization (Advanced Topics in Computer Science), McGraw-Hill, 2004.
[13] K. Parsopoulos, M. Vrahatis, "Computing periodic orbits of nonlinear mappings through particle swarm optimization," Proc. of the 4th GRACM Congress on Computational Mechanics, Patras, Greece, 2002.

