Minimizing the memory of a system

Ngoc Minh Dao*† and Dominikus Noll*

*Institut de Mathématiques, Université de Toulouse, France
†Department of Mathematics and Informatics, Hanoi National University of Education, Vietnam
Email: [email protected], [email protected]

Abstract—Consider a stable linear time-invariant system $G(x)$ with tunable parameters $x \in \mathbb{R}^n$, which maps inputs $w \in L^2(\mathbb{R}^m)$ to outputs $z \in L^2(\mathbb{R}^p)$. Our goal is to find a choice of the tunable parameters $x^*$ which avoids undesirable responses of the system to past excitations, known as system ringing. We address this problem by minimizing the Hankel norm $\|G(x)\|_H$ of $G(x)$, which quantifies the influence of past inputs on future outputs.

I. INTRODUCTION

Ringing generally designates undesired responses of a system to a past excitation. In electronic systems ringing arises under various forms of noise, such as gate ringing in converters, undesired oscillations in digital controllers, or input ring back in clock signals. In mechanical systems ringing effects, when combined with resonance, may accelerate breakdown. In audio systems ringing may cause echoes to occur before transients. In more abstract terms, ringing may be understood as a tendency of the system to store energy, which is retrieved later to produce undesired effects. One way to quantify this capacity uses the Hankel norm of a transfer function, which measures the effect of past inputs on future outputs.

Consider a stable LTI system
\[
G:\quad \dot x = Ax + Bw, \qquad z = Cx \tag{1}
\]
with state $x \in \mathbb{R}^{n_x}$, input $w \in \mathbb{R}^m$, and output $z \in \mathbb{R}^p$. If we think of $w(t)$ as an excitation at the input which acts over the time period $0 \le t \le T$, then the ring of the system after the excitation has stopped at time $T$ is $z(t)$ for $t > T$. If signals are measured in the energy norm, this leads to the definition of the Hankel norm of a system as
\[
\|G\|_H = \sup_{T > 0}\left\{ \left( \int_T^\infty \|z(t)\|^2\, dt \right)^{1/2} :\ \int_0^T \|w(t)\|^2\, dt \le 1,\ w(t) = 0 \text{ for } t > T \right\}. \tag{2}
\]

Here our interest is in systems (1) with tunable parameters $x \in \mathbb{R}^n$, that is, systems of the form
\[
G(x):\quad \dot x = A(x)x + B(x)w, \qquad z = C(x)x,
\]
where $A(x)$, $B(x)$, $C(x)$ depend smoothly on a design parameter $x$ varying in $\mathbb{R}^n$ or in some constrained subset of $\mathbb{R}^n$. Our goal is to tune $x$ such that system ringing is avoided or reduced. This leads to the Hankel norm optimization program
\[
\text{minimize } \|G(x)\|_H \quad \text{subject to } G(x) \text{ internally stable},\ x \in \mathbb{R}^n. \tag{3}
\]
We will discuss instances where program (3) may be of interest. Then we present an algorithm to solve (3) based on techniques from eigenvalue optimization: a nonsmooth optimization method, and a smooth relaxation based on work of Y. Nesterov [8]. Applications are presented in the final sections.

II. HANKEL FEEDBACK SYNTHESIS

A first instance of (3) is Hankel feedback synthesis. Consider an LTI plant $P(s)$ in standard form
\[
P:\quad \begin{bmatrix} \dot x \\ z \\ y \end{bmatrix} = \begin{bmatrix} A & B_1 & B_2 \\ C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & 0 \end{bmatrix} \begin{bmatrix} x \\ w \\ u \end{bmatrix}, \tag{4}
\]
where $x \in \mathbb{R}^{n_x}$ is the state, $u \in \mathbb{R}^{m_2}$ the control input, $w \in \mathbb{R}^{m_1}$ the exogenous input, $y \in \mathbb{R}^{p_2}$ the measured output, and $z \in \mathbb{R}^{p_1}$ the regulated output,
\[
P(s) = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix} (sI - A)^{-1} \begin{bmatrix} B_1 & B_2 \end{bmatrix} + \begin{bmatrix} D_{11} & D_{12} \\ D_{21} & 0 \end{bmatrix}.
\]
Let $u(s) = K(s)y(s)$ be a dynamic output feedback control law for (4), where
\[
K:\quad \begin{bmatrix} \dot x_K \\ u \end{bmatrix} = \begin{bmatrix} A_K & B_K \\ C_K & D_K \end{bmatrix} \begin{bmatrix} x_K \\ y \end{bmatrix}
\]
with $x_K \in \mathbb{R}^k$ the state of $K$. Then the closed-loop transfer function of the performance channel $w \to z$ has the state-space representation
\[
T_{w\to z}(K):\quad \begin{bmatrix} \dot\xi \\ z \end{bmatrix} = \begin{bmatrix} A(K) & B(K) \\ C(K) & D(K) \end{bmatrix} \begin{bmatrix} \xi \\ w \end{bmatrix}, \tag{5}
\]
where $\xi = (x, x_K)$ and
\[
A(K) = \begin{bmatrix} A + B_2 D_K C_2 & B_2 C_K \\ B_K C_2 & A_K \end{bmatrix}, \qquad
B(K) = \begin{bmatrix} B_1 + B_2 D_K D_{21} \\ B_K D_{21} \end{bmatrix},
\]
\[
C(K) = \begin{bmatrix} C_1 + D_{12} D_K C_2 & D_{12} C_K \end{bmatrix}, \qquad
D(K) = D_{11} + D_{12} D_K D_{21}.
\]
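These formulas translate directly into code. The following numpy sketch (ours, for illustration only; the matrix names follow the text above) assembles the closed-loop data of (5) from plant and controller data:

import numpy as np

def closed_loop(A, B1, B2, C1, C2, D11, D12, D21, AK, BK, CK, DK):
    """Return A(K), B(K), C(K), D(K) of (5) for the channel w -> z."""
    Acl = np.block([[A + B2 @ DK @ C2, B2 @ CK],
                    [BK @ C2,          AK     ]])
    Bcl = np.vstack([B1 + B2 @ DK @ D21, BK @ D21])
    Ccl = np.hstack([C1 + D12 @ DK @ C2, D12 @ CK])
    Dcl = D11 + D12 @ DK @ D21
    return Acl, Bcl, Ccl, Dcl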

We assume that $D(K) = 0$, which can be arranged by standard assumptions, e.g., $D_{11} = 0$ and either $D_{21} = 0$, or $D_{12} = 0$, or $K$ strictly proper. Minimizing the effects of past inputs on future outputs by way of an appropriate feedback may now be cast as the optimization program
\[
\text{minimize } \|T_{w\to z}(K)\|_H \quad \text{subject to } K \text{ stabilizes (4)},\ K = K(x),\ x \in \mathbb{R}^n. \tag{6}
\]

Here $K(x)$ refers to a controller which is structured in the sense of [1], and $x$ stands for the tunable parameters of $K(x)$. Typical examples of structured controllers include PIDs
\[
K_{\rm pid}(x) = \begin{bmatrix} r_i & 0 & 0 \\ 0 & -\tau & r_d \\ 1 & 1 & d_K \end{bmatrix},
\]
where $x$ regroups the parameters $r_i, r_d, d_K, \tau$; observer-based, decentralized, or fixed reduced-order controllers; and, more generally, control architectures combining basic building blocks like PIDs with filters, feedforward blocks, and much else (see [1], [6]). It is convenient to write $A(x) = A(K(x))$, etc., and $T_{w\to z}(x) = T_{w\to z}(K(x))$.

III. SYSTEM REDUCTION

System reduction is the most widely known application of Hankel norm optimization (3). It may be considered a special case of Hankel controller synthesis (6). Given a stable system
\[
G:\quad \dot x = Ax + Bw, \qquad z = Cx + Dw \tag{7}
\]
of order $n$, we wish to find a stable system
\[
G_{\rm red}:\quad \dot x = A_{\rm red}x + B_{\rm red}w, \qquad z = C_{\rm red}x + Dw
\]
of reduced order $n_{\rm red} < n$ with input-output behavior as close as possible to the original system $G$.

[Figure: block diagram of the model matching scheme, in which $w$ drives both $G$ and $G_{\rm red}$ and the mismatch $e = (G - G_{\rm red})w$ is formed at a summing junction.]

If the model matching error $e = (G - G_{\rm red})w$ is measured in the Hankel norm, then the problem
\[
\text{minimize } \|G - G_{\rm red}\|_H \quad \text{subject to } G - G_{\rm red} \text{ internally stable},\ x = (A_{\rm red}, B_{\rm red}, C_{\rm red}) \tag{8}
\]
is a special case of (6), where we define plant and controller as
\[
P:\ \begin{bmatrix} A & B & 0 \\ C & D & -I \\ 0 & I & 0 \end{bmatrix}, \qquad
K:\ \begin{bmatrix} A_{\rm red} & B_{\rm red} \\ C_{\rm red} & D \end{bmatrix}, \tag{9}
\]
the tunable parameters $x$ being the entries of $A_{\rm red}, B_{\rm red}, C_{\rm red}$. Glover [7] has shown how to compute an explicit solution to (8) when the matrices $A_{\rm red}, B_{\rm red}, C_{\rm red}$ are not further restricted to a specific pattern. Here we use this as a blind test of Algorithm 2 by applying it to problem (8). Recall that the use of the Hankel norm in system reduction (7), (8) is to some extent artificial and mainly motivated by the fact that it leads to a linear algebra solution. The more natural approach would be $H_\infty$-system reduction
\[
\text{minimize } \|G - G_{\rm red}\|_\infty \quad \text{subject to } G - G_{\rm red} \text{ internally stable},\ x = (A_{\rm red}, B_{\rm red}, C_{\rm red}, D_{\rm red}), \tag{10}
\]
which is usually considered too demanding algorithmically. This has changed with the solution of the structured $H_\infty$ control problem in [1]. The tool hinfstruct, developed since [1], [2], [9] and made available through [15] since 2010, allows one to perform $H_\infty$-system reduction as a special case of structured $H_\infty$-synthesis using (9). Solving (10) by a nonsmooth optimization method was first proposed in [1], and in Section X we will present tests related to (8) and (10).
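For illustration, the augmented plant of (9) is easily assembled numerically. A minimal numpy sketch (ours, not from the paper; the partition follows (4), with state dimension n, m inputs and p outputs assumed):

import numpy as np

def reduction_plant(A, B, C, D):
    """Assemble the standard-form data of plant P in (9)."""
    n, m = B.shape
    p = C.shape[0]
    # P = [A  B  0; C  D  -I; 0  I  0] in the partition (x | w | u), (z | y)
    B1, B2 = B, np.zeros((n, p))
    C1, D11, D12 = C, D, -np.eye(p)
    C2, D21, D22 = np.zeros((m, n)), np.eye(m), np.zeros((m, p))
    return A, B1, B2, C1, C2, D11, D12, D21, D22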

IV. MAXIMIZING THE MEMORY OF A SYSTEM

Within the present framework it is also possible to maximize the memory of a system $G$ via feedback, if a reference system $G_{\rm ref}$ with desirable memory properties is used. As an example we consider a 2-DOF synthesis scheme of the following form.

[Figure (11): 2-DOF synthesis scheme. The input $w$ drives a feedforward filter $F$ with output $v$ and the reference model $G_{\rm ref}$ with output $y_{\rm ref}$; the feedback controller $K$ acts on the error $e = w - y$ and produces $u$; the plant $G$ is driven by $u + v$ and produces $y$; the regulated outputs are the tracking error $z_1 = y - y_{\rm ref}$ and the control effort $z_2 = u + v$.]

Assuming that $G_{\rm ref}$ has desirable memory features which do not lead to ringing, the idea is to tune the parameters in the feedforward filter $F$ and the controller $K$ in such a way that $G$ in closed loop follows $G_{\rm ref}$, independently of the input $w$. In other words, the undesirable part of the memory of $G$, which contributes to the mismatch $z_1 = y - y_{\rm ref}$, is reduced by minimizing $\|T_{w\to z_1}(F, K)\|_H$. It may be beneficial to arrange this by adding a constraint $\|z_2\|_H \le \gamma_H$ or $\|z_2\|_\infty \le \gamma_\infty$, where $z_2 = u + v$, in order to avoid exceedingly large controller action. If $\|z_2\|_H \le \gamma_H$ is used, this problem can be cast as a special case of (6), where plant and decentralized controller structure are
\[
P:\ \left[\begin{array}{ccccc}
A & 0 & 0 & B & B \\
0 & A_{\rm ref} & B_{\rm ref} & 0 & 0 \\
C & -C_{\rm ref} & -D_{\rm ref} & D & D \\
0 & 0 & 0 & I & I \\
0 & 0 & I & 0 & 0 \\
-C & 0 & I & -D & -D
\end{array}\right], \qquad
K:\ \left[\begin{array}{cccc}
A_F & 0 & B_F & 0 \\
0 & A_K & 0 & B_K \\
C_F & 0 & D_F & 0 \\
0 & C_K & 0 & D_K
\end{array}\right].
\]

Notice that
\[
F:\ \begin{cases} \dot x_F = A_F x_F + B_F w \\ v = C_F x_F + D_F w \end{cases} \qquad
K:\ \begin{cases} \dot x_K = A_K x_K + B_K e \\ u = C_K x_K + D_K e \end{cases}
\]
can be further structured if we wish. In our experiment we will use a reduced-order filter $F$ and a PID structure for $K$.

V. HANKEL OPERATOR

As is well known, a representation of the Hankel norm $\|G\|_H$ amenable to computations is obtained through the observability and controllability Gramians. Using $x(-\infty) = 0$, the solution of (1) satisfies
\[
z(t) = \int_{-\infty}^{t} Ce^{A(t-\tau)}Bw(\tau)\,d\tau,
\]
and if we focus on input signals $w_-$ that live for times $t \le 0$ and vanish for $t > 0$, then the output restricted to $t \ge 0$ is
\[
z_+(t) = \int_{-\infty}^{0} Ce^{A(t-\tau)}Bw_-(\tau)\,d\tau, \quad t \ge 0.
\]
Now the Hankel operator $\Gamma_G : L^2(-\infty,0] \to L^2[0,\infty)$, defined by
\[
(\Gamma_G w)(t) = \int_{-\infty}^{0} Ce^{A(t-\tau)}Bw(\tau)\,d\tau, \quad t \ge 0,
\]
maps past inputs $w_-$ to future outputs $z_+ = \Gamma_G w_-$. Note that
\[
\langle w, \Gamma_G^* z\rangle_{L^2(-\infty,0]} = \langle \Gamma_G w, z\rangle_{L^2[0,\infty)}
= \int_0^\infty \left( \int_{-\infty}^0 w^\top(\tau) B^\top e^{A^\top(t-\tau)} C^\top\,d\tau \right) z(t)\,dt
= \int_{-\infty}^0 w^\top(\tau) \left( \int_0^\infty B^\top e^{A^\top(t-\tau)} C^\top z(t)\,dt \right) d\tau,
\]
which implies
\[
(\Gamma_G^* z)(\tau) = \int_0^\infty B^\top e^{A^\top(t-\tau)} C^\top z(t)\,dt, \quad \tau \le 0.
\]
Let $X$ and $Y$ be the controllability and observability Gramians of the system, i.e.
\[
X = \int_0^\infty e^{At} BB^\top e^{A^\top t}\,dt, \qquad Y = \int_0^\infty e^{A^\top t} C^\top C e^{At}\,dt.
\]
Suppose $\sigma$ is a nonzero singular value of $\Gamma_G$ and $w$ is an eigenvector corresponding to the eigenvalue $\sigma^2$ of $\Gamma_G^* \Gamma_G$, i.e. $\Gamma_G^* \Gamma_G w = \sigma^2 w$. Set $z(t) = (\Gamma_G w)(t) = Ce^{At}x_0$ with
\[
x_0 = \int_{-\infty}^0 e^{-A\tau} B w(\tau)\,d\tau.
\]
Then
\[
\sigma^2 w = \Gamma_G^* z = B^\top e^{-A^\top \tau} \int_0^\infty e^{A^\top t} C^\top C e^{At} x_0\,dt = B^\top e^{-A^\top \tau}\, Y x_0.
\]
Therefore,
\[
\sigma^2 x_0 = \int_{-\infty}^0 e^{-A\tau} B \sigma^2 w(\tau)\,d\tau = \int_{-\infty}^0 e^{-A\tau} BB^\top e^{-A^\top \tau}\,d\tau\; Y x_0 = XY x_0.
\]
We see that $x_0 \ne 0$, since otherwise $\sigma^2 w = 0$, which is impossible. Thus $\sigma^2$ is an eigenvalue of $XY$. Conversely, if $\sigma^2 \ne 0$ is an eigenvalue of $XY$ and $x_0 \ne 0$ is a corresponding eigenvector, i.e. $XYx_0 = \sigma^2 x_0$, then by setting $w = B^\top e^{-A^\top \tau} Y x_0$ we obtain $w \ne 0$ and $\Gamma_G^* \Gamma_G w = \sigma^2 w$. Hence
\[
\sigma_i^2(\Gamma_G) = \lambda_i(XY),
\]
where $\sigma_i(T)$ denotes the $i$th singular value of $T$ and $\lambda_i(Z)$ denotes the $i$th eigenvalue of a symmetric matrix $Z$. In particular, since $\Gamma_G^* \Gamma_G$ is self-adjoint,
\[
\|\Gamma_G\| = \sup_{0 \ne w_- \in L^2(-\infty,0]} \frac{\|z_+\|_2}{\|w_-\|_2} = \sigma_1(\Gamma_G) = \sqrt{\lambda_1(XY)}.
\]
This agrees with the definition (2) of the Hankel norm $\|G\|_H$ of the stable LTI system $G$ if we truncate the input signal at $-T < 0$, introduce a supremum over $T$, and use time-invariance to move to the right.

VI. CLARKE SUBDIFFERENTIAL OF THE HANKEL NORM IN CLOSED LOOP

In this section we prepare our nonsmooth optimization approach by computing Clarke subgradients of the closed-loop objective function
\[
f(x) = \|T_{w\to z}(x)\|_H^2 = \lambda_1\big(X(x)Y(x)\big)
\]
of (3). Here $\lambda_1$ denotes the maximum eigenvalue of a symmetric or Hermitian matrix, and $X(x)$ and $Y(x)$ are the controllability and observability Gramians, obtained from the Lyapunov equations
\[
A(x)X + XA^\top(x) + B(x)B^\top(x) = 0, \tag{12}
\]
\[
A^\top(x)Y + YA(x) + C^\top(x)C(x) = 0, \tag{13}
\]
with $A(x)$, etc., denoting closed-loop data (5). Notice that despite the symmetry of $X$ and $Y$ the product $XY$ need not be symmetric, but stability of $A(x)$ in closed loop guarantees $X \succ 0$, $Y \succ 0$ in (12), (13), so that we can write
\[
\lambda_1(XY) = \lambda_1\big(X^{1/2} Y X^{1/2}\big) = \lambda_1\big(Y^{1/2} X Y^{1/2}\big),
\]
which brings us back to the realm of eigenvalue theory of symmetric matrices. Using the spectral abscissa $\alpha(A) = \max\{\operatorname{Re}(\lambda) : \lambda \text{ eigenvalue of } A\}$ of a square matrix $A$, we can replace program (6) by the following program:
\[
\text{minimize } f(x) := \|T_{w\to z}(x)\|_H^2 \quad \text{subject to } c(x) := \alpha(A(x)) + \epsilon \le 0 \tag{14}
\]
for some fixed small $\epsilon > 0$. We have $X(x) \succ 0$ and $Y(x) \succ 0$ on the feasible set $C = \{x : c(x) \le 0\}$, so that $f$ is well-defined and locally Lipschitz on $C$. Let $M_{n,m}$ be the space of $n \times m$ matrices, equipped with the scalar product $\langle X, Y\rangle = \operatorname{Tr}(X^\top Y)$. The space of $m \times m$ symmetric matrices is denoted $S_m$. We define
\[
\mathcal{B}_m := \{X \in S_m : X \text{ positive semidefinite},\ \operatorname{Tr}(X) = 1\}.
\]
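Both ingredients of (14) are cheap to evaluate numerically. A minimal sketch of the constraint (ours; the map A_of_x returning the closed-loop A-matrix for a parameter vector x is an assumed callback, not part of the paper):

import numpy as np

def spectral_abscissa(A):
    return float(np.max(np.real(np.linalg.eigvals(A))))

def c(x, A_of_x, eps=1e-8):
    # Constraint of (14): feasible iff c(x) <= 0.
    return spectral_abscissa(A_of_x(x)) + eps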

Set $Z := X^{1/2} Y X^{1/2}$ and $Z_i(x) := \frac{\partial Z(x)}{\partial x_i}$, $i = 1, \dots, n$, and pick $Q$ to be a matrix whose columns form an orthonormal basis of the $t$-dimensional eigenspace associated with $\lambda_1(Z)$. By [10, Theorem 3], the Clarke subdifferential of $f$ at $x$ is the set
\[
\partial f(x) = \left\{ \big(\operatorname{Tr}(QUQ^\top Z_1(x)), \dots, \operatorname{Tr}(QUQ^\top Z_n(x))\big) : U \in \mathcal{B}_t \right\}.
\]
It now remains to compute $Z_i(x)$. In order to alleviate the notational burden of the formulas to come, we employ a standard trick to convert the dynamic feedback controller to the static feedback case. More precisely, the following substitutions are performed:
\[
K \to \begin{bmatrix} A_K & B_K \\ C_K & D_K \end{bmatrix}, \quad
A \to \begin{bmatrix} A & 0 \\ 0 & 0_k \end{bmatrix}, \quad
B_1 \to \begin{bmatrix} B_1 \\ 0 \end{bmatrix}, \quad
B_2 \to \begin{bmatrix} 0 & B_2 \\ I_k & 0 \end{bmatrix},
\]
\[
C_1 \to \begin{bmatrix} C_1 & 0 \end{bmatrix}, \quad
C_2 \to \begin{bmatrix} 0 & I_k \\ C_2 & 0 \end{bmatrix}, \quad
D_{12} \to \begin{bmatrix} 0 & D_{12} \end{bmatrix}, \quad
D_{21} \to \begin{bmatrix} 0 \\ D_{21} \end{bmatrix}.
\]
Then $A(K) = A + B_2KC_2$, $B(K) = B_1 + B_2KD_{21}$, $C(K) = C_1 + D_{12}KC_2$, $D(K) = D_{11} + D_{12}KD_{21}$. We have
\[
Z_i(x) = D_K Z(x) K_i(x) = \varphi_i Y X^{1/2} + X^{1/2} \psi_i X^{1/2} + X^{1/2} Y \varphi_i, \tag{15}
\]
where $K_i(x) := \frac{\partial K(x)}{\partial x_i}$, $\varphi_i := D_K X^{1/2} K_i(x)$, $\psi_i := D_K Y K_i(x)$, and where $D_K F$ denotes the derivative of $F$ with respect to $K$. From (12) and (13), putting $\phi_i := D_K X K_i(x)$, we obtain
\[
A\phi_i + \phi_i A^\top = -B_2 K_i(x) C_2 X - X\big(B_2 K_i(x) C_2\big)^\top - B_2 K_i(x) D_{21} B^\top - B\big(B_2 K_i(x) D_{21}\big)^\top, \tag{16}
\]
\[
A^\top \psi_i + \psi_i A = -\big(B_2 K_i(x) C_2\big)^\top Y - Y B_2 K_i(x) C_2 - \big(D_{12} K_i(x) C_2\big)^\top C - C^\top D_{12} K_i(x) C_2, \tag{17}
\]
by using the facts that $D_K A\, K_i(x) = B_2 K_i(x) C_2$, $D_K B\, K_i(x) = B_2 K_i(x) D_{21}$, and $D_K C\, K_i(x) = D_{12} K_i(x) C_2$. Since $X^{1/2} X^{1/2} = X$,
\[
X^{1/2} \varphi_i + \varphi_i X^{1/2} = \phi_i. \tag{18}
\]
Altogether, we obtain the following Algorithm 1 to compute subgradients of $f$ at $x$.

Algorithm 1. Computing subgradients.
Input: $x \in \mathbb{R}^n$. Output: $g \in \partial f(x)$.
1: Compute $K_i(x) = \frac{\partial K(x)}{\partial x_i}$, $i = 1, \dots, n$, and the solutions $X$, $Y$ of (12), (13), respectively.
2: Compute $X^{1/2}$ and $Z = X^{1/2} Y X^{1/2}$.
3: For $i = 1, \dots, n$ compute the solutions $\phi_i$ and $\psi_i$ of (16) and (17), respectively.
4: For $i = 1, \dots, n$ compute the solution $\varphi_i$ of (18) and $Z_i(x)$ using (15).
5: Determine a matrix $Q$ whose columns form an orthonormal basis of the $t$-dimensional eigenspace associated with $\lambda_1(Z)$.
6: Pick $U \in \mathcal{B}_t$ and return $g := \big(\operatorname{Tr}(QUQ^\top Z_1(x)), \dots, \operatorname{Tr}(QUQ^\top Z_n(x))\big)$, a subgradient of $f$ at $x$.

Here, depending on the controller structure $K(x)$, which is usually affine in $x$, the $K_i(x)$ may be pre-calculated to accelerate the algorithm. Similar pre-calculations are possible in (15)-(18).
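For illustration, Algorithm 1 can be realized with standard dense linear algebra. The following Python sketch is ours, not the authors' code; the closed-loop data are assumed to be given in the static-feedback form above, and Ki_list is an assumed list of the precomputed derivatives $K_i(x)$:

import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_sylvester, sqrtm

def subgradient(A, B, C, B2, C2, D12, D21, Ki_list, tol=1e-8):
    # Step 1: Gramians from (12), (13).
    X = solve_continuous_lyapunov(A, -B @ B.T)
    Y = solve_continuous_lyapunov(A.T, -C.T @ C)
    # Step 2: X^(1/2) and Z = X^(1/2) Y X^(1/2).
    Xh = np.real(sqrtm(X))
    Z = Xh @ Y @ Xh
    # Step 5: orthonormal basis Q of the eigenspace of lambda_1(Z).
    lam, V = np.linalg.eigh((Z + Z.T) / 2)
    Q = V[:, lam >= lam[-1] - tol]          # eigh sorts eigenvalues ascending
    t = Q.shape[1]
    U = np.eye(t) / t                       # any U in B_t; here U = I/t
    g = []
    for Ki in Ki_list:
        M, N, L = B2 @ Ki @ C2, B2 @ Ki @ D21, D12 @ Ki @ C2
        # Steps 3-4: phi from (16), psi from (17), varphi from (18).
        phi = solve_sylvester(A, A.T, -(M @ X + X @ M.T + N @ B.T + B @ N.T))
        psi = solve_sylvester(A.T, A, -(M.T @ Y + Y @ M + L.T @ C + C.T @ L))
        vphi = solve_sylvester(Xh, Xh, phi)
        Zi = vphi @ Y @ Xh + Xh @ psi @ Xh + Xh @ Y @ vphi   # (15)
        g.append(np.trace(Q @ U @ Q.T @ Zi))                 # step 6
    return np.array(g)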

VII. PROXIMITY CONTROL ALGORITHM

In this section we present the main Algorithm 2 to solve (3), respectively (6), where the constraint of internal stability in (3) is addressed through (14). Our approach uses a progress function at the current iterate $x$,
\[
F(\cdot, x) = \max\{ f(\cdot) - f(x) - \mu c(x)_+,\ c(\cdot) - c(x)_+ \},
\]
which is successively minimized. Antecedents of this idea can for instance be found in [11], [12], and in our own contributions [6], [9].

In [8] Nesterov proposes a method which solves specific convex nonsmooth programs by a smooth relaxation, with a lower complexity than the convex bundle method. His method replaces $f(x) = \lambda_1(Z(x))$ by the smooth approximation
\[
f_\mu(x) := \mu \ln \left[ \sum_{i=1}^m e^{\lambda_i(Z(x))/\mu} \right]
\]
with a tolerance parameter $\mu > 0$. We see that
\[
f(x) \le f_\mu(x) \le f(x) + \mu \ln m.
\]
Therefore, to find an $\epsilon$-solution $\bar x$ of problem (14), we find an $\frac{\epsilon}{2}$-solution of the smooth problem
\[
\min\{ f_\mu(x) : c(x) \le 0 \} \tag{19}
\]
with $\mu = \frac{\epsilon}{2\ln m}$. We have used this idea to initialize the nonsmooth Algorithm 2. The smoothed problem (19) can be solved using standard NLP software.
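A numerically stable way to evaluate the smoothed objective is the usual log-sum-exp shift. A minimal sketch (ours; Z_of_x, returning the symmetric matrix $Z(x) = X^{1/2} Y X^{1/2}$, is an assumed callback):

import numpy as np

def f_mu(x, Z_of_x, mu):
    # f_mu(x) = mu * ln(sum_i exp(lambda_i(Z(x)) / mu)), shifted for stability.
    lam = np.linalg.eigvalsh(Z_of_x(x))
    shift = lam.max()
    return shift + mu * np.log(np.sum(np.exp((lam - shift) / mu)))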

Algorithm 2. Proximity control with downshifted tangents.
Parameters: $0 < \gamma < \widetilde\gamma < 1$, $0 < \gamma < \Gamma < 1$, $0 < q < \infty$, $0 < c < \infty$, $q < T \le \infty$.
1: Initialize outer loop. Choose an initial iterate $x^1$ and a matrix $Q_1 = Q_1^\top$ with $-qI \preceq Q_1 \preceq qI$. Initialize the memory control parameter $\tau_1^\sharp$ such that $Q_1 + \tau_1^\sharp I \succ 0$. Put $j = 1$.
2: Stopping test. At outer loop counter $j$, stop if $0 \in \partial_1 F(x^j, x^j)$. Otherwise, go to inner loop.
3: Initialize inner loop. Put inner loop counter $k = 1$ and initialize $\tau_1 = \tau_j^\sharp$. Build the initial working model
\[
F_1(\cdot, x^j) = g_0^{j\top}(\cdot - x^j) + \tfrac{1}{2}(\cdot - x^j)^\top Q_j (\cdot - x^j),
\]
where $g_0^j \in \partial_1 F(x^j, x^j)$.
4: Trial step generation. Compute
\[
y^k = \operatorname*{argmin}_y\; F_k(y, x^j) + \frac{\tau_k}{2}\|y - x^j\|^2.
\]
5: Acceptance test. If
\[
\rho_k = \frac{F(y^k, x^j)}{F_k(y^k, x^j)} \ge \gamma,
\]
put $x^{j+1} = y^k$ (serious step), quit inner loop and go to step 8. Otherwise (null step), continue inner loop with step 6.
6: Update working model. Generate a cutting plane $m_k(\cdot, x^j) = a_k + g_k^\top(\cdot - x^j)$ at the null step $y^k$ and counter $k$, where $g_k \in \partial_1 F(y^k, x^j) - Q_j(y^k - x^j)$, $a_k = t_k(x^j) - s_k$,
\[
t_k(\cdot) = F(y^k, x^j) - \tfrac{1}{2}(y^k - x^j)^\top Q_j(y^k - x^j) + g_k^\top(\cdot - y^k),
\]
and $s_k = t_k(x^j)_+ + c\|y^k - x^j\|^2$. Compute the aggregate plane $m_k^*(\cdot, x^j) = a_k^* + g_k^{*\top}(\cdot - x^j)$ at $y^k$, where $g_k^* = (Q_j + \tau_k I)(x^j - y^k)$ and
\[
a_k^* = F_k(y^k, x^j) - \tfrac{1}{2}(y^k - x^j)^\top Q_j(y^k - x^j) + g_k^{*\top}(x^j - y^k).
\]
Build the new working model
\[
F_{k+1}(\cdot, x^j) = \max\left\{ F_k(\cdot, x^j),\ m_k(\cdot, x^j) + F^{[2]}(\cdot, x^j),\ m_k^*(\cdot, x^j) + F^{[2]}(\cdot, x^j) \right\},
\]
with $F^{[2]}(\cdot, x^j) = \tfrac{1}{2}(\cdot - x^j)^\top Q_j(\cdot - x^j)$.
7: Update proximity control parameter. Compute the secondary control parameter
\[
\widetilde\rho_k = \frac{F_{k+1}(y^k, x^j)}{F_k(y^k, x^j)}
\]
and put
\[
\tau_{k+1} = \begin{cases} \tau_k & \text{if } \widetilde\rho_k < \widetilde\gamma, \\ 2\tau_k & \text{if } \widetilde\rho_k \ge \widetilde\gamma. \end{cases}
\]
Increase inner loop counter $k$ and loop back to step 4.
8: Update $Q_j$ and memory element. Update the matrix $Q_j \to Q_{j+1}$, respecting $Q_{j+1} = Q_{j+1}^\top$ and $-qI \preceq Q_{j+1} \preceq qI$. Then store the new memory element
\[
\tau_{j+1}^\sharp = \begin{cases} \tau_{k+1} & \text{if } \rho_k < \Gamma, \\ \tfrac{1}{2}\tau_{k+1} & \text{if } \rho_k \ge \Gamma. \end{cases}
\]
Increase $\tau_{j+1}^\sharp$ if necessary to ensure $Q_{j+1} + \tau_{j+1}^\sharp I \succ 0$. If $\tau_{j+1}^\sharp > T$, re-set $\tau_{j+1}^\sharp = T$. Increase outer loop counter $j$ and loop back to step 2.
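The following Python skeleton (ours, heavily simplified) shows only the control flow of Algorithm 2. The objects model and trial_step, which maintain the working model and solve the tangent program of step 4, are assumed callbacks; the update of $Q_j$ in step 8 is omitted, and the stopping test uses the practical criterion described in Experiment 1 below:

import numpy as np

def proximity_control(x, tau, F, model, trial_step,
                      gamma=0.25, gamma_tilde=0.75, Gamma=0.5,
                      T=1e8, tol=1e-7, max_inner=50):
    while True:                                   # outer loop over x^j
        serious = False
        for _ in range(max_inner):                # inner loop over y^k
            y = trial_step(x, tau, model)         # step 4: prox trial step
            if np.linalg.norm(y - x) < tol * (1 + np.linalg.norm(x)):
                return x                          # step 2: 0 in d_1 F(x, x)
            rho = F(y, x) / model.value(y, x)     # step 5: acceptance test
            if rho >= gamma:
                serious = True                    # serious step
                break
            old = model.value(y, x)               # null step:
            model.add_cut(y, x)                   # step 6: cut + aggregate
            if model.value(y, x) / old >= gamma_tilde:
                tau *= 2.0                        # step 7: tighten proximity
        if not serious:
            return x                              # inner loop exhausted
        tau = min(T, tau if rho < Gamma else tau / 2.0)  # step 8: memory
        x = y
        model.reset(x)                            # rebuild working model at x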

VIII. SYSTEMS WITH DIRECT TRANSMISSION

In Sections I and II we worked under the hypothesis that the tunable system (1) has no direct transmission term $D$. This is explained by the fact that $D$ does not create memory effects, so that the Hankel norm (2) does not see it. Put differently, $\|\cdot\|_H$ is only a semi-norm, and not a norm, on the space of transfer functions $G = (A, B, C, D)$. In practical situations like Hankel synthesis (6), having to force $D(x) = 0$ appears rather restrictive, and it seems desirable to extend the approach (3), (6) to tunable systems $G(x)$ with direct transmission $D(x) \ne 0$.

We propose two methods to approach this problem. The first idea is to simply apply (3) to the system $(A(x), B(x), C(x), 0)$ despite the presence of a term $D(x) \ne 0$. Namely, in experiments we often observe that decrease of $f(x) = \|(A(x), B(x), C(x), 0)\|_H$ in Algorithm 2 ultimately forces decrease of the direct transmission term, in the sense that $\sigma(D(x))$ gets reasonably small (see, e.g., Experiment 3 for such a case). Informally, we say that the Hankel norm acts essentially like a norm. A formal statement would be the validity of an estimate $\sigma(D(x)) \le c\|(A(x), B(x), C(x), 0)\|_H$ for some $c > 0$ and all $x$. However, we do not check such an estimate in practice. We simply run Algorithm 2 on $f(x)$ and monitor the behavior of $\sigma(D(x))$ in order to see whether it decreases reasonably.

A second, more principled way to deal with a direct transmission $D(x) \ne 0$ is as follows. Recall the estimate $\|G\|_H \le \|G\|_\infty$ for systems (1). This suggests extending the Hankel norm to systems $G = (A, B, C, D)$ by setting
\[
\|G\|_H = \max\{ \|(A, B, C, 0)\|_H,\ \sigma(D) \}, \tag{20}
\]

because (20) agrees with the old definition in the case $D = 0$, and preserves the estimate $\|G\|_H \le \|G\|_\infty$. Surprisingly, this extension is not a mere heuristic; it can even be given a physical meaning. Writing $f(x) = \|(A(x), B(x), C(x), 0)\|_H$ and $g(x) = \sigma(D(x))$, we consider the constrained optimization program
\[
\min\{ f(x) : g(x) \le \gamma,\ x \in \mathbb{R}^n \}. \tag{21}
\]
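For completeness, the extension (20) in code form; a self-contained sketch (ours) along the lines of the Section V computation:

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def hankel_norm_ext(A, B, C, D):
    X = solve_continuous_lyapunov(A, -B @ B.T)
    Y = solve_continuous_lyapunov(A.T, -C.T @ C)
    hankel = np.sqrt(np.max(np.real(np.linalg.eigvals(X @ Y))))
    sigma_D = np.linalg.svd(np.atleast_2d(D), compute_uv=False).max()
    return max(hankel, sigma_D)   # definition (20)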

For this program we have the following result.

Proposition 1: Suppose $x^*$ is a critical point of the extended Hankel norm optimization program (3), (20). Then $x^*$ is a Fritz John critical point of program (21) for a suitable choice of $\gamma$. In particular, $x^*$ is either a KKT point of (21), or a critical point of $g$ alone.

Proof: We discuss the cases $f(x^*) < g(x^*)$, $f(x^*) > g(x^*)$ and $f(x^*) = g(x^*)$ separately. In the first case we clearly have $0 \in \partial g(x^*)$, so $x^*$ is a critical point of $g$ alone. Naturally, this is also a Fritz John critical point of (21) if we

put $\gamma = g(x^*)$. In the case $f(x^*) > g(x^*)$, $x^*$ is a critical point of $f$ alone, hence an unconstrained critical point of (21) for any value $\gamma > g(x^*)$. Finally, the most interesting case is $f(x^*) = g(x^*)$. Here the optimality condition for a maximum function $F(x) = \max\{f(x), g(x)\}$ in [5], $0 \in \partial F(x^*)$, gives multipliers $\lambda \ge 0$, $\mu \ge 0$, $\lambda + \mu = 1$, such that $0 \in \lambda\partial f(x^*) + \mu\partial g(x^*)$. Now in case $\lambda > 0$ we have $0 \in \partial f(x^*) + (\mu/\lambda)\partial g(x^*)$, which shows that $x^*$ is a KKT point of (21) as soon as we put $\gamma = g(x^*)$. The case $\lambda = 0$ gives $\mu = 1$, hence $x^*$ is again a critical point of $g$ alone. $\square$

The consequence of Proposition 1 is the following. We can interpret the extended Hankel norm optimization program (3), (20) as a trade-off between minimizing the memory effect of $(A(x), B(x), C(x), 0)$ subject to a constraint $g(x) \le \gamma$, or dually, as minimizing $g$ subject to a constraint on the memory effects of $(A(x), B(x), C(x), 0)$. Since $f(x)$ is a valid measure of the memory or ringing effects of $G$, such an interpretation is meaningful.

IX. EXPERIMENT 1: HANKEL FEEDBACK SYNTHESIS

In this section we apply program (6) to a classical 1-DOF control system design, using an example from [4, Chapter 2]. The open-loop system $G$, the exogenous input $w$, and the regulated output $z$ are given by
\[
G(s) = \frac{10 - s}{s^2(10 + s)}, \qquad w = \begin{bmatrix} d \\ n_y \\ r \end{bmatrix}, \qquad z = \begin{bmatrix} y_p \\ u \end{bmatrix}.
\]
The corresponding plant is
\[
P:\ \begin{bmatrix} A & B_1 & B_2 \\ C_1 & 0 & D_{12} \\ C_2 & D_{21} & 0 \end{bmatrix},
\]
where
\[
A = \begin{bmatrix} -10 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad
B_1 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad
B_2 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},
\]
\[
C_1 = \begin{bmatrix} 0 & -1 & 10 \\ 0 & 0 & 0 \end{bmatrix}, \quad
D_{12} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad
C_2 = \begin{bmatrix} 0 & 1 & -10 \end{bmatrix}, \quad
D_{21} = \begin{bmatrix} 0 & -1 & 1 \end{bmatrix}.
\]
Following [4], we use the controller structure
\[
K(x) = \frac{as^2 + bs + c}{s^3 + ms^2 + ns + p}
= \left[\begin{array}{ccc|c}
-m & -n & -p & 1 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\ \hline
a & b & c & 0
\end{array}\right],
\]
where $x = [m, n, p, a, b, c]^\top$ regroups the unknown tunable parameters. This allows us to compare our results with the methods in [4], e.g. with the controller
\[
K_b = \frac{219.6s^2 + 1973.95s + 724.5}{s^3 + 19.15s^2 + 105.83s + 965.95}
\]
synthesized in that reference. We also compare the optimal Hankel controller $K_H$ to the optimal $H_\infty$ controller $K_\infty$ with the same structure,
\[
K_\infty = \frac{7941.9s^2 + 13028.4s + 3611.6}{s^3 + 3206.2s^2 + 12528.3s + 11078.3},
\]
which we computed using the Matlab function hinfstruct based on [1].

In order to compute $K_H$, we start Algorithm 2 at the initial stabilizing controller
\[
x^1 = [3206.2,\ 12528.3,\ 11078.3,\ 7941.9,\ 13028.4,\ 3611.6]^\top
\]
with $f(x^1) = 11.0658$, using the stability constraint $c(x) = \alpha(A(x)) + \epsilon$ with $\epsilon = 10^{-8}$. We used the following two-stage stopping test. If the inner loop at $x^j$ finds a serious iterate $x^{j+1}$ satisfying
\[
\frac{|f(x^{j+1}) - f(x^j)|}{1 + |f(x^j)|} < 10^{-8},
\]
then $x^{j+1}$ is accepted as the final solution. On the other hand, if the inner loop is unable to find a serious step and produces three consecutive unsuccessful trial steps $y^k$ satisfying
\[
\frac{\|x^j - y^k\|}{1 + \|x^j\|} < 10^{-7},
\]
or if the maximum number of 50 allowed steps $k$ in the inner loop is reached, then we decide that $x^j$ is already optimal. Both tests are based on the observation that $0 \in \partial_1 F(x^j, x^j)$ if and only if $y^k = x^j$ is a solution of the tangent program in the trial step generation (see [6] for theoretical results).

The optimal controller obtained was
\[
x^* = [3218.2,\ 12978.4,\ 11509.4,\ 7665.2,\ 12395.6,\ 3526.5]^\top
\]
with $f(x^*) = 10.7493$, meaning $\|T_{w\to z}(P, K_H)\|_H = 3.2786$, where
\[
K_H := K(x^*) = \frac{7665.2s^2 + 12395.6s + 3526.5}{s^3 + 3218.2s^2 + 12978.4s + 11509.4}.
\]

[Fig. 1. Unit step (left), white noise (middle), and sinc signal (right), truncated at T = 3. Plots show ringing for controllers $K_b$ (blue), $K_H$ (red), and $K_\infty$ (green).]
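To make the experiment concrete, the following self-contained sketch (ours) evaluates $f(x) = \|T_{w\to z}(x)\|_H^2$ for this example, using the plant data and the companion realization of $K(x)$ as reconstructed above:

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A   = np.array([[-10., 0, 0], [1, 0, 0], [0, 1, 0]])
B1  = np.array([[1., 0, 0], [0, 0, 0], [0, 0, 0]])
B2  = np.array([[1.], [0], [0]])
C1  = np.array([[0., -1, 10], [0, 0, 0]])
C2  = np.array([[0., 1, -10]])
D12 = np.array([[0.], [1]])
D21 = np.array([[0., -1, 1]])

def f(x):
    m_, n_, p_, a_, b_, c_ = x
    AK = np.array([[-m_, -n_, -p_], [1, 0, 0], [0, 1, 0]])
    BK = np.array([[1.], [0], [0]])
    CK = np.array([[a_, b_, c_]])
    DK = np.zeros((1, 1))                       # K(x) is strictly proper
    Acl = np.block([[A + B2 @ DK @ C2, B2 @ CK], [BK @ C2, AK]])
    Bcl = np.vstack([B1 + B2 @ DK @ D21, BK @ D21])
    Ccl = np.hstack([C1 + D12 @ DK @ C2, D12 @ CK])
    X = solve_continuous_lyapunov(Acl, -Bcl @ Bcl.T)
    Y = solve_continuous_lyapunov(Acl.T, -Ccl.T @ Ccl)
    return np.max(np.real(np.linalg.eigvals(X @ Y)))  # lambda_1(XY)

x1 = np.array([3206.2, 12528.3, 11078.3, 7941.9, 13028.4, 3611.6])
print(f(x1))   # the paper reports f(x^1) = 11.0658 at this initial point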

X. EXPERIMENT 2: HANKEL SYSTEM REDUCTION

Our tests use the 15th order Rolls-Royce Spey gas turbine engine model described in [13, Chapter 11], with data available for download on I. Postlethwaite's homepage as aero0.mat. For $k = 1, 2, \dots, 14$, using Algorithm 2, we computed reduced-order systems $G_{\rm red}$ of order $k$, and compared the achieved objective value in (6), which in this case is $f(x) = \|G - G_{\rm red}(x)\|_H$, with the theoretically known optimal Hankel norm approximation error $\|G - G_{\rm red}\|_H = \sigma_{k+1}(G)$, the $(k+1)$st Hankel singular value of $G$. Columns 1-5 of Table I give the results. As can be seen, only at order 14 does our method show some difficulties, due to the fact that the possible reduction is very slight.

TABLE I
HANKEL REDUCTION $G_{\rm red}$ VIA (6), (9). OPTIMAL FUNCTION VALUES $\|G - G_{\rm red}\|_H$ COMPARED TO THEORETICAL VALUES $\sigma_{k+1}$ OBTAINED BY hankelmr.

order  sigma_{k+1}  iter   cpu     ||G-Gred||_H  ||G-Gred||_inf  ||G-Gred,inf||_inf
1      4.046418      43    22.3    4.046418      7.885545        4.431248
2      2.754623     149    26.1    2.754625      4.436838        2.843469
3      1.763527     174   322.1    1.763530      3.570347        1.953249
4      1.296531     422  2458.1    1.299550      2.507329        1.323714
5      0.629640      88    78.1    0.629640      1.229837        0.638380
6      0.166886     204   238.9    0.166889      0.326294        0.168613
7      0.093407      93   642.5    0.093408      0.164105        0.093621
8      0.022193     197  1248.0    0.022373      0.035018        0.022821
9      0.015669      37    83.3    0.015682      0.030308        0.015774
10     0.013621      16    45.0    0.013628      0.026986        0.013647
11     0.003997     140   380.0    0.003997      0.008041        0.004001
12     0.001179      57   488.4    0.001179      0.002271        0.001181
13     0.000324      24   224.2    0.000324      0.000620        0.000324
14     0.000033       5   229.6    0.000040      0.000062        0.000033

For illustration we also used the gas turbine engine model to test $H_\infty$-norm system reduction (10), computing models $G_{\rm red,\infty}$ of reduced order $k = 1, 2, \dots, 14$ for the 15th order model $G$. The results are given in columns 6 and 7 of Table I. After defining $P$ according to (9), we use the Matlab function hinfstruct, built on the prototype [1], to compute $G_{\rm red,\infty}$. The procedure may be accelerated by choosing a Hankel norm reduction $G_{\rm red}$ to initialize the optimization, so that the minimum is reached very fast. It is interesting to notice that $\|G - G_{\rm red}\|_\infty$ is usually significantly larger than $\|G - G_{\rm red,\infty}\|_\infty$, so contrary to what is often claimed in the literature, the Hankel norm reduction $G_{\rm red}$ is not a good approximation of $G_{\rm red,\infty}$. This can also be seen in Fig. 2, where impulse responses are displayed.

[Fig. 2. Impulse responses for three channels of $G$ (continuous lines), compared with reduced models (dashed): match DC (left), truncated (middle left), Hankel (middle right), $H_\infty$ (right).]

XI. EXPERIMENT 3: MAXIMIZING MEMORY

Here we present an illustrative example for (11), where $G$ and $G_{\rm ref}$ are defined as
\[
G(s) = \frac{1}{s - 1}, \qquad G_{\rm ref}(s) = \frac{11.11}{s^2 + 6s + 11.11}.
\]
The filter $F$ is chosen of order 2, which leads to 5 tunable parameters, whereas $K$ is a PID
\[
K(s) = k_p + \frac{k_i}{s} + \frac{k_d s}{T_f s + 1},
\]
adding another 4 unknowns. We have added a low-pass filter $W_1(s) = \frac{0.25s + 0.6}{s + 0.006}$ to the tracking signal $z_1$, and a weighting filter $W_2(s) = \frac{s}{s + 0.001}$ on the control output $z_2 = u + v$. The Hankel controller $K = (K_H, F_H)$ computed by Algorithm 2 is
\[
F_H(s) = \frac{-10.5691s^2 - 1.6640s - 0.0671}{s^2 + 0.6266s + 0.0618}, \qquad
K_H(s) = 11.4876 + \frac{1.0290 \times 10^{-5}}{s} - \frac{0.3105}{1.4055s + 1}.
\]

[Fig. 3. Experiment 3. Bearing of the algorithm, with panels showing $f$, $c$ and $\sigma(D(K))$, the step size, the number of iterations in each inner loop, and the memory control parameter over the outer loop iterations. The top left image shows that ultimately, decrease of the objective $f(x) = \|(A(x), B(x), C(x), 0)\|_H$ (solid) forces decrease of $\sigma(D(x))$ (dashed), so that $\|\cdot\|_H$ acts like a norm.]
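The PID structure used for $K$ above admits a standard 2-state realization (our assumed realization, not taken from the paper):

import numpy as np

def pid_ss(kp, ki, kd, Tf):
    # Realizes K(s) = kp + ki/s + kd*s/(Tf*s + 1): an integrator state for
    # the ki/s term and a filter state with pole -1/Tf for the derivative.
    AK = np.array([[0., 0.], [0., -1.0 / Tf]])
    BK = np.array([[1.], [1.0 / Tf]])
    CK = np.array([[ki, -kd / Tf]])
    DK = np.array([[kp + kd / Tf]])
    return AK, BK, CK, DK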

For comparison, we have also synthesized the optimal $H_\infty$ control architecture of the same structure, which leads to
\[
F_\infty(s) = \frac{-9.9366s^2 - 1.5077s - 0.0349}{s^2 + 0.9969s + 0.0273}, \qquad
K_\infty(s) = 11.5131 + \frac{0.2673}{s} - \frac{0.5507}{0.9884s + 1}.
\]

[Fig. 4. Experiment 3. Comparison of the optimal Hankel controller $(F_H, K_H)$ (red) and the $H_\infty$ controller $(F_\infty, K_\infty)$ (green). Left image compares the step responses from $r$ to $y$ with the reference output $y_{\rm ref}$ (blue line). Right image shows the step responses from $r$ to $y - y_{\rm ref}$.]

The achieved $H_\infty$ norms are
\[
\|T_{w\to(W_1 z_1, W_2 z_2)}(K_\infty)\|_\infty = 1.0322 < \|T_{w\to(W_1 z_1, W_2 z_2)}(K_H)\|_\infty = 1.2191,
\]
while the achieved Hankel norms are
\[
\|T_{w\to(W_1 z_1, W_2 z_2)}(K_H)\|_H = 0.8783 < \|T_{w\to(W_1 z_1, W_2 z_2)}(K_\infty)\|_H = 1.0277.
\]

XII. CONCLUSION

We have proposed a new methodology to reduce unwanted ringing effects in a tunable system $G(x)$. The problem was addressed by minimizing the Hankel norm $\|G(x)\|_H$ of $G(x)$, cast as an eigenvalue optimization program for the associated Hankel operator. We have proposed a nonsmooth algorithm, which was shown to solve a variety of test problems successfully, and whose convergence to a critical point was previously established in [6]. A smooth heuristic, based on work of Nesterov [8], was added and used to initialize the algorithm with a favorable initial seed.

REFERENCES

[1] P. Apkarian, D. Noll. Nonsmooth $H_\infty$ synthesis. IEEE Trans. Automat. Control, 51 (2006), no. 1, 71-86.
[2] P. Apkarian, D. Noll. Nonsmooth optimization for multidisk $H_\infty$ synthesis. Eur. J. Control, 12 (2006), no. 3, 229-244.
[3] V. Bompart, P. Apkarian, D. Noll. Nonsmooth techniques for stabilizing linear systems. Proc. American Control Conf., New York, 2007.
[4] S. Boyd, C. Barratt. Linear Controller Design. Prentice Hall, New York, 1991.
[5] F. H. Clarke. Optimization and Nonsmooth Analysis. John Wiley & Sons, Inc., New York, 1983.
[6] M. Gabarrou, D. Alazard, D. Noll. Design of a flight control architecture using a non-convex bundle method. Math. Control Signals Syst., doi:10.1007/s00498-012-0093-z.
[7] K. Glover. All optimal Hankel-norm approximations of linear multivariable systems and their $L^\infty$-error bounds. Internat. J. Control, 39 (1984), no. 6, 1115-1193.
[8] Y. Nesterov. Smoothing technique and its applications in semidefinite optimization. Math. Program., Ser. A, 110 (2007), no. 2, 245-259.
[9] D. Noll, O. Prot, A. Rondepierre. A proximity control algorithm to minimize nonsmooth and nonconvex functions. Pac. J. Optim., 4 (2008), no. 3, 569-602.
[10] M. L. Overton. Large-scale optimization of eigenvalues. SIAM J. Optim., 2 (1992), no. 1, 88-120.
[11] E. Polak. Optimization: Algorithms and Consistent Approximations. Appl. Math. Sci. 124, Springer-Verlag, New York, 1997.
[12] E. Polak, Y. Wardi. Nondifferentiable optimization algorithm for designing control systems having singular value inequalities. Automatica, 18 (1982), no. 3, 267-283.
[13] S. Skogestad, I. Postlethwaite. Multivariable Feedback Control. John Wiley & Sons, Inc., Chichester, 2005.
[14] K. Zhou, J. Doyle, K. Glover. Robust and Optimal Control. Prentice Hall, Upper Saddle River, 1996.
[15] The MathWorks, Inc. Robust Control Toolbox.
