J Glob Optim DOI 10.1007/s10898-016-0480-y

Decoupling linear and nonlinear regimes: an evaluation of efficiency for nonlinear multidimensional optimization

Christopher M. Cotnoir¹ · Balša Terzić¹,²

Received: 15 October 2015 / Accepted: 20 October 2016 © Springer Science+Business Media New York 2016

Abstract  The solution of a large subset of multidimensional nonlinear optimization problems can be significantly improved by decoupling their intrinsically linear and nonlinear parts. This effectively decreases the dimensionality of the problem, reduces the search space and improves the efficiency of the optimization. This decoupled approach is generalized with mathematical formalism, and its superiority over standard methods is empirically verified and quantified on a couple of examples involving χ² curve fitting to data.

Keywords  Multidimensional nonlinear optimization · Nonlinear chi-square fitting · Genetic algorithm

1 Introduction

Multidimensional nonlinear optimization problems arise in nearly all technical fields of human endeavor: from physics [1–3] to biology [4], engineering, finance and many others [5, and references therein]. The major difficulty in solving these problems is that the parameter search space (the function domain) grows exponentially with the dimensionality of the problem. For example, if each of the D "knobs to be turned" (parameters) had n discrete settings, the total number of states (the search space) of such a system would be n^D. Naturally, as the vastness of the search space increases, so does the computational cost of the solution.

Most multidimensional nonlinear optimization problems are not nonlinear in each of their parameters. While f(x; a, b) = ax^b is a nonlinear function, it is linear in the parameter a. In such cases, the problem can be broken up into nonlinear and linear parts which are solved separately [6–12]. A detailed review of the origin and history of this variable projection method is given in [5]. Because linear problems are much easier to solve than nonlinear ones, such

✉ Balša Terzić
[email protected]

1 Department of Physics, Old Dominion University, Norfolk, VA 23529, USA
2 Center for Accelerator Science, Old Dominion University, Norfolk, VA 23529, USA


a separate treatment promises a substantial improvement over the standard implementation by effectively reducing the dimensionality of the problem. Previous work on the variable projection method used it on test problems in the process of evaluating nonlinear least squares [5,7,8, and references therein], using interval methods [9] and simplicial partitioning [10–12]. In this paper, a detailed prescription is presented for how the decoupling is implemented and generalized to any objective function for which it is applicable, including—but not limited to—nonlinear least squares. The decoupling approach is then combined with nature-inspired direct searches—particle swarm and genetic algorithm—to drastically improve convergence. This is the first implementation of the decoupling approach with such nature-inspired methods. The superior efficiency of the decoupled implementation over the traditional approach is clearly confirmed by comparing optimizations on identical problems. The improvements are quantified in terms of convergence rates and execution times. The remainder of the paper is organized as follows. In Sect. 2, the decoupled approach is detailed. Section 3 outlines how the decoupled problem can be solved using derivative-based methods and direct searches; the implementation with nature-inspired direct searches—particle swarm and genetic algorithm—is described in detail. In Sect. 4, examples are used to demonstrate the improvement of the decoupled approach over standard methods for nonlinear least squares fits and for general function minimization. In Sect. 5, we discuss the results and conclude.

2 Decoupled approach: recasting the optimization problem

In this section we detail the decoupled approach methodology, in order to make it easily applicable to any general optimization problem, including nonlinear least squares. While previous work [6–11] uses the decoupled approach for the nonlinear least squares problem, what we present in this section is, to the best of our knowledge, the first detailed derivation of this approach for a general nonlinear function.

The standard nonlinear multidimensional optimization problem can be cast as

    Minimize    f(q)
    Subject to  g_j(q) < 0,   j = 1, 2, \dots, J,
                h_k(q) = 0,   k = 1, 2, \dots, K,   (1)

where q = (q_1, q_2, \dots, q_D), with D the dimensionality of the problem (the number of independent variables), J the number of inequality constraints and K the number of equality constraints. For J = K = 0, the problem is unconstrained.

The decoupled approach can be used for any objective function that takes the form

    G(A, B) = \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} f_i(A)\, f_j(A)\, \alpha_{ij}(B) - \sum_{i=1}^{n} f_i(A)\, \beta_i(B) + \gamma(B).   (2)

Here, G represents the objective function to be minimized, A is a set of n linear parameters, B is a set of nonlinear parameters, f is a set of n functions, α is an n × n symmetric matrix of functions, β is a column vector of n functions and γ(B) is a function of the nonlinear parameters. Of course, maximization of f(q) can be trivially recast as minimization of −f(q).


It is required that the system of n equations given by

    a = f(A)   (3)

is analytically solvable for A in terms of a:

    A = f^{-1}(a).   (4)

After this condition is satisfied, we can apply the decoupled approach to G. We start by replacing the functions f by new parameters a, yielding

    G(a, B) = \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j \alpha_{ij}(B) - \sum_{i=1}^{n} a_i \beta_i(B) + \gamma(B).   (5)

Setting the derivatives with respect to each parameter to zero yields a linear and a nonlinear system of equations. They are given by, respectively,

    \frac{\partial G(a, B)}{\partial a_k} = 0, \qquad \frac{\partial G(a, B)}{\partial B_m} = 0.   (6)

The linear system can be evaluated as

    \sum_{i=1}^{n} a_i \alpha_{ki}(B) = \beta_k(B),   (7)

and be represented in matrix notation by

    \alpha(B)\, a = \beta(B),   (8)

where α is the matrix of functions of B, a is the column vector of solution parameters and β is the column vector of functions of B. The linear parameters are therefore given as a function of the nonlinear parameters by

    a = \alpha(B)^{-1} \beta(B).   (9)

We can therefore use direct search methods to scan only the search space of the nonlinear parameters in B, and calculate the objective function as required at each step by first using Eq. (9) and then Eq. (5). After the substitution parameters a have been found, we can recover the original parameters A using Eq. (4).
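The two-step evaluation above can be sketched in a few lines of NumPy (an illustrative sketch, not the authors' code; the function names are ours). Given callables `alpha(B)` and `beta(B)` returning the n × n matrix and n-vector of Eq. (8), each objective evaluation solves the linear system and then evaluates the quadratic form:

```python
import numpy as np

def decoupled_objective(B, alpha, beta, gamma):
    """Evaluate G at nonlinear parameters B by first solving
    alpha(B) a = beta(B) for the linear parameters (Eq. 9),
    then substituting into the quadratic form of Eq. (5)."""
    al = alpha(B)                              # n x n symmetric matrix of functions of B
    be = beta(B)                               # n-vector of functions of B
    a = np.linalg.solve(al, be)                # Eq. (9)
    G = 0.5 * a @ al @ a - a @ be + gamma(B)   # Eq. (5)
    return G, a

# Scalar toy instance: alpha = [[2]], beta = [2*B1], gamma = B1^2
# gives G(a, B) = a^2 - 2*a*B1 + B1^2 = (a - B1)^2, minimized by a = B1.
G, a = decoupled_objective(
    np.array([3.0]),
    alpha=lambda B: np.array([[2.0]]),
    beta=lambda B: np.array([2.0 * B[0]]),
    gamma=lambda B: B[0] ** 2,
)
```

In the toy instance the solver returns a = B₁ = 3 and G = 0, as the closed form predicts.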

3 Solving the decoupled problem: gradient-based versus direct search methods

Various approaches can be used for solving a general nonlinear optimization problem of the form given in Eq. (1). They can be divided into gradient-based methods and direct searches. Gradient-based methods are locally convergent and use first or second derivatives. They include the conjugate gradient, steepest descent, Newton's method, Levenberg–Marquardt, augmented Lagrangian, nonlinear interior point method and others. These are designed to quickly and efficiently converge to the nearest local optimum. For problems with multiple local optima, which include the vast majority of nontrivial optimization problems, initial guesses located in different basins of convergence will converge to different optima. This introduces a dependence of the solution on the initial guess.


Direct searches are designed to be globally convergent and typically do not use derivative information. They include, among others, simulated annealing, Nelder–Mead and nature-inspired approaches, such as genetic algorithm, differential evolution, particle swarm and ant colony. These approaches systematically explore the entire search space, which, in turn, makes them more slowly convergent than the gradient-based methods. However, they generally do converge toward the global optimum, at least in the limit of the number of iterations growing arbitrarily large. At the end of a finite-iteration optimization, these approaches yield a set of multiple local optima—an appreciable improvement over the single local optimum produced by gradient-based methods. As the dimensionality of the nonlinear optimization problem grows, the traditional gradient-based optimization methods become increasingly dependent on the initial guess, and thus unreliable. Therefore, for higher-dimensional problems, direct search methods are not only preferred, but required. It is instructive to comment on solving the decoupled problem with gradient-based methods. Gradient-based methods are designed to find the nearest local optimum in the basin of convergence in which the initial guess resides. The decoupled method searches for the nonlinear parameters directly, and analytically solves for the remaining linear parameters. This decoupling does not ensure that the consecutive steps—which include the solution vector composed of the nonlinear and linear parameters—of the iterative derivative-based method will remain in the same basin. While this allows for sampling of other basins, potentially leading to better optima, we observed that such an implementation often lacks robustness when started from arbitrary initial guesses.
However, if the direct search method converges to the neighborhood of an optimum, using such a near-optimum as an initial guess to the decoupled derivative-based method will be accurate and efficient. This is indeed what was observed in our earlier work which used such hybrid methods [1,13]. Direct search methods, by construction, sample many possible solutions in various basins of convergence, which makes them unaffected by potential "jumping around" of the solutions. Decoupling the optimization problem reduces the dimensionality by the number of linear parameters, which are determined analytically. This makes the direct search methods more efficient when solving the decoupled problem than the equivalent standard problem, because the shrinking of the search space increases convergence rates. Hybrid methods combine the global convergence of the direct search methods and the fast convergence to the local minimum of the derivative-based methods to improve the efficiency of solving multidimensional nonlinear problems [1,13]. For the work presented here, we use the direct search method only—a GA in particular—so as to isolate more transparently the effects of the decoupled approach.

3.1 Numerical implementation The simulations reported here were carried out using two different direct search optimization packages, in order to illustrate that the superiority of the decoupled method is not implementation-dependent. It should be reiterated that the decoupled approach recasts the optimization problem before it is solved, and is independent of the implementation. The solutions implemented in this study serve only to illustrate the advantage of using the decoupled method in numerous situations when its use is applicable and warranted. For both implementations, the result at each generation was the best result found among that generation and all previous generations.


3.1.1 Package 1: inspyred (Python)

The first package is inspyred [16], an optimization library for Python [17] which specializes in nature-inspired algorithms. This package was chosen for its ease of implementation and the abundance of various direct search methods, including particle swarm, differential evolution, genetic algorithm, simulated annealing and others. Inspyred's Particle Swarm Optimization (PSO) algorithm was used. The initial population was generated with each parameter drawn from a uniform random distribution. The population size was set to 100 and the number of generations was set to 100. Execution time was tracked using the time() function in Python's default time library.

3.1.2 Package 2: PISA

The second package is the Platform and Programming Language Interface for Search Algorithms (PISA) developed at ETH Zürich [14,15]. PISA is a modular test bed system for genetic algorithms (GAs). It separates the GA parent selection process from the optimization problem evaluation and population generation processes into two programs: the selector and the variator. This design easily allows different GA selection algorithms (selector programs) to be applied to several academic bounded-domain optimization problems for performance and convergence comparisons. The PISA platform was selected because of its excellent performance in our earlier work [1–3]. We used a GA as the selector and the DTLZ module for the variator. Execution time was tracked using the clock_gettime() function in time.h with CLOCK_REALTIME as the clock id, and was output after each generation. Bounds were enforced for the linear parameters of the decoupled method using a penalty method: the amount by which each linear parameter exceeded its upper bound or fell below its lower bound was summed over all linear parameters, multiplied by a coefficient of 0.1 and added to the objective function. For the configuration parameters, the population size, the number of parent individuals and the number of offspring individuals were all set to 100. Varying these parameters does not have an appreciable effect on the comparison between the two methods. The maximum number of generations was set to 100. All other parameters were left at their defaults, except for the seed for DTLZ (discussed in the following section). PISA was run using a polling interval of 0.01 s.
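The penalty just described can be sketched as follows (an illustrative reconstruction; the function and variable names are ours, not PISA's):

```python
PENALTY_COEFF = 0.1  # coefficient quoted in the text

def penalized_objective(g_value, a, lower, upper, coeff=PENALTY_COEFF):
    """Add a linear penalty for each linear parameter in `a` that
    violates its box bounds [lower[i], upper[i]]."""
    violation = 0.0
    for ai, lo, hi in zip(a, lower, upper):
        if ai < lo:
            violation += lo - ai     # amount below the lower bound
        elif ai > hi:
            violation += ai - hi     # amount above the upper bound
    return g_value + coeff * violation

# A parameter 0.5 above its upper bound adds 0.1 * 0.5 = 0.05:
val = penalized_objective(1.0, [2.5], lower=[0.0], upper=[2.0])
```

A parameter inside its bounds contributes nothing, so feasible solutions are unaffected.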

4 Examples

4.1 Nonlinear least squares fits

Fitting a nonlinear model to a set of experimental data is one of the most obvious and widespread uses of nonlinear optimization. The objective function to be minimized in such an optimization becomes the normalized sum of the squares of the differences between the model and the experiment at each data point (x_i, y_i)_{i=1,\dots,N}:

    \chi^2 = \frac{1}{N - D} \sum_{i=1}^{N} \frac{(y(x_i; q) - y_i)^2}{\sigma_i^2},   (10)

where σ_i is the estimated error in the i-th measurement. This is the well-known χ²-fit. The multiplicative constant factor in front of the sum in Eq. (10) is inconsequential for the optimization and will be excluded from the subsequent formalism.


In the remainder of this section we verify and quantify the effectiveness of the decoupled approach on two multidimensional examples of χ² fits: a 4-dimensional logistic function and a 7-dimensional double logistic function. In each of the two examples below, 1000 calibration data sets of 100 points each have been generated with random fluctuations around known distributions. Specifically, random values for each parameter within specific ranges are chosen. The x values are set to the integers 1–100 and the corresponding y values are calculated for each x value from the random parameters, then multiplied by a random number from 0.9 to 1.1. The non-normalized χ²-fits are then computed 10 times with different PRNG seeds for each of the sets using the standard method and the decoupled method. For PISA, the DTLZ seed is 142 plus each integer 0–9; for inspyred, the seed is each integer 0–9. The results and execution times for the 10,000 fits are then averaged and compared between the decoupled method and the standard method. All estimated errors σ_i are set to one.
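The calibration-set construction described above can be sketched as follows (an illustrative reconstruction of the stated recipe; `model` stands for the logistic or double logistic function, the parameter ranges are placeholders, and we assume the 0.9–1.1 noise factor is drawn per point):

```python
import numpy as np

def make_dataset(model, param_ranges, rng):
    """Generate one calibration set of 100 points: random parameters
    within given ranges, y = model(x) times uniform noise in [0.9, 1.1]."""
    params = {name: rng.uniform(lo, hi) for name, (lo, hi) in param_ranges.items()}
    x = np.arange(1, 101)                                 # x values: integers 1..100
    y = model(x, **params) * rng.uniform(0.9, 1.1, size=x.size)
    return x, y, params

# Example with a linear toy model standing in for the logistic function:
rng = np.random.default_rng(0)
x, y, p = make_dataset(lambda x, slope: slope * x,
                       {"slope": (0.5, 2.0)}, rng)
```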

4.1.1 4-Dimensional logistic function

The logistic function models dynamical systems in which the initial exponential growth is slowed by saturation and then followed by stagnation [18]. It is applicable to many fields, such as population dynamics, economics, bacterial growth, epidemiology and others. The standard form of the logistic function is represented by 4 parameters:

    y(x; C, K, r, \ell) = C + \frac{K - C}{1 + e^{-r(x - \ell)}}.   (11)

Each parameter represents a physical component of the growth: C represents the initial population, K the final population, r the rate of exponential growth and \ell denotes the transition in time between the initial period of exponential growth and the subsequent logarithmic growth. For this reason, all parameters must be positive, because the physical values they represent are all positive. The objective function to be minimized here is

    G(C, K, r, \ell) = \sum_{i=1}^{n} \frac{(y(x_i; C, K, r, \ell) - y_i)^2}{\sigma_i^2}.   (12)

From Eq. (11),

    G(C, K, r, \ell) = \sum_{i=1}^{n} \frac{C^2}{\sigma_i^2}
                     + \sum_{i=1}^{n} \frac{2C(K - C)}{(1 + e^{-r(x_i - \ell)})\,\sigma_i^2}
                     + \sum_{i=1}^{n} \frac{(K - C)^2}{(1 + e^{-r(x_i - \ell)})^2\,\sigma_i^2}
                     - \sum_{i=1}^{n} \frac{2C\,y_i}{\sigma_i^2}
                     - \sum_{i=1}^{n} \frac{2(K - C)\,y_i}{(1 + e^{-r(x_i - \ell)})\,\sigma_i^2}
                     + \sum_{i=1}^{n} \frac{y_i^2}{\sigma_i^2}.   (13)

Grouping terms which contain C and K yields

    G(C, K, r, \ell) = C^2 \sum_{i=1}^{n} \frac{1}{\sigma_i^2}
                     + C(K - C) \sum_{i=1}^{n} \frac{2}{(1 + e^{-r(x_i - \ell)})\,\sigma_i^2}
                     + (K - C)^2 \sum_{i=1}^{n} \frac{1}{(1 + e^{-r(x_i - \ell)})^2\,\sigma_i^2}
                     - C \sum_{i=1}^{n} \frac{2 y_i}{\sigma_i^2}
                     - (K - C) \sum_{i=1}^{n} \frac{2 y_i}{(1 + e^{-r(x_i - \ell)})\,\sigma_i^2}
                     + \sum_{i=1}^{n} \frac{y_i^2}{\sigma_i^2}.   (14)


It is now evident that C and K are linear, while r and \ell are nonlinear. The decoupling approach effectively reduces the dimensionality of this problem from four to two. Eq. (14) can be further rearranged to match the standard form for the decoupled method as

    G(A, B) = \frac{1}{2}(A_1)(A_1) \sum_{i=1}^{n} \frac{2}{\sigma_i^2}
            + \frac{1}{2}(A_1)(A_2 - A_1) \sum_{i=1}^{n} \frac{2}{(1 + e^{-B_1(x_i - B_2)})\,\sigma_i^2}
            + \frac{1}{2}(A_2 - A_1)(A_1) \sum_{i=1}^{n} \frac{2}{(1 + e^{-B_1(x_i - B_2)})\,\sigma_i^2}
            + \frac{1}{2}(A_2 - A_1)(A_2 - A_1) \sum_{i=1}^{n} \frac{2}{(1 + e^{-B_1(x_i - B_2)})^2\,\sigma_i^2}
            - (A_1) \sum_{i=1}^{n} \frac{2 y_i}{\sigma_i^2}
            - (A_2 - A_1) \sum_{i=1}^{n} \frac{2 y_i}{(1 + e^{-B_1(x_i - B_2)})\,\sigma_i^2}
            + \sum_{i=1}^{n} \frac{y_i^2}{\sigma_i^2},   (15)

where A_1 is C, A_2 is K, B_1 is r and B_2 is \ell. The transformation of linear parameters is given by

    a = (a_1, a_2) = f(A) = (A_1, A_2 - A_1),   (16)

and

    A = (A_1, A_2) = f^{-1}(a) = (a_1, a_1 + a_2).   (17)
The matrix of functions α is

    \alpha(B) = \begin{pmatrix} \kappa_{11} & \kappa_{12} \\ \kappa_{12} & \kappa_{22} \end{pmatrix},   (18)

where

    \kappa_{11} = \sum_{i=1}^{n} \frac{2}{\sigma_i^2}, \qquad
    \kappa_{12} = \sum_{i=1}^{n} \frac{2}{(1 + e^{-B_1(x_i - B_2)})\,\sigma_i^2}, \qquad
    \kappa_{22} = \sum_{i=1}^{n} \frac{2}{(1 + e^{-B_1(x_i - B_2)})^2\,\sigma_i^2}.   (19)

The column vector of functions β is

    \beta(B) = \begin{bmatrix} \sum_{i=1}^{n} \dfrac{2 y_i}{\sigma_i^2} \\[6pt] \sum_{i=1}^{n} \dfrac{2 y_i}{(1 + e^{-B_1(x_i - B_2)})\,\sigma_i^2} \end{bmatrix}.   (20)

The function γ(B) is

    \gamma(B) = \sum_{i=1}^{n} \frac{y_i^2}{\sigma_i^2}.   (21)

First, a direct search method is used on the two nonlinear parameters, B_1 and B_2. In each iteration, the objective function is calculated in two steps. The linear parameters are found by

    a = \alpha^{-1}(B)\,\beta(B).   (22)

Next, the objective function is calculated by

    G(a, B) = \sum_{i=1}^{n} \frac{\left( a_1 + \dfrac{a_2}{1 + e^{-B_1(x_i - B_2)}} - y_i \right)^2}{\sigma_i^2}.   (23)
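For the logistic case, Eqs. (19)–(23) assemble into a concrete objective over (B₁, B₂) alone. An illustrative NumPy sketch of those equations (ours, not the authors' code):

```python
import numpy as np

def logistic_decoupled_G(B, x, y, sigma):
    """Evaluate the decoupled logistic chi^2 (Eq. 23) at nonlinear
    parameters B = (B1, B2) = (r, ell), solving Eq. (22) for a."""
    B1, B2 = B
    f2 = 1.0 / (1.0 + np.exp(-B1 * (x - B2)))  # logistic basis function
    w = 1.0 / sigma**2
    # alpha(B) and beta(B) from Eqs. (19)-(20)
    alpha = np.array([[np.sum(2 * w),      np.sum(2 * f2 * w)],
                      [np.sum(2 * f2 * w), np.sum(2 * f2**2 * w)]])
    beta = np.array([np.sum(2 * y * w), np.sum(2 * y * f2 * w)])
    a = np.linalg.solve(alpha, beta)            # Eq. (22)
    G = np.sum((a[0] + a[1] * f2 - y)**2 * w)   # Eq. (23)
    return G, a

# Noise-free data generated from C=1, K=3, r=0.5, ell=50: at the true
# (B1, B2) the linear solve recovers a = (C, K - C) = (1, 2) and G ~ 0.
x = np.arange(1, 101, dtype=float)
y = 1.0 + 2.0 / (1.0 + np.exp(-0.5 * (x - 50.0)))
G, a = logistic_decoupled_G((0.5, 50.0), x, y, sigma=np.ones_like(x))
```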


Fig. 1 Logistic fit for 10 random trials of 1000 datasets with the standard (blue squares) and decoupled (red circles) methods and the particle-swarm implementation from the inspyred [16] package with 100 individuals. Left panel: average χ² as a function of the number of generations. Right panel: average χ² as a function of the average execution time. (Color figure online)

Fig. 2 Same as Fig. 1, except with bounds implemented using a penalty method, and using the PISA genetic algorithm [14,15]

Once the direct search method obtains the results B_1 and B_2, the linear parameters a_1 and a_2 can be found once more using Eq. (22). The transformation in Eq. (17) then yields the variables A_1 and A_2. The comparison of the effectiveness of the two methods—standard and decoupled—is shown in Figs. 1 and 2. Figures 1a and 2a show that the decoupled method—unconstrained and constrained, respectively—converges in about a third of the generations that the standard approach requires. Even when the somewhat slower per-generation execution time of the decoupled method is factored in, it converges in about half of the time that it takes the standard method to converge (Figs. 1b, 2b). Both the decoupled and the standard implementation converge to the same solution—or at least to solutions which have the same value of χ², and are therefore equivalent from the optimization standpoint—which is both expected and reassuring.


4.1.2 7-Dimensional double logistic function

The double logistic function models dynamical systems in which two distinct consecutive logistic sequences are observed, each consisting of an initial exponential growth, followed by saturation and stagnation. It is a good model for population growth when there are two distinct sources of nutrition, one readily available and the other requiring some effort to acquire. This is most commonly observed in bacterial growth and plant disease epidemics [4]. The standard form of the double logistic function is represented by 7 parameters:

    y(x; C, K, r, \ell, K_2, r_2, \ell_2) = C + \frac{K - C}{1 + e^{-r(x - \ell)}} + \frac{K_2 - K}{1 + e^{-r_2(x - \ell_2)}}.   (24)

Each parameter represents a physical component of the growth: C represents the initial population, K the population at the first asymptote and K_2 the final population; r and r_2 represent the rate of growth for each period, while \ell and \ell_2 represent the transition between the exponential and logarithmic growth in the first and second logistic period, respectively. For this reason, all parameters must be positive, because the physical values they represent are all positive. The objective function to be minimized here is

    G(C, K, r, \ell, K_2, r_2, \ell_2) = \sum_{i=1}^{n} \frac{(y(x_i; C, K, r, \ell, K_2, r_2, \ell_2) - y_i)^2}{\sigma_i^2}.   (25)

In this case, C, K and K_2 are linear, and r, \ell, r_2 and \ell_2 are nonlinear. The decoupling approach effectively reduces the dimensionality of this problem from seven to four. Representing C, K and K_2 as A_1, A_2 and A_3, respectively, and r, \ell, r_2 and \ell_2 as B_1, B_2, B_3 and B_4, respectively, f^{-1}(a) can be derived as before:

    A = (A_1, A_2, A_3) = f^{-1}(a) = (a_1, a_1 + a_2, a_1 + a_2 + a_3).   (26)

Also,

    \alpha(B) = \begin{pmatrix} \kappa_{11} & \kappa_{12} & \kappa_{13} \\ \kappa_{12} & \kappa_{22} & \kappa_{23} \\ \kappa_{13} & \kappa_{23} & \kappa_{33} \end{pmatrix},   (27)

where

    \kappa_{13} = \sum_{i=1}^{n} \frac{2}{(1 + e^{-B_3(x_i - B_4)})\,\sigma_i^2}, \qquad
    \kappa_{23} = \sum_{i=1}^{n} \frac{2}{(1 + e^{-B_1(x_i - B_2)})(1 + e^{-B_3(x_i - B_4)})\,\sigma_i^2}, \qquad
    \kappa_{33} = \sum_{i=1}^{n} \frac{2}{(1 + e^{-B_3(x_i - B_4)})^2\,\sigma_i^2},   (28)

and

    \beta(B) = \begin{bmatrix} \sum_{i=1}^{n} \dfrac{2 y_i}{\sigma_i^2} \\[6pt] \sum_{i=1}^{n} \dfrac{2 y_i}{(1 + e^{-B_1(x_i - B_2)})\,\sigma_i^2} \\[6pt] \sum_{i=1}^{n} \dfrac{2 y_i}{(1 + e^{-B_3(x_i - B_4)})\,\sigma_i^2} \end{bmatrix}.   (29)
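Both fits share the structure y = Σ_k a_k f_k(x; B), so the entries of α(B) and β(B) in Eqs. (27)–(29) follow one pattern: κ_jk = Σ_i 2 f_j f_k/σ_i² and β_k = Σ_i 2 y_i f_k/σ_i². A generic NumPy construction (our sketch, assuming the basis functions are supplied as callables):

```python
import numpy as np

def build_alpha_beta(basis, B, x, y, sigma):
    """Build alpha(B) and beta(B) for a model y = sum_k a_k f_k(x; B):
    kappa_jk = sum 2 f_j f_k / sigma^2, beta_k = sum 2 y f_k / sigma^2."""
    F = np.array([f(x, B) for f in basis])  # shape (n_linear, n_points)
    w = 2.0 / sigma**2
    alpha = (F * w) @ F.T                   # symmetric matrix of Eq. (27)
    beta = (F * w) @ y                      # column vector of Eq. (29)
    return alpha, beta

# Double logistic basis: f1 = 1, plus the two logistic steps.
logi = lambda x, r, ell: 1.0 / (1.0 + np.exp(-r * (x - ell)))
basis = [lambda x, B: np.ones_like(x),
         lambda x, B: logi(x, B[0], B[1]),
         lambda x, B: logi(x, B[2], B[3])]

x = np.arange(1.0, 101.0)
sigma = np.ones_like(x)
alpha, beta = build_alpha_beta(basis, (0.5, 30.0, 0.3, 70.0),
                               x, np.ones_like(x), sigma)
```

With unit errors and 100 points, the constant basis function gives κ₁₁ = Σ 2/σ² = 200, matching Eq. (19).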


Fig. 3 Double logistic fit for 10 random trials of 1000 datasets with the standard (blue points) and decoupled (red points) methods and the particle-swarm implementation from the inspyred [16] package with 100 individuals. Left panel: average χ² as a function of the number of generations. Right panel: average χ² as a function of the average execution time. (Color figure online)

Fig. 4 Same as Fig. 3, except with bounds implemented using a penalty method, and using the PISA genetic algorithm [14,15]

The comparison of the effectiveness of the two methods—standard and decoupled—is shown in Figs. 3 and 4. The results are quite similar to those for the logistic function described in the previous section. As expected for a higher-dimensional problem, more generations are needed for the convergence of both methods. However, again, the decoupled method converges in about a third of the generations that it takes for the standard approach. Again, with the somewhat slower per-generation execution time of the decoupled method factored in, it converges in about half of the time that it takes the standard method to converge (Figs. 3b, 4b).

4.2 General function optimization

The decoupling approach presented here is not limited to nonlinear least squares, but can be used to minimize an arbitrary objective function of the type given in Eq. (5). We now illustrate this for the pedagogical example of the Rosenbrock function [19]:

    G(x, y) = (a - x)^2 + b(y - x^2)^2,   (30)

where a and b are parameters. Without loss of generality, we set the parameters to their usual values a = 1 and b = 100, arriving at the objective function to be minimized:

    G(x, y) = (1 - x)^2 + 100(y - x^2)^2.   (31)

Expanding yields

    G(x, y) = 1 - 2x + x^2 + 100y^2 - 200x^2 y + 100x^4.   (32)

Since linear parameters can appear at most quadratically in the standard form, x must be nonlinear, while y is linear. The decoupling approach effectively reduces the dimensionality of this problem from two to one. Eq. (32) can be rearranged to match the standard form for the decoupled method as

    G(A, B) = \frac{1}{2}(A_1)(A_1)(200) - (A_1)(200 B_1^2) + 1 - 2B_1 + B_1^2 + 100 B_1^4,   (33)

where A_1 is y and B_1 is x. The transformation of linear parameters is given by

    a = (a_1) = f(A) = (A_1),   (34)

and

    A = (A_1) = f^{-1}(a) = (a_1).   (35)

The matrix of functions α is

    \alpha(B) = (200),   (36)

the column vector of functions β is

    \beta(B) = (200 B_1^2),   (37)

and the function γ(B) is

    \gamma(B) = 1 - 2B_1 + B_1^2 + 100 B_1^4.   (38)

First, a direct search method is used on the nonlinear parameter, B_1. In each iteration, the objective function is calculated in two steps. The linear parameter is found by

    a = \alpha^{-1}(B)\,\beta(B).   (39)

Fig. 5 Minimization of the Rosenbrock function with a = 1 and b = 100, for 10 random trials with the standard (blue points) and decoupled (red points) methods and the particle-swarm implementation from the inspyred [16] package with 100 individuals. Left panel: average f_R(x, y) as a function of the number of generations. Right panel: average f_R(x, y) as a function of the average execution time. Note the log scale on the y-axis. (Color figure online)


Fig. 6 Same as Fig. 5, except with bounds implemented using a penalty method, and using the PISA genetic algorithm [14,15]

Next, the objective function is calculated by

    G(a, B) = (1 - B_1)^2 + 100(a_1 - B_1^2)^2.   (40)

Once the direct search method obtains the result B_1, the linear parameter a_1 can be found once more using Eq. (39). The transformation in Eq. (35) yields the variable A_1. The comparison of the effectiveness of the two methods—standard and decoupled—is shown in Figs. 5 and 6. The results are similar to those obtained for the nonlinear least squares in the previous section, with the exception that the price of the extra computation needed to implement decoupling is negligible due to its extreme simplicity. While the Rosenbrock example is trivial—especially since the analytic solution is easily derivable—it serves as an example of the benefit that the decoupling method can afford in the optimization of general functions beyond those in the least squares format.
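The entire decoupled Rosenbrock pipeline fits in a few lines. An illustrative sketch (ours; any one-dimensional search over B₁ would do, so a plain grid scan is used here for transparency rather than the paper's PSO/GA):

```python
import numpy as np

def rosenbrock_decoupled(B1):
    """Decoupled Rosenbrock objective: solve the scalar linear system
    of Eqs. (36)-(39), a1 = 200*B1^2 / 200 = B1^2, then apply Eq. (40)."""
    a1 = (200.0 * B1**2) / 200.0                         # Eq. (39)
    return (1.0 - B1)**2 + 100.0 * (a1 - B1**2)**2, a1   # Eq. (40)

# One-dimensional scan over B1; since a1 = B1^2 exactly, the second
# term vanishes and the search sees the parabola (1 - B1)^2, with
# its minimum at B1 = 1 (hence A1 = y = 1, the known global optimum).
grid = np.linspace(-2.0, 2.0, 4001)
values = np.array([rosenbrock_decoupled(b)[0] for b in grid])
B1_best = grid[np.argmin(values)]
```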

5 Discussion and conclusion

For the examples presented here, and likely for most if not all others, a single evaluation of the decoupled method is slower than that of the standard approach. This is because at each iteration the algebraic values of the linear parameters must be calculated. However, as shown in Sect. 4, the decoupled method takes several times fewer iterations to converge, resulting in an overall execution time shorter by a significant factor (at least a factor of two in the examples here). While the decoupling is done at the level of the problem formulation, its implementation with the globally convergent nature-inspired methods—as done here—yields robust solutions. When using the decoupled method for constrained optimization, the constraints must be applied to both the linear and the nonlinear subsets of the problem. Linear constrained optimization is routinely done using penalty methods, the simplex method or Lagrange multipliers. The decoupled method samples the lower-dimensional search space of the nonlinear variables only and analytically determines the exact optimum of the remaining linear variables. This reduces the dimensionality of the problem and the search space, since any optima found by this approach will, by design, be exactly optimized in the linear variables. Such a reduction in dimensionality gives the decoupled approach a significant advantage over the


standard implementation, in which the optima found are not expected to be exact in any of the variables. Several example functions—both in the least squares and in the general format—are used to demonstrate how such a decoupled formulation of the nonlinear optimization problem results in solutions which are appreciably more efficient, confirming previous work on the subject [7,9]. These substantial returns on the modest initial investment in mathematically reformulating the problem are general and applicable to a wide range of optimization problems.

References

1. Hofler, A., Terzić, B., Kramer, M., Zvezdin, A., Morozov, V., Roblin, Y., Lin, F., Jarvis, C.: Innovative applications of genetic algorithms to problems in accelerator physics. Phys. Rev. Spec. Top. Accel. Beams 16, 010101 (2013)
2. Terzić, B., Hofler, A., Reeves, C., Khan, S., Krafft, G., Benesch, J., Freyberger, A., Ranjan, D.: Simultaneous optimization of the cavity heat load and trip rates in linacs using a genetic algorithm. Phys. Rev. Spec. Top. Accel. Beams 17, 101003 (2014)
3. Terzić, B., Deitrick, K., Hofler, A., Krafft, G.: Narrow-band emission in Thomson sources operating in the high-field regime. Phys. Rev. Lett. 112, 074801 (2014)
4. Hau, B., Amorim, L., Filho, A.B.: Mathematical functions to describe disease progress curves of double sigmoid pattern. Phytopathology 83, 928 (1993)
5. Golub, G., Pereyra, V.: Separable nonlinear least squares: the variable projection method and its applications. Inverse Prob. 19, R1–R26 (2003)
6. Scolnik, H.: On the solution of nonlinear least squares problem. 15(2), 18–23 (1971)
7. Golub, G., Pereyra, V.: The differentiation of pseudo-inverses and nonlinear least squares problems whose variables separate. SIAM J. Numer. Anal. 10(2), 413–432 (1973)
8. Guttman, I., Pereyra, V., Scolnik, H.: Least squares estimation for a class of non-linear models. Technometrics 15(2), 209–218 (1973)
9. Žilinskas, A., Žilinskas, J.: Interval arithmetic based optimization in nonlinear regression. Informatica 21(1), 149–158 (2010)
10. Žilinskas, A., Žilinskas, J.: A hybrid global optimization algorithm for non-linear least squares regression. J. Glob. Optim. 56(2), 265–277 (2013)
11. Paulavičius, R., Žilinskas, J.: Simplicial Lipschitz optimization without the Lipschitz constant. J. Glob. Optim. 59(1), 23–40 (2014)
12. Paulavičius, R., Žilinskas, J.: Simplicial Global Optimization. Springer, New York (2014)
13. Cotnoir, C.: A Hybrid Nature-Inspired Algorithm for Multidimensional Nonlinear Optimization with Applications to Biology. Senior thesis, Old Dominion University (2015)
14. Bleuler, S., Laumanns, M., Thiele, L., Zitzler, E.: PISA—A Platform and Programming Language Independent Interface for Search Algorithms. Technical report 154, Institut für Technische Informatik und Kommunikationsnetze, ETH Zürich (2002)
15. Bleuler, S., Laumanns, M., Thiele, L., Zitzler, E.: PISA—a platform and programming language independent interface for search algorithms. In: Fonseca, C., Fleming, P.J., Zitzler, E., Deb, K., Thiele, L. (eds.) Evolutionary Multi-Criterion Optimization (EMO 2003). Lecture Notes in Computer Science, pp. 494–508. Springer (2003)
16. inspyred library for Python. https://pypi.python.org/pypi/inspyred
17. The Python programming language. https://www.python.org/
18. Verhulst, P.F.: Recherches mathématiques sur la loi d'accroissement de la population. Nouveaux Mémoires de l'Académie Royale des Sciences et Belles-Lettres de Bruxelles 18, 1–42 (1845)
19. Rosenbrock, H.: An automatic method for finding the greatest or least value of a function. Comput. J. 3, 175–184 (1960)

