International Journal of Modern Physics C, Vol. 9, No. 1 (1998) 133–146
© World Scientific Publishing Company

MICROCANONICAL OPTIMIZATION APPLIED TO THE TRAVELING SALESMAN PROBLEM

ALEXANDRE LINHARES
Computação Aplicada e Automação, UFF, 24210-240 Niterói, RJ, Brazil
E-mail: [email protected]

JOSÉ R. A. TORREÃO
Computação Aplicada e Automação, UFF, 24210-240 Niterói, RJ, Brazil
E-mail: [email protected]

Received 24 October 1997
Revised 17 December 1997

Optimization strategies based on simulated annealing and its variants have been extensively applied to the traveling salesman problem (TSP). Recently, a new physics-based metaheuristic has appeared, called the microcanonical optimization algorithm (µO), which does not resort to annealing, and which has proven a superior alternative to the annealing procedures in various applications. Here we present the first performance evaluation of µO as applied to the TSP. When compared to three annealing strategies (simulated annealing, microcanonical annealing and Tsallis annealing), and to a tabu search algorithm, the microcanonical optimization yielded the best overall results for several instances of the euclidean TSP. This confirms µO as a competitive approach for the solution of general combinatorial optimization problems.

Keywords: Combinatorial Optimization; Microcanonical Ensemble; Simulated Annealing; Traveling Salesman Problem.

1. Introduction

The traveling salesman problem (TSP) has been studied since the early days of scientific computation, and is now considered the benchmark in the field of combinatorial optimization. The problem is easily stated: given a set of cities, the goal is to find a path of minimal cost, going through each city only once and returning to the starting point. In spite of its simple formulation, the TSP has been proven to be NP-hard, meaning that there probably does not exist an algorithm which can exactly solve a general instance of the problem in reasonable processing time. The best that can be expected is thus to find approximate strategies of solution, called heuristics. If a heuristic is a general-purpose procedure which can be applied to a variety of problems, it is referred to as a metaheuristic.

Among the metaheuristics employed for the TSP, optimization algorithms derived from statistical physics have received a great deal of attention.1–3 Simulated annealing, as introduced by Kirkpatrick et al.,4 was the first such algorithm, and many variants of it have appeared, such as fast simulated annealing,5 microcanonical annealing,6 and Tsallis annealing.3 Recently, a new strategy has been proposed which is also based on principles of statistical physics, but which does not resort to annealing. It is called the microcanonical optimization algorithm (µO), and has so far been employed, with remarkable success, in the context of visual processing,7,8 and for task allocation in distributed systems.9 Here, we present an analysis of µO when applied to the TSP, comparing it to some annealing-based procedures (simulated annealing, microcanonical annealing and Tsallis annealing), and also to a tabu search algorithm.10 The results which we report show µO to be a very competitive metaheuristic in this domain: when considering both execution time and solution quality, it yielded the best performance of all the evaluated algorithms. In the following section, we describe the microcanonical optimization algorithm. Next, we discuss some implementation details of the alternative metaheuristics considered. In Sec. 4, we present and analyze the results obtained in our work, concluding with our final remarks in Sec. 5.

2. Microcanonical Optimization

The microcanonical optimization algorithm consists of two procedures which are alternately applied: initialization and sampling. The initialization implements a local — and optionally aggressive — search of the solution space, in order to reach a local-minimum configuration. From there, the sampling phase proceeds, trying to free the solution from the local minimum by taking it to another configuration of equivalent cost. One can picture the metaheuristic, once stuck in a local-minimum valley, as trying to evolve by going around the peaks in the solution space, instead of attempting to climb them, as in simulated annealing, for instance. This is done by resorting to the microcanonical simulation algorithm of Creutz,11 which generates samples of fixed-energy configurations (see below). After the sampling phase, a new initialization is run, and the algorithm thus proceeds, alternating between the two phases, until a stopping condition is reached. In what follows, we treat the two phases of the microcanonical optimization in greater detail. A pseudocode for the algorithm is given in Appendix A.

2.1. Initialization

In the initialization, µO performs a local search, starting from an arbitrary solution and proposing moves which are accepted only when they lead to configurations of lower cost (lower energy, in physical terms). Optionally, an aggressive implementation of this phase can be chosen, meaning that the algorithm will always pick the best candidate from a subset of possible moves.

In a non-aggressive implementation, the only free parameter of the initialization phase defines its stopping condition: since it cannot be rigorously established when a local minimum has been reached, it is necessary to define a maximum number of rejected moves as the criterion for interrupting this phase. In the case of an aggressive implementation (the one we chose), it is also necessary to define the number of candidate moves to be considered in each initialization step (500, in our work). We also remark that, for the definition of the parameters to be employed in the sampling phase (see below), a list may be compiled, during the initialization, of those moves which have been rejected for leading to higher costs than the current solution.
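An aggressive initialization of this kind might be sketched as follows, in Python. This is an illustrative reconstruction rather than the authors' code: the numeric cost, the proposal mechanism and all names are our own assumptions; only the best-of-a-subset selection, the accept-only-if-cheaper rule, the consecutive-rejection stopping criterion and the sorted log of rejected cost jumps follow the text.

```python
import random

def aggressive_init(cost, propose, s0, n_candidates=500, max_rejected=1000):
    """Greedy descent: at each step take the cheapest of n_candidates
    proposed moves, accepting it only if it improves the current cost;
    the cost jumps of refused moves are logged for the sampling phase."""
    s, e = s0, cost(s0)
    rejected = []                  # cost jumps of the refused moves
    n_rej = 0                      # consecutive rejections
    while n_rej < max_rejected:
        s2 = min((propose(s) for _ in range(n_candidates)), key=cost)
        e2 = cost(s2)
        if e2 < e:
            s, e = s2, e2
            n_rej = 0
        else:
            rejected.append(e2 - e)
            n_rej += 1
    return s, sorted(rejected)     # sorted in growing order of cost jump
```

The returned sorted list is exactly what the sampling phase needs in order to pick its demon parameters adaptively.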

2.2. Sampling

As already mentioned, in the sampling phase the µO metaheuristic tries to free itself from the local minimum reached in the initialization, while trying to remain close, in terms of cost, to the best solution obtained so far. For this purpose, it implements a version of the Creutz algorithm, assuming an extra degree of freedom, called the demon, which generates small perturbations of the current solution. At each sampling iteration, random moves are proposed which are accepted only if the demon is capable of yielding or absorbing the cost difference incurred. In µO, the demon is defined by two parameters: its capacity, DMAX, and its initial value, DI. The sampling generates a sequence of states whose energy is conserved, except for small fluctuations which are modeled by the demon. Calling ES the energy (cost) of the solution obtained in the initialization, and D and E the energy of the demon and of the solution, respectively, at a given instant in the sampling phase, we must have E + D = ES + DI = constant. Thus, in terms of the initial energy and the capacity of the demon, this phase generates solutions in the cost interval [ES − DMAX + DI, ES + DI]. DI and DMAX are, therefore, the main parameters to be considered in the implementation of the sampling. In the original formulation of the algorithm, these parameters were taken, at each sampling phase, as fixed fractions of the final cost obtained in the previous initialization.7 As one of the contributions of the present work, we have proposed an adaptive strategy for their determination: taking the list of rejected moves compiled in the initialization phase (see above), we sort it in growing order of the cost jumps, choosing two of its lower entries as the values of the demon capacity and initial energy.
The idea is that such values will be representative of the hills found in the landscape of the region being searched in the solution space, and will thus be adequate for defining the magnitude of the perturbations required for the evolution of the current solution, in the sampling phase. In our implementations of µO for the TSP, the initialization was executed until a count of 100n consecutively rejected moves was reached, where n was the number

of cities in the problem. The values of DMAX and DI were both usually taken as equal to the 5th lowest entry in the list of rejected moves compiled in the initialization, except for a certain kind of city distribution which required a change in this prescription (see Sec. 4). The sampling phase was run for only 50 iterations, and the algorithm was made to stop when reaching a count of 1000 moves without improvement in the best solution encountered.

3. Alternative Strategies

In our experiments, we compared the performance of µO to those yielded by alternative strategies: simulated annealing, microcanonical annealing, Tsallis annealing and tabu search. Here we discuss some features of the implementation of these algorithms in our work.

3.1. Simulated annealing (SA)

Simulated annealing, as proposed by Kirkpatrick et al.,4 consists in the iterated implementation of the Metropolis algorithm,12 for a sequence of decreasing temperatures. The Metropolis algorithm is a computational procedure, long known in statistical physics, which generates samples of the states of a physical system at a fixed temperature. Since such a system obeys the Gibbs distribution, the states generated at low temperatures will be low-energy states.13,14 Identifying the energy of the system with the cost function of an optimization problem, Kirkpatrick et al. proposed the following strategy: starting from an arbitrary solution and a high temperature, the Metropolis algorithm is run, meaning that moves are proposed which are accepted with probability p = min(1, exp(−∆E/T)), where ∆E is the cost variation incurred, and T is the current temperature. After a large number of iterations, the value of T is decreased, and the process is repeated until T ≈ 0. The initial value and the rate of decrease of the temperature (which has no physical meaning in the optimization, being just a global control parameter of the process) constitute the annealing schedule of the algorithm.
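The acceptance rule and the cooling loop just described can be sketched in Python as follows. This is our own illustrative code, not the authors' implementation: all names and the toy parameter values are assumptions; only the Metropolis rule p = min(1, exp(−∆E/T)) and the geometric lowering of T come from the text.

```python
import math
import random

def metropolis_accept(delta_e, temperature, rng=random):
    """Accept a move with probability min(1, exp(-delta_e / T))."""
    if delta_e <= 0:                       # downhill moves always accepted
        return True
    return rng.random() < math.exp(-delta_e / temperature)

def simulated_annealing(cost, propose, s0, t0, alpha=0.93,
                        steps_per_t=100, t_min=1e-2):
    """Generic SA loop with geometric cooling T <- alpha * T
    (the experiments reported here lower T by 7%, i.e. alpha = 0.93)."""
    s, e, t = s0, cost(s0), t0
    best, best_e = s, e
    while t > t_min:
        for _ in range(steps_per_t):
            s2 = propose(s)
            e2 = cost(s2)
            if metropolis_accept(e2 - e, t):
                s, e = s2, e2
                if e < best_e:
                    best, best_e = s, e
        t *= alpha
    return best, best_e
```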
In our implementations, we followed the prescriptions of Cerny,1 taking the temperature to decrease by 7% of its value at each annealing step, and keeping it constant for 10n accepted moves or 100n rejected moves, whichever came first, with n being the number of cities in the problem.15 The initial temperature was determined empirically: 100 trial moves, starting from the initial random solution, were analyzed, and the initial temperature was chosen greater than the maximum cost variation observed.

3.2. Tsallis annealing

This corresponds to a variant of simulated annealing, based on the statistics proposed by C. Tsallis.16 Here, the acceptance probability of the Metropolis algorithm is generalized to p = min(1, [1 − (1 − q)∆E/T]^(1/(1−q))), such that SA is recovered in the limit q → 1. By appropriately choosing the value of q, it has been claimed3

that this algorithm can produce plausible TSP solutions in fewer steps than fast simulated annealing.5 In our implementations, we followed the general annealing prescriptions described above for SA. As for the parameter q, specific to the Tsallis annealing, it has been suggested that the algorithm improves, in terms of execution time, as q decreases towards −∞.3 This general behavior was confirmed in our work but, even though an exhaustive analysis was not undertaken, we noticed a corresponding degradation in solution quality for q < −1. The value q = −1 was therefore employed in our experiments.
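A minimal sketch of the generalized acceptance rule, in Python. This is our own illustration: the function name is hypothetical, and the convention of rejecting outright when the bracket becomes negative is an assumption not spelled out in the text.

```python
import math
import random

def tsallis_accept(delta_e, temperature, q=-1.0, rng=random):
    """Generalized acceptance p = min(1, [1 - (1-q)*dE/T]^(1/(1-q))).

    In the limit q -> 1 this reduces to the Metropolis rule exp(-dE/T);
    when the bracket becomes negative the move is simply rejected.
    """
    if delta_e <= 0:
        return True
    if abs(q - 1.0) < 1e-12:               # Metropolis limit
        p = math.exp(-delta_e / temperature)
    else:
        base = 1.0 - (1.0 - q) * delta_e / temperature
        if base <= 0.0:
            return False
        p = min(1.0, base ** (1.0 / (1.0 - q)))
    return rng.random() < p
```

For q = −1 the rule becomes p = min(1, sqrt(1 − 2∆E/T)), so large uphill jumps are cut off sharply, which is consistent with the fast behavior noted above.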

3.3. Microcanonical annealing (MA)

This algorithm also corresponds to a variant of simulated annealing, now based on a simulation of the states of a physical system at fixed energy, through the Creutz algorithm,6 instead of at fixed temperature (the SA and Tsallis algorithms thus correspond to canonical annealings). As originally proposed for visual processing applications, MA employed a lattice of demons, and was suited only for parallel implementations. In our single-demon sequential version, microcanonical annealing consists, basically, in the iterative application of the Creutz algorithm for progressively lower values of the demon capacity. In our implementations, we took a demon of zero initial energy, such that, at the ith annealing step, states would be generated in the cost interval [E^(i−1) − D^(i), E^(i−1)], where D^(i) represents the current demon capacity, and E^(i−1) represents the final energy reached in the previous annealing step. The rate of decrease of the demon capacity was the same used in the canonical annealings for the temperature, with the initial demon capacity determined similarly to the initial annealing temperature: starting from a random solution, 100 prospective moves were analyzed, and the largest cost variation observed was taken as the demon capacity for the first annealing step.
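A sketch of this single-demon scheme, in Python. This is an illustration under our own assumptions: the function names, the toy cost and the decrease factor are ours, while the acceptance rule and the zero initial demon energy per annealing step follow the description above.

```python
import random

def creutz_step(cost, propose, s, demon, capacity):
    """One Creutz move: the demon absorbs downhill gains (up to its
    capacity) and pays for uphill moves (down to zero energy)."""
    s2 = propose(s)
    new_demon = demon - (cost(s2) - cost(s))
    if 0.0 <= new_demon <= capacity:       # demon must stay in [0, capacity]
        return s2, new_demon
    return s, demon

def microcanonical_anneal(cost, propose, s0, d0, alpha=0.9,
                          steps_per_level=200, d_min=1e-2):
    """Iterate the Creutz dynamics for a geometrically shrinking demon
    capacity: the microcanonical analogue of lowering the temperature."""
    s, capacity = s0, d0
    while capacity > d_min:
        demon = 0.0                        # zero initial demon energy
        for _ in range(steps_per_level):
            s, demon = creutz_step(cost, propose, s, demon, capacity)
        capacity *= alpha
    return s
```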

3.4. Tabu search

In order to avoid getting trapped in a local minimum, the tabu search algorithm selects, at each step, the best of a certain number of candidate moves (500, in our implementations), even if it leads to a higher cost, in which case the corresponding reverse move is included in a tabu list, to prevent the return to a solution already considered. In our experiments, we worked with a tabu list of 7 moves, following the suggestion of Glover,10 with each new tabu move being included at a random position in the list, so that its interdiction period would also be random. Another feature of our implementation was a so-called aspiration criterion, according to which, if a given tabu move leads to a solution which improves on the best one so far encountered, its interdiction is ignored. The tabu search was made to stop at a count of 1500 moves without improvement.
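The step logic just described might be sketched as follows in Python. All names and parameters here are illustrative, and one simplification is made: the experiments insert each tabu move at a random position of the list, making its interdiction period random, while this sketch uses a plain fixed-length queue.

```python
import random
from collections import deque

def tabu_search(cost, propose, s0, tabu_len=7, n_candidates=500,
                max_no_improve=1500):
    """Minimal tabu search sketch.  'propose' must return a pair
    (reverse_key, new_solution), where reverse_key identifies the move
    that would undo the change, to be placed on the tabu list."""
    s = s0
    best, best_e = s0, cost(s0)
    tabu = deque(maxlen=tabu_len)          # short, fixed-length tabu list
    stale = 0
    while stale < max_no_improve:
        cands = sorted((propose(s) for _ in range(n_candidates)),
                       key=lambda ks: cost(ks[1]))
        for key, s2 in cands:
            # aspiration criterion: a tabu move is allowed whenever it
            # beats the best solution found so far
            if key not in tabu or cost(s2) < best_e:
                tabu.append(key)
                s = s2
                break
        e = cost(s)
        if e < best_e:
            best, best_e = s, e
            stale = 0
        else:
            stale += 1
    return best, best_e
```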

4. Experiments

Our performance evaluation of µO was based on the solution of several instances of the euclidean TSP, employing a path-reversal dynamics.17 This means that the solution cost was taken as the total tour length measured by the euclidean norm, and that each trial move was the replacement of a randomly selected section of the tour by its reverse. Results obtained with a Pentium 133 processor will be reported here, for the following city distributions:

P100: 100 cities organized in a rectangular grid. Such a distribution, also employed by Cerny1 and by Laarhoven and Aarts,14 displays a global minimum which can be easily perceived, and is an example of a degenerate topology, allowing many solutions of the same cost.

P300: 300 cities randomly distributed in eight distinct clusters along the sides of a square region. The optimal path — which is not known a priori — must cross each cluster only once.

PR76, PR124 and PR439: Configurations of 76, 124 and 439 cities, respectively, proposed by Padberg and Rinaldi, and compiled in the TSPLIB library.18 The corresponding optimal solutions are also given in the TSPLIB.

K200: Configuration of 200 cities proposed by Krolak, also found, along with its optimal solution, in the TSPLIB.

In order to assess the quality of the solutions yielded by the various algorithms, we considered the distribution of the results obtained over several runs. The frequency histograms of the final costs for P100 and P300, in 50 executions, are shown in Figs. 1 and 2, where we include the results for the iterative improvement algorithm, which corresponds to implementing only the non-aggressive initialization phase of µO. From the figures, the superiority of the microcanonical optimization over the other approaches is apparent, but the tabu search and microcanonical annealing methods also proved competitive. SA and Tsallis annealing yielded poorer-quality solutions, even though the latter was very fast.
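The path-reversal dynamics described at the beginning of this section can be stated compactly. The following Python fragment is our own illustration, with hypothetical function names: it computes the euclidean length of a closed tour, and builds the trial move by reversing a randomly chosen section of the tour.

```python
import math
import random

def tour_length(cities, tour):
    """Total euclidean length of the closed tour visiting 'cities'
    in the order given by the index list 'tour'."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def reversal_move(tour, rng=random):
    """Trial move: replace a randomly selected section of the tour
    by its reverse (the path-reversal dynamics of the experiments)."""
    i, j = sorted(rng.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
```

Any of the metaheuristics above can then be run with tour_length as the cost and reversal_move as the proposal mechanism.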
Table 1 gives an idea of the average running times involved. It is important to remark that, due to the peculiarities of implementation of each algorithm, some of them tend naturally to prolong their execution in comparison to others. For instance, µO and tabu search will only stop after reaching a certain number of iterations without improvement, which means that, even after a long period without any progress, once those algorithms find a better configuration, they are granted an additional running time (of 1000 iterations for µO, and 1500 for tabu search). The same is not true of the annealing strategies, which have their running times linked to fixed annealing schedules.

[Figure 1 here: six frequency histograms, one each for Iterative Improvement, Simulated Annealing, Microcanonical Annealing, Tsallis Annealing, Tabu Search, and Microcanonical Optimization; final cost (3700 to 4068) on the horizontal axis, frequency (0 to 80) on the vertical axis.]

Fig. 1. Frequency, in fifty runs, of the final costs obtained for Problem P100.

[Figure 2 here: six frequency histograms, one each for Iterative Improvement, Simulated Annealing, Microcanonical Annealing, Tsallis Annealing, Tabu Search, and Microcanonical Optimization; final cost (2725 to 3850) on the horizontal axis, frequency (0 to 30) on the vertical axis.]

Fig. 2. Frequency, in fifty runs, of the final costs obtained for Problem P300.

Table 1. Average execution time (minutes), in five runs of µO, tabu search, microcanonical annealing (MA), Tsallis annealing, and simulated annealing (SA). Processor: Pentium 133.

        µO      Tabu    MA      Tsallis   SA
P100    0:48    0:58    0:47    1:05      2:37
P300    2:25    3:29    2:59    1:48      4:40

From these initial results, we were led to undertake a more careful comparative analysis of µO, tabu search and microcanonical annealing. Table 2 summarizes the results obtained in 50 runs for the distributions K200 and PR76. The corresponding graphs of running time versus final cost for K200 are depicted in Fig. 3. We see that the microcanonical annealing did not show any appreciable variation in execution time, even though it performed quite poorly, in this respect, in problem K200. Tabu search, on the other hand, showed a behavior similar to that of µO, a feature which was observed for all configurations where the cities were evenly distributed over the plane, without the formation of well-defined groups. The solutions yielded by µO were slightly superior to those generated by the annealing, but required a somewhat longer processing time in K200.

Table 2. Average, maximum, and minimum values obtained in 50 runs of µO, tabu search, and microcanonical annealing (MA), for problems K200 and PR76. E means cost and t means execution time, in minutes. Processor: Pentium 133.

K200    Eavg     Emin     Emax     tavg    tmin    tmax
µO      30160    29696    30941    2:00    0:59    4:00
Tabu    30392    29869    31010    1:28    0:52    2:35
MA      30271    29771    31009    8:42    8:33    8:48

PR76    Eavg     Emin     Emax     tavg    tmin    tmax
µO      108357   108159   109085   3:42    2:34    5:43
Tabu    108736   108159   109921   5:38    2:37    11:43
MA      109418   108159   111115   3:32    3:27    3:37

[Figure 3 here: three scatter plots of execution time versus final cost, one each for Microcanonical Optimization, Microcanonical Annealing, and Tabu Search; cost axis from 29400 to 34400, time axis from 0 to about 11.5 min.]

Fig. 3. Execution times versus final costs obtained in fifty runs for K200. Times in minutes.

A different situation was met in problems PR124 and PR439, which share the peculiar characteristic of presenting relatively distant groups of densely packed cities, in a topology quite distinct from the ones previously analyzed. Such a topology gives rise to a large number of local-minimum solutions, differing only in the intra-group sequences of cities, which are very close in cost. In this kind of problem, the intrinsic divide-and-conquer nature of annealing4,6 proves invaluable, since it allows the initial optimization of the long paths between groups — which are dominant in terms of cost — leaving the finer details of the intra-group paths for posterior processing. In contrast to that, tabu search, by accepting, at each step, the least expensive move (as long as it is not tabu), restricts itself, most of the time, to short-scale changes in the solutions. Therefore, it has difficulty in processing the large-scale corrections of the paths between groups. Similarly, µO finds it hard to evolve in such a topology, unless the demon parameters are chosen large enough to accommodate large-scale rearrangements. For this reason, in our implementations for PR124 and PR439, instead of the 5th entry in the list of rejected moves, we had to choose, for the demon parameters, the 25th term there. As illustrated in Fig. 4, for PR439, tabu search, which received no special tuning for this particular situation, fared worse in those problems.


[Figure 4 here: three scatter plots of execution time versus final cost, one each for Microcanonical Optimization, Microcanonical Annealing, and Tabu Search; cost axis from 105000 to 125000, time axis from 0 to about 26 min.]

Fig. 4. Execution times versus final costs obtained in fifty runs for PR439. Times in minutes.

It is interesting, in this respect, to remark that µO seems to be more efficient than tabu search in breaking loose from local minimum configurations. The curves in Fig. 5, obtained for problem P300, illustrate this. The plots show the values of the current solution and of the best solution so far encountered, as the algorithms evolve. The tabu heuristic, once in a local minimum, accepts the best of the proposed moves, irrespective of its cost. Since moves which are quite bad can thus be accepted repeatedly, the heuristic tends to stray from the best solution so far obtained. This should be compared to the behavior of µO, where the limited capacity of the demon keeps the current and the best solutions always close. This, nevertheless, does not seem to compromise the quality of the overall optimization: the algorithm is able


[Figure 5 here: two curves of cost versus implementation step, one for Tabu Search and one for Microcanonical Optimization; cost axis from 2800 to 3000, over about 80 steps.]

Fig. 5. Comparative evolution of current solution (fine line) and best solution (thick line), at each implementation step, for P300.

[Figure 6 here: two frequency histograms, one for Microcanonical Optimization (cost from 2800 to 3300) and one for Tabu Search (cost from 2825 to 3325); frequency from 0 to 25.]

Fig. 6. Frequency, in fifty runs, of the final costs obtained for Problem P300, with execution time limited to 3 min.


to find a way to a near-optimal solution, passing only through intermediary states which are approximately local minima.

Finally, since the quality of the final results is also a function of the execution time, and since µO and tabu search obey different stopping criteria, we also compared their performance in limited-time implementations. The distributions of the results obtained in 50 runs for P300, with a time limit of 3 min, are shown in Fig. 6, which makes clear, once again, the better performance of µO.

5. Conclusions

We have presented an analysis of the performance of a new metaheuristic — the microcanonical optimization algorithm, µO — when applied to the euclidean traveling salesman problem. When confronted with alternative approaches to the TSP (simulated annealing, microcanonical annealing, Tsallis annealing and tabu search), µO yielded the best overall results in our experiments. We found it to be consistently faster than simulated annealing, and consistently superior, in terms of solution quality, to the Tsallis annealing, even though the latter proved to be an efficient strategy for finding plausible solutions in short running times, as already claimed.3 Microcanonical annealing and tabu search also performed well in our analysis. Due to the adaptive divide-and-conquer nature of the annealing, MA was able to outperform tabu search (though not µO), in terms of solution quality, in certain problems with highly non-uniform city distributions, which require a scale-dependent processing. In most of the other experiments, tabu search proved itself the closest competitor to µO, yielding slightly inferior results in comparable execution times. We conclude that µO is a very promising heuristic for combinatorial optimization problems, as demonstrated by its robust and efficient performance in the benchmark application of the TSP.

References
1. V. Cerny, J. Optimization Theory and Applications 45, 41 (1985).
2. J. J. Hopfield and D. W. Tank, Bio. Cyber. 52, 141 (1985).
3. T. J. P. Penna, Phys. Rev. E 51, 1 (1995).
4. S. Kirkpatrick, C. D. Gelatt, and M. Vecchi, Science 220, 671 (1983).
5. H. Szu and R. Hartley, Phys. Lett. A 122, 157 (1987).
6. S. T. Barnard, Int. J. Comp. Vision 3, 17 (1989).
7. J. R. A. Torreão and E. Roe, Phys. Lett. A 205, 377 (1995).
8. J. L. Fernandes and J. R. A. Torreão, in Lecture Notes in Computer Science — Proc. 3rd Asian Conf. on Computer Vision (Springer-Verlag, Heidelberg, 1998), to appear.
9. S. C. S. Porto, A. M. Barroso, and J. R. A. Torreão, in Proc. 2nd Metaheuristics Int. Conf. (INRIA, Sophia-Antipolis, 1997), p. 103.
10. F. Glover, ORSA J. Comp. 1, 190 (1989); ORSA J. Comp. 2, 4 (1990).
11. M. Creutz, Phys. Rev. Lett. 50, 1411 (1983).
12. N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, J. Chem. Phys. 21, 1087 (1953).

13. L. E. Reichl, A Modern Course in Statistical Physics (The University of Texas Press, Austin, 1986).
14. P. J. M. Laarhoven and E. H. L. Aarts, Simulated Annealing: Theory and Applications (Kluwer Academic Publishers, Amsterdam, 1987).
15. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes: The Art of Scientific Computing (Cambridge University Press, Cambridge, 1992).
16. C. Tsallis, J. Stat. Phys. 52, 479 (1988).
17. S. Lin and B. W. Kernighan, Op. Res. 21, 498 (1973).
18. G. Reinelt, ORSA J. Comp. 3, 376 (1991).

Appendix A

Here we present the pseudocode for the microcanonical optimization metaheuristic.

µO algorithm
begin
    Let maxcycle be the maximum number of iterations without
    improvement of the solution cost;
    repeat
        do Initialization;
        do Sampling;
    until (maxcycle is reached)
end

Fig. A.1. µO algorithm.

procedure Initialization
begin
    Empty list-of-rejected-moves;
    Let maxinit be the maximum number of consecutive rejected moves;
    Let s be the starting solution of the initialization phase;
    num_rejmoves ← 0;
    while (num_rejmoves < maxinit) do
    begin
        Choose a move randomly;
        Call the new solution s′;
        Compute cost E of solution s;
        Compute cost E′ of solution s′;
        costchange ← E′ − E;
        if (costchange ≥ 0) then
        begin
            Put costchange in the list-of-rejected-moves;
            num_rejmoves ← num_rejmoves + 1;
        end if
        else
        begin
            num_rejmoves ← 0;
            s ← s′;
        end else
    end while
end

Fig. A.2. Initialization procedure.

procedure Sampling
begin
    Select DMAX and DI from the list-of-rejected-moves;
    Let maxsamp be the maximum number of sampling iterations;
    Let s be the starting solution of the sampling phase;
    num_iter ← 0;
    D ← DI;
    while (num_iter < maxsamp) do
    begin
        Choose a move randomly;
        Call the new solution s′;
        Compute cost E of solution s;
        Compute cost E′ of solution s′;
        costchange ← E′ − E;
        if (costchange ≤ 0) then
        begin
            if (D − costchange ≤ DMAX) then
            begin
                s ← s′;
                D ← D − costchange;
            end if
        end if
        else {costchange > 0}
        begin
            if (D − costchange ≥ 0) then
            begin
                s ← s′;
                D ← D − costchange;
            end if
        end else
        num_iter ← num_iter + 1;
    end while
end

Fig. A.3. Sampling procedure.
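As a companion to the pseudocode, the Sampling procedure of Fig. A.3, together with the adaptive choice of DMAX and DI from the sorted list of rejected cost jumps (Sec. 2.2), might read as follows in Python. This is our own transcription with illustrative names, not the authors' code; in particular, the acceptance test below combines the two branches of the pseudocode into a single bound check on the demon energy.

```python
import random

def sampling(cost, propose, s, rejected_jumps, entry=5, max_iter=50):
    """Creutz-demon sampling (Fig. A.3).  Both the demon capacity DMAX
    and its initial energy DI are set to the entry-th lowest cost jump
    logged during the preceding initialization."""
    jumps = sorted(rejected_jumps)
    d_max = d_init = jumps[min(entry, len(jumps)) - 1]
    demon = d_init
    for _ in range(max_iter):
        s2 = propose(s)
        new_demon = demon - (cost(s2) - cost(s))
        # downhill: the demon absorbs the gain but must not exceed DMAX;
        # uphill: the demon pays the difference but must not go negative
        if 0.0 <= new_demon <= d_max:
            s, demon = s2, new_demon
    return s
```

Since the demon starts full (DI = DMAX), the first accepted moves are necessarily uphill, which is exactly how the phase escapes the local minimum reached by the initialization.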
