Biol. Cybern. 78, 167-173 (1998)

State-space search strategies gleaned from animal behavior: a traveling salesman experiment

Alexandre Linhares

National Space Research Institute of the Brazilian Ministry of Science and Technology, LAC-INPE, Av. Astronautas 1758, São José dos Campos, SP 12227-010, Brazil

Received: 9 July 1997 / Accepted in revised form: 24 November 1997

Abstract. A widespread search strategy employed by predators in both the vertebrate and invertebrate phyla is the well-known area-restricted search strategy. The generality, simplicity, and effectiveness of this strategy have made it emerge many times during the course of natural selection. In this work, an artificial intelligence state-space search procedure is developed using search guidelines gleaned from the foraging behavior of predators. This procedure, which we call predatory search, has been implemented on an NP-hard combinatorial problem: the traveling salesman problem. Numerical results are presented for a limited set of benchmark problems, and area-restricted search seems to be effective: we have been able to find the optimal solution to, among others, a 400-city Manhattan problem.

1 Introduction

Massively multimodal NP-hard problems constitute a great challenge to the fields of artificial intelligence, operations research, and, more broadly, discrete mathematics. These problems are characterized by the combinatorial explosion of the number of possibilities as a function of their variables. An elegant theory shows that these problems are transformable to each other, such that an efficient algorithm could solve them all, if such an algorithm exists. Unfortunately, there are strong arguments against this possibility (Garey and Johnson 1979), and researchers generally believe this class marks the start of computational intractability. The problem of this class with which we will be concerned is the well-known traveling salesman problem. With many industrial applications and of great theoretical importance, research on the traveling salesman problem (hereafter TSP) lies at the heart of computer science and operations research. One distinguished feature of the problem is its mix of easy statement and hard solution, and it has become over time a benchmark problem for combinatorial optimization algorithms.

The problem consists of the minimization of the round-trip tour across the n cities on the salesman's route, usually formulated as follows: let the cyclic permutation p represent the order of the cities on the salesman's route, and let the matrix C_{ij} contain the associated cost of travel from city i to city j. Throughout, we will be concerned with the symmetric problem, where C_{ij} = C_{ji} for all i, j. The problem then becomes the minimization of the cost function Z, defined as:

Z = \sum_{i=1}^{n} C_{p(i)p(i+1)}    (1)

(recall that p is cyclic, so n+1 actually means 1). The TSP has long been proved NP-hard, even when the cities are distributed over Euclidean or even Manhattan geometries (Papadimitriou 1977), which suggests that all exact algorithms have exponential time growth as a function of n. In fact, our formulation of the problem has (n-1)!/2 solutions for n cities. The intractability of exact enumeration methods is therefore obvious, and we will be concerned with approximate solution schemes.

One recent line of approach to the TSP is based on metastrategies. These metastrategies define the problem as one of state-space search and employ general tactics to overcome the many local minima of the solution space. Some of these strategies have been derived from biology, for example, evolutionary strategies (Fogel 1988), genetic algorithms (Pál 1993; Ambati et al. 1991), neural networks (Hopfield and Tank 1985), and ant colonies (Colorni et al. 1991; Dorigo et al. 1996), among others.¹ It is important to note that while these bio-optimization strategies are based on biological phenomena, they are not actual simulations of these phenomena. Such simulations, if achievable at all, constitute tasks of much greater complexity (Dyer 1995).

¹ Another related nature-inspired approach is the well-known method of simulated annealing (Kirkpatrick et al. 1983; Cerny 1985). An interesting but unrelated method is the DNA molecular computation model (Adleman 1994), which also seems able to find solutions to intractable problems.

Correspondence to: Alexandre Linhares (e-mail: [email protected], fax: +55-12-345-6375)

These strategies are able to solve the TSP with varying degrees of success. Colorni et al. (1996) point out that such strategies must strike a balance between two conflicting alternatives: (1) exploitation, i.e., searching thoroughly in promising areas, and (2) exploration, i.e., moving to distant areas potentially better than the current one. It turns out that many predatory animals also face this conflict when searching for prey, and through natural selection they have developed interesting search strategies. In this work, we suggest a heuristic algorithm that resembles the way these predator species deal with this conflict. The predatory search algorithm is gleaned from a well-known predatory search strategy, namely area-restricted search behavior.

The organization of this paper is as follows: Section 2 presents a brief description of the foraging behavior of search-intensive predators and points out two search guidelines gleaned from it. Section 3 presents some mathematical definitions and the predatory search algorithm, followed by the numerical results for some benchmark instances, including a 400-city Manhattan problem solved to optimality. Problems and instances for which the algorithm should perform especially well or especially badly are discussed, and the conclusion points to some future research questions.

2 Animal behavior: the foraging process of predators

Studies of the foraging behavior of predators constitute a major line of research in fields such as ecology, evolution, and ethology. These studies have separated the foraging process of predators into three distinct parts: (1) first, predators must search for prey; (2) then, they pursue and attack it; and (3) finally, they handle and digest it (O'Brien et al.
1990). Of course, each of these steps imposes different costs on different predator species (Griffiths 1980). Take lions, for example: since their prey, usually zebras or gazelles, is large and easily spotted in their habitat, there is no great cost involved in searching. The phases of pursuit and attack (in which the risk of a fatality applies to the attacker) and handling and digestion (in which other creatures show up to join the meal) are of greater concern to such a predator. Now contrast that with a bird or a lizard preying on small insects: their small size (sometimes less than 1/100th of the predator's) makes them extremely hard to find, and many are needed for consumption. For these creatures, the search phase is much more important than the attack or handling phases. It is therefore expected that, over the course of natural selection, these creatures have evolved efficient search strategies (Bell 1990; O'Brien et al. 1990).

One of the best-known search strategies used by search-intensive predators consists of the following. The predator initially searches the environment in a

straightforward way. However, when confronted with prey, predators promptly change their movement patterns, slowing down and making frequent turns. By keeping close to the point of prey capture, predators aim to spot consecutive nearby prey. This is the well-known area-restricted search strategy, documented in species as different as birds (Smith 1974), lizards (Huey and Pianka 1981), numerous predatory insects, and host-seeking parasites (Nakamuta 1985), among others. This predatory behavior is seen as adaptive and efficient for various habitats and prey distributions (Smith 1974). For example, if prey are clustered or randomly distributed over the search space, area-restricted search can maximize search success by finding consecutive nearby prey and avoiding long paths without success. An impressive account of the adaptability of this strategy to diverse prey distributions is given in the classical study of Smith (1974): his experiments document predators spending time proportional to the number of prey in the area in question, thereby showing an efficient cost-benefit ratio over many different prey distributions. In fact, this predatory strategy imposes so much pressure on the prey that Nobel laureate ethologist Niko Tinbergen suggested his famous "spacing out hypothesis": by spacing out, prey can obtain some extra security against predators. This spacing-out strategy would then be of great survival value for the prey (Tinbergen et al. 1967).

Engaging in intensive, area-restricted search after prey confrontation is a very general search strategy, widely used by search-intensive predators throughout both the vertebrate and invertebrate phyla. We believe that this strategy is successful because it strikes a good balance between search exploration and search exploitation of potentially good areas. We also believe that a parallel of this search strategy can be traced for computational state-space search problems.
In order to do so, we suggest the following search guidelines (and the corresponding implementation).

2.1 Search guideline 1

Move extensively across the search space, using all available information to give the search direction. If a search event occurs (prey is found / a new best overall solution is found), use search guideline 2 to intensify the search around that point.

Predators frequently move in a straightforward manner until confronted with prey, their movement patterns being guided by the information obtained from their senses (pheromones, search image, etc.) as the predator samples the environment. However, when confronted with prey (a search event), predators turn to area-restricted search in one intensified effort toward consecutive prey capture. Guideline 1 can be easily used for the TSP and related problems: as the algorithm samples the solution space, moving from each solution to one slightly different, it repeatedly selects the best solution in a subset of neighboring solutions as the one to move to (the information that is used is the cost of neighboring solutions). If a new best overall solution is found (a search event), the search should be intensified in a restricted area around this improving point, as set out in search guideline 2.

Fig. 1. Sketch of the solution space. Solutions are represented by the numbered circles, with their corresponding cost projected over the Z-axis. Note that the algorithm cannot hold this graph in memory; it only works with the present solution and dynamically computes its possible neighbors. In this figure, solution 1 is considered to be the best overall one, but nothing prevents the algorithm from discovering a solution x that holds an even better cost and is a neighbor of one of these displayed solutions. Some restrictions to search areas are also presented: solution 1 holds a path to solution 12 under Restriction[3], but not under Restriction[2]. From solution 1, we have the following sets with their corresponding restrictions: Restriction[0]: {1}; Restriction[1]: {1, 2}; Restriction[2]: {1, 2, 4, 6}; Restriction[3]: all solutions in the figure

2.2 Search guideline 2

After a search event, intensify the search by restricting it to a small (potentially good) area. This area-restricted search is directed at consecutive nearby search events. As time passes with no further search events, gradually augment the search space until a "give up" instant, and then return to search guideline 1.

Predators restrict their search area after prey capture (a search event) by slowing down and making frequent turns. In order to restrict the TSP search area, all that has to be done is to impose an upper bound such that any uphill move that would transpose it is forbidden (lower bounds are naturally given by local/global minima of the problem). This upper bound, which we call a restriction, then defines a restricted search space of solutions, which corresponds to a "cup" in Hajek (1988). In order to gradually augment the search space, a list of increasing restriction levels, representing larger search areas, is used.

Guideline 1 is obviously concerned with the exploration of the search space by extensive search. Guideline 2, on the other hand, is associated with the exploitation of potentially good areas, implemented via intense,

area-restricted search. In order to simplify the implementation, instead of using distinct procedures for each guideline, we suggest that a more general procedure be used: a large restriction level is set when in regular search (so that it does not restrain the space of solutions) and, alternatively, a low restriction level is set when in area-restricted search. We can now provide the details of predatory search.

3 Predatory search

We define a combinatorial optimization problem as the pair (W, Z), where W is the set of solutions and the function Z : W → ℝ maps each solution to its corresponding cost. The goal of a combinatorial minimization problem is to find a solution s* ∈ W such that Z(s*) ≤ Z(s) for all s ∈ W. We assume that for each s ∈ W there is a set N(s) ⊂ W, known as the neighborhood of s, and a small subset N′(s) ⊂ N(s), which contains 5% of the elements of N(s). A transformation from a solution s to a solution in N(s) is called a move.

A move in the TSP is composed of a path reversal: given any solution, a subsequence of cities p(q), p(q+1), …, p(q+r) is selected and reversed to p(q+r), …, p(q+1), p(q), transforming the original solution into a new one. The choice of q and r can give either a shorter TSP tour or a longer one, provided such solutions exist in N(s). Given any solution s, by combining all possible choices of q and r we can derive the neighborhood of s.

A solution t is reachable from s if there is a path, i.e., a sequence of states s = x_0, x_1, …, x_m = t such that x_{k+1} ∈ N(x_k) holds for all integers 0 ≤ k < m (Hajek 1988). If for some R ∈ ℝ we have Z(x_k) ≤ R for all states in the path, then we say that this path respects restriction R. We therefore define the function A : W × ℝ → 2^W such that A(s, R) ⊂ W is the set of all solutions t reachable from s by a path that respects restriction R. That is, given a new best solution b and a restriction R, a restricted search area around b is defined, as in Fig. 1.
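As an illustration, the cost function of Eq. (1) and the path-reversal move can be sketched in Python. This is a minimal sketch, not the paper's code; the function names and the uniform random sampling of N′(s) are our own illustrative choices:

```python
import random

def tour_cost(tour, cost):
    """Z = sum over i of C[p(i)][p(i+1)], with p cyclic (Eq. 1)."""
    n = len(tour)
    return sum(cost[tour[i]][tour[(i + 1) % n]] for i in range(n))

def reverse_move(tour, q, r):
    """Path-reversal move: reverse the subsequence tour[q..r],
    producing one neighbor of the current solution."""
    new = list(tour)
    new[q:r + 1] = reversed(new[q:r + 1])
    return new

def sample_neighbors(tour, fraction=0.05, rng=random):
    """Sample N'(s): a fraction (here 5%) of the O(n^2) reversal neighborhood."""
    n = len(tour)
    k = max(1, int(fraction * n * (n - 1)))
    out = []
    for _ in range(k):
        q = rng.randrange(n - 1)
        r = rng.randrange(q + 1, n)  # reverse tour[q..r], with r > q
        out.append(reverse_move(tour, q, r))
    return out
```

Every sampled neighbor is again a permutation of the cities, so the cost of the best one can be compared directly against the current restriction.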
In order to implement a gradually increasing search area around the best solution found, we define an ordered list of n+1 restriction levels Restriction[L], where n is the number of cities and L ∈ {0, 1, …, n} is called the level of the restriction.
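A sketch of building this list (the scheme itself is detailed in Sect. 3.1; the function and parameter names here are illustrative, and we assume a cost function and a routine that draws one random neighbor are available):

```python
def build_restriction_list(best, cost_fn, random_neighbor, n):
    """Restriction[0] holds Z(b); Restriction[1..n] hold the sorted costs
    of n sampled neighbors of b, i.e. the increasing 'jumps' needed to
    escape b. Higher levels L thus define larger areas A(b, Restriction[L])."""
    jumps = sorted(cost_fn(random_neighbor(best)) for _ in range(n))
    return [cost_fn(best)] + jumps
```

Because the levels are sorted neighbor costs, incrementing L admits progressively larger uphill moves away from b.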


Figure 1 shows a sketch of the solution space, along with some examples of restricted search areas around the best solution found. The main iteration of the algorithm is derived from guideline 1: sample the neighborhood and, if the best solution found in the sample holds a lower cost than restriction L, make it the new solution and start over. For all practical purposes, our algorithm examines at each step a small subset, namely 5% of the neighborhood. The whole set N(s) is not analyzed because: (1) it is a time-consuming O(n²) operation, and (2) it would make the algorithm impractical, running in indefinite cycles [note that the best sampled solution is accepted even if it holds a higher cost than the current solution, and that after moving out of a local minimum, the algorithm would necessarily return to it, because that move would be the best one in N(s)].

After each tentative move, the algorithm increments a counter that holds the number of iterations spent in that area. As this counter reaches a critical point, the restriction level L is incremented, in order to gradually augment the search area. When L reaches ⌊n/20⌋, meaning that the algorithm has effectively searched for an improving solution at many levels without success, the algorithm "gives up" on area-restricted search by setting a high value for L (L = n - ⌊n/20⌋). This large value represents the cost of one of the worst solutions sampled when building the restriction list, and the algorithm, given a large search area to explore, soon moves out of the original restricted area.

Some special processing is done whenever a new improving solution is found: (1) obviously, b is updated with the new best solution; (2) the list of restriction levels is recomputed from the neighborhood of b (see below); and (3) L is set to zero, so that the algorithm will look for subsequent improvements from position b before moving on.
This is the moment when area-restricted search is triggered: Our implementation of predatory search can be seen as `preying on improving solutions', instead of `preying on very good solutions'. An alternative approach could trigger area-restricted search under less strict conditions, such as when a reasonably good solution (but not an improving one) is found in regular search.

3.1 Computing the list of restriction levels

The list of restriction levels needs to be recomputed whenever a new best overall solution is found, for a restricted search area should be defined around this new improving point. In order to build a list adaptable to widely different TSP topologies, the following scheme is used: the restriction levels from 1 to n are computed from a sample of n solutions in the neighborhood N(b) (note that b is the new improving solution). The corresponding costs of these neighboring solutions, representative of the "jumps" necessary to leave b (which is probably a local minimum), are then ordered and placed on the restriction list at positions [1…n]. Restriction[0] carries the new value of Z(b). Restriction[1] will carry the smallest jump possible to escape b (in the case of a local minimum), and as the restriction level L grows, the search area A(b, Restriction[L]) is gradually increased. A pseudocode version of the algorithm is presented below.

while L < n:
    construct N′(s) by sampling 5% of N(s)
    proposal = the x ∈ N′(s) such that Z(x) ≤ Z(y) for all y ∈ N′(s)
    if proposal ∈ A(b, Restriction[L]) then
        solution = proposal
        if Z(solution) < Z(b) then
            recompute Restriction[L] for L = {0, 1, …, n}
            L = 0
            b = solution
    else
        counter = counter + 1
    if counter > 3n then
        counter = 0
        L = L + 1
        if L = ⌊n/20⌋ then
            L = n - ⌊n/20⌋
end_while

Three specific parameters have been set after some experimentation (see pseudocode): (1) the percentage of the neighborhood evaluated at each step, 5% of the n²-n possible neighbors; (2) the number of restriction levels in area-restricted search and regular search, for which we used integer division of the number of cities, ⌊n/20⌋ restriction levels for each search mode; and (3) the counter threshold for incrementing the restriction level L, for which 3n was used. The algorithm stops when ⌊n/20⌋ levels in regular search are tried without discovering a new improving solution. In selecting these parameters, we gave priority to the quality of the final solutions; therefore, the running time of the algorithm grows relatively rapidly with the size of the problem. Alternative parameter settings can make the algorithm considerably faster, but the final solutions will be of much lower quality.

4 Numerical results

We have selected for the numerical experiments a set of four classic TSP configurations. The first three, Berlin52, Padberg-Rinaldi 124 (PR124), and Rattled Grid 195 (RAT195), are taken from the benchmark problem library TSPLIB (Reinelt 1991), and the fourth consists of a grid-like configuration of cities distributed over the Manhattan geometry (problem Manhattan 400, or M400). This last one has been used in, for example, Cerny (1985), but with a much smaller number of cities.

Figure 2 displays the interaction between the best solution cost, Z(b), the current solution cost, Z(s), and the current restriction, Restriction[L], as the algorithm converges. As the number of iterations grows, the algorithm dynamically manages the restriction levels, in order to implement either area-restricted search or regular

Table 1. Execution times (h:mm:ss) on an AMD 586 (133 MHz) processor

          Berlin52   PR124     RAT195    M400
Best      0:00:01    0:00:29   0:06:43   0:58:30
Average   0:00:05    0:01:00   0:10:46   1:17:40
Median    0:00:04    0:00:57   0:10:35   1:17:41
Worst     0:00:09    0:01:42   0:17:23   1:44:54

Table 2. Deviation from the optimal solution

                 Berlin52   PR124    RAT195   M400
Average          2.27%      1.58%    1.40%    0.07%
Median           0.00%      1.71%    1.29%    0.00%
Worst solution   8.42%      6.95%    3.62%    0.20%

Fig. 2. Search evolution. This figure presents the relationship between the current restriction, the cost of the current solution, and the cost of the best overall solution found, as a function of the number of steps of the algorithm. The data were sampled over the execution of the algorithm every 100 steps. During this run, there are 5 improvements, 4 of which are made during area-restricted search. Notice how, after each improvement, the restriction is lowered. The restriction imposes strong pressure on the algorithm to improve the best solution within that restricted area. What results is, in many cases, the best solution contained in that restricted area. After some iterations, the restriction level is gradually raised, giving more freedom to the search. After some time without improvement, the restriction level is made so large that it no longer makes a difference. The best overall result achieved in this run was 59092 (problem Padberg-Rinaldi 124, with optimal value 59030)

search. As can be seen, at each solution improvement, the restriction is lowered and then gradually raised. A low restriction imposes strong pressure on the algorithm to find the best solution contained in that restricted area (note that these data were sampled every 100 iterations and do not represent a step-by-step execution). From the standpoint of robustness, it is important to study the results of the algorithm over many executions. Table 1 presents the running times for 50 executions of the algorithm for each problem considered. As has been mentioned, the priority in the parameter settings was final solution quality, and the running time grows

approximately as O(n²) in the number of cities. Table 2 presents results on the deviation from the optimal solution over the same 50 executions. Since each of the four problems has been solved to optimality at least once, the best solutions are not included in the table.

Figure 3 presents the distribution of results over 50 executions of the algorithm; the optimal tours obtained are also shown. For each problem, the algorithm obtained a different distribution. (a) In Berlin52, the optimal solution was found a large number of times (26), due to the relatively small number of cities. Other solutions are clustered around cost 7760 and cost 8000. The worst solution found in 50 runs was 8177. (b) Problem PR124 also displayed two clusters of solutions, around costs 59100 and 60000. The optimal solution, with cost 59030, was found 6 times. The worst solution found in 50 runs was 63131. (c) Problem RAT195 shows a roughly normal distribution around cost 2350. The optimal solution was found 4 times. The worst solution found in 50 runs was 2407. (d) Problem M400 displays an impressive result: different optimal solutions were found 33 times, and the worst solutions found in all 50 runs hold the next-best tour cost. Problems of this pathological structure are especially well handled by predatory

Fig. 3. Distribution of 50 runs of predatory search for each problem. Optimal tours obtained are also shown


Fig. 4a-d. Solution improvements by search mode. a Average number of improving solutions found within restriction level 0, the other area-restricted search levels, and regular search. b Percentage of improvements in each of the three modes. c and d display details of the graphs above, showing the very small number, in absolute and relative terms, of improvements found during regular search

search. For example, in a 100-city problem (10 × 10 grid), optimal solutions were found in 70% of the executions. The next figure suggests why predatory search is so effective in this type of problem.

Figure 4 indicates how solution improvements were found in relation to the search mode: for each of the problems, the large majority of improving solutions were found under restriction level 0. This is easily explained, since the algorithm starts from arbitrary solutions, and many consecutive improvements under restriction level 0 are found until a local minimum is reached. In comparison with restriction level 0, as shown in Fig. 4a and b, the other area-restricted search levels of each problem were less able to find improving solutions. However, when compared with the number of improving solutions found in regular search, as in Fig. 4c and d, area-restricted search was much more effective (even with improvements in Restriction[0] discarded). This is very interesting, since many well-known algorithms are based on regular search only (and miss improving solutions because of their inherent inability to intensify the search around improving points).

As can be seen in Fig. 4d, there were no solution improvements during regular search on the Manhattan-distributed problem: global optimal solutions were obtained with all improvements made during area-restricted search. The topology of this problem has many improving solutions near each other, and predatory search takes advantage of this fact, repeatedly finding consecutive improvements within restricted search areas.

From an operations research standpoint, we would like to point out three limitations of predatory search.
(1) It is neither the fastest nor the best algorithm for the TSP: some operations researchers, using sophisticated mathematical structures, have been able to solve to optimality special instances of up to thousands of cities (nevertheless, our algorithm is much easier to implement, and we believe it can be applied to other combinatorial optimization problems in a straightforward manner). (2) There are no guarantees as to how close to the optimum the best solution found will be, for Papadimitriou and Steiglitz (1977) have proved such guarantees impossible for polynomial-time local search algorithms (unless P = NP). Limitations (1) and (2) also apply to the bio-optimization algorithms cited in Sect. 1. Limitation (3) is inherently connected to predatory search: we believe that it should perform poorly on the "perverse" instances suggested by Papadimitriou and Steiglitz (1978), for these problems should not have consecutive improvements reachable within a few moves. A further study will approach those problems.

5 Discussion

We have been able to find the optimal solution to TSPs of up to 400 cities using a state-space search strategy gleaned from animal behavior. The main idea of predatory search is to make an intensified search effort around each point of improvement, such that if there is another, even better solution nearby, it is more likely to be found. This is implemented via area-restricted searching around the improving point for some iterations. In the TSP experiments, the vast majority of solution improvements were found during area-restricted search. This area-restricted search, followed by extensive moves, is a behavior widely studied in search-intensive predator species throughout both the vertebrate and invertebrate phyla. We believe an analogous procedure is of use for some computational problems that have not yielded to efficient mathematical analysis.

One interesting point remains to be studied: the "cycling" behavior displayed by the algorithm. Whenever an optimization algorithm returns to a solution already visited, we say that a cycle has occurred. In predatory search, cycling occurs many times, especially during area-restricted search. As we see it, predatory search


gains information as cycling occurs: as the algorithm repeatedly finds the same best solution in a small search area, it becomes more certain about the optimality of that solution (with respect to the restricted area). Since there are algorithms based on the opposite philosophy of preventing cycles at all times (e.g., tabu search), predatory search innovates in this aspect. We are now studying the extent of cycling behavior in predatory search in comparison with other artificial-intelligence-based search strategies. It is also interesting to point out that predators pass over the same point many times when in area-restricted search (Smith 1974), in order to make sure they have not missed any prey.

As a final remark, the skeptical reader might argue that predatory search does not simulate a complex behavior such as the one presented by predators in natural settings. That is true. We do not claim to be simulating animal behavior, for, as has been clearly pointed out in the literature, "the sensing, locomotive, manipulative and social skills exhibited by (…) animals (…) completely eclipse any kind of robotic behavior produced so far" (Dyer 1995). However, we do believe that there is insight to be gained from behavioral adaptations, and we do claim to be using, quite successfully, a simple heuristic that has found its way in nature.

Acknowledgements. This work was partially funded by the FAPERJ and CAPES Foundations. I am especially grateful to Antonio De Bellis for introducing me to animal behavior back in 1994. His sudden death a year later was a great personal loss to me and to all those who knew him. I would like this paper to be a postmortem expression of gratitude to him.

References

Adleman LM (1994) Molecular computation of solutions to combinatorial problems. Science 266:1021-1024
Ambati BK, Ambati J, Mokhtar MM (1991) Heuristic combinatorial optimization by simulated Darwinian evolution: a polynomial time algorithm for the traveling salesman problem. Biol Cybern 65:31-35
Bell WJ (1990) Searching behavior patterns in insects. Annu Rev Entomol 35:447-467
Cerny V (1985) Thermodynamical approach to the traveling salesman problem: an efficient simulation algorithm. J Opt Theory Appl 45:41-52
Colorni A, Dorigo M, Maniezzo V (1991) Distributed optimization by ant colonies. In: Varela F, Bourgine P (eds) Proceedings of ECAL-91, First European Conference on Artificial Life. Elsevier, Paris, pp 134-142
Colorni A, Dorigo M, Maffioli F, Maniezzo V, Righini G, Trubian M (1996) Heuristics from nature for hard combinatorial optimization problems. Int Trans Oper Res 3:1-21
Dorigo M, Maniezzo V, Colorni A (1996) Ant system: optimization by a colony of cooperating agents. IEEE Trans Syst Man Cybern 26:29-41
Dyer MG (1995) Synthesizing intelligent animal behavior. In: Langton CG (ed) Artificial life: an overview. MIT Press, Cambridge, Mass., pp 111-134
Fogel DB (1988) An evolutionary approach to the traveling salesman problem. Biol Cybern 60:139-144
Garey MR, Johnson DS (1979) Computers and intractability: a guide to the theory of NP-completeness. Freeman, New York
Griffiths D (1980) Foraging costs and relative prey size. Am Nat 116:743-752
Hajek B (1988) Cooling schedules for optimal annealing. Math Oper Res 13:311-329
Hopfield JJ, Tank DW (1985) Neural computation of decisions in optimization problems. Biol Cybern 52:141-152
Huey RB, Pianka ER (1981) Ecological consequences of foraging mode. Ecology 62:991-999
Kirkpatrick S, Gelatt CD Jr, Vecchi MP (1983) Optimization by simulated annealing. Science 220:671-680
Nakamuta K (1985) Mechanism of the switchover from extensive to area-concentrated search behaviour of the ladybird beetle, Coccinella septempunctata bruckii. J Insect Physiol 31:849-856
O'Brien WJ, Browman HI, Evans BI (1990) Search strategies of foraging animals. Am Sci 78:152-160
Pál KF (1993) Genetic algorithms for the traveling salesman problem based on a heuristic crossover operation. Biol Cybern 69:539-546
Papadimitriou CH (1977) The Euclidean traveling salesman problem is NP-complete. Theor Comput Sci 4:237-244
Papadimitriou CH, Steiglitz K (1977) On the complexity of local search for the traveling salesman problem. SIAM J Comput 6:76-83
Papadimitriou CH, Steiglitz K (1978) Some examples of difficult traveling salesman problems. Oper Res 26(3):434-443
Reinelt G (1991) TSPLIB: a traveling salesman problem library. ORSA J Comput 3:376-384
Smith JNM (1974) The food searching behavior of two European thrushes. II. The adaptiveness of the search patterns. Behaviour 59:1-61
Tinbergen N, Impekoven M, Frank D (1967) An experiment on spacing-out as a defence against predation. Behaviour 28:307-321
