Relaxation heuristics for the set multicover problem with generalized upper bound constraints∗

Shunji Umetani†, Masanao Arakawa‡, Mutsunori Yagiura§

January 6, 2018

Abstract

We consider an extension of the set covering problem (SCP) introducing (i) multicover and (ii) generalized upper bound (GUB) constraints. For the conventional SCP, the pricing method has been introduced to reduce the size of instances, and several efficient heuristic algorithms based on such reduction techniques have been developed to solve large-scale instances. However, GUB constraints often make the pricing method less effective, because they often prevent solutions from containing highly evaluated variables together. To overcome this problem, we develop heuristic algorithms to reduce the size of instances, in which new evaluation schemes of variables are introduced taking account of GUB constraints. We also develop an efficient implementation of a 2-flip neighborhood local search algorithm that reduces the number of candidates in the neighborhood without sacrificing the solution quality. In order to guide the search to visit a wide variety of good solutions, we also introduce a path relinking method that generates new solutions by combining two or more solutions obtained so far. According to computational comparison on benchmark instances, the proposed method succeeds in selecting a small number of promising variables properly and performs quite effectively even for large-scale instances having hard GUB constraints.

Keywords: combinatorial optimization, set covering problem, metaheuristics, local search, Lagrangian relaxation

1 Introduction

The set covering problem (SCP) is one of the representative combinatorial optimization problems. We are given a set of m elements i ∈ M = {1, . . . , m}, n subsets Sj ⊆ M (|Sj| ≥ 1) and their costs cj (> 0) for j ∈ N = {1, . . . , n}. We say that X ⊆ N is a cover of M if ∪_{j∈X} Sj = M holds. The goal of SCP is to find a minimum cost cover X of M. The SCP is formulated as a 0-1 integer programming (0-1 IP) problem as follows:

    minimize    Σ_{j∈N} cj xj
    subject to  Σ_{j∈N} aij xj ≥ 1,   i ∈ M,        (1)
                xj ∈ {0, 1},          j ∈ N,

where aij = 1 if i ∈ Sj holds and aij = 0 otherwise, and xj = 1 if j ∈ X and xj = 0 otherwise. That is, a column aj = (a1j, . . . , amj)⊤ of the matrix (aij) represents the corresponding subset Sj by Sj = {i ∈ M | aij = 1}. For notational convenience, for each i ∈ M, let Ni = {j ∈ N | aij = 1} be the index set of subsets Sj that contain the element i.

The SCP is known to be NP-hard in the strong sense, and there is no polynomial time approximation scheme (PTAS) unless P = NP. However, the worst-case performance analysis does not necessarily reflect the experimental performance in practice. The continuous development of mathematical programming has much improved the performance of heuristic algorithms, accompanied by advances in computing machinery [2, 3]. For example, Beasley [4] presented a number of greedy algorithms based on Lagrangian relaxation, called Lagrangian heuristics, and Caprara et al. [5] introduced pricing techniques into a Lagrangian heuristic algorithm to reduce the size of instances. Several efficient heuristic algorithms based on Lagrangian heuristics have been developed to solve very large-scale instances with up to 5,000 constraints and 1,000,000 variables with deviation within about 1% from the optimum in a reasonable computing time [5, 6, 7, 8].

The SCP has important real applications such as crew scheduling [5], vehicle routing [9], facility location [10, 11], and logical analysis of data [12]. However, it is often difficult to formulate problems in real applications as SCP, because they often have additional side constraints in practice. Most practitioners accordingly formulate them as general mixed integer programming (MIP) problems and apply general-purpose solvers, which are usually less efficient than solvers specially tailored to SCP.

In this paper, we consider an extension of SCP introducing (i) multicover and (ii) generalized upper bound (GUB) constraints, which arise in many real applications of SCP such as vehicle routing [13, 14], crew scheduling [15], staff scheduling [16, 17] and logical analysis of data [18].

∗ A preliminary version of this paper was presented in [1].
† Osaka University, Suita, Osaka 565-0871, Japan. [email protected]
‡ Fujitsu Limited, Kawasaki 211-8588, Japan. [email protected]
§ Nagoya University, Nagoya 464-8601, Japan. [email protected]
The multicover constraint is a generalization of the covering constraint [19, 20], in which each element i ∈ M must be covered at least bi ∈ Z+ (Z+ is the set of nonnegative integers) times. The GUB constraint is defined as follows. We are given a partition {G1, . . . , Gk} of N (Gh ∩ Gh′ = ∅ for all h ≠ h′, and ∪_{h=1}^{k} Gh = N). For each block Gh ⊆ N (h ∈ K = {1, . . . , k}), the number of subsets Sj selected from the block (i.e., with j ∈ Gh) is constrained to be at most dh (≤ |Gh|). We call the resulting problem the set multicover problem with GUB constraints (SMCP-GUB), which is formulated as a 0-1 IP problem as follows:

    minimize    z(x) = Σ_{j∈N} cj xj
    subject to  Σ_{j∈N} aij xj ≥ bi,   i ∈ M,
                Σ_{j∈Gh} xj ≤ dh,      h ∈ K,        (2)
                xj ∈ {0, 1},           j ∈ N.

This generalization of SCP substantially extends the variety of its applications. However, GUB constraints often make the pricing method less effective, because they often prevent solutions from containing highly evaluated variables together. To overcome this problem, we develop heuristic algorithms to reduce the size of instances, in which new evaluation schemes of variables are introduced taking account of GUB constraints. We also develop an efficient implementation of a 2-flip neighborhood local search algorithm that reduces the number of candidates in the neighborhood without sacrificing the solution quality. In order to guide the search to visit a wide variety of good solutions, we also introduce an evolutionary approach called the path relinking method [21] that generates new solutions by combining two or more solutions obtained so far.

The SMCP-GUB is NP-hard, and the (supposedly) simpler problem of judging the existence of a feasible solution is NP-complete, since the satisfiability (SAT) problem can be reduced to this decision problem. We accordingly allow the search to visit infeasible solutions violating multicover constraints, and evaluate their quality by the following penalized objective function. Note that throughout the remainder of the paper, we do not consider solutions that violate the GUB constraints, and the search only visits solutions that satisfy the GUB constraints. Let w = (w1, . . . , wm) ∈ R_+^m (R+ is the set of nonnegative real values) be a penalty weight vector. A solution x is evaluated by

    ẑ(x, w) = Σ_{j∈N} cj xj + Σ_{i∈M} wi max{ bi − Σ_{j∈N} aij xj, 0 }.        (3)

If the penalty weights wi are sufficiently large (e.g., wi > Σ_{j∈N} cj holds for all i ∈ M), then we can conclude SMCP-GUB to be infeasible when an optimal solution x* under the penalized objective function ẑ(x, w) violates at least one multicover constraint. In our algorithm, the initial penalty weights w̄i (i ∈ M) are set to w̄i = Σ_{j∈N} cj + 1 for all i ∈ M. Starting from the initial penalty weight vector w ← w̄, the penalty weight vector w is adaptively controlled to guide the search to visit better solutions.

We now present the outline of the proposed algorithm for SMCP-GUB. The first set of initial solutions is generated by applying a randomized greedy algorithm several times. The algorithm then solves a Lagrangian dual problem to obtain a near optimal Lagrangian multiplier vector ũ through a subgradient method (Section 2), which is applied only once in the entire algorithm. Then, the algorithm applies the following procedures in this order: (i) heuristic algorithms to reduce the size of instances (Section 5), (ii) a 2-flip neighborhood local search algorithm (Section 3), (iii) an adaptive control of penalty weights (Section 4), and (iv) a path relinking method to generate initial solutions (Section 6). These procedures are applied iteratively until a given time limit has run out.
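As a concrete illustration, the penalized evaluation (3) can be computed as in the following sketch; the data layout (each column j given as a list S[j] of the elements it covers) and all names are ours, not from the paper.

```python
def penalized_cost(x, c, S, b, w):
    """Penalized objective (3): total cost of the selected columns plus
    w_i times the remaining shortfall max(b_i - s_i(x), 0) of each element i,
    where s_i(x) counts the selected columns covering element i."""
    m = len(b)
    s = [0] * m                                  # coverage counts s_i(x)
    for j, xj in enumerate(x):
        if xj:
            for i in S[j]:
                s[i] += 1
    cost = sum(c[j] for j in range(len(c)) if x[j])
    penalty = sum(w[i] * max(b[i] - s[i], 0) for i in range(m))
    return cost + penalty
```

A feasible solution incurs no penalty term, so the function then coincides with z(x).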

2 Lagrangian relaxation and subgradient method

For a given vector u = (u1, . . . , um) ∈ R_+^m, called a Lagrangian multiplier vector, we consider the following Lagrangian relaxation problem LR(u) of SMCP-GUB:

    minimize    zLR(u) = Σ_{j∈N} cj xj + Σ_{i∈M} ui ( bi − Σ_{j∈N} aij xj )
                       = Σ_{j∈N} ( cj − Σ_{i∈M} aij ui ) xj + Σ_{i∈M} bi ui        (4)
    subject to  Σ_{j∈Gh} xj ≤ dh,   h ∈ K,
                xj ∈ {0, 1},        j ∈ N.

We refer to c̃j(u) = cj − Σ_{i∈M} aij ui as the Lagrangian cost associated with column j ∈ N. For any u ∈ R_+^m, zLR(u) gives a lower bound on the optimal value z(x*) of SMCP-GUB (when it is feasible, i.e., there exists a feasible solution to SMCP-GUB). The problem of finding a Lagrangian multiplier vector u that maximizes zLR(u) is called the Lagrangian dual problem (LRD):

    maximize { zLR(u) | u ∈ R_+^m }.        (5)

For a given u ∈ R_+^m, we can easily compute an optimal solution x̃(u) = (x̃1(u), . . . , x̃n(u)) to LR(u) as follows. For each block Gh (h ∈ K), if the number of columns j ∈ Gh satisfying c̃j(u) < 0 is equal to dh or less, then set x̃j(u) ← 1 for the variables satisfying c̃j(u) < 0 and x̃j(u) ← 0 for the other variables; otherwise, set x̃j(u) ← 1 for the variables with the dh lowest Lagrangian costs c̃j(u) and x̃j(u) ← 0 for the other variables.

The Lagrangian relaxation problem LR(u) has the integrality property. That is, an optimal solution to LR(u) is also optimal to its linear programming (LP) relaxation problem obtained by replacing xj ∈ {0, 1} in (4) with 0 ≤ xj ≤ 1 for all j ∈ N. In this case, any optimal solution u* to the dual of the LP relaxation problem of SMCP-GUB is also optimal to LRD, and the optimal value zLP of the LP relaxation problem of SMCP-GUB is equal to zLR(u*).

A common approach to compute a near optimal Lagrangian multiplier vector ũ is the subgradient method. It uses the subgradient vector g(u) = (g1(u), . . . , gm(u)) ∈ R^m, associated with a given u ∈ R_+^m, defined by

    gi(u) = bi − Σ_{j∈N} aij x̃j(u).        (6)
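The block-wise rule for computing an optimal solution x̃(u) of LR(u) can be sketched as follows; here `blocks` and `d` encode the partition {G1, . . . , Gk} and the bounds dh, and all names are hypothetical.

```python
def solve_lagrangian_relaxation(c, S, u, blocks, d):
    """Optimal x~(u) for LR(u): in each block G_h, set x_j = 1 for columns
    with negative Lagrangian cost c~_j(u), but for at most d_h of them
    (those with the lowest costs first)."""
    n = len(c)
    # Lagrangian costs c~_j(u) = c_j - sum_{i in S_j} u_i
    ctilde = [c[j] - sum(u[i] for i in S[j]) for j in range(n)]
    x = [0] * n
    for h, G in enumerate(blocks):
        neg = sorted((j for j in G if ctilde[j] < 0), key=lambda j: ctilde[j])
        for j in neg[:d[h]]:          # keep at most d_h columns per block
            x[j] = 1
    return x, ctilde
```

Note that without the GUB bound dh the rule degenerates to the classical SCP case of taking every column with negative Lagrangian cost.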

This method generates a sequence of nonnegative Lagrangian multiplier vectors u(0), u(1), . . . , where u(0) is a given initial vector and u(l+1) is updated from u(l) by the following formula:

    ui(l+1) ← max{ ui(l) + λ (ẑ(x*, w̄) − zLR(u(l))) / ‖g(u(l))‖² · gi(u(l)), 0 },   i ∈ M,        (7)

where x* is the best solution obtained so far under the penalized objective function ẑ(x, w̄) with the initial penalty weight vector w̄, and λ > 0 is a parameter called the step size.

When huge instances of SCP are solved, the computing time spent on the subgradient method becomes very large if a naive implementation is used. Caprara et al. [5] developed a variant of the pricing method for the subgradient method. They define a core problem consisting of a small subset of columns C ⊂ N (|C| ≪ |N|), chosen among those having low Lagrangian costs c̃j(u(l)) (j ∈ N). Their algorithm iteratively updates the core problem in a fashion similar to that used for solving large-scale LP problems [22]. In order to solve huge instances of SMCP-GUB, we also introduce a pricing method into the basic subgradient method (BSM) described, e.g., in [3].
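A single multiplier update (7) might look as follows, with `z_ub` standing for ẑ(x*, w̄) and `z_lr` for zLR(u(l)); this is a sketch under our own naming.

```python
def subgradient_update(u, g, z_ub, z_lr, lam):
    """One iteration of (7): move u along the subgradient g with step length
    lam * (z_ub - z_lr) / ||g||^2, projecting onto the nonnegative orthant."""
    norm_sq = sum(gi * gi for gi in g)
    if norm_sq == 0:                  # g = 0: u already maximizes zLR
        return list(u)
    t = lam * (z_ub - z_lr) / norm_sq
    return [max(ui + t * gi, 0.0) for ui, gi in zip(u, g)]
```

The projection max{·, 0} keeps the multipliers feasible for LRD; the step shrinks automatically as the gap between the upper bound and zLR(u(l)) closes.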

3 2-flip neighborhood local search

The local search (LS) starts from an initial solution x and repeats replacing x with a better solution x′ in its neighborhood NB(x) until no better solution is found in NB(x). For a positive integer r, the r-flip neighborhood NBr(x) is defined by NBr(x) = {x′ ∈ {0, 1}^n | d(x, x′) ≤ r}, where d(x, x′) = |{j ∈ N | xj ≠ x′j}| is the Hamming distance between x and x′. In other words, NBr(x) is the set of solutions obtainable from x by flipping at most r variables. We first develop a 2-flip neighborhood local search algorithm (2-FNLS) as a basic component of the proposed algorithm. In order to improve efficiency, 2-FNLS searches NB1(x) first, and then NB2(x) \ NB1(x).

We first describe the algorithm to search NB1(x), called the 1-flip neighborhood search. Let

    Δẑj↑(x, w) = cj − Σ_{i∈ML(x)∩Sj} wi,
    Δẑj↓(x, w) = −cj + Σ_{i∈(ML(x)∪ME(x))∩Sj} wi,        (8)

denote the increase of ẑ(x, w) by flipping xj = 0 → 1 and xj = 1 → 0, respectively, where ML(x) = {i ∈ M | Σ_{j∈N} aij xj < bi} and ME(x) = {i ∈ M | Σ_{j∈N} aij xj = bi}. The algorithm first searches for an improved solution obtainable by flipping xj = 0 → 1, by searching for j ∈ N \ X(x) satisfying Δẑj↑(x, w) < 0 and Σ_{j′∈Gh} xj′ < dh for the block Gh containing j, where X(x) = {j ∈ N | xj = 1}. If an improved solution exists, it chooses j with the minimum Δẑj↑(x, w); otherwise, it searches for an improved solution obtainable by flipping xj = 1 → 0, by searching for j ∈ X(x) satisfying Δẑj↓(x, w) < 0.

We next describe the algorithm to search NB2(x) \ NB1(x), called the 2-flip neighborhood search. Yagiura et al. [8] developed a 3-flip neighborhood local search algorithm for SCP. They derived conditions that reduce the number of candidates in NB2(x) \ NB1(x) and NB3(x) \ NB2(x) without sacrificing the solution quality. However, those conditions are not directly applicable to the 2-flip neighborhood search for SMCP-GUB because of GUB constraints. Below we derive three lemmas that reduce the number of candidates in NB2(x) \ NB1(x) by taking account of GUB constraints. Let Δẑj1,j2(x, w) denote the increase of ẑ(x, w) by flipping the values of xj1 and xj2 simultaneously.

Lemma 1. Suppose that a solution x is locally optimal with respect to NB1(x). Then Δẑj1,j2(x, w) < 0 holds only if xj1 ≠ xj2.

Proof. See A.

This lemma indicates that in searching for improved solutions in NB2(x) \ NB1(x), it is not necessary to consider the simultaneous flip of variables xj1 and xj2 such that xj1 = xj2 = 0 or xj1 = xj2 = 1. Based on this, we consider only the set of solutions obtainable by flipping xj1 = 1 → 0 and xj2 = 0 → 1 simultaneously. We assume that Σ_{j∈Gh} xj < dh holds for the block Gh containing j2, or that j1 and j2 are in the same block Gh, because otherwise the move is infeasible. Let

    Δẑj1,j2(x, w) = Δẑj1↓(x, w) + Δẑj2↑(x, w) − Σ_{i∈ME(x)∩Sj1∩Sj2} wi        (9)

denote the increase of ẑ(x, w) in this case.

Lemma 2. Suppose that a solution x is locally optimal with respect to NB1(x), xj1 = 1 and xj2 = 0. Then Δẑj1,j2(x, w) < 0 holds only if at least one of the following two conditions holds: (i) both j1 and j2 belong to the same block Gh satisfying Σ_{j∈Gh} xj = dh; (ii) ME(x) ∩ Sj1 ∩ Sj2 ≠ ∅.

Proof. See B.

Lemma 3. Suppose that a solution x is locally optimal with respect to NB1(x), and that for a block Gh and a pair of indices j1, j2 ∈ Gh with xj1 = 1 and xj2 = 0, we have Δẑj1,j2(x, w) < 0, ME(x) ∩ Sj1 ∩ Sj2 = ∅ and Σ_{j∈Gh} xj = dh. Let j1* = arg min_{j∈Gh} Δẑj↓(x, w) and j2* = arg min_{j∈Gh} Δẑj↑(x, w). Then we have Δẑj1*,j2*(x, w) < 0.

Proof. See C.

Note that the condition of Lemma 3 implies that condition (i) of Lemma 2 is satisfied. We can conclude that, to find an improved solution satisfying condition (i), it suffices to check only one pair for each block Gh satisfying Σ_{j∈Gh} xj = dh, instead of checking all pairs (j1, j2) with j1, j2 ∈ Gh, xj1 = 1 and xj2 = 0 (provided that the algorithm also checks the solutions satisfying condition (ii) of Lemma 2).
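For illustration, the 1-flip increments (8) can be evaluated from scratch as in the sketch below; the actual implementation maintains them incrementally with the auxiliary data described later in this section. Data layout and names are ours.

```python
def one_flip_deltas(c, S, b, w, x):
    """Increments (8) of the penalized objective: dz_up[j] for flipping
    x_j = 0 -> 1 (j not in X(x)), dz_down[j] for x_j = 1 -> 0 (j in X(x)).
    M_L(x) are elements with s_i(x) < b_i, M_E(x) those with s_i(x) = b_i."""
    m, n = len(b), len(c)
    s = [0] * m                       # coverage counts s_i(x)
    for j in range(n):
        if x[j]:
            for i in S[j]:
                s[i] += 1
    dz_up, dz_down = {}, {}
    for j in range(n):
        if x[j] == 0:   # adding j pays c_j, saves w_i for each deficient i
            dz_up[j] = c[j] - sum(w[i] for i in S[j] if s[i] < b[i])
        else:           # removing j saves c_j, pays w_i for i in M_L or M_E
            dz_down[j] = -c[j] + sum(w[i] for i in S[j] if s[i] <= b[i])
    return dz_up, dz_down
```

A flip is improving exactly when its increment is negative, which is the test used in Steps 1 and 2 of 2-FNLS below.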

The algorithm first searches for an improved solution x′ ∈ NB2(x) \ NB1(x) satisfying condition (i). For each block Gh (h ∈ K) satisfying Σ_{j∈Gh} xj = dh, it checks the solution obtained by flipping xj1 = 1 → 0 and xj2 = 0 → 1 for the j1 and j2 in Gh with the minimum Δẑj1↓(x, w) and Δẑj2↑(x, w), respectively. The algorithm then searches for an improved solution x′ ∈ NB2(x) \ NB1(x) satisfying condition (ii). Let NB2(j1)(x) denote the subset of NB2(x) obtainable by flipping xj1 = 1 → 0. The algorithm searches NB2(j1)(x) for each j1 ∈ X(x) in the ascending order of Δẑj1↓(x, w). If an improved solution is found, it chooses a pair j1 and j2 with the minimum Δẑj1,j2(x, w) among those in NB2(j1)(x).

Algorithm 2-FNLS searches NB1(x) first, and then NB2(x) \ NB1(x). The algorithm is formally described as follows.

Algorithm 2-FNLS(x, w)

Input: A solution x and a penalty weight vector w.

Output: A solution x.

Step 1: If I1↑(x) = {j ∈ N \ X(x) | Δẑj↑(x, w) < 0, Σ_{j′∈Gh} xj′ < dh for the block Gh containing j} ≠ ∅ holds, choose j ∈ I1↑(x) with the minimum Δẑj↑(x, w), set xj ← 1 and return to Step 1.

Step 2: If I1↓(x) = {j ∈ X(x) | Δẑj↓(x, w) < 0} ≠ ∅ holds, choose j ∈ I1↓(x) with the minimum Δẑj↓(x, w), set xj ← 0 and return to Step 2.

Step 3: For each block Gh satisfying Σ_{j∈Gh} xj = dh (h ∈ K), if Δẑj1,j2(x, w) < 0 holds for the j1 and j2 with the minimum Δẑj1↓(x, w) and Δẑj2↑(x, w) (j1, j2 ∈ Gh), respectively, set xj1 ← 0 and xj2 ← 1. If the current solution x has been updated at least once in Step 3, return to Step 3.

Step 4: For each j1 ∈ X(x) in the ascending order of Δẑj1↓(x, w), if I2(x) = {j2 ∈ N \ X(x) | Δẑj1,j2(x, w) < 0, (Σ_{j∈Gh} xj < dh for the block Gh containing j2, or ∃h, j1, j2 ∈ Gh), ME(x) ∩ Sj1 ∩ Sj2 ≠ ∅} ≠ ∅ holds, choose j2 ∈ I2(x) with the minimum Δẑj1,j2(x, w) and set xj1 ← 0 and xj2 ← 1. If the current solution x has been updated at least once in Step 4, return to Step 1; otherwise output x and exit.

We note that 2-FNLS does not necessarily output a locally optimal solution with respect to NB2(x), because the solution x is not necessarily locally optimal with respect to NB1(x) in Steps 3 and 4. Though it is easy to keep the solution x locally optimal with respect to NB1(x) in Steps 3 and 4 by returning to Step 1 whenever an improved solution is obtained in Steps 2 or 3, we did not adopt this option because it consumes much computing time just to conclude that the current solution is locally optimal with respect to NB1(x) in most cases. We also note that the phase to search NB1(x) in the algorithm (i.e., Steps 1 and 2) always finishes with the search for an improved solution obtainable by flipping xj = 1 → 0, to prevent this phase from stopping at solutions having redundant columns.

Let one-round be the computation needed to find an improved solution in the neighborhood or to conclude that the current solution is locally optimal, including the time to update relevant data structures and/or memory [23, 24]. If implemented naively, 2-FNLS requires O(σ) and O(nσ) one-round time for NB1(x) and NB2(x) \ NB1(x), respectively, where σ = Σ_{i∈M} Σ_{j∈N} aij. In order to improve computational efficiency, we keep the following auxiliary data

    Δpj↑(x, w) = Σ_{i∈ML(x)∩Sj} wi,              j ∈ N \ X(x),
    Δpj↓(x, w) = Σ_{i∈(ML(x)∪ME(x))∩Sj} wi,      j ∈ X(x),        (10)

in memory to compute each Δẑj↑(x, w) = cj − Δpj↑(x, w) and Δẑj↓(x, w) = −cj + Δpj↓(x, w) in O(1) time. We also keep the values of si(x) = Σ_{j∈N} aij xj (i ∈ M) in memory to update the values of Δpj↑(x, w) and Δpj↓(x, w) for j ∈ N in O(τ) time when x is changed, where τ = max_{j∈N} Σ_{i∈Sj} |Ni| (see D).

We first consider the one-round time for NB1(x). In Steps 1 and 2, the algorithm finds j ∈ N \ X(x) with the minimum Δẑj↑(x, w) and j ∈ X(x) with the minimum Δẑj↓(x, w) in O(n) time, respectively, by using the auxiliary data, whose update requires O(τ) time. Thus, the one-round time is reduced to O(n + τ) for NB1(x).

We next consider the one-round time for NB2(x) \ NB1(x). In Step 3, the algorithm first finds the j1 and j2 with the minimum Δẑj1↓(x, w) and Δẑj2↑(x, w) (j1, j2 ∈ Gh), respectively, in O(|Gh|) time. The algorithm then evaluates Δẑj1,j2(x, w) in O(ν) time by using (9), where ν = max_{j∈N} |Sj|. In Step 4, the algorithm first flips xj1 = 1 → 0 and temporarily updates the values of si(x) (i ∈ Sj1), Δpl↑(x, w) and Δpl↓(x, w) (l ∈ Ni, i ∈ Sj1) in O(τ) time, so that the memory corresponding to these keeps the values of si(x′), Δpl↑(x′, w) and Δpl↓(x′, w) for the x′ obtained from x by flipping xj1 = 1 → 0. Then, for searching NB2(j1)(x), the algorithm evaluates

    Δẑj1,j2(x, w) = Δẑj1↓(x, w) + Δẑj2↑(x′, w) = Δẑj1↓(x, w) + cj2 − Δpj2↑(x′, w)        (11)

(in O(1) time for each pair of j1 and j2) only for each j2 ∈ N \ X(x) such that the value of Δpj2↑(x, w) has been changed to Δpj2↑(x′, w) (≠ Δpj2↑(x, w)) during the temporary update. Note that the number of such candidates j2 satisfying Δpj2↑(x′, w) ≠ Δpj2↑(x, w) is O(τ). When an improved solution is not found in NB2(j1)(x), the updated memory values are restored in O(τ) time to the original values si(x), Δpl↑(x, w) and Δpl↓(x, w) before we try another candidate for j1. The time to search NB2(j1)(x) for each j1 ∈ X(x) is therefore O(τ). Thus, the one-round time is reduced to O(n + kν + n′τ) for NB2(x) \ NB1(x), where n′ = Σ_{j∈N} xj = |X(x)|. Because k ≤ n, ν ≤ m, τ ≤ σ, n′ ≤ n, m ≤ σ, and n ≤ σ always hold, these orders are not worse than those of the naive implementation, and they are much better if ν ≪ m, τ ≪ σ and n′ ≪ n hold, which is the case for many instances. We also note that the computation time for updating the auxiliary data has little effect on the total computation time of 2-FNLS, because, in most cases, the number of solutions actually visited is much smaller than the number of evaluated neighbor solutions.
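The O(τ) incremental update of si(x) and the auxiliary sums (10) on a single flip can be sketched as follows; `Ni[i]` lists the columns covering element i, and the two boundary tests detect when i enters or leaves ML(x) and ML(x) ∪ ME(x). This is our own sketch, not the authors' code.

```python
def apply_flip(j, x, S, Ni, b, w, s, dp_up, dp_down):
    """Flip x_j and maintain the coverage counts s_i(x) together with the
    auxiliary sums (10): dp_up[l] = sum of w_i over i in M_L(x) with i in S_l,
    dp_down[l] = sum of w_i over i with s_i(x) <= b_i and i in S_l.
    Cost is O(sum over i in S_j of |N_i|), i.e. the O(tau) update above."""
    delta = -1 if x[j] else 1
    x[j] ^= 1
    for i in S[j]:
        old = s[i]
        s[i] += delta
        lo, hi = min(old, s[i]), max(old, s[i])   # hi = lo + 1
        sign = w[i] if delta < 0 else -w[i]       # i enters the sets on decrement
        if lo < b[i] <= hi:          # i crossed the M_L boundary (s_i vs b_i)
            for l in Ni[i]:
                dp_up[l] += sign
        if lo <= b[i] < hi:          # i crossed the M_L union M_E boundary
            for l in Ni[i]:
                dp_down[l] += sign
```

Applying the same flip twice restores both the solution and all auxiliary data, which is exactly the restore operation used after an unsuccessful search of NB2(j1)(x).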

4 Adaptive control of penalty weights

We observed that 2-FNLS tends to be attracted to locally optimal solutions of insufficient quality when the penalty weights wi are large. We accordingly incorporate a mechanism to adaptively control the values of wi (i ∈ M ) [8, 25, 26]; the algorithm iteratively applies 2-FNLS, updating the penalty weight vector w after each call to 2-FNLS. We call such a sequence of calls to 2-FNLS the weighting local search (WLS) according to [27, 28].


Let x denote the solution at which the previous call to 2-FNLS stops. Algorithm WLS resumes 2-FNLS from x after updating the penalty weight vector w. Starting from an initial penalty weight vector w ← w̄, where we set w̄i = Σ_{j∈N} cj + 1 for all i ∈ M, the penalty weight vector w is updated as follows. Let xbest denote the best solution obtained in the current call to WLS with respect to the penalized objective function ẑ(x, w̄) with the initial penalty weight vector w̄. If ẑ(x, w) ≥ ẑ(xbest, w̄) holds, WLS uniformly decreases the penalty weights wi ← (1 − η)wi for all i ∈ M, where the parameter η is decided so that for 15% of the variables satisfying xj = 1, the new value of Δẑj↓(x, w) becomes negative. Otherwise, WLS increases the penalty weights by

    wi ← min{ wi (1 + δ yi(x) / max_{l∈M} yl(x)), w̄i },   i ∈ M,        (12)

where yi(x) = max{bi − Σ_{j∈N} aij xj, 0} is the amount of violation of the ith multicover constraint, and δ is a parameter that is set to 0.2 in our computational experiments. Algorithm WLS iteratively applies 2-FNLS, updating the penalty weight vector w after each call to 2-FNLS, until the best solution xbest with respect to ẑ(x, w̄) obtained in the current call to WLS has not improved in the last 50 iterations.

Algorithm WLS(x)

Input: A solution x.

Output: A solution x̂ and the best solution xbest with respect to ẑ(x, w̄) obtained in the current call to WLS.

Step 1: Set iter ← 0, xbest ← x, x̂ ← x and w ← w̄.

Step 2: Apply 2-FNLS(x̂, w) to obtain an improved solution x̂′ and then set x̂ ← x̂′. Let x′ be the best solution with respect to ẑ(x, w̄) obtained during the call to 2-FNLS(x̂, w).

Step 3: If ẑ(x′, w̄) < ẑ(xbest, w̄) holds, then set xbest ← x′ and iter ← 0; otherwise, set iter ← iter + 1. If iter ≥ 50 holds, output x̂ and xbest and halt.

Step 4: If ẑ(x̂, w) ≥ ẑ(xbest, w̄) holds, then uniformly decrease the penalty weights wi for all i ∈ M by wi ← (1 − η)wi; otherwise, increase the penalty weights wi for all i ∈ M by (12). Return to Step 2.
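The increase rule (12) alone might be implemented as in the following sketch, where `w_bar` holds the caps w̄i and `y` the violations yi(x); names are ours.

```python
def increase_penalty_weights(w, y, w_bar, delta=0.2):
    """Increase rule (12): scale each w_i by 1 + delta * y_i / max_l y_l,
    capped at the initial weight w_bar_i; y_i is the violation of the ith
    multicover constraint."""
    ymax = max(y)
    if ymax == 0:
        return list(w)               # no violated constraint: nothing to raise
    return [min(wi * (1 + delta * yi / ymax), wb)
            for wi, yi, wb in zip(w, y, w_bar)]
```

Weights of the most violated constraints grow fastest, steering the next call to 2-FNLS toward repairing them first.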

5 Heuristic algorithms to reduce the size of instances

For a near optimal Lagrangian multiplier vector u, the Lagrangian costs c̃j(u) give reliable information on the overall utility of selecting columns j ∈ N for SCP. Based on this property, the Lagrangian costs c̃j(u) are often utilized to solve huge instances of SCP. Similar to the pricing method for solving the Lagrangian dual problem, several heuristic algorithms successively solve a number of subproblems, also called core problems, consisting of a small subset of columns C ⊆ N (|C| ≪ |N|), chosen among those having low Lagrangian costs c̃j(u) (j ∈ C) [5, 6, 7, 8]. The Lagrangian costs c̃j(u) are unfortunately unreliable for selecting columns j ∈ N for SMCP-GUB, because GUB constraints often prevent solutions from containing more than dh variables xj with the lowest Lagrangian costs c̃j(u). To overcome this problem, we develop two evaluation schemes of columns j ∈ N for SMCP-GUB.

Before updating the core problem C for every call to WLS, the algorithm heuristically fixes some variables x̂j ← 1 to reflect the characteristics of the incumbent solution x* and the current solution x̂. Let u be a near optimal Lagrangian multiplier vector, and V = {j ∈ N | x*j = x̂j = 1} be an index set from which variables to be fixed are chosen. Let c̃max(u) = max_{j∈V} c̃j(u) be the maximum value of the Lagrangian cost c̃j(u) (j ∈ V). The algorithm randomly chooses a variable xj (j ∈ V) with probability

    probj(u) = (c̃max(u) − c̃j(u)) / Σ_{l∈V} (c̃max(u) − c̃l(u))        (13)

and fixes x̂j ← 1. We note that the uniform distribution is used if c̃max(u) = c̃j(u) holds for all j ∈ V. The algorithm iteratively chooses and fixes a variable xj (j ∈ V) until Σ_{j∈N} aij xj ≥ bi holds for 20% of the multicover constraints i ∈ M. It then updates the Lagrangian multipliers by setting ui ← 0 if Σ_{j∈F} aij ≥ bi holds for i ∈ M, and computes the Lagrangian costs c̃j(u) for j ∈ N \ F, where F is the index set of the fixed variables. The variable fixing procedure is formally described as follows.

Algorithm FIX(x*, x̂, ũ)

Input: The incumbent solution x*, the current solution x̂ and a near optimal Lagrangian multiplier vector ũ.

Output: A set of fixed variables F ⊂ N and a Lagrangian multiplier vector u.

Step 1: Set V ← {j ∈ N | x*j = x̂j = 1}, F ← ∅, and u ← ũ.

Step 2: If |{i ∈ M | Σ_{j∈F} aij ≥ bi}| ≥ 0.2m holds, then set ui ← 0 for each i ∈ M satisfying Σ_{j∈F} aij ≥ bi, output F and u, and halt.

Step 3: Randomly choose a column j ∈ V with probability probj(u) defined by (13), and set F ← F ∪ {j}. Return to Step 2.

Subsequent to the variable fixing procedure, the algorithm updates the instance to be considered by setting z(x) = Σ_{j∈N\F} cj xj + Σ_{j∈F} cj, bi ← max{bi − Σ_{j∈F} aij, 0} (i ∈ M), and dh ← dh − |Gh ∩ F| (h ∈ K).

The first evaluation scheme modifies the Lagrangian costs c̃j(u) to reduce the number of redundant columns j ∈ C resulting from GUB constraints. For each block Gh (h ∈ K), let θh be the value of the (dh + 1)st lowest Lagrangian cost c̃j(u) among those for columns in Gh if dh < |Gh| holds, and θh ← 0 otherwise. We then define a score ρj for j ∈ Gh, called the normalized Lagrangian score, by ρj = c̃j(u) − θh if θh < 0 holds, and ρj = c̃j(u) otherwise.

The second evaluation scheme modifies the Lagrangian costs c̃j(u) by replacing the Lagrangian multiplier vector u with the adaptively controlled penalty weight vector w. We define another score φj for j ∈ N, called the pseudo-Lagrangian score, by φj = c̃j(w). The intuitive meaning of this score is that we consider a column to be promising if it covers many constraints that were frequently violated in the recent search. The variable fixing procedure for the second evaluation scheme is described in a fashion similar to that of the first evaluation scheme, by replacing the Lagrangian multiplier vectors ũ and u with the penalty weight vectors w̃ and w, respectively.

Given a score vector ρ (resp., φ), a core problem is defined by a subset C ⊂ N consisting of (i) the columns j ∈ Ni with the bi lowest scores ρj (resp., φj) for each i ∈ M, (ii) the columns j ∈ N with the 10n′ lowest scores ρj (resp., φj) (recall that we define n′ = Σ_{j∈N} xj), and (iii) the columns j ∈ X(x*) ∪ X(x̂) for the incumbent solution x* and the current solution x̂. The core problem updating procedure is formally described as follows.

Algorithm CORE(ρ, x*, x̂)

Input: A score vector ρ, the incumbent solution x* and the current solution x̂.

Output: The core problem C ⊂ N.

Step 1: For each i ∈ M, let C1(i) be the set of columns j ∈ Ni with the bi lowest ρj among those in Ni. Then set C1 ← ∪_{i∈M} C1(i).

Step 2: Let C2 be the set of columns j ∈ N with the 10n′ lowest ρj.

Step 3: Set C ← C1 ∪ C2 ∪ X(x*) ∪ X(x̂). Output C and halt.
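The sampling step of FIX (Step 3, using probability (13)) can be sketched as follows, with the uniform fallback covering the equal-cost case noted above; names are hypothetical.

```python
import random

def choose_column_to_fix(V, ctilde, rng=None):
    """Sample one column j from V with probability (13), proportional to
    cmax - ctilde[j]; fall back to a uniform choice when all Lagrangian
    costs over V are equal (all sampling weights zero)."""
    rng = rng or random
    V = list(V)
    cmax = max(ctilde[j] for j in V)
    weights = [cmax - ctilde[j] for j in V]
    if sum(weights) == 0:
        return rng.choice(V)         # uniform distribution over V
    return rng.choices(V, weights=weights, k=1)[0]
```

Columns with the lowest Lagrangian cost get the largest weight, while the column attaining c̃max(u) is never chosen (its weight is zero) unless all costs coincide.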

6 Path relinking

The path relinking method [21] is an evolutionary approach to integrating intensification and diversification strategies. This approach generates new solutions by exploring trajectories that connect good solutions. It starts from one of the good solutions, called an initiating solution, and generates a path by iteratively moving to a solution in the neighborhood that leads toward the other solutions, called guiding solutions.

Because it is preferable to apply the path relinking method to solutions of high quality, we keep reference sets R1 and R2 of good solutions with respect to the penalized objective functions ẑ(x, w) and ẑ(x, w̄) with the current penalty weight vector w and the initial penalty weight vector w̄, respectively. Initially, R1 and R2 are prepared by repeatedly applying a randomized greedy algorithm, which is the same as Steps 1 and 2 of 2-FNLS(0, w̄) except for randomly choosing j ∈ I1↑(x) from those with the five lowest Δẑj↑(x, w̄) in Step 1 (recall that we define w̄ to be the initial penalty weight vector). Suppose that the last call to WLS stops at a solution x̂, and that xbest is the best solution with respect to ẑ(x, w̄) obtained during the last call to WLS. Then, the worst solution x̂worst in R1 (with respect to ẑ(x, w)) is replaced with the solution x̂ if it satisfies ẑ(x̂, w) ≤ ẑ(x̂worst, w) and x̂ ≠ x′ for all x′ ∈ R1. The worst solution xworst in R2 (with respect to ẑ(x, w̄)) is replaced with the solution xbest if it satisfies ẑ(xbest, w̄) ≤ ẑ(xworst, w̄) and xbest ≠ x′ for all x′ ∈ R2.

The path relinking method first chooses two solutions xinit (initiating solution) and xguide (guiding solution) randomly, one from R1 and another from R2, where we assume that ẑ(xinit, w) ≤ ẑ(xguide, w) and xinit ≠ x̂ hold. Let ξ = d(xinit, xguide) be the Hamming distance between the solutions xinit and xguide. It then generates a sequence xinit = x(0), x(1), . . . , x(ξ) = xguide of solutions as follows. Starting from x(0) ← xinit, for l = 0, 1, . . . , ξ − 1, the solution x(l+1) is defined to be a solution x′ with the best value of ẑ(x′, w) among those satisfying x′ ∈ NB1(x(l)) and d(x′, xguide) < d(x(l), xguide). The algorithm chooses the first solution x(l) (l = 0, 1, . . . , ξ − 1) satisfying ẑ(x(l), w) ≤ ẑ(x(l+1), w) as the next initial solution of WLS.

Given a pair of solutions x̂ and xbest and the current reference sets R1 and R2, the path relinking method outputs the next initial solution x of WLS and the updated reference sets R1 and R2. The path relinking method is formally described as follows.

Algorithm PRL(x̂, xbest, R1, R2)

Input: Solutions x̂ and xbest and reference sets R1 and R2.

Output: The next initial solution x of WLS and the updated reference sets R1 and R2.

Step 1: Let x̂worst = arg max_{x′∈R1} ẑ(x′, w) be the worst solution in R1. If the solution x̂ satisfies ẑ(x̂, w) ≤ ẑ(x̂worst, w) and x̂ ≠ x′ for all x′ ∈ R1, then set R1 ← R1 ∪ {x̂} \ {x̂worst}.

Step 2: Let xworst = arg max_{x′∈R2} ẑ(x′, w̄) be the worst solution in R2. If the solution xbest satisfies ẑ(xbest, w̄) ≤ ẑ(xworst, w̄) and xbest ≠ x′ for all x′ ∈ R2, then set R2 ← R2 ∪ {xbest} \ {xworst}.

Step 3: Randomly choose two solutions xinit and xguide, one from R1 and another from R2, where we assume that ẑ(xinit, w) ≤ ẑ(xguide, w) and xinit ≠ x̂ hold. Set l ← 0 and x(l) ← xinit.

Step 4: Set x(l+1) ← arg min{ẑ(x′, w) | x′ ∈ NB1(x(l)), d(x′, xguide) < d(x(l), xguide)}. If ẑ(x(l), w) > ẑ(x(l+1), w) holds, set l ← l + 1 and return to Step 4; otherwise set x ← x(l), output x, R1 and R2, and halt.
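One move along the relinking path (the core of Step 4 of PRL) might look as follows; `eval_fn` stands for the penalized objective ẑ(·, w), and all names are hypothetical.

```python
def relink_step(x, x_guide, w, eval_fn):
    """One path relinking move: among the 1-flip neighbors of x that reduce
    the Hamming distance to x_guide (i.e. flip a coordinate where the two
    solutions differ), return the one with the best penalized objective."""
    best = None
    for j, (xj, gj) in enumerate(zip(x, x_guide)):
        if xj != gj:
            cand = list(x)
            cand[j] = gj             # move one coordinate toward the guide
            val = eval_fn(cand, w)
            if best is None or val < best[0]:
                best = (val, cand)
    return best                      # None when x already equals x_guide
```

Iterating this step ξ times traces the whole path x(0), . . . , x(ξ); PRL stops earlier at the first non-improving solution.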

7 Summary of the proposed algorithm

We summarize the outline of the proposed algorithm for SMCP-GUB in Figure 1. The first reference sets $R_1$ and $R_2$ of good solutions are generated by repeating the randomized greedy algorithm (Section 6). The first initial solution $\hat{x}$ is set to $\hat{x} \leftarrow \arg\min_{x \in R_1 \cup R_2} \hat{z}(x, \bar{w})$. When using the normalized Lagrangian score $\rho_j$, the algorithm obtains a near optimal Lagrangian multiplier vector $\tilde{u}$ by the basic subgradient method BSM accompanied by a pricing method [3] (Section 2), which is applied only once in the entire algorithm.

The algorithm repeatedly applies the following procedures in this order until the given time limit runs out. The heuristic variable fixing algorithm FIX($x^*$, $\hat{x}$, $\tilde{u}$) (Section 5) decides the index set $F$ of variables to be fixed, and it updates the instance to be considered by fixing variables $\hat{x}_j \leftarrow 1$ ($j \in F$). The heuristic size reduction algorithm CORE($\rho$, $x^*$, $\hat{x}$) (Section 5) constructs a core problem $C \subset N$ and fixes variables $\hat{x}_j \leftarrow 0$ ($j \in N \setminus C$). The weighting local search algorithm WLS($\hat{x}$) explores good solutions $\hat{x}$ and $x^{best}$ with respect to $\hat{z}(x, w)$ and $\hat{z}(x, \bar{w})$, respectively, by repeating the 2-flip neighborhood local search algorithm 2-FNLS($\hat{x}$, $w$) (Section 3) while updating the penalty weight vector $w$ adaptively (Section 4), where the initial penalty weights are set to $\bar{w}_i = \sum_{j \in N} c_j + 1$ for all $i \in M$. After updating the reference sets $R_1$ and $R_2$, the next initial solution $\hat{x}$ is generated by the path relinking method PRL($\hat{x}$, $x^{best}$, $R_1$, $R_2$) (Section 6).

(Figure 1 flowchart: Start → Randomized greedy → Subgradient method → Heuristic variable fixing → Heuristic size reduction → Weighting local search → Path relinking → Exit)

Figure 1: Outline of the proposed algorithm for SMCP-GUB
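The control flow of Figure 1 can be sketched as a small driver skeleton. All six components are caller-supplied callables standing in for the paper's procedures (the function names and tuple shapes below are ours, chosen only for illustration); the point is the ordering: greedy and subgradient once, then fixing, core construction, WLS, and path relinking repeated until the time limit.

```python
import time

def smcp_gub_solver(greedy, subgradient, fix_vars, build_core, wls, prl,
                    time_limit=1.0):
    """Skeleton of the overall flow: build reference sets by the randomized
    greedy algorithm, run the subgradient method once, then repeat variable
    fixing, core construction, weighting local search, and path relinking
    until the time limit runs out."""
    R1, R2 = greedy()                      # initial reference sets
    x = min(R1 + R2, key=lambda s: s[1])   # best initial solution (value at s[1])
    u = subgradient()                      # Lagrangian multipliers, computed once
    deadline = time.time() + time_limit
    while time.time() < deadline:
        F = fix_vars(x, u)                 # indices fixed to 1
        C = build_core(x, u)               # core problem, rest fixed to 0
        x_hat, x_best = wls(x, F, C)       # weighting local search
        x, R1, R2 = prl(x_hat, x_best, R1, R2)  # next initial solution
    return x
```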

8 Computational results

We first prepared eight classes of random instances for SCP, among which classes G and H were taken from Beasley’s OR Library [30] and classes I–N were newly generated by us in a similar


Table 1: The benchmark instances for SCP

Instance   #cst.  #vars.     Density  Cost range
G.1–G.5     1000     10,000  2.0%     [1,100]
H.1–H.5     1000     10,000  5.0%     [1,100]
I.1–I.5     1000     50,000  1.0%     [1,100]
J.1–J.5     1000    100,000  1.0%     [1,100]
K.1–K.5     2000    100,000  0.5%     [1,100]
L.1–L.5     2000    200,000  0.5%     [1,100]
M.1–M.5     5000    500,000  0.25%    [1,100]
N.1–N.5     5000  1,000,000  0.25%    [1,100]
RAIL507      507     63,009  1.3%     [1,2]
RAIL516      516     47,311  1.3%     [1,2]
RAIL582      582     55,515  1.2%     [1,2]
RAIL2536    2536  1,081,841  0.4%     [1,2]
RAIL2586    2586    920,683  0.3%     [1,2]
RAIL4284    4284  1,092,610  0.2%     [1,2]
RAIL4872    4872    968,672  0.2%     [1,2]

Table 2: Four types of benchmark instances for SMCP-GUB ($d_h/|G_h|$)

Instance  Type1  Type2    Type3  Type4
G.1–G.5   1/10   10/100   5/10   50/100
H.1–H.5   1/10   10/100   5/50   50/100
I.1–I.5   1/50   10/500   5/50   50/500
J.1–J.5   1/50   10/500   5/50   50/500
K.1–K.5   1/50   10/500   5/50   50/500
L.1–L.5   1/50   10/500   5/50   50/500
M.1–M.5   1/50   10/500   5/50   50/500
N.1–N.5   1/100  10/1000  5/100  50/1000

manner. The random instance generator for classes I–N is available at [29]. Each class has five instances; we denote the instances in class G as G.1, . . . , G.5, and those in classes H–N similarly. Another set of benchmark instances, called RAIL, arises from a crew pairing problem in an Italian railway company [5, 7]. A summary of these instances is given in Table 1, where the density is defined by $\sum_{i \in M} \sum_{j \in N} a_{ij}/mn$. For each random instance, we generated four types of SMCP-GUB instances (by the instance generator available at [29]) with different values of the parameters $d_h$ and $|G_h|$ as shown in Table 2, where all blocks $G_h$ ($h \in K$) have the same size $|G_h|$ and upper bound $d_h$ for each instance. Here, the right-hand sides $b_i$ of the multicover constraints are random integers taken from the interval [1, 5]. To the best of our knowledge, there are no specially tailored algorithms for SMCP-GUB, and SMCP-GUB instances emerging from various applications have been formulated as MIP problems and solved by general-purpose solvers in the literature [13, 14, 15, 16, 17, 18]. We accordingly compared the proposed algorithm with two recent MIP solvers, CPLEX12.6 and Gurobi5.6.2, and a local search solver, LocalSolver3.1. LocalSolver3.1 is not the latest version, but it performs better than the more recent version 4.0 on the benchmark instances. We also compared with a 3-flip neighborhood local search algorithm [8] on SCP instances. These solvers and the proposed algorithm were tested on a Mac Pro desktop computer with two 2.66 GHz Intel Xeon (six cores) processors and were run on a single thread with the time limits shown in Table 3.
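The density column of Table 1 follows directly from its definition. A one-line sketch, assuming the coverage matrix $a$ is given as a 0-1 list of lists (the function name `density` is ours):

```python
def density(a):
    """Fraction of nonzero entries of the m-by-n coverage matrix a,
    i.e. sum_i sum_j a_ij / (m n), as reported in Table 1."""
    m = len(a)
    n = len(a[0])
    return sum(sum(row) for row in a) / (m * n)
```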


Table 3: Computation time of the tested solvers and the proposed algorithm for SMCP-GUB and SCP

Instance  MIP solvers  Heuristics
G.1–G.5   3600 s       600 s
H.1–H.5   3600 s       600 s
I.1–I.5   3600 s       600 s
J.1–J.5   3600 s       600 s
K.1–K.5   7200 s       1200 s
L.1–L.5   7200 s       1200 s
M.1–M.5   18,000 s     3000 s
N.1–N.5   18,000 s     3000 s
RAIL507   3600 s       600 s
RAIL516   3600 s       600 s
RAIL582   3600 s       600 s
RAIL2536  18,000 s     3000 s
RAIL2586  18,000 s     3000 s
RAIL4284  18,000 s     3000 s
RAIL4872  18,000 s     3000 s

We first compared the proposed algorithm with different evaluation schemes of variables: (i) the Lagrangian cost $\tilde{c}_j(u)$, (ii) the normalized Lagrangian score $\rho_j$, and (iii) the pseudo-Lagrangian score $\phi_j$. We also tested the proposed algorithm without the size reduction mechanism. Table 4 shows the average objective values of the proposed algorithm with different evaluation schemes of variables for each class of SMCP-GUB instances. The third column "$z_{LP}$" shows the optimal values of the LP relaxation, and the fourth column "w/o" shows the proposed algorithm without the size reduction mechanism. The fifth, sixth, and seventh columns show the results of the proposed algorithm with the different evaluation schemes. The best upper bounds among the compared settings are highlighted in bold. The last row shows the average relative gaps $\frac{z(x) - z_{best}}{z(x)} \times 100$ (%) of the compared algorithms, where $z_{best}$ is the best upper bound among those obtained by all algorithms in this paper. Table 5 shows the average size $\frac{|C|}{|N|} \times 100$ (%) of the core problem $C$ in the proposed algorithm for each class of SMCP-GUB instances. The detailed computational results are shown in the online supplement. We observe that the proposed algorithm with the Lagrangian cost $\tilde{c}_j(u)$ performs much worse than the algorithm without the size reduction mechanism for types 1 and 2, which indicates that the Lagrangian cost $\tilde{c}_j(u)$ does not evaluate the promising variables properly for these instances. The proposed algorithm with the normalized Lagrangian score $\rho_j$ performs much better than the algorithm with the Lagrangian cost $\tilde{c}_j(u)$ for types 1 and 2, while they show almost the same performance for types 3 and 4. This is because the normalized Lagrangian score $\rho_j$ takes almost the same value as the Lagrangian cost $\tilde{c}_j(u)$ for types 3 and 4.
We also observed that the proposed algorithm with the pseudo-Lagrangian score $\phi_j$ performs better than the algorithm with the normalized Lagrangian score $\rho_j$ for most of the tested instances. These observations indicate that the proposed algorithm with the pseudo-Lagrangian score $\phi_j$ succeeds in selecting a small number of promising variables properly even for SMCP-GUB instances having hard GUB constraints. Table 6 shows the average objective values of the proposed algorithm for each class of SCP instances, in which we omit the results for the normalized Lagrangian score $\rho_j$ because it takes exactly the same value as the Lagrangian cost $\tilde{c}_j(u)$ for SCP instances. Table 7 shows the average size $\frac{|C|}{|N|} \times 100$ (%) of the core problem $C$ in the proposed algorithm for each class of


Table 4: Computational results of the proposed algorithm with different evaluation schemes of variables for SMCP-GUB

Type   Instance  zLP      w/o     Score c̃j(u)  Score ρj  Score φj
Type1  G.1–G.5   1715.95  2357.4  2344.8       2344.8    2319.4
       H.1–H.5   395.62   603.6   602.6        601.2     604.6
       I.1–I.5   2845.47  3913.8  5810.2       3932.8    3800.4
       J.1–J.5   1466.03  2028.4  4194.8       2011.4    1923.0
       K.1–K.5   5668.91  7996.8  11873.8      8083.8    7691.8
       L.1–L.5   2959.29  4254.4  8486.8       4156.0    3999.4
       M.1–M.5   5476.58  8672.0  19035.2      8343.0    7762.0
       N.1–N.5   4780.91  8101.0  20613.8      7970.8    7010.2
Type2  G.1–G.5   1508.86  1902.4  1900.2       1900.6    1902.8
       H.1–H.5   367.76   508.6   504.6        502.8     502.6
       I.1–I.5   2702.67  3549.0  5075.8       3550.4    3466.0
       J.1–J.5   1393.72  1875.8  3471.4       1834.6    1791.4
       K.1–K.5   5394.32  7432.2  11496.0      7375.6    7053.6
       L.1–L.5   2820.97  3949.4  7419.6       3830.8    3654.4
       M.1–M.5   5244.54  7998.6  16781.4      7647.8    7187.4
       N.1–N.5   4619.25  7694.6  19552.4      7010.2    6638.6
Type3  G.1–G.5   708.03   765.0   762.2        762.2     762.2
       H.1–H.5   190.16   212.0   211.2        211.2     211.6
       I.1–I.5   934.96   1125.6  1115.2       1112.8    1103.8
       J.1–J.5   547.93   652.4   640.8        641.4     639.8
       K.1–K.5   1889.22  2310.2  2274.2       2263.2    2248.8
       L.1–L.5   1104.29  1339.2  1306.8       1312.0    1300.6
       M.1–M.5   2083.85  2708.2  2572.8       2563.8    2542.6
       N.1–N.5   1747.88  2441.4  2347.8       2308.6    2225.8
Type4  G.1–G.5   691.14   732.8   736.4        730.2     730.8
       H.1–H.5   187.71   204.4   203.6        203.6     203.2
       I.1–I.5   917.48   1074.8  1060.8       1064.2    1059.4
       J.1–J.5   539.06   625.0   615.4        615.4     611.2
       K.1–K.5   1853.17  2217.0  2166.8       2174.0    2161.8
       L.1–L.5   1088.47  1286.0  1257.4       1257.8    1250.8
       M.1–M.5   2051.53  2582.6  2484.0       2492.4    2453.8
       N.1–N.5   1724.53  2337.6  2240.6       2233.4    2167.4
Avg. gap                  4.69%   19.13%       2.84%     0.56%

SCP instances. The detailed computational results are shown in the online supplement. We observe that the proposed algorithm with the Lagrangian cost $\tilde{c}_j(u)$ and the pseudo-Lagrangian score $\phi_j$ performs better than the algorithm without the size reduction mechanism, and the algorithm with the Lagrangian cost $\tilde{c}_j(u)$ performs best for the RAIL instances. These observations indicate that the proposed algorithm with the pseudo-Lagrangian score $\phi_j$ succeeds in selecting a small number of promising variables properly for SCP instances as well.

We next compared variations of the proposed algorithm obtained by applying one of the following three modifications: (i) replace the 2-flip neighborhood local search with the 1-flip neighborhood local search algorithm (i.e., apply only Steps 1 and 2 in Algorithm 2-FNLS), (ii) exclude the path relinking method (i.e., exclude Algorithm PRL from the proposed algorithm), and (iii) replace the randomized greedy algorithm with the uniformly random selection of $j \in I_1^{\uparrow}(x)$, where we tested all three variations with the pseudo-Lagrangian score $\phi_j$. Tables 8 and 9 show the average objective values of the three variations of the proposed algorithm for SMCP-GUB and SCP instances. The columns "1-FNLS", "No-PRL" and "No-GR" show the results of the proposed algorithm with the above modifications (i), (ii) and (iii), respectively. The last col-

Table 5: The size of the core problem in the proposed algorithm with different evaluation schemes of variables for SMCP-GUB

Type   Instance  Score c̃j(u)  Score ρj  Score φj
Type1  G.1–G.5   20.51%       20.37%    24.84%
       H.1–H.5   10.14%       10.12%    12.53%
       I.1–I.5   5.75%        5.41%     6.67%
       J.1–J.5   2.79%        2.68%     3.30%
       K.1–K.5   5.79%        5.49%     6.65%
       L.1–L.5   2.86%        2.73%     3.34%
       M.1–M.5   2.49%        2.45%     2.92%
       N.1–N.5   1.19%        1.19%     1.40%
Type2  G.1–G.5   18.41%       18.45%    21.90%
       H.1–H.5   9.32%        9.26%     11.11%
       I.1–I.5   5.52%        5.17%     6.30%
       J.1–J.5   2.74%        2.58%     3.16%
       K.1–K.5   5.53%        5.26%     6.31%
       L.1–L.5   2.76%        2.64%     3.16%
       M.1–M.5   2.41%        2.35%     2.75%
       N.1–N.5   1.15%        1.12%     1.33%
Type3  G.1–G.5   23.48%       23.48%    25.75%
       H.1–H.5   11.65%       11.65%    12.72%
       I.1–I.5   6.07%        6.05%     7.18%
       J.1–J.5   3.04%        3.04%     3.56%
       K.1–K.5   6.14%        6.12%     7.18%
       L.1–L.5   3.06%        3.07%     3.57%
       M.1–M.5   2.65%        2.65%     3.07%
       N.1–N.5   1.26%        1.25%     1.45%
Type4  G.1–G.5   23.23%       23.22%    25.18%
       H.1–H.5   11.55%       11.55%    12.29%
       I.1–I.5   5.95%        5.97%     7.00%
       J.1–J.5   3.03%        3.03%     3.54%
       K.1–K.5   6.04%        6.03%     7.07%
       L.1–L.5   3.02%        3.03%     3.51%
       M.1–M.5   2.62%        2.63%     3.01%
       N.1–N.5   1.24%        1.23%     1.40%

umn "Proposed" shows the results of the proposed algorithm (without such modifications). The detailed computational results are shown in the online supplement. Comparing the columns "1-FNLS" and "Proposed", we observe that the 2-flip neighborhood local search algorithm performs much better than the 1-flip neighborhood local search algorithm for SMCP-GUB instances, while they show almost the same performance for SCP instances. The proposed algorithm performs better on average than the variant excluding the path relinking method and the variant replacing the randomized greedy algorithm with the uniformly random selection of $j \in I_1^{\uparrow}(x)$, for both SMCP-GUB and SCP instances, although the performance depends on the type of instance and the differences are small. These observations indicate that the procedures for generating initial solutions have less influence on performance than the evaluation schemes of variables and the neighborhood search procedures.

We finally compared the proposed algorithm with the above-mentioned recent solvers, where we tested the proposed algorithm with the pseudo-Lagrangian score $\phi_j$. Tables 10 and 11 show the average objective values of the compared algorithms for each class of SMCP-GUB and SCP instances, respectively, where the results marked with asterisks "∗" indicate that the obtained feasible solutions were proven to be optimal. The detailed computational results are shown in the online

Table 6: Computational results of the proposed algorithm with different evaluation schemes of variables for SCP

Instance  zLP      w/o     Score c̃j(u)  Score φj
G.1–G.5   149.48   166.4   166.4        166.4
H.1–H.5   45.67    59.6    59.6         59.6
I.1–I.5   138.96   160.2   159.0        158.8
J.1–J.5   104.78   132.6   130.8        130.6
K.1–K.5   276.66   321.8   319.2        318.2
L.1–L.5   209.33   268.2   263.0        264.0
M.1–M.5   415.77   575.6   565.2        565.8
N.1–N.5   348.92   534.2   516.4        518.2
RAIL507   172.14   179     175          178
RAIL516   182.00   183     182          183
RAIL582   209.71   217     212          216
RAIL2536  688.39   717     693          715
RAIL2586  935.92   1010    965          988
RAIL4284  1054.05  1121    1080         1127
RAIL4872  1509.63  1618    1564         1591
Avg. gap           2.74%   1.28%        1.63%

supplement. We observe that the proposed algorithm performs better than CPLEX12.6, Gurobi5.6.2 and LocalSolver3.1 for all types of SMCP-GUB instances and for classes G–N of SCP instances, while it performs worse than CPLEX12.6 and Gurobi5.6.2 for the RAIL class of SCP instances. We also observe that the 3-flip neighborhood local search algorithm [8] achieves the best upper bounds for almost all classes of SCP instances. These observations indicate that variable fixing and pricing techniques based on the LP and/or Lagrangian relaxations are greatly affected by the gaps between lower and upper bounds, and they may not work effectively for instances having large gaps. For such instances, the proposed algorithm with the pseudo-Lagrangian score $\phi_j$ succeeds in evaluating the promising variables properly.
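The "Avg. gap" rows reported in the tables above can be reproduced from the objective values. A small sketch of the computation (the function names are ours):

```python
def relative_gap(z, z_best):
    """Relative gap (z(x) - z_best) / z(x) * 100 (%), where z_best is the
    best known upper bound for the instance."""
    return (z - z_best) / z * 100.0

def average_gap(pairs):
    """Average relative gap over a list of (z, z_best) pairs, as in the
    'Avg. gap' rows of the result tables."""
    return sum(relative_gap(z, zb) for z, zb in pairs) / len(pairs)
```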

9 Conclusion

In this paper, we considered an extension of the set covering problem (SCP) called the set multicover problem with generalized upper bound constraints (SMCP-GUB). We developed a 2-flip neighborhood local search algorithm incorporating heuristic algorithms that reduce the size of instances by evaluating promising variables with GUB constraints taken into account, and an adaptive control mechanism of penalty weights that guides the search to visit feasible and infeasible regions alternately. We also developed an efficient implementation of the 2-flip neighborhood search that reduces the number of candidates in the neighborhood without sacrificing the solution quality. To guide the search to visit a wide variety of good solutions, we also introduced an evolutionary approach called the path relinking method that generates new solutions by combining two or more solutions obtained so far. According to computational comparison on benchmark instances, we can conclude that the proposed method succeeds in selecting a small number of promising variables properly and performs quite effectively even for large-scale instances having hard GUB constraints. We expect that the evaluation scheme of promising variables is also applicable to other combinatorial optimization problems, because the pseudo-Lagrangian score $\phi_j = \tilde{c}_j(w)$ ($j \in N$) can be defined whenever an adaptive control mechanism of penalty weights $w$ is incorporated into local search, and such an approach has been observed to be effective for many problems.

Table 7: The size of the core problem in the proposed algorithm with different evaluation schemes of variables for SCP

Instance  Score c̃j(u)  Score φj
G.1–G.5   10.36%       10.36%
H.1–H.5   5.17%        5.25%
I.1–I.5   2.93%        2.93%
J.1–J.5   1.31%        1.32%
K.1–K.5   2.96%        2.94%
L.1–L.5   1.31%        1.33%
M.1–M.5   1.12%        1.14%
N.1–N.5   0.52%        0.53%
RAIL507   1.98%        2.18%
RAIL516   3.46%        3.84%
RAIL582   3.03%        3.25%
RAIL2536  0.43%        0.50%
RAIL2586  0.71%        0.82%
RAIL4284  0.71%        0.81%
RAIL4872  1.16%        1.34%

Acknowledgment This work was supported by the Grants-in-Aid for Scientific Research (JP26282085, JP15H02969, JP15K12460).

References

[1] Umetani S, Arakawa M, Yagiura M. A heuristic algorithm for the set multicover problem with generalized upper bound constraints. In: Proceedings of Learning and Intelligent Optimization Conference (LION). Berlin: Springer; 2013. p. 75–80.
[2] Caprara A, Toth P, Fischetti M. Algorithms for the set covering problem. Ann Oper Res 2000; 98: 353–71.
[3] Umetani S, Yagiura M. Relaxation heuristics for the set covering problem. J Oper Res Soc Jpn 2007; 50: 350–75.
[4] Beasley JE. A Lagrangian heuristic for set-covering problems. Nav Res Logist 1990; 37: 151–64.
[5] Caprara A, Fischetti M, Toth P. A heuristic method for the set covering problem. Oper Res 1999; 47: 730–43.
[6] Caserta M. Tabu search-based metaheuristic algorithm for large-scale set covering problems. In: Gutjahr WJ, Hartl RF, Reimann M, editors. Metaheuristics: Progress in Complex Systems Optimization. Berlin: Springer; 2007. p. 43–63.
[7] Ceria S, Nobili P, Sassano A. A Lagrangian-based heuristic for large-scale set covering problems. Math Program 1998; 81: 215–88.
[8] Yagiura M, Kishida M, Ibaraki T. A 3-flip neighborhood local search for the set covering problem. Eur J Oper Res 2006; 172: 472–99.


Table 8: Computational results of variations of the proposed algorithm for SMCP-GUB

Type   Instance  zLP      1-FNLS  No-PRL  No-GR   Proposed
Type1  G.1–G.5   1715.95  2354.2  2330.2  2342.6  2319.4
       H.1–H.5   395.62   608.8   601.6   604.0   604.6
       I.1–I.5   2845.47  3850.2  3784.2  3778.8  3800.4
       J.1–J.5   1466.03  1973.8  1960.0  1937.0  1923.0
       K.1–K.5   5668.91  7744.2  7692.0  7713.0  7691.8
       L.1–L.5   2959.29  4063.6  3982.0  3988.0  3999.4
       M.1–M.5   5476.58  7949.6  7685.6  7756.4  7762.0
       N.1–N.5   4780.91  7312.4  6914.2  7065.2  7010.2
Type2  G.1–G.5   1508.86  1894.2  1913.2  1885.8  1902.8
       H.1–H.5   367.76   506.4   503.0   499.8   502.6
       I.1–I.5   2702.67  3504.8  3479.8  3452.6  3466.0
       J.1–J.5   1393.72  1831.2  1793.6  1782.8  1791.4
       K.1–K.5   5394.32  7262.2  7062.6  7044.0  7053.6
       L.1–L.5   2820.97  3798.0  3654.8  3659.2  3654.4
       M.1–M.5   5244.54  7400.6  7139.4  7219.4  7187.4
       N.1–N.5   4619.25  6893.2  6538.4  6635.4  6638.6
Type3  G.1–G.5   708.03   761.0   764.4   763.6   762.2
       H.1–H.5   190.16   211.8   211.4   211.4   211.6
       I.1–I.5   934.96   1107.4  1112.0  1104.0  1103.8
       J.1–J.5   547.93   646.4   641.2   640.6   639.8
       K.1–K.5   1889.22  2280.8  2262.4  2255.0  2248.8
       L.1–L.5   1104.29  1314.0  1306.6  1304.8  1300.6
       M.1–M.5   2083.85  2572.2  2541.8  2546.0  2542.6
       N.1–N.5   1747.88  2272.8  2229.6  2241.8  2225.8
Type4  G.1–G.5   691.14   730.8   732.4   730.8   730.8
       H.1–H.5   187.71   204.6   203.8   204.2   203.2
       I.1–I.5   917.48   1064.6  1059.4  1057.2  1059.4
       J.1–J.5   539.06   615.8   616.6   612.2   611.2
       K.1–K.5   1853.17  2188.0  2162.0  2160.4  2161.8
       L.1–L.5   1088.47  1271.4  1257.2  1253.8  1250.8
       M.1–M.5   2051.53  2485.0  2447.6  2457.0  2453.8
       N.1–N.5   1724.53  2202.4  2147.2  2178.0  2167.4
Avg. gap                  1.98%   0.58%   0.64%   0.56%

[9] Hashimoto H, Ezaki Y, Yagiura M, Nonobe K, Ibaraki T, Løkketangen A. A set covering approach for the pickup and delivery problem with general constraints on each route. Pac J Optim 2009; 5: 185–202.
[10] Boros E, Ibaraki T, Ichikawa H, Nonobe K, Uno T, Yagiura M. Heuristic approaches to the capacitated square covering problem. Pac J Optim 2005; 1: 465–90.
[11] Farahani RZ, Asgari N, Heidari N, Hosseininia M, Goh M. Covering problems in facility location: A review. Comput Ind Eng 2012; 62: 368–407.
[12] Boros E, Hammer PL, Ibaraki T, Kogan A, Mayorz E, Muchnik I. An implementation of logical analysis of data. IEEE Trans Knowl Data Eng 2000; 12: 292–306.
[13] Bettinelli A, Ceselli A, Righini G. A branch-and-price algorithm for the multi-depot heterogeneous-fleet pickup and delivery problem with soft time windows. Math Program Comput 2014; 6: 171–97.


Table 9: Computational results of variations of the proposed algorithm for SCP

Instance  zLP      1-FNLS  No-PRL  No-GR   Proposed
G.1–G.5   149.48   166.4   166.6   166.6   166.4
H.1–H.5   45.67    59.6    59.6    59.6    59.6
I.1–I.5   138.96   159.0   159.2   158.8   158.8
J.1–J.5   104.78   130.4   130.8   130.0   130.6
K.1–K.5   276.66   318.6   320.0   318.2   318.2
L.1–L.5   209.33   263.4   264.4   264.2   264.0
M.1–M.5   415.77   564.4   567.0   565.8   565.8
N.1–N.5   348.92   520.6   517.0   520.4   518.2
RAIL507   172.14   179     176     181     178
RAIL516   182.00   183     183     183     183
RAIL582   209.71   218     212     214     216
RAIL2536  688.39   728     711     712     715
RAIL2586  935.92   1000    979     991     988
RAIL4284  1054.05  1135    1109    1135    1127
RAIL4872  1509.63  1611    1579    1599    1591
Avg. gap           1.77%   1.62%   1.68%   1.63%

[14] Choi E, Tcha DW. A column generation approach to the heterogeneous fleet vehicle routing problem. Comput Oper Res 2007; 34: 2080–95.
[15] Kohl N, Karisch S. Airline crew rostering: Problem types, modeling, and optimization. Ann Oper Res 2004; 127: 223–57.
[16] Caprara A, Monaci M, Toth P. Models and algorithms for a staff scheduling problem. Math Program 2003; B98: 445–76.
[17] Ikegami A, Niwa A. A subproblem-centric model and approach to the nurse scheduling problem. Math Program 2003; B97: 517–41.
[18] Hammer PL, Bonates TO. Logical analysis of data — An overview: From combinatorial optimization to medical applications. Ann Oper Res 2006; 148: 203–25.
[19] Pessoa LS, Resende MGC, Ribeiro CC. A hybrid Lagrangean heuristic with GRASP and path-relinking for set k-covering. Comput Oper Res 2013; 40: 3132–46.
[20] Vazirani VV. Approximation Algorithms. Berlin: Springer; 2001.
[21] Glover F, Laguna M. Tabu Search. Massachusetts: Kluwer Academic Publishers; 1997.
[22] Bixby RE, Gregory JW, Lustig IJ, Marsten RE, Shanno DF. Very large-scale linear programming: A case study in combining interior point and simplex methods. Oper Res 1992; 40: 885–97.
[23] Yagiura M, Ibaraki T. Analyses on the 2 and 3-flip neighborhoods for the MAX SAT. J Comb Optim 1999; 3: 95–114.
[24] Yagiura M, Ibaraki T. Efficient 2 and 3-flip neighborhood search algorithms for the MAX SAT: Experimental evaluation. J Heuristics 2001; 7: 423–42.
[25] Nonobe K, Ibaraki T. An improved tabu search method for the weighted constraint satisfaction problem. INFOR 2001; 39: 131–51.


Table 10: Computational results of the tested solvers and the proposed algorithm for SMCP-GUB

Type   Instance  zLP      CPLEX    Gurobi   LocalSolver  Proposed
Type1  G.1–G.5   1715.95  2590.8   2629.6   4096.2       2319.4
       H.1–H.5   395.62   680.8    705.4    1217.2       604.6
       I.1–I.5   2845.47  4757.2   4544.6   7850.8       3800.4
       J.1–J.5   1466.03  2656.4   2329.6   4744.6       1923.0
       K.1–K.5   5668.91  12363.6  9284.4   16255.0      7691.8
       L.1–L.5   2959.29  7607.4   4749.2   9940.6       3999.4
       M.1–M.5   5476.58  16211.0  10152.2  21564.8      7762.0
       N.1–N.5   4780.91  16970.8  8966.2   21618.8      7010.2
Type2  G.1–G.5   1508.86  2053.8   2023.8   3327.0       1902.8
       H.1–H.5   367.76   555.2    560.0    915.4        502.6
       I.1–I.5   2702.67  4382.0   3939.6   6514.4       3466.0
       J.1–J.5   1393.72  3156.2   2132.6   3837.6       1791.4
       K.1–K.5   5394.32  15763.6  7952.2   13647.0      7053.6
       L.1–L.5   2820.97  15683.0  4338.4   8117.4       3654.4
       M.1–M.5   5244.54  36794.8  8600.8   17614.4      7187.4
       N.1–N.5   4619.25  36970.0  8154.4   17732.6      6638.6
Type3  G.1–G.5   708.03   771.0    769.8    1585.4       762.2
       H.1–H.5   190.16   214.6    216.4    554.8        211.6
       I.1–I.5   934.96   1223.6   1176.2   2326.4       1103.8
       J.1–J.5   547.93   707.4    669.2    1802.8       639.8
       K.1–K.5   1889.22  3802.8   2530.6   4788.8       2248.8
       L.1–L.5   1104.29  3318.2   1384.6   4294.4       1300.6
       M.1–M.5   2083.85  8404.4   2691.6   9575.8       2542.6
       N.1–N.5   1747.88  8394.0   2597.6   11805.4      2225.8
Type4  G.1–G.5   691.14   735.2    731.8    1495.0       730.8
       H.1–H.5   187.71   206.8    207.2    558.8        203.2
       I.1–I.5   917.48   1150.0   1089.6   2109.0       1059.4
       J.1–J.5   539.06   644.4    629.8    1698.4       611.2
       K.1–K.5   1853.17  2734.2   2231.6   4654.4       2161.8
       L.1–L.5   1088.47  1566.0   1299.6   3809.6       1250.8
       M.1–M.5   2051.53  7572.6   2547.2   9909.4       2453.8
       N.1–N.5   1724.53  8371.8   2339.8   9906.8       2167.4
Avg. gap                  34.12%   10.44%   58.37%       0.56%

[26] Yagiura M, Ibaraki T, Glover F. An ejection chain approach for the generalized assignment problem. INFORMS J Comput 2004; 16: 133–51.
[27] Selman B, Kautz H. Domain-independent extensions to GSAT: Solving large structured satisfiability problems. In: Proceedings of International Joint Conference on Artificial Intelligence (IJCAI). 1993. p. 291–95.
[28] Thornton J. Clause weighting local search for SAT. J Autom Reason 2005; 35: 97–142.
[29] Umetani S. Random test instance generator for SCP, https://sites.google.com/site/shunjiumetani/benchmark, 2015.
[30] Beasley JE. OR-Library: Distributing test problems by electronic mail. J Oper Res Soc 1990; 41: 1069–72.


Table 11: Computational results of the tested solvers and the proposed algorithm for SCP

Instance  zLP      CPLEX  Gurobi  LocalSolver  Yagiura et al.  Proposed
G.1–G.5   149.48   166.6  166.6   316.6        166.4           166.4
H.1–H.5   45.67    60.2   60.2    169.2        59.6            59.6
I.1–I.5   138.96   162.4  162.4   403.4        158.0           158.8
J.1–J.5   104.78   137.8  135.2   335.8        129.0           130.6
K.1–K.5   276.66   325.8  325.2   828.2        313.2           318.2
L.1–L.5   209.33   285.4  273.6   918.6        258.6           264.0
M.1–M.5   415.77   637.2  609.8   2163.8       550.2           565.8
N.1–N.5   348.92   753.0  577.4   2382.2       503.8           518.2
RAIL507   172.14   ∗174   ∗174    187          174             178
RAIL516   182.00   ∗182   ∗182    189          182             183
RAIL582   209.71   ∗211   ∗211    227          211             216
RAIL2536  688.39   ∗689   ∗689    729          691             715
RAIL2586  935.92   957    970     1012         947             988
RAIL4284  1054.05  1077   1085    1148         1064            1127
RAIL4872  1509.63  1549   1562    1651         1531            1591
Avg. gap           7.53%  4.39%   55.69%       0.01%           1.63%

A Proof of Lemma 1

We show that $\Delta\hat{z}_{j_1,j_2}(x, w) \ge 0$ holds if $x_{j_1} = x_{j_2}$. First, we consider the case with $x_{j_1} = x_{j_2} = 1$. By the assumption of the lemma,
$$\Delta\hat{z}_j^{\downarrow}(x, w) = -c_j + \sum_{i \in (M_L(x) \cup M_E(x)) \cap S_j} w_i \ge 0 \tag{14}$$
holds for both $j = j_1$ and $j_2$. Then we have
$$\Delta\hat{z}_{j_1,j_2}(x, w) = \Delta\hat{z}_{j_1}^{\downarrow}(x, w) + \Delta\hat{z}_{j_2}^{\downarrow}(x, w) + \sum_{i \in M_+(x) \cap S_{j_1} \cap S_{j_2}} w_i \ge 0, \tag{15}$$
where $M_+(x) = \{i \in M \mid \sum_{j \in N} a_{ij} x_j = b_i + 1\}$. Next, we consider the case with $x_{j_1} = x_{j_2} = 0$. By the assumption of the lemma, for both $j = j_1$ and $j_2$,
$$\Delta\hat{z}_j^{\uparrow}(x, w) = c_j - \sum_{i \in M_L(x) \cap S_j} w_i \ge 0 \tag{16}$$
holds unless $j$ belongs to a block $G_h$ satisfying $\sum_{l \in G_h} x_l = d_h$. Then we have
$$\Delta\hat{z}_{j_1,j_2}(x, w) = \Delta\hat{z}_{j_1}^{\uparrow}(x, w) + \Delta\hat{z}_{j_2}^{\uparrow}(x, w) + \sum_{i \in M_-(x) \cap S_{j_1} \cap S_{j_2}} w_i \ge 0, \tag{17}$$
where $M_-(x) = \{i \in M \mid \sum_{j \in N} a_{ij} x_j = b_i - 1\}$.

B Proof of Lemma 2

We assume that neither condition (i) nor (ii) is satisfied and that $\Delta\hat{z}_{j_1,j_2}(x, w) < 0$ holds. Since condition (ii) is assumed to be unsatisfied, $M_E(x) \cap S_{j_1} \cap S_{j_2} = \emptyset$ holds, and hence we have $\Delta\hat{z}_{j_1,j_2}(x, w) = \Delta\hat{z}_{j_1}^{\downarrow}(x, w) + \Delta\hat{z}_{j_2}^{\uparrow}(x, w) < 0$, which implies $\Delta\hat{z}_{j_1}^{\downarrow}(x, w) < 0$ or $\Delta\hat{z}_{j_2}^{\uparrow}(x, w) < 0$. If $\Delta\hat{z}_{j_1}^{\downarrow}(x, w) < 0$ holds, then the algorithm obtains an improved solution by flipping $x_{j_1} = 1 \to 0$. If $\Delta\hat{z}_{j_2}^{\uparrow}(x, w) < 0$ holds, the algorithm obtains an improved solution by flipping $x_{j_2} = 0 \to 1$, because $j_1$ and $j_2$ either belong to the same block satisfying $\sum_{j \in G_h} x_j < d_h$ or to different blocks, and in the latter case $j_2$ belongs to a block $G_h$ satisfying $\sum_{j \in G_h} x_j < d_h$ by the assumption that we only consider solutions that satisfy the GUB constraints, including the solution obtained after flipping $j_1$ and $j_2$. Both cases contradict the assumption that $x$ is locally optimal with respect to $NB_1(x)$.

C Proof of Lemma 3

We have $\Delta\hat{z}_{j_1,j_2}(x, w) = \Delta\hat{z}_{j_1}^{\downarrow}(x, w) + \Delta\hat{z}_{j_2}^{\uparrow}(x, w)$, because $M_E(x) \cap S_{j_1} \cap S_{j_2} = \emptyset$ holds. Then we have $\min_{j \in G_h} \Delta\hat{z}_j^{\downarrow}(x, w) + \min_{j \in G_h} \Delta\hat{z}_j^{\uparrow}(x, w) \le \Delta\hat{z}_{j_1}^{\downarrow}(x, w) + \Delta\hat{z}_{j_2}^{\uparrow}(x, w) = \Delta\hat{z}_{j_1,j_2}(x, w) < 0$, because $\min_{j \in G_h} \Delta\hat{z}_j^{\downarrow}(x, w) \le \Delta\hat{z}_{j_1}^{\downarrow}(x, w)$ and $\min_{j \in G_h} \Delta\hat{z}_j^{\uparrow}(x, w) \le \Delta\hat{z}_{j_2}^{\uparrow}(x, w)$ hold. Let $j_1^* = \arg\min_{j \in G_h} \Delta\hat{z}_j^{\downarrow}(x, w)$ and $j_2^* = \arg\min_{j \in G_h} \Delta\hat{z}_j^{\uparrow}(x, w)$. Then we have $\Delta\hat{z}_{j_1^*,j_2^*}(x, w) = \Delta\hat{z}_{j_1^*}^{\downarrow}(x, w) + \Delta\hat{z}_{j_2^*}^{\uparrow}(x, w) - \sum_{i \in M_E(x) \cap S_{j_1^*} \cap S_{j_2^*}} w_i \le \Delta\hat{z}_{j_1}^{\downarrow}(x, w) + \Delta\hat{z}_{j_2}^{\uparrow}(x, w) < 0$.


D Efficient incremental evaluation of solutions in 2-FNLS

We first consider the case in which the current solution $x$ moves to $x'$ by flipping $x_j = 0 \to 1$. The algorithm first updates $s_i(x)$ ($i \in S_j$) in $O(|S_j|)$ time by $s_i(x') \leftarrow s_i(x) + 1$, and then updates $\Delta p_l^{\uparrow}(x, w)$ and $\Delta p_l^{\downarrow}(x, w)$ ($l \in N_i$, $i \in S_j$) in $O(\sum_{i \in S_j} |N_i|)$ time by $\Delta p_l^{\uparrow}(x', w) \leftarrow \Delta p_l^{\uparrow}(x, w) - w_i$ if $s_i(x') = b_i$ holds and $\Delta p_l^{\downarrow}(x', w) \leftarrow \Delta p_l^{\downarrow}(x, w) - w_i$ if $s_i(x') = b_i + 1$ holds. Similarly, we consider the other case, in which the current solution $x$ moves to $x'$ by flipping $x_j = 1 \to 0$. The algorithm first updates $s_i(x)$ ($i \in S_j$) in $O(|S_j|)$ time by $s_i(x') \leftarrow s_i(x) - 1$, and then updates $\Delta p_l^{\uparrow}(x, w)$ and $\Delta p_l^{\downarrow}(x, w)$ ($l \in N_i$, $i \in S_j$) in $O(\sum_{i \in S_j} |N_i|)$ time by $\Delta p_l^{\uparrow}(x', w) \leftarrow \Delta p_l^{\uparrow}(x, w) + w_i$ if $s_i(x') = b_i - 1$ holds and $\Delta p_l^{\downarrow}(x', w) \leftarrow \Delta p_l^{\downarrow}(x, w) + w_i$ if $s_i(x') = b_i$ holds.
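The update rules above can be sketched directly. In this minimal illustration (variable names `s`, `dp_up`, `dp_down` are ours), `S[j]` is the set of rows covered by column $j$ and `N[i]` the set of columns covering row $i$; `s` holds the cover counts $s_i(x)$ and `dp_up`/`dp_down` the increments $\Delta p_l^{\uparrow}$/$\Delta p_l^{\downarrow}$.

```python
def flip(j, x, s, S, N, b, w, dp_up, dp_down):
    """Flip x[j] and incrementally update the cover counts s[i] and the
    penalty increments dp_up[l], dp_down[l] for all l in N[i], i in S[j],
    following the rules of Appendix D."""
    if x[j] == 0:                      # flip x_j = 0 -> 1
        x[j] = 1
        for i in S[j]:
            s[i] += 1
            for l in N[i]:
                if s[i] == b[i]:       # row i just became satisfied
                    dp_up[l] -= w[i]
                if s[i] == b[i] + 1:   # row i just became oversatisfied
                    dp_down[l] -= w[i]
    else:                              # flip x_j = 1 -> 0
        x[j] = 0
        for i in S[j]:
            s[i] -= 1
            for l in N[i]:
                if s[i] == b[i] - 1:   # row i just became unsatisfied
                    dp_up[l] += w[i]
                if s[i] == b[i]:       # row i just stopped being oversatisfied
                    dp_down[l] += w[i]
```

Each call touches only the rows in $S_j$ and the columns covering them, which is exactly the $O(\sum_{i \in S_j} |N_i|)$ cost stated above.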


specific development methodology for Music Education software, we ... interchange between the members of the development team, which come from different.

Some Useful Heuristics
the oddity of natural occurrences to use of sophisticated quantitative data analyses in ... HEURISTICS CALLING FOR COMPLEX CONCEPTUAL ANALYSIS. (MEDIATED ...... heuristics is sometimes based on superficial descriptive criteria.

Problem Set 0: Scratch - CS50 CDN
Sep 10, 2010 - Academic Honesty. All work that you do toward fulfillment of this course's expectations must be your own unless collaboration is explicitly allowed in writing by the course's instructor. Collaboration in the completion of problem sets