One-dimensional cutting stock problem with a given number of setups: A hybrid approach of metaheuristics and linear programming ∗ Shunji Umetani†

Mutsunori Yagiura‡

Toshihide Ibaraki§

Abstract The one-dimensional cutting stock problem (1D-CSP) is one of the representative combinatorial optimization problems, and arises in many industrial applications. Since the setup costs for switching between different cutting patterns have become more dominant in the recent cutting industry, we consider a variant of 1D-CSP, called the pattern restricted problem (PRP), which minimizes the number of stock rolls while constraining the number of different cutting patterns within a bound given by users. For this problem, we propose a local search algorithm that alternately uses two types of local search processes with the 1-add neighborhood and the shift neighborhood, respectively. To improve the performance of local search, we incorporate linear programming (LP) techniques to reduce the number of solutions in each neighborhood. A sensitivity analysis technique is introduced to solve a large number of associated LP problems quickly. Through computational experiments, we observe that the new algorithm obtains solutions of better quality than those obtained by other existing approaches.

Keywords: one-dimensional cutting stock problem, local search, linear programming, sensitivity analysis

1 Introduction

The one-dimensional cutting stock problem (1D-CSP) is one of the representative combinatorial optimization problems, and arises in many industries such as steel, paper, wood, glass and fiber. The problem is formulated as follows: We are given a sufficient number of stock rolls of the same length L and m piece types M = {1, 2, . . . , m}, where each piece type i has length li and demand di. A cutting plan is described in terms of variables associated with cutting patterns (or patterns). A cutting pattern is a set of pieces cut from one stock roll, described as pj = (a1j, a2j, . . . , amj), where aij ∈ Z+ (the set of nonnegative integers) is the number of pieces of type i cut by pattern pj. A cutting pattern pj is feasible if it satisfies

∑_{i∈M} aij li ≤ L.   (1)
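As a minimal illustration, constraint (1) is a single length comparison; the function name below is a hypothetical choice, and patterns are represented as plain lists of multiplicities:

```python
def is_feasible(pattern, lengths, L):
    """Check constraint (1): the total cut length must not exceed the roll length L."""
    return sum(a * l for a, l in zip(pattern, lengths)) <= L
```

For example, with lengths (3, 4) and L = 10, the pattern (2, 1) uses exactly 10 units and is feasible, while (3, 1) is not.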



∗ The final publication is available at Springer via DOI: 10.1007/s10852-005-9031-0.
† [email protected], Department of Advanced Science and Technology, Graduate School of Engineering, Toyota Technological Institute
‡ [email protected], Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University
§ [email protected], Department of Informatics, School of Science and Technology, Kwansei Gakuin University


Let P denote the set of all feasible cutting patterns. 1D-CSP asks to specify a set of patterns Π = {p1, p2, . . . , p|Π|} and their frequencies (i.e., the numbers of applications) X = {x1, x2, . . . , x|Π|} to satisfy the demands of all piece types. The primary objective in 1D-CSP is to minimize the number of stock rolls, and the standard 1D-CSP is formulated as follows:

(1D-CSP)  minimize  f(Π, X) = ∑_{pj∈Π} xj
          subject to  ∑_{pj∈Π} aij xj ≥ di,  for all i ∈ M   (2)
                      Π ⊆ P
                      xj ∈ Z+,  for all pj ∈ Π.

This problem is NP-hard, since it is a generalization of the bin packing problem (BPP), which is known to be strongly NP-hard [8]. A classical approach to 1D-CSP is to treat it as an integer linear programming (IP) problem. Since the number of all feasible cutting patterns |P| is huge in practice, Gilmore and Gomory [10, 11] proposed an ingenious column generation method, which generates only those cutting patterns needed to improve the optimal value of the LP relaxation, by solving an associated bounded knapsack problem. Based on this, a number of branch-and-price algorithms have been developed with considerable computational success [1, 4, 5, 18, 19]. In recent years, the setup costs for changing cutting patterns have become more dominant, and it is often impractical to use many different cutting patterns. However, the above branch-and-price algorithms tend to produce solutions in which the number of different cutting patterns is close to the number of piece types m. Therefore, several types of algorithms have been developed to reduce the number of different cutting patterns. Haessler [13, 14] proposed a pattern generating heuristic algorithm called the sequential heuristic procedure (SHP), where a cutting plan is constructed sequentially by choosing patterns that can be applied with high frequency and small trim loss (i.e., waste of the stock roll). Foerster and Wäscher [7] proposed a pattern combination heuristic algorithm called KOMBI, which starts from a solution obtained by one of the above branch-and-price algorithms, and combines two patterns into one, three into two or fewer, or four into three or fewer, while keeping the number of stock rolls the same. Vanderbeck [20] considered a variant of 1D-CSP which minimizes the number of different cutting patterns while using a given number of stock rolls or less, and called it the pattern minimization problem (PMP).
For this problem, he proposed an IP formulation that involves a huge number of binary variables and associated columns, each of which describes the selection of a specific cutting pattern with a fixed frequency. He proposed an exact branch-and-price algorithm using a column generation approach, where the subproblem is a nonlinear IP that can be decomposed into a number of bounded knapsack problems. Belov and Scheithauer [2] considered a different formulation of PMP, using fewer variables than Vanderbeck's formulation, and developed another exact branch-and-price algorithm. Burkard and Zelle [3] proposed a local search algorithm with several types of neighborhoods, which basically minimizes the number of stock rolls while using the number of different patterns as a secondary criterion. In our previous paper [17], we considered a variant of 1D-CSP, called the pattern restricted problem (PRP), which minimizes the number of stock rolls while using n different cutting patterns or less, where n is a parameter set by users. PRP is formulated as a simple extension of the standard 1D-CSP by adding a constraint |Π| ≤ n on the number of different cutting patterns, where we assume n < m. In general, it becomes easier to find a solution with a smaller number of stock rolls as n becomes larger. In this sense, there is a trade-off between the number of different cutting patterns and the number of stock rolls, and we can obtain the trade-off curve by solving

PRP for different values of n. The main purpose of our approach is to obtain a better trade-off curve than those realized by other approaches. In this paper, we propose an improved local search algorithm with the following three features. The first is that we alternately use two types of local search processes with the 1-add neighborhood and the shift neighborhood, respectively. The 1-add neighborhood is defined to be the set of solutions obtained by increasing the number of one piece type while decreasing the numbers of some other piece types in a cutting pattern. The shift neighborhood is defined to be the set of solutions obtained by exchanging a piece in a cutting pattern with some other pieces in another cutting pattern. During the search, we need to solve the problem of computing frequencies for given cutting patterns, which we solve approximately by using its LP relaxation. As the size of the neighborhood plays a crucial role in the efficiency of local search, we utilize the dual of the LP relaxation to reduce the neighborhood size. The second is that we incorporate a sensitivity analysis technique and the criss-cross method [21], a variant of the simplex method, to solve a large number of LP relaxations quickly. The third is that we incorporate the dual simplex method to accelerate the criss-cross method, and utilize a lower bound derived from a dual feasible solution to terminate the LP computation early.

2 Local search

The local search algorithm (LS) starts from an initial feasible solution and repeatedly replaces it with a better solution in its neighborhood until no better solution is found in the neighborhood. To implement a local search algorithm, we must consider the following ingredients: (i) how to generate an initial feasible solution, (ii) how to compute the objective values, and (iii) how to construct the neighborhood. In this section, we first explain the outline of our local search for PRP, and then explain the three ingredients. To improve the performance of the LS, we incorporate it into the iterated local search (ILS) framework [16].

2.1 Outline of local search

A solution of PRP is given by a set of cutting patterns Π = {p1, p2, . . . , pn} and their frequencies X = {x1, x2, . . . , xn}. Our LS first generates an initial feasible solution Π by a variant of the first-fit heuristic algorithm (FF) known for the bin packing problem (BPP), called the modified first-fit algorithm (MFF). It then repeatedly replaces the current set of patterns Π with a better set of patterns Π′ in its neighborhood NB(Π) under the first admissible move strategy, where we propose two different types of neighborhoods called the 1-add neighborhood N1-add and the shift neighborhood Nshift. For each set of patterns Π, to evaluate the objective value f(Π, X), our LS computes the frequencies X by a heuristic algorithm SOLVE_IP, which solves the auxiliary integer programming problem IP(Π) approximately via its LP relaxation LP(Π). To measure the improvements of solutions in the neighborhood, we employ the objective value f(Π, X̄) of the LP relaxation LP(Π) rather than that of the original problem IP(Π), where X̄ = {x̄1, x̄2, . . . , x̄n} is an optimal solution of LP(Π). That is, our LS moves from Π to Π′ if f(Π′, X̄′) < f(Π, X̄) holds, while remembering the best integer solution found so far. Using the LP solutions X̄ does not increase the computational time, since SOLVE_IP has to solve the LP relaxation LP(Π) in its execution and we obtain X̄ as a byproduct of computing the frequencies X.

Algorithm LS(NB, Π)
Input: Lengths li and demands di of all pieces i ∈ M, the length L of a stock roll, the number of different patterns n, a type of neighborhood NB, and an initial set of patterns Π.
Output: A set of cutting patterns Π∗ = {p∗1, p∗2, . . . , p∗n} and their frequencies X∗ = {x∗1, x∗2, . . . , x∗n}.

Step 1: Compute integer frequencies X = {x1, x2, . . . , xn} and continuous frequencies X̄ = {x̄1, x̄2, . . . , x̄n} for Π by applying SOLVE_IP. Set Π∗ := Π and X∗ := X.

Step 2: Select a set of patterns Π′ ∈ NB(Π) not checked yet, and compute integer frequencies X′ = {x′1, x′2, . . . , x′n} and continuous frequencies X̄′ = {x̄′1, x̄′2, . . . , x̄′n} by SOLVE_IP.

Step 3: If f(Π′, X′) < f(Π∗, X∗) holds, set Π∗ := Π′ and X∗ := X′. If f(Π′, X̄′) < f(Π, X̄) holds, set Π := Π′, X := X′ and X̄ := X̄′, and return to Step 2.

Step 4: If all feasible sets of patterns Π′ ∈ NB(Π) have been checked, output Π∗ and X∗ and halt; otherwise return to Step 2.
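The first-admissible search loop can be sketched generically in Python. Here `neighborhood` and `evaluate` are hypothetical callbacks standing in for NB(Π) and the LP-based evaluation, and the paper's separate bookkeeping of integer and continuous objectives is simplified to a single value returned by `evaluate` (None denoting infeasibility):

```python
def local_search(pi, neighborhood, evaluate):
    """First-admissible local search: move to the first improving
    neighbor, while remembering the best solution seen so far."""
    cur_val = evaluate(pi)
    best_pi, best_val = pi, cur_val
    improved = True
    while improved:
        improved = False
        for cand in neighborhood(pi):
            val = evaluate(cand)
            if val is None:
                continue  # infeasible neighbor
            if val < best_val:
                best_pi, best_val = cand, val
            if val < cur_val:
                pi, cur_val = cand, val
                improved = True
                break  # first admissible move strategy
    return best_pi, best_val
```

On a toy objective such as `evaluate = lambda x: (x - 7) ** 2` with neighbors x ± 1, the loop walks to the minimizer and stops once no neighbor improves.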

2.2 Construction of an initial solution

If there is no restriction on the number of different cutting patterns, as in the standard 1D-CSP, it is easy to construct a feasible solution. However, just finding a feasible solution is not trivial for PRP. The problem of finding a feasible solution using a given number of cutting patterns n is equivalent to the bin packing problem (BPP): Decide whether all piece types M = {1, 2, . . . , m} can be packed into n stock rolls of length L; i.e., find a set of patterns Π ⊆ P that satisfies |Π| ≤ n and ∑_{pj∈Π} aij ≥ 1 for all i ∈ M. To construct a feasible solution, we propose a heuristic algorithm based on the first-fit principle (FF) for BPP. After preparing n empty stock rolls of length L, FF sequentially assigns each piece type i to the stock roll with the lowest index among those having a residual length of at least li. We modify the basic FF so that every stock roll receives at least one piece type, and call this algorithm the modified first-fit algorithm (MFF). MFF first sorts all piece types i ∈ M in the descending order of demands di, where σ(k) denotes the k-th piece type in the resulting order, and assigns the piece types to the stock rolls in this order. In order to assign at least one piece type to every stock roll, we use an aspiration length L′ = ∑_{i∈M} li / n. If the processed length of the current stock roll exceeds L′ after assigning the piece type σ(k), MFF assigns the subsequent piece types to the next stock roll. Figure 1 shows examples of the first-fit placement and the modified first-fit placement.

Algorithm MFF
Input: Lengths li and demands di of piece types i ∈ M, the number of different patterns n, and the length L of a stock roll.
Output: n disjoint subsets M1, M2, . . . , Mn of M, or ‘failure’.

Step 1: Set Mj := ∅ for j = 1, 2, . . . , n, and L′ := ∑_{i∈M} li / n.

Step 2: Sort all piece types i ∈ M in the descending order of di, where σ(k) denotes the k-th piece type in this order. Set k := 1 and j := 1.

Step 3: If lσ(k) ≤ L − ∑_{i∈Mj} li and ∑_{i∈Mj} li ≤ L′ hold, set Mj := Mj ∪ {σ(k)}, k := k + 1 and j := 1; otherwise set j := j + 1. If k ≤ m and j ≤ n hold, return to Step 3; otherwise go to Step 4.

Step 4: If k > m holds, output M1, M2, . . . , Mn and halt; otherwise output ‘failure’ and halt.

If MFF fails to assign all piece types, we switch to the first-fit decreasing heuristic (FFD), which has better performance in finding a feasible solution for BPP. FFD first sorts all piece types i ∈ M in the descending order of length li (not di), and assigns each piece type i to the stock roll of the lowest index among those having a residual length of at least li. If this attempt also fails, we conclude ‘failure’.
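Steps 1-4 of MFF can be sketched as follows, assuming 0-based piece indices and returning None in place of ‘failure’:

```python
def mff(lengths, demands, n, L):
    """Modified first-fit: pack m piece types into n rolls of length L.

    Pieces are tried in descending order of demand, and a roll stops
    accepting pieces once its packed length exceeds the aspiration
    length L' = sum(l_i) / n, so later rolls also receive pieces."""
    m = len(lengths)
    aspiration = sum(lengths) / n
    order = sorted(range(m), key=lambda i: -demands[i])  # sigma(k)
    subsets = [[] for _ in range(n)]
    used = [0.0] * n  # packed length of each roll
    for i in order:
        # find the first roll that fits and is still below the aspiration length
        j = 0
        while j < n and not (lengths[i] <= L - used[j] and used[j] <= aspiration):
            j += 1
        if j >= n:
            return None  # 'failure'
        subsets[j].append(i)
        used[j] += lengths[i]
    return subsets
```

For instance, with lengths (5, 4, 3, 3, 2), demands (6, 5, 4, 3, 2), n = 3 and L = 9, the aspiration length is 17/3 ≈ 5.67, and each of the three rolls receives at least one piece type.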

Figure 1: An example of FF placement and MFF placement

2.3 Solving the auxiliary integer programming problem

Computing X = {x1, x2, . . . , xn} for a given set of patterns Π = {p1, p2, . . . , pn} is defined as the following integer programming problem (IP):

(IP(Π))  minimize  f(Π, X) = ∑_{j=1}^{n} xj
         subject to  ∑_{j=1}^{n} aij xj ≥ di,  for i = 1, 2, . . . , m   (3)
                     xj ∈ Z+,  for j = 1, 2, . . . , n.

Since this problem is already known to be strongly NP-hard [8], we consider finding an approximate solution X̂ = {x̂1, x̂2, . . . , x̂n} and its cost f(Π, X̂). In our previous paper [17], we proposed a heuristic algorithm SOLVE_IP which first solves the LP relaxation LP(Π) associated with IP(Π), in which the integer constraints xj ∈ Z+ are replaced with xj ≥ 0 for all j = 1, 2, . . . , n. Let X̄ = {x̄1, x̄2, . . . , x̄n} denote an optimal solution of LP(Π). SOLVE_IP starts from x̂j := ⌊x̄j⌋ for j = 1, 2, . . . , n. In order to obtain an integer solution, it sorts the variables x̂j in the descending order of their fractional parts x̄j − ⌊x̄j⌋, and rounds them up to ⌈x̄j⌉ in the resulting order until all demands di are satisfied.

Algorithm SOLVE_IP
Input: Demands di of all piece types i ∈ M, and a set of cutting patterns Π = {p1, p2, . . . , pn}.
Output: An integer vector X̂ = {x̂1, x̂2, . . . , x̂n} of frequencies of the patterns pj ∈ Π, or ‘failure’.

Step 1: Compute a continuous optimal solution X̄ = {x̄1, x̄2, . . . , x̄n} of the LP relaxation LP(Π) associated with IP(Π). If LP(Π) is infeasible, output ‘failure’ and halt.

Step 2: Set x̂j := ⌊x̄j⌋ for all j = 1, 2, . . . , n.

Step 3: Sort all variables x̂j in the descending order of their fractional parts x̄j − ⌊x̄j⌋, and let σ(k) denote the k-th variable in this order. Set k := 1.

Step 4: If all demands are satisfied (i.e., ∑_{j=1}^{n} aij x̂j ≥ di holds for all i ∈ M), output X̂ = {x̂1, x̂2, . . . , x̂n} and halt. Otherwise, if there is at least one i ∈ M such that ∑_{j=1}^{n} aij x̂j < di and aiσ(k) > 0 hold, set x̂σ(k) := ⌈x̄σ(k)⌉. Set k := k + 1 and return to Step 4.
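The rounding phase (Steps 2-4) can be sketched as follows, taking the continuous optimum x̄ as given (Step 1 can be delegated to any LP solver); `round_lp_solution` and the list-of-lists matrix representation are assumptions of this sketch:

```python
import math

def round_lp_solution(A, d, x_bar):
    """Rounding phase of SOLVE_IP: floor every LP frequency, then round
    variables up in descending order of their fractional parts until
    all demands d are met (or the order is exhausted)."""
    m, n = len(A), len(A[0])
    x_hat = [math.floor(x) for x in x_bar]

    def produced(i):
        return sum(A[i][j] * x_hat[j] for j in range(n))

    order = sorted(range(n), key=lambda j: -(x_bar[j] - math.floor(x_bar[j])))
    for j in order:
        short = [i for i in range(m) if produced(i) < d[i]]
        if not short:
            break
        # round x_hat[j] up only if pattern j can help an unmet demand
        if any(A[i][j] > 0 for i in short):
            x_hat[j] = math.ceil(x_bar[j])
    return x_hat if all(produced(i) >= d[i] for i in range(m)) else None
```

For A = [[2, 0], [0, 3]] and d = (5, 7), the LP optimum is x̄ = (2.5, 7/3); flooring gives (2, 2), and both variables are rounded up to yield the integer solution (3, 3).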

In the first execution of SOLVE_IP, we employ the revised simplex method to solve LP(Π). After this, we employ a sensitivity analysis technique and the criss-cross method [21] to speed up the computation, where we start from the optimal simplex tableau obtained in the previous execution of SOLVE_IP. We will explain the details in Section 3.

2.4 Construction of the neighborhoods

A natural definition of the neighborhood for the current set of patterns Π would be to replace one pattern pj ∈ Π with another feasible pattern p′j ∈ P \ Π. However, the number of all feasible patterns |P| grows exponentially with the number of piece types m, and most of them do not lead to an improvement. To overcome this, we propose two different types of neighborhoods called the 1-add neighborhood N1-add and the shift neighborhood Nshift, which are based on an improved column generation method. Gilmore and Gomory [10, 11] proposed the column generation method for the standard 1D-CSP defined in (2), which generates a cutting pattern p′ ∈ P \ Π necessary to improve the current solution (Π, X). Let Ȳ = {ȳ1, ȳ2, . . . , ȳm} be a dual optimal solution for the LP relaxation LP(Π), which we can obtain by solving the following LP problem, dual to LP(Π):

(DLP(Π))  maximize  g(Π, Y) = ∑_{i=1}^{m} di yi
          subject to  ∑_{i=1}^{m} aij yi ≤ 1,  for j = 1, 2, . . . , n   (4)
                      yi ≥ 0,  for i = 1, 2, . . . , m.

The dual variables ȳi are used to generate new patterns to be added to the current set of patterns Π. A new pattern p′ = (a′1, a′2, . . . , a′m) ∈ P \ Π satisfying ∑_{i=1}^{m} ȳi a′i > 1 gives a new constraint which restricts the dual feasible region of DLP(Π). Since the optimal value of an LP equals that of its dual by the duality theorem, the optimal value of LP(Π) is improved by adding such new patterns p′. Figure 2 illustrates an improvement of the optimal value of LP(Π) obtained by adding a new pattern p′.

Figure 2: The improvement of the optimal value of LP(Π) by adding a new cutting pattern p′

The Gilmore-Gomory algorithm starts from a feasible set of patterns Π, and repeatedly adds the pattern p′ maximizing ∑_{i=1}^{m} ȳi a′i until no pattern p′ satisfying ∑_{i=1}^{m} ȳi a′i > 1 is found. This may suggest that, also in our LS, a new pattern p′ = (a′1, a′2, . . . , a′m) ∈ P \ Π maximizing ∑_{i=1}^{m} ȳi a′i leads to an improvement. However, this is not the case because, in

PRP, we need to remove another pattern pj ∈ Π from the current set of patterns Π so as not to increase the number of different patterns n. Note that, since we assume that the number of patterns n is much less than the number of piece types m, all patterns pj ∈ Π are included in the basis of an optimal basic solution of LP(Π) except for pathological situations. Therefore, removal of a pattern pj ∈ Π usually expands the dual feasible region. Figure 3 shows a case in which the optimal value of LP(Π) increases by replacing a pattern pj ∈ Π with a new pattern p′ satisfying ∑_{i=1}^{m} ȳi a′i > 1. This observation implies that we are required to minimize the expansion of the dual feasible region after such a modification of Π.

Figure 3: The change of the optimal value of LP(Π) after replacing a pattern pj with a new pattern p′

To justify this argument, we conducted a preliminary computational experiment to see how patterns p′ maximizing ∑_{i=1}^{m} ȳi a′i, i.e., minimizing their reduced costs

z = 1 − ∑_{i=1}^{m} ȳi a′i   (5)

behave. We took a random instance (m = 10) generated by CUTGEN [9]. The following data are for a solution Π (n = 10) and a randomly chosen pattern pj ∈ Π. The vertical axis of Figure 4 represents the difference in the objective values f(Π′, X̄′) − f(Π, X̄), where Π′ is given by removing the pattern pj and adding a new pattern p′ ∈ P. The horizontal axis represents the reduced cost z = 1 − ∑_{i=1}^{m} ȳi a′i of each new pattern p′ ∈ P. If f(Π′, X̄′) − f(Π, X̄) < 0 holds, the neighbor solution (Π′, X̄′) is better than the current solution (Π, X̄). Of course, no pattern p′ satisfying 1 − ∑_{i=1}^{m} ȳi a′i ≥ 0 improves the current solution. It is also observed, however, that patterns p′ with larger negative reduced costs do not necessarily improve the objective value f(Π′, X̄′); i.e., we cannot observe any correlation between the horizontal and vertical axes. We then observe the same data from a different viewpoint. The vertical axis of Figure 5 represents the difference in the objective values f(Π′, X̄′) − f(Π, X̄), and the horizontal axis represents the L1 distance L1(pj, p′) = ∑_{i=1}^{m} |aij − a′i|. A stronger correlation is observed in this case. From these results, we may conclude that making the change L1(pj, p′) small is more effective than minimizing the reduced cost z = 1 − ∑_{i=1}^{m} ȳi a′i of the new pattern p′, since it may result in a small change of the corresponding valid constraint of DLP(Π). This situation is illustrated in Figure 6. Based on the above observation, we propose the 1-add neighborhood N1-add and the shift neighborhood Nshift defined as follows:

N1-add = {Π ∪ {p(i′, j′)} \ {pj′} | i′ ∈ M+(Π), pj′ ∈ Π},   (6)

Figure 4: The relationship between the difference in objective values and the reduced costs of the new patterns

Figure 5: The relationship between the difference in objective values and the L1 distance between the new patterns and pattern pj

Figure 6: The change of the optimal value of LP(Π) by perturbing a pattern pj

Nshift = {Π ∪ {p1(i′, j1, j2), p2(i′, j1, j2)} \ {pj1, pj2} | i′ ∈ M+(Π), pj1, pj2 ∈ Π},   (7)

where

M+(Π) = {i | ȳi > 0, i ∈ M},   (8)

p(i′, j′) is the pattern generated from pj′ ∈ Π by the 1-add operation that increases ai′j′ by one, and p1(i′, j1, j2) and p2(i′, j1, j2) are the patterns generated by the shift operation that moves one piece of type i′ ∈ M+(Π) from pj1 to pj2 (their details will be described later). Now we describe the details of the 1-add neighborhood; the shift neighborhood will be explained later. Given a pattern pj′ ∈ Π and a piece type i′ ∈ M with ȳi′ > 0, the 1-add operation first increases ai′j′ by one and then decreases the numbers of other piece types i (≠ i′) to make the resulting pattern feasible. To find the piece types to be decreased, we first sort all piece types i ∈ M \ {i′} in the ascending order of ȳi/li, using the descending order of overproduction

si = ∑_{j=1}^{n} aij x̄j − di   (9)

as the secondary criterion for piece types i with ȳi = 0 (recall that ȳi ≥ 0 always holds). According to the resulting order σ(k), k = 1, 2, . . . , m − 1, we decrease aσ(k)j′ := aσ(k)j′ − 1 until the resulting pattern pj′ satisfies (1). After this, we try to add the removed pieces back in the reverse order if possible. This is formally described as follows, where Lres denotes the residual length of the current pattern.

Operation ONE_ADD
Input: A piece type i′ with ȳi′ > 0, a pattern pj′ = (a1j′, a2j′, . . . , amj′) ∈ Π, lengths li and overproductions si for all piece types i ∈ M, the length L of a stock roll, and an optimal solution Ȳ = {ȳ1, ȳ2, . . . , ȳm} of DLP(Π).
Output: A new pattern p(i′, j′) = (a′1j′, a′2j′, . . . , a′mj′) or ‘failure’.

Step 1: Sort all m − 1 piece types i ∈ M \ {i′} in the ascending order of ȳi/li, using the descending order of overproduction si as the secondary criterion for piece types i with ȳi = 0. Let σ(k) denote the k-th piece type in the resulting order.

Step 2: Set a′i′j′ := ai′j′ + 1, a′ij′ := aij′ for all i ∈ M \ {i′}, and Lres := L − ∑_{i∈M} a′ij′ li. Set k := 1.

Step 3: If Lres ≥ 0 holds, set k := m − 1 and go to Step 6.

Step 4: If there is no piece type i ∈ M \ {i′} satisfying a′ij′ > 0, output ‘failure’ and halt; else if k ≥ m holds, set k := 1.

Step 5: If a′σ(k)j′ > 0 holds, set a′σ(k)j′ := a′σ(k)j′ − 1 and Lres := Lres + lσ(k). Set k := k + 1 and return to Step 3.

Step 6: If there is no piece type i ∈ M satisfying li ≤ Lres, output the resulting pattern p(i′, j′) = (a′1j′, a′2j′, . . . , a′mj′) and halt; else if k ≤ 1 holds, set k := m − 1.

Step 7: If lσ(k) ≤ Lres holds, set a′σ(k)j′ := a′σ(k)j′ + 1 and Lres := Lres − lσ(k). Set k := k − 1 and return to Step 6.
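A compact sketch of the 1-add operation follows, with two simplifications relative to the steps above: ties in ȳi/li are always broken by descending surplus (the operation applies this rule only when ȳi = 0), and the re-insertion phase is a single reverse pass rather than the cyclic Steps 6-7. Indices are 0-based and None stands for ‘failure’:

```python
def one_add(pattern, i_add, lengths, y_bar, surplus, L):
    """1-add: add one piece of type i_add, then remove pieces in
    ascending y_bar[i]/l[i] order until the pattern fits in L,
    finally trying to re-add removed types in reverse order."""
    m = len(pattern)
    others = [i for i in range(m) if i != i_add]
    others.sort(key=lambda i: (y_bar[i] / lengths[i], -surplus[i]))
    p = list(pattern)
    p[i_add] += 1
    res = L - sum(a * l for a, l in zip(p, lengths))
    k = 0
    while res < 0:
        if all(p[i] == 0 for i in others):
            return None  # cannot make the pattern feasible
        i = others[k % len(others)]
        if p[i] > 0:
            p[i] -= 1
            res += lengths[i]
        k += 1
    for i in reversed(others):  # re-add pieces while they still fit
        while lengths[i] <= res:
            p[i] += 1
            res -= lengths[i]
    return p
```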

9

Now we explain the shift neighborhood. Given two patterns pj1, pj2 ∈ Π and a piece type i′ with ȳi′ > 0, the shift operation first moves one piece of type i′ from pj1 to pj2, where we assume that pj1 and pj2 satisfy x̄j1 < x̄j2. If the resulting pattern pj2 does not satisfy constraint (1), we remove some piece types from pj2 (some of them are put back to pj1 if possible under constraint (1)). This removal is done in the same order of piece types as the one used in operation ONE_ADD until constraint (1) is recovered. The shift operation is formally described as follows, where L1res and L2res denote the residual lengths of patterns pj1 and pj2, respectively.

Operation SHIFT
Input: A piece type i′ with ȳi′ > 0, a pair of patterns pj1 = (a1j1, a2j1, . . . , amj1), pj2 = (a1j2, a2j2, . . . , amj2) ∈ Π and their frequencies x̄j1 and x̄j2, where x̄j1 < x̄j2 holds; lengths li and overproductions si of all piece types i ∈ M, the length L of a stock roll, and an optimal solution Ȳ = {ȳ1, ȳ2, . . . , ȳm} of DLP(Π).
Output: A pair of new patterns p1(i′, j1, j2) = (a′1j1, a′2j1, . . . , a′mj1) and p2(i′, j1, j2) = (a′1j2, a′2j2, . . . , a′mj2), or ‘failure’.

Step 1: If ai′j1 = 0 holds, output ‘failure’ and halt.

Step 2: Sort all piece types i ∈ M \ {i′} in the ascending order of ȳi/li, using the descending order of overproductions si as the secondary criterion for piece types i with ȳi = 0. Let σ(k) denote the k-th piece type in the resulting order.

Step 3: Set a′i′j1 := ai′j1 − 1 and a′i′j2 := ai′j2 + 1. Set a′ij1 := aij1 and a′ij2 := aij2 for all i ∈ M \ {i′}. Set L1res := L − ∑_{i∈M} a′ij1 li and L2res := L − ∑_{i∈M} a′ij2 li. Set k := 1.

Step 4: If L2res ≥ 0 holds, output p1(i′, j1, j2) and p2(i′, j1, j2), and halt; else if there is no piece type i ∈ M \ {i′} satisfying a′ij2 > 0, output ‘failure’ and halt; else if k ≥ m holds, set k := 1.

Step 5: If a′σ(k)j2 > 0 holds, set a′σ(k)j2 := a′σ(k)j2 − 1 and L2res := L2res + lσ(k); if lσ(k) ≤ L1res holds, set a′σ(k)j1 := a′σ(k)j1 + 1 and L1res := L1res − lσ(k). Set k := k + 1 and return to Step 4.
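Under the same simplified tie-breaking as the 1-add sketch (descending surplus for all ties), the shift operation can be sketched as follows; indices are 0-based and None stands for ‘failure’:

```python
def shift(p1, p2, i_mv, lengths, y_bar, surplus, L):
    """Shift: move one piece of type i_mv from pattern p1 to p2; if p2
    overflows, evict pieces from p2 in ascending y_bar[i]/l[i] order,
    putting each evicted piece back into p1 whenever it still fits."""
    if p1[i_mv] == 0:
        return None
    m = len(p1)
    order = [i for i in range(m) if i != i_mv]
    order.sort(key=lambda i: (y_bar[i] / lengths[i], -surplus[i]))
    q1, q2 = list(p1), list(p2)
    q1[i_mv] -= 1
    q2[i_mv] += 1
    res1 = L - sum(a * l for a, l in zip(q1, lengths))
    res2 = L - sum(a * l for a, l in zip(q2, lengths))
    k = 0
    while res2 < 0:
        if all(q2[i] == 0 for i in order):
            return None  # nothing left to evict from p2
        i = order[k % len(order)]
        if q2[i] > 0:
            q2[i] -= 1
            res2 += lengths[i]
            if lengths[i] <= res1:  # put the evicted piece back into p1
                q1[i] += 1
                res1 -= lengths[i]
        k += 1
    return q1, q2
```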

In the 1-add neighborhood, we apply ONE_ADD to all possible pairs of i′ and j′, and in the shift neighborhood we apply SHIFT to all triples (i′, j1, j2). As the number of positive ȳi in a basic optimal solution is at most the number of patterns n, the size of the 1-add neighborhood N1-add(Π) is O(n²) and the size of the shift neighborhood Nshift(Π) is O(n³). By the complementary slackness of linear programming, the set M+(Π) is also given by M+(Π) = {i | ∑_{j=1}^{n} aij x̄j = di, i ∈ M}. This means that the piece types i ∈ M+(Π) are bottlenecks of the current solution, and we design the 1-add operation and the shift operation so that they break at least one bottleneck. Although the 1-add operation increases the overproduction of the piece type i′ by x̄j′, it decreases the overproductions of the piece types removed from the pattern pj′ by x̄j′; i.e., the 1-add operation often generates new bottlenecks while breaking one bottleneck. On the other hand, since the shift operation decreases the overproductions of the piece types moved from pattern pj2 to pattern pj1 by only x̄j2 − x̄j1, it often breaks the bottleneck of piece type i′ without generating any new bottlenecks. Furthermore, if n is small, the 1-add operation often removes some piece types from the current set of patterns Π completely, while the shift operation takes care not to remove any piece type from the current set of patterns Π. Due to this complementary characteristic of the shift neighborhood, we sometimes obtain better solutions than by using the 1-add neighborhood alone.

The order used for searching all sets of patterns in the neighborhood can be specified by the user. In our case, LS searches the 1-add neighborhood in the order of the pattern index j ′ = j ∗ , j ∗ + 1, . . . , n, 1, . . . , j ∗ − 1, where the pattern pj ∗ is next to the pattern modified at the last move. In the shift neighborhood, LS first selects a first pattern pj1 in the same fashion as the 1-add neighborhood, and then tries a second pattern pj2 for all j2 = j2∗ , j2∗ +1, . . . , n, 1, . . . , j2∗ −1, where the pattern pj2∗ is next to the pattern modified at the last move.

2.5 Iterated local search

It is often reported that local search (LS) alone may not attain a sufficiently good solution. To improve the situation, many variants of simple LS have been developed, and their frameworks are called metaheuristics. Iterated local search (ILS) [16] is one of them; it repeats LS from different initial solutions generated by perturbing the best solution obtained so far. Our ILS starts from the first initial solution generated by MFF, and then alternately applies the two LS algorithms using the 1-add neighborhood and the shift neighborhood until no better solution is found with either neighborhood. The subsequent initial solutions are randomly selected from the shift neighborhood of the best solution obtained so far. If the number of different patterns n is small, the generated initial solutions often become infeasible. Hence, if the next initial solution becomes infeasible, we instead generate an initial solution by a first-fit algorithm with a random sequence of piece types, called the randomized first-fit (RFF) algorithm. Here, iter denotes the current number of restarts of LS since the last improvement, and max_iter (an input parameter given by users) specifies the upper bound on iter.

Algorithm ILS
Input: Lengths li and demands di of all piece types i ∈ M, the length L of a stock roll, the number of different patterns n, and the upper bound max_iter on the number of restarts of LS.
Output: A set of patterns Π∗ = {p∗1, p∗2, . . . , p∗n} and their frequencies X∗ = {x∗1, x∗2, . . . , x∗n}, or ‘failure’.

Step 1: Generate the first initial set of patterns Π by MFF. If MFF outputs ‘failure’, apply FFD; if FFD also outputs ‘failure’, output ‘failure’ and halt. Otherwise compute the frequencies X by SOLVE_IP. Set Π∗ := Π, X∗ := X and iter := 0.

Step 2: Apply LS(N1-add, Π) to obtain Π′, and then LS(Nshift, Π′) to obtain Π′′. If Π′′ ≠ Π′ holds, set Π := Π′′ and return to Step 2. Otherwise, set Π := Π′′ and go to Step 3.

Step 3: If f(Π, X) < f(Π∗, X∗) holds, set Π∗ := Π, X∗ := X and iter := 0; otherwise set iter := iter + 1.

Step 4: If iter ≥ max_iter holds, output Π∗ and X∗, and halt.

Step 5: Select a set of patterns Π ∈ Nshift(Π∗) randomly. If this Π is feasible, return to Step 2; otherwise generate another set of patterns Π by RFF. If this Π is feasible, return to Step 2; otherwise set iter := iter + 1 and return to Step 4.
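The restart scheme can be sketched generically; `ls_1add`, `ls_shift`, `evaluate` and `perturb` are hypothetical stand-ins for the components described above (the RFF fallback for infeasible perturbations is folded into `perturb`), demonstrated below on a toy objective:

```python
import random

def iterated_local_search(initial, ls_1add, ls_shift, evaluate,
                          perturb, max_iter, rng=random.Random(0)):
    """ILS skeleton: alternate the two LS phases until neither improves,
    then restart from a perturbed copy of the best solution found."""
    pi = initial
    best = pi
    it = 0
    while it < max_iter:
        while True:  # alternate 1-add and shift LS until a fixed point
            nxt = ls_shift(ls_1add(pi))
            if nxt == pi:
                break
            pi = nxt
        if evaluate(pi) < evaluate(best):
            best, it = pi, 0  # improvement resets the restart counter
        else:
            it += 1
        pi = perturb(best, rng)
    return best
```

On a toy instance where both LS phases decrement a nonnegative integer and `evaluate` is the absolute value, the skeleton converges to 0 regardless of the random perturbations.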

3 Linear programming techniques

In one iteration of LS (i.e., Steps 2 and 3 of algorithm LS), it is necessary to solve LP(Π) for O(n²) sets of patterns in the 1-add neighborhood and for O(n³) sets of patterns in the shift neighborhood. This computation would be quite expensive if we solved each of them independently from scratch. To

overcome this, we introduce a sensitivity analysis approach, which is based on our observation that the neighbor solutions are all similar to the current solution. We now consider a set of patterns Π′ = Π ∪ {p′j } \ {pj } which is generated by replacing a pattern pj = (a1j , a2j , . . . , amj ) ∈ Π with p′j = (a′1j , a′2j , . . . , a′mj ) ∈ P \ Π. Since the instance LP(Π′ ) is similar to LP(Π), we can utilize an optimal solution of LP(Π) to solve LP(Π′ ), i.e., we start the simplex method from an optimal simplex tableau of LP(Π) instead of starting it from scratch. Let X = {x1 , x2 , . . . , xn } be an optimal solution of LP(Π), and S = {s1 , s2 , . . . , sm } be ∑ the corresponding slack variables (i.e., si = nj=1 aij xj − di ). We consider an optimal simplex ˜ N ˜ , c˜B , c˜N ) of LP(Π), where B ˜ denotes basic columns, N ˜ denotes non-basic tableau T = (B, columns, c˜B denotes reduced costs for basic variables, and c˜N denotes reduced costs for nonbasic variables. Let p˜j = (˜ a1j , a ˜2j , . . . , a ˜mj )t be the column corresponding to variable xj and q˜i = (˜b1i , ˜b2i , . . . , ˜bmi )t be the column corresponding to slack variable si . ˜ ′, N ˜ ′ , c˜′ , c˜′ ) of LP(Π′ ). We first generate We now construct a new simplex tableau T ′ = (B B N ′ ′ a new column p˜j corresponding to the new pattern pj , where its frequency and reduced cost are ∑ ′ ˜j is basic in the current tableau set to x′j = 0 and c˜′j = 1− m i=1 y i aij , respectively. If the column p ˜ T , select a non-basic column p˜k (or q˜k ) satisfying |˜ akj | > ε (or |bkj | > ε) (ε is a sufficiently small positive value), and apply the pivoting operation that exchanges the column p˜j and the non-basic column p˜k (or q˜k ). If there is no such column in the current tableau T , we conclude the failure of this procedure and solve LP(Π′ ) from scratch by the revised simplex method. In either case, we replace the column p˜j with the new column p˜′j . 
Figure 7 illustrates the above operations on the optimal simplex tableau T. We then apply the criss-cross method [21] to the resulting tableau.
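To make the role of the criss-cross method concrete, the following is a minimal sketch of the least-index criss-cross rule on a full simplex tableau. This is a simplified illustration under our own representation choices, not the authors' implementation (which additionally switches to the dual simplex method once dual feasibility is reached):

```python
EPS = 1e-9

def pivot(T, z, basis, r, j):
    """Pivot on row r, column j.  T holds one row per basic variable with the
    right-hand side in the last entry; z is the reduced-cost row."""
    piv = T[r][j]
    T[r] = [v / piv for v in T[r]]
    for i in range(len(T)):
        if i != r and abs(T[i][j]) > EPS:
            f = T[i][j]
            T[i] = [a - f * b for a, b in zip(T[i], T[r])]
    f = z[j]
    if abs(f) > EPS:
        for t in range(len(z)):
            z[t] -= f * T[r][t]
    basis[r] = j

def criss_cross(T, z, basis):
    """Least-index criss-cross method for min z^T x, Ax = b, x >= 0, starting
    from any basis that may be neither primal nor dual feasible."""
    m, n = len(T), len(z)
    while True:
        row_of = {basis[i]: i for i in range(m)}
        # Smallest-index variable that is primal or dual infeasible.
        k = next((idx for idx in range(n)
                  if (idx in row_of and T[row_of[idx]][n] < -EPS)
                  or (idx not in row_of and z[idx] < -EPS)), None)
        if k is None:                        # primal and dual feasible: optimal
            x = [0.0] * n
            for i in range(m):
                x[basis[i]] = T[i][n]
            return x
        if k in row_of:                      # dual-simplex-type pivot
            r = row_of[k]
            j = next((jj for jj in range(n)
                      if jj not in row_of and T[r][jj] < -EPS), None)
            if j is None:
                raise ValueError("LP is infeasible")
            pivot(T, z, basis, r, j)
        else:                                # primal-simplex-type pivot
            rows = [i for i in range(m) if T[i][k] > EPS]
            if not rows:
                raise ValueError("LP is unbounded")
            pivot(T, z, basis, min(rows, key=lambda i: basis[i]), k)

# Example: min -x1 - x2  s.t.  x1 + x2 + s1 = 4,  x1 + s2 = 2,
# starting from the (dual infeasible) slack basis {s1, s2}.
T = [[1.0, 1.0, 1.0, 0.0, 4.0],
     [1.0, 0.0, 0.0, 1.0, 2.0]]
z = [-1.0, -1.0, 0.0, 0.0]
x = criss_cross(T, z, [2, 3])   # x -> [2.0, 2.0, 0.0, 0.0]
```

The example run passes through both a primal-type and a dual-type pivot before reaching optimality, which is exactly the situation exploited here: the repaired tableau of LP(Π′) is in general neither primal nor dual feasible.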

Figure 7: Exchanging a column p̃_j and a new column p̃′_j in the optimal simplex tableau T

Even if the simplex tableau is neither primal feasible (i.e., at least one basic variable is negative) nor dual feasible (i.e., at least one reduced cost is negative), the criss-cross method always obtains an optimal solution of LP(Π′) unless the problem is infeasible or unbounded. In general, the dual simplex method is faster than the criss-cross method once a dual feasible simplex tableau is available. We therefore switch from the criss-cross method to the dual simplex method whenever the simplex tableau becomes dual feasible during the criss-cross computation. Furthermore, the dual simplex method carries a lower bound g(Π′, Y′) = Σ_{i=1}^{m} d_i y′_i of LP(Π′) during its computation; i.e., g(Π′, Y′) ≤ f(Π′, X′) holds for any feasible solution Y′ of DLP(Π′).

If this lower bound g(Π′, Y′) exceeds the objective value f(Π, X) of the current set of patterns Π, we can immediately conclude that the new set of patterns Π′ does not improve on Π, and move to the next neighbor solution.

Algorithm SOLVE_LP

Input: An optimal solution X = {x_1, x_2, ..., x_n} of LP(Π) together with its simplex tableau T = (B̃, Ñ, c̃_B, c̃_N), where Π = {p_1, p_2, ..., p_n} is the current set of patterns, and a new pattern p′_j = (a′_1j, a′_2j, ..., a′_mj).

Output: An optimal solution X′ = {x′_1, x′_2, ..., x′_n} of LP(Π′) and its simplex tableau T′ = (B̃′, Ñ′, c̃′_B, c̃′_N), or 'failure', where Π′ = Π ∪ {p′_j} \ {p_j}.

Step 1: If the column p̃_j is basic, select a non-basic column p̃_k (or q̃_k) satisfying |ã_kj| > ε (or |b̃_kj| > ε), and apply the pivoting operation that exchanges the column p̃_j and the non-basic column p̃_k (or q̃_k). If there is no such non-basic column, solve LP(Π′) from scratch by the revised simplex method to obtain an optimal solution X′ and its simplex tableau T′, and go to Step 4.

Step 2: Replace the column p̃_j with the new column p̃′_j corresponding to the new pattern p′_j.

Step 3: Apply the criss-cross method to the obtained tableau, and compute an optimal solution X′ and its simplex tableau T′. If the simplex tableau becomes dual feasible during the computation of the criss-cross method, switch to the dual simplex method. (In the latter case, if g(Π′, Y′) > f(Π, X) holds during the computation of the dual simplex method, immediately output 'failure' and halt.)

Step 4: If the resulting solution X′ contains at least one negative x_j or s_k, output 'failure'; otherwise output X′ and T′. Halt.
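The early-termination test in Step 3 can be stated compactly: since the dual objective g(Π′, Y′) = Σ_i d_i y′_i is a lower bound on f(Π′, X′), the dual simplex run can be abandoned as soon as this bound exceeds the incumbent value f(Π, X). A small sketch with hypothetical names:

```python
def dual_bound_prunes(d, y, f_incumbent):
    """Return True if the dual bound g(Pi', Y') = sum_i d_i * y_i already
    exceeds the objective value f(Pi, X) of the current set of patterns,
    in which case SOLVE_LP outputs 'failure' immediately.

    d[i] is the demand of piece type i; y[i] is the current dual value.
    """
    g = sum(di * yi for di, yi in zip(d, y))   # lower bound g(Pi', Y')
    return g > f_incumbent                     # the neighbor cannot improve
```

This check is what allows many neighbor LPs to be discarded before being solved to optimality.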

4 Computational results

We conducted computational experiments on 18 classes of random instances generated by CUTGEN [9], where the instances are characterized by parameters L, m, ν1, ν2 and d̄. Here the lengths l_i of the piece types are random integers drawn from the interval [ν1 L, ν2 L], and d̄ is the average of the demands (d_1, d_2, ..., d_m). The rules for generating l_i and d_i, described in [9], are somewhat complicated and are omitted here. In these experiments, L was set to 1000, m was set to 10, 20 and 40, and d̄ was set to 10 and 100. (ν1, ν2) was set to (0.01, 0.2) for classes 1–6, (0.01, 0.8) for classes 7–12, and (0.2, 0.8) for classes 13–18. The random seed of CUTGEN was set to 1994. For each class, 100 instances were generated and solved. All instances can be obtained electronically.¹

We compared our iterated local search (ILS) with four existing algorithms: (i) an iterated local search (UYI) using the 1-add neighborhood only [17], (ii) the sequential heuristic procedure (SHP) [14], (iii) the pattern combination heuristic (KOMBI) [7], and (iv) the exact branch-and-price method of Belov and Scheithauer (BP) [2]. Our ILS was coded in the C language and run on an IBM-compatible personal computer (Pentium IV 2GHz, 1GB memory) under Linux 2.2. UYI and SHP were coded in the C language and run on an IBM-compatible personal computer (Pentium III 1GHz, 1GB memory) under FreeBSD 4.2. The results of KOMBI and BP were taken from [7] and [2], respectively. KOMBI was run on an IBM-compatible 486/66 personal computer using MODULA-2 as the programming language under MS-DOS 6.0. BP was implemented in the C language, where the LPs were solved by the ILOG CPLEX 7.5 Callable Library, and run on an IBM-compatible personal computer (Athlon XP 1GHz, 512MB memory) under Linux.

¹ http://www.toyota-ti.ac.jp/Lab/Kikai/5k40/cad/umetani/index-e.html

As mentioned in Section 1, ILS and UYI can obtain trade-off curves between the number of different patterns n and the number of stock rolls f(Π, X). We took a random instance of class 12, and illustrate the results of ILS and UYI for all n between n_LB and m (= 40), where n_LB = ⌈Σ_{i∈M} l_i / L⌉ is a lower bound on the number of different patterns and m is the number of piece types. Figure 8 shows the number of stock rolls f(Π, X) against the number of different patterns n.

Figure 8: The number of stock rolls of ILS, UYI and SHP

In Figure 8, fLB is a lower bound on the number of stock rolls obtained by Gilmore and Gomory's column generation method [10, 11] (i.e., the continuous optimal value for the instance without the restriction on n). Since SHP has an input parameter called MAXTL, which controls the maximum trim loss of each pattern, we also plot in Figure 8 some solutions of SHP for different values of MAXTL. We also tested these algorithms on instances of all classes, and observed that, in most cases, ILS achieved much smaller numbers of stock rolls than UYI and SHP for all values of n. One of the reasons is that the shift operation often breaks a bottleneck without generating new bottlenecks, while the 1-add operation often creates new ones.

We next compared algorithms ILS, SHP, KOMBI and BP on all classes of random instances. Tables 1 and 2 show the computational results of SHP, KOMBI and BP, and those of ILS, respectively, where the columns n_LB and f_LB give the averages of the lower bounds n_LB and f_LB, and the columns n and f give the average numbers of different patterns and stock rolls for each class. Here, SHP was run with MAXTL = 0.03. KOMBI and BP minimize the number of different patterns n while using at most a given number f_UB of stock rolls. KOMBI and BP were run with f_UB = f*, where f* is the optimal value of the standard 1D-CSP, computed by a branch-and-price algorithm based on the column generation technique [1, 4, 5, 18, 19]. Although BP is an exact branch-and-price algorithm, the upper bound on its computational time was set to 40 seconds, and the best solution obtained within this time bound was output. To capture the general behavior of the trade-off curves for ILS, we show in Table 2 several points of each curve, i.e., the minimum number of different patterns n using at most f_UB stock rolls, where we tested f_UB = ∞ and f_UB = ⌈(1 + β) f_LB⌉ with β = 0.05, 0.03 and 0.01. The case f_UB = ∞ was intended to find the minimum feasible n approximately, while the other cases were to find small n using a reasonably small number of stock rolls.

From Tables 1 and 2, we first observe that SHP obtains a smaller number of different patterns

Table 1: Computational results of SHP, KOMBI and BP for random instances

                                   SHP            KOMBI           BP
class   m    d   n_LB    f_LB     n       f       n       f       n
  1    10   10   1.67   10.98    4.25   11.62    3.40   11.49    3.43
  2    10  100   1.67  109.74    6.33  111.81    7.81  110.25    6.08
  3    20   10   2.56   21.58    5.89   22.37    5.89   22.13      –
  4    20  100   2.56  215.41    8.89  218.94   14.26  215.93      –
  5    40   10   4.26   42.48    9.03   43.60   10.75   42.96   10.47
  6    40  100   4.26  424.23   13.45  430.79   25.44  424.71   18.71

  7    10   10   4.62   49.96   10.33   52.19    7.90   50.21    7.57
  8    10  100   4.62  499.34   11.46  520.36    9.96  499.52    8.98
  9    20   10   8.65   93.35   19.22   97.96   15.03   93.67      –
 10    20  100   8.65  931.95   21.74  973.63   19.28  932.32      –
 11    40   10  16.27  176.59   36.29  187.37   28.74  176.97   25.18
 12    40  100  16.27 1763.19   40.53 1865.41   37.31 1766.20   33.75

 13    10   10   5.54   63.23   10.55   65.41    8.97   63.27    8.79
 14    10  100   5.54  632.10   10.95  654.80   10.32  632.12    9.97
 15    20   10  10.52  119.36   20.30  124.36   16.88  119.93      –
 16    20  100  10.52 1191.76   21.09 1240.39   19.91 1191.80      –
 17    40   10  19.85  224.56   38.52  238.06   31.46  224.68   29.15
 18    40  100  19.85 2242.30   40.38 2366.95   38.28 2242.40   35.99

than KOMBI for classes 2–6 and than BP for classes 5 and 6, using a small number of additional stock rolls. However, the solutions of SHP were much worse than those of KOMBI for the other classes. This indicates that SHP does not provide good solutions for instances in which the ratio of the piece lengths l_i to the stock length L is relatively large. Generally speaking, ILS achieves a smaller number of different patterns n using a reasonably small number of stock rolls. For example, compare the column of KOMBI in Table 1 with that of ILS with f_UB = ⌈1.01 f_LB⌉ in Table 2. Although the f values in the two columns are about the same (ILS uses a slightly larger number of stock rolls), ILS attains a much smaller n than KOMBI. Note that ILS and BP can adaptively control the trade-off between the number of different cutting patterns n and the number of stock rolls f through their input parameters. From these observations, we may conclude that ILS is useful in the sense that it provides reasonable trade-off curves for a wide range of n.

Table 3 gives the average computational times of SHP (MAXTL = 0.03), KOMBI (f_UB = f*), BP (f_UB = f*) and ILS (f_UB = ⌈1.05 f_LB⌉) for all classes. Here, we note that the CPU time for BP gives the average time to find the best solution, while the others give the average time of the whole execution. Table 4 gives a rough comparison of the processors Pentium III (1GHz), 486/66 (66MHz), Athlon XP (1GHz) and Pentium IV (2GHz), showing benchmark values of SPECint2000, SPECfp2000 and Mflop/s. The values of SPECint2000 and SPECfp2000 are taken from the web site of SPEC (Standard Performance Evaluation Corporation)², and the Mflop/s values are taken from [6]. SHP is faster than ILS for classes 7–18; KOMBI is much faster than ILS taking into account the power of the computers in use (a 486/66 processor is about several hundred times slower than a Pentium IV (2GHz) processor); and BP may be comparable to ILS. However, the average computational time of ILS is less than 30 seconds for all classes, which is acceptable for practical purposes.

² http://www.spec.org/
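As a rough aid to reading Table 3 across machines, the Mflop/s figures of Table 4 can be used to normalize the reported CPU times to a common processor; a sketch (the linear scaling is only a crude first-order estimate, and the dictionary keys are our own labels):

```python
# Mflop/s values from Table 4.
MFLOPS = {"Pentium III (1GHz)": 397.0, "486/66 (66MHz)": 2.4,
          "Athlon XP (1GHz)": 477.0, "Pentium IV (2GHz)": 941.0}

def to_pentium4_seconds(seconds, machine):
    """Scale a running time measured on `machine` to a Pentium IV (2GHz)
    estimate, assuming time is inversely proportional to Mflop/s."""
    return seconds * MFLOPS[machine] / MFLOPS["Pentium IV (2GHz)"]

# A 486/66 is roughly 941 / 2.4, i.e. about 392 times slower than the
# Pentium IV (2GHz), consistent with "several hundred times slower" above.
slowdown = MFLOPS["Pentium IV (2GHz)"] / MFLOPS["486/66 (66MHz)"]
```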


Table 2: Computational results of ILS for random instances

                  f_UB = ∞       f_UB = ⌈1.05 f_LB⌉  f_UB = ⌈1.03 f_LB⌉  f_UB = ⌈1.01 f_LB⌉
class   m    d    n       f       n       f           n       f           n       f
  1    10   10   1.67   15.15    2.43   12.24        2.43   12.24        2.43   12.24
  2    10  100   1.67  149.78    3.18  114.49        3.68  113.02        4.57  111.60
  3    20   10   2.57   28.01    3.98   23.53        4.42   23.08        4.42   23.08
  4    20  100   2.57  278.57    4.89  223.92        5.57  221.04        7.36  218.44
  5    40   10   4.28   55.12    6.68   45.29        7.35   44.76        9.32   43.95
  6    40  100   4.28  546.64    7.25  442.00        8.49  435.44       12.48  429.10

  7    10   10   5.01   54.14    5.52   51.35        5.72   50.98        5.92   50.62
  8    10  100   5.01  541.50    5.75  507.92        6.04  503.47        6.42  500.77
  9    20   10   9.27  101.21   10.05   96.35       10.44   95.49       11.38   94.40
 10    20  100   9.27 1008.05   10.30  954.55       10.83  946.09       12.11  936.18
 11    40   10  16.95  193.17   18.32  182.88       19.30  180.93       22.08  178.34
 12    40  100  16.95 1920.39   18.40 1818.35       19.58 1799.14       22.66 1773.74

 13    10   10   6.26   67.61    6.68   64.16        6.84   63.80        7.01   63.53
 14    10  100   6.26  675.50    6.85  637.03        7.02  634.50        7.43  630.50
 15    20   10  11.76  125.86   12.34  122.44       12.70  121.34       13.26  120.53
 16    20  100  11.76 1256.92   12.46 1217.91       12.91 1207.63       13.80 1196.57
 17    40   10  21.50  239.64   22.44  231.41       23.09  229.14       24.49  226.62
 18    40  100  21.50 2391.53   22.50 2307.70       23.31 2282.51       25.36 2255.12

5 Conclusion

We considered a variant of the one-dimensional cutting stock problem (1D-CSP), called the pattern restricted problem (PRP), which minimizes the number of stock rolls while constraining the number of different patterns within a bound given by users; the problem is motivated by the fact that the setup costs for switching patterns have become more important in the cutting industry. For this problem, we proposed an iterated local search (ILS) algorithm that uses two local search (LS) procedures based on the 1-add neighborhood and the shift neighborhood. We utilized the dual solutions of the associated linear programming (LP) problems to reduce the neighborhood sizes effectively, and incorporated a sensitivity analysis technique and the criss-cross method to solve a large number of LP problems quickly. In addition, we utilized a lower bound derived from a dual feasible solution of the LP problem as another stopping criterion for the LP computation. We compared our ILS with existing algorithms on random instances, and observed that ILS achieved a smaller number of different patterns using only a small number of additional stock rolls.

References

[1] G. Belov and G. Scheithauer, A branch-and-cut-and-price algorithm for one-dimensional stock cutting and two-dimensional two-stage cutting, Technical Report MATH-NM-03-2003, Dresden University, 2003.

[2] G. Belov and G. Scheithauer, The number of setups (different patterns) in one-dimensional stock cutting, Technical Report MATH-NM-15-2003, Dresden University, 2003.


Table 3: CPU time in seconds for the random instances

class   m    d      SHP    KOMBI      BP     ILS
  1    10   10     0.04     0.35    1.94    0.10
  2    10  100     0.08     1.26    5.01    0.22
  3    20   10     1.56     2.10       –    0.72
  4    20  100     1.57    16.41       –    2.69
  5    40   10   631.74    40.03    9.15    7.55
  6    40  100   107.11   383.30   14.30   23.98

  7    10   10     0.01     0.11    0.24    0.21
  8    10  100     0.01     0.24    1.47    0.27
  9    20   10     0.01     1.47       –    1.96
 10    20  100     0.02     3.40       –    2.19
 11    40   10     0.09    36.98   10.80   19.16
 12    40  100     0.14    77.41    9.92   23.87

 13    10   10     0.01     0.13    0.01    0.26
 14    10  100     0.01     0.18    0.22    0.31
 15    20   10     0.01     1.92       –    2.01
 16    20  100     0.01     2.71       –    2.21
 17    40   10     0.06    51.31    7.47   22.01
 18    40  100     0.10    71.31    6.93   26.84

Note 1: The CPU time for BP gives the average time to find the best solution, while the others give the average time of the whole execution.
Note 2: The CPU time for SHP was measured on a Pentium III (1GHz) processor, that for KOMBI on a 486/66 processor, that for BP on an Athlon XP (1GHz) processor, and that for ILS on a Pentium IV (2GHz) processor.

[3] R.E. Burkard and C. Zelle, A local search heuristic for the real cutting problem in paper production, Technical Report SFB-257, Institute of Mathematics B, Technical University Graz, 2003.

[4] Z. Degraeve and L. Schrage, Optimal integer solutions to industrial cutting stock problems, INFORMS Journal on Computing 11 (1999) 406–419.

[5] Z. Degraeve and M. Peeters, Optimal integer solutions to industrial cutting-stock problems: Part 2, benchmark results, INFORMS Journal on Computing 15 (2003) 58–81.

[6] J.J. Dongarra, Performance of various computers using standard linear equations software, Technical Report No. CS-89-85, Computer Science Department, University of Tennessee, 2005 (available as http://www.netlib.org/benchmark/performance.ps).

[7] H. Foerster and G. Wäscher, Pattern reduction in one-dimensional cutting stock problems, International Journal of Production Research 38 (2000) 1657–1676.

[8] M.R. Garey and D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W.H. Freeman and Company, 1979.


Table 4: Performance comparison of different processors

Processor             SPECint2000   SPECfp2000   Mflop/s
Pentium III (1GHz)            407          273       397
486/66 (66MHz)                  –            –       2.4
Athlon XP (1GHz)              369          323       477
Pentium IV (2GHz)             648          715       941

[9] T. Gau and G. Wäscher, CUTGEN1: A problem generator for the standard one-dimensional cutting stock problem, European Journal of Operational Research 84 (1995) 572–579.

[10] P.C. Gilmore and R.E. Gomory, A linear programming approach to the cutting-stock problem, Operations Research 9 (1961) 849–859.

[11] P.C. Gilmore and R.E. Gomory, A linear programming approach to the cutting-stock problem, Part II, Operations Research 11 (1963) 863–888.

[12] C. Goulimis, Optimal solutions for the cutting stock problem, European Journal of Operational Research 44 (1990) 197–208.

[13] R.E. Haessler, A heuristic programming solution to a nonlinear cutting stock problem, Management Science 17 (1971) 793–802.

[14] R.E. Haessler, Controlling cutting pattern changes in one-dimensional trim problems, Operations Research 23 (1975) 483–493.

[15] R.E. Johnston, Rounding algorithms for cutting stock problems, Journal of Asian-Pacific Operations Research Societies 3 (1986) 166–171.

[16] H.R. Lourenço, O.C. Martin and T. Stützle, Iterated local search, in: F. Glover and G.A. Kochenberger (eds.), Handbook of Metaheuristics, Kluwer Academic Publishers (2003) 321–353.

[17] S. Umetani, M. Yagiura and T. Ibaraki, An LP-based local search to the one dimensional cutting stock problem using a given number of cutting patterns, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E86-A (2003) 1093–1102.

[18] P.H. Vance, Branch-and-price algorithms for the one-dimensional cutting stock problem, Computational Optimization and Applications 9 (1998) 211–228.

[19] F. Vanderbeck, Computational study of a column generation algorithm for bin packing and cutting stock problems, Mathematical Programming A 86 (1999) 565–594.

[20] F. Vanderbeck, Exact algorithm for minimizing the number of setups in the one-dimensional cutting stock problem, Operations Research 48 (2000) 915–926.

[21] S. Zionts, The criss-cross method for solving linear programming problems, Management Science 15 (1969) 426–445.

