Optimization Problems: Refinements

Antonis Thomas
Department of Information and Computing Sciences, Utrecht University, The Netherlands
[email protected]

Abstract. In this paper, we study a relatively new interpretation of optimization problems: refinements. In a refinement version the input is augmented with a feasible solution, and the problem is to decide whether there exists a “better” solution, i.e. a solution of larger or smaller measure in the case of maximization or minimization problems respectively. A first exploration of the properties of such problems is made, which eventually leads to the development of a framework for proving that refinement problems are NP-Complete. We give a general method to transform NP-Completeness proofs of decision versions of optimization problems into NP-Completeness proofs of the corresponding refinement variants. The framework can be applied to many existing proofs of this type. An important application of these results can be found in lower-bounding kernelization, which implies a connection to parameterized complexity. As a corollary we obtain, for several problems parameterized by structural parameters such as treewidth, the non-existence of polynomial kernels unless coNP ⊆ NP/poly.

1 Introduction

Finding the optimal solution among a set of feasible ones is a thoroughly researched topic. Understanding and describing the complexity of such optimization problems requires considerable persistence. Since the publication of the first NP-Complete problem, many computer scientists have concentrated their efforts on establishing NP-Completeness results. Subsequently, many important problems have been solved or categorized as “unsolvable efficiently”, while others have remained open and the subject of intricate discussion. The main intention of this article is to propose a new way of viewing optimization problems and, eventually, to promote future studies by raising new questions. In parameterized complexity, one of the newer fields of complexity theory, researchers have been occupied with kernelization algorithms [13]. The problem of bounding the size of the output of such algorithms has also been treated. In [5], Bodlaender et al. discussed a framework capable of demonstrating results on the non-existence of polynomial kernels for parameterized problems, under the assumption that the polynomial hierarchy does not collapse. The application of this framework to graph problems parameterized by some structural parameter, such as treewidth, cliquewidth, maximum degree,

etc., suggested the use of refinement versions of optimization problems, thus creating the incentive for the study reported in this paper. More details follow in Section 4, where we make a step forward in settling a question raised in [5] about the non-existence of polynomial kernels for all NP-Complete graph problems parameterized by treewidth.

A refinement can be thought of as the problem of knowing a feasible solution and asking for the existence of a better one, that is, a solution of smaller (or larger) measure when the problem is a minimization (or maximization) problem. Defining refinements will hopefully provide an alternative interpretation of optimization problems and encourage future studies on the subject. A systematic analysis of the complexity aspects of refinement problems has not yet been carried out. Therefore, the purpose of this article is to define refinement problems formally and to explore their first properties. To characterize the refinement version of an optimization problem, we augment the input with a solution and ask whether another solution of better measure exists. For background on the complexity of optimization problems, the reader is referred to [1, 16].

Exploring the properties of refinement problems reveals some rather unexpected facts. Although the claim that refinements are as hard as their optimization counterparts might seem apparent at first glance, there is no obvious way to prove this formally. Struggling with the refinements of NP-Hard optimization problems unveils an interesting framework that derives NP-Completeness results for the problems that fall within it. In spite of its restricted nature, the first results are easily deduced and it appears that many more are awaiting discovery.

The rest of this paper is outlined as follows. In Section 2 we present some notions and results from the existing literature and introduce a restricted version of polynomial transformation. In Section 3 the refinement version of an optimization problem is formally defined, a completeness framework that exploits the newly defined transformations is explained, and the first derived results are shown. In Section 4 we show applications of refinements in parameterized complexity and kernelization. Finally, Section 5 closes this article with conclusions and a number of questions that remain open.

2 Preliminaries

Most of the notions discussed in this section come from the existing literature. More specifically, Definitions 1, 2, 3 and Theorem 1 originate from [1]. Definition 4 is given here for the first time. An optimization problem deals with finding the best solution from the set of feasible ones. Formally:

Definition 1 ([1]). An optimization problem P is characterized by a quadruple (I, SOL, m, g), where
– I is the set of instances of P
– given an instance x ∈ I, SOL(x) is the set of feasible solutions

– m is the measure function; given an instance x ∈ I and a solution s ∈ SOL(x), m(x, s) provides a positive integer which is the value of the feasible solution s
– g is the goal function and is either min or max

Given an instance x ∈ I, the goal is to find a solution s ∈ SOL(x) such that m(x, s) = g{m(x, s′) | s′ ∈ SOL(x)}.

In this article we are interested in NP-optimization problems. Thus, defining the class NPO is needed.

Definition 2 ([1]). An optimization problem P = (I, SOL, m, g) belongs to the class NPO if the following conditions hold:
– the set of instances I is recognizable in polynomial time
– there exists a polynomial q such that, given an instance x ∈ I, for any s ∈ SOL(x), m(x, s) ≤ q(|x|). Moreover, for any x and s such that m(x, s) ≤ q(|x|), it is decidable in polynomial time whether s ∈ SOL(x)
– the measure function m is computable in polynomial time

Definition 3 ([1]). Given an optimization problem P, its derived decision problem PD is obtained by adding a numerical bound k to each input: given an instance x ∈ I and a positive integer k ∈ Z+, decide whether there exists a solution SP ∈ SOL(x) such that m(x, SP) ≥ k if g = max, or m(x, SP) ≤ k if g = min.

Theorem 1 ([1]). For any optimization problem P in NPO, the corresponding decision problem PD belongs to NP.

In the rest of this paper the sets of instances (I) and solutions (SOL) are subscripted to refer to a specific problem when there is more than one under consideration. Suppose PD and QD are decision problems derived from optimization problems P and Q respectively. Note that if x ∈ IP is an instance of P, then w = {x, k} ∈ IPD, where k represents the numerical bound needed to pose a yes/no question, is an instance of PD. Consequently, a polynomial time many-one transformation from PD to QD would be a polynomial time computable function f such that, for all w = {x, k} ∈ IPD, w ∈ PD if and only if w′ = f(w) = f({x, k}) = {x′, k′} ∈ QD. Using this form to represent such instances, one can interpret x as the structural (compositional) part of an instance (which can be a graph, a set, etc.), independent of the numerical bound.

Definition 4. Let PD and QD be the decision problems derived from optimization problems P and Q respectively. Then a bound-independent polynomial transformation from PD to QD is a quadruple f = (t1, t2, t3, g) such that:
– t1, t2, t3, g are polynomially computable
– t1 : IP → IQ
– t2 is a partial function such that ∀x ∈ IP and ∀y ∈ SOLP(x), t2(x, y) ∈ SOLQ(t1(x))

– t3 is a partial function such that ∀x ∈ IP and ∀y ∈ SOLQ(t1(x)), t3(x, y) ∈ SOLP(x)
– g : IP → Z such that ∀x ∈ IP and ∀y ∈ SOLP(x), k′ = ±k + g(x), where k = mP(x, y) and k′ = mQ(t1(x), t2(x, y))

Note that the above definition is a restricted version of the polynomial-time many-one reduction. In addition, it is similar to the metric reduction defined by Krentel as the obvious generalization of a many-one reduction [16]. Our reduction aims at decision variants of optimization problems, additionally has (compared to a metric reduction) a forward transformation of solutions (t2), and is naturally more restricted. Function t1 is usually referred to as a polynomial transformation; it transforms an instance of an optimization problem into an instance of another optimization problem. Functions t2 and t3 ensure that there exists a formal way to convert solutions of the one problem to the other and vice versa. Note that these functions are usually described in the proof of a polynomial many-one reduction, more specifically in the part that explains how the yes-instances are preserved while translating from the language of the one problem to the other. The restriction of our reduction can be interpreted as the transformation of the structural part of the instance being independent of the numerical bound (k). This is reflected in the functions t1 and g being independent of k (the only dependence is between k and k′). Note that we use the negative of the size of the solution to the original problem (−k) when the transformation is from a maximization to a minimization problem or the other way around¹.

Regarding polynomial transformations between decision versions of optimization problems, these restrictions are usually fulfilled. Consider the following example to demonstrate this intuition. Suppose PD, QD are decision variants derived from graph optimization problems, such that PD is polynomially reducible to QD. The structural part of each corresponding instance consists of a graph. Therefore, {G, k} ∈ IPD is a yes-instance of PD if and only if f({G, k}) = {G′, k′} is a yes-instance of QD. Note that it seems reasonable in such cases for G → G′ (function t1) to be independent of k; transforming the graph is actually translating one problem to the other such that only k′ depends on k (function g). Then a solution SP in G with |SP| = k translates to a solution SQ in G′ with |SQ| = k′ and vice versa (functions t2, t3). There appears to be a significant number of examples of reductions between optimization problems that fulfill these properties, some of which are presented in a subsequent section.
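To make the four components of Definition 4 concrete, the following is a minimal sketch (in Python, with hypothetical helper names; not taken from the paper) of the classical Independent Set → Vertex Cover reduction used again in Section 3.2, phrased as a bound-independent polynomial transformation: t1 leaves the graph unchanged, t2 and t3 complement the solution, and g(x) = |V| so that k′ = −k + |V|.

```python
# Sketch of a bound-independent polynomial transformation f = (t1, t2, t3, g)
# for Independent Set -> Vertex Cover. An instance is (vertices, edges),
# with edges given as frozensets of two vertices.

def t1(instance):
    """Structural transformation IP -> IQ: the graph itself is unchanged."""
    return instance

def t2(instance, independent_set):
    """Forward solution map: the complement of an independent set is a vertex cover."""
    vertices, _edges = instance
    return vertices - independent_set

def t3(instance, vertex_cover):
    """Backward solution map: the complement of a vertex cover is an independent set."""
    vertices, _edges = instance
    return vertices - vertex_cover

def g(instance):
    """Bound offset: k' = -k + g(x), here g(x) = |V| (max -> min, hence -k)."""
    vertices, _edges = instance
    return len(vertices)

# Example usage on a triangle graph:
V = frozenset({1, 2, 3})
E = {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})}
x = (V, E)
S_P = {1}                 # an independent set of measure k = 1
k = len(S_P)
S_Q = t2(x, S_P)          # the vertex cover {2, 3}
k_prime = -k + g(x)       # k' = 2 = |S_Q|
assert len(S_Q) == k_prime
```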

3 Optimization problem refinements

A refinement of an optimization problem is a variant where the input is augmented with a solution, and the problem is to decide whether there exists another solution with better measure (larger in the case of a maximization and smaller in the case of a minimization problem). More formally:

¹ A maximization problem can be transformed into a minimization problem (and vice versa) by multiplying the target function by −1.

Definition 5 (Refinement). A refinement of an optimization problem P, denoted PR, is the following decision problem: given an instance x ∈ IP and a solution SP ∈ SOLP(x) (i.e. {x, SP} ∈ IPR), decide if there exists a solution SP′ such that mP(x, SP′) > mP(x, SP) when gP = max, or mP(x, SP′) < mP(x, SP) when gP = min.

In the remainder of this paper we use the notion of a better solution. A solution S′ is better than a solution S if its measure is larger (smaller) in the case of a maximization (minimization) problem. In addition, note that a refinement problem can always be polynomially reduced to the optimization problem it is derived from. Given an instance of a refinement, one can drop the solution to obtain an equivalent instance of the corresponding optimization problem. In that sense, optimization problems are always at least as hard as their refinement counterparts. Therefore, a reduction from PR to PD is trivial (e.g. {x, SP} is a yes-instance of PR if and only if {x, k = |SP| − 1} is a yes-instance of PD, assuming P is a minimization problem). Because of Theorem 1 we obtain the following result as a corollary:

Corollary 1. For any optimization problem P in NPO, the corresponding refinement problem PR belongs to NP.

A strongly related concept is that of iterative compression, which was introduced in [18]. Iterative compression is a technique for showing that a minimization problem is fixed parameter tractable. An optimal solution is found by iteratively building the problem structure and compressing intermediate solutions at each iteration step. For the compression step a so-called compression routine is employed: an algorithm that, given a problem instance and a corresponding solution, either calculates a smaller solution or proves that the given solution is of minimum size. The idea is that if the compression routine is fixed parameter tractable then so is the whole algorithm. Even though the use of this technique is out of the scope of this paper, it is straightforward to see that a compression routine is a constructive variant of the minimization refinement. A recent survey by Guo et al. gives further details on iterative compression [14].

Another related concept is that of reoptimization: given an optimization instance and an optimal solution, we seek an optimal (or high quality) solution to an instance that arises from local modifications of the original. The question that arises there is whether the knowledge of an optimal solution to the unaltered instance can help in solving the locally modified instance [3]. In this direction there exist some intractability, approximability and inapproximability results for the reoptimization variants of specific optimization problems. An analytic survey on reoptimization can be found in [3].
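As an illustration of the trivial reduction from a refinement to its decision problem mentioned above, the following is a minimal sketch (hypothetical helper names; the parameter measure plays the role of mP):

```python
# Minimal sketch of the trivial reduction from a refinement instance {x, S_P}
# to a decision instance {x, k}: asking for a better solution is the same as
# asking the decision question with the bound moved past the given measure.

def refinement_to_decision(x, given_solution, measure, goal):
    """Map a refinement instance to an equivalent decision instance (x, k)."""
    k = measure(x, given_solution)
    if goal == "min":
        return (x, k - 1)   # is there a solution of measure <= k - 1 ?
    else:
        return (x, k + 1)   # is there a solution of measure >= k + 1 ?

# Example with a toy measure (the size of the solution), as in the text's example:
x = "some instance"
S_P = {1, 2, 3}
decision_instance = refinement_to_decision(x, S_P, lambda _x, s: len(s), "min")
assert decision_instance == (x, 2)
```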

3.1 Completeness framework

In this section we propose a framework to prove NP-Completeness for refinements of optimization problems. Intuitively, if there exists a polynomial

transformation between two optimization problems there has to exist one between their refinement counterparts. This intuition is the reason to define the restricted version of polynomial transformation in Section 2 and finally reach the following lemma.

Lemma 1. Let P and Q be optimization problems with f = (t1, t2, t3, g) a bound-independent polynomial transformation from PD to QD. Then f′ = (t1, t2) is a polynomial transformation from PR to QR.

Proof. Given x ∈ IP and SP ∈ SOLP(x) we construct iPR = {x, SP} to be an instance of PR. Then we derive y = t1(x) ∈ IQ and SQ = t2(x, SP) ∈ SOLQ(y), and construct iQR = {y, SQ} to be an instance of QR. The claim to prove is that iPR is a yes-instance of PR if and only if iQR is a yes-instance of QR.
Suppose that SP′ is a solution for PR. Then SP′ is a better solution than SP for x ∈ IP. Note that since f is a bound-independent polynomial transformation, y = t1(x) will be identical for any value of the numerical bound (k). We can obtain SQ′ = t2(x, SP′), which is a better solution than SQ for y ∈ IQ. This is because of the definition of function g; since its value depends only on the instance, k′ will experience the same change as k. Since k is the size of a better solution for x, so is k′ for y. Therefore, SQ′ is a solution for QR.
Conversely, suppose that SQ′ is a solution for QR. Then SQ′ is a better solution than SQ for y ∈ IQ. Similarly, we obtain SP′ = t3(x, SQ′), which is a better solution than SP for x ∈ IP. Therefore, SP′ is a solution for PR. ⊓⊔

Note that in the proof above, by a solution to the refinements we formally mean the NP-witness. In fact, Lemma 1 implies a framework for proving NP-Completeness results for the refinement versions of optimization problems. Combining it with Corollary 1 leads to the main theorem of this article.

Theorem 2. Let P and Q be optimization problems with f a bound-independent polynomial transformation from PD to QD. Then, if PR is NP-Complete, so is QR.

Although Theorem 2 provides a strong framework to prove NP-Completeness results for refinements, it is only of use if we have at least one NP-Complete refinement problem to begin with. The most natural way to show a refinement NP-Complete seems to be a direct reduction from the corresponding optimization problem. Apparently, this is not always trivial. Suppose we have an instance I = {G, k} of an NP-Hard maximization graph problem. A direct reduction to its refinement analogue would impose a solution of size k − 1 in the transformed graph G′. This poses the question of the existence of a solution of size at least k, which is essentially the same as the decision version. However, a solution of size at least k in G′ does not necessarily guarantee such a solution in G. One has to ensure that the imposed solution (or any part of it) cannot be used in any better solution. For example, the refinement of the Clique problem can be trivially shown NP-Complete by adding a disconnected clique component of size k − 1 to the given graph. Adding the refinements of Independent Set and Dominating Set to this result forms the first group of NP-Complete refinements;

all of them are proved with a direct reduction from the decision version of the corresponding optimization problem.

Lemma 2 ([5]). Independent Set Refinement, Clique Refinement and Dominating Set Refinement are NP-Complete.
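The direct reduction for Clique Refinement mentioned above can be sketched as follows; this is a minimal illustration (a graph is represented as a set of vertices plus a set of two-element frozensets), not the exact construction used in [5].

```python
from itertools import combinations

def clique_decision_to_refinement(vertices, edges, k):
    """Map a Clique decision instance (G, k) to a Clique Refinement instance by
    adding a disconnected clique on k - 1 fresh vertices and handing it over as
    the imposed solution: the new graph has a clique of size at least k iff G
    does, since the disconnected gadget cannot be combined with vertices of G."""
    gadget = [("gadget", i) for i in range(k - 1)]
    new_vertices = set(vertices) | set(gadget)
    new_edges = set(edges) | {frozenset(p) for p in combinations(gadget, 2)}
    imposed_solution = set(gadget)      # a clique of size k - 1 in the new graph
    return new_vertices, new_edges, imposed_solution

# Example: a triangle with k = 3; the refinement question "is there a clique
# better than the imposed one of size 2?" matches the original decision question.
V = {1, 2, 3}
E = {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})}
V2, E2, S = clique_decision_to_refinement(V, E, 3)
assert len(S) == 2 and len(V2) == 5
```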

3.2 Direct results

At this point we provide results that follow directly from Theorem 2. This will give deeper insight into the usability of bound-independent polynomial transformations and the proposed framework.

First of all, let us consider the Vertex Cover problem. The well-known reduction from Independent Set states that for any graph G = (V, E) and subset V′ ⊆ V, V′ is an independent set for G if and only if V − V′ is a vertex cover [11]. In this polynomial transformation the graph does not depend on the size of the independent set (measure bound) and thus the reduction falls into the framework under discussion. Note that in this case we use the negative of k = |V′|, since the reduction is from a maximization to a minimization problem (k′ = −k + |V|).

Corollary 2. Vertex Cover Refinement is NP-Complete.

One of the refinements that cannot be trivially reduced from its optimization counterpart is Steiner Tree in graphs; if one forces a solution of controlled size for this problem, there is no effortless guarantee that a better solution would not use parts of the imposed one. The statement of the derived decision problem is as follows:

Steiner Tree in graphs [11]
Instance: Graph G = (V, E), a nonnegative integer weight w(e) for each e ∈ E, a subset R ⊆ V, and a positive integer bound B.
Question: Is there a subtree of G that includes all the vertices in R and such that the sum of the weights of the edges in the subtree is no more than B?

To show that the refinement of the aforementioned problem is NP-Complete, a bound-independent polynomial transformation from Vertex Cover will be demonstrated. This transformation was first described in [2] and more exhaustively in [19]; here it is presented as an example that we follow carefully, in order to conclude that it falls into our framework. Given an instance Ivc = {G = (V, E), k} of Vertex Cover, we obtain an instance Ist = {G′ = (V′, E′), R, w, B} of Steiner Tree using the following procedure (a code sketch of the construction follows the list):

1. For any e ∈ E there is a vertex re ∈ R ⊆ V′, and so |R| = |E|
2. For any re, re′ ∈ R there is an edge er ∈ E′ that connects them, with w(er) = 2
3. For any v ∈ V there is a vertex sv ∈ S, where S = V′ − R, and so |S| = |V|
4. For any sv, sv′ ∈ S there is an edge es ∈ E′ that connects them, with w(es) = 1
5. For any re ∈ R and sv ∈ S, there is an edge ers ∈ E′ that connects them, with w(ers) = 1 when e is incident to v in G, and w(ers) = 2 otherwise
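The following is a minimal sketch of the structural transformation t1 above (hypothetical representation: a weighted graph as a dictionary from unordered vertex pairs to weights); it follows the five steps literally and is not taken from [2] or [19].

```python
from itertools import combinations

def vc_to_steiner(V, E):
    """t1 for Vertex Cover -> Steiner Tree: build (V', weights, R) following
    steps 1-5 above. V is a set of vertices and E a set of frozenset edges of G."""
    R = {("r", e) for e in E}              # step 1: one terminal per edge of G
    S = {("s", v) for v in V}              # step 3: one non-terminal per vertex of G
    weights = {}
    for a, b in combinations(R, 2):        # step 2: R-R edges of weight 2
        weights[frozenset({a, b})] = 2
    for a, b in combinations(S, 2):        # step 4: S-S edges of weight 1
        weights[frozenset({a, b})] = 1
    for (_tr, e) in R:                     # step 5: R-S edges, weight 1 iff e is incident to v
        for (_ts, v) in S:
            weights[frozenset({("r", e), ("s", v)})] = 1 if v in e else 2
    V_prime = R | S
    return V_prime, weights, R

# Example: a path on three vertices. The transformation of G is independent of
# the bound k; the decision question would use B = |E| + k - 1.
V = {1, 2, 3}
E = {frozenset({1, 2}), frozenset({2, 3})}
V_prime, weights, R = vc_to_steiner(V, E)
assert len(V_prime) == len(V) + len(E)
```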

The claim is that G has a Vertex Cover of size k if and only if G′ has a Steiner Tree of size B = |R| + k − 1 = |E| + k − 1. The full proof of the claim can be found in [19]; here the details are omitted for the sake of brevity and for being somewhat irrelevant to the rest of the content. As a matter of fact, one can observe that the proof of such a claim provides a procedure to convert a Vertex Cover of G to a Steiner Tree of G′ and vice versa (transforming solutions of one problem to the other, functions t2 and t3). Therefore a sketch of the proof might be fruitful for a better understanding of the separate parts of the framework.

Suppose we are given a Vertex Cover of size k for G. Consider the subgraph of G′ that contains only the vertices sv that correspond to the ones in the Vertex Cover, all re ∈ R, and only the edges of cost 1. This subgraph has |R| + k = |E| + k vertices, is connected, and any spanning tree of it will be a Steiner Tree for G′ of cost |E| + k − 1. Conversely, suppose we are given a Steiner Tree of size |E| + k − 1 for G′. The tree may contain edges of weight 2; in [19] a procedure is described that turns it into a Steiner Tree consisting only of unit edges, without disconnecting it or increasing its cost. Then the tree spans |R| + k nodes, including the required set R and k nodes in S. Any node re is connected with a unit edge, which means that the vertex sv corresponding to one of the two endpoints of e is in the tree. Therefore the vertices in V that correspond to the k vertices of S that belong to the tree cover all the edges of G.

Note that in the above, transforming G to G′ is independent of the size of the vertex cover (k). More specifically, function t1 is given in the numbered part above, describing how to transform a given instance, function g in the claim (given x ∈ Ivc, g(x) = |E| − 1)², and functions t2, t3 in the sketch of the claim's proof. Therefore, the polynomial transformation described is bound-independent and Steiner Tree in graphs falls into the framework.

Corollary 3. Steiner Tree Refinement is NP-Complete.

In some cases a described transformation can be bound-independent without this being obvious. For example, consider the reductions from Karp's seminal article on reducibility among combinatorial problems [15], more specifically the reductions from Vertex Cover to Feedback Vertex Set and to Feedback Arc Set. These transformations are bound-independent; both transform the graph independently of the measure bound and function g returns zero (k′ = k + 0). Functions t2, t3 can be easily derived from the description of these transformations. The corollary follows.

Corollary 4. Feedback Vertex Set Refinement and Feedback Arc Set Refinement are NP-Complete.

Furthermore, consider the reduction from Vertex Cover to the Set Covering problem from the same article. By the same reasoning as above we obtain a similar result.

² In extension, k′ = k + |E| − 1 when the transformation is interpreted from the decisional viewpoint.

This of course holds as well for the Set Packing problem, for which a reduction from the Clique problem is described. Therefore, both problems fall into our framework, yielding the first NP-Completeness results for refinements of non-graph problems.

Corollary 5. Set Covering Refinement and Set Packing Refinement are NP-Complete.
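As an illustration of why the Vertex Cover to Set Covering reduction is bound-independent, the following minimal sketch (hypothetical names) builds the set system of the standard construction without ever looking at the bound; here g returns zero, so k′ = k.

```python
def vc_to_set_covering(V, E):
    """Structural transformation t1: the universe is the edge set of G and there
    is one candidate set per vertex, containing the edges incident to it.
    A vertex cover of size k in G corresponds to a set cover of size k (k' = k)."""
    universe = set(E)
    family = {v: {e for e in E if v in e} for v in V}
    return universe, family

# Example: a path 1-2-3. The cover {2} of G corresponds to the single set family[2].
V = {1, 2, 3}
E = {frozenset({1, 2}), frozenset({2, 3})}
universe, family = vc_to_set_covering(V, E)
assert family[2] == universe
```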

4 Applications

We have one main application for the refinement of an optimization problem, which comes from the field of parameterized complexity theory. For relevant background the reader is referred to [9, 17]. Research that has dealt with lower bounding kernelization algorithms includes [5] and its extensions in [7] and [8]. In [5], Bodlaender et al. proposed a framework to prove the non-existence of polynomial kernels for parameterized problems, based on composition algorithms (under complexity theoretic assumptions).

Definition 6 (Composition [5]). A composition algorithm for a parameterized problem L ⊆ Σ* × N is an algorithm that takes as input a sequence ((x1, k), . . . , (xt, k)), with (xi, k) ∈ Σ* × N+ for each 1 ≤ i ≤ t, uses time polynomial in Σ^t_{i=1} |xi| + k, and outputs (y, k′) ∈ Σ* × N+ with:
– (y, k′) ∈ L ⇐⇒ (xi, k) ∈ L for some 1 ≤ i ≤ t
– k′ is polynomial in k

It is demonstrated in [5] that two conditions are enough to show that a natural FPT graph problem parameterized by its treewidth fits in their framework:
1. The refinement variant of the problem is compositional
2. The unparameterized version of the refinement variant is NP-Complete
where a compositional problem is one that admits a composition algorithm. This technique can also be used with many other structural parameters such as maximum degree, cliquewidth, pathwidth, et cetera. In this article we strengthen the implications of [5]: many graph-theoretic problems parameterized by the treewidth of their input graph can be proved not to admit polynomial kernels, unless PH = Σ^p_3 (the polynomial hierarchy collapses to the third level) [5, 10]. Many graph optimization problems can be shown to belong to the class FPT when parameterized by their treewidth. This category includes the optimization versions of the problems of Corollaries 2 and 3 [12]. Consequently, it is possible to fit the results of the previous section into the lower-bounding kernels framework. For a recent survey on fixed parameter tractable algorithms for graphs of bounded treewidth, see [6].

Note that the unparameterized version of the refinement variant, as mentioned in point 2 above, includes a tree decomposition in its input³. For example, the unparameterized version of w-Steiner Tree Refinement, where “w-” denotes the classical problem parameterized by the treewidth of the given graph, is Steiner Tree Refinement with Treewidth. Once NP-Completeness is known for a refinement, the same result when the input is augmented by a tree decomposition is trivially obtained by techniques such as appending a single bag containing all the vertices, or using the result of a polynomial time approximation algorithm for treewidth (for instance, see [4]).

Lemma 3. w-Steiner Tree Refinement, w-Feedback Vertex Set Refinement and w-Feedback Arc Set Refinement are compositional.

Proof. To prove the first part of the lemma, suppose we are given m instances {G1, T1, R1, w1, S1}, . . . , {Gm, Tm, Rm, wm, Sm} of w-Steiner Tree Refinement. Consider the algorithm which maps this set of instances to {G, T, R, w, S}, with G the disjoint union ∪_{i=1}^{m} Gi together with an added central vertex, namely c, connected to one arbitrary vertex vi ∈ Ri, for all 1 ≤ i ≤ m. Then T is the tree decomposition obtained from the disjoint union ∪_{i=1}^{m} Ti by adding, for each instance, one bag that contains c and the vertex vi ∈ Ri connected to c, and connecting all these bags to a central bag that consists only of c and is the root of T. Further, R = ∪_{i=1}^{m} Ri, w = ∪_{i=1}^{m} wi and S = ∪_{i=1}^{m} Si. Note that c does not need to be added to the required vertices; it will be used anyway, since it is the only point that connects the individual Steiner Trees. Observe that G has a Steiner Tree of smaller size if and only if ∃i such that Gi has a Steiner Tree of smaller size.

To prove the rest of the lemma, one can trivially observe that, given m instances of either of the parameterized feedback set refinement problems, taking the disjoint union of the graphs and the input solutions is enough to see that the large graph has a solution of smaller size if and only if there exists a solution of smaller size for at least one of the original graphs. ⊓⊔

Lemma 3, combined with the NP-Completeness results of Section 3.2 for all three problems discussed above, gives rise to the following theorem. Related to this, in [5] the non-existence of polynomial kernels is shown for the problems w-Independent Set (and by extension w-Vertex Cover, w-Clique) and w-Dominating Set.

Theorem 3. Unless PH = Σ^p_3, none of the following FPT problems have polynomial kernels: w-Steiner Tree, w-Feedback Vertex Set and w-Feedback Arc Set.

³ We include the tree decomposition (and not just the treewidth) as part of the input, since it is NP-Hard to verify that a given input actually contains a correct value for the treewidth.
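A minimal sketch of the composition construction from the proof of Lemma 3 is given below. The instance representation, the weight chosen for the connecting edges, and the inclusion of those edges in the given solution are assumptions made for this sketch (the proof does not fix them); each tree decomposition is assumed to be rooted at a bag containing the chosen terminal (a decomposition can always be re-rooted) and the instances are assumed to have pairwise disjoint vertex sets, so the disjoint union is a plain union.

```python
# Sketch of the composition for w-Steiner Tree Refinement. An instance is a dict
# with weighted 'edges' (frozenset pair -> weight), 'terminals', an imposed
# 'solution' (a set of edges) and a tree 'decomposition' as a (bag, children) pair.

CONNECTOR_WEIGHT = 1   # any fixed weight for the new edges works for the argument

def compose(instances, c="c"):
    edges, terminals, solution, child_bags = {}, set(), set(), []
    for inst in instances:
        edges.update(inst["edges"])
        terminals |= inst["terminals"]
        solution |= inst["solution"]
        v_i = next(iter(inst["terminals"]))        # one arbitrary vertex of R_i
        connector = frozenset({c, v_i})
        edges[connector] = CONNECTOR_WEIGHT        # connect v_i to the central vertex c
        solution.add(connector)                    # the imposed solution is joined through c
        child_bags.append((frozenset({c, v_i}), [inst["decomposition"]]))
    decomposition = (frozenset({c}), child_bags)   # new root bag containing only c
    return {"edges": edges, "terminals": terminals,
            "solution": solution, "decomposition": decomposition}

# The new bags have size at most 2, so the width of the result is the maximum of
# the input widths and 1; the composed graph has a smaller Steiner tree iff some
# component graph does, as argued in the proof of Lemma 3.
```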

5 Conclusions

The notion of refinement seems to be worth further investigation. Firstly, it would be beneficial to have more NP-Completeness results for the refinement versions of well-known NP-Complete problems. In addition, it should be remarked that obtaining Turing reductions is often easier: suppose we had an oracle deciding in one computational step whether a better solution exists; then, iteratively and with polynomially many calls⁴ to that oracle, one would possibly be able to construct an optimal solution for the original problem (a sketch of this idea is given at the end of this section). Problems that admit such reductions to their refinements can be of independent interest. Designing an algorithm able to solve an NP-Hard problem with a polynomially bounded number of calls to an oracle that answers its refinement might provide deeper insight into the complexity of such a problem.

Furthermore, the intuition about refinements derived from NP-Complete optimization problems suggests that they should be NP-Complete. At first glance, it seems obvious that any refinement should be hard, since a trivial (worst case) solution does not really help with the search for a sufficiently good one. However, this claim is not as easy to prove as one might think. Thus, there are some refinement-specific open questions that originate from this work and might be appealing for future research. To conclude this article, the most substantial ones are mentioned:
– The framework suggested here is sufficient to include many known transformations, yet it is rather restricted from a theoretical point of view. Hence, the question arises whether there exists a uniform (or more general) way to prove NP-Completeness for all refinements.
– Is it possible to enlarge the collection of problems that fall into our framework? This could be done by generalizing the bound-independent transformation, or perhaps by showing that a specific set of problems falls into the framework as a whole. The set of problems that admit polynomial time approximation algorithms [1] could be the first candidate for examination (due to the fact that one can exploit this property when making a reduction from the optimization problem to the refinement).
– One can expect that there is a connection between refinements and the method of iterative compression. If one were to prove fixed parameter intractability for a minimization refinement, the same would hold for the corresponding compression routine. Would it be useful to prove fixed parameter intractability for the original problem in that way?

Acknowledgments. The author would like to thank Hans Bodlaender for his guidance, comments and reviews on preliminary versions of this document. In addition, thanks go to an anonymous reviewer for pointers to the literature.

⁴ This could also be interpreted as a polynomially bounded number of possible measures.
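Returning to the oracle remark above, the following is a minimal sketch of the iterative-improvement idea, under the assumption that we are given a constructive routine (in the spirit of a compression routine) that returns a strictly better solution or reports that none exists; since measures of NPO problems are polynomially bounded, the loop terminates after polynomially many calls.

```python
def optimize_by_refinement(instance, initial_solution, improve):
    """Construct an optimal solution by repeatedly asking for a better one.

    `improve(instance, solution)` is assumed to return a strictly better
    solution, or None if the given solution is already optimal. For an NPO
    problem the measure is bounded by a polynomial q(|x|), so at most
    polynomially many improvement steps (oracle calls) are performed."""
    current = initial_solution
    while True:
        better = improve(instance, current)
        if better is None:
            return current
        current = better

# Toy usage: "solutions" are integers and smaller is better (a minimization problem).
best = optimize_by_refinement(None, 10, lambda _x, s: s - 1 if s > 0 else None)
assert best == 0
```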

References

1. Ausiello, G., Crescenzi, P., Gambosi, G., Kann, V., Marchetti-Spaccamela, A., Protasi, M.: Complexity and Approximation: Combinatorial Optimization Problems and their Approximability Properties. Springer Verlag (1999)
2. Bern, M., Plassmann, P.: The Steiner problem with edge lengths 1 and 2. Information Processing Letters 32(4), 171–176 (1989)
3. Böckenhauer, H.J., Hromkovic, J., Mömke, T., Widmayer, P.: On the hardness of reoptimization. In: Geffert, V., Karhumäki, J., Bertoni, A., Preneel, B., Návrat, P., Bieliková, M. (eds.) SOFSEM. Lecture Notes in Computer Science, vol. 4910, pp. 50–65. Springer (2008)
4. Bodlaender, H.L.: Discovering treewidth. In: Vojtáš, P., Bieliková, M., Charron-Bost, B., Sýkora, O. (eds.) SOFSEM. Lecture Notes in Computer Science, vol. 3381, pp. 1–16. Springer (2005)
5. Bodlaender, H.L., Downey, R.G., Fellows, M.R., Hermelin, D.: On problems without polynomial kernels. Journal of Computer and System Sciences 75(8), 423–434 (2009)
6. Bodlaender, H.L., Koster, A.M.C.A.: Combinatorial optimization on graphs of bounded treewidth. The Computer Journal, pp. 255–269 (2007)
7. Bodlaender, H.L., Thomassé, S., Yeo, A.: Kernel bounds for disjoint cycles and disjoint paths. In: Fiat, A., Sanders, P. (eds.) ESA. Lecture Notes in Computer Science, vol. 5757, pp. 635–646. Springer (2009)
8. Dom, M., Lokshtanov, D., Saurabh, S.: Incompressibility through colors and ids. In: Albers, S., Marchetti-Spaccamela, A., Matias, Y., Nikoletseas, S.E., Thomas, W. (eds.) ICALP. Lecture Notes in Computer Science, vol. 5555, pp. 378–389. Springer (2009)
9. Downey, R.G., Fellows, M.R.: Parameterized Complexity. Springer, New York (1999)
10. Fortnow, L., Santhanam, R.: Infeasibility of instance compression and succinct PCPs for NP. In: Dwork, C. (ed.) STOC. pp. 133–142. ACM (2008)
11. Garey, M.R., Johnson, D.S.: Computers and Intractability: A Guide to the Theory of NP-completeness. W.H. Freeman, San Francisco (1979)
12. Guo, J., Gramm, J., Hüffner, F., Niedermeier, R., Wernicke, S.: Improved fixed-parameter algorithms for two feedback set problems. In: Dehne, F.K.H.A., López-Ortiz, A., Sack, J.R. (eds.) WADS. Lecture Notes in Computer Science, vol. 3608, pp. 158–168. Springer (2005)
13. Guo, J., Niedermeier, R.: Invitation to data reduction and problem kernelization. ACM SIGACT News 38(1), 45 (2007)
14. Guo, J., Moser, H., Niedermeier, R.: Iterative compression for exactly solving NP-hard minimization problems. In: Lerner, J., Wagner, D., Zweig, K.A. (eds.) Algorithmics of Large and Complex Networks. Lecture Notes in Computer Science, vol. 5515, pp. 65–80. Springer (2009)
15. Karp, R.M.: Reducibility Among Combinatorial Problems, pp. 85–103. Complexity of Computer Computations, Plenum Press (1972)
16. Krentel, M.W.: The complexity of optimization problems. J. Comput. Syst. Sci. 36(3), 490–509 (1988)
17. Niedermeier, R.: Invitation to Fixed-Parameter Algorithms. Oxford University Press, USA (2006)
18. Reed, B.A., Smith, K., Vetta, A.: Finding odd cycle transversals. Oper. Res. Lett. 32(4), 299–301 (2004)
19. Trevisan, L.: Inapproximability of combinatorial optimization problems. In: Electronic Colloquium on Computational Complexity (2004)
