On Approximation Resistance of Predicates

Subhash Khot, New York University

Joint work with Madhur Tulsiani and Pratik Worah

Overview of the Talk

- Introduction and Survey
- Basics: Algorithmic Side
- Basics: Hardness Side
- Characterization of Approximation Resistance
- Conclusion

Constraint Satisfaction Problems (CSP)

- Boolean ({−1, 1}-valued) variables and constraints.
- All constraints of same type, variable negation allowed.

3-SAT:
x1 ∨ x̄22 ∨ x̄19
x3 ∨ x̄9 ∨ x23
x5 ∨ x̄7 ∨ x̄9
...

CUT (on a graph with vertices x1, . . . , x7):
x1 ≠ x2
x2 ≠ x5
x3 ≠ x4
...

Constraint Satisfaction Problems (CSP)

3-XOR: Linear equations modulo 2 (in ±1 variables). Negating variables makes every right-hand side 1:

x5 · x9 · x16 = 1          x5 · x9 · x16 = 1
x6 · x12 · x22 = −1   −→   x6 · x̄12 · x22 = 1
x7 · x8 · x15 = −1         x7 · x8 · x̄15 = 1
...                        ...

CSP(f): Given a predicate f : {−1, 1}^k → {0, 1}, each constraint is f applied to some k (possibly negated) variables:

    C ≡ f(xi1 , x̄i2 , . . . , x̄ik−1 , xik).

k is a fixed constant; the number of variables and constraints → ∞.

Satisfiability vs Approximability

- [Schaefer 78], Dichotomy Theorem: For every predicate f : {−1, 1}^k → {0, 1}, deciding the satisfiability of CSP(f) is either in P or is NP-hard.
- Approximability: given a (1 − o(1))-satisfiable instance, how good an assignment can one find efficiently?
- Let ρ(f) = |f^{−1}(1)| / 2^k = E_x[f(x)].
- ρ(3-SAT) = 7/8,  ρ(3-XOR) = 1/2.
- Observation: a random assignment to CSP(f) satisfies a ρ(f) fraction of the constraints in expectation.
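The observation is easy to check empirically. A minimal sketch in Python (the small instance, the ±1 encoding, and the helper names are my own illustrations, not from the talk):

```python
import random
from itertools import product

def f_3sat(z):
    """OR of three literals over {-1, 1}, reading +1 as 'true'."""
    return 0 if z == (-1, -1, -1) else 1

def rho(f, k):
    """rho(f) = |f^-1(1)| / 2^k, the density of satisfying assignments."""
    return sum(f(z) for z in product((-1, 1), repeat=k)) / 2 ** k

def random_assignment_value(f, constraints, n, trials=2000, seed=0):
    """Average fraction of constraints satisfied by uniformly random assignments.
    Each constraint is a list of (variable index, sign) pairs; the sign
    encodes negation of that variable."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        total += sum(f(tuple(s * x[i] for i, s in c))
                     for c in constraints) / len(constraints)
    return total / trials

# A toy 3-SAT instance on 6 variables.
cons = [[(0, 1), (1, -1), (2, 1)],
        [(2, 1), (3, -1), (4, 1)],
        [(1, 1), (4, -1), (5, -1)]]
print(rho(f_3sat, 3))                            # 0.875
print(random_assignment_value(f_3sat, cons, 6))  # close to 0.875
```

The empirical value concentrates around ρ(3-SAT) = 7/8, as the observation predicts.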

Approximable and Approximation Resistant Predicates

- f : {−1, 1}^k → {0, 1},  ρ(f) = |f^{−1}(1)| / 2^k.
- Definition: f is called approximable if for some ε > 0, there is an efficient algorithm that, given a (1 − ε)-satisfiable instance of CSP(f), outputs a (ρ(f) + ε)-satisfying assignment.
- Otherwise f is called approximation resistant (A.R.).
- [GW 92]: 2-SAT and 2-XOR are approximable.
- [Håstad 97]: For k ≥ 3, k-SAT and k-XOR are A.R.

Approximation Resistance

- To show that f is approximation resistant, one shows that for every ε > 0, it is NP-hard (UG-hard) to distinguish whether a CSP(f) instance is (1 − ε)-satisfiable or at most (ρ(f) + ε)-satisfiable.
- Reduction from a hard problem to the promise gap problem: YES instances are ≥ 1 − ε satisfiable, NO instances are ≤ ρ(f) + ε satisfiable.
- Captures the notion that it is hard to do better than a random assignment.

Approximation Resistance: Known Results

- [Håstad 97]: k-SAT and k-XOR are A.R. (k ≥ 3).
  - Best hardness for 2-SAT and MAX-CUT.
- [ST 00, EH 05]: Graph Linearity Test predicate is A.R.
  - Alternate proof of [Håstad 96]: Clique on n-vertex graphs is hard to approximate within n^{1−ε}.
- [GHS 00]: Not-All-Equal(x1, x2, x3, x4) is A.R.
  - Hard to color 2-colorable 4-uniform hypergraphs with O(1) colors.

Approximation Resistance: Known Results

[ST 06*, Chan 12]: Hypergraph Linearity Test predicate is A.R.

f : {−1, 1}^{q + (2^q − q − 1)} → {0, 1}. (x1, . . . , xq, {yS}_{S⊆[q], |S|≥2}) is a satisfying assignment iff

    yS = ∏_{i∈S} xi    ∀ S ⊆ [q], |S| ≥ 2.

- k-ary predicate with k + 1 satisfying assignments.
- Optimal hardness for Max-k-CSP.
- Hard: independent set on degree-d graphs within O(d / log^4 d).
- Hard to color k-colorable graphs with 2^{Ω(k^{1/3})} colors [H 13].
- Õ(1/√k) soundness for Label Cover with k labels.

Towards Characterization of Approximation Resistance

A characterization is a necessary and sufficient condition for a predicate f : {−1, 1}^k → {0, 1} to be A.R.

- [Håstad 97, Zwick 98]: A predicate on 3 variables is A.R. iff it is implied by 3-XOR (up to variable negations).
- [Hast 05]: Out of 400 predicates on 4 variables, classified 79 as approximation resistant and 275 as not so.
- [Chan 12]: Sufficient condition: f^{−1}(1) is a non-degenerate subgroup of {−1, 1}^k (every pair of coordinates is independent).
- [AM 09*]: Sufficient condition: there exists a balanced and pairwise independent distribution on {−1, 1}^k supported on f^{−1}(1).
- [AK 13*]: Characterization when f is even and the instance is k-partite.
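The [AM 09*] condition can be verified mechanically for small predicates. A sketch for 3-XOR (code and encoding are my own illustration): the uniform distribution on its satisfying assignments is balanced and pairwise independent, so 3-XOR is A.R. by this criterion.

```python
from itertools import product

def f_3xor(z):
    """3-XOR predicate over {-1, 1}^3: satisfied iff z1 * z2 * z3 = 1."""
    return z[0] * z[1] * z[2] == 1

# mu: uniform distribution over f^-1(1), four points in {-1, 1}^3.
support = [z for z in product((-1, 1), repeat=3) if f_3xor(z)]

# Balanced: every coordinate has mean 0 under mu.
first_moments = [sum(z[i] for z in support) / len(support) for i in range(3)]

# For balanced +/-1 coordinates, zero correlation of every pair is
# exactly pairwise independence.
second_moments = [sum(z[i] * z[j] for z in support) / len(support)
                  for i in range(3) for j in range(i + 1, 3)]

print(first_moments, second_moments)   # [0.0, 0.0, 0.0] [0.0, 0.0, 0.0]
```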

Overview of the Talk

- Introduction and Survey
- Basics: Algorithmic Side
- Basics: Hardness Side
- Characterization of Approximation Resistance
- Conclusion

Goemans-Williamson's SDP Relaxation for MAX-CUT

MAX-CUT: Find a cut in the graph G([n], E) that maximizes the fraction of edges cut:

    max_{x1,...,xn ∈ {−1,1}}  (1/|E|) ∑_{(i,j)∈E} (1 − xi · xj)/2.

SDP relaxation:

    max_{v1,...,vn}  (1/|E|) ∑_{(i,j)∈E} (1 − vi · vj)/2
    subject to  ‖vi‖ = 1  ∀ i ∈ [n].

SDP Solution for (Near-)Satisfiable CSP(f) Instance

Given: a CSP(f) instance that is (near-)satisfiable, with variables x1, . . . , xn and constraints C1, . . . , Cm.

Notation:
- µ: distribution supported on f^{−1}(1) ⊆ {−1, 1}^k.
- µij = E_{z∼µ}[zi · zj] are the (k choose 2) second moments of µ.

SDP Solution for (Near-)Satisfiable CSP(f) Instance

µ: distribution supported on f^{−1}(1);  µij = E_{z∼µ}[zi · zj].

SDP solution:
- Variables have associated unit vectors v1, . . . , vn.
- Every constraint has an associated local distribution: C → µ.
- Consistency: ∀ i, j ∈ C,  vi · vj = µij.

Rounding SDP Solution for CSP(f) Instance

Rounding:
- Randomly project v1, . . . , vn onto R^d:  vi → yi ∈ R^d.
- Select a function ψ : R^d → {−1, 1}.
- Round: xi → vi → yi → ψ(yi).

[Rag 08*]: For CSPs, this is the most general algorithm!
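For intuition, here is a minimal sketch of the projection-and-round scheme with d = 1 and ψ = sign, which is exactly Goemans-Williamson hyperplane rounding; the two-vector instance is an illustrative toy, not from the talk.

```python
import random

def hyperplane_round(vectors, rng):
    """Project each SDP vector onto one random Gaussian direction (d = 1)
    and round with psi = sign."""
    dim = len(vectors[0])
    g = [rng.gauss(0.0, 1.0) for _ in range(dim)]   # random direction
    return [1 if sum(gi * vi for gi, vi in zip(g, v)) >= 0 else -1
            for v in vectors]

def cut_fraction(edges, assignment):
    return sum(assignment[i] != assignment[j] for i, j in edges) / len(edges)

rng = random.Random(1)
# Antipodal unit vectors: the SDP solution of a single-edge MAX-CUT instance.
vectors = [(1.0, 0.0), (-1.0, 0.0)]
edges = [(0, 1)]
vals = [cut_fraction(edges, hyperplane_round(vectors, rng)) for _ in range(100)]
print(sum(vals) / len(vals))   # 1.0: a random hyperplane always separates them
```

For general CSP(f), the result [Rag 08*] quoted above says one may take richer ψ : R^d → {−1, 1} than sign, but nothing more general is needed.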

Overview of the Talk

- Introduction and Survey
- Basics: Algorithmic Side
- Basics: Hardness Side
- Characterization of Approximation Resistance
- Conclusion

Proving Hardness Results

- Reduction from a known hard problem (Label Cover) or a conjectured hard problem (Unique Game).
- Probabilistically Checkable Proofs (PCPs).
- Constructing "Gadgets" ≡ "Dictatorship Test".
- Given a boolean function, test whether it is a dictatorship function.
- Definition: G : {−1, 1}^N → {−1, 1} is a dictatorship function if G(z1, . . . , zN) = zi for some fixed 1 ≤ i ≤ N.

Dictatorship Test for CSP(f)

Fix f : {−1, 1}^k → {0, 1} and µ supported on f^{−1}(1).

To test whether G : {−1, 1}^N → {−1, 1} is a dictatorship function, pick a random k × N matrix by:
- picking every column ∼ µ independently, and
- adding small noise.

Accept iff (G(row1), G(row2), . . . , G(rowk)) ∈ f^{−1}(1).

Figure: Dictatorship Test on G given f, µ.
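The test is easy to simulate for a concrete predicate. A sketch for f = 3-XOR with µ uniform on f^{−1}(1); N, the noise rate, and the dictator coordinate are illustrative choices of mine:

```python
import random
from itertools import product

def f_3xor(z):
    return z[0] * z[1] * z[2] == 1

# mu: uniform over the satisfying assignments of 3-XOR.
SUPPORT = [z for z in product((-1, 1), repeat=3) if f_3xor(z)]

def dictatorship_test(G, N, noise, rng):
    """One round of the test: sample a k x N matrix with columns ~ mu,
    flip each entry with probability `noise`, accept iff the tuple of
    row values of G lands in f^-1(1)."""
    cols = [rng.choice(SUPPORT) for _ in range(N)]
    rows = [[c[r] * (-1 if rng.random() < noise else 1) for c in cols]
            for r in range(3)]
    return f_3xor(tuple(G(row) for row in rows))

rng = random.Random(0)
dictator = lambda z: z[4]   # G(z) = z_5, a dictatorship function
acc = sum(dictatorship_test(dictator, 20, 0.0, rng) for _ in range(500)) / 500
print(acc)   # 1.0 with zero noise: completeness
```

With zero noise, the row values of a dictator are exactly column i, which lies in f^{−1}(1) by construction, matching the completeness argument below.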

Completeness and Soundness of the Test

Completeness: If G is the i-th dictatorship, the test accepts with probability 1:

    (G(row1), G(row2), . . . , G(rowk)) = columni ∈ f^{−1}(1).

Soundness: If G is far from a dictatorship,
- The acceptance probability depends only on {µij}.
- Choose µ appropriately so that the test accepts with probability at most ρ(f) + o(1).
- It is possible to choose µ itself from a suitable distribution.

Invariance Principle [Rotar' 75, MOO 05, C 05, Mossel 07].

Overview of the Talk

- Introduction and Survey
- Basics: Algorithmic Side
- Basics: Hardness Side
- Characterization of Approximation Resistance
- Conclusion

A (sort-of) Characterization by [Rag 08] and [RS 09]

- [Rag 08*]: f is approximation resistant iff for every ε > 0 there exists a 1 − ε vs. ρ(f) + ε "integrality gap" instance.
- [RS 09]: For every ε > 0, this can be determined in time exp(exp(1/ε)) by a brute-force search.
- Gives a recursively enumerable condition.
- But what properties of f give rise to gap instances?
- Is it just properties of f, or is the topology of the instance also important? (Hint: just f.)

The Austrin-Mossel Condition in a New Language

- For a distribution µ on {−1, 1}^k, let ζ(µ) ∈ R^{k + (k choose 2)} denote the vector of first and second moments:

    ζi = E_{x∼µ}[xi],    ζij = E_{x∼µ}[xi · xj].

- Let C(f) be the convex polytope

    C(f) = { ζ(µ) | µ is supported on f^{−1}(1) }.

- [AM 09*]: f is approximation resistant if 0 ∈ C(f).
- Our condition is stated in terms of the existence of a probability measure Λ on C(f) with certain symmetry properties.

Transformations of a Measure Λ on C(f)

Each ζ ∈ C(f) can be transformed by:

- Permuting the underlying k variables by a permutation π:

    (ζπ)i = ζπ(i),    (ζπ)ij = ζπ(i)π(j).

- Multiplying each variable xi by a sign bi ∈ {−1, 1}:

    (ζb)i = bi · ζi,    (ζb)ij = bi bj · ζij.

- Projecting ζ to the coordinates corresponding to a subset S ⊆ [k].

For S ⊆ [k], π : S → S, b ∈ {−1, 1}^S, let ΛS,π,b denote the measure obtained by transforming each point in the support of Λ as above.
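The three transformations can be made concrete for a single moment vector ζ. A sketch under my own representation (first moments as a list, second moments keyed by index pairs), using the uniform distribution on the satisfying assignments of a 3-SAT clause as an example point of C(f):

```python
from itertools import product

def zeta(support):
    """Moment vector zeta(mu) for the uniform distribution mu on `support`."""
    k, n = len(support[0]), len(support)
    first = [sum(z[i] for z in support) / n for i in range(k)]
    second = {(i, j): sum(z[i] * z[j] for z in support) / n
              for i in range(k) for j in range(i + 1, k)}
    return first, second

def transform(zeta_vec, S, pi, b):
    """Project to coordinates S, permute by pi : S -> S, and flip signs b."""
    first, second = zeta_vec
    new_first = {i: b[i] * first[pi[i]] for i in S}
    new_second = {(i, j): b[i] * b[j] * second[tuple(sorted((pi[i], pi[j])))]
                  for i in S for j in S if i < j}
    return new_first, new_second

# Uniform distribution on the 7 satisfying assignments of a 3-SAT clause:
# all first moments are 1/7, all second moments are -1/7.
support = [z for z in product((-1, 1), repeat=3) if z != (-1, -1, -1)]
zf = zeta(support)
out = transform(zf, S=[0, 1], pi={0: 1, 1: 0}, b={0: -1, 1: 1})
print(out[0][0], out[1][(0, 1)])   # -1/7 and 1/7
```

Applying such a transform to every point in the support of Λ gives ΛS,π,b.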

Our Characterization

- Recall that f : {−1, 1}^k → {0, 1} can be written as

    f(x) = ∑_{S⊆[k]} f̂(S) · ∏_{i∈S} xi  =  ρ(f) + ∑_{t=1}^{k} ∑_{|S|=t} f̂(S) · ∏_{i∈S} xi.

- [KTW 13*]: f is approximation resistant if and only if there exists a probability measure Λ on C(f) such that for all t = 1, . . . , k:

    ∑_{|S|=t} ∑_{π:S→S} ∑_{b∈{−1,1}^S} f̂(S) · (∏_{i∈S} bi) · ΛS,π,b ≡ 0.

- If |S| = t, then ΛS,π,b is a measure on R^{t + (t choose 2)}. For each t, the above expression is a linear combination of such measures.

Proof Structure

- If no good Λ exists: the zero-sum game has value > 0, which yields an algorithm (f is approximable).
- If a good Λ exists: the zero-sum game has value = 0, and standard PCP ideas yield hardness (f is A.R.).

Figure: The Two-Player Game

The (Infinite) Two-Player Game

- A similar game was used by O'Donnell and Wu for Max-Cut.
- The Hardness player tries to present a "hard to round" instance. Each constraint has a local distribution µ with moments ζ(µ). She plays a probability measure Λ on C(f) (corresponding to an instance).
- The Algorithm player tries to round by first projecting to a random d-dimensional subspace. She plays a rounding strategy ψ : R^d → {−1, 1} (d = k + 1 suffices).
- PayOff = (expected fraction of constraints satisfied by ψ) − ρ(f).
- Value > 0 implies (a distribution over) rounding strategies showing that the predicate is approximable (since every instance corresponds to some Λ).

PayOff(Λ, ψ) of the Game

- A random constraint in the instance corresponds to ζ ∼ Λ.
- When the Algorithm player tries to round the SDP solution, she sees vectors with inner products according to ζ.
- Projecting gives Gaussians y1, . . . , yk with correlation matrix corresponding to ζ (y1, . . . , yk ∼ N(ζ)).
- Expected fraction of constraints satisfied:

    E_{ζ∼Λ} E_{y1,...,yk∼N(ζ)} [ f(ψ(y1), . . . , ψ(yk)) ]
      = ρ(f) + E_{ζ∼Λ} E_{y1,...,yk∼N(ζ)} [ ∑_{S≠∅} f̂(S) · ∏_{i∈S} ψ(yi) ].

- PayOff(Λ, ψ) = E_{ζ∼Λ} E_{y1,...,yk∼N(ζ)} [ ∑_{S≠∅} f̂(S) · ∏_{i∈S} ψ(yi) ].

Obtaining conditions on Λ when Value = 0
- PayOff = E_{ζ∼Λ} E_{y_1,…,y_k ∼ N(ζ)} [ Σ_{S≠∅} f̂(S) · Π_{i∈S} ψ(y_i) ].
- There exists Λ which gives PayOff ≤ 0 for all ψ.
- PayOff is a multi-linear polynomial in the variables ψ(y) for y ∈ R^d. The polynomial is upper bounded by zero for all assignments ψ.
- All coefficients must be 0 (roughly: the polynomial has no constant term, so its average over a uniformly random ±1 assignment is 0; being ≤ 0 everywhere, it must vanish identically). The coefficients are of the form

  ∫_{R^d} γ(·) ( Σ_{S,π,b} C(S, π, b) · Λ_{S,π,b} ).

- Conclude that Σ_{S,π,b} C(S, π, b) · Λ_{S,π,b} ≡ 0.
- Lots of work . . .
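The step from "PayOff ≤ 0 for all ψ" to "all coefficients are 0" rests on a basic fact about multilinear polynomials over ±1 assignments: with zero constant term, the average over all assignments is 0. A brute-force sketch with made-up coefficients checks this:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 4  # number of +/-1 variables in this toy polynomial

# Random multilinear polynomial with zero constant term:
# P(x) = sum over nonempty S of c_S * prod_{i in S} x_i  (coefficients made up).
subsets = [S for r in range(1, n + 1) for S in itertools.combinations(range(n), r)]
coeffs = {S: rng.normal() for S in subsets}

def P(x):
    return sum(c * np.prod([x[i] for i in S]) for S, c in coeffs.items())

# Each nonempty monomial sums to 0 over all 2^n assignments, so the average
# equals the constant term, here 0. A polynomial that is <= 0 everywhere
# yet averages to 0 must then be identically 0.
avg = np.mean([P(x) for x in itertools.product([-1, 1], repeat=n)])
print(avg)  # ~ 0 up to floating point
```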

Overview of the Talk

- Introduction and Survey
- Basics: Algorithmic Side
- Basics: Hardness Side
- The Characterization of Approximation Resistance
- Conclusion

Concluding Remarks
- We also:
  - Characterize approximation resistance for k-partite instances.
  - Characterize approximation resistance for the Sherali–Adams LP.
  - Can give an alternate exposition of [Rag 08∗].
  - Can extend to non-boolean predicates and a mix of predicates.
- Problem: The characterization is recursively enumerable, but is it decidable? Can Λ always be finitely supported?
- Problem: Is there a linear threshold function that is approximation resistant?
- Problem: Approximation resistance on satisfiable instances?
- Problem: Dichotomy conjecture?

Thank You Questions?
