
Remarks on Frank-Wolfe and Structural Friends

Robert M. Freund, MIT

thanks to Paul Grigas (MIT) and Rahul Mazumder (Columbia)

NIPS Workshop on Greedy/FW, December 2013


Outline of Topics

Review of Frank-Wolfe (Conditional Gradient) algorithm
Application: Low-rank Matrix Completion
Frank-Wolfe and Dual Averages algorithm
Greedy Coordinate Descent and Dual Averages algorithm
Frank-Wolfe for Stochastically-Smoothed Optimization


Renewed Interest in the Frank-Wolfe Algorithm

There is much renewed interest in the Frank-Wolfe algorithm due to:

Relevance of applications: regression, boosting/classification, image construction, matrix completion, ...
Requirements for only moderately high accuracy solutions
Necessity of simple methods for huge-scale problems
Structural implications (sparsity, low-rank) induced by the algorithm itself


Frank-Wolfe (Conditional Gradient) Algorithm

The problem of interest is:

$\mathrm{CP}: \quad f^* := \min_{x} \; f(x) \quad \text{s.t.} \quad x \in P$

$P \subset \mathbb{R}^n$ is compact and convex
$f(\cdot)$ is convex on $P$; let $x^*$ denote any optimal solution of CP
$f(\cdot)$ is differentiable on $P$
It is "easy" to do linear optimization on $P$ for any $c$: $\tilde{x} \leftarrow \arg\min_{x \in P} \{c^T x\}$


Frank-Wolfe Algorithm, continued

$\mathrm{CP}: \quad f^* := \min_{x} \; f(x) \quad \text{s.t.} \quad x \in P$

Basic Frank-Wolfe algorithm for minimizing $f(x)$ on $P$
Initialize at $x^0 \in P$, $k \leftarrow 0$, $B^0 \le f^*$. At iteration $k$:

1. Compute $\nabla f(x^k)$.
2. Compute $\tilde{x}^k \leftarrow \arg\min_{x \in P} \{\nabla f(x^k)^T x\}$.
3. Update the lower bound: $B^{k+1} \leftarrow \min\{B^k, \ f(x^k) + \nabla f(x^k)^T(\tilde{x}^k - x^k)\}$.
4. Set $x^{k+1} \leftarrow x^k + \bar{\alpha}_k(\tilde{x}^k - x^k)$, where $\bar{\alpha}_k \in [0, 1]$.
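To make the loop concrete, here is a minimal Python sketch of the basic Frank-Wolfe algorithm above. The objective `f`, gradient oracle `grad_f`, and linear-optimization oracle `lmo` are assumptions supplied by the caller, and the step-size shown is the 2/(k+2) rule from the next slide; this is an illustrative sketch, not the author's code.

```python
import numpy as np

def frank_wolfe(f, grad_f, lmo, x0, iters=100):
    """Basic Frank-Wolfe loop (a sketch).

    f      : objective, maps x -> float
    grad_f : gradient oracle, maps x -> vector
    lmo    : linear-optimization oracle, maps c -> argmin_{x in P} c^T x
    x0     : starting point in P
    """
    x = x0.copy()
    B = -np.inf                       # lower bound on f*
    for k in range(iters):
        g = grad_f(x)                 # step 1: gradient
        x_tilde = lmo(g)              # step 2: linear optimization over P
        gap = g @ (x - x_tilde)       # so f(x) + g^T(x_tilde - x) = f(x) - gap
        B = max(B, f(x) - gap)        # step 3: update lower bound
        alpha = 2.0 / (k + 2.0)       # step 4: "recent standard" step-size
        x = x + alpha * (x_tilde - x)
    return x, B
```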


Some Step-size Rules/Strategies

"Recent standard": $\bar{\alpha}_k = \frac{2}{k+2}$
Exact line-search: $\bar{\alpha}_k = \arg\min_{\alpha \in [0,1]} \{f(x^k + \alpha(\tilde{x}^k - x^k))\}$
Simple averaging: $\bar{\alpha}_k = \frac{1}{k+1}$
Constant step-size: $\bar{\alpha}_k = \bar{\alpha}$ for some given $\bar{\alpha} \in [0, 1]$
QA (quadratic approximation) step-size: $\bar{\alpha}_k = \min\left\{1, \ \dfrac{-\nabla f(x^k)^T(\tilde{x}^k - x^k)}{L\|\tilde{x}^k - x^k\|^2}\right\}$ (see the sketch below)
Dynamic strategy: determined by some history of optimality bounds, see Grigas
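As one illustration, the QA step-size is a one-liner once an estimate of the Lipschitz constant is available (a sketch; `L` is assumed known or estimated by the caller):

```python
def qa_step(g, x, x_tilde, L):
    """QA step-size: exact minimizer of the quadratic upper model on [0, 1].
    g is the gradient at x; x_tilde is the Frank-Wolfe vertex."""
    d = x_tilde - x
    return min(1.0, -(g @ d) / (L * (d @ d)))
```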


Simple Computational Guarantee for Frank-Wolfe Algorithm

Here is a simple computational guarantee:

A Computational Guarantee for the Frank-Wolfe Algorithm
If the step-size sequence $\{\bar{\alpha}_k\}$ is chosen as $\bar{\alpha}_k = \frac{2}{k+2}$, $k \ge 0$, then for all $k \ge 1$ it holds that:

$f(x^k) - f^* \le \dfrac{2C}{k+4}$

where $C = L \cdot \mathrm{Diam}(P)^2$.

A similar guarantee also holds when step-sizes are determined by line-search, QA, or the dynamic strategy.


Lipschitz Gradient, Diameter

Let $\|\cdot\|$ be a prescribed norm on $\mathbb{R}^n$
The dual norm is $\|s\|_* := \max_{\|x\| \le 1} \{s^T x\}$
$B(x, \rho) := \{y : \|y - x\| \le \rho\}$
$\mathrm{Diam}(P) := \max_{x, y \in P} \{\|x - y\|\}$
Let $L$ be the Lipschitz constant of $\nabla f(\cdot)$ on $P$: $\|\nabla f(x) - \nabla f(y)\|_* \le L\|x - y\|$ for all $x, y \in P$



Norm and Affine Invariance and Metrics

The Frank-Wolfe algorithm is invariant under a change of norm and/or an invertible affine transformation.

Let $C^l$ be the smallest nonnegative scalar for which:

$f(x + \alpha(y - x)) \le f(x) + \nabla f(x)^T(\alpha(y - x)) + \frac{C^l}{2}\alpha^2$ for all $x, y \in P$, $\alpha \in [0, 1]$

Then $C^l \le L \cdot \mathrm{Diam}(P)^2$.
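The bound on $C^l$ follows from the standard smoothness inequality applied along the segment from $x$ toward $y$; spelled out in LaTeX:

```latex
% Why C^l <= L * Diam(P)^2: apply L-smoothness to the step alpha*(y - x)
% and bound ||y - x|| by Diam(P).
\[
f(x+\alpha(y-x)) \;\le\; f(x) + \nabla f(x)^T(\alpha(y-x)) + \frac{L}{2}\,\alpha^2\|y-x\|^2
\;\le\; f(x) + \nabla f(x)^T(\alpha(y-x)) + \frac{L\,\mathrm{Diam}(P)^2}{2}\,\alpha^2 .
\]
```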


Norm/Affine Invariant Metrics, continued

Symmetry of $x$ in $P$: $\mathrm{sym}(x, P) := \max\{\beta \ge 0 : y \in P \Rightarrow x - \beta(y - x) \in P\}$

If $x \in P$, then $\mathrm{sym}(x, P) \ge \dfrac{\mathrm{dist}(x, \partial P)}{\mathrm{Diam}(P)}$


A Linear Convergence Result

$f(\cdot)$ is $u$-strongly convex on $P$ if there exists $u > 0$ for which:

$f(y) \ge f(x) + \nabla f(x)^T(y - x) + \frac{u}{2}\|y - x\|^2$ for all $x, y \in P$

Sublinear and Linear Convergence under Interior Solutions and Strong Convexity ~[W, GM]
Suppose the step-size sequence $\{\bar{\alpha}_k\}$ is chosen using the QA rule or by line-search. Then for all $k \ge 1$ it holds that:

$f(x^k) - f^* \le \min\left\{ \dfrac{2L\,\mathrm{Diam}(P)^2}{k}, \;\; (f(x^0) - f^*)\left(1 - \dfrac{u}{L}\cdot\dfrac{\rho^2}{\mathrm{Diam}(P)^2}\right)^{k} \right\}$

where $\rho = \mathrm{dist}(x^*, \partial P)$.


Norm-Invariant Version of Linear Convergence

Let $C^u$ be the largest nonnegative scalar for which:

$f(x + \alpha(y - x)) \ge f(x) + \nabla f(x)^T(\alpha(y - x)) + \frac{C^u}{2}\alpha^2$ for all $x, y \in P$, $\alpha \in [0, 1]$

The previous bound becomes:

$f(x^k) - f^* \le \min\left\{ \dfrac{2C^l}{k}, \;\; (f(x^0) - f^*)\left(1 - \dfrac{C^u}{C^l}\,\mathrm{sym}(x^*, P)^2\right)^{k} \right\}$


Prototypical Example: Low-Rank Matrix Completion

Let $Z \in \mathbb{R}^{m \times n}$ be a partially known data matrix
$\Omega$ denotes the entries of $Z$ that are known, where $|\Omega| \ll m \times n$
$Z_\Omega$ denotes the entries of $Z$ indexed in $\Omega$

We aspire to solve:

$\mathrm{Pr}: \quad z^* := \min_{X \in \mathbb{R}^{m \times n}} \; \tfrac{1}{2}\sum_{(i,j) \in \Omega} (X_{ij} - Z_{ij})^2 \quad \text{s.t.} \quad \mathrm{rank}(X) \le r$

Wide applications: recommender systems, collaborative filtering, gene expression, etc.
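For concreteness, the objective and gradient of this masked least-squares loss are cheap to evaluate; a minimal sketch follows (the index arrays `rows`, `cols` encoding $\Omega$ are assumptions of this sketch):

```python
import numpy as np

def masked_loss_and_grad(X, Z, rows, cols):
    """f(X) = 1/2 * sum_{(i,j) in Omega} (X_ij - Z_ij)^2 and its gradient.

    rows, cols : integer arrays listing the observed index pairs Omega.
    The gradient P_Omega(X - Z) is zero off the observed entries.
    """
    resid = X[rows, cols] - Z[rows, cols]
    loss = 0.5 * np.dot(resid, resid)
    grad = np.zeros_like(X)
    grad[rows, cols] = resid
    return loss, grad
```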


Nuclear Norm Regularization for Matrix Completion

$\mathrm{Pr}: \quad z^* := \min_{X \in \mathbb{R}^{m \times n}} \; \tfrac{1}{2}\sum_{(i,j) \in \Omega} (X_{ij} - Z_{ij})^2 \quad \text{s.t.} \quad \mathrm{rank}(X) \le r$

Replace the rank constraint with a constraint/penalty on the nuclear norm of $X$:

$X = UDV^T$, where $U \in \mathbb{R}^{m \times r}$ and $V \in \mathbb{R}^{n \times r}$ are orthonormal
$D = \mathrm{Diag}(\sigma_1, \ldots, \sigma_r)$ comprises the non-zero singular values of $X$

The nuclear norm is $\|X\|_N := \sum_{j=1}^{r} \sigma_j$


The Nuclear Norm, Notation, continued

$X = UDV^T$, where $D = \mathrm{Diag}(\sigma_1, \ldots, \sigma_r)$ comprises the (non-zero) singular values of $X$

$\|X\|_N := \sum_{j=1}^{r} \sigma_j$

$B(0, \delta) := \{X \in \mathbb{R}^{m \times n} : \|X\|_N \le \delta\}$


Nuclear Norm Regularized Problem

We aspire to solve:

$\mathrm{Pr}: \quad z^* := \min_{X \in \mathbb{R}^{m \times n}} \; f(X) := \tfrac{1}{2}\sum_{(i,j) \in \Omega} (X_{ij} - Z_{ij})^2 \quad \text{s.t.} \quad \mathrm{rank}(X) \le r$

Instead let us solve:

$\mathrm{NC}_\delta: \quad f^* := \min_{X \in \mathbb{R}^{m \times n}} \; f(X) := \tfrac{1}{2}\sum_{(i,j) \in \Omega} (X_{ij} - Z_{ij})^2 \quad \text{s.t.} \quad \|X\|_N \le \delta$


Solving NC_δ using the Frank-Wolfe Algorithm

$\mathrm{NC}_\delta: \quad f^* := \min_{X \in \mathbb{R}^{m \times n}} \; f(X) := \tfrac{1}{2}\sum_{(i,j) \in \Omega} (X_{ij} - Z_{ij})^2 \quad \text{s.t.} \quad \|X\|_N \le \delta$

$\mathrm{NC}_\delta$ aligns well with the Frank-Wolfe algorithm:

$\nabla f(X) = P_\Omega(X - Z) := (X - Z)_\Omega$ is viable to compute
The linear optimization subproblem is viable: let $C \in \mathbb{R}^{m \times n}$ and define $C \bullet X := \mathrm{trace}(C^T X)$; then

$\tilde{X} \leftarrow \arg\min_{\|X\|_N \le \delta} \{C \bullet X\}$

is solved as: compute the largest singular value $\sigma_1$ of $C$ with associated left and right normalized singular vectors $u_1, v_1$, and set $\tilde{X} \leftarrow -\delta u_1 (v_1)^T$
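This oracle needs only the top singular pair of the gradient matrix, which an iterative SVD computes cheaply even at large scale. A minimal sketch using `scipy.sparse.linalg.svds` (illustrative, not from the slides):

```python
import numpy as np
from scipy.sparse.linalg import svds

def nuclear_ball_lmo(C, delta):
    """argmin_{||X||_N <= delta} C . X  =  -delta * u1 v1^T,
    where (u1, sigma1, v1) is the top singular triplet of C."""
    u, s, vt = svds(C, k=1)            # top singular triplet (iterative)
    return -delta * np.outer(u[:, 0], vt[0, :])
```

Since each Frank-Wolfe step mixes in a rank-one matrix $-\delta u_1 v_1^T$, the rank of the iterate grows by at most one per iteration, which is exactly the low-rank structure induced by the algorithm noted earlier.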


Equivalent Problem on the Spectrahedron

Instead of working directly with $B(0, \delta)$, perhaps work with the spectrahedral representation:

$\|X\|_N \le \delta \iff$ there exist $W, Y$ for which $A := \begin{pmatrix} W & X \\ X^T & Y \end{pmatrix} \succeq 0$ and $\mathrm{trace}(A) \le 2\delta$

Solve the equivalent problem on the spectrahedron:

$S_\delta: \quad f^* := \min_{X, W, Y} \; f(X) := \tfrac{1}{2}\sum_{(i,j) \in \Omega} (X_{ij} - Z_{ij})^2 \quad \text{s.t.} \quad \begin{pmatrix} W & X \\ X^T & Y \end{pmatrix} \succeq 0, \;\; \mathrm{trace}(W) + \mathrm{trace}(Y) \le 2\delta$


Equivalent Problem on the Spectrahedron, continued

Solve the equivalent problem:

$S_\delta: \quad f^* := \min_{X, W, Y} \; f(X) := \tfrac{1}{2}\sum_{(i,j) \in \Omega} (X_{ij} - Z_{ij})^2 \quad \text{s.t.} \quad \begin{pmatrix} W & X \\ X^T & Y \end{pmatrix} \succeq 0, \;\; \mathrm{trace}(W) + \mathrm{trace}(Y) \le 2\delta$

Matrix dimensions are doubled in $S_\delta$
But many necessary Frank-Wolfe operations can be done in the original dimensions
Away steps are much more straightforward to work with in $S_\delta$ (see the eigenvector sketch below)
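For instance, linear optimization over the spectrahedron $\{A \succeq 0, \ \mathrm{trace}(A) \le 2\delta\}$ reduces to one extreme eigenvector computation, since the minimum of a linear function over this set is attained at $0$ or at a scaled rank-one matrix. A sketch (illustrative, using `scipy.sparse.linalg.eigsh`):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def spectrahedron_lmo(G, two_delta):
    """argmin_{A >= 0, trace(A) <= two_delta} G . A  for symmetric G.

    The minimum is attained at two_delta * v v^T, where v is a unit
    eigenvector for the smallest eigenvalue of G, or at A = 0 if that
    eigenvalue is nonnegative.
    """
    w, V = eigsh(G, k=1, which='SA')   # smallest algebraic eigenpair
    if w[0] >= 0:
        return np.zeros_like(G)
    v = V[:, 0]
    return two_delta * np.outer(v, v)
```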


Dual Averages Algorithm for Non-Smooth Maximization

Consider the non-smooth concave maximization problem:

$\mathrm{MP}: \quad h^* := \max_{\lambda} \; h(\lambda) \quad \text{s.t.} \quad \lambda \in Q$

Dual Averages algorithm for $\max_{\lambda \in Q} h(\lambda)$
Initialize at $\lambda_0 \in Q$, $\bar{x}_0 \in \mathbb{R}^n$, $\beta_0 \leftarrow 1$. At iteration $k$:

1. Compute $g_k \in \partial h(\lambda_k)$
2. Choose $\alpha_k \ge 0$ and set $\bar{x}_{k+1} \leftarrow \bar{x}_k + \alpha_k g_k$
3. Choose $\beta_{k+1} \ge \beta_k$ and set $\lambda_{k+1} \leftarrow \arg\max_{\lambda \in Q} \{\bar{x}_{k+1}^T \lambda - \beta_{k+1} d(\lambda)\}$

Here $d(\cdot)$ is a "prox" function ($\sigma$-strongly convex) on $Q$ (think $d(\lambda) = \frac{1}{2}\lambda^T \lambda$)
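With the suggested Euclidean prox $d(\lambda) = \frac{1}{2}\lambda^T\lambda$, the $\arg\max$ in step 3 is simply a projection of $\bar{x}_{k+1}/\beta_{k+1}$ onto $Q$, so the whole method is a short loop. A minimal sketch (the subgradient oracle and the projection onto $Q$ are assumed supplied by the caller):

```python
import numpy as np

def dual_averages(subgrad_h, project_Q, lam0, alphas, betas):
    """Dual Averages for max_{lam in Q} h(lam) with d(lam) = 0.5*lam'lam.

    subgrad_h : maps lam -> some g in the subdifferential of h at lam
    project_Q : Euclidean projection onto Q
    alphas    : step-sizes alpha_k >= 0
    betas     : nondecreasing prox scalings beta_{k+1}
    """
    lam = lam0.copy()
    x_bar = np.zeros_like(lam0)
    for alpha, beta in zip(alphas, betas):
        g = subgrad_h(lam)                  # step 1: subgradient
        x_bar = x_bar + alpha * g           # step 2: weighted gradient sum
        lam = project_Q(x_bar / beta)       # step 3: prox step = projection
    return lam
```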


Dual Averages for Minmax Structured Problems

Now suppose $h(\cdot)$ has minmax structure: $h(\lambda) := \min_{x \in P} \{\lambda^T A x\}$

Let $\tilde{x}_k \leftarrow \arg\min_{x \in P} \{\lambda_k^T A x\}$. Then $g_k := A\tilde{x}_k \in \partial h(\lambda_k)$

Dual Averages algorithm for $\max_{\lambda \in Q} h(\lambda)$ with minmax structure
Initialize at $\lambda_0 \in Q$, $\bar{x}_0 \in \mathbb{R}^n$, $\beta_0 \leftarrow 1$. At iteration $k$:

1. Compute $g_k \in \partial h(\lambda_k)$:
   (a) $\tilde{x}_k \leftarrow \arg\min_{x \in P} \{\lambda_k^T A x\}$
   (b) $g_k \leftarrow A\tilde{x}_k$
2. Choose $\alpha_k \ge 0$ and set $\bar{x}_{k+1} \leftarrow \bar{x}_k + \alpha_k g_k$
3. Choose $\beta_{k+1} \ge \beta_k$ and set $\lambda_{k+1} \leftarrow \arg\max_{\lambda \in Q} \{\bar{x}_{k+1}^T \lambda - \beta_{k+1} d(\lambda)\}$
4. $x^k := \dfrac{\sum_{j=0}^{k} \alpha_j \tilde{x}_j}{\sum_{j=0}^{k} \alpha_j}$


Returning to Frank-Wolfe

$\mathrm{CP}: \quad f^* := \min_{x} \; f(x) \quad \text{s.t.} \quad x \in P$

$f(\cdot)$ has $L$-smooth gradient on $P$

We can always write $f(\cdot)$ in conjugate form:

$f(x) := \max_{\lambda \in Q} \{\lambda^T A x - d(\lambda)\}$ for some $A$, $Q$, and $d(\cdot)$

If $f(\cdot)$ is globally smooth, then $d(\cdot)$ is $\sigma$-strongly convex for $\sigma = \|A\|^2 / L$

Computing the gradient of $f(x^k)$: let $\lambda_k \leftarrow \arg\max_{\lambda \in Q} \{\lambda^T A x^k - d(\lambda)\}$. Then $\nabla f(x^k) = A^T \lambda_k$


FW with Gradient Computation in Conjugate Form

Frank-Wolfe algorithm with gradient computation in conjugate form
Initialize at $x^0 \in P$, $k \leftarrow 0$, $B^0 \le f^*$. At iteration $k$:

1. Compute $\nabla f(x^k)$:
   (a) $\lambda_k \leftarrow \arg\max_{\lambda \in Q} \{\lambda^T A x^k - d(\lambda)\}$
   (b) $\nabla f(x^k) = A^T \lambda_k$
2. Compute $\tilde{x}^k \leftarrow \arg\min_{x \in P} \{\nabla f(x^k)^T x\}$.
3. Update the lower bound: $B^{k+1} \leftarrow \min\{B^k, \ f(x^k) + \nabla f(x^k)^T(\tilde{x}^k - x^k)\}$.
4. Set $x^{k+1} \leftarrow x^k + \bar{\alpha}_k(\tilde{x}^k - x^k)$, where $\bar{\alpha}_k \in [0, 1]$.


FW is an Instance of Dual Averages

Define:

$\beta_k := \dfrac{1}{\prod_{j=1}^{k-1}(1 - \bar{\alpha}_j)} \quad \text{and} \quad \alpha_k := \dfrac{\beta_k \bar{\alpha}_k}{1 - \bar{\alpha}_k} \qquad (1)$

Theorem: Frank-Wolfe is an Instance of Dual Averages [G], ~[B]
Let $\{x^k\}, \{\tilde{x}^k\}, \{\lambda_k\}$ be the iterate sequences of the Frank-Wolfe algorithm in minmax form, with appropriate initialization. The Frank-Wolfe iterates correspond exactly to Dual Averages iterates for solving the "shadow" problem:

$\mathrm{MP}: \quad h^* := \max_{\lambda} \; h(\lambda) := \min_{x \in P} \{\lambda^T A x\} \quad \text{s.t.} \quad \lambda \in Q$

using $\{\alpha_k\}$ and $\{\beta_k\}$ given by (1), and using the prox function $d(\cdot)$ from the conjugate representation of $f(\cdot)$.


FW is an Instance of Dual Averages, continued

Comments:
The equivalence holds even if $f(\cdot)$ is not globally smooth
It is not necessary to "know" or compute with the conjugate representation of $f(\cdot)$


Greedy Coordinate Descent for Unconstrained Smooth Minimization

$\mathrm{UP}: \quad f^* := \min_{x} \; f(x) \quad \text{s.t.} \quad x \in \mathbb{R}^n$

$f(\cdot)$ has $L$-smooth gradient on $\mathbb{R}^n$

Again, we can always write $f(\cdot)$ in conjugate form:

$f(x) := \max_{\lambda \in Q} \{\lambda^T A x - d(\lambda)\}$ for some $A$, $Q$, and $d(\cdot)$

$d(\cdot)$ is $\sigma$-strongly convex for $\sigma = \|A\|^2 / L$

Computing the gradient of $f(x^k)$: let $\lambda_k \leftarrow \arg\max_{\lambda \in Q} \{\lambda^T A x^k - d(\lambda)\}$. Then $\nabla f(x^k) = A^T \lambda_k$


Greedy Coordinate Descent, continued

$\mathrm{UP}: \quad f^* := \min_{x} \; f(x) \quad \text{s.t.} \quad x \in \mathbb{R}^n$

Greedy Coordinate Descent for minimizing $f(x)$ on $\mathbb{R}^n$
Initialize at $x^0 \in \mathbb{R}^n$, $k \leftarrow 0$. At iteration $k$:

1. Compute $\nabla f(x^k)$:
   (a) $\lambda_k \leftarrow \arg\max_{\lambda \in Q} \{\lambda^T A x^k - d(\lambda)\}$
   (b) $\nabla f(x^k) \leftarrow A^T \lambda_k$
2. $\tilde{x}^k \leftarrow \arg\min_{\|x\|_1 \le 1} \{\nabla f(x^k)^T x\}$
3. $x^{k+1} \leftarrow x^k + \alpha_k \tilde{x}^k$
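Step 2 over the $\ell_1$ ball always returns a signed coordinate vector: the minimizer is $-\mathrm{sign}(g_i)e_i$ for the coordinate $i$ with largest $|g_i|$, which is what makes this greedy coordinate descent. A minimal sketch (the gradient oracle and step-sizes are assumptions of the sketch):

```python
import numpy as np

def greedy_coordinate_descent(grad_f, x0, alphas):
    """Greedy coordinate descent: each step moves along the single
    coordinate with the largest-magnitude partial derivative.

    grad_f : gradient oracle, maps x -> vector
    alphas : iterable of step-sizes alpha_k >= 0
    """
    x = x0.copy()
    for alpha in alphas:
        g = grad_f(x)
        i = np.argmax(np.abs(g))        # argmin over the l1 ball picks the
        x_tilde = np.zeros_like(x)      # coordinate of max |g_i| ...
        x_tilde[i] = -np.sign(g[i])     # ... with the opposite sign
        x = x + alpha * x_tilde         # step 3
    return x
```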


GCD is an Instance of Dual Averages

Theorem: Greedy Coordinate Descent is an Instance of Dual Averages [G]
Let $\{x^k\}, \{\tilde{x}^k\}, \{\lambda_k\}$ be the iterate sequences of the Greedy Coordinate Descent algorithm using step-sizes $\{\alpha_k\}$, and define $\{\beta_k\}$ as $\beta_k = 1$ for all $k$. The GCD iterates correspond exactly to Dual Averages iterates for solving the "shadow" problem:

$\mathrm{MP}: \quad h^* := \max_{\lambda} \; h(\lambda) := -\|A^T \lambda\|_\infty \quad \text{s.t.} \quad \lambda \in Q$

using the Dual Averages sequences $\{\alpha_k\}$ and $\{\beta_k\}$. Furthermore, for all $k$ it holds that: $h(\lambda_k) = -\|\nabla f(x^k)\|_\infty$.


GCD is an Instance of Dual Averages, continued

Not by design, but through the Dual Averages "structure", Greedy Coordinate Descent is driving $\|\nabla f(x^k)\|_\infty \searrow 0$

Indeed, using the complexity theory for Dual Averages, one obtains:

A Computational Guarantee for Greedy Coordinate Descent
Let $x^*$ solve UP, $\lambda^* \leftarrow \arg\max_{\lambda \in Q} \{\lambda^T A x^* - d(\lambda)\}$, and define

$\alpha_i := \dfrac{\sqrt{d(\lambda^*)}}{\sqrt{L(k+1)}}$ for $i = 0, \ldots, k$

Then:

$\min_{0 \le i \le k} \|\nabla f(x^i)\|_\infty \le \dfrac{\sqrt{d(\lambda^*)}}{\sqrt{L}\sqrt{k+1}}$


GCD is an Instance of Dual Averages, continued

Dual Averages has as instances both:
"constrained greedy gradient" (Frank-Wolfe), and
"unconstrained greedy gradient" (Greedy Coordinate Descent)

These results generalize to any dual-paired norms for $\|x\|$ and $\|\nabla f(x)\|_*$


Stochastic Smoothing

$\mathrm{NDP}: \quad f^* := \min_{x} \; f(x) \quad \text{s.t.} \quad x \in P$

Suppose now $f(\cdot)$ is convex and non-smooth

$|f(y) - f(x)| \le M\|y - x\|$ for all $x, y \in P$, where $\|v\| := \sqrt{v^T v}$

$g(x)$ denotes any subgradient of $f(\cdot)$ at $x$
$z \sim B(0, u)$ denotes that $z$ obeys a uniform distribution on $B(0, u)$

Define $f_u(x) := \mathbb{E}_{z \sim B(0,u)}[f(x + z)]$, for all $x \in P$


Stochastic Smoothing, continued

Define $f_u(x) := \mathbb{E}_{z \sim B(0,u)}[f(x + z)]$, for all $x \in P$

Stochastic Smoothing [Yousefian, Nedić, Shanbhag 2012]
$f_u(\cdot)$ is differentiable with $L = \frac{M\sqrt{n}}{u}$-Lipschitz gradient on $P$
$\nabla f_u(x) = \mathbb{E}_{z \sim B(0,u)}[g(x + z)]$ for all $x \in P$
$f(x) \le f_u(x) \le f(x) + Mu$ for all $x \in P$


Frank-Wolfe for Stochastic Smoothed Optimization

Frank-Wolfe for stochastic smoothed optimization
Initialize at $x^0 \in P$, $k \leftarrow 0$, $u_0 = +\infty$. At iteration $k$:

1. Choose $u_k \le u_{k-1}$ and sample size $T_k$
2. Sample $z^1, \ldots, z^{T_k} \sim B(0, u_k)$ and set $g_k^i \leftarrow g(x^k + z^i)$, $i = 1, \ldots, T_k$
3. Estimate $\bar{g}_k := (1/T_k) \sum_{i=1}^{T_k} g_k^i$
4. Compute $\tilde{x}^k \leftarrow \arg\min_{x \in P} \{\bar{g}_k^T x\}$
5. Set $x^{k+1} \leftarrow x^k + \bar{\alpha}_k(\tilde{x}^k - x^k)$, where $\bar{\alpha}_k \in [0, 1]$
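A minimal sketch of this loop follows; the subgradient oracle and LMO are assumptions supplied by the caller, the ball sampler is standard, and the $T_k$, $u_k$ schedules are the ones from the guarantee on the next slide.

```python
import numpy as np

def uniform_ball(n, radius, rng):
    """Sample uniformly from the Euclidean ball B(0, radius) in R^n."""
    z = rng.standard_normal(n)
    z /= np.linalg.norm(z)
    return radius * z * rng.uniform() ** (1.0 / n)

def stochastic_smoothed_fw(subgrad, lmo, x0, diam, iters, rng=None):
    """Frank-Wolfe on the stochastically smoothed objective f_u (a sketch).

    subgrad : subgradient oracle g(x) for the non-smooth f
    lmo     : linear-optimization oracle over P
    diam    : Diam(P), used in the u_k schedule from the guarantee
    """
    rng = rng or np.random.default_rng()
    x, n = x0.copy(), x0.size
    for k in range(1, iters + 1):
        T_k = k                                  # sample size T_k = k
        u_k = n ** 0.25 * diam / np.sqrt(k)      # smoothing radius schedule
        g_bar = np.mean(
            [subgrad(x + uniform_ball(n, u_k, rng)) for _ in range(T_k)],
            axis=0)                              # step 3: averaged subgradient
        x_tilde = lmo(g_bar)                     # step 4: linear optimization
        alpha = 2.0 / (k + 2.0)                  # step 5: standard step-size
        x = x + alpha * (x_tilde - x)
    return x
```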


Computational Guarantees for Stochastic Smoothed FW

Computational Guarantee for Stochastic Smoothed FW [Lan 2013]
Suppose the step-size sequence $\{\bar{\alpha}_k\}$ is chosen as $\bar{\alpha}_k = \frac{2}{k+2}$, $k \ge 0$, and for all $k \ge 0$ set:

$T_k \leftarrow k \quad \text{and} \quad u_k \leftarrow \dfrac{\sqrt[4]{n} \cdot \mathrm{Diam}(P)}{\sqrt{k}}$

Then for all $k \ge 1$ it holds that:

$\mathbb{E}[f(x^k)] - f^* \le \dfrac{4M(1 + 2\sqrt[4]{n})\,\mathrm{Diam}(P)}{3\sqrt{k}}$

The analysis also extends to other step-sizes.


Brief References

M. Frank and P. Wolfe, An Algorithm for Quadratic Programming, Naval Research Logistics Quarterly 3 (1956), pp. 95–110.
P. Wolfe, Convergence Theory in Nonlinear Programming, in J. Abadie, ed., Integer and Nonlinear Programming, North-Holland, Amsterdam, 1970.
J. Guélat and P. Marcotte, Some Comments on Wolfe's 'Away Step', Mathematical Programming 35 (1986), pp. 110–119.
R. Freund and P. Grigas, New Analysis and Results for the Conditional Gradient Method, 2013, submitted.
P. Grigas, Dual Averaging as a Unifying Framework in First-Order Methods, 2013, in preparation.
Z. Harchaoui, A. Juditsky, and A. S. Nemirovski, Conditional Gradient Algorithms for Norm-Regularized Smooth Convex Optimization, 2012–13.
G. Lan, The Complexity of Large-scale Convex Programming under a Linear Optimization Oracle, submitted.
R. Mazumder and R. Freund, Statistical Loss Minimization Computation with the Frank-Wolfe Method, 2013, in preparation.
