A Nonsmooth Approach to Envelope Theorems

Olivier Morand†

Kevin Reffett‡

Suchismita Tarafdar§

Preliminary Draft: November 2009

Abstract. Envelope theorems are of fundamental importance in economic theory. We develop a nonsmooth approach to envelope theorems that unifies the results across a broad class of parameterized nonlinear optimization problems that arise in economic applications. We consider general parameterized nonlinear Lipschitzian programs with both equality and inequality constraints, and allow for noninterior solutions. We develop conditions under which the value function is locally Lipschitz. We further provide sufficient conditions under which value functions are (i) Clarke differentiable with differential bounds, (ii) directionally differentiable, and (iii) once-continuously differentiable (C^1). Relative to the existing literature, we present the first unified treatment of envelope theorems for both the classical and nonsmooth case. For the C^1 differentiability case, we give the most general conditions for the existence of a classical smooth envelope theorem. We present numerous economic applications of our results, including applications to lattice programming, nonclassical growth theory and macroeconomics, Negishi methods, nonstationary dynamic lattice programming, and duopoly problems.

We thank Bob Becker, Bernard Cornet, Manjira Datta, Karl Hinderer, Rida Laraki, Cuong LeVan, Len Mirman, Manuel Santos, Yiannis Vailakis, and Lukasz Woźny for helpful discussions during the writing of this paper. Special thanks go to Andrzej Nowak for directing us to the work of Hinderer, Laraki, and Sudderth. Kevin Reffett thanks the Centre d'Economie de la Sorbonne (CES) and the Paris School of Economics for arranging his visits during the Spring terms of both 2008 and 2009. This is a preliminary draft of the paper. Errors may remain. Do not recirculate; rather, write the corresponding author for the current version.

† Department of Economics, University of Connecticut
‡ Department of Economics, WP Carey School of Business, Arizona State University
§ Corresponding Author, Department of Economics, WP Carey School of Business, Arizona State University


1 Introduction

Since the work of Viner [62] and Samuelson [57], the envelope theorem has been a standard tool in economic analysis. Aside from its extensive use in optimization theory and general equilibrium, the envelope theorem has become an increasingly necessary ingredient in methods that characterize economic equilibrium in models of dynamic contract theory, public finance, lattice programming, growth theory, consumer and producer theory, game theory, dynamic programming, and macroeconomics. In its original incarnation, the classical envelope theorem concerned the existence of a continuous derivative of a value function with respect to a parameter. The requisite mathematical structure developed to guarantee the existence of such a smooth envelope rested upon strong convexity assumptions on the primitive data of agents' economic optimization problems. For example, early work envisioned economic agents as solving unconstrained or constrained convex optimization problems under strong interiority conditions for optimal solutions (e.g., Samuelson [57], Rockafellar [51], Mirman and Zilcha [44], Benveniste and Scheinkman [6], and Milgrom and Segal [40]). In a recent paper by Rincon-Zapatero and Santos [50], a classical envelope theorem has been proved for problems that allow for noninterior solutions in the standard case.

There are at least two important limitations in this body of work. First, when extending the classical (smooth) envelope theorem to convex problems with constraints and noninterior solutions, the constraint qualification imposed appears to be too strong. Second, and perhaps more importantly, for models with nonconvexities, where classical envelope theorems are not expected, few useful alternative notions of a "generalized" envelope theorem have been proposed, let alone proven to exist for nonlinear programs with equality and inequality constraints that allow for noninterior solutions. Moreover, economic models with nonconvexities have become increasingly important in the recent published literature in many fields of economics; thus this void in the literature has been a particularly serious limitation.

In this paper, we propose a new approach to envelope theorems that unifies their treatment for the diverse situations that arise in economic theory. Our approach includes not only classical envelope theorems in convex constrained programs, but also broad classes of problems with nonconvexities. In an abstract sense, our approach bears a strong resemblance to that taken in the recent nonlinear programming literature (e.g., papers by Gauvin and Tolle [23], Auslender [5], Gauvin and Dubeau [21], and Fontanie [20]), where some nonsmooth resolutions of these questions for parameterized optimization problems that do not fit the classical convex case have been proposed.

Relative to this literature, we improve upon results in two ways: first, we relax the requirement that objective functions be smooth, and second, we employ weaker constraint qualifications to obtain many of our results. The first set of improvements is particularly important, as in many economic applications (e.g., dynamic programming problems with nonconvexities), hypotheses implying smoothness of the objective function fail for continuation values in recursive representations of agent decision problems, even when primitives are smooth. Relative to existing methods in the economics literature (e.g., Milgrom and Segal [40] and Rincon-Zapatero and Santos [50]), we not only weaken the constraint qualification needed to obtain classical smooth envelope theorems, but we also unify such results within a broader approach that encompasses useful generalized envelope theorems in nonconvex Lipschitzian programming problems. In our work, the classical envelope theorem is understood as a special case of the more general nonsmooth approach that studies the directional differentiability of the value function.

But unlike existing approaches in economics, we do not limit our attention to classical envelope theorems. We provide sufficient conditions under which, for a broad class of parameterized economic optimization problems, (i) the value function admits differential bounds that characterize its local Lipschitz structure (both for the convex parameter space case and the case of arbitrary metric spaces), (ii) the Clarke-Michel-Penot derivatives of the value function can be characterized, with explicit forms for the generalized gradients, (iii) the value function is directionally differentiable, and (iv) the value function is continuously differentiable. We also provide numerous examples and applications of these results in various areas of economics, including consumer theory, dynamic/multistage programming, lattice programming, Stackelberg games, Negishi methods in general equilibrium theory, and nonclassical growth theory in macroeconomics.

It is well known that for problems with inequality constraints and infinitely differentiable (C^∞) primitive data, the value function may not in general be even locally Lipschitz (see Gauvin and Dubeau [22] for many examples). Hence, smoothness of the primitive data says little about the existence of a tractable envelope theorem. In (proper) convex problems, under suitable constraint qualifications, the value function is immediately directionally differentiable (e.g., see Rockafellar [51]). Unfortunately, this is not true in nonconvex settings. To understand the issues at hand, as well as where our work fits in the existing literature, consider a standard parameterized nonlinear optimization problem with equality and inequality constraints:


$$V(s) = \max_{a\in D(s)} f(a,s) \qquad (1.0.1)$$

In optimization problem (1.0.1), the function $f(a,s)$ is an extended real-valued objective function, $a \in A \subset \mathbb{R}^n$ is a vector of actions, $s \in S \subset \mathbb{R}^m$ a parameter, and $D(s) \subset A$ a nonempty, continuous feasible correspondence for each $s \in S$. We shall often in this paper assume that we can enumerate the feasible correspondence as
$$D(s) = \{a \mid g(a,s) \le 0,\ h(a,s) = 0\}$$

where $g : A\times S \to \mathbb{R}^p$ and $h : A\times S \to \mathbb{R}^q$ are both jointly $C^1$ on $A\times S$. We denote the value function by $V(s)$ and the optimal solution correspondence by $A^*(s)$ (with typical element $a^*(s) \in A^*(s)$). The classical envelope theorem presented in Samuelson [57] provides conditions under which a $C^1$ envelope theorem for the maximization problem (1.0.1) is available (e.g., the objective is smooth, $D(s) = A$ for all $s \in S$, where $A$ and $S$ are convex, and optimal solutions are interior). Generalizations of classical envelope theorem results are provided by many authors, most notably Danskin [13], Rockafellar [51], Mirman and Zilcha [44], and Benveniste and Scheinkman [6]. In all of these results, interiority of the optimal solution plays a key role, and the results are difficult (if not impossible) to apply to programming problems with general constraints and/or noninterior optimal solutions, even for proper convex programs. See Rincon-Zapatero and Santos [50] for a discussion. Milgrom and Segal [40] produce a classical envelope theorem for problem (1.0.1) under different sets of sufficient conditions (including some convex problems with constraints, some allowing for $A$ to be arbitrary and $S$ convex). Among their results is a classical envelope theorem in convex versions of (1.0.1) with both equality and inequality constraints, as well as assorted results on directional differentiability of the value function in problems with constraints. More recently, Rincon-Zapatero and Santos [50] have reconsidered the classical envelope theorem in a version of (1.0.1) stated in the context of dynamic programming. They assume no equality constraints, smooth quasi-concave inequality constraints, and smooth objective functions, without the standard conditions on objective functions (e.g., Inada conditions) that deliver interior optimal solutions. Appealing to the linear independence constraint qualification (LICQ), as well as the standard assumption of a differentiable extension over the boundary of the action space for objectives and constraints, they provide sufficient conditions for the value function to be, at least, once-continuously differentiable.

Unfortunately, even in convex optimization settings, none of this work considers the weaker form of constraint qualification known as the strict Mangasarian-Fromovitz constraint qualification (SMFCQ). We show this is the most appropriate constraint qualification relative to the question of $C^1$ differentiability in problems with constraints, and show by example that LICQ is too strong relative to this question. Indeed, as we argue, it seems impossible to weaken the application of SMFCQ to obtain sufficient conditions for the classical envelope theorem, as in (1.0.1) SMFCQ is equivalent to the uniqueness of Karush-Kuhn-Tucker multipliers.

For problems with nonsmooth Clarke-regular objectives, Clarke [10] provides sufficient conditions for the directional differentiability (actually, Clarke regularity) of the value function in unconstrained nonsmooth versions of (1.0.1) where objective functions are not necessarily concave. Clarke's focus is on locally Lipschitz and "Clarke regular" objectives. Clarke's result has found application in the work on nonclassical multisector growth in Askri and LeVan [4].1 Gauvin and Tolle [23], Auslender [5], Gauvin and Dubeau [21], and Fontanie [20] all extend Clarke's results to nonlinear programming problems with both equality and inequality constraints. For example, Gauvin and Tolle [23] and Gauvin and Dubeau [21] study (1.0.1) with all smooth primitive data. They obtain both differential bounds, as well as conditions for the existence of directional derivatives. Auslender [5] and Fontanie [20] allow for nonsmooth objectives and/or nonsmooth inequality constraints, and obtain differential bounds. Relative to these papers, we weaken the constraint qualification and focus on the question of directional differentiability with smooth inequality constraints but locally Lipschitz objectives.

Finally, a central motivation for this paper is the expanding literature on nonconvexities in economic models, and the lack of useful envelope theorems available for such models. The importance of nonconvexities in general equilibrium theory was noticed as early as the late 1950s (e.g., Farrell [18], Rothenberg [56], Koopmans [35], and Reiter [49]). More recently, the number of economic problems where nonconvexities play a key role in the analysis is extensive.

1 Although Askri and LeVan [4] study models with constraints, they appeal to strong interiority assumptions on optimal solutions to adapt proofs in Clarke [10] to obtain all their results. In the context of a one-sector nonclassical optimal growth model, Dechert and Nishimura [14] and Amir, Mirman, and Perkins [2] provide directional differentiability results for the value function for the case of interior solutions, with a single equality constraint. The multisector generalization of this result is provided by Amir [3] and Askri and LeVan [4]. In all of these papers, interiority is assumed, and it is not clear how the result can be adapted to the case of general inequality constraints. Our results relax all of these conditions.


For example, there has been a large number of recent papers studying optimal growth in economies with nonconvexities (e.g., Dechert and Nishimura [14], Amir, Mirman, and Perkins [2], Hopenhayn and Prescott [26], Nishimura, Rudnicki, and Stachurski [46], and Kamihigashi and Roy [29][30]). Multisector generalizations of the nonclassical framework are studied in Amir [3]. In work related to nonclassical growth models, Prescott, Rogerson, and Wallenius [47] and Rogerson and Wallenius [53] emphasize the importance of nonconvexities in labor services in resolving important labor market puzzles in aggregative macroeconomic models. Romer [54][55] emphasizes the importance of nonconvexities in production when studying asymptotic growth models with endogenous technological innovation and structural transformations. Mirman, Morand, and Reffett [42] study the question of decentralizing optimal growth models with nonconvexities, where the Lipschitzian structure of dynamic programs plays a critical role in developing nonsmooth characterizations of dynamic complementarities. Nishimura and Stachurski [45] develop a Foster-Lyapunov method to characterize conditions under which optimal dynamics in stochastic growth models with nonconvexities are stochastically stable. In their work, they are forced to introduce a convolution structure in the problem to "smooth" the value function in order to obtain a useful first-order condition. Khan and Thomas [32] emphasize the importance of lumpy investment and nonconvexities in adjustment costs when explaining investment puzzles at the plant and aggregate level in (s,S) models of dynamic equilibrium. Finally, there is a large literature on two-part marginal pricing equilibria that emphasizes the role of nonconvexities and increasing returns in the theory of the firm and optimal pricing equilibrium (e.g., see Brown, Heller, and Starr [9]). Many important economic models therefore do not satisfy the standard convexity conditions needed for classical envelope theorems to apply.

The paper is laid out as follows. In the next section, we introduce much of the mathematical terminology we need in the paper. Section 3 develops local Lipschitz bounds on value functions for parameterized Lipschitzian optimization problems. The remainder of the paper studies envelope theorems in parameterized Lipschitzian problems in finite dimensional Euclidean metric spaces. Section 4 states the nonlinear duality results from Tarafdar ([59]). Section 5 shows how to introduce equality constraints into our method. In Section 6, we give a number of results on differential bounds and generalized envelope theorems for problems with inequality and equality constraints. These bounds are built from operations on the Lagrangian evaluated at the set of optimal solutions. In Section 7, we focus on the classical envelope theorem.

We use our methods to extend the results in the existing literature on the existence of a $C^1$ envelope theorem by weakening the constraint qualification. We also give a simple example of the importance of this result. Finally, in Section 8, we provide numerous economic applications of our results.

2 Mathematical Preliminaries

We begin with a number of mathematical definitions that we shall use in this paper. See Rockafellar [51], Clarke [11], and Rockafellar and Wets [52] for further discussion.

2.1 Structural Properties of Functions

Let $(X,\rho_X)$, $(Y,\rho_Y)$, and $(T,\rho_T)$ be metric spaces, and $f : X \to Y$ a continuous function. The function $f$ is Lipschitz with module (or modulus) $k$, $0 \le k < \infty$, if for all $x, x' \in X$,
$$\rho_Y(f(x),f(x')) \le k\,\rho_X(x,x')$$
The function $f$ is locally Lipschitz near $x \in X$ of modulus $k(x)$ if $f$ is Lipschitz of module $k(x)$ on a neighborhood $N(x,e)$, $e > 0$. A function $f : X \times T \to Y$ is uniformly Lipschitz in $t$ of modulus $k$ if
$$\sup_{x\in X}\rho_Y(f(x,t),f(x,t')) \le k\,\rho_T(t,t')$$
for all $t, t' \in T$. Finally, $f$ is uniformly locally Lipschitz near $t \in T$ of modulus $k(t)$ if on a neighborhood $N(t,e)$,
$$\sup_{x\in X}\rho_Y(f(x,t'),f(x,t'')) \le k(t)\,\rho_T(t',t''),\qquad t',t''\in N(t,e)$$

A particular type of locally Lipschitz function often used in economic optimization is a proper convex function. Let $X$ be a convex set. A real-valued function $f : X \to \mathbb{R}$ is (strictly) convex if for all $x, y \in X$ and all $\lambda \in (0,1)$,
$$f(\lambda x + (1-\lambda)y) \le\ (<)\ \lambda f(x) + (1-\lambda)f(y)$$
The function $f(x)$ is strongly convex if there exists a constant $\beta > 0$ such that for all $x, y \in X$ and all $\lambda \in (0,1)$,
$$\lambda f(x) + (1-\lambda)f(y) \ge f(\lambda x + (1-\lambda)y) + \tfrac{1}{2}\beta\lambda(1-\lambda)\|x-y\|^2$$
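For instance (a standard illustration, not part of the original text), $f(x) = \frac{1}{2}\|x\|^2$ is strongly convex with $\beta = 1$, since a direct computation gives $\lambda f(x) + (1-\lambda)f(y) - f(\lambda x + (1-\lambda)y) = \frac{1}{2}\lambda(1-\lambda)\|x-y\|^2$ for all $x, y$ and $\lambda \in (0,1)$.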

The function $f$ is essentially strongly concave (resp. strongly concave, strictly concave, concave) if $-f$ is essentially strongly convex (resp. strongly convex, strictly convex, convex). A proper convex function is a convex function $f : X \to Y$, where $Y$ is the extended reals. A proper convex function is locally Lipschitz on any open set in $X$.

In this paper, we will consider many different notions of differentiability for Lipschitz functions. Consider a Lipschitz continuous function $f : I \subset \mathbb{R}^n \to \mathbb{R}^m$ of modulus $k$. At a point $x_0 \in I$, we first consider a number of different types of generalized smoothness of $f$ in a direction $d \in \mathbb{R}^n$ that will be used in some of our proofs. The upper radial right Dini derivative is defined as:
$$D^+f(x_0;d) = \limsup_{t\to 0^+}\frac{f(x_0+td)-f(x_0)}{t}$$
The lower radial right Dini derivative is defined as:
$$D_+f(x_0;d) = \liminf_{t\to 0^+}\frac{f(x_0+td)-f(x_0)}{t}$$
The upper and lower radial left Dini derivatives are defined similarly, simply changing $t \to 0^+$ to $t \to 0^-$. The directional derivative at $x_0 \in X$ in the direction $d \in \mathbb{R}^n$ is defined to be
$$f'(x_0;d) = \lim_{t\to 0^+}\frac{f(x_0+td)-f(x_0)}{t}$$
and Clarke's generalized directional derivative at $x_0$ in the direction $d \in \mathbb{R}^n$ is
$$f^\circ(x_0;d) = \limsup_{y\to x_0,\ t\to 0^+}\frac{f(y+td)-f(y)}{t}$$
It is important to remember that Clarke generalized derivatives of Lipschitz functions always exist, while directional derivatives of such functions need not. We say a function $f$ is Clarke regular if its Clarke generalized directional derivative equals its directional derivative in all directions $d$. A function $f$ is differentiable at $x_0 \in X$ if the directional derivative exists in all directions and $f'(x_0;d) = \nabla_xf(x_0)\cdot d$. In this case, the derivative of $f$ is given by
$$\nabla_xf(x_0) = \lim_{h\to 0}\frac{f(x_0+h)-f(x_0)}{h} = \lim_{h\to 0^+}\frac{f(x_0+h)-f(x_0)}{h}$$
The function $f$ has a strict derivative at $x_0$, denoted by $D_sf(x_0)$, when for all $d \in \mathbb{R}^n$,
$$\langle D_sf(x_0),d\rangle = \lim_{x\to x_0,\ t\downarrow 0}\frac{f(x+td)-f(x)}{t}$$
Finally, we say that $f$ is continuously differentiable if $D_sf(x) : \mathbb{R}^n \to \mathbb{R}^{n\times m}$ is continuous at $x_0$. On a finite dimensional domain, a strictly differentiable function is continuously differentiable.

Finally, recall that the subgradient of a convex function $f$ is the set of $p \in M^{m\times n}$ satisfying:
$$p\cdot d \le f(x_0+d) - f(x_0)$$
for all directions $d \in \mathbb{R}^n$. The set of subgradients of a convex function is its subdifferential. In the same way, we can define a subdifferential for any function, but for a nonconvex function this set may be empty. Since Lipschitz functions may not necessarily have subgradients, we define Clarke's generalized gradient as
$$\partial f(x_0) = \mathrm{co}\,\{\lim \nabla f(x_i) : x_i \to x_0,\ x_i \notin S,\ x_i \notin \Omega_f\}$$
where co denotes the convex hull, $S$ is any set of Lebesgue measure zero in the domain, and $\Omega_f$ denotes the set of points at which $f$ fails to be differentiable.
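As a simple illustration of these definitions (our example, not from the original text), take $f(x) = |x|$ on $\mathbb{R}$. At $x_0 = 0$, the directional derivative is $f'(0;d) = |d|$ and the Clarke generalized directional derivative is also $f^\circ(0;d) = |d|$, so $f$ is Clarke regular at $0$, with $\partial f(0) = \mathrm{co}\{-1,+1\} = [-1,1]$. By contrast, $g(x) = -|x|$ has $g'(0;d) = -|d|$ but $g^\circ(0;d) = |d|$, so $g$ is locally Lipschitz and directionally differentiable yet fails to be Clarke regular at $0$ (although $\partial g(0) = [-1,1]$ as well).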

2.2 Structural Properties of Correspondences

As we study the value functions and optimal solutions of collections of parameterized optimization problems for economic decision makers, it turns out that the topological properties of feasible correspondences prove very important in our work. Let $X$ and $Y$ be topological spaces, and $F : X \rightrightarrows Y$ a correspondence. A correspondence $F(x)$ is upper semicontinuous (or u.s.c.) at $x_0 \in X$ if for any two sequences $\{x_n\}$ and $\{y_n\}$ such that $x_n \to x_0$ and $y_n \to y_0$ with $y_n \in F(x_n)$, we have $y_0 \in F(x_0)$. If $F$ is upper semicontinuous at $x$ for all $x \in X$, then it is upper semicontinuous. $F(x)$ is lower semicontinuous (or l.s.c.) at $x_0 \in X$ if for any sequence $x_n \to x_0$ and any $y_0 \in F(x_0)$, there exists a sequence $y_n \in F(x_n)$ such that $y_n \to y_0$. If $F$ is lower semicontinuous at $x$ for all $x \in X$, then it is lower semicontinuous.2 $F(x)$ is continuous at $x_0 \in X$ if it is both u.s.c. and l.s.c. at $x_0 \in X$, and a continuous correspondence if it is continuous for all $x \in X$.

As with functions, we can characterize the metric properties of correspondences. Let $(X,\rho_X)$ and $(Y,\rho_Y)$ be metric spaces, $F : X \rightrightarrows Y$ a correspondence, and $2^X$ the power set of $X$. A useful metric for correspondences is the Hausdorff metric. Define the Hausdorff distance between $F(x')$ and $F(x'')$ for all $x', x'' \in X$ by
$$H_{\rho_Y}\big(F(x'),F(x'')\big) = \max\left[\ \sup_{y'\in F(x')}\rho_Y(y',F(x'')),\ \ \sup_{y''\in F(x'')}\rho_Y(y'',F(x'))\ \right]$$
where
$$\rho_Y(y',F(x'')) = \inf_{y''\in F(x'')}\rho_Y(y',y'') \quad\text{and}\quad \rho_Y(y'',F(x')) = \inf_{y'\in F(x')}\rho_Y(y',y'')$$
We say a correspondence $F(x)$ is Lipschitz continuous of modulus $k$ on $X$ if for all $x', x'' \in X$,
$$H_{\rho_Y}\big(F(x'),F(x'')\big) \le k\,\rho_X(x',x'')$$
$F(x)$ is locally Lipschitz continuous near $x \in X$ of modulus $k(x)$ on a neighborhood $N(x,e)$ if it is Lipschitz continuous of modulus $k(x)$ on $N(x,e)$. In other words, for all $x', x'' \in N(x,e)$,
$$H_{\rho_Y}\big(F(x'),F(x'')\big) \le k(x)\,\rho_X(x',x'')$$
Finally, we say $F(x)$ is uniformly compact near $x$ if there is a neighborhood $N(x)$ of $x$ such that the closure of $\cup_{x'\in N(x)}F(x')$ is compact.

2 See Berge ([8], p. 108) for discussion. Note also that to define u.s.c. and l.s.c. of a correspondence, we only need topological spaces (i.e., the topological spaces need not be metrizable).
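As a simple illustration (our example), let $X = Y = \mathbb{R}_+$ and $F(x) = [0,\phi(x)]$ with $\phi$ Lipschitz of modulus $k_\phi$. Since the Hausdorff distance between two intervals $[0,a]$ and $[0,b]$ is $|a-b|$, we have $H_{\rho_Y}(F(x'),F(x'')) = |\phi(x')-\phi(x'')| \le k_\phi\,\rho_X(x',x'')$, so $F$ is a Lipschitz continuous correspondence of modulus $k_\phi$; feasible correspondences of exactly this form appear in the growth example of Section 3.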


3 Lipschitzian Properties of Value Functions

Our focus in this paper is on parameterized Lipschitzian optimization problems. Consider a collection of parameterized optimization problems described as follows: let $a \in A$ be the space of choice variables, $s \in S$ the parameter space, $f : A\times S \to \mathbb{R}$ an objective function, and $D : S \rightrightarrows A$ a correspondence that describes the feasible set of actions in each state $s \in S$. We say an optimization problem is a Parameterized Lipschitzian Optimization Problem if the family
$$V(s) = \max_{a\in D(s)} f(a,s) \qquad (3.0.1)$$
satisfies the following: (i) $(A,\rho_A)$ and $(S,\rho_S)$ are metric spaces, (ii) $f(a,s)$ is locally Lipschitz in $s$ for each $a \in A$, and (iii) $D(s)$ is locally Lipschitzian in $s \in S$. In many economic applications, the version of (3.0.1) under study has even more structure than a parameterized Lipschitzian optimization problem. For example, both $A$ and $S$ are convex metric spaces, so differential characterizations of the optimization problem under study are often available. We shall consider many such special cases of parameterized Lipschitz optimization problems in the sequel. We begin by stating our baseline assumptions on (3.0.1) that we shall maintain throughout the rest of the paper:

Assumption 1: The primitive data in (3.0.1) satisfy the following conditions: (a) $A$ is a sequentially compact topological space; (b) $(S,\rho_S)$ is a metric space; (c) the objective function $f : A\times S \to \mathbb{R}$ is continuous in $(a,s)$; (d) the feasible correspondence $D : S \rightrightarrows A$ is a nonempty-valued, continuous, compact-valued correspondence.

With the exception of a few isolated results, we shall also assume the following:

Assumption 2: $(A,\rho_A)$ is a metric space.

We make two remarks on these initial assumptions. First, although optimization problems with primitive data satisfying Assumption 1 are not fully Lipschitzian (in the sense we have defined above), they do have enough Lipschitzian structure that the structural properties of the value function can be characterized (see Theorem (2) below). Second, under Assumption 1, by Berge's Maximum Theorem, both the value function $V(s)$ and the set of optimal solutions $A^*(s)$ are well-defined. For the sake of completeness, we note the following maximum theorem that is appropriate in our setting.

Proposition 1 (Berge, [8], Maximum Theorem, p. 116). Under Assumption 1, in Problem (3.0.1), (i) $V$ is continuous on $S$; and (ii) $A^*$ is upper hemicontinuous on $S$.

In the next two theorems, we consider two such sets of conditions where we can characterize the Lipschitzian properties of the value function. We first consider the case of an unconstrained program, where the action space is a sequentially compact topological space, the parameter space is a metric space, and the objective is appropriately uniformly locally Lipschitz in $s$. In the second case, we let the action space be a metric space, the objective be locally Lipschitz in $(a,s)$, and $D(s)$ be a Lipschitzian correspondence. Let $S_1 \subset S \subset \mathbb{R}^n$ be an open subset, and define an (open) neighborhood of $s \in S_1$ by $N(s,e) = \{s' \mid \rho_S(s,s') < e,\ e > 0\} \subset S_1$. In Theorem (2), we prove our first result on the Lipschitzian properties of the value function:3

Theorem 2. Under Assumption 1, and if (i.a) $f$ is uniformly Lipschitz in $s \in S$ of modulus $k_f$ (resp. (i.b) $f$ is uniformly locally Lipschitz near $s \in S$ of modulus $k_f(s,e)$ on $N(s,e)$), and (ii) $D(s) = A$, then in Problem (3.0.1), $V(s)$ is Lipschitz of modulus $k_f$ on $S$ (resp. $V$ is locally Lipschitz near $s \in S$ of modulus $k_f(s,e)$ on $N(s,e)$).

Although Theorem 2 has important implications in many applications (e.g., see Hinderer [25]), it is often difficult to apply in economic applications. Aside from the obvious restriction to unconstrained optimization problems, the lack of a metric structure for the action space $A$ impedes the modeling of the Lipschitzian variation of the feasible correspondence $D(s)$, and its implied influence on the variation of the value function $V(s)$. As in many economic applications the choice set $A$ is a metric space (e.g., $A \subset \mathbb{R}^n$) and $D(s)$ varies in $s$, we now prove a version of Theorem 2 for the case that (i) the action space $(A,\rho_A)$ is a metric space, (ii) the feasible correspondence $D(s)$ varies with the parameter $s \in S$ (e.g., we allow for constrained optimization problems), and (iii) the objective $f(a,s)$ has local Lipschitz structure in $(a,s)$.4 Note, under these conditions, (3.0.1) is a parameterized local Lipschitzian optimization problem.

3 The proofs of all Lemmata and Theorems, unless noted, are in Appendix A at the end of the paper.
4 Finally, relative to the existing literature, versions of Theorem 3 for the Lipschitz (but not locally Lipschitz) case are studied in both Cornet [12] and Hinderer [25].


Theorem 3. In addition to Assumptions 1 and 2, in Problem (3.0.1), let (i) $f$ be uniformly Lipschitz in $a$ of modulus $k_a$ and uniformly Lipschitz in $s \in S$ of modulus $k_s$ (resp. locally uniformly Lipschitz near $s$ on $N(s,e)$ of modulus $k_s(s,e)$), and (ii) $D$ be a Lipschitz continuous correspondence in $s$ of modulus $k_d$ (resp. a locally Lipschitz correspondence near $s$ on $N(s,e)$ of modulus $k_d(s,e)$) in the Hausdorff metric. Then $V$ is Lipschitz of modulus $k_s + k_ak_d$ (resp. locally Lipschitz near $s \in S$ of modulus $k_s(s,e) + k_ak_d(s,e)$ on $N(s,e)$).

Remark 4 (Cornet [12], Hinderer [25]). In Theorem (3), (i) can be replaced by: $f$ is jointly uniformly Lipschitz of modulus $k_f = \max\{k_s,k_a\}$; with (ii) unchanged, $V$ is then Lipschitz of modulus $k_f(1+k_d)$.

Theorems 2 and 3, although providing sufficient conditions for metric bounds on the Lipschitzian properties of value functions, do not provide sharp bounds in some important problems in economics. Take for example the case of a simple, aggregative, one-sector optimal growth model. For time periods $\mathbf{T} = \{1,2,\ldots,T\}$, the sequence of value functions $\{V_t(k)\}_{t\in\mathbf{T}}$ is generated recursively as
$$V_t(k) = \max_{y\in[0,f(k)]}\ u(f(k)-y) + \beta V_{t+1}(y),\qquad t\in\mathbf{T} \qquad (3.0.2)$$
where we assume $k_0 \in K \subset \mathbb{R}_{++}$, $V_T(k) = u(f(k_T))$, $u(c)$ is the period utility function, $\beta \in (0,1)$ is the discount rate, and $f(k)$ is the production function. If we assume $u(c)$ and $f(k)$ are each real-valued and $C^1$ (and hence locally Lipschitz) on $\mathbb{R}_+$, $u(c)$ is bounded below, $u'(0)$ is bounded, and $f(k)$ is such that there exists a maximal capital stock $\bar k$ that cannot be exceeded as $T \to \infty$, a standard application of the contraction mapping theorem delivers a stationary value function $V_\infty(k)$ for the case that $T \to \infty$. The stationary value function $V_\infty(k)$ corresponds to a fixed point of the standard Bellman operator defined by (3.0.2). We now ask the following question: what is the growth rate in the local Lipschitz modulus of the value function $l_{(k,e)}(V_t(k))$ at a fixed $k > 0$ over time $t \in \mathbf{T}$?

One answer is provided by Theorem (3), where one can build a recursive representation of the evolution of $l_{(k,e)}(V_t(k))$. That is, in (3.0.2), using Theorem (3) for a fixed neighborhood of a point $k > 0$, say $N(k,e)$, we obtain the following law of motion for the local Lipschitz modulus:
$$l_{(k,e)}(V_t(k)) = l_{(k,e)}\big(u(f(k)) + l_{(k,e)}(V_{t+1}(y))\big) \qquad (3.0.3)$$
(Also, see Hinderer [25], Theorem 4.1, for a special case.) If we seek an upper estimate of $l_{(k,e)}(V_\infty(k))$ via the recursive formula in (3.0.3), one would require a strong condition on the growth rate in the Lipschitz moduli of the sequence $\{V_t(k)\}_{t\in\mathbf{T}}$ (e.g., to apply Hinderer [25], Theorem 4.2). Essentially, the difference equation governing the evolution of $l_{(k,e)}(V_t(k))$ in (3.0.3) must satisfy a contraction condition in $l_{(k,e)}(V(k))$ at $k$, for all $t \in \mathbf{T}$. In many dynamic economic models, this will not be the case.

Now, say in addition that $u(c)$ and $f(k)$ are concave. Then $V_t(k)$ and $V_\infty(k)$ are smooth with envelope $V_t'(k) = u'(c^*(k))f'(k)$ (e.g., by Mirman and Zilcha [44], Lemma 1). Further, noting the concavity of $u$, we have
$$l_{(k,e)}(V_t(k)) = l_{(k,e)}(V_{t+1}(k)) = l_{(k,e)}(V_\infty(k)) \le u'(0)f'(k) \qquad (3.0.4)$$
for all $t \in \mathbf{T}$. Therefore, using the classical envelope theorem, the local Lipschitz modulus that satisfies (3.0.3) is easily characterized, namely $l_{(k,e)}(V_\infty(k)) \le u'(0)f'(k)$ for all $k > 0$. Now, say the growth model in (3.0.2) has nonconvexities (e.g., the nonclassical one-sector growth model of Dechert and Nishimura [14]). In this case, there is no classical envelope theorem; yet we can still develop generalized (nonsmooth) estimates of the differential behavior of $V_\infty(k)$ in $k$, and accomplish the same objective (i.e., prove $l_{(k,e)}(V_t(k)) \le u'(0)f'(k)$ for all $t \in \mathbf{T}$; see Section 8 of the paper for details). So, although the local Lipschitz bounds in Theorem (3) are too large, envelope theorems (classical or generalized) can provide differential characterizations of the solutions to (3.0.3) that are sharp. This is just one simple example of why envelope theorems and differential bounds can be important in economic applications, and nondifferential local Lipschitz bounds (via Theorem (3)) are not sufficient.
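The following is a minimal numerical sketch (ours, not part of the original analysis) of the recursion (3.0.2) and the envelope-type bound $u'(0)f'(k)$, under hypothetical primitives: $u(c)=\log(1+c)$ (so $u'(0)=1$ is bounded) and an S-shaped, nonconvex technology $f(k)=Ak^2/(1+k^2)$ in the spirit of the nonclassical growth literature cited above. It iterates a Bellman operator on a grid and compares a crude estimate of the local Lipschitz modulus of the value function with the bound.

```python
import numpy as np

# Hypothetical primitives (assumptions for illustration only): log utility with
# bounded u'(0), and an S-shaped (nonconvex) production function.
A, beta = 20.0, 0.95
u = lambda c: np.log(1.0 + c)
f = lambda k: A * k**2 / (1.0 + k**2)

k_grid = np.linspace(0.0, A, 801)       # f(k) <= A, so the grid is self-contained
V = u(f(k_grid))                        # terminal condition V_T(k) = u(f(k_T))

def bellman(V_next):
    """One step of (3.0.2): V_t(k) = max_{y in [0,f(k)]} u(f(k)-y) + beta*V_{t+1}(y)."""
    V_new = np.empty_like(V_next)
    for i, k in enumerate(k_grid):
        feasible = k_grid[k_grid <= f(k)]           # grid points in D(k) = [0, f(k)]
        cont = np.interp(feasible, k_grid, V_next)  # V_{t+1}(y) by linear interpolation
        V_new[i] = np.max(u(f(k) - feasible) + beta * cont)
    return V_new

def local_modulus(V_vals, k0, eps=0.5):
    """Crude numerical estimate of the local Lipschitz modulus of V near k0."""
    mask = np.abs(k_grid - k0) < eps
    kk, vv = k_grid[mask], V_vals[mask]
    return np.max(np.abs(np.diff(vv)) / np.diff(kk))

for _ in range(60):                     # iterate toward the stationary value function
    V = bellman(V)

k0 = 2.0
fprime = 2.0 * A * k0 / (1.0 + k0**2) ** 2          # f'(k0)
print("estimated local modulus of V at k0:", local_modulus(V, k0))
print("envelope-type bound u'(0)*f'(k0):", 1.0 * fprime)
```

Up to discretization error, the printed modulus estimate should lie below the envelope-type bound, illustrating the discussion above.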

4 Nonlinear Duality and Constraint Qualifications

We now study a central concern of this paper: the differential characterization of the value function for parameterized Lipschitzian optimization problems in finite dimensional Euclidean spaces. The methods we build develop such characterizations of the value function using an appropriate Lagrangian (evaluated at its optimal solutions) and a Lagrange multiplier rule. Prior to studying these differential properties of value functions, we must first prove the existence of an appropriate (nonlinear) duality theory for the Lagrangians associated with the program in (3.0.1).

In this section, we develop such a nonlinear duality theory for problems with inequality constraints.5 Specifically, we prove a key theorem that characterizes the nature of the nonlinear duality present in particular parameterized Lipschitzian versions of the problem in (3.0.1). We show that a standard Lagrangian obtains a zero duality gap and satisfies a local saddlepoint property under a weak constraint qualification. Therefore, we now need to introduce the idea of a constraint qualification, and define the various forms of constraint qualifications that we shall use in this paper.

4.1 Constraint Qualifications

For the rest of the paper, we make two additional assumptions in problem (3.0.1): (i) the action and parameter spaces are finite dimensional and convex, and (ii) $D(s)$ can be enumerated by a collection of smooth functions. In such a case, (3.0.1) becomes:
$$V(s) = \max_{a\in D(s)} f(a,s),\qquad a\in A\subset\mathbb{R}^n,\ s\in S\subset\mathbb{R}^m \qquad (4.1.1)$$
where now $D(s)$ is given by
$$D(s) = \{a \mid g_i(a,s)\le 0,\ i=1,\ldots,p;\ \ h_j(a,s)=0,\ j=1,\ldots,q\}$$
with the functions $g(a,s)$ being the inequality constraints and $h(a,s)$ the equality constraints. We shall make the following assumptions:

Assumption 3: (i) The spaces $(A,\rho_A)$ and $(S,\rho_S)$ are each convex in $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively, and (ii) the constraints $g_i$, $i=1,\ldots,p$, and $h_j$, $j=1,\ldots,q$, are jointly $C^1$.

Assumption 4: The objective and the constraints are defined on a set $A_0$ such that $A \subset A_0$.

The last assumption enables us to extend our results to the boundaries, since for any $a \in A$ the constraints are continuously differentiable and the objective is locally Lipschitz and/or $C^1$. The optimal solution correspondence in Problem (4.1.1) is denoted by $A^* : S \rightrightarrows A$, and defined as follows:
$$A^*(s) = \arg\max_{a\in D(s)} f(a,s)$$

5 We focus on the inequality constraint problem first; later in the paper, we show how, under standard smoothness conditions on the equality constraints (i.e., they are smooth and satisfy a standard implicit function theorem), we can adapt our methods easily to allow for equality constraints.


Under Assumption 3 and a weak constraint qualification, we can prove the existence of a Lagrange multiplier rule for (4.1.1).6 For these conditions, the corresponding Lagrangian for (4.1.1) is defined as
$$L(a,s;\lambda,\mu) = f(a,s) - \lambda^Tg(a,s) - \mu^Th(a,s),\quad\text{if } a\in D(s) \qquad (4.1.2)$$
$$\phantom{L(a,s;\lambda,\mu)} = -\infty,\quad\text{otherwise} \qquad (4.1.3)$$

where $g(a,s) = [g_1(a,s),\ldots,g_p(a,s)]^T$ and $h(a,s) = [h_1(a,s),\ldots,h_q(a,s)]^T$. To obtain envelope theorems and/or differential bounds for the value function, as well as to characterize the local saddlepoint properties of $L(a,s;\lambda,\mu)$, we need to make restrictions on the functions $g$ and $h$ that define the feasible correspondence $D(s)$. In particular, we need constraint qualifications. Under Assumption 3, in (4.1.1), we will consider three types of constraint qualifications:7 (i) the Mangasarian-Fromovitz constraint qualification (MFCQ), (ii) the strict Mangasarian-Fromovitz constraint qualification (SMFCQ), and (iii) the linear independence constraint qualification (LICQ).

The weakest form of constraint qualification is the MFCQ. We say a feasible point $a \in D(s)$ satisfies the Mangasarian-Fromovitz Constraint Qualification (MFCQ) if: (i) the following vectors are linearly independent:
$$\nabla_ah_j(a,s),\quad j=1,\ldots,q$$
and (ii) there exists a $\tilde y \in \mathbb{R}^n$ such that
$$\nabla_ag_i(a,s)\,\tilde y < 0,\ i\in I;\qquad \nabla_ah_j(a,s)\,\tilde y = 0,\ j=1,\ldots,q$$
where $I = \{i : g_i(a,s) = 0\}$.

We also consider two other constraint qualifications that are stronger than MFCQ. The first (and weaker) is the SMFCQ. We say a feasible point $a \in D(s)$ satisfies the Strict Mangasarian-Fromovitz Constraint Qualification (SMFCQ) if: (i) the following vectors are linearly independent:
$$\nabla_ag_i(a,s),\ i\in I_b;\qquad \nabla_ah_j(a,s),\ j=1,\ldots,q$$
and (ii) there exists a $y \in \mathbb{R}^n$ such that
$$\nabla_ag_i(a,s)\,y < 0,\ i\in I_s;\qquad \nabla_ag_i(a,s)\,y = 0,\ i\in I_b;\qquad \nabla_ah_j(a,s)\,y = 0,\ j=1,\ldots,q$$
where $I_b = \{i\in I : \lambda_i > 0\}$, $I_s = \{i\in I : \lambda_i = 0\}$, and $I = \{i : g_i(a,s) = 0\}$.

A third constraint qualification is the strongest we consider, and is the focus of the recent work of Rincon-Zapatero and Santos [50]. We say a feasible point $a \in D(s)$ satisfies the Linear Independence Constraint Qualification (LICQ) if the following vectors are linearly independent:
$$\nabla_ag_i(a,s),\ i\in I;\qquad \nabla_ah_j(a,s),\ j=1,\ldots,q$$
where $I = \{i : g_i(a,s) = 0\}$.

In this section of the paper, we consider only versions of (4.1.1) with inequality constraints. In Section 5, we add the equality constraints $h_j$, $j=1,\ldots,q$, to (4.1.1). For versions of (4.1.1) with only inequality constraints, when the definition of the constraint qualification is modified in the obvious manner, we add a "/R" to the abbreviation that denotes the class of constraint qualification (where "R" refers to this restricted case). So, for example, in the case of the MFCQ, when equality constraints are not present, MFCQ reduces to the following hypothesis (denoted MFCQ/R): there exists a $\tilde y \in \mathbb{R}^n$ such that
$$\nabla_ag_i(a,s)\,\tilde y < 0,\ i\in I$$
where $I = \{i : g_i(a,s) = 0\}$. The SMFCQ/R and LICQ/R are defined in an analogous manner for the restricted case of only inequality constraints.

6 See Theorem X, Section 5 below.
7 For the sake of comparison with Milgrom and Segal [40]: for convex problems, for $s \in S$, we say a constraint system satisfies a Slater condition if there exists a point $a \in D(s)$ such that $h(a,s) = 0$ and $g_i(a,s) < 0$ for all constraints $i$ that are active.
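A simple illustrative example (ours) shows why LICQ is strictly stronger than the Mangasarian-Fromovitz conditions. Let $n = m = 1$, $f(a,s) = -(a-s)^2$, and impose the two inequality constraints $g_1(a,s) = a - s \le 0$ and $g_2(a,s) = 2(a-s) \le 0$ (no equality constraints). At the optimal solution $a^*(s) = s$, both constraints are active with $\nabla_ag_1 = 1$ and $\nabla_ag_2 = 2$, which are linearly dependent, so LICQ fails. However, $\tilde y = -1$ gives $\nabla_ag_i(a^*(s),s)\tilde y < 0$ for $i=1,2$, so MFCQ/R holds; moreover, since $\nabla_af(a^*(s),s) = 0$, the unique KKT multiplier is $\lambda = (0,0)$, so $I_b = \emptyset$ and $y = -1$ also verifies SMFCQ/R. Here $V(s) = 0$ is trivially $C^1$ even though LICQ fails everywhere along the solution.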

4.2 Lagrange Multiplier Rules under the MFCQ

We are now prepared to state our main theorem on nonlinear duality and Lagrangians for the version of (3.0.1) given in (4.1.1). More specifically, in Proposition 5, we show that under the MFCQ/R there exists a simple nonlinear duality theory for (4.1.1), where we have the following: (i) the existence of a Lagrangian of the form (4.1.2); (ii) the fact that this Lagrangian satisfies a zero duality gap and, therefore, can be used to study the structure of the value function in (4.1.1); and (iii) the optimal solutions obtained from (4.1.2) are locally saddlepoint stable.

Proposition 5: At $s \in S$, suppose MFCQ/R holds at an optimal solution $a^*(s) \in A^*(s)$. Then, under Assumptions 1-3, with $f$ locally Lipschitz and no equality constraints, for any direction of perturbation $x \in \mathbb{R}^m$ and any $\varepsilon > 0$, there exists a vector $y(\varepsilon,x)$ such that for all $(\varsigma_a,\varsigma_s) \in \partial_af(a^*(s),s)\times\partial_sf(a^*(s),s)$:
$$\nabla_ag(a^*(s),s)\cdot y + \nabla_sg(a^*(s),s)\cdot x < 0$$
and:
$$(\varsigma_a,\varsigma_s)\cdot(y,x) > \inf_{\lambda\in K} L^o_s(a^*(s),s;\lambda;x)$$
in which:
$$L^o_s(a^*(s),s;\lambda;x) = \max_{\varsigma_s\in\partial_sf(a^*(s),s)}\left[\varsigma_s - \lambda^T\nabla_sg(a^*(s),s)\right]\cdot x$$
Further, $L^o_s(a^*(s),s;\lambda;x)$ has a local saddlepoint, and satisfies a zero duality gap.

Proof. Tarafdar ([59], Theorem 2) shows that for any $x \in \mathbb{R}^m$:
$$S(y,\lambda) = \max_{\varsigma_a\in\partial_af(a^*(s),s)}\big(\varsigma_a - \lambda^T\nabla_ag(a^*(s),s)\big)\cdot y + \max_{\varsigma_s\in\partial_sf(a^*(s),s)}\big(\varsigma_s - \lambda^T\nabla_sg(a^*(s),s)\big)\cdot x$$
is a saddle function with saddle value
$$\inf_{\lambda\ge0}\,\sup_y S(y,\lambda) = \sup_y\,\inf_{\lambda\ge0} S(y,\lambda) = L^o_s(a^*(s),s;\lambda;x)$$
The result follows from Tarafdar ([59], Corollary 7).

We will introduce the equality constraints in the next section. With the nonlinear duality results for the Lagrangian in (4.1.2) established above, and the results of the next section, we will proceed to the question of differential bounds and envelope theorems in (4.1.1).


5 Problems with Equality Constraints

Constrained optimization problems typically deal with both equality and inequality constraints. In this section, we investigate the effects of equality constraints on a constrained nonconvex optimization problem.

Assumption 4: The equality constraints $h_j$, $j=1,\ldots,q$, are differentiable with respect to $a$; the vectors $\nabla_ah_j(a,s)$, $j=1,\ldots,q$, are continuous with respect to $(a,s)$ in a neighborhood of $(a^*(s),s)$; and the vectors $\nabla_ah_j(a^*(s),s)$, $j=1,\ldots,q$, are linearly independent.

In our problem, the number of equality constraints is less than or equal to $n$. If $q = n$ and $\nabla_ah(a^*(s),s)$ is of full rank, then $\nabla_ah(a^*(s),s)$ is invertible, implying a complete characterization of solutions. However, we allow $q < n$, so we use the technique of reduction of variables and then apply the implicit function theorem. We state the following implicit function theorem, which we use in obtaining bounds on, and/or the form of, the directional derivative of the value function.

Proposition 6 (Classical Implicit Function Theorem). Let $h : \mathbb{R}^k\times\mathbb{R}^n \to \mathbb{R}^k$ be $C^1$, and suppose that $h(\bar u,\bar v) = 0$ and $\nabla_uh(\bar u,\bar v)$ has maximal rank. Then there exist open neighborhoods $U$ of $(\bar u,\bar v)$ and $V$ of $\bar v$, and a $C^1$ mapping $w : V \to \mathbb{R}^k$, such that:
$$(x,y)\in U \text{ and } h(x,y)=0 \iff y\in V \text{ and } x = w(y)$$

Our problem with equality and inequality constraints is now:
$$\max_a f(a,s),\qquad a\in\mathbb{R}^n,\ s\in\mathbb{R}^m \qquad (5.0.1)$$
subject to
$$g_i(a,s)\le 0,\ i=1,\ldots,p;\qquad h_j(a,s)=0,\ j=1,\ldots,q$$
Under Assumption 4, when $n \ge q$, the matrix constructed with the $\nabla_ah_j(a,s)$, $j=1,\ldots,q$, has maximal rank. We re-order $a$ such that $a = (a_{-I},a_I)$, $a_{-I}\in\mathbb{R}^q$, $a_I\in\mathbb{R}^{n-q}$, thereby solving the $q$ equality constraints for $q$ unknowns; $a_I$ will be treated as a vector of parameters. Therefore, by the implicit function theorem given in Proposition (6), for $h(a,s) = (h_1(a,s),\ldots,h_q(a,s))$ with $h : \mathbb{R}^n\times\mathbb{R}^m \to \mathbb{R}^q$, we have in a neighborhood of $(a^*(s),s)$:
$$h(a,s) = 0 \iff a = (a_D(a_I,s),a_I)$$

where $a_D$ is locally Lipschitz in $(a_I,s)$ and $a_I \in \mathbb{R}^{n-q}$. Define the new reduced-form objective function $\tilde f(a_I,s) = f(a_D(a_I,s),a_I,s)$. Substituting $a_D(a_I,s)$, we can similarly define a new reduced-form system of constraints $\tilde g(a_I,s)$ and $\tilde h(a_I,s)$. We then define the following reduced-form problem:
$$\max_{a_I}\tilde f(a_I,s),\qquad a_I\in\mathbb{R}^{n-q},\ s\in\mathbb{R}^m \qquad (5.0.2)$$
subject to
$$\tilde g_i(a_I,s)\le 0,\ i=1,\ldots,p$$
in a neighborhood $U$ of $(a_I^*,s)$. Note that for corresponding values $(a,s) \in F(s)$ and $(a_I,s) \in U$, the constraints and the objective functions of the reduced-form program and the original program take the same values (since, by construction, $\tilde h(a_I,s)$ is identically zero on $U$). Thus, these two programs have corresponding feasible points and maxima, the same value function $V(s) = \tilde V(s)$, as well as the same binding inequality constraints at $(a^*(s),s)$ and $(a_I^*,s)$. A correspondence can also be established between the Kuhn-Tucker points of both programs, as the following lemma proves:

Lemma 7: If $\lambda$ is a Kuhn-Tucker vector for (5.0.2) at $a_I^*$, then there exists some $\mu \in \mathbb{R}^q$ such that $(\lambda,\mu)$ is a Kuhn-Tucker vector for NLP at $a^* = (a_D(a_I^*,s),a_I^*)$.

We are now ready to state the correspondence between MFCQ8 for the full nonlinear program NLP, and MFCQ/R for the reduced program (5.0.2).

Lemma 8: If MFCQ holds at some point $(a^*,s)$ for NLP, then MFCQ/R holds at $(a_I^*,s)$ for (5.0.2).

By this last lemma, we can now compare the subgradients of the Lagrangians $\tilde L(a_I,s) = (\tilde f - \lambda^T\tilde g)(a_I,s)$ and $L(a,s) = (f - \lambda^Tg - \mu^Th)(a,s)$. Let $H = \{(a,s) : h(a,s) = 0\}$. For a given $\lambda$ and for any $\mu$, the Lagrangian $\tilde L$ takes the same values in a neighborhood $U$ of $(a_I^*,s)$ as the restriction of $L$ to $H$ near $(a^*,s)$. Therefore:
$$\partial_s\tilde L(a_I^*,s) = \mathrm{co}\{\zeta : \exists\{(a_n,s_n)\}\subset\mathrm{dom}\,\nabla_sL\cap H,\ (a_n,s_n)\to(a^*,s),\ \nabla_sL(a_n,s_n)\to\zeta\} \subset \partial_sL(a^*,s)$$

8 Recall the definition of MFCQ in Section 4.1 of the paper.

Thus, for any direction $x$ of perturbation:
$$\tilde L^o_s(a_I^*,s;\lambda;x) = \max_{\tilde\zeta\in\partial_s\tilde L(a_I^*,s;\lambda)}\tilde\zeta\cdot x \ \le\ \max_{\zeta\in\partial_sL(a^*,s;\lambda,\mu)}\zeta\cdot x = L^o_s(a^*,s;\lambda,\mu;x)$$
Furthermore, since there is an injection between $\tilde K(a_I^*(s),s)$ and $K(a^*(s),s)$, we have:
$$\sup_{\tilde\lambda\in\tilde K(a_I^*(s),s)}\tilde L^o_s(a_I^*,s;\tilde\lambda;x)\ \le\ \sup_{(\lambda,\mu)\in K(a^*(s),s)} L^o_s(a^*,s;\lambda,\mu;x) \qquad (5.0.3)$$
and:
$$\inf_{\tilde\lambda\in\tilde K(a_I^*(s),s)}\{\tilde L^o_s(a_I^*,s;\tilde\lambda;x)\}\ \ge\ \inf_{(\lambda,\mu)\in K(a^*(s),s)}\{L^o_s(a^*,s;\lambda,\mu;x)\} \qquad (5.0.4)$$

Proposition 9: (i) If the feasible set $D(s')$ is uniformly compact for all $s'$ in a neighborhood of $s$ and MFCQ holds at some optimum $a^*(s)$, then the value function is continuous at $s$. (ii) If MFCQ is satisfied for all $a^*(s) \in A^*(s)$, then for any direction $x$ and $t$ small enough, $V(s+tx) = \tilde V(s+tx)$.

Proof. (i) From (Berge, [8], Maximum Theorem, p. 116), $V(\cdot)$ is continuous at $s$. (ii) $V(\cdot)$ is continuous at $s$; therefore, there exist sequences $t_n \to 0$ and $a_n^*(s) \to a^*(s)$ such that for $n$ large enough, $a_n^*(s) \in A^*(s+t_nx)$ for any direction $x$. Since MFCQ is satisfied for all $a^*(s) \in A^*(s)$ and $n$ large enough,
$$V(s+t_nx) = f(a_n^*(s),s+t_nx) = \tilde f(a_{I,n}^*,s+t_nx) = \tilde V(s+t_nx)$$
or, for $t$ small enough,
$$V(s+tx) = \tilde V(s+tx)$$

With these results, we are equipped to state nonsmooth envelope theorems with equality and inequality constraints. In the next section, we generalize the envelope theorems in Tarafdar ([59]) to allow for equality constraints. We also provide the form of the Clarke gradient under different constraint qualifications.
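To illustrate the reduction of variables (a simple example of ours): let $n = 2$, $q = 1$, and $h(a,s) = a_1 + a_2 - s$. Then $\nabla_ah = (1,1)$ has maximal rank, and solving the equality constraint for the dependent variable gives $a_D(a_I,s) = s - a_2$ with $a_I = a_2$. The reduced-form objective is $\tilde f(a_2,s) = f(s-a_2,a_2,s)$ and the reduced-form inequality constraints are $\tilde g_i(a_2,s) = g_i(s-a_2,a_2,s)$, so (5.0.2) becomes an inequality-constrained program in the single variable $a_2$, to which the inequality-constraint results of Section 4 apply directly.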


6 Generalized Differentiability of the Value Function

Our approach to the question of differentiability of the value function is to use the dual structure that underlies the existence of the Lagrangian studied in Proposition (5) and, at its set of optimal solutions, to use that Lagrangian, along with an appropriate constraint qualification, to compute differential bounds and/or directional derivatives of the value function directly from the Lagrangian. Although this procedure is not necessary (e.g., see example (18) at the end of this section), it is a powerful method in our class of problems for constructing differential bounds and envelopes. We begin with the question of differential bounds for the case of smooth constraints and locally Lipschitz objectives.

6.1 Differential Bounds

We will provide conditions under which the value function in Problem (3.0.1) has differential bounds. Specifically, we show the bounds can be read off the Lagrangian. Relative to the issue of differential bounds, the weakest constraint qualification is most appropriate. In Theorem 10 (i) and (ii) below, we first provide a form for the lower and upper differential bounds based upon the Lagrangian at the optimum (along with the associated Lagrange multipliers). MFCQ guarantees that the KKT points are nonempty, compact, and convex valued. Thus we are able to establish only bounds in Theorem 10(iii).9

Theorem 10. Under Assumptions 1-3, for Problem (3.0.1), if (i) $f$ is locally Lipschitz, (ii) $D(s)$ is nonempty and uniformly compact near $s$, and (iii) MFCQ holds at every optimal solution $a^*(s) \in A^*(s)$, then for any direction $x \in \mathbb{R}^m$ of perturbation we have the following:
$$\text{(i)}\quad \liminf_{t\to 0}\frac{V(s+tx)-V(s)}{t} \ \ge\ \inf_{(\lambda,\mu)\in K(a^*(s),s)} L^o_s(a^*(s),s;\lambda,\mu;x),$$
and
$$\text{(ii)}\quad \limsup_{t\to 0}\frac{V(s+tx)-V(s)}{t} \ \le\ \sup_{(\lambda,\mu)\in K(a^*(s),s)}\{L^o_s(a^*(s),s;\lambda,\mu;x)\};$$
therefore,
$$\text{(iii)}\quad \sup_{a^*(s)\in A^*(s)}\ \min_{(\lambda,\mu)\in K(a^*(s),s)}\{L^o_s(a^*(s),s;\lambda,\mu;x)\}\ \le\ D_+V(s;x)\ \le\ D^+V(s;x)\ \le\ \sup_{a^*(s)\in A^*(s)}\ \max_{(\lambda,\mu)\in K(a^*(s),s)}\{L^o_s(a^*(s),s;\lambda,\mu;x)\}$$
where
$$L^o_s(a^*(s),s;\lambda,\mu;x) = \max_{\varsigma_s\in\partial_sf(a^*(s),s)}\left[\varsigma_s - \lambda^T\nabla_sg(a^*(s),s) - \mu^T\nabla_sh(a^*(s),s)\right]\cdot x$$

9 We should note that we can obtain differential bounds and directional derivatives for more general constraints, namely locally Lipschitz inequality constraints (with smooth equality constraints). In particular, we can use the nonsmooth SMFCQ in Auslender ([5]) to extend the results on directional derivatives in the next section, and a similar nonsmooth MFCQ to apply the results of Fontanie [20] to obtain differential bounds. We defer a discussion of these results to subsequent work.

Proof. (i) By construction, $V(s') \ge \tilde V(s')$ in a neighborhood of $s$ and $V(s) = \tilde V(s)$, so that:
$$\liminf_{t\to0}\frac{V(s+tx)-V(s)}{t} \ \ge\ \liminf_{t\to0}\frac{\tilde V(s+tx)-\tilde V(s)}{t} \ \ge\ \inf_{\tilde\lambda\in\tilde K(a_I^*)}\tilde L^o_s(a_I^*,s;\tilde\lambda;x)$$
where the second inequality follows from ([59], Theorem 8); and from (5.0.4),
$$\inf_{\tilde\lambda\in\tilde K(a_I^*(s),s)}\{\tilde L^o_s(a_I^*,s;\tilde\lambda;x)\} \ \ge\ \inf_{(\lambda,\mu)\in K(a^*(s),s)}\{L^o_s(a^*,s;\lambda,\mu;x)\}$$
Hence the result follows.
(ii) As in ([59], Theorem 8), we choose a sequence $\{t_n\}$ converging to 0 such that:
$$\limsup_{t\to0}\frac{V(s+tx)-V(s)}{t} = \lim_{n\to\infty}\frac{V(s+t_nx)-V(s)}{t_n} = \lim_{n\to\infty}\frac{\tilde V(s+t_nx)-\tilde V(s)}{t_n} = \limsup_{t\to0}\frac{\tilde V(s+tx)-\tilde V(s)}{t} = \sup_{\tilde\lambda\in\tilde K(a_I^*(s),s)}\tilde L^o_s(a_I^*,s;\tilde\lambda;x)$$
The second equality follows from Proposition (9), the third from the property of the sequence $\{t_n\}$ converging to 0, and the fourth from ([59], Theorem 8). Also, we have from relation (5.0.3),
$$\sup_{\tilde\lambda\in\tilde K(a_I^*(s),s)}\tilde L^o_s(a_I^*,s;\tilde\lambda;x) \ \le\ \sup_{(\lambda,\mu)\in K(a^*(s),s)} L^o_s(a^*,s;\lambda,\mu;x)$$
and we get the result.
(iii) MFCQ holds for all $a^*(s) \in A^*(s)$; hence the result follows.

The Dini derivatives are bounded above and below, so the value function is locally Lipschitz in $s$. The next corollary characterizes the generalized gradient of the value function. With $f$ only locally Lipschitz and only MFCQ imposed, the hypotheses are too weak to obtain the exact form of the generalized gradient. As seen below, the Clarke gradient of the value function is contained in the Clarke gradient of the Lagrangian.

Corollary 11. If (i) $f$ is locally Lipschitz, (ii) $D(s)$ is nonempty and uniformly compact near $s$, and (iii) MFCQ holds at every optimal solution $a^*(s) \in A^*(s)$, then
$$\partial V(s) \subset \bigcup_{a^*(s)\in A^*(s)}\ \bigcup_{(\lambda,\mu)\in K(a^*(s),s)} \partial_s\big(f - \lambda^Tg - \mu^Th\big)(a^*(s),s)$$

Proof. Let $\zeta \in \partial V(s)$ and $\{s_n\}$ be a sequence converging to $s$ such that $\nabla V(s_n) \to \zeta$. From Theorem (10, (ii)), for any direction $x$, $a_n^*(s_n) \in A^*(s_n)$, and $(\lambda_n,\mu_n) \in K(a_n^*(s_n),s_n)$, we have
$$\nabla V(s_n)\cdot x = D^+V(s_n;x) \le \max_{\varsigma_{s_n}\in\partial_sf(a_n^*(s_n),s_n)}\left[\varsigma_{s_n} - \lambda_n^T\nabla_sg(a_n^*(s_n),s_n) - \mu_n^T\nabla_sh(a_n^*(s_n),s_n)\right]\cdot x \qquad (6.1.1)$$
To simplify notation, we denote $\varsigma_{s_n}(a_n^*(s_n),s_n) \in \partial_sf(a_n^*(s_n),s_n)$ by $\varsigma_{s_n}$. Since $D(s)$ is nonempty and uniformly compact and MFCQ holds at all optimal points, from ([23], Theorem 2.9; [21], Corollary 3.6) there exist a subsequence $\{a_n^*(s_n),s_n,\lambda_n,\mu_n\}$, an $a^*(s) \in A^*(s)$, and $(\lambda,\mu) \in K(a^*(s),s)$ such that $a_n^*(s_n) \to a^*(s)$, $s_n \to s$, and $(\lambda_n,\mu_n) \to (\lambda,\mu)$. Hence, by taking the limit of (6.1.1), we get
$$\zeta\cdot x \ \le\ \max_{\varsigma_s\in\partial_sf(a^*(s),s)}\left[\varsigma_s - \lambda^T\nabla_sg(a^*(s),s) - \mu^T\nabla_sh(a^*(s),s)\right]\cdot x \ \le\ \max_{(\lambda,\mu)\in K(a^*(s),s)}\ \max_{\varsigma_s\in\partial_sf(a^*(s),s)}\left[\varsigma_s - \lambda^T\nabla_sg(a^*(s),s) - \mu^T\nabla_sh(a^*(s),s)\right]\cdot x$$
Note $K(a^*(s),s)$ is compact. The above condition holds for all $a^*(s) \in A^*(s)$; thus
$$\zeta\cdot x \ \le\ \sup_{a^*(s)\in A^*(s)}\ \max_{(\lambda,\mu)\in K(a^*(s),s)}\ \max_{\eta\in\partial_sL(a^*(s),s)}[\eta\cdot x]$$
Note that $\partial_sL(a^*(s),s)$ is the Clarke generalized gradient of the Lagrangian (4.1.2). Since $\partial_sL(a^*(s),s)$ is convex, from ([51], Theorem 32.2) we conclude
$$\zeta\cdot x \ \le\ \max\left\{\eta\cdot x : \eta\in\mathrm{co}\Big\{\bigcup_{a^*(s)\in A^*(s)}\partial_sL(a^*(s),s;\lambda,\mu)\Big\}\right\}$$
Thus,
$$\zeta \in \mathrm{co}\Big\{\bigcup_{a^*(s)\in A^*(s)}\{\eta : \eta\in\partial_sL(a^*(s),s)\}\Big\}$$
implying the result.

In the next section, we give conditions under which value functions are directionally differentiable and/or Clarke regular.
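Before turning to directional differentiability, a simple example (ours) illustrates the containment in Corollary 11. Let $A = [-1,1]$, $D(s) = A$ (so the constraints $a - 1 \le 0$ and $-a - 1 \le 0$ do not depend on $s$), and $f(a,s) = as$. Then $V(s) = |s|$, and at $s = 0$ every $a \in [-1,1]$ is optimal, with $\partial_sf(a,0) = \{a\}$ and all multiplier terms vanishing (the constraint gradients with respect to $s$ are zero). The right-hand side of Corollary 11 is therefore $\cup_{a\in[-1,1]}\{a\} = [-1,1]$, which indeed contains (here, equals) $\partial V(0) = [-1,1]$.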

6.2 Directional Differentiability

To obtain conditions in (4.1.1) for the value function $V(s)$ to be either Clarke directionally differentiable and/or directionally differentiable, stronger conditions on the primitive data are needed. In particular, the constraint system that defines $D(s)$ needs to satisfy a stronger constraint qualification (namely, SMFCQ). In addition, further regularity conditions on the objective function (as it is well known that a locally Lipschitz function is not necessarily directionally differentiable) are helpful. In this section, we introduce these additional assumptions and sharpen our differential bounds and differentiability results. We then conclude this section with our main theorem, Theorem (13), which gives our sufficient conditions for the value function $V(s)$ in (4.1.1) to be directionally differentiable. We believe those conditions are the most general available in the literature.

6.2.1 Differential Bounds Revisited

We begin by reconsidering Theorem 10 under the additional condition that the objective is Clarke regular in the parameter $s$ for each $a \in A$. Lemma 12 improves greatly upon the characterization of the differential bounds in Theorem (10). In particular, under the assumption that the objective is Clarke regular in $s$ for each $a \in A$, we can use the directional differential structure of the Lagrangian $L'_s(a^*(s),s;\lambda,\mu;x)$ to characterize the lower and upper bounds for the Lipschitz variation in $V(s)$:

Lemma 12. Under Assumptions 1-3, MFCQ, and given an objective function that is locally Lipschitz, directionally differentiable in both $a$ and $s$, and Clarke regular in $s$, we have the following result. For any direction of perturbation $x$:
$$\text{(i)}\quad D_+V(s;x) = \liminf_{t\to0}\frac{V(s+tx)-V(s)}{t} \ \ge\ \inf_{(\lambda,\mu)\in K(a^*(s),s)} L'_s(a^*(s),s;\lambda,\mu;x)$$
$$\text{(ii)}\quad D^+V(s;x) = \limsup_{t\to0}\frac{V(s+tx)-V(s)}{t} \ \le\ \sup_{(\lambda,\mu)\in K(a^*(s),s)}\{L'_s(a^*(s),s;\lambda,\mu;x)\}$$
where
$$L'_s(a^*(s),s;\lambda,\mu;x) = f'_s(a^*(s),s;x) - \left[\lambda^T\nabla_sg(a^*(s),s) + \mu^T\nabla_sh(a^*(s),s)\right]\cdot x$$
and (iii) the generalized gradient of $V(s)$ satisfies
$$\partial V(s) \subset \bigcup_{a^*(s)\in A^*(s)}\ \bigcup_{(\lambda,\mu)\in K(a^*(s),s)} \partial_s\big(f - \lambda^Tg - \mu^Th\big)(a^*(s),s)$$

Proof. Note that by definition,
$$\max_{\varsigma_s\in\partial_sf(a^*(s),s)}\varsigma_s\cdot x = f^\circ_s(a^*(s),s;x) = f'_s(a^*(s),s;x)$$
Clarke regularity implies the second equality. Substituting the above relation in Theorem (10) gives the results in (i) and (ii). Part (iii) follows from Corollary (11).

6.2.2 Directional Differentiability of the Value Function

In ([59], Theorem 5) we show that if $f$ is locally Lipschitz and jointly Clarke regular and MFCQ is satisfied, then there exists a unique element of the Clarke gradient of $f(a^*(s),s)$ that satisfies the first-order condition. Further, there is a unique Kuhn-Tucker multiplier associated with this unique Clarke gradient under SMFCQ. See ([59]) for details. This enables us to give conditions for directional differentiability of the value function.

Theorem 13. Under Assumptions 1-3, in Problem (3.0.1), if (i) $f$ is locally Lipschitz and Clarke regular jointly in $(a,s)$, (ii) $D(s)$ is nonempty and uniformly compact near $s$, and (iii) SMFCQ holds at every optimal solution $a^*(s) \in A^*(s)$, then for any direction $x \in \mathbb{R}^m$,
$$V'(s;x) = \max_{a^*(s)\in A^*(s)}\{L'_s(a^*(s),s;\lambda,\mu;x)\}$$
where
$$L'_s(a^*(s),s;\lambda,\mu;x) = f'_s(a^*(s),s;x) - \left[\lambda^T\nabla_sg(a^*(s),s) + \mu^T\nabla_sh(a^*(s),s)\right]\cdot x$$

Proof. Since SMFCQ holds for all $a^*(s) \in A^*(s)$, obviously MFCQ holds for all the elements of $A^*(s)$. From Lemma (12), we have:
$$\sup_{a^*(s)\in A^*(s)}\ \min_{(\lambda,\mu)\in K(a^*(s),s)}\{L'_s(a^*(s),s;\lambda,\mu;x)\}\ \le\ D_+V(s;x)\ \le\ D^+V(s;x)\ \le\ \sup_{a^*(s)\in A^*(s)}\ \max_{(\lambda,\mu)\in K(a^*(s),s)}\{L'_s(a^*(s),s;\lambda,\mu;x)\}$$
Appealing to Theorem 6 in Tarafdar [59], the corresponding $K(a^*(s),s)$ is a singleton for each $(a^*(s),s)$. Thus, in the above inequality, we have
$$\sup_{a^*(s)\in A^*(s)}\ \min_{(\lambda,\mu)\in K(a^*(s),s)} L'_s(a^*(s),s;\lambda,\mu;x) = \sup_{a^*(s)\in A^*(s)}\ \max_{(\lambda,\mu)\in K(a^*(s),s)} L'_s(a^*(s),s;\lambda,\mu;x) = \sup_{a^*(s)\in A^*(s)} L'_s(a^*(s),s;\lambda,\mu;x)$$
Since the supremum will be attained (e.g., Corollary 4.3, [21]), we conclude
$$\sup_{a^*(s)\in A^*(s)} L'_s(a^*(s),s;\lambda,\mu;x) = \max_{a^*(s)\in A^*(s)}\{L'_s(a^*(s),s;\lambda,\mu;x)\}$$
Hence,
$$D_+V(s;x) = D^+V(s;x) = \max_{a^*(s)\in A^*(s)}\{L'_s(a^*(s),s;\lambda,\mu;x)\}$$
Thus, the upper and lower right Dini derivatives of the value function are equal. Finally, as the upper and lower right Dini derivatives are equal, by definition we have
$$D_+V(s;x) = D^+V(s;x) = V'(s;x) = \max_{a^*(s)\in A^*(s)}\{L'_s(a^*(s),s;\lambda,\mu;x)\}$$
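As an illustration of Theorem 13 (a simple example of ours, with smooth data, so it is also covered by Theorem 17 below): let $f(a,s) = -(a-1)^2$, with the single inequality constraint $g(a,s) = a - s \le 0$ and no equality constraints. For $s < 1$ the constraint binds, $a^*(s) = s$, the unique multiplier is $\lambda(s) = 2(1-s) > 0$, and $L'_s(a^*(s),s;\lambda;x) = -\lambda\,\nabla_sg\cdot x = 2(1-s)x$, which agrees with $V(s) = -(1-s)^2$ and $V'(s;x) = 2(1-s)x$. For $s > 1$, $a^*(s) = 1$ is interior, $\lambda = 0$, and $V'(s;x) = 0$. At $s = 1$ the constraint is active with unique multiplier $\lambda = 0$, SMFCQ holds, and the formula correctly delivers $V'(1;x) = 0$ in both directions.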

6.2.3 Generalized Gradient of the Value Function

We now give the exact form of the generalized gradient of the value function. Since the Dini derivatives of the value function are bounded both above and below, $V(s)$ is locally Lipschitz at $s$, which implies that the generalized directional derivative exists. Moreover, we show in Corollary (14) that the value function is Clarke regular.

Corollary 14 If (i) $f$ is locally Lipschitz and Clarke regular, (ii) $D(s)$ is nonempty and uniformly compact near $s$, and (iii) SMFCQ holds for every optimal solution $a^*(s)\in A^*(s)$, then

(i) $\partial V(s) = \bigcup_{a^*(s)\in A^*(s)} \partial_s (f - \lambda^T g - \mu^T h)(a^*(s),s)$, and

(ii) $V(s)$ is Clarke regular.

Proof. From Corollary (11) we know
$$\partial V(s) \subseteq \bigcup_{a^*(s)\in A^*(s)} \partial_s (f - \lambda^T g - \mu^T h)(a^*(s),s). \tag{6.2.1}$$
Theorem (13) states
$$V'(s;x) = \max_{a^*(s)\in A^*(s)} \left\{ f_s^{\circ}(a^*(s),s;x) - \lambda^T \nabla_s g(a^*(s),s)\, x - \mu^T \nabla_s h(a^*(s),s)\, x \right\} = \max_{a^*(s)\in A^*(s)}\ \max_{\zeta_s \in \partial_s f(a^*(s),s)} \left( \zeta_s - \lambda^T \nabla_s g(a^*(s),s) - \mu^T \nabla_s h(a^*(s),s) \right) x.$$
The second equality follows from $f$ being Clarke regular and the definition of the generalized gradient. Consider any
$$\xi \in \bigcup_{a^*(s)\in A^*(s)} \left\{ \partial_s f(a^*(s),s) - \lambda^T \nabla_s g(a^*(s),s) - \mu^T \nabla_s h(a^*(s),s) \right\}. \tag{6.2.2}$$
Then, for any direction $x$, from Theorem (13) and the relation between the directional derivative and the generalized directional derivative,
$$\xi\, x \;\le\; \max_{a^*(s)\in A^*(s)} \left( \zeta_s - \lambda^T \nabla_s g(a^*(s),s) - \mu^T \nabla_s h(a^*(s),s) \right) x \;=\; V'(s;x) \;\le\; V^{\circ}(s;x),$$
where $\zeta_s \in \partial_s f(a^*(s),s)$. Concluding from the definition of the generalized gradient, any $\xi$ satisfying (6.2.2) is also an element of the generalized gradient of $V$, that is, $\xi \in \partial V(s)$. The above and relation (6.2.1) give the first part of the result.

(ii) By definition,
$$V^{\circ}(s;x) = \max_{\xi \in \partial V(s)} \xi\, x = \max\left\{ \xi\, x : \xi \in CO \bigcup_{a^*(s)\in A^*(s)} \partial_s L(a^*(s),s;\lambda,\mu) \right\} = \max_{a^*(s)\in A^*(s)} L_s^{\circ}(a^*(s),s;\lambda,\mu;x),$$
where $CO$ denotes the convex hull. Note that $\partial_s f(a^*(s),s)$ is convex; Rockafellar ([51], Theorem 32.2) gives the second equality, and the third follows from Theorem (13). Thus $V^{\circ}(s;x) = V'(s;x)$ in every direction $x$, so $V(s)$ is Clarke regular. $\blacksquare$

Unlike in Corollary (11), we now have the exact form of the Clarke gradient of the value function, given by the Clarke gradient of the Lagrangian. This makes the calculation of the Clarke gradient of the value function easier. Clarke regularity also makes the calculation of the Clarke generalized directional derivative much simpler.
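As a hedged illustration of Corollary 14 (our own toy example, not the paper's), take $V(s) = \max_{a \in \{-1,1\}} s\,a = |s|$. At $s = 0$ both feasible points are optimal, the union over optimal solutions of $\nabla_s L(a^*,s) = a^*$ is $\{-1,+1\}$, and its convex hull $[-1,1]$ is exactly the Clarke gradient of $|s|$ at $0$. The sketch below checks the support-function identity $V^{\circ}(0;x) = \max_{\xi} \xi x$ numerically.

import numpy as np

# Two-point feasible set; grad_s L(a*, s) = a* since there are no constraints in s.
A = np.array([-1.0, 1.0])

def argmax_set(s, tol=1e-12):
    vals = s * A
    return A[vals >= vals.max() - tol]

s = 0.0
union = sorted(argmax_set(s))             # {-1.0, 1.0}: union over optimal solutions
for x in (1.0, -0.5):
    t = 1e-7
    dini = (abs(s + t * x) - abs(s)) / t  # directional derivative of V(s) = |s| at 0
    support = max(xi * x for xi in union) # max over the union equals max over its convex hull
    print(x, dini, support)               # the two numbers agree (Clarke regularity)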

6.3 The Smooth Objective Case Revisited

We can now sharpen the results in Gauvin and Dubeau [21] for the case of smooth objective functions, using a special case of Theorem 13. To do this, we assume all the primitive data $f, g$ are $C^1$. We first prove a new version of Lemma 10 on differential bounds under the smoothness conditions on the objective. The new versions of the lemmas for the lower and upper differential bounds are as follows:

Lemma 15 Under Assumptions 1-3, the MFCQ, and $f$ being $C^1$, we have the following for every optimal solution $a^*(s)\in A^*(s)$ and any direction of perturbation $x$:

(i) $D_+V(s;x) = \liminf_{t\downarrow 0} \dfrac{V(s+tx)-V(s)}{t} \;\ge\; \inf_{(\lambda,\mu)\in K(a^*(s),s)} \nabla_s L(a^*(s),s;\lambda,\mu)\,x$, and

(ii) $D^+V(s;x) = \limsup_{t\downarrow 0} \dfrac{V(s+tx)-V(s)}{t} \;\le\; \sup_{(\lambda,\mu)\in K(a^*(s),s)} \nabla_s L(a^*(s),s;\lambda,\mu)\,x$,

where $\nabla_s L(a^*(s),s;\lambda,\mu)\,x = \nabla_s f(a^*(s),s)\,x - \lambda^T \nabla_s g(a^*(s),s)\,x - \mu^T \nabla_s h(a^*(s),s)\,x$.

Lemma (15) provides exact differential bounds for the value function. To state conditions under which the value function is directionally differentiable, we impose SMFCQ. Under SMFCQ, we obtain a unique relationship between any optimal solution $a^*(s) \in A^*(s)$ and its associated KKT point. This result requires an important and well-known result due to Kyparisis ([37]), which we state in the next proposition.

Proposition 16 (Kyparisis, Proposition 1): Under Assumptions 1-3, in Problem (3.0.1), if $f$ is $C^1$, then the SMFCQ condition holds for every optimal solution $a^*(s) \in A^*(s)$ if and only if the set of KKT points $K(a^*(s),s)$ is a singleton.

We make two remarks at this point on the importance of the proposition of Kyparisis [37]. First, in Rincon-Zapatero and Santos [50], as they note, the key ingredient in their proof of the $C^1$ differentiability of the value function is obtaining conditions under which the KKT points are globally unique (e.g., see [50], Theorem 3.2). In their proof of uniqueness of the KKT multipliers, they appeal to LICQ. Proposition 16 makes it clear that LICQ is not the appropriate constraint qualification relative to the question of globally unique KKT points. We shall pursue this point much more in the next section of the paper. Second, the existence of a unique relationship between KKT multipliers and their optimal solutions $a^*(s)$ enables us to give conditions under which the upper and lower right Dini derivatives of the value function are equal. This, in turn, provides conditions for a directionally differentiable value function.

Theorem 17 Under Assumptions 1-3, in Problem (3.0.1), if (i) $f$ is $C^1$, (ii) $D(s)$ is nonempty and uniformly compact near $s$, and (iii) SMFCQ holds for every optimal solution $a^*(s) \in A^*(s)$, then for any direction $x \in \mathbb{R}^m$
$$V'(s;x) = \max_{a^*(s)\in A^*(s)} \left\{ \nabla_s L(a^*(s),s;\lambda,\mu)\, x \right\}.$$

Proof. Since SMFCQ holds for all $a^*(s) \in A^*(s)$, from Lemma 15 we have
$$\sup_{a^*(s)\in A^*(s)}\ \min_{(\lambda,\mu)\in K(a^*(s),s)} \left\{ \nabla_s L(a^*(s),s;\lambda,\mu)\, x \right\} \;\le\; D_+V(s;x) \;\le\; D^+V(s;x) \;\le\; \sup_{a^*(s)\in A^*(s)}\ \max_{(\lambda,\mu)\in K(a^*(s),s)} \left\{ \nabla_s L(a^*(s),s;\lambda,\mu)\, x \right\}.$$
Appealing to Proposition 16, $K(a^*(s),s)$ is a singleton for each $(a^*(s),s)$. Thus, in the above inequality, the inner minimum and maximum coincide, so both outer bounds equal
$$\sup_{a^*(s)\in A^*(s)} \nabla_s L(a^*(s),s;\lambda,\mu)\, x.$$
Since the supremum is attained (Corollary 4.3, [21]), we obtain
$$\sup_{a^*(s)\in A^*(s)} \left\{ \nabla_s L(a^*(s),s;\lambda,\mu)\, x \right\} = \max_{a^*(s)\in A^*(s)} \left\{ \nabla_s L(a^*(s),s;\lambda,\mu)\, x \right\}.$$
Hence,
$$D_+V(s;x) = D^+V(s;x) = \max_{a^*(s)\in A^*(s)} \left\{ \nabla_s L(a^*(s),s;\lambda,\mu)\, x \right\}.$$

Thus, the upper and lower right Dini derivatives of the value function are equal. $\blacksquare$

We now give an interesting example of a consumer's problem under nonlinear pricing, where the feasible set is convex for all income levels $m$, the objective is strictly quasiconcave, and optimal solutions are interior, but SMFCQ fails. So the value function is not differentiable. It can be shown to be directionally differentiable by direct calculation, but Theorem 17 does not apply, so we cannot use the directional derivatives of the Lagrangian to calculate the directional derivative directly. Using the actual directional derivatives, though, we show that the differential bounds in Lemma 15, which do apply to the problem, are actually tight.

Example 18 (Consumer's problem with $V(s)$ directionally differentiable, but not $C^1$). Preferences are $u: \mathbb{R}^2_+ \to \mathbb{R}$, with $u(x,y) = xy$. The prices are parametric and given as follows:
$$p_x = 1 \text{ if } x \le 5, \qquad p_x = 2 \text{ if } x > 5,$$
$$p_y = 1 \text{ if } y \le 5, \qquad p_y = 2 \text{ if } y > 5.$$
Take income $m \in [8,12]$. Then, the budget correspondence can be enumerated as:
$$x + y - m \le 0 \qquad (A1)$$
$$2x - 5 + y - m \le 0 \qquad (A2)$$
$$x + 2y - 5 - m \le 0 \qquad (A3)$$
$$2x - 5 + 2y - 5 - m \le 0 \qquad (A4)$$
Then, the consumer's version of problem (4.1.1) is given as follows:
$$\max_{x,y}\; xy$$
subject to the constraints (A). Denote the Kuhn-Tucker multipliers corresponding to the above constraints, respectively, as $\lambda_i$, for $i = 1,2,3,4$. The Lagrangian in (4.1.2) is:
$$L(x,y,\lambda_1,\lambda_2,\lambda_3,\lambda_4) = xy - \lambda_1(x+y-m) - \lambda_2(2x-5+y-m) - \lambda_3(x+2y-5-m) - \lambda_4(2x-5+2y-5-m).$$
The (unique) optimal solution of the Lagrangian problem is given as:
$$(x^*(m), y^*(m), \lambda_1, \lambda_2, \lambda_3, \lambda_4) = \left( \tfrac{m}{2}, \tfrac{m}{2}, \tfrac{m}{2}, 0, 0, 0 \right), \quad m < 10;$$
$$= \left( \tfrac{m}{2}, \tfrac{m}{2}, \lambda_1, 0, 0, \lambda_4 \right), \quad \lambda_1 + 2\lambda_4 = \tfrac{m}{2}, \quad m = 10;$$
$$= \left( \tfrac{m+10}{4}, \tfrac{m+10}{4}, 0, 0, 0, \tfrac{m+10}{8} \right), \quad m > 10.$$
For $m < 10$, only (A1) is binding; hence, LICQ is satisfied. Dually, at $m > 10$, only (A4) is binding, and LICQ is satisfied. But at $m = 10$, all constraints are binding, yet only (A1) and (A4) are active (as the others are saturated). Therefore, at $m = 10$, both LICQ and SMFCQ are violated. Hence, Theorem 17 does not apply. MFCQ does hold, so Lemma 15 does apply. Using the optimal solutions, the value function is
$$V(m) = \frac{m^2}{4}, \quad m \le 10; \qquad V(m) = \frac{(m+10)^2}{16}, \quad m > 10,$$
and it is directionally differentiable. Using the differential bounds of Lemma 15, the Lagrangian gives a bound on the directional derivative in direction $d$:
$$\min \left\{ \nabla_m L(x^*(m), y^*(m), m; \lambda_1,\lambda_2,\lambda_3,\lambda_4)\, d \right\} \;\le\; V'(m; d) \;\le\; \max \left\{ \nabla_m L(x^*(m), y^*(m), m; \lambda_1,\lambda_2,\lambda_3,\lambda_4)\, d \right\},$$
where the extrema are taken over the Kuhn-Tucker multipliers $(\lambda_1,\lambda_2,\lambda_3,\lambda_4) \ge 0$ at $(x^*(m), y^*(m))$. Therefore, at $m = 10$, the bounds are
$$2.5\,d \le V'(10; d) \le 5\,d, \quad \text{if } d > 0,$$
$$5\,d \le V'(10; d) \le 2.5\,d, \quad \text{if } d < 0.$$
Alternatively, as $V(m)$ is directionally differentiable in direction $d$, we can directly verify the bounds:
$$V'(10; d) = 2.5\,d, \quad d > 0; \qquad V'(10; d) = 5\,d, \quad d < 0.$$
Therefore, the bounds given by Lemma 15 are tight.
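The closed forms in Example 18 can be checked numerically. The following sketch (ours; the grid resolution and step sizes are arbitrary choices) solves the consumer's problem by brute force on a grid and recovers both the value function and the one-sided derivatives $2.5$ and $5$ at $m = 10$.

import numpy as np

# Grid check of Example 18: max xy s.t. (A1)-(A4), compared with the closed form
# V(m) = m^2/4 for m <= 10 and (m+10)^2/16 for m > 10.
grid = np.linspace(0.0, 12.0, 1201)
X, Y = np.meshgrid(grid, grid, indexing="ij")

def V(m):
    feas = (X + Y <= m) & (2*X - 5 + Y <= m) & (X + 2*Y - 5 <= m) & (2*X + 2*Y - 10 <= m)
    return np.max(np.where(feas, X * Y, -np.inf))

for m in (9.0, 10.0, 11.0):
    closed = m**2 / 4 if m <= 10 else (m + 10)**2 / 16
    print(m, V(m), closed)                 # grid maximum is close to the closed form

# One-sided derivatives of V at m = 10 from the closed-form branches:
t = 1e-6
right = ((10 + t + 10)**2 / 16 - 25) / t   # approximately 2.5
left  = (25 - (10 - t)**2 / 4) / t         # approximately 5
print(right, left)                         # the Lemma 15 bounds [2.5, 5] are attained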

7 $C^1$ Differentiability of the Value Function

We now consider sufficient conditions for the existence of the classical envelope theorem (i.e., the $C^1$ differentiability of the value function). The most general result in the literature providing conditions for the existence of a classical envelope has been provided in a recent paper by Rincon-Zapatero and Santos ([50], Theorems 3.1 and 3.2).$^{10}$ In their work, they allow for noninterior optimal solutions and inequality constraints. Special cases of their result are found in Mirman and Zilcha [44], Benveniste and Scheinkman [6], and Milgrom and Segal [40]. We use a version of Theorem 17 to improve upon the result in Rincon-Zapatero and Santos [50]. We then provide a simple example, namely a modification of the consumer's problem above under price rationing, to exhibit a version of the programming problem in (4.1.1) where the value function is smooth, yet the main theorem in Rincon-Zapatero and Santos [50] does not apply.

$^{10}$ Rincon-Zapatero and Santos [50] study differentiability of the value function in the context of a stochastic dynamic programming problem. As argued in our companion paper, Morand, Reffett, and Tarafdar [43], their result is precisely Gauvin and Dubeau ([21], Corollary 4.4) when the regularity conditions in Rincon-Zapatero and Santos [50] that guarantee unique optimal solutions are imposed.

Theorem 19 Under Assumptions 1-3, if (i) $f$ is $C^1$, $f$ is strictly quasiconcave, and $g$ is quasiconvex, (ii) $D(s)$ is uniformly compact, and (iii) SMFCQ holds for every optimal solution $a^*(s) \in A^*(s)$, then $V(s)$ is strictly differentiable (i.e., $C^1$) in $s$.

Proof. From Theorem (17), we know that under SMFCQ, for every $a^*(s) \in A^*(s)$ and any direction $x \in \mathbb{R}^m$, the directional derivative exists and is given by
$$V'(s;x) = \max_{a^*(s)\in A^*(s)} \left\{ \nabla_s L(a^*(s),s;\lambda,\mu)\, x \right\}.$$
By the strict quasiconcavity hypothesis on $f$ and the quasiconvexity of $g$, the set of optimal solutions $A^*(s)$ is a singleton for each $s \in S$. Hence, in any direction $x \in \mathbb{R}^m$,
$$V'(s;x) = \nabla_s L(a^*(s),s;\lambda,\mu)\, x.$$
Consequently, $D_s V(s) = \nabla_s L(a^*(s),s;\lambda,\mu)$. $\blacksquare$

We now give an example showing that the hypotheses of Rincon-Zapatero and Santos [50] are too strong for some economic applications. The example modifies the preferences in Example 18, which changes the nature of the constraint qualification present. In particular, SMFCQ does hold for this new example, but LICQ does not.

Example 20 (Consumer's problem under price rationing with a $C^1$ value function, but LICQ fails). We reconsider $u: \mathbb{R}^2_+ \to \mathbb{R}$ given by $u(x,y) = xy^2$. The prices are as follows:
$$p_x = 1 \text{ if } x \le 5, \qquad p_x = 2 \text{ if } x > 5,$$
$$p_y = 1 \text{ if } y \le 5, \qquad p_y = 2 \text{ if } y > 5.$$
Income is $m \in [8,12]$. The budget correspondence $D(p(x,y), m)$ is therefore as in Example 18. As the constraint set is convex and the objective is strictly quasiconcave, the optimal solutions are unique. Denote the Kuhn-Tucker multipliers corresponding to the constraints again as $\lambda_1$ - $\lambda_4$. The optimal solutions associated with the Lagrangian of this problem can be verified to be:
$$(x^*(m), y^*(m), \lambda_1, \lambda_2, \lambda_3, \lambda_4) = \left( m-5,\ 5,\ 100-10m,\ 0,\ 10m-75,\ 0 \right), \quad m < 10;$$
$$= \left( \tfrac{m+5}{3},\ \tfrac{m+5}{3},\ 0,\ 0,\ \tfrac{(m+5)^2}{9},\ 0 \right), \quad m = 10;$$
$$= \left( 5,\ \tfrac{m}{2},\ 0,\ 0,\ \tfrac{20m - m^2}{4},\ \tfrac{m^2 - 10m}{4} \right), \quad m > 10.$$
For $m < 10$, constraints (A1) and (A3) are binding, and LICQ is satisfied. Dually, at $m > 10$, constraints (A3) and (A4) are binding, and LICQ is satisfied. Interestingly, at $m = 10$, all constraints are binding, yet only constraint (A3) is active, while all other constraints are saturated. The vector $r = [-2, -1]^T$ satisfies the conditions for SMFCQ; yet at $m = 10$, LICQ is clearly violated. Hence, the main theorem of Rincon-Zapatero and Santos ([50], Theorem 3.1) does not apply; yet we still have unique multipliers under Theorem 17. The value function of this problem is given by
$$V(m) = 25(m-5), \quad m < 10; \qquad V(m) = \frac{(m+5)^3}{27}, \quad m = 10; \qquad V(m) = \frac{5m^2}{4}, \quad m > 10,$$
and it is continuously differentiable, with derivative
$$V'(m) = 25, \quad m < 10; \qquad V'(m) = \frac{(m+5)^2}{9}, \quad m = 10; \qquad V'(m) = \frac{5m}{2}, \quad m > 10.$$
(At $m = 10$, each branch of $V$ equals $125$ and each derivative expression equals $25$, so $V$ is indeed $C^1$ at the kink point of the budget set.)
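A numerical check (again our own sketch, with illustrative grid choices) confirms that with $u(x,y) = xy^2$ the kink in the value function disappears: the one-sided derivatives at $m = 10$ both equal $25$, consistent with Theorem 19.

import numpy as np

# Grid check of Example 20: same budget set (A1)-(A4), objective x*y**2.
grid = np.linspace(0.0, 12.0, 1201)
X, Y = np.meshgrid(grid, grid, indexing="ij")

def V(m):
    feas = (X + Y <= m) & (2*X - 5 + Y <= m) & (X + 2*Y - 5 <= m) & (2*X + 2*Y - 10 <= m)
    return np.max(np.where(feas, X * Y**2, -np.inf))

# Closed-form branches from the example:
left  = lambda m: 25 * (m - 5)             # m < 10
right = lambda m: 5 * m**2 / 4             # m > 10
print(V(9.0), left(9.0))                   # both approximately 100
print(V(11.0), right(11.0))                # both approximately 151.25
# One-sided derivatives at m = 10 agree, so the kink of Example 18 disappears:
print((left(10) - left(10 - 1e-6)) / 1e-6, (right(10 + 1e-6) - right(10)) / 1e-6)  # 25, 25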

8 Applications

We now apply the results to lattice programming problems in convex finite-dimensional sublattices of $E^n$ (i.e., $\mathbb{R}^n$ with the standard pointwise Euclidean partial order). We first define a few concepts used in this section. We then provide nonsmooth characterizations of increasing (nondecreasing) differences. Next, we treat nonclassical growth models as a special case of the applications of these results. Finally, we give a nonsmooth version of strictly monotone controls.

8.1 Mathematical Terminology for Lattice Programming

We first discuss some background mathematical definitions that arise in lattice programming.$^{11}$

Partially Ordered Spaces: A partially ordered set (or poset) is a set $X$ equipped with an order relation $\ge_X$ that is transitive, reflexive, and antisymmetric. We say a poset $X$ is a lattice if, for any two elements $x$ and $x'$ in $X$, $X$ is closed under the operations of infimum in $X$, denoted $x \wedge x'$, and supremum in $X$, denoted $x \vee x'$. The former is referred to as "the meet" and the latter as "the join" of the two points $x, x' \in X$. A subset $S$ of $X$ is a sublattice of $X$ if it contains the sup and the inf (with respect to $X$) of any pair of points in $S$. A lattice $X$ is complete if, for any subset $S$ of $X$, $\vee S$ and $\wedge S$ are in $X$.

Qualitative Properties of Functions: Let $(X_1, \ge_1)$ and $(X_2, \ge_2)$ be posets. We consider both "point-to-point" mappings (functions) and "point-to-set" mappings (correspondences). A function $f: X_1 \to X_2$ is said to be isotone if $f(x') \ge_2 f(x)$ when $x' \ge_1 x$, for $x, x' \in X_1$. If $f(x') >_2 f(x)$ when $x' \ge_1 x$, for $x, x' \in X_1$, $x \ne x'$, we say the function $f$ is nondecreasing. If $f(x') >_2 f(x)$ when $x' >_1 x$, for $x, x' \in X_1$, $x' \ne x$, we say the function $f$ is increasing. We say $f(x)$ is antitone (or order-reversing), nonincreasing, and decreasing dually (e.g., if $f(x) \ge_2 f(x')$ when $x' \ge_1 x$, for $x, x' \in X_1$, we say $f(x)$ is antitone). A function that is either isotone or antitone is monotone. Let $f: X_1 \times X_2 \to X_3$. Then $f(x,y)$ is mixed-monotone if it is (i) isotone in $x$ for each $y \in X_2$, and (ii) antitone in $y$ for each $x \in X_1$.$^{12}$

Supermodular Functions: Suppose that $X$ is a lattice. A function $f: X \to \mathbb{R}$ is supermodular (resp., strictly supermodular) in $x$ if for all $x$ and $y$ in $X$, $f(x \vee y) + f(x \wedge y) \ge$ (resp., $>$) $f(x) + f(y)$. In parallel, a function $f: X \to \mathbb{R}$ is submodular (resp., strictly submodular) in $x$ if for all $x$ and $y$ in $X$, $f(x \vee y) + f(x \wedge y) \le$ (resp., $<$) $f(x) + f(y)$.

Increasing and Decreasing Differences: Let $A = A_1 \times A_2 \times \cdots \times A_n$, with each $A_i$, $i = 1, \dots, n$, a lattice, and give $A$ the product order. That is, $A \subseteq E^n$ is a lattice. Let $S \subseteq E^m$ be a partially ordered set. Denote $A_{-i} = A_1 \times \cdots \times A_{i-1} \times A_{i+1} \times \cdots \times A_n$. Consider a real-valued function $f: A \times S \to \mathbb{R}$. We say $f$ satisfies (increasing) nondecreasing differences in $(a_{-i}, a_i; s)$ if, for all $s \in S$, $f(a'_{-i}, a_i, s) - f(a''_{-i}, a_i, s)$ is (increasing) nondecreasing in $a_i$, for all $a'_{-i} \ge a''_{-i}$ in $A_{-i}$, all $a_i \in A_i$, and all $i$. We say $f$ satisfies (increasing) nondecreasing differences in $(a; s)$ if, for all $s \in S$, $f(a', s) - f(a'', s)$ is (increasing) nondecreasing in $s$, for all $a' \ge a''$ in $A$. $f$ satisfies (decreasing) nonincreasing differences in $(a_{-i}, a_i; s)$, respectively (decreasing) nonincreasing differences in $(a; s)$, if $-f$ satisfies (increasing) nondecreasing differences in $(a_{-i}, a_i; s)$, respectively (increasing) nondecreasing differences in $(a; s)$.

Qualitative Properties of Correspondences: Let $(X, \ge_X)$ and $(Y, \ge_Y)$ be partially ordered sets. We can also discuss the monotonicity properties of correspondences. To do so, we endow $2^Y$ (or, perhaps, $2^Y \setminus \emptyset$) with an order relation $R_Y$. It is often useful to seek monotonicity properties in order relations $R_Y$ that admit tractable monotone selections. The mapping $F: X \to 2^Y \setminus \emptyset$ is:

(M.1) Increasing upwards if $x' \ge x$, $x, x' \in X$, implies that for any $z \in F(x)$ there exists $z' \in F(x')$ such that $z'\, R_Y\, z$;

(M.2) Increasing downwards if $x' \ge x$, $x, x' \in X$, and $z' \in F(x')$ implies that there exists $z \in F(x)$ such that $z'\, R_Y\, z$;

(M.3) Veinott's strong set order isotone if, for $x_1 \le x_2$ and any $z_1 \in F(x_1)$, $z_2 \in F(x_2)$, we have $z_1 \wedge z_2 \in F(x_1)$ and $z_1 \vee z_2 \in F(x_2)$.

$^{11}$ See Veinott [61] and Topkis [60] for a complete discussion of the concepts in this section.

$^{12}$ Our definition of mixed-monotone mappings here is a bit different than in the literature. In particular, in the fixed point literature, a mixed-monotone mapping has $P_1 = P_2 = P_3$.

8.2 Nonsmooth Characterizations of Complementarity

It is well known that if $f: A \times S \to \mathbb{R}$ is twice continuously differentiable, positive (nonnegative) cross partials characterize increasing (nondecreasing) differences. However, in many applications of parameterized optimization, the hypothesis of $C^2$ primitive data is very strong. Therefore, in this section of the paper, we seek to provide nonsmooth characterizations of increasing differences. In this section, $A$ is a convex sublattice, and $f(a,s)$ is Lipschitz in $a$. We begin by proving or stating key lemmas that will be used in what follows. The first three lemmas state properties of a locally Lipschitz function on $\mathbb{R}$.

Lemma 21 If $f$ is Lipschitz of modulus $k$ in $a$ for all $s \in S$, then $F(a', a'', s) = f(a', s) - f(a'', s)$ is Lipschitz of modulus $2k$ in the Euclidean metric for each $s \in S$.

Proof. Consider any $a', a'', b', b'' \in A$ and $s \in S$, where $A$ and $S$ are given a common Euclidean metric $\|x\|$ for $x \in A$ or $S$. Then, as $f$ is assumed Lipschitz of modulus $k$, we have the following series of inequalities:
$$|F(a', a'', s) - F(b', b'', s)| = |f(a', s) - f(a'', s) - f(b', s) + f(b'', s)| \le |f(a', s) - f(b', s)| + |f(b'', s) - f(a'', s)|$$
$$\le k \max_i |a'_i - b'_i| + k \max_i |a''_i - b''_i| \le 2k \max_i \left\{ |a'_i - b'_i|,\ |a''_i - b''_i| \right\},$$
where $a'_i, b'_i, a''_i, b''_i \in A_i$. Hence, $F$ is locally Lipschitz of modulus $2k$ for all $s \in S$. $\blacksquare$

We can characterize isotone Lipschitz functions, and we can provide sufficient conditions for strictly increasing Lipschitz functions. Let $X$ be a metric space, and $g: X \to \mathbb{R}$ a real-valued mapping. Then we have the following well-known characterization of a monotone Lipschitz function on $X$:

Lemma 22 If $g: X \to \mathbb{R}$ is locally Lipschitz near $x \in X$, then $g$ is monotone on $X$ if all $M \in \partial g(x)$ are positive semidefinite.

Proof. See ([27], Corollary 3.4).

As in the case of a once-continuously differentiable function, we cannot expect to achieve a complete characterization of a strictly monotone function (e.g., $g: X \to \mathbb{R}$, $X = (-1,1)$, $g(x) = x^3$ is strictly monotone and differentiable, with $g'(x) = 3x^2 \ge 0$ but $g'(0) = 0$). We have the following sufficient condition for strict monotonicity of a Lipschitz function:

Lemma 23 If $f: A \times S \to \mathbb{R}$ is locally Lipschitz at $(a,s)$ and all $M \in \partial f(a,s)$ are positive definite, then $f$ is strictly monotone.

Proof. Follows from ([27], Theorem 3.2).

8.3 Monotonicity in Lattice Programming Problems

In this subsection we consider a finite-horizon deterministic growth model with a nonconvex production function. The utility function $u: \mathbb{R}_+ \to \mathbb{R}$ is twice continuously differentiable, increasing, strictly concave, and satisfies the Inada condition. The production function $f: \mathbb{R}^2_+ \to \mathbb{R}$ is increasing and twice continuously differentiable. Let $f(k,z)$ be the production function, where $k$ is the capital stock and $z$ is productivity. The feasible set is
$$D(k,z) = \{ a : 0 \le a \le f(k,z) \}.$$
Hopenhayn and Prescott [26] gave the following two conditions under which investment is jointly increasing in $(k,z)$: (i) $u''(c) f_1(k,z) f_2(k,z) + u'(c) f_{12}(k,z) \ge 0$, and (ii) $D(k,z) = \{ a \mid 0 \le a \le f(k,z) \}$ has a sublattice graph; in other words, the graph of the feasible set, $\mathrm{gr}\, D(k,z) \subseteq \mathbb{R}^3_+$, is a sublattice. Condition (ii) is very strong: only the Leontief production function satisfies it. For example, we require
$$a \in D(s),\ a' \in D(s') \implies a \wedge a' \in D(s \wedge s') \ \text{ and } \ a \vee a' \in D(s \vee s').$$
The meet condition is not satisfied by any other production function. These conditions are much stronger than necessary. For example, let $u(c) = \ln c$ and $f(k,z) = z k^{\alpha}$, $\alpha \in (0,1)$; here investment is increasing in $(k,z)$, but condition (ii) is not satisfied. In the remainder of this subsection we give conditions far weaker than those of Hopenhayn and Prescott [26] for monotone controls.

For any $t \in \mathbf{T} = \{1, 2, \dots, T\}$ and $k_0 > 0$, $z > 0$, the value function is determined recursively by
$$V_t(k_t, z) = \max_{0 \le a \le f(k_t, z)} \ u(f(k_t, z) - a) + V_{t+1}(a, z) \tag{8.3.1}$$
and the terminal value by $V_T(k,z) = u(f(k,z))$. Here, (i) the objective in (8.3.1) is locally Lipschitz and Clarke regular jointly in $(a, k, z)$, (ii) $D(k,z)$ is nonempty and uniformly compact near $(k,z)$, and (iii) SMFCQ holds for every optimal solution $a^*(k,z) \in A^*(k,z)$; hence $V_t(k_t, z)$ is locally Lipschitz and Clarke regular by Theorem (13). Now define
$$\Delta_t^k(z, z') = V_t(k, z') - V_t(k, z).$$
Then, from Lemmas (21) and (22), $V_t(k,z)$ is supermodular if and only if every $\eta \in \partial_k \Delta_t^k(z, z')$, $z' \ge z$, satisfies $\eta \ge 0$. This condition dispenses with condition (ii), so we can include other production functions.

Theorem 24 Under condition (i) and $u$ concave, (a) $A_t(k,z)$ is ascending in Veinott's strong set order, and (b) $\sup A_t$ and $\inf A_t$ are jointly isotone selections.

Hence, we improve considerably upon [26].
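A rough numerical sketch of the finite-horizon problem (8.3.1) is given below. It is our own illustration: the functional forms $u(c) = \ln c$ and $f(k,z) = z k^{\alpha}$, the horizon, and the grids are chosen only for demonstration. Backward induction on a grid produces an optimal investment policy that is (up to grid resolution) nondecreasing in $(k, z)$, even though Hopenhayn and Prescott's condition (ii) fails for this technology.

import numpy as np

# Backward induction for (8.3.1) with u(c) = ln(c), f(k, z) = z * k**alpha.
alpha, T = 0.5, 5
k_grid = np.linspace(0.05, 2.0, 120)
z_grid = np.array([0.8, 1.0, 1.2])

def solve():
    # Terminal value V_T(k, z) = u(f(k, z))
    V_next = np.array([[np.log(z * k**alpha) for z in z_grid] for k in k_grid])
    policy = None
    for _ in range(T - 1):
        V = np.empty_like(V_next)
        policy = np.empty_like(V_next)
        for iz, z in enumerate(z_grid):
            for ik, k in enumerate(k_grid):
                y = z * k**alpha
                feas = k_grid[k_grid < y]                          # feasible investment levels
                vals = np.log(y - feas) + np.interp(feas, k_grid, V_next[:, iz])
                j = np.argmax(vals)
                V[ik, iz], policy[ik, iz] = vals[j], feas[j]
        V_next = V
    return policy

a_star = solve()
# Monotonicity in k (axis 0) and z (axis 1); both minima should be >= 0 up to grid error.
print(np.diff(a_star, axis=0).min(), np.diff(a_star, axis=1).min())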

8.4 Strict Monotonicity in Lattice Programming Problems

In this subsection we find conditions for $a^*(s) \in A^*(s)$ to increase with $s$. Strict monotonicity of the optimal solutions is given by Edlin and Shannon ([17], Theorems 1, 2, and 3) for $C^2$ primitive data. Here, we consider the case where the function $f(a,s)$ in (3.0.1) is either locally Lipschitz and/or $C^{1,1}$. The following theorem discusses nonsmooth conditions under which the optimal solution is increasing for $C^{1,1}$ primitive data. We first state the conditions under which the optimal solution is nondecreasing in $s$.

Proposition 25 If $f: A \times S \to \mathbb{R}$, where $A$ is a lattice and $S$ a partially ordered set, is the objective function, and $D(s) = D$ is a sublattice for all $s$, then $a^*(s) \in A^*(s)$ is nondecreasing in $s$ if and only if $f$ is quasisupermodular in $a$ given $s$ and satisfies the single crossing property in $(a; s)$.

Proof. Milgrom and Shannon ([41], Theorem 4).

Note that here $f$ does not have to be Lipschitz continuous and/or continuously differentiable. However, to obtain strict monotonicity of the optimal solutions, we need more structure on $f$. The following theorem discusses this.

Theorem 26 Suppose (i) the objective function $f: A \times S \to \mathbb{R}$, with $A = A_1 \times \cdots \times A_n \subseteq \mathbb{R}^n$ a lattice and $S$ a partially ordered set, is $C^1$ in $(a,s)$ and $C^{1,1}$ in at least one $a_i$; (ii) the constraint set $D(s) = D$, for all $s \in S$, is a sublattice; and (iii) as $a_i \to 0$, $f'_{a_i}(a,s) \to \infty$ for all $i$, and if $a_i \to 0$ and $a_j \to 0$, then $f'_{a_i}(a,s)/f'_{a_j}(a,s)$ tends to a constant for all $i, j$, $i \ne j$. Then $a^*(s)$ is increasing in $s$ if $f$ is quasisupermodular in $a$ given $s$, satisfies the single crossing property in $(a; s)$, and there exists an $i$ such that the partial derivative $f'_{a_i}(a,s)$ is Lipschitz in $(a,s)$ with all elements of its Clarke derivative positive. Further, the value function is Clarke regular.

Proof. We first consider the case $s > s_1$. Assumption (iii) implies $a^*(s) \in \mathrm{int}\, A$, and by applying Proposition (25) we know $a^*(s) \ge a^*(s_1)$ for $s > s_1$. In this problem, from Theorem (13), the value function is Clarke regular, since the objective function is $C^1$ and there are no binding constraints. From the definition of an (interior) optimal solution, $f'_{a_i}(a^*(s), s) = 0$ for all $i$. We know there exists an $i$ such that the Clarke derivative of $f'_{a_i}(a^*(s), s)$ exists with every element positive; in other words, $f'_{a_i}(a^*(s), s)$ is strictly isotone in $s$. Thus there exists an $i$ such that
$$f'_{a_i}(a^*(s), s) > f'_{a_i}(a^*(s), s_1).$$
Consequently, $f'_{a_i}(a^*(s), s_1) \ne 0$, and hence $a^*(s) \notin A^*(s_1)$. So we claim $a^*(s) > a^*(s_1)$ for $s > s_1$. Similarly, we can show $a^*(s) < a^*(s_1)$ for $s < s_1$. Thus $a^*(s)$ is increasing in $s$. $\blacksquare$

Edlin and Shannon ([17], Theorems 1 and 3) is a special case of the above theorem with a $C^2$ objective function.
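The monotonicity conclusion of Theorem 26 can be illustrated numerically. The sketch below is our own construction with a $C^{1,1}$ (but not $C^2$) objective $f(a,s) = s\,a - \Phi(a)$, where $\Phi'(a) = a + \max(a-1, 0)$; it does not satisfy the Inada-type assumption (iii), so it only illustrates the strict monotonicity of $a^*(s)$, not the full hypotheses of the theorem.

import numpy as np

# f(a, s) = s*a - Phi(a) is C^{1,1} in a (Phi' is Lipschitz but kinked at a = 1),
# has increasing differences in (a, s), and its unique maximizer is strictly
# increasing in s: a*(s) = s for s <= 1 and a*(s) = (s + 1)/2 for s > 1.

def Phi(a):
    # Antiderivative of a + max(a - 1, 0)
    return 0.5 * a**2 + 0.5 * np.maximum(a - 1.0, 0.0)**2

a_grid = np.linspace(0.0, 3.0, 3001)        # constraint set D = [0, 3]

def a_star(s):
    return a_grid[np.argmax(s * a_grid - Phi(a_grid))]

s_vals = np.linspace(0.2, 2.5, 12)
policy = np.array([a_star(s) for s in s_vals])
print(np.all(np.diff(policy) > 0))           # True: a*(s) is strictly increasing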

9 Appendix

9.1 Proofs

Proof of Theorem 2: By the maximum theorem, since $f$ is uniformly locally Lipschitz of modulus $k_f(s,e)$ on $N(s,e)$, for $s'', s' \in N(s,e)$ we have
$$V(s'') - V(s') = f(a^*(s''), s'') - f(a^*(s'), s')$$
for any $a^*(s) \in A^*(s)$. As, in addition, $a^*(s'') \in D(s') = A$, we have
$$V(s'') - V(s') \le f(a^*(s''), s'') - f(a^*(s''), s') \le k_f(s,e)\, d_S(s'', s');$$

thus $V$ is locally Lipschitz. The Lipschitz case follows by taking $k_f(s,e) = k_f$ for all $s \in S$. $\blacksquare$

Proof of Theorem 3: We first prove that $V$ is locally Lipschitz. Let $s'', s' \in N(s,e)$. For any $a(s) \in D(s)$ that is $\varepsilon$-optimal, we have $V(s) \le f(a(s), s) + \varepsilon$. Thus,
$$V(s'') - V(s') \le f(a(s''), s'') - V(s') + \varepsilon = f(a(s''), s'') - \sup_{a' \in D(s')} f(a', s') + \varepsilon \le \sup_{a'' \in D(s'')} \inf_{a' \in D(s')} \left[ f(a'', s'') - f(a', s') \right] + \varepsilon.$$
Reversing $s''$ and $s'$ implies
$$V(s') - V(s'') \le \sup_{a' \in D(s')} \inf_{a'' \in D(s'')} \left[ f(a', s') - f(a'', s'') \right] + \varepsilon.$$
Hence, we have
$$V(s'') - V(s') \le \sup_{a'' \in D(s'')} \inf_{a' \in D(s')} \left[ f(a'', s'') - f(a'', s') + f(a'', s') - f(a', s') \right] + \varepsilon \le \sup_{a'' \in D(s'')} \left| f(a'', s'') - f(a'', s') \right| + \sup_{a'' \in D(s'')} \inf_{a' \in D(s')} \left| f(a'', s') - f(a', s') \right| + \varepsilon.$$
Now $f$ is locally uniformly Lipschitz in $s$ of modulus $k_s(s,e)$ and uniformly Lipschitz in $a$ of modulus $k_a$. This implies the following:
$$V(s'') - V(s') \le k_s(s,e)\, d_S(s'', s') + k_a \sup_{a'' \in D(s'')} \inf_{a' \in D(s')} d_A(a'', a') + \varepsilon \le k_s(s,e)\, d_S(s'', s') + k_a k_d(s,e)\, d_S(s'', s') + \varepsilon = \left[ k_s(s,e) + k_a k_d(s,e) \right] d_S(s'', s') + \varepsilon.$$

The last inequality follows from $D$ being a locally Lipschitz correspondence near $s \in N(s,e)$ of modulus $k_d(s,e)$. Since $\varepsilon > 0$ and the roles of $s''$ and $s'$ are arbitrary, $V$ is locally Lipschitz near $s$ of modulus $k_s(s,e) + k_a k_d(s,e)$. The proof in the Lipschitz case simply notes $k_s(s,e) = k_s$ and $k_d(s,e) = k_d$ for all $s \in S$. $\blacksquare$

Proof of Lemma (7): For every $(a,t)$ in a neighborhood $U$ of $(a^*,s)$, we have $\tilde h_j(a_I, t) = h_j(a_D(a_I,t), a_I, t)$ for all $j = 1, \dots, q$. Thus, by the chain rule,
$$0 = \partial_{a_I} \tilde h_j(a_I, t) = \nabla_{a_D} h_j(a,t)\, \partial_{a_I} a_D(a_I,t) + \nabla_{a_I} h_j(a,t), \quad \text{i.e.,} \quad \nabla_{a_D} h(a,t)\, \partial_{a_I} a_D(a_I,t) + \nabla_{a_I} h(a,t) = 0,$$
and since $\nabla_{a_D} h(a,t)$ has maximal rank at $(a^*,s)$, it has maximal rank on a neighborhood $U'$ of $(a^*,s)$ included in $U$. Therefore,
$$\partial_{a_I} a_D(a_I, t) = -\nabla_{a_D} h(a,t)^{-1}\, \nabla_{a_I} h(a,t),$$
and $a_D$ is $C^1$ with respect to $a_I$ on $U'$. By definition $\tilde h = 0$, so for any $\mu \in \mathbb{R}^q$ we have
$$\partial_{a_I} (\tilde f + \lambda^T \tilde g)(a_I, s) = \partial_{a_I} (\tilde f + \lambda^T \tilde g + \mu^T \tilde h)(a_I, s),$$
and by the chain rule we have
$$\partial_{a_I} (\tilde f + \lambda^T \tilde g)(a_I, s) \subseteq \partial_a (f + \lambda^T g + \mu^T h)(a^*, s)\, \{ (\epsilon; I_{n-q}) \},$$
where $I_{n-q}$ is the $(n-q) \times (n-q)$ identity matrix and $\epsilon$ is the gradient of $a_D$ at $(a_I^*, s)$. Thus, if $\lambda$ is a Kuhn-Tucker vector for (5.0.2) at $a_I^*$, there exists $\zeta = (\zeta_D, \zeta_I) \in \partial_a f(a^*, s)$ such that
$$0 = \left( \zeta_D + \lambda^T \nabla_{a_D} g(a^*, s) + \mu^T \nabla_{a_D} h(a^*, s) \right) \epsilon + \zeta_I + \lambda^T \nabla_{a_I} g(a^*, s) + \mu^T \nabla_{a_I} h(a^*, s)$$
for any choice of $\mu$. By assumption $\nabla_{a_D} h(a^*, s)$ has maximal rank, and for the choice $\mu^T = -\left( \zeta_D + \lambda^T \nabla_{a_D} g(a^*, s) \right) \nabla_{a_D} h(a^*, s)^{-1}$ we have
$$\zeta_D + \lambda^T \nabla_{a_D} g(a^*, s) + \mu^T \nabla_{a_D} h(a^*, s) = 0,$$
and thus
$$\zeta_I + \lambda^T \nabla_{a_I} g(a^*, s) + \mu^T \nabla_{a_I} h(a^*, s) = 0.$$
This proves that $(\lambda, \mu)$ is a Kuhn-Tucker vector for NLP at $(a^*, s)$. $\blacksquare$

Proof of Lemma (8): Consider a direction $y$ satisfying MFCQ at $(a^*, s)$. By definition,
$$\nabla_{a_D} h(a^*, s)\, y_D + \nabla_{a_I} h(a^*, s)\, y_I = 0,$$
or, equivalently,
$$y_D = -\nabla_{a_D} h(a^*, s)^{-1} \nabla_{a_I} h(a^*, s)\, y_I = \partial_{a_I} a_D(a_I^*, s)\, y_I,$$
in which the last equality stems from the previous proof, where we established that $\partial_{a_I} a_D(a_I^*, s)$ is a singleton and that
$$\nabla_{a_D} h(a^*, s)\, \partial_{a_I} a_D(a_I^*, s) + \nabla_{a_I} h(a^*, s) = 0,$$
knowing also that $\nabla_{a_D} h(a^*, s)$ has maximal rank. By the chain rule, we have
$$\nabla_{a_I} \tilde g(a_I^*, s) = \nabla_{a_D} g(a^*, s)\, \partial_{a_I} a_D(a_I^*, s) + \nabla_{a_I} g(a^*, s),$$
so that
$$\nabla_{a_I} \tilde g(a_I^*, s)\, y_I = \nabla_{a_D} g(a^*, s)\, y_D + \nabla_{a_I} g(a^*, s)\, y_I = \nabla g(a^*, s)\, y < 0,$$
the last inequality using MFCQ. Thus $y_I$ satisfies MFCQ/R at $(a_I^*, s)$. $\blacksquare$

Proof of Lemma (15):

For (i), as $f$ is $C^1$, it is Clarke regular in all directions. Therefore, by Theorem (12), for all $x$ we have
$$D_+V(s;x) = \liminf_{t \downarrow 0} \frac{V(s + t x) - V(s)}{t} \ge \inf_{(\lambda,\mu) \in K(a^*(s),s)} L_s^{\circ}(a^*(s), s; x) = \inf_{(\lambda,\mu) \in K(a^*(s),s)} \left[ \nabla_s f(a^*(s), s)\, x - \lambda^T \nabla_s g(a^*(s), s)\, x - \mu^T \nabla_s h(a^*(s), s)\, x \right],$$
where the infimum is taken over $(\lambda,\mu) \in K(a^*(s),s)$. Thus,
$$\liminf_{t \downarrow 0} \frac{V(s + t x) - V(s)}{t} \ge \inf_{(\lambda,\mu) \in K(a^*(s),s)} \nabla_s L(a^*(s), s; \lambda, \mu)\, x.$$
For (ii), using the Clarke regularity of $f$ and applying again Theorem (12), for all $x$ we have
$$D^+V(s;x) = \limsup_{t \downarrow 0} \frac{V(s + t x) - V(s)}{t} \le \sup_{(\lambda,\mu) \in K(a^*(s),s)} \left\{ L_s^{\circ}(a^*(s), s; x) \right\} = \sup_{(\lambda,\mu) \in K(a^*(s),s)} \nabla_s L(a^*(s), s; \lambda, \mu)\, x. \qquad \blacksquare$$

References

[1] Aliprantis, C. and K. Border. 1999. Infinite Dimensional Analysis: A Hitchhiker's Guide, Springer-Verlag Press.
[2] Amir, R., L. Mirman, and W. Perkins. 1991. One-sector nonclassical optimal growth: optimality conditions and comparative dynamics. International Economic Review, 32, 625-644.
[3] Amir, R. 1996. Sensitivity analysis of multisector optimal economic dynamics. Journal of Mathematical Economics, 25, 123-141.
[4] Askri and C. LeVan. 1998. Differentiability of the value function of nonclassical optimal growth models. Journal of Optimization Theory and Applications, 97(3), 591-604.
[5] Auslender, A. 1979. Differentiable stability in non convex and non differentiable programming. Mathematical Programming Study, 10, 29-41.
[6] Benveniste, L. and J. Scheinkman. 1979. On the differentiability of the value function in dynamic models of economics. Econometrica, 47, 727-732.
[7] Benhabib, J. and K. Nishimura. 1985. Competitive equilibrium cycles. Journal of Economic Theory, 35, 284-306.
[8] Berge, C. 1963. Topological Spaces, MacMillan Press.
[9] Brown, D., W. Heller, and R. Starr. 1992. Two-part marginal cost pricing equilibrium: existence and efficiency. Journal of Economic Theory, 57(1), 52-72.
[10] Clarke, F. 1975. Generalized gradients and applications. Transactions of the American Mathematical Society, 205, 247-262.
[11] Clarke, F. 1983. Optimization and Nonsmooth Analysis. SIAM Press.
[12] Cornet, B. 1983. Sensitivity analysis in optimization. CORE Discussion Paper No. 8322, Universite Catholique de Louvain, Louvain-la-Neuve, Belgium.
[13] Danskin, J. 1967. The Theory of Max-Min. Springer-Verlag, New York.
[14] Dechert, W. and K. Nishimura. 1983. A complete characterization of optimal growth paths in an aggregative model with a non-concave production function. Journal of Economic Theory, 31, 332-354.
[15] Dontchev, A. and R.T. Rockafellar. 2009. Implicit Functions and Solution Mappings.
[16] Edlin, A. and C. Shannon. 1998. Strict single crossing and the strict Spence-Mirrlees condition: a comment on monotone comparative statics. Econometrica, 66, 1417-1425.
[17] Edlin, A. and C. Shannon. 1998. Strict monotonicity in comparative statics. Journal of Economic Theory, 81, 201-219.
[18] Farrell, M. 1959. The convexity assumption in the theory of competitive equilibrium. Journal of Political Economy, 67(4), 377-391.
[19] Fiacco, A.V. and J. Kyparisis. 1986. Convexity and concavity of the optimal value function in parametric nonlinear programming. Journal of Optimization Theory and Applications, 48(1), 95-126.
[20] Fontanie, G. 1980. Subdifferential stability in Lipschitz programming. MS, Operations Research and Systems Analysis Center, University of North Carolina.
[21] Gauvin, J. and F. Dubeau. 1982. Differential properties of the marginal function in mathematical programming. Mathematical Programming Studies, 19, 101-119.
[22] Gauvin, J. and F. Dubeau. 1983. Some examples and counterexamples for the stability of nonlinear programming problems. Mathematical Programming Studies, 21, 69-78.
[23] Gauvin, J. and J.W. Tolle. 1977. Differential stability in nonlinear programming. SIAM Journal of Control and Optimization, 15, 294-311.
[24] Giorgi, G. and S. Komlosi. 1992. Dini derivatives in optimization, Part I. Decisions in Economics and Finance, 15, 3-30.
[25] Hinderer, K. 2005. Lipschitz continuity of value functions in Markovian decision processes. Mathematical Methods of Operations Research, 62, 3-22.
[26] Hopenhayn, H. and E. Prescott. 1992. Stochastic monotonicity and stationary distributions for dynamic economies. Econometrica, 60(6), 1387-1406.
[27] Jeyakumar, V., D.T. Luc, and S. Schaible. 1998. Characterizations of generalized monotone nonsmooth continuous maps using approximate Jacobians. Journal of Convex Analysis, 5(1), 119-132.
[28] Kehoe, T.K., D.K. Levine, and P.M. Romer. 1990. Determinacy of equilibria in dynamic models with finitely many consumers. Journal of Economic Theory, 50(1), 1-21.
[29] Kamihigashi, T. and S. Roy. 2006. Dynamic optimization with a nonsmooth, nonconvex technology: the linear objective case. Economic Theory, 29, 325-340.
[30] Kamihigashi, T. and S. Roy. 2007. A nonsmooth, nonconvex model of optimal growth. Journal of Economic Theory, 132, 435-460.
[31] Kelley, J. 1955. General Topology. Van Nostrand Press.
[32] Khan, A. and J. Thomas. 2008. Idiosyncratic shocks and the role of nonconvexities in plant and aggregate investment dynamics. Econometrica, 76(2), 396-436.
[33] Khanh, P.Q. and N.D. Tuan. 2007. Optimality conditions for nonsmooth multiobjective optimization using Hadamard directional derivatives. Journal of Optimization Theory and Applications, 133, 341-357.
[34] Klatte, D. and B. Kummer. 2002. Nonsmooth Equations in Optimization: Regularity, Calculus, Methods, and Applications.
[35] Koopmans, T. 1961. Convexity assumptions, allocative efficiency, and competitive equilibrium. Journal of Political Economy, 69(5), 478-479.
[36] Kuratowski, K. 1968. Topology, Academic Press.
[37] Kyparisis, J. 1985. On uniqueness of Kuhn-Tucker multipliers in nonlinear programming. Mathematical Programming, 32, 242-246.
[38] Laraki, R. and W. Sudderth. 2004. The preservation of continuity and Lipschitz continuity of optimal reward operators. Mathematics of Operations Research, 29, 672-685.
[39] Li Calzi, M. and A. Veinott, Jr. 1992. Subextremal functions and lattice programming. MS, Stanford University.
[40] Milgrom, P. and I. Segal. 2002. Envelope theorems for arbitrary choice sets. Econometrica, 70, 583-601.
[41] Milgrom, P. and C. Shannon. 1994. Monotone comparative statics. Econometrica, 62, 157-180.
[42] Mirman, L., O. Morand, and K. Reffett. 2008. A qualitative approach to Markovian equilibrium in infinite horizon economies with capital. Journal of Economic Theory, 139(1), 75-98.
[43] Morand, O., K. Reffett, and S. Tarafdar. 2009b. Lipschitzian Stochastic Dynamic Programming. MS, Arizona State University.
[44] Mirman, L. and I. Zilcha. 1975. On optimal growth under uncertainty. Journal of Economic Theory, 11, 329-339.
[45] Nishimura, K. and J. Stachurski. 2005. Stability of stochastic optimal growth models: a new approach. Journal of Economic Theory, 122(1), 100-118.
[46] Nishimura, K., R. Rudnicki, and J. Stachurski. 2006. Stochastic optimal growth with nonconvexities. Journal of Mathematical Economics, 42(1), 74-96.
[47] Prescott, E., R. Rogerson, and J. Wallenius. 2009. Lifetime aggregate labor supply with endogenous workweek length. Review of Economic Dynamics, 12(1), 23-36.
[48] Quah, J. 2007. The comparative statics of constrained optimization problems. Econometrica.
[49] Reiter, S. 1961. A note on convexity of the aggregate production set. Journal of Political Economy, 69(4), 386-387.
[50] Rincon-Zapatero, J. and M. Santos. 2009. Differentiability of the value function without interiority assumptions. Journal of Economic Theory, 144(5), 1948-1964.
[51] Rockafellar, R.T. Convex Analysis. Princeton Press.
[52] Rockafellar, R.T. and R. Wets. 1998. Variational Analysis, Springer.
[53] Rogerson, R. and J. Wallenius. 2008. Micro and macro elasticities in a life cycle model with taxes. Journal of Economic Theory.
[54] Romer, P. 1990. Endogenous technological change. Journal of Political Economy, 98(5), S71-S102.
[55] Romer, P. 1990. Are nonconvexities important for understanding growth? The American Economic Review, Papers and Proceedings, 80(2), 97-103.
[56] Rothenberg, J. 1960. Nonconvexity, aggregation, and Pareto optimality. Journal of Political Economy, 454.
[57] Samuelson, P. 1947. Foundations of Economic Analysis, Cambridge Press.
[58] An Ordinal Theory of Games with Strategic Complementarities. Working Paper, June 1990.
[59] Tarafdar, S. 2009a. Optimization in Economies with Nonconvexities. MS, Arizona State University.
[60] Topkis, D. 1998. Supermodularity and Complementarity. Princeton Press.
[61] Veinott, A. 1992. Lattice programming: qualitative optimization and equilibria. MS, Stanford.
[62] Viner, J. 1931. Cost curves and supply curves. Zeitschrift fur Nationalokonomie 3. Reprinted in Readings in Price Theory. Homewood, IL: Richard D. Irwin, 1951.
