Lattice Methods and Monotone Comparative Statics

Pete Troyan

Lattice methods are very useful analytical tools that appear in many areas of economics (e.g., monotone comparative statics, supermodular games, matching theory). In this note, we will define the basic concepts of partial orders, posets, lattices, the strong set order, and supermodularity and increasing differences. We will then focus on the application of lattice methods to monotone comparative statics, culminating in a statement of Topkis' Theorem.

1 Motivation

Consider the following generic constrained optimization problem, where x is a choice variable and θ is a parameter:

    max_{x ∈ R} f(x, θ)

Let x*(θ) be the solution to this problem when the parameter is θ. A basic question in economics is how the solution x*(θ) depends on the parameter θ. A very useful tool for answering this question is the implicit function theorem, which is found by taking the first order condition f_x(x*(θ), θ) = 0 and differentiating with respect to θ. Assuming the derivatives all exist and the solution is interior and unique, this gives:

    ∂x*(θ)/∂θ = −f_xθ(x*(θ), θ) / f_xx(x*(θ), θ)

But, what happens if x*(θ) is not differentiable? Or, what if it is not a function at all, but a correspondence? We clearly can't differentiate a correspondence like this, and what do we mean by a correspondence that is "increasing in θ"? Additionally, we may only be able to sign the RHS over a certain range of x or θ. Topkis' Theorem will help us reach similar conclusions to the IFT that we might be interested in, but with weaker assumptions. However, before we use it, we must first deal with some new definitions and terminology.

2 Preliminaries

Consider any set X. You should have learned with Laurence that we can define a relation R on X, where we often will write xRy. Some standard examples of relations are the usual greater than relation ≥ on the real numbers, or a preference relation ≻, where x ≻ y if and only if x is strictly preferred to y. For now, though, we will just use R to denote an arbitrary relation. You should have also learned some important properties of relations. The three that will be most important for us are transitivity, reflexivity, and antisymmetry:

1. Transitivity: xRy and yRz ⇒ xRz
2. Reflexivity: xRx
3. Antisymmetry: xRy and yRx ⇒ x = y

A relation R that satisfies these three properties is called a partial order. The relation together with the set, (X, R), is called a poset (partially ordered set). It is very important to note that we do not have completeness. That is, there can be elements x, y ∈ X that are not comparable (neither xRy nor yRx holds).

2.1 An example

For an example of a finite partial order, let X be the divisors of 60 and R be the relation "is divisible by". So, X = {1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60}, and xRy if and only if x is divisible by y. Let's check the three properties above.

1. Reflexivity: xRx is obvious, as any number is clearly divisible by itself.

2. Antisymmetry: If x is divisible by y and y is divisible by x, x must equal y, since, if not, one of them is larger, and a smaller number cannot be divisible by a larger number.

3. Transitivity: This should also be pretty clear, simply from how we factor numbers. For example, 60 is divisible by 30, and 30 is divisible by 15, so 60 is divisible by 15.

The most important point here is that the relation "is divisible by" is not complete. Consider two elements of X, for example, 2 and 3. It is pretty obvious that 2 is not divisible by 3, nor is 3 divisible by 2. So, these two elements are not comparable under our order, which is why we call it a partial order.
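The three checks above are mechanical enough that we can verify them by brute force. Here is a small sketch (the code and variable names are my own, not from the notes) that confirms "is divisible by" on the divisors of 60 is a partial order but is not complete:

```python
# Verify that "is divisible by" on the divisors of 60 is reflexive,
# antisymmetric, and transitive, but NOT complete.
X = [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]

def R(x, y):
    """xRy iff x is divisible by y."""
    return x % y == 0

reflexive = all(R(x, x) for x in X)
antisymmetric = all(not (R(x, y) and R(y, x)) or x == y for x in X for y in X)
transitive = all(not (R(x, y) and R(y, z)) or R(x, z)
                 for x in X for y in X for z in X)
complete = all(R(x, y) or R(y, x) for x in X for y in X)

print(reflexive, antisymmetric, transitive, complete)  # True True True False
```

The `False` at the end comes precisely from pairs like (2, 3), which are incomparable.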

2.2 The Product Order

Consider for example the set R², and take 3 points, as shown below. How can we order these points? Intuitively, it seems pretty clear that a is greater than b, as it is bigger in both the x and y arguments. But, what about a and c? or c and b? It is not clear how to compare them, and so, we simply won't. This is where the usefulness of partial orders sets in.

[Figure 1: How do we compare a, b, and c?]

The product order codifies this into math, and it is probably the most important partial order you will need to know this year. We are going to

define it only on Cartesian products of the real line (so, Rⁿ basically), and what it says is that if we have 2 points a, b ∈ Rⁿ, then a is greater than b in the product order if and only if every argument of a is greater than or equal to the corresponding argument of b, where we will use ≥_P to denote the product order (P for "product"). Formally, let a = (a₁, . . . , aₙ) and b = (b₁, . . . , bₙ). Then, a ≥_P b if and only if aᵢ ≥ bᵢ for all i. So, how do we order the points in our figure? We have a ≥_P b and a ≥_P c (since we only require weakly greater in each argument), but b and c are not comparable. It is not hard to check that the product order satisfies properties 1–3 above (and it also is obviously not complete).

2.3 Meet and join

Now that we know what a poset is, we can define two operators, the meet (∧) and the join (∨). The meet and join take any 2 points in a poset and send them to another point. Formally, they are defined as

    Meet: x ∧ y = max{z | xRz, yRz}
    Join: x ∨ y = min{z | zRx, zRy}

What do the meet and join do in words? The meet takes two points and returns the largest point (max) that is smaller than both. The join does the opposite. It takes two points and returns the smallest point (min) that is larger than both. The reason we need these operations is precisely because x and y may not be comparable themselves under a partial order, so, we take another point z that is comparable to both and is as close as possible to x and y. We'll draw a picture to show how this works for the product order.

[Figure 2: Examples of meets and joins]

Consider the figure above, and say we want to find the meet and join of b and c. To find the meet, we look at all points that are smaller than both b and c (in the product order), which is all points that are to the southwest of both b and c. Then, we find the largest such point, which is e. The join is similar. We find all points that are larger than both in the product order (all points to the northeast of b and c), and take the smallest one, which is d. So,

    b ∧ c = e
    b ∨ c = d

Note that if two points are actually comparable in the order under consideration, then the larger is the join and the smaller is the meet. So, here, for example, a ∧ b = b and a ∨ b = a. In the product order, which is probably the only order you will see this year, the meet and join are equivalent to element-by-element min and max operators. That is, when taking the meet of two points a and b, we look at each component individually and take the smaller one. In symbols (only for the product order!),

    a ∧ b = (min{a₁, b₁}, . . . , min{aₙ, bₙ}) ∈ Rⁿ

For the join, we simply replace min with max.¹ For example, the join of the points (2, 3, 6) and (1, 4, 3) is (2, 4, 6) under the product order.

¹ A silly mnemonic I use to remember the distinction between meet and join: when you see meet, think meen. Kind of sounds like min, no?
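The element-by-element characterization can be sketched in a few lines (helper names are mine, not from the notes):

```python
# Componentwise meet and join under the product order.
def meet(a, b):
    return tuple(min(ai, bi) for ai, bi in zip(a, b))

def join(a, b):
    return tuple(max(ai, bi) for ai, bi in zip(a, b))

print(join((2, 3, 6), (1, 4, 3)))  # (2, 4, 6), matching the example in the text
print(meet((2, 3, 6), (1, 4, 3)))  # (1, 3, 3)
```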

2.4 Lattices

Now that we have the meet and join down, we can define a lattice. A lattice is simply a poset that is closed under meet and join. That is, given a poset (X, R), if, when we take any two points in X and take their meet and join, we get another point in X, we have a lattice. Having our choice set be a lattice is a very important condition for Topkis' Theorem. One thing you must remember: A CONSUMER BUDGET SET IS NOT A LATTICE! You will see this come up all of the time on old comps, where they try to trick you into applying Topkis' Theorem with a consumer budget set as your choice set. You cannot do this. You must first transform the problem before you can apply Topkis (we will show this below). Why is the consumer budget set not a lattice? Look at the picture:

[Figure 3: Consumer budget sets are not lattices.]

The red line delineates our standard consumer budget set. The points b and c are both in the set, but their join is d, which is not in the budget set. Therefore, the budget set is not closed under the join operation, we don't have a lattice, and we can't directly apply Topkis. The other common situation you will encounter is when the choice set is all of Rⁿ₊. In this case, since any 2 points in Rⁿ₊ have meets and joins in Rⁿ₊, we do have a lattice, and Topkis can be applied directly.
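The failure of closure is easy to see numerically. A sketch (prices, income, and the bundles are my own choices, not from the notes) with p = (1, 1) and income m = 10:

```python
# Two bundles on the budget line whose join is unaffordable,
# showing the budget set is not closed under the join operation.
def join(a, b):
    return tuple(max(ai, bi) for ai, bi in zip(a, b))

def in_budget(x, p=(1, 1), m=10):
    return sum(pi * xi for pi, xi in zip(p, x)) <= m

b, c = (8, 2), (2, 8)              # both cost exactly 10, so both are affordable
d = join(b, c)                     # componentwise max: (8, 8)
print(in_budget(b), in_budget(c))  # True True
print(d, in_budget(d))             # (8, 8) False: the join costs 16 > 10
```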


2.5 The Strong Set Order

Recall our goal in all of this: to say when x*(θ) is increasing in θ. But, if there are multiple solutions and x* is a set, and not a point, what does increasing even mean? To compare sets in this way, we use what is called the strong set order. Let A ≥_S B mean A is greater than B in the strong set order. Formally,

    A ≥_S B  ⇔  a ∨ b ∈ A and a ∧ b ∈ B, for all a ∈ A, b ∈ B

So, that math looks ugly. What does it say in words? It is pretty intuitive: A is larger than B if, whenever we take one element from each set, the "smaller" of the two elements (the meet) must belong to B, and the "larger" (the join)² must belong to A (note that these elements can belong to both sets). Another way to say it: there cannot be an element that is in A and not B, but is smaller than some element of B. Some pictures might clarify:

[Figure 4: On the top, A is greater than B. On the bottom, they are not comparable.]

On the top of the figure, the sets we are comparing are intervals, while on the bottom, each set only has two points. On the top of the figure, A ≥_S B, but on the bottom, A and B are not comparable in the strong set order. The problem on the bottom is the element 2. It is smaller than some element in B (namely, 3), and it only belongs to A. On the top, if we ever take 2 elements a and b, the smaller of the two will always belong to B (although it may belong to A as well). The strong set order means that there are no "holes". Notice that the strong set order reduces to the standard greater-than order of the real line if our sets are simply singletons. So, this is generally what we will use when x*(θ) is a set to say that x* is increasing in θ (x*(θ₁) ≥_S x*(θ₂) for θ₁ ≥ θ₂).

² Smaller and larger may be terms that are a bit misleading here. It really is the meet and join, but smaller and larger is a good way to get intuition.
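For finite subsets of the real line, where meet and join are just min and max, the definition can be checked directly. A sketch (my own helper, not from the notes), including a bottom-panel-style failure where the element 2 belongs only to A:

```python
# Strong set order on finite subsets of R: A ≥_S B iff for every a in A
# and b in B, max(a, b) is in A and min(a, b) is in B.
def strong_ge(A, B):
    return all(max(a, b) in A and min(a, b) in B for a in A for b in B)

print(strong_ge({3, 4, 5}, {1, 2, 3}))  # True
print(strong_ge({2, 4}, {1, 3}))        # False: min(2, 3) = 2 is not in B
print(strong_ge({1, 3}, {2, 4}))        # False, so the two sets are not comparable
```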

2.6 Supermodularity and Increasing Differences

We have one final preliminary to discuss before actually being able to state and prove Topkis' Theorem. We will start with a property known as increasing differences. Let f be a function f : X × Θ → R where both X and Θ are lattices. Consider points xH, xL ∈ X and θH, θL ∈ Θ, where xH ≥_X xL and θH ≥_Θ θL (≥_X and ≥_Θ are partial orders on the lattices). The function has increasing differences if

    f(xH, θH) − f(xL, θH) ≥ f(xH, θL) − f(xL, θL)

Since f goes to R, the order after applying f is the standard order on the real line. Note what this equation says. If we think of θ as a parameter to our objective function, what this says is, the extra value to choosing a higher level of x (xH over xL) is greater when the level of the parameter is higher (θH rather than θL). This captures some kind of complementarity between x and θ. So, for example, if someone had a utility function over a good x that was u(x) = θx, then, the incremental gain from choosing x = 10 over x = 5 is larger when she has a higher value of θ, and we would say that u has increasing differences in (x, θ).

Supermodularity is a tad more confusing. If we have a function from a lattice to the real line f : X → R, we say it is supermodular if

    f(xH ∨ xL) + f(xH ∧ xL) ≥ f(xH) + f(xL), for all xH, xL ∈ X

Fortunately, you rarely have to worry about this definition of supermodularity. Why? Consider the case of one parameter and one choice variable, both of which lie in R (and, from now on, let H denote "high" and L denote "low" as we did above). Let f : R² → R. Since supermodularity holds for all points in R², consider points (xH, θL) and (xL, θH). Then, the LHS of the supermodularity inequality applied to these points says

    f((xH, θL) ∨ (xL, θH)) + f((xH, θL) ∧ (xL, θH)) = f(xH, θH) + f(xL, θL)

Why? Recall with the product ordering the meet is just the element-by-element min and the join is just the element-by-element max. That is what we have done above. So, supermodularity says

    f(xH, θH) + f(xL, θL) ≥ f(xH, θL) + f(xL, θH)

Rearranging, we get

    f(xH, θH) − f(xL, θH) ≥ f(xH, θL) − f(xL, θL)

which is ID (increasing differences)! So, supermodularity and ID are the same in this case. I believe it is true that supermodularity always implies increasing differences, but the converse is not true in general. However, the converse is true in certain cases, and, in fact, it can be shown (and will be important for you to remember) that if our choice set is a Cartesian product of closed subsets of R ordered by the product ordering, supermodularity and ID are actually equivalent.³ In most problems you encounter, you will only have to check increasing differences. However, you should understand that the two concepts are not actually fully equivalent.

2.6.1 Another characterization of supermodularity and ID

As mentioned before, Topkis can be useful when things are not differentiable. However, in many problems you encounter on homeworks and comps, you will be given differentiability of the objective (not necessarily of x*(θ), which still may not be a function). When the objective is differentiable, there is a very useful and easy characterization of increasing differences that makes answering the question much easier. Recall above how we described ID as a kind of complementarity between x and θ (for now, we only have one choice variable and one parameter). What does complementarity mean in terms of derivatives? It means that the cross partial derivative is positive. This can be written as

    f_xθ = ∂/∂θ (∂f/∂x) ≥ 0

The term in parentheses is the marginal utility, if we think of f as a utility function. When we take a derivative with respect to θ, we are seeing how marginal utility changes with θ. So, the fact that this is positive means that, when θ is higher, the marginal value at any given level of x is also higher. But, this is exactly what the increasing differences property says! We can also draw some useful characterizations if f is only differentiable in one argument, and not the other. In general, we can write the following:

Theorem 1 Let X, Θ ⊆ R and f : X × Θ → R. Then, the following are equivalent:

1. f has increasing differences in (x, θ).
2. If ∂f/∂x exists, it is nondecreasing in θ.
3. If ∂f/∂θ exists, it is nondecreasing in x.
4. If ∂²f/∂x∂θ exists, we have ∂²f/∂x∂θ ≥ 0.

³ Remember, consumer budget sets do not fall into this category, as they cannot be written as X₁ × X₂ × · · · × Xₙ for some sets Xᵢ ⊆ R.

The proof of this "theorem" is not very difficult, and the intuition is just that given above. Basically, what it says is that if we have differentiability and need increasing differences, we can show it by checking any of the 4 properties above. Also, it should be noted that all of the inequalities are weak inequalities, so we only need nondecreasing, and not strictly increasing. Topkis in general only allows us to draw weak, and not strict, conclusions. If we have more than 2 arguments to our function f, things can quickly get more complicated. Fortunately, the following theorem often comes to the rescue.

Theorem 2 Let our constraint space be a Cartesian product of closed subsets of R with the product ordering. If f exhibits increasing differences over all possible pairs that can be taken from (x₁, . . . , xₙ), then f is supermodular in (x₁, . . . , xₙ).

I am not going to prove this (Clayton has a proof on his website if you are interested), but it is a very important theorem to remember. Topkis' Theorem will require us to have our objective be supermodular in all choice variables taken together. This can often be very difficult to check. Fortunately, this theorem tells us that, in lots of cases, we don't have to check this. All we have to do is check that f has ID in (x₁, x₂), (x₁, x₃), . . . for all pairs. ID in pairs is quite easy to check (especially when we have derivatives and can use Theorem 1 above). For this to apply, our constraint space must be a Cartesian product of closed subsets of R (again, no budget sets).
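For a smooth objective, the pairwise checks of Theorems 1 and 2 amount to signing every mixed partial. A sketch (the finite-difference helper and the test function are my own, purely illustrative): for f(x₁, x₂, θ) = x₁x₂ + x₂θ, every pairwise cross partial is nonnegative, so f has ID in all pairs and is supermodular in (x₁, x₂).

```python
# Numerically estimate the cross partial ∂²f/∂zᵢ∂zⱼ at a point via a
# second-order finite difference, and sign it for every pair of arguments.
def cross_partial(f, i, j, point, h=1e-4):
    z = list(point)
    def bump(di, dj):
        w = list(z)
        w[i] += di * h
        w[j] += dj * h
        return f(*w)
    return (bump(1, 1) - bump(1, 0) - bump(0, 1) + bump(0, 0)) / h**2

f = lambda x1, x2, t: x1 * x2 + x2 * t   # cross partials: 1, 0, 1
for pair in [(0, 1), (0, 2), (1, 2)]:
    # the "+ 0.0" normalizes a possible -0.0 from floating-point noise
    print(pair, round(cross_partial(f, *pair, (1.0, 2.0, 3.0)), 3) + 0.0)
```

All three estimates are nonnegative (1.0, 0.0, 1.0), which is exactly the pairwise-ID condition Theorem 2 asks for.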

3 Topkis' Theorem (finally)

Now that we have all of the machinery, Topkis' Theorem is quite easy to state and prove. Here it is.

Theorem 3 Consider an optimization problem of the form

    max_{x ∈ D} f(x, θ)

where D is a lattice (with order ≥_X) and does not depend on θ. If f exhibits increasing differences in (x, θ) relative to (≥_X, ≥_Θ) and is supermodular in x (relative to ≥_X), then the optimal choice correspondence x*(θ) will be increasing in the strong set order relative to ≥_X (where an increase in the parameter is relative to ≥_Θ).

Proof. Let xH ∈ x*(θH) and xL ∈ x*(θL) be elements of the optimal choice correspondences when the parameters are "high" and "low" respectively. Since xH was chosen when the parameter was θH, it must be at least as good as any other x, and, in particular, must give a weakly higher value of the objective than xH ∨ xL, or, in other words, f(xH, θH) ≥ f(xH ∨ xL, θH). So,

    0 ≥ f(xH ∨ xL, θH) − f(xH, θH)     (optimality of xH)
      ≥ f(xH ∨ xL, θL) − f(xH, θL)     (increasing differences in (x, θ))
      ≥ f(xL, θL) − f(xL ∧ xH, θL)     (supermodularity in x)
      ≥ 0                              (optimality of xL)

The inequalities come straight from the definitions of ID and supermodularity. Note that we have zeros on both ends, which means all of the inequalities must be equalities. Now, look at what the first (now) equality says:

    f(xH ∨ xL, θH) = f(xH, θH)

Since xH ∈ x*(θH) and xH ∨ xL gives the same value of the objective, it must be that xH ∨ xL ∈ x*(θH) as well. We can similarly argue from the last equality that xH ∧ xL ∈ x*(θL). But, recall our definition of the strong set order: we required that the join be in the "larger" set and the meet be in the "smaller" set. This is exactly what we have shown here, so, we can say

    x*(θH) ≥_S x*(θL)

and we can conclude that our optimal choice is weakly increasing in the parameter. ∎

We have kept things pretty general and rigorous here. However, for most cases you deal with, you will either have one choice variable with the usual order, or maybe 2 ordered by the product order. You generally won't see more complicated orders this year. With Topkis' Theorem, we are only interested in moving one parameter at a time, and so, we only require ID in the choice variables and the parameter of interest (and ignore all other parameters, meaning we usually take θ to be a scalar). However, since when you move a parameter, all of your optimal choices may change, we must have supermodularity in all choice variables.⁴

So, now that we've gone through all of the formalities, how do we actually apply Topkis' Theorem? There are essentially 3 things we must check before we can conclude that the optimal choice is weakly increasing in the parameter of interest. They are:

1. The choice set D is a lattice.
2. f has increasing differences in (x₁, . . . , xₙ, θ).
3. f is supermodular in (x₁, . . . , xₙ).

If these three conditions are met, we can apply Topkis. A few points about checking these conditions:

• If there is only one choice variable, supermodularity is trivial and does not need to be checked. In this case, all you must check is increasing differences in (x, θ).

• In probably every situation you will see this year, you will either have only one choice variable (in which case supermodularity is trivial), or you will use Theorem 2 above. Basically, if you only remember one thing about Topkis, it should be this (subject to some conditions that are usually satisfied, such as having a choice set that is a Cartesian product of closed subsets of R): If the objective has increasing differences in all 2-element pairs of (x₁, . . . , xₙ, θ), you can use Topkis' Theorem to conclude that the arg max of the optimization is weakly increasing in θ. This is true because, by Theorem 2, increasing differences in all 2-element pairs implies both 2 and 3 above.

• Don't forget about the characterizations of ID using derivatives. With differentiability, checking all 2-element pairs for ID comes down to making sure every mixed partial is positive. Also, this requires the mixed partials be positive everywhere on the choice/parameter set. The same holds for the nondifferentiable case: f must be supermodular over the entire space.

• Sometimes, our objective may have decreasing differences instead of increasing (replace ≥ by ≤ in the definition). This is not a problem. All we do then is replace θ by −θ. Then, we have increasing differences in (x, −θ), and so we can conclude that x* is weakly increasing in −θ, which is equivalent to saying it is weakly decreasing in θ.

⁴ Note that when we say, for example, "supermodularity in all choice variables" this is a property of all of the choice variables taken together. What I mean to say is, this does not mean the objective is supermodular in x₁, supermodular in x₂, etc., but rather that it is supermodular in (x₁, . . . , xₙ) taken together as a whole. Make sure you read the definition of supermodularity/ID carefully and understand it.
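The conclusion of the theorem can be seen concretely on a discrete grid. A toy sketch (my own example, not from the notes): f(x, θ) = θx − x² has f_xθ = 1 ≥ 0, so ID holds, and the argmax set shifts up in the strong set order as θ rises, even when there are ties.

```python
# For an objective with increasing differences, the argmax SET rises
# (in the strong set order) as the parameter rises.
def argmax_set(f, grid, theta):
    best = max(f(x, theta) for x in grid)
    return {x for x in grid if f(x, theta) == best}

f = lambda x, t: t * x - x ** 2   # f_xθ = 1 ≥ 0, so ID holds
grid = range(0, 11)
print(sorted(argmax_set(f, grid, 3)))  # [1, 2]: a tie at θ = 3
print(sorted(argmax_set(f, grid, 5)))  # [2, 3]: the whole set shifts up
```

Note that {2, 3} ≥_S {1, 2}: the join of any cross-pair lands in the high set and the meet in the low set, exactly as the proof of Theorem 3 delivers.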

4 Examples

So, let's do a couple of quick examples of how to actually use Topkis.

4.1 An auction problem

Say we have an agent participating in an auction. His value for the object is v. This is a first price auction, and so if he submits a bid b and wins, he pays his bid. If he loses, he pays nothing. If he submits bid b, his probability of winning is F(b), where F is a cdf. His maximization problem is

    max_b (v − b)F(b) + 0 · (1 − F(b))

The parameter here is his valuation v, and we want to know how his optimal bid changes with his valuation, b*(v). So, we want to apply Topkis. Let's check our 3 conditions.

1. The choice set is a lattice, since it is just R₊.

2. To check increasing differences, we will take the derivative of the objective (call it h) with respect to v. It is

    ∂h/∂v = F(b)

The derivative is just the cdf, which is clearly increasing in b. Thus, by Theorem 1 above, we have increasing differences in (b, v). (Note that here, we cannot necessarily take a mixed partial, because we did not specify that the cdf was differentiable. However, this is ok, because all we really need is that the derivative with respect to v is increasing in b, which we know cdfs are.)

3. There is only one choice variable, so, supermodularity is trivial.

Thus, all of the conditions for Topkis' Theorem are satisfied, and we can conclude that b*(v) is weakly increasing in v. That is, a person with a higher valuation for the object will bid weakly more. (I told you Topkis was easy once you understand the definitions.)
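A quick numerical sketch of this conclusion (the particular cdf, a uniform distribution on [0, 1], is my own choice for illustration): maximizing (v − b)F(b) over a fine bid grid, the optimal bid rises with the valuation.

```python
# Grid-search the first-price auction objective (v - b)F(b) with F the
# uniform[0, 1] cdf; the argmax bid is weakly increasing in v.
def best_bid(v, F=lambda b: min(b, 1.0), grid_size=1000):
    grid = [i / grid_size for i in range(grid_size + 1)]
    return max(grid, key=lambda b: (v - b) * F(b))

bids = [best_bid(v) for v in (0.5, 1.0, 1.5, 2.0)]
print(bids)  # [0.25, 0.5, 0.75, 1.0]: bids rise with the valuation
```

(With this F, the continuous optimum is b* = v/2 for v ≤ 2, which the grid search recovers exactly.)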

4.2 A firm's problem

A fixed level of output. A firm has two inputs, capital (k) and labor (l), to a production function that produces output x. The effectiveness of labor is parametrized by θ below. In the short run, the firm's output is fixed, and so, its goal is to minimize cost while producing output level x:

    min_{k,l} rk + wl   subject to   f(k, θl) = x

r is the rental rate of capital and w is the wage paid to labor. Assume that f_l and f_k are positive. It may be helpful to define the isoquant function L(x, k), which is the total number of effective labor units necessary to produce output x with capital k. It is defined implicitly as f(k, L(x, k)) = x. Using this equation and the constraint, we can write L(x, k) = θl. Then, we rewrite the problem as a minimization only over k:

    min_k rk + wL(x, k)/θ

Find conditions on f or L under which k* is weakly increasing or decreasing in θ. (If all of the producer theory and isoquant stuff made no sense, just take the above minimization problem as a math problem and try to apply Topkis.)

Solution. We learned Topkis using maximizations, so, let's switch some signs and make the problem a maximization:

    max_k −rk − wL(x, k)/θ

Now, let's check the three conditions. Again, we only have one choice variable, so supermodularity is trivial, and the constraint set is just R₊, which is obviously a lattice. So, all we need is ID in (k, θ). If we take the mixed partial h_kθ, where h is the overall objective, we get

    h_kθ = (w/θ²) ∂L/∂k

Where do we get ∂L/∂k from? Look at the equation that implicitly defines L: f(k, L(x, k)) = x. Differentiate this implicitly with respect to k to get

    f_k + f_l ∂L/∂k = 0

or

    ∂L/∂k = −f_k/f_l

So, using this, we see that h_kθ is negative (since we assumed f_k and f_l were always positive), that is, we have decreasing differences in (k, θ), or increasing differences in (k, −θ). Thus, Topkis lets us conclude that as −θ increases, k*(θ) will increase, which is the same as saying that k* is decreasing in θ. That is, as labor becomes more effective, your optimal choice of capital decreases.

A variable level of output. Now, suppose that the firm can choose its output x, as well as its inputs k and l. It can sell output at price p, and r and w are as before (set θ = 1 for this problem). The firm now wants to maximize profits, and its maximization problem is

    max_{k,l} pf(k, l) − rk − wl

Further suppose that f_kl ≤ 0, which means that capital and labor are substitutes in the production function, so that as the firm rents more capital, the marginal product of labor decreases. What happens to the firm's optimal choice of labor l* as the wage w increases?

Solution. Here, our parameter of interest is w, not θ. Also, note that now, we have two choice variables, so, ID and supermodularity are no longer as trivial as before. Condition 1 (choice set a lattice) still holds here easily, because the choice set is just R²₊. Also, note that this is a Cartesian product of closed subsets of R (R²₊ = R₊ × R₊), so, we can use the pairwise technique to verify ID/supermodularity (Theorem 2). So, let's do it. First, we verify supermodularity in the choice variables. Note that we have 2 choice variables, k and l. To show supermodularity, we can show that the objective exhibits increasing differences in all 2-element pairs. There is only one possible pair here, so, we just need to show ID in (k, l). How do we do so? Take a mixed partial. Again, call the objective h.

    h_kl = pf_kl ≤ 0

where the inequality comes from the assumption that k and l are substitutes. So, we have decreasing differences in (k, l), which also implies increasing differences in (k, −l). By Theorem 2, the objective is supermodular in (k, −l). The last thing we must verify is increasing differences in (k, −l, w). Note that here, we include all choice variables, but only the parameter of interest. This is very important. We can change parameters one at a time, but, as a parameter changes, all of the optimal choices may change, and so, even though we are only interested in how l* changes, we still must include k when checking ID. You can ignore parameters that are not of interest, but you cannot ignore choice variables. So, we have three variables here. How can we check ID? Using the pairwise method. That is, we must check for ID in (k, −l), (k, w), and (−l, w). If we find ID in all of these pairs, we know that we have ID in the three-tuple (k, −l, w) by Theorem 2. We already checked (k, l) and found that we actually had ID in (k, −l). What about the other two?

    h_kw = 0
    h_lw = −1

Since h_kw = 0, it is nonrestrictive (that is, there are both increasing and decreasing differences, and we can use whichever we need to draw our conclusion). h_lw is negative, and so, again, we have increasing differences in (w, −l). So, let's summarize what we found. We have increasing differences in the pairs

    (k, −l), (w, k), (w, −l)

Theorem 2 above then implies that we have increasing differences in the 3-tuple (k, −l, w). So, we can conclude that −l* is increasing in w, or that l* is decreasing in w. Further, even though we were not asked this, we can also conclude that k* is increasing in w.

This makes sense. As the wage goes up, we expect the firm to use less labor and more capital. The point of this problem was to illustrate what happens with multiple choice variables. To draw a conclusion about any one choice variable and how it depends on the parameter, we must have increasing differences (or supermodularity) in all of the choice variables and the parameter of interest. If, for example, when taking pairs above we had found that we had ID in (k, l), (w, k), (w, −l), where the first pair involves l rather than −l, we could not draw a conclusion from Topkis' Theorem. l and −l are different variables, and so we cannot say we have ID in a 3-tuple of the form (k, l, w) or (k, −l, w).
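A numerical sketch of the variable-output problem (the functional form is my own, purely illustrative, not from the notes): with f(k, l) = √k + √l we have f_kl = 0 ≤ 0, so the theory predicts l* weakly decreasing and k* weakly increasing in w, and a grid argmax confirms it.

```python
# Grid-search the profit function p*f(k, l) - r*k - w*l for several wages
# and observe the comparative statics Topkis predicts.
def optimum(w, p=10.0, r=1.0):
    grid = [i * 0.5 for i in range(101)]   # k, l in {0, 0.5, ..., 50}
    profit = lambda k, l: p * (k ** 0.5 + l ** 0.5) - r * k - w * l
    return max(((k, l) for k in grid for l in grid), key=lambda kl: profit(*kl))

for w in (1.0, 2.0, 4.0):
    print(w, optimum(w))   # l* falls as w rises; k* does not fall
```

Here the separable form keeps k* flat (weakly increasing), while l* drops from 25 toward zero as the wage climbs; a form with f_kl strictly negative would move both.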

All of our problems here had differentiable objective functions, just to keep things simple. However, this will not always be the case. We can even have situations where the choice set is discrete, which definitely precludes differentiability. However, when these situations arise, the steps are basically the same. You still want ID in all 2-element pairs of the choice variables and the parameter of interest, but, rather than taking derivatives, you will just have to show the property directly from the definition given in section 2.6.
