Unemployment, Capital and Hours – On the Quantitative Performance of a DSGE Labor Market Model ∗

Philip Jung Goethe University Frankfurt

Abstract This paper shows that the standard Mortensen-Pissarides framework embedded in an RBC macroeconomic model with risk averse agents, capital and a labor-leisure choice has the ability to match all moments of the actual US unemployment rate and other labor market variables within tight bounds when estimated on aggregate output alone. It correctly predicts around 90% of the variation at business-cycle frequencies. We describe the set of parameter values that generate these results and show that they lie in the space of commonly estimated or calibrated values in macroeconomic DSGE models. In addition we show that some wage setting arrangements, like the "right to manage" approaches typically employed in the literature, will be unable to generate the observed fluctuations in unemployment rates, and we give the reason for their failure.

JEL Classification System: E30,E24,J64 Keywords: unemployment, bargaining, RBC model ∗ Correspondence: Jung: Goethe University, Postbox 94, 60325 Frankfurt am Main, Germany, Tel.: +49 69 798-28422, e-mail: [email protected]. I thank my advisor, Dirk Krueger, for his patience and encouragement during the last year, and Keith Kuester, Christian Offermanns and my colleague Markus Hagedorn for helpful comments. I also thank participants at the SCE Meetings in Cyprus, in particular Michael Reiter and Christian Haefke, for helpful remarks.

1 Introduction

The inclusion of labor markets into general equilibrium models has witnessed an increase in research effort over the last decade. While some success has been achieved since the early contributions of Merz (1995) and Andolfatto (1996), it has recently been pointed out by Shimer (2005) that the basic workhorse of the labor market, the search and matching framework of Mortensen and Pissarides (1994), has some serious shortcomings. He shows that the model is unable to jointly generate the observed fluctuations in the unemployment rate and the vacancy to unemployment ratio, which are respectively 7 and 14 times as volatile as productivity, without relying on unreasonably high and fluctuating destruction rates or unreasonably fluctuating productivity shocks. As pointed out by Hall (2005a) and Shimer (2005), high destruction rates have the counterfactual implication that they destroy the strong negative correlation between unemployment and vacancy rates, the famous Beveridge curve observation. Additionally they point out that the key driving forces in the data appear to lie on the job creation side, downplaying the role of destruction rates. In response, Hagedorn and Manovskii (HM, 2005) argue that the results of Shimer are not robust to a different calibration strategy. They show that, using different calibration targets, the model is able to generate the right variances endogenously. They rely on a very high benefit of not working, equal to 94% of the average wage rate, and use as a target an endogenous cross correlation between wages and productivity, which we view as equivalent to directly targeting the relative standard deviation of the unemployment rate, as we will show below. In summarizing the literature, Hornstein, Krusell, and Violante (2005) demand the inclusion of capital into the model to interpret the key ratios in a consistent manner.
They conclude that the results of Hagedorn and Manovskii are hard to interpret due to the fact that Shimer and HM both do not use a general equilibrium setup with capital, leaving some degrees of freedom in interpreting their results. The purpose of this paper is to clarify this issue by introducing capital and a labor-leisure choice with risk averse agents into the model and to show that a variant of the Mortensen-Pissarides model can account for all moments of the actual unemployment rate of the US. The general equilibrium structure we employ allows a clear interpretation of all parameters. In particular the key unobserved parameters, the bargaining power and the vacancy posting cost, are pinned down by principally observable variables.


We show that the calibration strategies commonly employed have at least one degree of freedom that can be used to match labor market facts (or to mismatch them) and therefore cannot be used to claim a failure or success of the model. We derive the set of joint parameter values that are able to replicate labor market facts in a full equilibrium model. We show that these parameters lie in the space of commonly estimated or calibrated parameters; however, the results of Shimer can also be defended on common grounds. We then derive implications of different wage setting arrangements currently employed in the literature and show why the Nash-bargaining setup (or a variant thereof) has the potential of endogenously explaining labor market facts within the basic Mortensen and Pissarides search and matching framework. Other mechanisms that are implementable in a macroeconomic model will likely fail.1 In particular we show that "right-to-manage" models, where firms and workers bargain about wages and firms are allowed to set hours freely, a framework typically used in combination with sticky price models as in Trigari (2004), will be unable to generate the necessary fluctuations. We highlight the role played by the profit share in these types of models and show the connection between the profit share, estimates of the inter- and intratemporal elasticities of substitution commonly employed in the literature, and the wage-setting arrangements. We show that one does not need very elastic labor supply estimates, as argued in Hall (2005b), to arrive at the conclusion that the model matches the relative standard deviation of unemployment to output. One does, however, need the assumption that the utility difference between employment and unemployment is small, a condition likely to hold for the marginal worker while more problematic for the average worker.
We show that this assumption is consistent with standard assumptions on preferences used in modern macroeconomic theory and observed values for the outside option. To validate the quality of the model with regard to labor markets beyond simulation results we propose to use the model within a Kalman filtering approach identified on output or any 1

There is a huge literature on different wage setting mechanisms and optimal contracting that has led to important new insights. Whether one can implement them in a standard macro-model and thereby test their implications in a business-cycle context under standard utility specifications has, to my knowledge, yet to be shown.


other aggregate NIPA data2, using no data on the unemployment rate in the estimation. This methodology allows us to argue that the fit of the model is endogenous. We compare the predictions of the model with the actual data and show that the model tracks the US unemployment rate within tight bounds. The model errs on average by 0.37 percentage points, or alternatively correctly explains 90% of the actual unemployment rate according to an R^2 measure. This example shows that the basic RBC model can provide a good fit to aggregate labor market facts and all moments of the unemployment rate, while retaining most other features of an otherwise standard RBC model driven by technology shocks, see Hansen (1985). The paper proceeds as follows: Section 2 presents the basic setup of the Mortensen-Pissarides model within an RBC framework; Section 3 gives the intuition for the results to follow and discusses the relation between assumptions on the wage setting arrangement and the implied volatility of unemployment rates; Section 4 derives the set of parameter values that are jointly able to match labor market facts; in Section 5 we present and evaluate the performance by estimating the model on US data via a Kalman-filtering approach, using a particular calibration as an example; Section 6 concludes.

2 Model

2.1 The household's problem

The model economy consists of a large number of identical families; each family has a continuum of members of two types, unemployed workers with mass u and employed workers with mass 1 − u, where the total mass of workers and families is each normalized to one. The families earn income from the wages of their working members, from unemployment benefits for their unemployed members, from their capital holdings and from their shares in a mutual fund that redistributes all profits arising in the economy. The family collects all the income of its members and distributes back individual consumption. We assume that the family maximizes the sum of individual utilities, where each individual 2

Using output is similar to using the Solow residual as implied by the model. If any of the endogenous aggregate NIPA data series is used (output, investment, consumption or aggregate wages), our results are robust with respect to the labor market. As will be shown, the failure of the model is linked to the use of BLS estimates of wages, productivity or hours, or most other measures of output per hour.


receives the same weight. Family preferences are given by:

W = \sum_{t=0}^{\infty} \beta^t \int_0^1 u(h_{it}, c_{it}) \, di    (1)

that is, the discounted sum over all individual utilities, where i indexes a member of a representative household. Individual utilities are given by the utility function consistent with balanced growth, identical for all individuals:

u(c, h) = \frac{(c^{\varphi}(1-h)^{1-\varphi})^{1-\sigma}}{1-\sigma} \quad \text{if } \sigma \neq 1    (2)

u(c, h) = \varphi \log(c) + (1-\varphi)\log(1-h) \quad \text{if } \sigma = 1    (3)

We check the sensitivity of our results with respect to the role played by the intratemporal elasticity of substitution by also considering:

u(c, h) = \frac{C^{1-\sigma}}{1-\sigma} - B h^{\varphi}    (4)

We label this case non-balanced growth. This case is much easier to calibrate, and much more robust with regard to steady-state implications. However, the goal of this paper is to present a general theory consistent with balanced growth, so we just point out occasionally where assumptions on non-separability will help. As will become apparent, the basic mechanism needed for a matching model to work is independent of the assumed utility function, while steady state implications will likely differ, due to the different mechanism with regard to average hours worked per person employed in the two classes considered and the role played by the intra-temporal elasticity. The budget constraint of the family reads:
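The two preference classes can be sketched numerically. The parameter values below (phi = 0.33, sigma = 2, B = 1, and the exponent on hours) are purely illustrative and not the paper's calibration:

```python
import math

def u_balanced_growth(c, h, phi=0.33, sigma=2.0):
    """Period utility (2)/(3): Cobb-Douglas consumption-leisure bundle."""
    if sigma == 1.0:  # log limit, equation (3)
        return phi * math.log(c) + (1 - phi) * math.log(1 - h)
    bundle = c**phi * (1 - h)**(1 - phi)
    return bundle**(1 - sigma) / (1 - sigma)

def u_separable(c, h, B=1.0, varphi=2.0, sigma=2.0):
    """Period utility (4): the separable non-balanced-growth case."""
    base = math.log(c) if sigma == 1.0 else c**(1 - sigma) / (1 - sigma)
    return base - B * h**varphi

# At equal consumption, a member working h = 0.33 is worse off than one
# enjoying full leisure; this utility difference matters for the bargain.
print(u_balanced_growth(1.0, 0.33), u_balanced_growth(1.0, 0.0))
```

Both specifications rank leisure the same way; they differ in how hours interact with the curvature over consumption, which is what the sensitivity checks below exploit.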

C + K' + p_s s' = D s + p_s s + \int_0^{1-u} w_i h_i \, di + u b + K + (r - \delta)K + T    (5)

where C denotes aggregate consumption of the family, K is the capital holding, b is the unemployment benefit granted by the government, T is a lump sum transfer from the government, D denotes

the dividends paid by the mutual fund, while s is the share the family holds in that mutual fund. Each employed member has an individual contract that specifies the real wage per hour, w, and the hours h he has to work. We denote by p_s the price of the mutual fund share relative to the aggregate output good. Given our representative family assumption, all families hold the same share in aggregate profits, so in equilibrium there is no trade and s will be one for all t. The assumption of a family structure is a convenient device to ensure that ex post heterogeneity with respect to employment histories does not give rise to a non-degenerate distribution of asset holdings across individuals. Models with risk-neutral agents can of course avoid this assumption, given that people do not care about risk. Given this assumption the problem of the family is then simply how to distribute its aggregate income optimally, so it will maximize equation (1) with respect to individual consumption and thereby equate the marginal utilities of its members, given the amount of hours each employed person has to work according to his contract, where by assumption unemployed individuals do not work at all. We abstract from home production and capture this effect by one parameter, the transfer b an unemployed member receives from an unemployment protection system. The solution of the optimal consumption allocation problem is given by:

c_{e,i} = c_u (1-h_{e,i})^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}}    (6)

where c_u denotes consumption of the unemployed members, while c_e denotes consumption of the employed members. The family cannot distribute more than its aggregate consumption plan, so

u \, c_u + c_u \int_0^{1-u} (1-h_{e,i})^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}} \, di = C

Hence:

c_u = \frac{C}{u + \int_0^{1-u} (1-h_{e,i})^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}} \, di}    (7)
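In the symmetric case used later (all employed members work the same hours h), the allocation (6)-(7) can be sketched as follows; C, u, h and the preference parameters are illustrative values:

```python
def allocate_consumption(C, u, h, phi=0.33, sigma=2.0):
    """Optimal family allocation from equations (6)-(7) with symmetric
    hours h for all employed members; parameter values are illustrative."""
    e = (1 - phi) * (1 - sigma) / (1 - phi * (1 - sigma))  # exponent in (6)
    c_u = C / (u + (1 - u) * (1 - h)**e)                   # equation (7)
    c_e = c_u * (1 - h)**e                                 # equation (6)
    return c_u, c_e

c_u, c_e = allocate_consumption(C=1.0, u=0.06, h=0.33)
# The allocation exhausts the aggregate consumption plan:
print(0.06 * c_u + 0.94 * c_e)  # ≈ 1.0
```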

Substituting back into the inner period utility function of the family and using the budget constraint we can write the inter-temporal maximization problem of the family in a recursive

form, where we denote the value function as \widetilde{W}:

\widetilde{W}(S) = \max_{C, K', s'} \; \frac{C^{\varphi(1-\sigma)}\left(u + \int_0^{1-u}(1-h_{e,i})^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}} \, di\right)^{1-\varphi(1-\sigma)}}{1-\sigma} + \beta E \, \widetilde{W}(S')    (8)

s.t.

C + K' + p_s s' = D(S) s + p_s s + \int_0^{1-u} w_i(S) h_i(S) \, di + u b + (1 + r(S) - \delta)K + T(S)    (9)

S is the set of minimal state variables, which includes aggregate capital holdings, the realization of the shock, and the current unemployment rate; its laws of motion are discussed below. 3 We call, with a slight abuse of notation, the aggregate per-period utility function U(C, u, h), where h is at this stage shorthand for a possibly entire distribution of different hour contracts. 4 Note that the family assumption, being equivalent to a complete markets assumption, though convenient, has the important property that from an individual's perspective the best state of the world is being unemployed: he does not face a dis-utility of work, while receiving the same amount of consumption. While it is well known that neglecting precautionary saving motives in models with incomplete markets and heterogeneous agents will bias the aggregate savings decision, in a model with an explicit hours choice it will also bias the intratemporal decision and will therefore bias the wage and hours properties of the model. The extent to which these influences are important for business cycle fluctuations appears to be an open question.

2.2 The firm's problem

There are three sectors in this economy: a final good producing sector that combines intermediate goods to produce the homogeneous consumption good, an intermediate good sector that combines capital and a labor good to produce the intermediate good, and the labor market firms that use labor to produce the labor good. We assume a perfectly competitive final good producing sector,

3 In the log case things simplify and we have perfect consumption insurance in the sense that each member gets the same level of consumption irrespective of whether he works or not. So aggregate period utility is given by

U(C, u, h) = \varphi \log(C) + \int_0^{1-u} (1-\varphi)\log(1-h_i) \, di    (10)

or

U(C, u, h) = \log(C) - \int_0^{1-u} B h_i^{\varphi} \, di    (11)

4 We will later on focus on symmetric equilibria, where all employed will work the same amount of time.

that transforms a measure one of intermediate goods into the single aggregate good according to the period production function

Y = \left[\int_0^1 y_j^{\epsilon} \, dj\right]^{\frac{1}{\epsilon}}    (12)

The final good firms solve in each period:

\max_{y_j} \; P Y - \int_0^1 p_j y_j \, dj \quad \text{s.t. (12)}    (13)-(14)

where p_j is the price of the intermediate good j and P is the aggregate price index. Perfect substitutability guarantees P_t = p_j. Intermediate goods are produced by firms with a standard Cobb-Douglas technology:

y_j = k_j^{\alpha} l_j^{1-\alpha}    (15)

where k_j is the capital used by firm j and the labor input l is itself an aggregate of differentiated labor services l_z:

l = \int_0^{1-u} l_z \, dz    (16)

Each labor service l_z is produced by individual firms, viewed as matches between a worker and a small firm. The match has a linear production technology:

l_z = A_t h_z    (17)

where h indicates hours worked of an individual worker and A is the technology level, assumed common for all matches and allowed to follow an exogenous AR(1) process. 5 For simplicity we abstract from monopolistic price setting behavior in the intermediate firm sector and set \epsilon = 1, so p_j is equal to the aggregate price level and also normalized to one. The labor goods are viewed as perfect substitutes, so labor firms also take prices as given. We denote the price of the labor good by P^w. Intermediate goods firms maximize profits and solve:

\max_{k_i, l} \; y_i - r k_i - \int_0^{1-u} P^w l_z \, dz    (18)

which leads to the standard first order conditions

5 For simplicity we assume for now that all matches have the same technology level.

P^w = (1-\alpha)\frac{Y}{l}    (19)

r = \alpha\frac{Y}{K}    (20)

Note however that P w is not the wage, but denotes the price the labor firm obtains for the services. The wage is determined as the outcome of a bargaining game between the worker and the labor-service firm.
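The first order conditions (19)-(20) imply that factor payments exhaust the intermediate firms' output, which is easy to verify numerically; Y, l, K and alpha below are illustrative values:

```python
def intermediate_firm_prices(Y, l, K, alpha=0.33):
    """First order conditions (19)-(20) of the intermediate goods firms."""
    P_w = (1 - alpha) * Y / l   # price of the labor good, equation (19)
    r = alpha * Y / K           # rental rate of capital, equation (20)
    return P_w, r

P_w, r = intermediate_firm_prices(Y=1.0, l=0.3, K=10.0)
print(P_w * 0.3 + r * 10.0)  # ≈ 1.0: payments to labor and capital sum to Y
```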

2.3 Bargaining

The labor service firm produces an intermediate labor good, which exclusively depends on labor inputs and is needed to produce the aggregate output good as described above. This assumption is convenient, because it allows us to disentangle the capital input decision from the labor input decision and to aggregate back to a standard Cobb-Douglas production function typically used in the RBC literature. A firm takes the price P^w as given and bargains about wages and hours. Firm and worker separate with an exogenous constant probability \lambda. In the derivation we focus on a constant probability. In the estimation we allow \lambda to follow a lognormally distributed AR(1) process to show that exogenous destruction rates will not be the key to labor market dynamics.6 Let \Xi = [0, \infty) be the set of all asset holdings, U = [0, 1] the set of possible unemployment rates and \Upsilon = [0, \infty) the space of possible realisations of the technology shock. Let S = \Xi \times U \times \Upsilon be the cartesian product with associated Borel set B and let s \in S be the current state of the match. Then the value of an intermediate firm for a given sequence of wage and hour contracts is given by:

\Pi(s) = P^w A h_i - w_i h_i + (1-\lambda) E \, q(s'|s) \, \Pi(s'|s)    (21)

subject to the law of motion of the state variables, where the bargaining parties assume that future wages and hours are functions of the minimal state vector s. Here q is the discount factor of the firm, which is linked to the discount factor of the owner, i.e. the family, such that

q(s'|s) = \beta \, \frac{\partial u(c(s'|s))/\partial c'}{\partial u(c(s))/\partial c}    (22)

The use of endogenous destruction rates can of course create any fluctuations one wants, depending on how much mass of workers one distributes around the destruction threshold. We view this as problematic and do not follow this line of research here.


Note that, formally speaking, to define the above value function properly we need to resort to a Markov perfect equilibrium concept. That is, workers and firms assume that future wages and hours are functions of the future state variables, aggregate capital holdings K, the unemployment rate u and the technology level A, while bargaining about the spot rates. Given assumed laws of motion for these future wages and hours, which depend only on the minimal state vector, the problem amounts to a fixed point solution, such that the assumed laws of motion are consistent with the wage functions we obtain from the current period Nash-bargaining as derived below. This is needed because in principle workers can influence their own future wage; thus our assumption of Markov perfection rules out dynamic contracts where current wages and hours can be conditioned on the entire past history of the match. We assume a standard Nash-bargaining structure. Given our family structure this bargaining is ill-defined from an individual's perspective due to the perfect insurance provided by the family. We therefore assume that the entire family and the firm bargain, with the family taking all other bargaining outcomes as given. We first derive the threat point of the family, i.e. the value of the outside option of the family if the bargaining breaks down. Writing the family's problem in a recursive manner leads to:

W(1-u, K, A) = \max_{C, K'} \; U(C, h, 1-u) + \beta E \, W(1-u', K', A')

s.t.

u' = u + \lambda(1-u) - \pi^{ue} u

C + K' + p_s s' = \frac{D}{P} s + p_s s + \int_0^{1-u} w(i) h(i) \, di + u_t b + K + (r_t - \delta)K + T

where U is the aggregate household utility function derived above and \pi^{ue} is the probability of an unemployed member of the family getting a job this period, taken as given by the household. The threat point of the bargaining can then be derived as the marginal value of having one worker being unemployed, relative to being employed, i.e.:

\Delta V = \frac{\partial W(1-u, K, A)}{\partial(1-u)} = \frac{\partial U}{\partial C}\frac{\partial C}{\partial(1-u)} + \frac{\partial U}{\partial(1-u)} + \beta(1-\lambda-\pi^{ue}) E \, \frac{\partial W(1-u', K', A')}{\partial(1-u')}    (23)

and letting U be the inner utility function the above can be rewritten as


\Delta V(s) = \varphi(1-\sigma)\frac{U}{C}(wh - b) + \frac{U(1-\varphi(1-\sigma))\left(-1 + (1-h_i)^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}}\right)}{u + \int_0^{1-u}(1-h_{e,i})^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}} \, di} + \beta(1-\lambda-\pi^{ue}) E \, \Delta V(s')    (24)

where U is 7 :

U = \frac{C_t^{\varphi(1-\sigma)}\left(u + \int_0^{1-u}(1-h_e)^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}} \, di\right)^{1-\varphi(1-\sigma)}}{1-\sigma}    (26)

With \Delta V being similar to the standard utility difference known from the matching function literature we can now proceed to define the bargaining problem as

\arg\max_{w,h} \; \Delta V^{\mu} \, \Pi^{1-\mu}    (27)

where \mu is the bargaining power of the family/worker and a parameter of the model. Taking first order conditions with respect to w and h leads, after some rearranging and imposing symmetry, to:

\Pi = \frac{(1-\mu)}{\mu} \, \frac{\Delta V}{\partial U/\partial C}    (28)

-\frac{\partial^2 U/(\partial(1-u)\,\partial h_i)}{\partial U/\partial C} = P^w A = \frac{(1-\alpha)Y}{(1-u)h}    (29)

7 In the log case:

\frac{\partial W(1-u, K, \Upsilon)}{\partial(1-u)} = \frac{1}{C}(wh - b) + \varphi\ln(1-h_i) + \beta(1-\lambda-\pi^{ue}) E \, \frac{\partial W(1-u', K', \Upsilon')}{\partial(1-u')}    (25)

and in the non-balanced growth case:

\frac{\partial W(1-u, K, \Upsilon)}{\partial(1-u)} = \frac{1}{C}(wh - b) - \varphi B h^{\varphi} + \beta(1-\lambda-\pi^{ue}) E \, \frac{\partial W(1-u', K', \Upsilon')}{\partial(1-u')}

Depending on the assumed utility function this yields for the general case 8 :

\frac{U(1-\varphi(1-\sigma)) \, \frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)} \, (1-h_i)^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}-1}}{\varphi(1-\sigma) \, \frac{U}{C}\left(u + \int_0^{1-u}(1-h_{e,i})^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}} \, di\right)} = \frac{(1-\alpha)Y}{(1-u)h}    (32)

\frac{C}{Y} = \frac{(1-\alpha)}{(1-u)} \, \frac{(1-h)}{h} \, \frac{\varphi}{(1-\varphi)} \, (1-h)^{-\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}}\left(u + \int_0^{1-u}(1-h_{e,i})^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}} \, di\right)    (33)

So we get back a variant of the standard intratemporal condition, which depends on the unemployment level, and the Nash-Bargaining condition deflated by marginal utility to convert to common units. The inclusion of labor markets does, however, lead to a different weighting factor in the intratemporal optimality condition, given by:

Z

1−u

(1 − he,i )

(1−ϕ)(1−σ) 1−ϕ(1−σ)

di)

(34)

0

This is due to the fact that with non-separable utility employed and unemployed workers will consume different amounts of consumption goods even within the above family structure.

2.4 Matching Markets

To close the model we assume a standard Cobb-Douglas matching technology 9 :

M = s u^{\xi} v^{1-\xi}    (36)

where M is the number of workers being matched to a firm, and v is the number of vacancies posted by the entrepreneurial sector. We define the probability \pi^{ue} of getting a job from a worker's perspective and the probability \theta of finding a worker for an open vacancy from the

8 In the log case this simplifies to:

\frac{C}{Y} = \frac{(1-\alpha)}{(1-u)} \, \frac{(1-h_i)}{h} \, \frac{\varphi}{1-\varphi}    (30)

Finally, the case of non-balanced growth gives:

\frac{C}{Y} = \frac{(1-\alpha)}{(1-u)} \, \frac{h^{-\varphi}}{B\varphi}    (31)

9 We checked sensitivity and also used

M = \frac{uv}{(u^{\xi} + v^{\xi})^{\frac{1}{\xi}}}    (35)

with no qualitative change of the results.

firm's perspective as:

\pi^{ue} \equiv \frac{M}{u} = s x^{1-\xi}    (37)

\theta \equiv \frac{M}{v} = \frac{s}{x^{\xi}}    (38)

The indicator for market tightness x is defined to be:

x \equiv \frac{v}{u}    (39)
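The definitions above can be checked against the closed forms (37)-(38); s and xi below are illustrative values, not the paper's calibration:

```python
def matching_market(u, v, s=0.4, xi=0.5):
    """Cobb-Douglas matching (36) and the derived objects (37)-(39)."""
    x = v / u                     # market tightness, equation (39)
    M = s * u**xi * v**(1 - xi)   # matches formed, equation (36)
    pi_ue = M / u                 # job finding probability, equation (37)
    theta = M / v                 # vacancy filling probability, equation (38)
    return x, M, pi_ue, theta

x, M, pi_ue, theta = matching_market(u=0.06, v=0.04)
# Consistent with pi_ue = s*x**(1-xi) and theta = s/x**xi:
print(pi_ue - 0.4 * x**0.5, theta - 0.4 / x**0.5)
```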

We let \Pi denote expected total profits obtained from a match. Therefore \Pi solves the Bellman equation:

\Pi(s) = P^w(s) A h(s) - w(s) h(s) + \beta(1-\lambda)\Pi(s')    (40)

for a given law of motion for the aggregate state vector s. We assume free entry into the entrepreneurial market for labor, such that the number of vacancies posted is given by:

\frac{\kappa}{\theta} = E \, q' \, \Pi'    (41)

where \kappa is the exogenous cost of posting a vacancy and \theta is the probability of being matched. That is, entrepreneurs will enter the labor market until the expected profits, properly discounted, equal the cost of entry \kappa. Rewriting the free entry condition using the profit gives:

\frac{\kappa}{\theta} = E \, q'\left((P^{w\prime} A' h' - w' h') + \frac{(1-\lambda)\kappa}{\theta'}\right)    (42)

Using first order conditions and some manipulations (see the appendix) we get the standard wage setting equation typical for this class of models:

wh = \mu(\kappa x + P^w A h) + (1-\mu)\left(b - \frac{f(h, u)}{\partial u/\partial c_t}\right)    (43)

where f(h, u) = \frac{\partial U}{\partial(1-u)} depends on the particular utility function assumed.10

10 For the general case:

\frac{f(h, u)}{\partial u/\partial c_t} = \frac{C(1-\varphi(1-\sigma))\left(-1 + (1-h)^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}}\right)}{\varphi(1-\sigma)\left(u + (1-u)(1-h)^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}}\right)}    (44)

In the log case:

\frac{f(h, u)}{\partial u/\partial c_t} = C\varphi\ln(1-h)    (45)

This is the familiar form often obtained in the search and matching literature. The wage is a convex combination of the overall value added P^w A h plus the savings from not having to repost a vacancy, and the compensation for the dis-utility of work (note that f < 0) beyond the unemployment benefit, expressed in units comparable with consumption. Note that f is endogenous and surely a function of individual hours worked, therefore time varying, and potentially a function of u too, depending on the assumptions on utility and family structure. In contrast, with risk-neutral agents and no labor-leisure choice, this term is absent from the equation. For future reference we define the wage share to be

\gamma = \frac{wh(1-u)}{y} and denote the gross profit share as \frac{(Ah - wh)(1-u)}{y}.

2.5 Government

The government balances the budget period by period by adjusting the transfers T. The government budget constraint reads:

g + u_t b + T_t = 0    (46)

We assume that g is fixed, while the lump sum transfer T moves over the business cycle. This assumption avoids having to specify behavioral rules for taxes that would add additional distortions to the model.

2.6 Equilibrium

We state the definition of equilibrium in recursive form.

Definition 1 A symmetric equilibrium for the above economy, given an initial capital stock K_0, an initial unemployment rate u_0 and an initial set of 1-u_0 matches, is a value function \widetilde{W}: S \to R for the representative family and associated policy functions C: S \to R_{++}, K': S \to R, value functions \Delta V: S \to R and \Pi: S \to R and associated policy functions w: S \to R_{++}, h: S \to R_{++} for the bargaining, vacancy openings v: S \to R_{++} and associated probabilities \theta: S \to [0, 1] and \pi^{ue}: S \to [0, 1] for the matching market, dividends D: S \to R and transfers T: S \to R, policies for the firm k, l and prices P^w, r such that

1. \widetilde{W} satisfies the household Bellman equation and C, K' are the associated policy functions, given r, P^w, h, w, and \pi^{ue}.


2. k, l satisfy, given h, r, P^w:

r = \alpha k^{\alpha-1} l^{1-\alpha}

P^w = (1-\alpha) k^{\alpha} l^{-\alpha}

3. Capital markets and markets for the labor good clear:

\int_i k_i \, di = k = K \qquad \int_i l_i \, di = l = A \int_i h_i \, di = A h (1-u)

4. \Delta V and \Pi satisfy the Bellman equations of the marginal worker and the profit function of the firm.

5. Given probabilities, value functions \Delta V and \Pi, and assumed future individual labor contracts, w, h are the solution to the Nash-Bargaining procedure:

\arg\max_{w,h} \; \Delta V^{\mu} \, \Pi^{1-\mu}

6. The perceived law of motion for the aggregate state variables in the Nash-Bargaining procedure is consistent with the actual behavior:

u' = u + \lambda(1-u) - \pi^{ue} u

K' = K(1-\delta) + I

A' = \rho A + \epsilon, \quad \epsilon \sim N(0, \sigma_{\epsilon}^2)

7. Zero profits from posting a vacancy:

\frac{\kappa}{\theta} = E \, q' \, \Pi'

8. Perceived probabilities are consistent with the actual ones, that is they satisfy the aggregate matching condition:

\frac{M}{u} \equiv \pi^{ue} = s x^{1-\xi} \qquad \frac{M}{v} \equiv \theta = \frac{s}{x^{\xi}}

9. Dividends paid to the household are consistent with overall profits in the economy:

D = (P^w A h - wh)(1-u) - \kappa v

10. The government balances its budget in each period.

11. The final goods market clears, that is

C + I + g + \kappa v = \int_0^1 k_i^{\alpha} l_i^{1-\alpha} \, di

3 Intuition

Before proceeding to the calibration it might be helpful to consider the intuition for why particular wage-setting arrangements are able or unable to generate unemployment rate fluctuations that are roughly seven times as volatile as productivity and vacancy-to-unemployment rate fluctuations that are fourteen times as volatile. Up to now we have focused on one particular form of the wage setting process, which we summarize as the Nash-Bargaining procedure. However, different wage setting arrangements have been advocated (see Hall (2005a)) as a potential way to overcome the perceived shortcomings of the standard search and matching model. The basic idea in these approaches is to derive a wage setting that leads to a relatively constant or sticky wage contract such that the firm can pocket more profits in booms than in recessions, which correspondingly leads to higher employment rates in booms. One approach, which we label the contract approach, characterizes different contracts, either by changing the underlying bargaining game or by using an optimal contract given some commitment technology as in Rudanko (2005). The other approach, sometimes employed in models with monetary frictions, see Christoffel, Kuester, and Linzert (2005), uses right-to-manage contracts, where firms can choose hours at will, but worker and firm bargain about the spot wage rate. This allows the incorporation of staggered wage contracts.


This section identifies the key mechanism responsible for the fluctuations in unemployment rates in a simplified version of the model and relates it to assumptions on the wage setting process used in the literature. We show that the ability of the model to replicate the observed fluctuations in the data depends crucially on a positive correlation between productivity and the profit share firms obtain from newly employed workers. Therefore right-to-manage contracts will be unable to generate enough fluctuations endogenously, given that the assumption implies a constant profit share. Nominal wage setting frictions à la Calvo will likely generate a negative correlation between the profit share and productivity, aggravating the problem. Optimal contracts in turn will generate plans of wage payments throughout the existence of the match. Whether a framework with optimal contracts can generate the observed fluctuations will primarily depend on assumptions with respect to how new contracts are put into place, not on the distribution of profits over the lifetime of the contract. In particular the results described for the Nash-bargaining procedure will carry over to bargaining over new contracts. Whether they can be easily adapted to competitive search frameworks, where bargaining over new contracts is replaced by the competitive search or wage posting procedure, cannot be said in general. To show this intuition we will look at a simplified log-linearized version of the model neglecting capital. We can use the method of matching coefficients to derive the general dynamics in closed form and give the link to assumptions on the wage setting process. The general dynamics of the matching model with regard to labor markets is captured by the free entry, or vacancy posting, condition:

\frac{\kappa}{\theta} = E\beta\left(\frac{y'(1-\alpha)}{1-u'} - w'h' + \frac{\kappa(1-\lambda)}{\theta'}\right)    (47)

where we used the symmetry of the hours decision for all workers. To recall: h is hours worked per match, w is the wage per hour the firm has to pay, \theta is the probability of getting a match and \kappa is the vacancy posting cost. We use a constant exogenous separation rate \lambda and a constant discount factor \beta for simplicity. Recalling the definition of the wage share, \frac{wh(1-u)}{Y} = (1-\alpha)(1-\gamma_t), and the profit share \gamma_t, \gamma = \frac{(P^w A h - wh)(1-u)}{y(1-\alpha)}, we can rewrite the above equation together with its steady state relation as:

\frac{\kappa}{\theta} = E\beta\left(\frac{y'(1-\alpha)}{1-u'}\gamma' + \frac{\kappa(1-\lambda)}{\theta'}\right)    (48)

\frac{\kappa}{\theta^{ss}} = \beta \, \frac{y^{ss}(1-\alpha)\gamma^{ss}}{(1-u^{ss})(1-\beta(1-\lambda))}    (49)

where we have defined the profit share as gross total profits arising from labor market contracts relative to total output (without netting out the vacancy posting cost). We see immediately that, given an estimate of \theta^{ss}, we have to calibrate \kappa in steady state to an estimate of the profit share on labor market contracts. Different assumptions on the wage setting process now lead directly to different dynamics of the profit share. Consider first the case where \gamma is constant. This happens in all "right to manage" frameworks or in setups with a monopolistic competitive structure, where firms choose prices and hours, taking wages as given or bargaining about them separately via a Nash-bargaining procedure. To generate positive profits assume either decreasing returns to scale, or a standard Dixit-Stiglitz-type aggregator over labor-market goods, where firms can set hours at will. Then the firm's problem would be:

\max_h \; A h^{\varrho} - wh    (50)

implying a constant profit share 1-\varrho. The same holds in a standard monopolistic competitive model, where we would introduce the Dixit-Stiglitz aggregator into our labor market good sector. A standard price-setting argument would directly lead to a constant mark-up rule and therefore a constant profit share. Note that assumptions on price or wage stickiness would generate a time-varying mark-up, but, as is well known, typically a markup that is negatively correlated with the business cycle. To see that these types of models cannot generate the observed fluctuations we shall assume that output per person

y 0 (1−α) 1−u0 ,

which we normalize without loss of generality to one in steady

state, can be described for simplicity by an exogenous AR(1) process in technology, therefore abstracting from capital and unemployment rates as state variables. We note that the influence of capital or unemployment rates on productivity in the typically derived models is fairly small, and we neglect it for the moment to get the intuition right. So let this process be denoted by A0 = ρA + ε Guessing a linear law of motion for

1 θ

we obtain via matching coefficients after some manipula-

tions: ξe x≡

ρ(1 − β(1 − λ)) e 1 e = ∗A≡f ∗A e (1 − β(1 − λ)ρ) θ

(51)

with x being again the vacancy to unemployment ratio, ξ is the matching coefficient and hat denote deviation from the steady state. This simple relation allows to address the relative fluctu17

ations of the vacancy to unemployment ratio to productivity immediately. We discuss calibration of the parameters in more detail below, so for the moment we take fairly uncontroversial parameters from the literature. Assume the autocorrelation of productivity to be ρ = .95, the discount factor to be β = .985 and the exogenous separation-rate to be λ = .1 on quarterly observations, values typically used for example in Shimer (2005), the above expression boils down to f ∼ .7, and letting ξ be around .5-.7, also typical values, we get a relative standard deviation of x to productivity of 1.4, roughly in line what Shimer (2005) obtains. This is off by a factor of 10 on quarterly values for the US as documented in Shimer. All other shocks one can imagine will typically influence output, so we need a shock that drives the above equation beyond driving output. We agree with Shimer that the alternative, large fluctuations in destruction rates, have a lot of counterfactual implications and, given the observed destruction rates as estimated below, are unable to reproduce the right magnitudes. A second way out is a fluctuating vacancy posting cost, which would be a theory based on unobservable variables. In any case, a fluctuating κ shows up on the left hand and the right hand side of the equation. We cannot rule out that an independent exogenous process on κ can deliver the right amount of fluctuation, even though some conducted simulation-results are negative in this dimension, but it will almost certainly destroy the high negative correlation between unemployment and output. We conclude that the only way to get the vacancy-to unemployment rate fluctuating by an amount consistent with the data in an endogenous fashion, that is, between 12-14 times as much as output, is for the profit share to fluctuate considerably. Consider now the case where γ is time varying as is the case in efficient wage bargaining. 
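The back-of-the-envelope calculation in this paragraph can be checked numerically. The following sketch simply evaluates the closed-form coefficient f from equation (51) at the quoted parameter values; it is a sanity check, not part of the paper's estimation.

```python
# Numerical check of the closed-form coefficient f in equation (51),
# at the parameter values quoted in the text (quarterly frequency).
rho = 0.95    # autocorrelation of productivity
beta = 0.985  # discount factor
lam = 0.10    # exogenous separation rate
xi = 0.5      # matching function coefficient

f = rho * (1 - beta * (1 - lam)) / (1 - beta * (1 - lam) * rho)

# xi * x_hat = f * A_hat  =>  sd(x)/sd(A) = f / xi
rel_sd_x = f / xi
print(f"f = {f:.2f}")                   # about .7, as stated in the text
print(f"sd(x)/sd(A) = {rel_sd_x:.2f}")  # about 1.4, an order of magnitude below the data
```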
Linearizing the free entry condition gives:

-\frac{\hat{\theta}}{\theta^{ss}} = E\left((1-\beta(1-\lambda))\hat{A}' + (1-\beta(1-\lambda))\frac{\hat{\gamma}'}{\gamma^{ss}} - \beta(1-\lambda)\frac{\hat{\theta}'}{\theta^{ss}}\right)    (52)

Matching coefficients once again, and assuming that the deviation of the profit share from steady state is a linear function of the technology process, γ̂ = gÂ, with guessed coefficient g, we get:

f = \frac{\rho(1-\beta(1-\lambda))}{\theta^{ss}(1-\rho\beta(1-\lambda))} + \frac{\rho(1-\beta(1-\lambda))\,g}{\gamma^{ss}\theta^{ss}(1-\rho\beta(1-\lambda))}    (53)

We see directly that the smaller γ, the higher the variance of x. A high ρ will also lead, ceteris paribus, to a higher variance. Obviously g can be anything and will clearly depend on all structural parameters, but the general intuition carries over even in a fully specified general equilibrium model, as will be shown below. Also note that a counter-cyclical profit share, as is typically the case in sticky wage models, will induce problems concerning the variance, because it induces a negative covariance between the two terms, therefore dampening fluctuations in x. The results presented so far would indicate that a high bargaining power of the worker could give the right variance, because it lowers the profit share ceteris paribus. This is incorrect in general, because a high bargaining power will dampen the fluctuations in the profit share, that is, g will typically be reduced, due to the fact that firms have to pay a bigger part of the pie to the worker. The order of magnitude cannot be addressed without solving the full model, but as shown below this effect outweighs the effect of a lower profit share by far. Also note that even a bargaining power of zero will not induce sufficient variation in the vacancy-to-unemployment ratio, unless the profit share is made sufficiently small simultaneously. The basic tradeoff then is to induce a high g, i.e. a low bargaining power of the worker, and simultaneously keep γ^{ss} low. The only way to accomplish this is to choose the outside option fairly high, that is, to make the utility difference from a worker's perspective between working and not working very small. This is the basic intuition of the results presented in Hagedorn and Manovskii (2005), and it does carry over to a general equilibrium setup. The absolute value of the outside option strongly depends on general equilibrium effects and cannot be used to argue in favor of or against the model. In fact, as will be shown below, for all outside options interpreted as a replacement ratio, say between .1 and .6 of the average wage, there exists a set of parameters generating the right fluctuations. To make a precise statement about the order of magnitude of g, i.e. the parameter governing the standard deviation of the profit share relative to productivity, we have to solve the full model, which we describe in the next section.
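To illustrate the comparative statics of equation (53), the sketch below evaluates f for a few steady-state profit shares; the values of θ^{ss} and g are illustrative stand-ins, not calibration targets from the paper.

```python
# Comparative statics of equation (53): the response coefficient f of
# the v/u ratio grows as the steady-state profit share gamma_ss shrinks.
# theta_ss and g are illustrative values, not calibration targets.
rho, beta, lam = 0.95, 0.985, 0.10
theta_ss, g = 0.45, 0.05

common = rho * (1 - beta * (1 - lam)) / (theta_ss * (1 - rho * beta * (1 - lam)))
fs = [common * (1 + g / gamma_ss) for gamma_ss in (0.05, 0.02, 0.01)]
for gamma_ss, f in zip((0.05, 0.02, 0.01), fs):
    print(f"gamma_ss = {gamma_ss:.2f}  ->  f = {f:.2f}")
```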


4 Quantitative Analysis

4.1 Data description

We are interested in matching quarterly NIPA data from the US. Our sample period is from 1954:3 to 2005:2. We use real output, consumption and investment from the NIPA and divide them by the population older than 16 to express them in per capita terms. We define consumption to be consumption of non-durables and services, and correspondingly define investment to be NIPA investment plus consumption of durables. We use NIPA compensation divided by population as our measure of the total wage. For the labor market facts we closely follow Shimer (2005). We use the official unemployment rate from the BLS (we also look at the level of unemployed persons and employed persons). As our measure of vacancies we use the Conference Board Help-Wanted Index, and we use this measure divided by the number of unemployed as our measure of the vacancy to unemployment ratio. The job finding probability is estimated (see Shimer (2005) for a discussion) by:

\pi^{ue}_t = 1 - \frac{u_{t+1} - u^s_{t+1}}{u_t}    (54)

with u^s_{t+1} being the number of short-term unemployed less than one month. Similarly, the destruction rate is computed as the ratio of short-term unemployed workers next month to employed workers this month, e_t, given by:

\lambda_t = \frac{u^s_{t+1}}{e_t(1 - .5\,\pi^{ue}_t)}    (55)
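A minimal sketch of the construction in equations (54) and (55), run on a tiny made-up monthly series (the numbers are illustrative, not actual BLS data):

```python
# Job finding and separation rates from unemployment counts, following
# the Shimer-style construction described in the text. Input numbers
# below are invented for illustration only (millions of persons).
u  = [7.0, 7.4, 7.2]        # unemployed, months t, t+1, t+2
us = [2.8, 3.0, 2.9]        # short-term unemployed (< 1 month)
e  = [140.0, 139.6, 139.8]  # employed

# eq. (54): probability of leaving unemployment within the month
pi_ue = [1 - (u[t + 1] - us[t + 1]) / u[t] for t in range(len(u) - 1)]
# eq. (55): monthly separation rate
lam = [us[t + 1] / (e[t] * (1 - 0.5 * pi_ue[t])) for t in range(len(u) - 1)]

print(pi_ue)  # monthly job finding probabilities
print(lam)    # monthly separation rates
```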

The biggest problem arises with regard to data related to the choice of hours worked. There is a large literature and discussion about the right measure of hours. Most of the debate circles around the question whether the hours measure has a unit root or not, and correspondingly whether estimation should be done in differences or levels. The right measure is critical for the sign of the technology shock in impulse response function estimation, as exemplified in the work of Gali (1999) and, for the opposite result, in Christiano, Eichenbaum, and Vigfusson (2003). Francis and Ramey (2005) show that neglecting the governmental sector might be responsible for the trend in the hours per capita series. To obtain a normalized total hours series, we multiply average weekly hours by the employment to civilian-labor-force ratio, which accounts for the fact that our model is normalized to employed and unemployed persons summing to one; it therefore tries to account for the strong upward trend in the labor-force participation rate. We implicitly make the assumption that in all sectors not included in the BLS hours series the average workweek of an average employee is the same as in the sectors included by the BLS. Both series show a downward trend, basically between 1970 and 1980, that the model cannot account for. Given that we work with HP-filtered data with an HP-filter parameter of 100000, we choose to remove the trend and focus on business-cycle features. However, the inclusion of the unemployment rate offers an independent measure of labor utilization, mainly absent from the debate on conditional moments. To compare results to part of the literature we also report results for the total hours measure obtained by Francis and Ramey (2005), which does not have a downward trend.
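The HP filter with the conservative smoothing parameter λ = 100000 used here can be implemented in a few lines directly from its definition; this is a generic sketch, not the authors' code.

```python
import numpy as np

# Generic HP filter, implemented from its definition:
# trend = argmin sum (y - tau)^2 + lam * sum (second difference of tau)^2,
# whose first-order condition is (I + lam * D'D) tau = y.
def hp_filter(y, lam=100000.0):
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.zeros((n - 2, n))  # second-difference operator
    for t in range(n - 2):
        D[t, t:t + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
    return y - trend, trend   # (cycle, trend)

# A linear series has zero second differences, so it is its own trend:
cycle, trend = hp_filter(np.linspace(0.0, 5.0, 40))
print(float(np.max(np.abs(cycle))))  # numerically zero
```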

4.2 Calibration

This section describes our calibration procedure. We first discuss the set of parameters that can be chosen similarly to standard RBC specifications and which we view as fairly uncontroversial. We then provide calibration targets for the parameters from the labor market that are relatively straightforward and also fairly uncontroversial. We proceed to highlight a set of parameters for which there is no common agreement in the literature and that cannot easily be pinned down from equilibrium equations. We describe for each parameter the range which we consider reasonable, and argue that the model leaves one degree of freedom. We then undertake a four-dimensional search over the space of potential parameter values that can jointly replicate the relative standard deviation of unemployment rates to output of roughly 7 in US data and are consistent with the equilibrium restrictions from the model. We have the following 15 parameters in our model (Table 1). We first discuss the relatively uncontroversial part. We view a period to be one month, consistent with labor market observations. We therefore have to readjust some of the standard parameters of the RBC model, commonly used at quarterly frequency. Given the underlying RBC structure, some parameters can be assigned in a fairly standard fashion. Other parameters are more problematic. The fairly standard parameters are α, β, δ, ρ, g and are discussed first. The only slightly problematic parameter is the capital share. We cannot assign all NIPA profits to unambiguous capital income because, given labor market contracts, part of it might go to the profit share, normally absent from an RBC model. We set α = .35, slightly below the value used in Prescott (1994). The raw wage share is .56 in our data, but we have to adjust in part for

Table 1: Parameters of the model

α    capital share
ξ    matching function coefficient
β    discount factor
λ    destruction rate
δ    depreciation rate
ϕ    dis-utility of work
µ    bargaining power
σ    risk aversion
κ    vacancy posting cost
b    UI benefits
Υ    technology level
s    matching normalizer
g    government share
σ_Υ  standard deviation of technology
ρ    autocorrelation coefficient

This table summarizes the parameters of the model.

ambiguous capital income, as explained for example in Prescott (1994), and for profits. Given that we view the profit share as fairly small (we end up targeting a profit share of 1% as described below), we choose a value very close to the standard calibration. The steady state relations of the model imply that

\delta = \frac{I/Y}{K/Y} - n - gr    (56)

where n is the population growth rate and gr is TFP growth. We target a ratio of 3.3·12 for the capital to output ratio and .25 for the investment to output ratio. Hence

\delta = .007    (57)

on a monthly basis, corresponding to roughly 2% on quarterly values. We set

\beta = \frac{1}{1 + \frac{\alpha}{K/Y} - \delta} = .9972    (58)

which corresponds to an annual interest rate of around 3.5%. We assume lump sum transfers to or from the family and obtain government expenditure residually from the goods-market clearing condition, conditional on the value for the cost of opening vacancies in the society as described below, where we target a ratio of government consumption to output very close to .2. We take as our benchmark a standard RBC view, and set the autocorrelation of the technology shock to ρ = .95 at quarterly frequency, implying ρ = .983 at monthly frequency. This choice allows comparison to results obtained in Shimer and HM, who use a similar value. We discuss the role of this particular specification below. The next set of parameters, λ, ξ, s, is related to labor markets and can also be viewed as fairly uncontroversial. We follow Shimer in targeting the following labor market facts. We set the exogenous destruction rate to

\lambda = .03    (59)

at monthly frequency, implying a destruction rate of 9% at quarterly frequency. Our measured destruction rate has a mean of .033. Standard estimates are in the range of 8-10% on quarterly data. The average unemployment rate is .056 in our data. The BLS constructs estimates from their sample for the official unemployment rate based on .06, so we target a steady state level of

u^{ss} = .06    (60)

Given our steady state relations, this value implies

u = \frac{\lambda}{\lambda + \pi^{ue}}    (61)

\pi^{ue} = \frac{\lambda(1-u)}{u} = .47    (62)

The average probability of transiting from unemployment to employment, constructed as described above, is π^{ue} = .4432.^{11} We follow Shimer and normalize the vacancy to unemployment ratio to one. This allows us to determine the parameter s in the matching function. Doubling the vacancy to unemployment ratio x and readjusting s would leave all other ratios unaffected. We have

\theta = \pi^{ue}    (63)

11 Note that Krause and Lubik (2004) derive an inconsistency or indeterminacy of equilibria, such that the destruction rate, the steady state level of unemployment and the probability of transiting from unemployment to employment cannot all be targeted simultaneously. This is true given that the precise values lead to a mutual inconsistency. For example, using the precise values of u^{ss} = .056 and a destruction rate of .033 would force us to assume π^{ue} = .55. However, the mean of the unemployment rate is surely slightly time-varying, and would depend on the starting date of our sample. The estimates of the mean come with a standard error, and as the calibration above makes clear, we view the implied difference as well within reasonable bounds. Krause and Lubik (2004) obtain their results by targeting quarterly values, therefore assuming an independence of the probability of getting a job across months, which quite likely does not hold. To see their point, note that if we use a destruction rate of .09 in the above equation at quarterly frequency while sticking to an average unemployment rate of .06, we would get π^{ue} = 1.51. We view this not as an inconsistency but as a time-aggregation problem, arguing that the right frequency to calibrate the model is monthly data, where this problem does not arise.
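The time-aggregation point of the footnote is easy to verify numerically: the steady-state relation returns a valid job finding probability at monthly rates but pushes it above one when quarterly rates are plugged in.

```python
# Steady-state relation pi_ue = lam * (1 - u) / u at two frequencies,
# illustrating the time-aggregation problem discussed in footnote 11.
def job_finding(lam, u):
    return lam * (1 - u) / u

pi_monthly = job_finding(lam=0.03, u=0.06)    # .47, as targeted in the text
pi_quarterly = job_finding(lam=0.09, u=0.06)  # exceeds one: not a valid probability
print(pi_monthly, pi_quarterly)
```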


by the properties of the matching function; that is, the probability of finding a match θ is equal to the probability of finding a job π^{ue}. Alternatively we could target θ and adjust s, leaving the dynamics unchanged. The choice matters slightly, because it marginally influences the total amount of resources spent on vacancy posting; however, the influence is very small. Shimer reports an estimate of the matching function coefficient of ξ = .7, while Hall (2005a) reports .5. We use .5 in our benchmark calibration, given that vacancies and unemployment rates have roughly the same variances. The choice matters slightly by putting more weight on vacancies relative to unemployment, which ceteris paribus decreases (increases) the variance of unemployment by decreasing (increasing) the variance of vacancies. However, the basic points we want to stress are unaffected by this choice. We now proceed to the more controversial set of parameters remaining to be calibrated: µ, κ, ϕ, σ and b. Given that there is a direct relation between the parameter κ and the endogenous profit share variable γ through the steady state free entry equation, we cast our discussion in terms of the latter, which is more convenient and in principle observable, while the vacancy posting cost is not. We have three unused equilibrium equations left, giving us more parameters to be determined than steady state relations. The intratemporal optimality condition on hours reads as

\frac{c}{y} = \frac{(1-h)(1-\alpha)}{h(1-u)}\,\frac{1-\varphi}{\varphi}\,\frac{u + (1-u)(1-h)^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}}}{(1-h)^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}}}    (64)

From the wage setting equation we obtain

1 - \gamma = \frac{\mu\left(\frac{\gamma}{1-\beta(1-\lambda)}\beta\pi^{ue} + 1\right) - (1-\mu)\frac{(1-\sigma)(1-\varphi)}{\varepsilon^{Frisch}}\left(-(1-h)^{\frac{(1-\varphi)(\sigma-1)}{1-\varphi(1-\sigma)}} + 1\right)}{1 - (1-\mu)\tilde{b}}    (65)

The free entry condition states that

\kappa = \theta\,\beta\,\frac{(1-\alpha)Y}{1-u}\,\frac{\gamma}{1-\beta(1-\lambda)}    (66)

The parameters ϕ and σ directly govern the inter-temporal and intratemporal labor elasticities and the risk aversion. Therefore, in our model, these parameters influence three different economic characteristics: risk aversion, willingness to substitute consumption over time, and willingness to substitute labor and leisure. These parameters are notoriously hard to pin down. Additionally, and in contrast to standard models, they also directly influence the key endogenous variable, the profit share.

Given that, typically, we want to target a particular level of h, u and the profit share, thereby setting κ, and having made a choice on preferences, we see that equation (67) constrains the joint setting of µ and b through the equilibrium relation and through our target for the steady state level of u. Given the targets, for a particular parametrization it is possible that no equilibrium exists, due to the fact that the outside option cannot become too big. Otherwise people would choose not to work and enjoy their leisure, and thus hours and unemployment would have to adjust, but these are targeted. The Frisch labor elasticity, which is an important determinant of the profit share, is, for the general balanced growth utility specification, given as

\varepsilon^{Frisch} = \left.\frac{dh}{h}\middle/\frac{dw}{w}\right|_{\frac{\partial u}{\partial c}=\text{const}} = \frac{\frac{\partial u}{\partial h}}{h\left(\frac{\partial^2 u}{\partial h^2} - \frac{\left(\frac{\partial^2 u}{\partial c\,\partial h}\right)^2}{\frac{\partial^2 u}{\partial c^2}}\right)} = \frac{(1-h)}{h}\,\frac{(1-\varphi(1-\sigma))}{\sigma}    (67)
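A quick numerical check of the closed form in equation (67). The benchmark inputs in the second call (h = .35, σ = 1.1, ϕ = .32) are our reading of the calibration discussed later in the text, so treat that value as illustrative.

```python
# Closed form of equation (67) for the balanced growth utility
# specification, plus the Domeij-Floden numbers quoted in the text.
def frisch(h, phi, sigma):
    # (1-h)/h * (1 - phi*(1-sigma)) / sigma
    return (1 - h) / h * (1 - phi * (1 - sigma)) / sigma

# (1-h)/h = 4 together with a curvature term of .4 gives 4 * .4 = 1.6,
# the implied estimated elasticity mentioned in the text.
print(4 * 0.4)

# Hypothetical benchmark inputs (h = .35, sigma = 1.1, phi = .32):
print(round(frisch(h=0.35, phi=0.32, sigma=1.1), 2))
```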

We note that this elasticity is not independent of the level of leisure. Some micro-evidence on (1 − ϕ(1 − σ))/σ suggests a value of .4. However, the Frisch elasticity will depend on the level of hours worked; see the nice discussion of this point by Domeij and Floden (2004). They also provide evidence for the downward bias in the elasticity estimates if borrowing constraints matter in the data. According to Domeij and Floden (2004), the fraction (1 − h)/h is equal to 4, and the implied estimated elasticity is 1.6. Given that on quarterly data the standard deviation of total hours is around 1% at most, implying a variation of half an hour per week, we argue that there exists no convincing micro-evidence to choose this parameter that would lead to a clear interpretation relative to the model used in this exercise. We therefore proceed as follows: The dis-utility of work parameter directly governs the steady state level of hours and the consumption to output ratio via the intratemporal condition. We target the consumption to output ratio to pin down the dis-utility of work parameter, conditional on a steady state level of hours worked and on a chosen parameter of risk aversion. We target a ratio of

\frac{c}{y} = .55    (68)

Given this target, we argue that for the remaining parameters one can just describe the set of parameter values on σ, h, µ, γ and b that will jointly be able to match labor market facts, that is, to match a relative standard deviation of the unemployment rate to output of 7. We again interchange the discussion of the parameter ϕ with a discussion of the endogenous steady state value h. We therefore use the log-linearized version of the model to calculate the implied standard deviation of unemployment to output for all parameter values of the remaining variables σ, h, µ, γ on a bounded set derived below, and adjust b, κ and ϕ to satisfy the equilibrium restrictions.^{12}

We start out by giving the maximum range of parameters we consider.

Table 2: Uncertain Parameters

Description                      Parameter/Implied endogenous    Range
profit share                     γ                               [.002, .05]
risk aversion                    σ                               [.1, ]
bargaining power                 µ                               [0, 1]
hours worked/Frisch elasticity   h                               [.2, .4]
outside option                   b                               [.2, .4]

This table summarizes the uncertain parameters of the model. Note that we discuss properties in terms of endogenous variables instead of parameters if there is a one-to-one relation. For example, our discussion of the profit share translates one-to-one into the unobservable vacancy posting cost, which is a parameter of the model.

We briefly comment on these choices. The NIPA profit measure would give a maximum of 9.41% of measured profit. The after-tax profit is 3.5% while undistributed profits are 3.13% over the sample period. In these numbers there is a significant proportion due to labor and capital income. Some part also might go to profits due to a monopolistic market structure. What we need is a measure of profits on labor market contracts. So we view the upper bound of .05 as extremely wide. We do not rule out values close to zero, given that general equilibrium economists have used a value of zero for at least 50 years of research. HM take the opposite direction, and the more appropriate one from a pure calibration point of view, and directly target the vacancy posting cost. Given the one-to-one relation between the cost and the profit share, one can map back their estimates of the vacancy posting cost into a profit share measure. Their number is around 2% relative to productivity, which would map into a smaller number relative to output. Given the considerable uncertainty surrounding their measuring approach, we would take their number to lie between 1% and 2%.

12 Given that we have just one shock, the technology shock, this ratio is independent of the underlying variance of the technology shock. The model can be cast into a standard state space representation and its moments can be computed by standard formulas, as spelled out for example in Hamilton (1994).


As discussed above, there is no agreement on the right choice of preferences and the Frisch elasticity. We therefore choose wide bands for the risk aversion and the average level of hours worked. We do not view the Hosios condition as an appropriate target, as in Shimer (2005), so we consider the whole range of possibilities. If we identify the outside option as a net replacement rate abstracting from the value of home production, our considered range is within standard values. Shimer (2005) identifies b as a social security replacement rate, and sets it to .4. The OECD replacement rate for the US is roughly .3, while we note that our model includes a lot of people that do not receive unemployment benefits at all. The average recipient ratio of unemployment benefits was around .4, see Wandner and Stettner (2000). So, if we define b as unemployment insurance, the average level should be set to .16. HM set this parameter to .94, but they do not have an hours choice in their model. Clearly, if b gets too large, we will be unable to find an equilibrium given our other targets, because no one will individually choose to work. The problem now amounts to a four-dimensional search procedure, where we search over the parameters σ, γ, µ and the steady state level of the endogenous hours choice h, and let the outside option b adjust endogenously. We then check whether it is within the set of parameter values we consider. We show a typical result in Figure 1 below.
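The four-dimensional search can be sketched as a simple grid loop. The objective below is a toy stand-in based on the simplified closed form of section 3, with a purely hypothetical link g(µ) between bargaining power and the profit-share response; the actual exercise solves the full log-linearized model.

```python
# Illustrative grid loop in the spirit of the search described in the
# text. `implied_rel_sd` is NOT the model's objective: it reuses the
# simplified closed form f = common * (1 + g / gamma) from section 3,
# with a hypothetical dampening g = (1 - mu) * g0 in mu.
rho, beta, lam, theta_ss, g0 = 0.95, 0.985, 0.10, 0.45, 0.05

def implied_rel_sd(mu, gamma):
    common = rho * (1 - beta * (1 - lam)) / (theta_ss * (1 - rho * beta * (1 - lam)))
    return common * (1 + (1 - mu) * g0 / gamma)

# The implied volatility rises as gamma falls and, for given gamma, as mu falls:
for mu in (0.01, 0.2, 0.5):
    print(mu, [round(implied_rel_sd(mu, g), 1) for g in (0.05, 0.02, 0.01, 0.005)])
```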

For given preferences the plot shows the relative standard deviation of unemployment to output for varying µ and γ. We see that for any µ, given a profit share bigger than .02, the model is unable to replicate the relative standard deviation of unemployment to output, which is roughly 7 in the data. The same holds true for any γ, given µ bigger than .06. This result holds for all preferences, even though, of course, the precise numbers change. In each case, one jointly needs a low bargaining power and a low profit share in order to match the high variability of unemployment rates in the data. However, there are a lot of parameters within the range considered that are able to replicate the relative standard deviation. The number of HM for the profit share is well within this set. Figure 4 shows, for given γ = .01 and µ = .01 and for varying implied intra-temporal elasticity of substitution, the relative standard deviation of unemployment to output. The picture shows that the relative standard deviation is increasing in σ and h, or equivalently is increasing in the

[Figure 1: Relative standard deviation σ_u/σ_y for given h and σ. The surface plot shows the relative standard deviation of unemployment to output (vertical axis, 0 to 8) against gamma (0 to 0.04) and mu (0 to 0.5), for a given risk-aversion parameter σ = 1.1 and a steady state hour choice of h = .35. The vertical line indicates the empirical target of σ_u/σ_y = 7.]

intra-temporal elasticity of substitution. We see that for low µ and γ, the implied standard deviation is in the neighborhood of 7, which is consistent with labor market facts. The higher σ, the higher we can set γ or µ without being inconsistent with labor market facts. Finally, we show in Figure 5 the level of the outside option implied by the model if we target a profit share of 1% and set the bargaining power also to 1%, as above. We see that for any chosen value of the outside option there are parameter values that are consistent with labor market facts. To be consistent, however, requires for each parameter setting that the utility difference between working and not working is small. This condition, likely to be true for the marginal worker (being basically the definition of the marginal worker), translates in our model into a condition on the average worker (having assumed away heterogeneity); it is therefore a condition that might not hold in the data in general. This property is criticized by Costain and Reiter (2005) as implying a counterfactually high reaction of the steady state unemployment rate to changes in the outside option. Three remarks might be in order. First, conditional on the model, at least the cross-sectional part of the regression Costain and Reiter (2005) rely upon might not be identified, as discussed in Jung (2004), so the magnitude of the point elasticity should be used carefully.^{13} Secondly, the assumption of complete insurance implies that wages are a linear function of unemployment benefits. Once this assumption is relaxed, the shape of the utility function matters, see Jung and Kuester (2006) for an example, and the implied semi-elasticity of the model gets weaker. Still, it is outside the bounds provided by Costain and Reiter (2005). Finally, if wages are completely independent of the outside option, as in the game theoretic approach of Hall and Milgrom (2005), the mechanism we highlight would work exactly the same, given that, as shown in their paper, the Nash-bargaining result will for some parameter values lead to the same structural equations. However, of course, the interpretation would change drastically.^{14}

How does this result translate into the calibration strategies of Shimer and HM? Given that their models do not have a choice on hours and their world is populated by risk-neutral individuals, their outside option does not conflict with targeting a particular level of unemployment and a level of hours. Beyond that, they deal with the problem in two different ways. HM treat the outside option as an unobserved parameter, while they try to provide a precise measure of κ to pin down profits. To set the outside option, they choose to target an endogenous correlation, the correlation between wages and productivity, which implicitly enables them to arrive at the right volatility. Shimer in turn sets the outside option to .4, interpreted as observed unemployment benefits, and the bargaining power according to the Hosios condition, in his case .3, and implicitly adjusts the profit share, which in his case is also fairly small, around 2%, which leads to low fluctuations in the unemployment rate. Given that the inclusion of the labor market is the novel feature of the model relative to standard RBC models, we choose to work with a calibration that is able to generate the right labor market volatility using the one degree of freedom we have. We then argue that the quality of the model should be judged by looking at the joint implication of

13 The point elasticity also reacts highly non-linearly to a change in the outside option, so the evaluation at the point estimate of the regression might be problematic.

14 The modeling choice appears to boil down to the question whether one wants to rely on correlated shocks as in Hall and Milgrom (2005), which drive their results, or on an endogenous mechanism. Note that an outside option negatively correlated with productivity, maybe due to home production, would of course also generate enough fluctuations in this model, without running into the Costain and Reiter (2005) critique.


all endogenous variables, conditional on getting the right labor market volatilities. To evaluate the usefulness of the model, we have to take a particular stand on the parameter values. We set:

Table 3: Parameter Choice

Description                      Parameter/Implied endogenous    Value
profit share                     γ                               .01
risk aversion                    σ                               1.1
bargaining power                 µ                               .01
hours worked/Frisch elasticity   h                               .35
outside option                   b                               .2

This table summarizes our choice for the parameters/endogenous variables at hand.

We choose σ close to the log case of 1 often used in RBC studies. Working time is roughly 1/3 of total available time, implying an intra-temporal elasticity of substitution of 1.65. Profit and bargaining share are set to match the average standard deviation of unemployment to output in our simulations, and the outside option implies b = .45·.45 ≈ .2, where we tried to match a recipient ratio of 45% and unemployment benefits also of 45% of the average wage. The implied dis-utility of work is .32. We use a calibration with a very low outside option to show that nothing depends on the absolute value. Also, the very low bargaining power can easily be increased to, say, .2 without changing the main implications of the model with respect to the statistics we discuss next.

5 Results

This section discusses different methods to address the quantitative properties of the model. We first present simulation results along the dimensions considered in Shimer (2005) to provide evidence that the model is able to replicate the basic labor market facts. Given that we have targeted already the relative standard deviation of unemployment to output, we cannot claim success or failure along this dimension. We therefore proceed to estimate the model. We identify the technology shock by estimating the state space representation of our model on GDP alone. The model makes predictions for all other variables conditional on replicating output. We

30

evaluate the model by comparing these predicted series with the actual ones and provide some measure of fit for the quality of the prediction. Obviously the results depend on how the technology shock is identified; we repeat the exercise using different series to check the sensitivity of the results to the identifying assumption. The alternative would be a full-scale Bayesian approach, where we would have to specify as many structural shocks as dimensions considered. This would require strong assumptions on the quality of the model to replicate all dimensions. Our goal in this section is more modest, so we take a standard RBC model driven by one shock alone as our benchmark.

5.1 Business Cycle Predictions for the Labor Market

We start out by simulating the model 10000 times on 612 data points at monthly frequency, which we then aggregate to quarterly values. We report the median draw and use 95% confidence bands. To be comparable with Shimer (2005), we report results for the moments he considers with the same conservative HP-filter (λ = 100000). We also apply the HP-filter to the simulated data to be comparable with Shimer's results (if we took the raw data, the results would not change significantly). We have chosen the variance of the technology shock to match the standard deviation of output, but report relative standard deviations to make the results independent of this choice. Table (4) presents the results of this exercise. The slightly different numbers for our data relative to Shimer are due to the fact that we normalize with respect to output, which has a standard deviation of .026, while Shimer normalizes with respect to his productivity measure, output per person from the BLS, which has a standard deviation of .02. Additionally we work with the unemployment rate, which has an unconditional standard deviation of .178, while the number of persons unemployed has a slightly higher standard deviation of .195 and an implied relative standard deviation to output of 7.62. Note further that we can easily match a slightly higher standard deviation, simply by reducing our profit share to .0099 instead of .01; the correlations would be virtually unaffected. We report correlations between the vacancy-to-unemployment ratio and unemployment also at one lead, because this ratio is slightly affected by aggregation. The model is able to match the second moments of vacancies, unemployment and the vacancy-to-unemployment ratio reasonably well. It is also able to generate the Beveridge curve behavior of the data; in particular the cross-correlations between vacancies, unemployment and the vacancy-
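The HP-filtering step applied throughout this section can be illustrated with a short routine. This is a sketch of the standard filter, not the code used for the paper; the function name is ours.

```python
import numpy as np

def hp_filter(y, lam=100000.0):
    """Split a series y into trend and cycle by solving the HP program:
    min over tau of sum (y_t - tau_t)^2 + lam * sum (second differences of tau)^2.
    The first-order condition is the linear system (I + lam * D'D) tau = y,
    where D is the second-difference operator."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
    return trend, y - trend
```

With the conservative λ = 100000 used above, the trend component is very smooth; relative standard deviations are then computed from the cycle components of the filtered series.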

Table 4: Model comparison to Shimer (2005)

Corr (HP-filtered)   DATA      -95%-confidence   MEAN(HP)   95%-confidence
(v_t, u_t)           -0.905    -0.953            -0.931     -0.902
(v_t, u_{t+1})       -0.911    -0.964            -0.942     -0.910
(y_t, u_t)           -0.919    -0.995            -0.992     -0.988
(x_t, u_t)           -0.974    -0.993            -0.990     -0.985
(x_t, u_{t+1})       -0.955    -0.965            -0.945     -0.913
(u_t, u_{t-1})        0.949     0.916             0.936      0.956
(v_t, v_{t-1})        0.950     0.690             0.790      0.862
(x_t, x_{t-1})        0.952     0.833             0.892      0.930
σ_u/σ_y               6.879     6.627             6.794      6.897
σ_v/σ_y               7.623     8.071             8.474      8.978
σ_x/σ_y              14.093    14.343            14.762     15.076
σ_{π^ue}/σ_y          4.32      7.1713            7.381      7.538

The table gives selected moments of the 10000 simulations on 612 data points, where x denotes the vacancy-to-unemployment ratio, v vacancies, u the unemployment rate, y output and π^ue the probability of receiving a job when unemployed. All data are HP-filtered with λ = 100000 to be comparable with Shimer (2005).

to-unemployment ratio are approximated very well. We are not within confidence bands in some dimensions, but we note that the data also come with standard errors attached to the measurement. The model is particularly off in the correlation between unemployment and output, which is essentially one in the simulation while it is just .9 in the data; even a very small amount of measurement error (especially conceptual measurement problems) could account for this. The model also shows much less persistence in the vacancy series than the data, a problem we share with Shimer. However, part of the autocorrelation is due to the trend in the data and its removal, so we are not too concerned with this number. The model fails to match the second moment of our probability measure of transiting between the unemployment and the employment state and is almost twice as volatile as our measure. While the measure itself is subject to some doubt, the particular Cobb-Douglas form of the matching function we have assumed might in part be responsible for this problem. In contrast to Shimer, the fluctuations


in vacancies are roughly in line with the data (and of the same order of magnitude as the unemployment data), most likely because we have chosen a different coefficient in the matching function, .5 in our case versus .7 in Shimer's. These results are not sensitive to the particular parameters we have chosen. There exists a continuum of parameters that can generate these numbers for all values of the outside option, all of them sharing the feature that the utility difference between working and not working for the average worker is fairly small. To address business cycle components we compare HP-filtered (λ = 100000) as well as BandPass-filtered (we use the BP filter developed by Christiano and Fitzgerald (2001)) second moments of the data to the model.[15] We then "estimate" the model with our calibrated values using a Kalman filter to obtain the actual realization of the technology shock implied by the model. Having one shock, we simply use the log of output to identify the actual technology shock.[16][17]

We again use an HP filter with λ = 100000 to de-trend output and all other series, arguing that the long-run trend in unemployment rates, which our model cannot explain, will surely affect all other endogenous variables in a non-trivial fashion, as can be seen from our steady-state relations. In particular, changes in labor taxes, the bargaining power, the outside option and the vacancy posting cost all have potentially strong equilibrium effects on hours, wages, the profit share and the level of output. The removed trend in the unemployment rate is very similar to the removed trend in the destruction rates. While destruction rates do not play an important role in our model at business-cycle frequency, they may well cause part of the trend. What

Footnote 15: We chose to work with different filters because each filter potentially biases the results and creates different implications for the properties of our time series.

Footnote 16: This, of course, biases the results in our favor, given the high correlation between output and unemployment. Estimation on consumption, investment or aggregate wages leads, however, to similar conclusions (for consumption we need a higher inter-temporal elasticity to get it right), while estimation on productivity or wage per hour fails completely. We discuss this property in detail below.

Footnote 17: We do not follow the strong interpretation of estimation advocated by recent DSGE estimators following a Bayesian interpretation of the world, as exemplified in the work of Smets and Wouters (2002) for monetary models or Christoffel, Kuester, and Linzert (2005) for labor market models. This procedure would force us to introduce additional unobservable structural shocks into the model; while this is of course an interesting route to take, we are more interested in seeing in which dimensions the model is off and where it improves upon standard models. We therefore stick to a weaker interpretation and ask how much of the movement in the data we can match, conditional on getting output or aggregate NIPA data, driven by technology shocks as implied by the model, right. This requires assumptions on the error process to avoid stochastic non-singularity of the model. An interesting alternative to specifying a full set of structural shocks is taken by Ireland (2001), who estimates parameters allowing for AR(1) error processes in the endogenous variables. We leave it to future research to provide parameter estimates along one of these lines.


has driven up the destruction rates is, of course, an open question. To show how our labor market results are influenced by this de-trending choice, we also report estimates on a linearly de-trended output series. To be able to estimate the model on quarterly data we aggregate the linearized law-of-motion matrix to quarterly values, neglecting the aggregation error this introduces into the variance-covariance matrix of the state equation of the Kalman filter by assuming a diagonal variance-covariance matrix.[18] To assess the fit we calculate the mean squared error and an R-squared statistic of the actual and the one-step-ahead forecast of the model relative to the data. This measure tells us how much of the variation the model actually describes or predicts. Figure 2 and Figure 3 plot the implied unemployment rate.

Footnote 18: Simulation results show that the error we introduce is very small. The Kalman filter returns the implied unemployment rate of the model. The only free parameters we estimate are the implied standard deviation of the technology shock and its actual realization. While HM use a standard deviation of 0.007 obtained from productivity measures, our estimated standard deviation from the model is .0058, lower than their estimate and lower than the value of .0072 typically employed in RBC models, but we use a different de-trending mechanism than the typical study. Given the estimated unemployment rate, we transform back using the HP-filtered trend component or, in the alternative case, use no transformation except taking exponents to get the raw level data.
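For intuition on the filtering step, a minimal scalar Kalman filter for a one-shock state space (an AR(1) technology state observed through output) might look as follows. This is an illustrative sketch under simplifying assumptions, not the paper's multivariate implementation; all names and the tiny measurement-noise device are ours.

```python
def kalman_filter_ar1(y, rho, sigma_e, c=1.0, sigma_v=1e-3):
    """Filter observations y_t = c*a_t + v_t with state a_t = rho*a_{t-1} + e_t,
    e_t ~ N(0, sigma_e^2), v_t ~ N(0, sigma_v^2).
    Returns the filtered estimates of the technology state a_t."""
    a = 0.0
    P = sigma_e ** 2 / (1.0 - rho ** 2)  # stationary prior variance of the state
    est = []
    for obs in y:
        # prediction step
        a_pred = rho * a
        P_pred = rho ** 2 * P + sigma_e ** 2
        # update step
        S = c ** 2 * P_pred + sigma_v ** 2  # forecast-error variance
        K = P_pred * c / S                  # Kalman gain
        a = a_pred + K * (obs - c * a_pred)
        P = (1.0 - K * c) * P_pred
        est.append(a)
    return est
```

In the paper's setting the state and observation equations come from the linearized model solution; with essentially no measurement error the filter effectively inverts the output series into the implied technology realization.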


Figure 2: Unemployment Rate (estimated on HP-detrended output)

[Time-series plot, 1950-2010, of the actual unemployment rate, the predicted rate, and the error process.]

The plot shows the actual and the predicted US unemployment rate, estimated on HP-filtered data and transformed back using the removed trend component. The error process is the difference between the actual and the predicted series.


Figure 3: Unemployment Rate (estimated on linear detrended output)

[Time-series plot, 1950-2010, of the actual unemployment rate, the predicted rate, and the error process.]

The plot shows the actual and the predicted unemployment rate, estimated on linearly detrended output. The error process is the difference between the actual and the predicted series.


Even with a linear de-trending mechanism for output, which is at odds with business-cycle booms and recessions as commonly measured, the model appears to perform very well, given that we used no information on unemployment. It clearly does not catch all trends and is on average off by .008 percentage points, but at business-cycle frequency it performs similarly to the HP-filtered series, capturing about 85% of the fluctuations; however, the model is not able to catch the slow-moving trend in the data which is removed by the conservative HP-filter. Given that a low-order trend in the unemployment rate also translates into a trend in all other series and is not captured by the model, removing a low-order trend from each series separately appears sensible as long as one does not have an explicit general equilibrium model describing this trend. The standard deviation of the implied error in the unemployment rate is 0.0042 when estimating on HP-filtered output. The BLS estimates the confidence band for its reported official measure to be .002 percentage points, so the model is outside this band. However, we argue that this fit comes fairly close given the conceptual uncertainty surrounding the use of this time series.[19] Table 5 presents the actual and the one-step-ahead prediction error in terms of an R-squared measure[20] that describes the amount of variation the model can explain. The low prediction for BP-filtered vacancies does not appear to be a complete failure of the model but, as becomes apparent from the figure, is due to the fact that the model treats vacancies as a flow variable that moves up at quarterly frequency whenever a shock hits the economy. That is, the model predicts a contemporaneous correlation, while in the data the help-wanted index appears to lag one period. In our model we have treated values as recorded at the beginning of the quarter, while the data appear to be recorded at the end of the quarter. We would speculate that on monthly data the problem would not be as severe as it is on quarterly data. Of course the vacancy-to-unemployment ratio is also influenced by this mismatch. In spite of this caveat we argue that the model tracks the ratio at business-cycle frequency remarkably well.

Footnote 19: We obtain similar results when using investment, wages or consumption to identify the technology shock, as discussed later. However, consumption causes some problems given our parameters; one can resolve them by assuming a higher intra-temporal elasticity of substitution.

Footnote 20: Our measure is R² = 1 − (e′e)/(y′y), where e is the error process between the actual and the predicted data, normalized by the variance of the series. Note that the measure can become negative.
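The fit measure from footnote 20 is straightforward to compute; a small sketch (the helper name is ours):

```python
def r_squared(actual, predicted):
    """R^2 = 1 - (e'e)/(y'y), where e = actual - predicted.
    Share of the variation in the series that the prediction describes;
    the measure can become negative when the prediction errors are
    larger than the series itself."""
    e = [a - p for a, p in zip(actual, predicted)]
    ee = sum(x * x for x in e)
    yy = sum(a * a for a in actual)
    return 1.0 - ee / yy
```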

Table 5: R-squared measure of actual vs. predicted

Variables         HP100000    HP100000     BandPass    BandPass
                  (current)   (forecast)   (current)   (forecast)
y                  1.00        0.89         1.00        0.86
c                  0.78        0.73         0.61        0.50
i                  0.82        0.70         0.90        0.79
wh(1-u)            0.85        0.85         0.83        0.85
w                  0.61        0.64         0.60        0.68
w [FR]             0.39        0.32        -0.18       -0.22
h(1-u)             0.91        0.88         0.79        0.81
h(1-u) [FR]        0.79        0.72         0.69        0.71
h                  0.70        0.67         0.65        0.56
h [FR]             0.51        0.51         0.42        0.49
y/h(1-u)           0.81        0.62         0.38       -0.05
y/h(1-u) [FR]     -0.28       -0.46        -0.38       -0.71
u (level)          0.89        0.88         0.83        0.83
u (rate)           0.88        0.88         0.84        0.84
v                  0.34        0.75         0.41        0.85
x                  0.77        0.84         0.74        0.87
π^ue               0.10        0.42        -0.12        0.27

R-squared measure of actual vs. predicted, measured conditional on the realization and as one-step-ahead forecast, estimated on HP-detrended output (λ = 100000), all data also HP-detrended (columns 1 and 2), and, for business-cycle comparison, BandPass-filtered (columns 3-4). y=output, i=investment, c=consumption, wh(1-u)=NIPA aggregate wage, w=wage per hour, h=BLS average weekly hours, h(1-u)=total hours, y/h(1-u)=BLS productivity, u=unemployment, v=vacancies, x=v-u ratio; [FR] indicates the use of the total hours measure obtained from Francis and Ramey (2005).

5.2 Business Cycle Predictions for Other Variables

Given that the labor market part of the model appears to work reasonably well, this section discusses how the model does in all other dimensions. Table 6 gives the standard deviations of key endogenous variables relative to output at business-cycle frequency. We see that the model suffers from problems similar to the standard Hansen model. In particular, consumption, average hourly compensation and our hours measures, total and


Table 6: Relative standard deviation to output

Variables         HP100000   HP100000   BandPass   BandPass
σ(..)/σ(y)        data       model      data       model
y                  1.00       1.00       1.00       1.00
c                  0.63       0.45       0.52       0.30
i                  3.05       2.87       3.80       3.13
wh(1-u)            1.10       0.88       1.04       0.86
w                  0.68       0.39       0.52       0.25
w [FR]             0.55       0.38       0.49       0.26
h(1-u)             0.90       0.58       1.01       0.62
h(1-u) [FR]        1.01       0.60       1.08       0.62
h                  0.33       0.26       0.32       0.25
h [FR]             0.60       0.26       0.32       0.64
y/h(1-u)           0.54       0.48       0.45       0.39
y/h(1-u) [FR]      0.47       0.49       0.52       0.40
u (level)          6.88       6.41       7.19       6.72
u (rate)           6.88       6.70       7.19       7.04
v                  7.62       8.80       8.74       9.46
x                 14.09      14.09      15.57      14.84
π^ue               4.32       6.88       4.52       7.42

Standard deviation of the model vs. the actual data, HP-filtered (λ = 100000) and BP-filtered for business-cycle frequency. y=output, i=investment, c=consumption, wh(1-u)=NIPA aggregate wage, w=wage per hour, h=BLS average weekly hours, h(1-u)=total hours, y/h(1-u)=productivity, u=unemployment, v=vacancies, x=v-u ratio; [FR] indicates the use of the total hours measure obtained from Francis and Ramey (2005).

average weekly, fluctuate less than the data.[21] The low bargaining power that is needed to match unemployment fluctuations introduces too much persistence into the wage-setting process. In particular, aggregate wages are just 80% and average hourly wages only 50% as volatile as in the data. One reason for the problem can be seen by plotting a rolling window of σw/σy with a length of 32 quarters. We see a strong increase in the relative volatility σw/σy starting in the mid-eighties which is not accounted for in the model. In particular, the commonly viewed stylized fact that wages per hour are less volatile than output strongly depends on the sample period.

Footnote 21: The fit of consumption can be improved by using a higher intra-temporal elasticity.
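The rolling-window statistic discussed above (and plotted in Figure 8) can be sketched as follows; this is our own illustrative helper, using the 32-quarter window length from the text.

```python
def rolling_rel_std(w, y, window=32):
    """Rolling-window ratio of std(w) to std(y); window length in quarters."""
    def sd(z):
        m = sum(z) / len(z)
        return (sum((v - m) ** 2 for v in z) / len(z)) ** 0.5
    return [sd(w[t:t + window]) / sd(y[t:t + window])
            for t in range(len(w) - window + 1)]
```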


The "structural break" identified in the figure is typical for most series and is clearly not accounted for in the model. However, we note that aggregate wages are fit fairly well according to our R² statistic, while average wages are fit rather poorly. To show the failures and successes of the model, Figure 9 and Figure 10 present the autocorrelation function and some cross-correlation functions of the data and of the simulation relative to output. To obtain the autocorrelation function we estimate a second-order VAR on the HP-filtered (λ = 100000) data series of output, consumption, investment, aggregate wages, aggregate hours, the unemployment rate and the vacancy-to-unemployment ratio. We do the same for our simulated model and report the median and 68% confidence bands for the simulated series. The model matches the autocorrelation and cross-correlation functions; in particular, the labor market variables are within confidence bands (note that the actual data also come with standard errors, which we have not presented), but it overstates the correlation between NIPA total compensation and our measure of total hours (other hours measures behave very similarly).
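The autocorrelation functions compared above can, in their simplest sample form, be computed as below; this is a sketch of the raw sample statistic, whereas the paper works with VAR(2)-implied functions.

```python
def sample_autocorr(z, max_lag=8):
    """Sample autocorrelations of z at lags 1..max_lag."""
    n = len(z)
    m = sum(z) / n
    var = sum((v - m) ** 2 for v in z)
    return [sum((z[t] - m) * (z[t - k] - m) for t in range(k, n)) / var
            for k in range(1, max_lag + 1)]
```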

6 Conclusion

This paper has provided a general equilibrium labor market model with risk-averse agents, a capital accumulation decision and a labor-leisure choice that allows us to incorporate labor market frictions into an otherwise standard RBC framework. We derived the link between assumptions on the inter- and intra-temporal elasticity of substitution and the key endogenous variable responsible for fluctuations, the profit share. We showed that the Nash bargaining procedure can endogenously explain the observed fluctuations in labor market variables and provided intuition for why other wage mechanisms, like right-to-manage wage-setting procedures that are easily implementable in macroeconomic DSGE models, will likely fail. We derived the set of parameters that is consistent with key labor market facts and argued that they are within the range of commonly estimated values. In particular, the outside option can be made consistent with observed values, and there exists a set of parameters such that for all intra-temporal elasticity estimates the model can deliver the right relative standard deviation of unemployment to output. The key assumption one has to be willing to make to let the model explain labor market facts is a small utility difference between an employed and an unemployed person, viewed from the family's perspective under the assumption that markets are complete. We argued that this feature could be viewed


as the defining property of the marginal worker; but given that we have not introduced skill heterogeneity, it translates in our model into a property of the average worker, which might be much harder to defend. It therefore appears important to address this issue in an incomplete-markets model with skill heterogeneity and some form of uninsured consumption risk, where unemployment benefits can play a positive role in consumption insurance; we view this as the obvious next step. Some authors have criticized the property of a small utility difference between working and unemployment, for example Hall (2005b), who "belong(s) to the school of macroeconomics that believes that alternative activities for most workers, including unemployment compensation, are worth far less than the workers produce on the job. The elasticity of labor supply is low. (pp.25)". However, one has to use non-standard utility functions to back up the claim that the model fails, given that we can reproduce the above result for any intra-temporal elasticity estimate. In particular, it is not the value of non-work per se that is important but the threat point of the worker, which has to be close to productivity (a property that any competitive model has always assumed); this induces a small profit share in steady state which, in combination with a low bargaining power, induces the right cyclical variation in the profit share. The analysis above shows that both mechanisms are needed to generate the observed fluctuations. Hall (2005b) himself points to different bargaining arrangements that change the threat point and make it independent of unemployment benefits, bringing it closer to the competitive outcome. If one takes a more relaxed stand on this parameter and allows it to capture part of the surely more complicated bargaining and search process of reality (which might include firing costs, contracts that pay the current wage until the end of the month even though the work relation has broken down, and so on, all implicitly increasing the outside option of the worker), and in particular allows this parameter to capture part of the competitive search procedure underlying classical economics, it appears likely that the threat point of the worker is indeed close to productivity. While the precise wage-setting mechanism appears important for steady-state comparisons and comparative statics, we argue that for explaining business-cycle behavior the introduction of a parameter that captures the myriad of possible game-theoretic settings, and that can well be made consistent with observed unemployment benefit payments, should not be ruled out ex ante. Independent of this debate, given a particular calibration within the set of parameters consistent with labor market facts, we showed that the model, when estimated on output alone, was able

to predict the actual US unemployment rate within tight bounds. We view this as an argument against Shimer's claim that the basic Mortensen-Pissarides model cannot account for basic labor market facts unless unreasonably calibrated. It can also be used as evidence against the introduction of different wage-setting arrangements, as proposed for instance in Hall (2005a), given that the model predicts at least 85% of the aggregate wage correctly at business-cycle frequency without using any information on this series. The failure to predict the average wage per hour appears not to be a failure of the Nash bargaining procedure, but likely either a failure of the total hours series used to convert to average wages, or a failure of the assumed underlying shock structure. If the latter is the correct interpretation, the model shares this feature with any RBC model relying on technology shocks alone. We therefore cannot attribute the failure to the Nash bargaining procedure unless one finds a convincing wage-setting mechanism within the class of purely technology-driven models that can explain the cross-correlation between BLS hours and productivity measures and at the same time gets wages and unemployment rates right on actual US data. We are not aware that such a model exists. In conclusion, we view the estimation of a general equilibrium model in a standard macroeconomic framework as helpful for discriminating between the many different labor market and wage-setting mechanisms proposed in the literature, and we view the ability of a model to replicate the raw unemployment rate without using shocks that target it as a natural starting point. It will be interesting to see whether other labor market arrangements can be embedded in a model that can be estimated on macroeconomic data, to compare their different implications and potentially reject the validity of their underlying mechanisms.


References

Andolfatto, D. (1996): "Business Cycle and Labor-Market Search," American Economic Review, 86(1), 112-132.

Christiano, L., M. Eichenbaum, and R. Vigfusson (2003): "What Happens after a Technology Shock," Working paper.

Christiano, L., and T. Fitzgerald (2001): "The Band Pass Filter," Working paper.

Christoffel, K., K. Kuester, and T. Linzert (2005): "An Estimated Macro Labour Market Model for the German Economy," Working paper.

Costain, J. S., and M. Reiter (2005): "Business Cycle, Unemployment Insurance, and the Calibration of Matching Models," Working paper.

Domeij, D., and M. Floden (2004): "The Labor-Supply Elasticity and Borrowing Constraints: Why Estimates Are Biased," Working paper.

Francis, N., and V. A. Ramey (2005): "Measures of Per Capita Hours and their Implications for the Technology-Hours Debate," Working paper.

Gali, J. (1999): "Technology, Employment, and the Business Cycle: Do Technology Shocks Explain Aggregate Fluctuations," American Economic Review, 89(2), 249-271.

Hagedorn, M., and I. Manovskii (2005): "The Cyclical Behavior of Equilibrium Unemployment and Vacancies Revisited," Working paper.

Hall, R. E. (2005a): "Employment Fluctuations with Equilibrium Wage Stickiness," American Economic Review, 95(1), 53-69.

Hall, R. E. (2005b): "Job Loss, Job Finding, and Unemployment in the U.S. Economy over the Past Fifty Years," Working paper.

Hall, R. E., and P. R. Milgrom (2005): "The Limited Influence of Unemployment on the Wage Bargain," Working paper.

Hamilton, J. D. (1994): Time Series Analysis. Princeton University Press, New Jersey.

Hansen, G. D. (1985): "Indivisible Labor and the Business Cycle," Journal of Monetary Economics, 16(3), 309-327.

Hornstein, A., P. Krusell, and G. Violante (2005): "Unemployment and Vacancy Fluctuations in the Matching Model: Inspecting the Mechanism," Working paper.

Ireland, P. N. (2001): "Technology Shocks and the Business Cycle: An Empirical Investigation," Journal of Economic Dynamics and Control, pp. 703-719.

Jung, P. (2004): "Unemployment, Hours, Taxation and the Welfare State - A Quantitative Assessment of Structural Changes," Working paper.

Jung, P., and K. Kuester (2006): "Unemployment Fluctuations in a Model with Complete and Incomplete Markets," Working paper.

Krause, M., and T. Lubik (2004): "A Note on Instability and Indeterminacy in Search and Matching Models," Working paper.

Merz, M. (1995): "Search in the Labor Market and the Real Business Cycle," Journal of Monetary Economics, 36(2), 269-300.

Mortensen, D. T., and C. A. Pissarides (1994): "Job Creation and Destruction in the Theory of Unemployment," Review of Economic Studies, 61(3), 397-415.

Rudanko, L. (2005): "A Note on Instability and Indeterminacy in Search and Matching Models," Working paper.

Shimer, R. (2005): "The Cyclical Behavior of Equilibrium Unemployment and Vacancies," American Economic Review, 95(1), 25-49.

Smets, F., and R. Wouters (2002): "A DSGE Model of the Euro Area," Working Paper No. 171, ECB.

Trigari, A. (2004): "Equilibrium Unemployment, Job Flows and Inflation Dynamics," ECB Working Paper, 304.

Wandner, S. A., and A. Stettner (2000): "Why Are Many Jobless Workers Not Applying for Benefits?," Monthly Labor Review, 6, 21-32.

7 Appendix

7.1 Figures

Figure 4: Relative Standard Deviation of σu/σy for given µ and γ

[Surface plot of the relative standard deviation over the risk-aversion parameter σ and hours h.]

The plot shows the relative standard deviation of unemployment to output for a given bargaining power µ = 0.01 and a given profit share γ = .01.


Figure 5: Outside Option

[Surface plot of the implied outside option over the risk-aversion parameter σ and hours h.]

The plot shows the implied outside option b for different values of the steady-state level of hours worked per person h and the risk-aversion parameter σ, conditional on using a low bargaining power µ = .01 and a low profit share γ = .01, which implies a relative standard deviation of unemployment to output of approximately 7 or higher in each case.


Figure 6: Vacancies (BandPass-filtered)

[Time-series plot, 1955-2000, of actual and predicted series.]

The figure shows actual and predicted BandPass-filtered vacancies estimated on HP-detrended output.


Figure 7: Vacancy-to-unemployment ratio (BandPass-filtered)

[Time-series plot, 1955-2000, of actual and predicted series.]

The figure shows actual and predicted BandPass-filtered vacancy-to-unemployment ratios estimated on HP-detrended output.


Figure 8: Rolling window of σw/σy

[Time-series plot, 1955-1995, of the rolling relative standard deviation.]

The figure shows a rolling window of the relative standard deviation of the average wage per hour relative to output. The window length is 32 quarters.


Figure 9: Cross-correlations for aggregate NIPA data

[Panels of cross-correlation functions up to 10 lags: (h(1-u), wh(1-u)(t-i)), (wh(1-u), h(1-u)(t-i)), (y, c(t-i)), (y, i(t-i)), (x, u(t-i)), (y, wh(1-u)(t-i)), (y, h(1-u)(t-i)), (y, u(t-i)), (u, x(t-i)), (y, x(t-i)).]

The figure plots selected cross-correlation functions up to 10 lags. y=output, i=investment, c=consumption, wh(1-u)=NIPA aggregate wage, w=wage per hour, h=BLS average weekly hours, h(1-u)=total hours, y/h(1-u)=productivity, u=unemployment, v=vacancies, x=v-u ratio.


Figure 10: Autocorrelations for aggregate NIPA data

[Panels of autocorrelation functions up to 8 lags for y/h(1-u), c, i, h(1-u), wh(1-u), u and x.]

The figure plots selected autocorrelation functions up to ten lagged values. y=output, i=investment, c=consumption, wh(1-u)=NIPA aggregate wage, w=wage per hour, h=BLS average weekly hours, h(1-u)=total hours, y/h(1-u)=productivity, u=unemployment, v=vacancies, x=v-u ratio.


7.2 Dynamics

This section summarizes all equations characterizing equilibrium. The household problem with the associated first-order conditions leads to:

Euler equation for capital:

∂U(C, h, 1−u)/∂C = β E [ ∂U(C′, h′, 1−u′)/∂C′ · (1 + r′ − δ) ]   (69)

1 = E q′ (1 + r′ − δ)   (70)

The FOC conditions of the Nash bargaining procedure give conditions for the optimal choice of wages and hours:

Wage equation (see the appendix for the derivation):

wh = µ(κx + P^w A h) + (1 − µ)( b + f(h,u)/(∂U/∂c_t) )   (71)

Intra-temporal condition to get hours:

o(h,u) · C = P^w (1 − α) Y/(1 − u)   (72)

where o(h,u) depends on the particular form of the utility function. The zero-profit condition for entrepreneurs posting vacancies gives:

Free entry condition to get x:

κ/θ = E q′ [ (P′^w A′ h′ − w′h′)(1 − τ^Π) + (1 − λ)κ/θ′ ]   (73)

From the working of the matching market we derive:

Matching, linking x, π^ue and θ:

π^ue = s x^{1−ξ}   (74)

θ = s / x^ξ   (75)

x = v / u   (76)

The aggregate laws of motion follow:

Law of motion for u:

u′ = u + λ(1 − u) − π^ue u   (77)

Law of motion for capital:

k_{t+1} = (1 − δ) k_t + I_t   (78)

The budget constraints of the family and the government lead to:

Budget constraint:

C_t + K_{t+1} + p^s_t s_t = D_t s_{t−1} + p^s_t s_{t−1} + ∫_0^{1−u} w(i)_t h(i)_t di + u_t b + K_t + (r_t − δ) K_t + T_t   (79)

Government:

g_t + u_t b + T_t = 0   (80)

Given the assumptions above, consumption of the representative family follows the standard aggregate budget constraint:

Aggregate resource constraint:

C_t = Y_t − g_t − κ v_t − I_t   (81)

The firm sector maximizes profits such that:

Relative price of the labor good:

P^w = (1 − α) Y/L   (82)

r = α Y/K   (83)

7.3 Steady State

We derive the steady-state level of the endogenous variables in terms of the profit share as the key endogenous variable with respect to labor market identification.

Profit share: Profit per period per firm is given by Ψ = (1 − α) Y/(1 − u) − wh. Hence we can write and define the profit share of the labor market firms as

Ψ(1 − u) = (1 − α) γ y

where we define the wage share as

wh(1 − u) ≡ (1 − α)(1 − γ) y   (84)

which is the standard wage share adjusted for the fact that we now have to control for profits.

Cost of vacancy posting:

κ = [ β / (1 − β(1 − λ)) ] · (1 − α) γ y θ / (1 − u)   (85)

Bargaining share: We define b = \tilde{b}\,wh, such that we express the outside option as a percentage of the average wage:^{22}

b = \tilde{b}\,wh \qquad (86)

(1-\alpha)(1-\gamma) = \frac{\mu\left(\frac{\kappa x(1-u)}{Y} + (1-\alpha)\right) - (1-\mu)\,\frac{C(1-u)}{Y}\,\frac{(1-\varphi(1-\sigma))\left(-1+(1-h)^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}}\right)}{\varphi(1-\sigma)\left(u+(1-u)(1-h)^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}}\right)}}{1-(1-\mu)\tilde{b}} \qquad (87)

Dis-utility of work:

\frac{c}{y} = \frac{(1-h)(1-\alpha)\left(u+(1-u)(1-h)^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}}\right)}{h(1-u)\,\frac{1-\varphi}{\varphi}\,(1-h)^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}}} \qquad (88)

Capital:

\alpha\frac{Y}{K} + 1 - \delta = \frac{1}{\beta} \qquad (89)

Unemployment rate:

u = \frac{\lambda}{\lambda + \pi^{ue}} \qquad (90)

Matching:

\pi^{ue} = s\,x^{1-\xi} \qquad (91)

\theta = \frac{x^{\xi}}{s} \qquad (92)

^{22} In the log case the expression simplifies to

(1-\alpha)(1-\gamma) = \frac{\mu\left(\frac{\kappa x(1-u)}{Y} + (1-\alpha)\right) - (1-\mu)(1-u)\frac{C}{Y}\,\varphi\ln(1-h)}{1-(1-\mu)\tilde{b}}

while in the non-balanced-growth case

(1-\alpha)(1-\gamma) = \frac{\mu\left(\frac{\kappa x(1-u)}{Y} + (1-\alpha)\right) + (1-\mu)(1-u)\frac{C}{Y}\,\varphi B h^{\varphi}}{1-(1-\mu)\tilde{b}}

The above equations show the key role the profit share plays in identifying steady-state parameters. There are two principally unobservable parameters in the model (in addition to preference and technology parameters): the bargaining power and the vacancy posting cost. Given a stand on the outside option, which is at least in principle observable, and an estimate of the profit share, which is also in principle observable, the bargaining power and the vacancy posting cost can be pinned down from the steady-state equations.
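This identification argument can be sketched numerically: equation (85) delivers \kappa, and the log-case bargaining-share condition of footnote 22 is linear in \mu and can be solved directly. All numbers below are hypothetical, purely illustrative of the mapping from (outside option, profit share) to (\kappa, \mu); they are not the paper's calibration.

```python
import math

# Hypothetical values (assumed, not from the paper).
beta, lam = 0.99, 0.03           # discount factor, destruction rate
alpha, gamma = 0.33, 0.03        # capital share, profit share
u, theta = 0.06, 1.5             # unemployment rate, expected vacancy duration
s, xi = 0.6, 0.5                 # matching parameters
y, c_y = 1.0, 0.75               # output (normalized), consumption share
phi, h = 0.33, 0.33              # preference weight, hours worked
b_tilde = 0.4                    # outside option as share of the wage

# Vacancy posting cost from eq. (85)
kappa = (beta * (1 - alpha) * gamma * y / (1 - beta * (1 - lam))) * theta / (1 - u)

# Tightness x from theta = x**xi / s, eq. (92)
x = (s * theta) ** (1 / xi)

# Log-case bargaining-share condition (footnote 22) is linear in mu:
#   (1-alpha)(1-gamma) = [mu*A1 - (1-mu)*A2] / (1 - (1-mu)*b_tilde)
T = (1 - alpha) * (1 - gamma)
A1 = kappa * x * (1 - u) / y + (1 - alpha)
A2 = (1 - u) * c_y * phi * math.log(1 - h)
mu = (T * (1 - b_tilde) + A2) / (A1 + A2 - T * b_tilde)

# Verify the condition holds at the recovered bargaining power.
resid = (mu * A1 - (1 - mu) * A2) / (1 - (1 - mu) * b_tilde) - T
assert abs(resid) < 1e-12
```

With these illustrative inputs the recovered bargaining power lies strictly between 0 and 1, as the identification argument requires.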

7.4

Derivation of Wage Sharing Rule

Here we derive the wage setting rule:

wh = \mu(\kappa x + P^w A h) + (1-\mu)\left(b - \frac{f(h,u)}{\partial U/\partial c_t}\right) \qquad (93)

Using \Pi = \frac{1-\mu}{\mu}\,\frac{\Delta V}{\partial U/\partial c_t} from the first-order condition, we can rewrite the utility difference \widetilde{\Delta V} \equiv \frac{\Delta V}{\partial U/\partial c_t} as

\widetilde{\Delta V}_t = w_t h_t - b + \frac{f(h,u)}{\partial U/\partial c_t} + (1-\lambda-\pi^{ue})\,E q_{t+1}\widetilde{\Delta V}_{t+1} \qquad (94)

where f(h,u) = \frac{\partial U}{\partial(1-u)} depends on the particular dis-utility-of-work form:

\frac{f(h,u)}{\partial U/\partial c_t} = \frac{U\,\frac{(1-\varphi(1-\sigma))\left(-1+(1-h_i)^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}}\right)}{u+\int_0^{1-u}(1-h_{e,i})^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}}\,di}}{\frac{U\,\varphi(1-\sigma)}{C}} = \frac{C\,(1-\varphi(1-\sigma))\left(-1+(1-h)^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}}\right)}{\varphi(1-\sigma)\left(u+(1-u)(1-h)^{\frac{(1-\varphi)(1-\sigma)}{1-\varphi(1-\sigma)}}\right)} \qquad (95)

log case:

\frac{f(h,u)}{\partial U/\partial c_t} = C\,\varphi\,\ln(1-h) \qquad (96)

\frac{\mu}{1-\mu}\,\Pi_t = w_t h_t - b + \frac{f(h,u)}{\partial U/\partial c_t} + \beta(1-\lambda-\pi^{ue})\,\frac{\mu}{1-\mu}\,E q_{t+1}\Pi_{t+1}

\Pi_t = P^w_t A_t h_t - w_t h_t + \beta(1-\lambda)\,E q_{t+1}\Pi_{t+1} \qquad (97)

hence:

\Pi_t = \frac{1-\mu}{\mu}\left(w_t h_t - b + \frac{f(h_t,u_t)}{\partial U/\partial c_t}\right) + \beta(1-\lambda-\pi^{ue})\,E q_{t+1}\Pi_{t+1} \qquad (98)

Rearranging gives:

E q_{t+1}\widetilde{\Pi}_{t+1} = \frac{\frac{1-\mu}{\mu}\Delta\widetilde{U}_t - \gamma_t}{\pi^{ue}_t\,\beta} \qquad (99)

From the asset equations we get:

\frac{\kappa}{\beta\theta} = E q_{t+1}\Pi_{t+1} \qquad (100)

\frac{\kappa}{\beta\theta} = \frac{\frac{1-\mu}{\mu}\Delta\widetilde{U}_t - \gamma_t}{\pi^{ue}_t\,\beta} \qquad\Longrightarrow\qquad x = \frac{\frac{1-\mu}{\mu}\Delta\widetilde{U}_t - \gamma_t}{\kappa} \qquad (101)

x = \frac{\frac{1-\mu}{\mu}\left(wh - b + \frac{f(h,u)}{\partial U/\partial c_t}\right) - (P^w A h - wh)}{\kappa} \qquad (102)

so that:

wh = \mu(\kappa x + P^w A h) + (1-\mu)\left(b - \frac{f(h,u)}{\partial U/\partial c_t}\right) \qquad (103)
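A quick numerical check (with arbitrary, hypothetical values) confirms the last step of the derivation: once x is computed from the job-creation condition (102), the wage rule (103) holds as an identity for any wage bill wh.

```python
# Arbitrary hypothetical values; the point is only that eq. (103) follows
# from eq. (102) identically, for any wage bill wh.
mu, kappa = 0.3, 0.8            # bargaining power, vacancy cost (assumed)
PwAh = 1.1                      # revenue product P^w * A * h (assumed)
b, f_over_Uc = 0.4, -0.2        # outside option, f(h,u)/(dU/dc) (assumed)

for wh in (0.7, 0.9, 1.05):
    # eq. (102): job creation
    x = ((1 - mu) / mu * (wh - b + f_over_Uc) - (PwAh - wh)) / kappa
    # eq. (103): wage rule evaluated at that x
    wh_rule = mu * (kappa * x + PwAh) + (1 - mu) * (b - f_over_Uc)
    assert abs(wh - wh_rule) < 1e-12
```

Expanding \mu\kappa x term by term shows the (1-\mu) and \mu pieces recombine to wh exactly, which is what the assertion verifies for several wage bills.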
