Executive Compensation and Industry Peer Groups∗

Stefan Lewellen†
London Business School

July 10, 2015

Abstract

I develop a novel set of firm-specific industries (FSIs) based on firms' disclosures of their primary product market competitors in their 10-K filings. When peer groups are defined using FSIs, I find strong evidence of relative performance evaluation in CEO compensation and retention decisions. For example, when I decompose firm performance into "luck" and "skill" components, I find that CEOs are only compensated based on skill. I also link the literature on relative performance evaluation to the literature on peer group compensation benchmarking and find little evidence that compensation benchmarking inflates pay for the average CEO. My results are consistent with simple contracting models of CEO pay and contrast sharply with a long literature that has found little evidence of relative performance evaluation, strong evidence of "pay for luck," and strong evidence of a benchmarking-induced "ratchet effect" in CEO pay.

∗ I thank Andrew Metrick, Gary Gorton, Nick Barberis, Matt Spiegel, Gerard Hoberg, Mike Faulkender, Heather Tookes, Jenn Baka, Bruce Carlin, James Choi, Francesca Cornelli, David DeAngelis (discussant), Andrea Eisfeldt, Daniel Ferreira, Mireia Giné, Francisco Gomes, Dirk Jenter, Tomislav Ladika (discussant), Todd Milbourn (discussant), Alan Moreira, Henri Servaes, Justin Wolfers, Liu Yang, Jacqueline Yen, conference participants at the 2013 EFA, SFS Cavalcade, and WFA meetings, and seminar participants at Emory, the FDIC, the Fed Board of Governors, Georgia Tech, IESE, LBS, LSE, Maryland, OFR, UCLA, Vanderbilt, and Yale for helpful feedback. Lora Chow, Byron Edwards, Tate Harshbarger, Yoonie Hoh, Siddharth Jain, Connie Liu, Candice Manatsa, Aayush Upadhyay, Tim Wang, Roger Yang and Doris Zhao provided excellent research assistance.
† Email: [email protected]. Phone: +44 (0)20 7000 8284. Address: London Business School, Regent's Park, London NW1 4SA, United Kingdom.

1 Introduction

It is well established that the separation of ownership and control can create moral hazard problems between principals and agents. Dating back to Holmström (1979, 1982) and Holmström and Milgrom (1987), a long literature has argued that the optimal contracting solution to this moral hazard problem involves a sharing rule that compensates agents based on all observable characteristics that might contain information about agents' performance. This set of observable characteristics includes, most prominently, the performance of a firm's industry peers. Intuitively, the optimal contract for compensating Coke's CEO should not simply depend on the performance of Coke, but rather on the performance of Pepsi too, since both firms' performance may have been affected by a common industry shock such as a drop in sugar prices or a change in regulation that fell outside of either CEO's control. By adjusting Coke's performance to filter out these common industry shocks, the theory goes, Coke's board can construct a more accurate picture of the actual performance of Coke's CEO.

Despite nearly thirty years of research, however, this basic prediction of optimal contracting theory has been difficult to find in the data. The vast majority of empirical research on performance benchmarking in executive compensation decisions (which is more formally known as relative performance evaluation, or RPE) has found no large-sample evidence of industry performance benchmarking in observed CEO compensation arrangements. What little evidence exists supporting performance benchmarking is limited to small pockets of the cross-section of firms, such as firms with young CEOs. The primary takeaway from this literature is that the compensation of Coke's CEO generally depends on Coke's performance alone rather than the conditional performance of Coke relative to Pepsi or to the beverage industry as a whole. In short, CEOs appeared to be paid for "luck" rather than (or in addition to) being paid for "skill," in contrast to the basic prediction of simple optimal contracting models of CEO pay.

This paper evaluates the determinants of CEO pay using a novel set of peer group definitions that differ significantly from the standard industry definitions that have been used in previous executive compensation tests. In particular, I exploit a federal law requiring firms to provide an unbiased list of their primary product market competitors in their annual 10-K filings.1 These self-reported peer groups, which I refer to as firm-specific industries (FSIs), are hand-collected from over 36,000 10-K filings from 2002 to 2008.2 Unlike traditional industry classifications such as SIC codes,

1 The relevant law is Section 101 of Regulation S-K of the Securities Act of 1933 (17 C.F.R. §229.101).
2 The sample period is restricted to 2002-2008 for two reasons. First, the decision to start in 2002 was based on financial considerations. Second, the data collection process began in the winter of 2009, at which point 2008 year-end data was the last year available.

FSIs are constructed based on a single, consistent, theoretically-sound principle (product market outputs) that is rooted in economic fundamentals. Furthermore, FSIs are defined at the firm level, much as Wall Street analysts create firm-specific lists of comparable companies. Hence, FSIs should offer an improvement over traditional industries such as SIC codes, which are transitive and disjoint by construction. Consistent with FSIs being "better" than traditional industry definitions, I show in the Appendix and in a companion paper (Lewellen (2012b)) that FSIs have significantly higher explanatory power than traditional industry definitions (as well as newer classifications such as those created by Hoberg and Phillips (2010a)) in a number of common empirical contexts such as stock return and leverage tests.

When peer groups are defined using FSIs, I find strong evidence supporting the use of performance benchmarking in executive compensation decisions. Consistent with a weak-form version of the performance benchmarking / RPE hypothesis, I find a positive relationship between CEO compensation and firms' peer group-adjusted stock returns. This effect is quantitatively significant: holding firm performance fixed, a one standard deviation decrease in peer group stock returns increases average CEO compensation by approximately 6% per year. I next decompose firm performance into "luck" and "skill" following the "pay-for-luck" compensation literature (Antle and Smith (1986), Janakiraman, Lambert, and Larcker (1992), Bertrand and Mullainathan (2001), Garvey and Milbourn (2006), Albuquerque (2009)). Consistent with the simple optimal contracting hypothesis first proposed by Holmström (1979, 1982) and Holmström and Milgrom (1987), and in contrast to the vast majority of research on executive compensation, I find that CEOs are only compensated for skill. I also find no evidence of performance benchmarking in CEO compensation or termination decisions when standard industry definitions such as SIC classifications are used to construct peer groups, which is consistent with the vast majority of the executive compensation literature.3 Collectively, my results suggest that the lack of empirical evidence supporting the simple optimal contracting hypothesis of Holmström (1979, 1982) and Holmström and Milgrom (1987) may largely be a function of peer group mismeasurement rather than an incomplete or incorrect theoretical foundation.

3 Even in cases where evidence of performance benchmarking has been found using traditional classification standards or variants thereof (e.g. Albuquerque (2009)), I find no evidence of performance benchmarking using such classifications during my sample period.

I also examine the impact of peer group performance on CEO retention decisions. In addition to the lack of evidence supporting the use of RPE in compensation decisions, a recent literature has

also documented a lack of RPE in CEO retention decisions (Jenter and Kanaan (2008), Eisfeldt and Kuhnen (forthcoming)). However, this literature also uses "standard" industry classifications when measuring peer group performance. I re-examine the impact of peer group performance on CEO turnover using FSIs and find that CEOs are fired following poor relative performance, which is consistent with the performance benchmarking / RPE hypothesis. Simply put, my evidence suggests that CEOs are paid and terminated based on skill rather than luck.

The empirical failure of the relative performance evaluation hypothesis has led to a large theoretical literature that attempts to justify the lack of performance benchmarking in the data. In particular, two main alternative theories have been proposed that can, under certain circumstances, justify the lack of performance benchmarking even in the presence of a moral hazard problem. The first main alternative is an entrenchment or rent extraction story. Under this hypothesis, CEOs have captured their boards and can effectively set their own pay conditional on adhering to a "shareholder outrage" constraint. The second main alternative is a labor market or outside-option hypothesis which can be motivated (for example) by relaxing the assumption of an exogenously-determined participation constraint, or by directly modeling the labor market for CEOs using a talent assignment framework. Under the outside options hypothesis, firms may find it optimal to pay a CEO for "luck" (or "absolute" performance) in order to prevent a CEO from moving to a competing firm.

Disentangling the three hypotheses above is complicated by the fact that these hypotheses can potentially operate through two distinct compensation channels. In particular, while relative performance evaluation is concerned with setting the components of pay correctly to induce optimal effort from the CEO, the rent extraction and outside options hypotheses primarily relate to the total level of pay awarded to the CEO. Furthermore, a growing literature has found evidence that benchmarking is commonly used in setting the level of pay: in particular, many firms use compensation benchmarking to set pay levels (that is, CEO pay levels depend on the pay levels earned by peer firm CEOs). Bizjak, Lemmon, and Naveen (2008), Bizjak, Lemmon, and Nguyen (2011), and Faulkender and Yang (2010, 2011) all find evidence of direct peer-group compensation benchmarking; Bizjak, Lemmon, and Naveen (2008) and Bizjak, Lemmon, and Nguyen (2011) find evidence consistent with the labor market hypothesis, while Faulkender and Yang (2010, 2011) find evidence consistent with the rent extraction hypothesis. Hence, industry peer groups can be used in two types of benchmarking that are relevant for CEO compensation decisions, which I refer to respectively as "performance benchmarking" (or RPE) and "compensation benchmarking."

While pinning down the use of relative performance evaluation is fairly straightforward, pinning

down the magnitudes associated with the "compensation benchmarking" channel is quite difficult. For starters, existing performance benchmarking and compensation benchmarking tests do not attempt to measure the effects of both channels simultaneously. Hence, most existing tests in the literature suffer from an omitted variables problem. Furthermore, existing tests of the compensation benchmarking channel suffer from the "reflection" or simultaneity problem documented by Manski (1993): if i's pay depends on j's pay and vice versa, it is impossible to identify whose pay is driving whom.4 This makes it virtually impossible to measure whether average CEO pay is being inflated (or deflated) through the use of compensation benchmarking. Finally, it appears that compensation benchmarking peers may be selected strategically in order to potentially inflate a given CEO's pay (Faulkender and Yang (2010, 2011)). This finding is problematic because it suggests that peer CEOs' pay (on the right-hand side) may depend on own-firm CEO pay (on the left-hand side), leading to a reverse causality problem.

FSIs allow me to overcome these complications. First, I provide evidence that firms appear to report their product market peers in an unbiased fashion. Since a firm's product market peers are likely correlated with a firm's compensation benchmarking peers but do not appear to be chosen to influence compensation, I can use FSI peer groups to directly proxy for a firm's direct compensation benchmarking peers in a manner similar to an instrumental variable. Second, FSIs allow me to overcome the reflection problem because each firm has its own "industry" and FSI industries are intransitive. In particular, there are many instances in my sample where i is linked to j, j is linked to k, but i is not linked to k. Since i and k are not directly linked, i is excluded from k's compensation equation (and vice versa), which overcomes the simultaneity problem. In effect, i is an instrument for the effect of j's compensation on k, since i is only linked to k through j. Importantly, Bramoullé, Djebbari, and Fortin (2009) prove that identification is possible in this setting as long as there exists at least one such "intransitive triad" of companies, a condition that is easily met in my sample. As a result, I am able to identify the impact of both channels (performance and compensation benchmarking) on flow compensation using FSI peer group definitions. In contrast, identification is generally not possible in this setting using "traditional" industry classifications because all traditional industry classifications are transitively defined such that i, j, and k are all directly linked.5

4 See Manski (1993) and Leary and Roberts (2011) for more details on the reflection problem. Many standard tests of the compensation benchmarking channel are also "Average E" tests (in the language of Gormley and Matsa (2012)) that may be subject to measurement error biases.
5 This is true even if firm i is left out of its own peer group when constructing peer group averages (Moffitt (2001)). Technically, Bramoullé, Djebbari, and Fortin (2009) show that identification using traditional industries is possible as long as the industries have different sizes. However, a necessary (but not sufficient) condition for identification is that a peer benchmarking effect actually exists using these groups. Unfortunately, I do not find any evidence of performance benchmarking effects using traditional industry definitions.

When performance peers and compensation peers are defined using FSIs, I find little support for the compensation benchmarking channel of executive compensation: average CEO pay does not appear to be influenced by the pay of peer firm CEOs (though I continue to find strong evidence of performance benchmarking). These results are not consistent with the recent literature on the compensation benchmarking channel. However, since FSIs are not necessarily identical to the actual peers that are used by CEOs, boards, and outside consultants to benchmark compensation, my tests may effectively suffer from a "weak instrument" problem. To partially mitigate this problem, I create empirical proxies for actual compensation peer groups by truncating each firm's FSI to only include peers with better operating performance, higher returns, higher values of Tobin's Q, or higher CEO pay in the previous year. These truncation definitions follow the results found in the recent compensation benchmarking literature (Bizjak, Lemmon, and Nguyen (2011), Faulkender and Yang (2010, 2011)). Even when my "instrument" is calibrated to most closely proxy for firms' direct compensation peers, I still find little evidence that compensation benchmarking affects CEO pay after controlling for observable performance measures. In particular, the only calibration that produces a link between compensation benchmarking and CEO pay is a calibration in which each firm's FSI definition is truncated to only include peer firms whose CEOs were paid more than the target firm's CEO in the previous year. While this result is broadly consistent with the existing literature on compensation benchmarking, none of the other truncation definitions (size, Q, size × Q, operating performance, returns) produce a link between peer group CEO compensation and own-firm CEO pay. As such, after controlling for the use of relative performance evaluation and other factors known to affect pay, I find relatively little evidence supporting the idea that compensation benchmarking induces an upward "ratchet effect" in CEO pay.

This paper introduces FSIs and argues that FSIs provide a more accurate description of industries than traditional industry classification standards.6 However, other authors have also created industry definitions that are customized across individual firms.

6 This paper is also related to a long literature examining the influence of industry and product market dynamics on firm characteristics (see, e.g., Brander and Lewis (1986), Maksimovic (1988), Dixit (1989), Bolton and Scharfstein (1990), Maksimovic and Zechner (1991), Hopenhayn (1992), Williams (1995), Miao (2005), and Spiegel and Tookes (forthcoming)). FSIs represent a potentially significant step forward in researchers' ability to accurately map product markets within the U.S. economy, and may have many other applications within finance and other areas such as accounting, marketing, and industrial organization.

In particular, Hoberg and Phillips (2010a,b) collect business descriptions from firms' 10-K filings and use textual analysis to construct

“text-based network industry classifications,” or TNICs, based on firms’ stated descriptions of their product markets. However, TNICs are computed using a textual analysis algorithm that may introduce noise into industry definitions, while FSIs are collected “straight from the horse’s mouth.” In addition to Hoberg and Phillips (2010a,b), Rauh and Sufi (2012) use industry definitions from the data vendor CapitalIQ, which is a subsidiary of Standard & Poor’s. Like FSIs, the CapitalIQ data used by Rauh and Sufi (2012) is sourced from firms’ lists of their primary product market competitors in their 10-K filings. However, CapitalIQ’s data consists of a single cross-section which is updated on an annual basis. Thus, the CapitalIQ data cannot be used for panel data tests. Finally, Lee, Ma, and Wang (2015) use internet traffic on the SEC’s EDGAR website to define firm-specific peer groups. However, their metric relies on investors’ tastes rather than firms’ own disclosures about their primary product market competitors. The remainder of the paper is organized as follows. Section 2 provides a brief overview of the three main executive compensation hypotheses and the two channels through which they operate. Section 3 provides an overview of FSIs. Section 4 details the data used in the paper and provides a variety of summary statistics. Section 5 outlines my empirical design. Section 6 contains the paper’s main results. Section 7 contains a variety of robustness tests. Section 8 concludes.

2 Background

2.1 Relative Performance Evaluation

Simple moral-hazard-based theories of executive compensation start with a risk-neutral principal who is tasked with compensating a risk-averse agent for their actions over a period of time. Suppose that a performance measure p (such as stock returns or accounting variables) for firm i can be linearly decomposed into CEO actions (a), an unobservable common shock (c), and an unobservable firm-specific shock (u) according to the formula $p_i = a_i + c + u_i$. For simplicity, all three variables are assumed to be normally distributed with zero mean and variances given by $\sigma_a^2$, $\sigma_c^2$, and $\sigma_u^2$, respectively. Following Gibbons and Murphy (1990), all three variables are also assumed to be independent of one another, which implies that both the common shock and the performance of peer firms are outside of a CEO's control. Furthermore, suppose


there are a total of N firms (including firm i) that are subject to the common shock c, and suppose that $p_i$ is observable for each of these firms as well. Given this setup, the principal's best measure of the CEO's actions is

\[
E[a_i \mid p_1, \ldots, p_N] \;=\; \beta\Big(p_i - \gamma \sum_{j \neq i}^{N} p_j\Big) \;=\; \beta\,(p_i - \gamma\,\bar{p}_{-i}),
\]

where

\[
\beta = \frac{\sigma_a^2\,\big(\sigma_a^2 + (N-1)\sigma_c^2 + \sigma_u^2\big)}{\big(\sigma_a^2 + \sigma_u^2\big)\big(\sigma_a^2 + N\sigma_c^2 + \sigma_u^2\big)}
\qquad \text{and} \qquad
\gamma = \frac{\sigma_c^2}{\sigma_a^2 + N\sigma_c^2 + \sigma_u^2}.
\]

The optimal sharing rule for the agent will be linear in both $p_i$ and $\bar{p}_{-i}$ and will take the form^7

\[
s_i(p_i, p_{-i}) = \alpha + \beta\,(p_i - \gamma\,\bar{p}_{-i}), \tag{1}
\]

where $s_i$ represents total annual compensation. Since β and γ are positive, this implies that CEO incentives should be increasing in firm performance (i.e., "pay-for-performance") but decreasing in industry performance (to eliminate the effects of common shocks). The key feature of the incentive contract in equation (1) is that CEO compensation depends linearly on the extent to which a CEO outperforms their peers. As noted by Jenter and Kanaan (2008), the term in parentheses in equation (1) represents the residual from a regression of $p_i$ on $\bar{p}_{-i}$. Hence, equation (1) suggests that CEOs should be compensated based on the idiosyncratic component of firm performance after completely conditioning out peer group performance. The concept of completely filtering out peer group performance from firm performance is known as the strong form of the relative performance evaluation hypothesis.

The assumption that CEOs have no influence on the performance of their peer group may not be true in certain product markets. When $p_j$ is a function of $a_i$ (or vice versa), it is no longer optimal to completely remove the effects of peer performance from firm performance, since the peer performance component contains information about the manager's actions. However, Janakiraman, Lambert, and Larcker (1992) show that it is still optimal to remove some of the peer group component when designing the CEO's optimal compensation contract. This weaker result (removing some of the peer group component) is known as the weak form of the relative performance evaluation hypothesis.

7 See, e.g., Holmström (1979, 1982), Holmström and Milgrom (1987), Lambert and Larcker (1987), Banker and Datar (1989), and Janakiraman, Lambert, and Larcker (1992).

Most aggregate tests of CEO compensation have failed to find significant evidence of relative performance evaluation (Antle and Smith (1986), Gibbons and Murphy (1990), Jensen and

Murphy (1990), Barro and Barro (1990), Janakiraman, Lambert, and Larcker (1992), Aggarwal and Samwick (1999a,b), Murphy (1999)).8 A notable exception is Albuquerque (2009), who finds evidence of both strong-form and weak-form performance benchmarking. However, her findings do not appear to hold during my sample period (Gong, Li, and Shin (2011) find the same result). Cross-sectionally, Bertrand and Mullainathan (2001), Jin (2002), Garvey and Milbourn (2003), Garvey and Milbourn (2006), Rajgopal, Shevlin, and Zamora (2006), and Cremers and Grinstein (2011) find asymmetric evidence of RPE depending on market conditions, CEO traits, or CEOs' outside options. However, these tests generally do not find evidence of RPE for the average CEO.

Taken literally, the RPE hypothesis suggests that boards and CEOs should agree on an ex-ante compensation contract that specifically defines the set of peers against whose performance the CEO will be measured during the contract period. Until recently, there has been scant evidence that CEOs and boards actually use such explicit performance benchmarking mechanisms in their compensation contracts.9 However, Gong, Li, and Shin (2011) examine firms' explicit use of performance benchmarking following an SEC disclosure requirement introduced in 2006 and find that around 25% of firms explicitly use performance benchmarking. Their results indicate that firms use performance benchmarking in part to eliminate the impact of common peer group shocks on CEO compensation – that is, they find evidence of the weak-form performance benchmarking effect among firms that use explicit performance benchmarking in their CEO contracts. DeAngelis and Grinstein (2015) also find evidence consistent with firms explicitly contracting with CEOs on relative performance measures. However, Carter, Ittner, and Zechman (2009) examine explicit benchmarking in the UK and find little evidence that firms use benchmarking to eliminate common shocks. In short, most of the existing literature has found little large-sample evidence supporting relative performance evaluation in CEO compensation decisions.10

8 Aggarwal and Samwick (1999a) and Joh (1999) find that CEO pay responds positively to peer group performance in oligopolistic industries, which is the opposite of the standard prediction.
9 Furthermore, a long literature has noted the lack of indexed stock options in CEO compensation packages (see, e.g., Bebchuk and Fried (2004)), though such options until recently were subject to unfavorable accounting treatment relative to standard, non-benchmarked options (Core, Guay, and Larcker (2003)).
10 Furthermore, anecdotal evidence suggests that firms also frequently use performance benchmarking implicitly (or ex-post) without contracting on an explicit ex-ante peer group. (In addition, standard incentive-based compensation generally has an implicit relative performance component; see Core and Guay (2003) for a discussion.) One potential benefit of ex-post performance benchmarking is that this approach allows the board to select an appropriate peer group based on the CEO's actions during the year (such as, for example, entering a new product market or exiting an existing product market).


2.2 Compensation and the Labor Market for CEOs

In standard principal-agent models, CEO pay is set to exactly match the CEO's outside option, which is normally assumed to be exogenous. However, time-varying or endogenous outside options may eliminate or reduce the effectiveness of performance benchmarking. Oyer (2004) shows that when the outside option is endogenized, performance benchmarking will be inefficient if the CEO's outside options are positively correlated with industry performance. For example, suppose that a positive technology shock occurs in industry X that significantly enhances the human capital of CEOs in that industry and also contributes to abnormal industry performance. In this case, a CEO with subpar relative performance may still be recruited by outside firms that wish to benefit from the CEO's newfound expertise. As a result, compensating the CEO purely based on relative performance may not be sufficient to retain the CEO's services.

A recent set of talent assignment models also links the demand for a CEO's human capital to her pay level (see, e.g., Gabaix and Landier (2008) and Terviö (2008)). For example, Edmans, Gabaix, and Landier (2009) argue that the level of pay is driven by labor market considerations ("pay-for-talent"), but the components of pay (cash, stock, etc.) are chosen to solve the moral hazard problem caused by the separation of ownership and control. [ADD MORE ABOUT RPE]

Like RPE, the "labor market" hypothesis encapsulated by the theories above is consistent with optimal contracting principles: the optimal contract rewards the CEO for her talent while also attempting to overcome the moral hazard problem that exists between the firm's owners and the CEO. While I am not aware of any theoretical models that directly link CEO i's compensation to the compensation of CEO j, the labor market hypothesis indirectly suggests that firms may choose to benchmark CEO pay against the compensation earned by CEOs at rival firms, since the compensation of other CEOs is likely the best empirical proxy for a CEO's outside options. Indeed, the use of a compensation comparison peer group is widespread in practice (Bizjak, Lemmon, and Naveen (2008), Bizjak, Lemmon, and Nguyen (2011), Faulkender and Yang (2010, 2011), Shue (2012)). For example, Bizjak, Lemmon, and Naveen (2008) randomly sample 100 companies from the S&P 500 index and find that 96 of the 100 companies use peer group or market-wide compensation benchmarking as an input into determining CEO pay. If these direct compensation peers are used as an empirical proxy for the CEO's reservation wage, the use of compensation benchmarking is therefore consistent with the labor market or outside options hypothesis (Bizjak, Lemmon, and Naveen (2008), Bizjak, Lemmon, and Nguyen (2011)). Hence, while the labor market


hypothesis is generally distinct from the relative performance evaluation hypothesis, the labor market theory arguably supports the use of compensation benchmarking in CEO compensation decisions.

2.3 Rent Extraction

While the RPE and labor market hypotheses are consistent with optimal contracting, a third class of theories argues that many CEOs have sufficient power to materially influence their own pay (Bertrand and Mullainathan (2001), Bebchuk, Fried, and Walker (2002), Bebchuk and Fried (2004), Kuhnen and Zwiebel (2008), Morse, Nanda, and Seru (2011)). Under this "rent extraction" hypothesis, CEOs would like to pay themselves as much as possible every year. However, CEOs know that they cannot pay themselves too lavishly without drawing the attention of shareholders. Hence, CEOs look for ways to justify their lavish pay packages that will not draw the attention of shareholders.

One way to justify pay increases is by using performance benchmarking asymmetrically. Specifically, when industry performance is strong, the rent extraction hypothesis suggests that CEOs will want to avoid the use of performance benchmarking, since firm performance alone can potentially justify high pay. In contrast, when industry performance is weak, the rent extraction hypothesis suggests that CEOs will become avid proponents of performance benchmarking, since benchmarking will limit or even eliminate any negative shocks to their compensation. Bertrand and Mullainathan (2001) and Garvey and Milbourn (2006) find empirical evidence consistent with this pattern.

A second way that CEOs can justify lavish pay packages is by manipulating the list of their compensation peers to include firms whose CEOs are highly paid. For example, a CEO may argue that their "outside options" include firms in unrelated business lines whose CEOs are highly paid. Faulkender and Yang (2010, 2011) and Shue (2012) find evidence consistent with this argument. Hence, the rent extraction hypothesis supports both (partial) performance benchmarking and compensation benchmarking.

2.4 Integrating Compensation Benchmarking and Performance Benchmarking

Equation (1) characterizes the optimal compensation contract under a simple moral hazard problem when outside options are exogenous.11 However, the labor market and rent extraction hypotheses provide plausible justification for the use of compensation benchmarking, and the existence of the compensation benchmarking channel suggests that equation (1) should be modified to incorporate both forms of potential peer group benchmarking (performance benchmarking and compensation benchmarking). As such, the CEO compensation equation is assumed to take the form

\[
s_i(p_i, p_{-i}, s_{-i}) = \alpha + \beta\,(p_i - \gamma\,\bar{p}_{-i}) + \delta\,\bar{s}_{-i}, \tag{2}
\]

where $\bar{s}_{-i}$ represents average peer group compensation.12 Importantly, equation (2) incorporates both performance benchmarking and compensation benchmarking into a single formula. The basic prediction of performance benchmarking / RPE is that β and γ will be positive in equation (2). The basic prediction of the labor market hypothesis is that δ will be positive. Finally, the basic prediction of the rent extraction hypothesis is that β and δ will always be positive, while γ will be positive when firm performance is poor (in absolute terms) and will be zero otherwise.

11 Compensation benchmarking is superfluous in the RPE model because all of the relevant information needed to contract around the moral hazard problem is contained within the performance measures $p_i$ and $\bar{p}_{-i}$. Hence, under the RPE hypothesis, $E[a_i \mid p_1, \ldots, p_N, s_1, \ldots, s_N] = E[a_i \mid p_1, \ldots, p_N]$; that is, the signals provided by peers' compensation choices contain no new information that would improve the precision with which the principal can infer the agent's actions.
12 Following the empirical literature on compensation benchmarking, I make the simplifying assumption that peer group compensation affects own-firm compensation in a linear fashion.
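To make these sign predictions concrete, the block below is a minimal simulation sketch of the section 2.1 model. It is not taken from the paper: the shock volatilities, the number of firms, and the use of an equal-weighted peer average are illustrative assumptions. The sketch simply checks the qualitative pattern behind equations (1) and (2): the best linear estimate of the CEO's action loads positively on own performance and negatively on peer performance.

```python
# Illustrative simulation of p_i = a_i + c + u_i with independent normal shocks
# (assumed parameter values). The linear projection of a_i on own and peer
# performance should load positively on p_i and negatively on the peer average.
import numpy as np

rng = np.random.default_rng(0)
T, N = 200_000, 10                      # simulated draws and number of firms (assumed)
sig_a, sig_c, sig_u = 1.0, 1.0, 1.0     # assumed shock volatilities

a = rng.normal(0.0, sig_a, (T, N))      # CEO actions
c = rng.normal(0.0, sig_c, (T, 1))      # common industry shock
u = rng.normal(0.0, sig_u, (T, N))      # firm-specific shocks
p = a + c + u                           # performance measures

p_own = p[:, 0]                         # firm i's performance
p_peer = p[:, 1:].mean(axis=1)          # equal-weighted peer average
X = np.column_stack([np.ones(T), p_own, p_peer])
coef, *_ = np.linalg.lstsq(X, a[:, 0], rcond=None)
print(coef)                             # own coefficient > 0, peer coefficient < 0
```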

3 Firm-Specific Industries

All three of the hypotheses described above (RPE, labor markets, and rent extraction) depend crucially on peer group definitions. However, the industry classification system used by most researchers – the Standard Industrial Classification, or "SIC" system – generally does a poor job of classifying firms into peer groups based on common economic characteristics (Bhojraj, Lee, and Oler (2003), Weiner (2005), Chan, Lakonishok, and Swaminathan (2007), Lewellen (2012a), Hoberg and Phillips (2010a,b), Lee, Ma, and Wang (2015)).13 Even the creator of the SIC system (the U.S. Government) admits that the "SIC scheme of classification [contains numerous types of] inconsistencies that [are] due almost entirely to a lack of a unified economic concept of the industry and of the proper way to categorize establishments by industry" (Gabor, Houlder, and Carpio (2001)).14 This is problematic given that Bhojraj, Lee, and Oler (2003) find that SIC classifications are used in more than 90% of all industry-related tests published in major journals.

13 Clarke (1989), Guenther and Rosman (1994), and Kahle and Walkling (1996) also examine the empirical properties of SIC codes across different granularities and data sources, but the results of these studies are mixed.
14 The U.S. Government created the SIC system in 1939.

Figure 1 displays the traditional definition of an industry classification. This definition applies

to each of the widely-used industry classification standards in finance and economics, namely the SIC and NAICS systems, the Global Industry Classification Standard (GICS) system developed by MSCI Barra and Standard & Poor's, and the 48 (or 49) industries developed by Fama and French (1997). The key feature of Figure 1 is that traditional industry classification definitions are transitive: if firm A is in the same industry classification as firms B and C, then firms B and C must also be in the same classification. Traditional industry classification standards are also disjoint, in the sense that a firm can be assigned to one and only one parent-level classification. While transitive and disjoint industry definitions are convenient for research purposes, their extreme rigidity is often at odds with the actual nature of product market competition. For example, Pepsi generates nearly half of its sales and operating profits from its snacks division (Frito-Lay, Quaker Foods), but Pepsi's SIC classification (SIC code 2086) only contains large beverage manufacturers (Coke, Pepsi, the Coke and Pepsi bottling groups, Dr. Pepper Snapple Group, Monster, National Beverage, and Heckmann Corp). Hence, Pepsi is only matched with about half of its true competitors.

To overcome the problems with traditional industry classifications, this paper introduces firm-specific industries, or FSIs. To construct FSIs, I exploit a federal law requiring firms to name their competitors in their annual 10-K reports under certain conditions. Specifically, Section 101 of Regulation S-K of the Securities Act of 1933 (17 C.F.R. §229.101) states that: "[g]enerally, the names of competitors need not be disclosed. The registrant may include such names, unless in the particular case the effect of including the names would be misleading. Where, however, the registrant knows or has reason to know that one or a small number of competitors is dominant in the industry it shall be identified."

The statutory language in 17 C.F.R. §229.101 only requires firms to report competitors' names if such competitors are "dominant" in their industry. In practice, however, firms often provide a list of all significant competitors in a market, even if none of the competitors are dominant in an economic sense. Furthermore, most of the firms listed as competitors are large, and the majority of large firms list competitors. Hence, there is a significant overlap between the competitor lists and the S&P 1500 firms that are covered by ExecuComp. Importantly, the statutory language in 17 C.F.R. §229.101 also explicitly prohibits firms from listing competitors in a "misleading" fashion, which should help to ensure that firms do not report competitors strategically.

FSIs differ from traditional industry classifications in that industries are defined on a firm-by-firm basis based on the specific economic characteristics of the firm in question. Hence, FSIs are

neither transitive nor disjoint. This yields two immediate benefits which are depicted graphically in Figure 2. First, a single firm can reside within multiple FSIs spanning many "traditional" industry classifications. FSIs can also be asymmetric: if A points to B and B points to C, C need not point to A as a competitor. As such, FSIs should provide a more accurate map of the product market space within the U.S. economy than traditional classification standards.

The benefits of FSIs can be seen by again examining Coke and Pepsi. In its 2008 10-K filing, Coke lists Pepsi, Dr. Pepper Snapple Group, Nestlé, Groupe Danone, Kraft Foods, and Unilever as its primary competitors. In contrast, Pepsi reports Coke, Cadbury Schweppes, and Nestlé as its primary beverage competitors and Kraft Foods, Procter & Gamble, General Mills, Kellogg's, Campbell Soup, ConAgra, and Snyder's as its snack competitors. Hence, (i) Pepsi can be matched to snack and beverage makers, (ii) Coke need not be matched to snack manufacturers, and (iii) even in the beverage sector, Coke and Pepsi can be matched with a different set of competitors. None of these outcomes would be possible using traditional industry classification standards. [ADD PARAGRAPH COMPARING FSIs to HobergPhillips] [ADD INFO ON SEGMENT SIC CODES] [CITE Chen, Cohen and Lou]

4 Data and Summary Statistics

4.1 Firm-Specific Industry Definitions

Competitor names are collected by hand from firms' annual 10-K filings with the SEC for firm fiscal years ranging from 2002 to 2008. The decision to start in 2002 is based on the fact that investors' attention to governance practices increased significantly around this time (Bebchuk, Cohen, and Wang (forthcoming)). Time considerations also forced me to stop collecting 10-K data for fiscal years beyond 2008. I began by generating a list of all firms in the merged CRSP/Compustat database each fiscal year from 2002-2008. This process produced a total of 36,176 firm-year observations, of which 17,021, or approximately 47%, contained valid competitors that could be matched to Compustat (in total, approximately 60% of the 10-Ks in the sample contained one or more competitor names, but many of these 10-Ks only listed private or foreign companies as competitors).15 The matching process used to assign 10-K competitor references to Compustat is described in the Appendix.

15 This compares favorably to CapitalIQ, which according to Rauh and Sufi (2012) can only assign approximately 30% of firms to direct competitors, and must assign the other 70% of firms to industries in an indirect fashion.

While competitor names from firms' 10-K

filings were also matched to Compustat's Global database, the present analysis is restricted to Compustat's North American database.

The statutory language in 17 C.F.R. §229.101 only requires firms to list the names of "dominant" competitors within their industries. This language naturally suggests that firms are likely to only list large competitors, which will introduce selection bias into the sample. Furthermore, despite the legal requirement preventing firms from listing "misleading" competitors, firms may still select their competitors on a strategic basis, which may also introduce selection biases. To partially address these biases, I expand the definition of FSIs by including peers that point to firm i within firm i's FSI, even if firm i did not include such peers in its list of competitors. Concretely, firm i's FSI in a given year is defined as

\[
FSI_i \equiv \{\, j : (i \text{ listed } j) \cup (j \text{ listed } i) \,\}. \tag{3}
\]

This definition is helpful because it expands the number of firms with FSIs while also reducing any strategic selection biases or size effects that exist in the sample. For example, suppose that small firm i lists larger firms j and k as competitors, medium-sized firm j only lists the larger firm k as a competitor, and k lists no competitors. Under my revised definition of FSIs, i’s peer group will be {j,k}, j’s peer group will be {i,k}, and k’s peer group will be {i,j}. Hence, k is assigned an FSI even though it did not provide a list of competitors, while i is listed as a competitor to j and k even though it is smaller than both firms. I examine the potential impact of selection biases on my tests in further detail in section 7.
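To illustrate the mechanics of equation (3) and the example above, here is a minimal sketch that builds FSIs from directed 10-K "listed" pairs; the firm identifiers i, j, and k and their listings are the hypothetical ones from the preceding paragraph, not actual filings.

```python
# Build FSI_i = {j : (i listed j) or (j listed i)} from directed listing pairs.
from collections import defaultdict

# (filer, listed peer) pairs: small firm i lists j and k, j lists k, k lists no one.
listings = [("i", "j"), ("i", "k"), ("j", "k")]

fsi = defaultdict(set)
for filer, peer in listings:
    fsi[filer].add(peer)   # "i listed j" direction
    fsi[peer].add(filer)   # "j listed i" direction (the union in equation (3))

print(dict(fsi))           # {'i': {'j', 'k'}, 'j': {'i', 'k'}, 'k': {'i', 'j'}}
```

As in the text, firm k receives an FSI even though it listed no competitors, and firm i appears in the FSIs of the larger firms j and k.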

4.2 Other Data Sources

I obtain executive compensation data from ExecuComp, stock returns and market capitalization information from the Center for Research in Security Prices (CRSP), and accounting data from Compustat. Corporate governance data is sourced from Andrew Metrick’s website, and I obtain the Fama and French (1997) industries from Ken French’s website. Inflation data comes from the BLS. Finally, the text-based network classifications designed by Hoberg and Phillips (2010a,b) are sourced from the authors’ website (http://www.rhsmith.umd.edu/industrydata/). Following the majority of the executive compensation literature, I restrict the ExecuComp sample to only include CEOs. This restriction is justified in part by observing that other executives may have an incentive to strategically influence each other (for example, by forming a cartel) in order to improve


their own relative standing (Holmström (1982)). I also drop all firm-year observations where the CEO changed during the course of the year.

4.3 Variable Definitions

Following Bertrand and Mullainathan (2001) and Albuquerque (2009), CEO compensation is measured as the log of the total flow of compensation in a given year. My decision to focus on flow compensation is motivated by the inclusion of the compensation benchmarking channel in my tests. While performance benchmarking is used to align incentives, and is hence related to a CEO's total wealth, compensation benchmarking is used to adjust flow pay rather than to adjust the CEO's overall incentives. Hence, I focus on flow pay in nearly all of my tests. All non-scaled compensation metrics are adjusted for inflation and are reported in 2008 dollars.

Other variables include firm and industry returns, market returns (both S&P 500 returns and returns on the CRSP value- and equal-weighted market benchmarks), the accounting return on assets (ROA) and return on equity (ROE), log sales divided by assets, log total assets, the firm's book value-to-market value (B/M) ratio, CEO age, CEO tenure, squares of CEO age and tenure (Bertrand and Mullainathan (2001)), CEO ownership, and Gompers, Ishii, and Metrick (2003)'s governance index.16 I compute many of these variables at the industry level as well. CEO ownership is defined as the number of shares owned by the CEO (excluding options) divided by total shares outstanding. ROE is defined as net income divided by average book equity and ROA is defined as after-tax operating income divided by average book assets. The sample is restricted to firms with total assets of at least $10 million and stock returns in CRSP. Stock returns are adjusted for delisting (Shumway (1997)) and the CRSP sample is restricted to securities with share codes equal to 10 or 11. CEO compensation decisions are typically made after the end of firms' fiscal years, so returns and other variables are observable.

While there are numerous potential measures of firm and peer performance, I will focus primarily on stock returns as the key variable for tests of performance benchmarking / RPE. I motivate my focus on stock returns in three ways. First, the overwhelming majority of the RPE literature focuses on stock returns. Second, stock returns are not as easily manipulated as accounting variables such as ROA (Antle and Smith (1986)).

16 Including board characteristics such as interlocking directors or the number of board meetings does not materially affect any of my results.

Finally, Gong, Li, and Shin (2011) report that of the firms that explicitly report using performance benchmarks to

compensate their executives, 75% used stock returns as a relative performance measure (the next most common performance measure was ROE, with less than 15%). Consistent with the literature, I also exclude firm i from all peer performance measures that are subsequently matched against firm i. Following Leary and Roberts (2011), ROA, ROE, and the book-to-market ratio (B/M) are Winsorized at the 1% and 99% levels in all tests.
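As a small illustration of the winsorization step described above, the sketch below clips a series at its 1st and 99th percentiles; the toy data and column name are assumptions rather than the paper's actual Compustat fields.

```python
# Winsorize a series at the 1% and 99% levels by clipping at the sample quantiles.
import numpy as np
import pandas as pd

def winsorize(s: pd.Series, lower: float = 0.01, upper: float = 0.99) -> pd.Series:
    lo, hi = s.quantile(lower), s.quantile(upper)
    return s.clip(lower=lo, upper=hi)

# Toy heavy-tailed column standing in for ROA.
roa = pd.Series(np.random.default_rng(0).standard_t(df=2, size=5000), name="roa")
print(winsorize(roa).describe())
```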

4.4 Summary Statistics

Table 1 reports summary statistics for my sample of firm-specific industries. Panel A shows that the average and median number of competitors listed in each 10-K and matched to a Compustat GVKEY are around 9 and 7, respectively. Hence, these industries are smaller than standard classifications such as three-digit SIC codes. Panel B restricts the sample to firms that have valid stock returns in CRSP. Here, the average and median number of competitors drops slightly to 8 and 6, respectively. Panel C restricts the sample to ExecuComp firms. The panel shows that the number of ExecuComp firms directly reporting competitors rose from 740 in 2002 to a high of 958 in 2007. Panel C shows that the average and median number of competitors reported by firms in the ExecuComp sample are nearly identical to the broader sample at 9 and 7, respectively.

Table 2 reports summary statistics related to executive compensation. The mean and median levels of total compensation from 2002 to 2008 (in 2008 dollars) were $5.5 million and $3.2 million, respectively. Average total compensation is relatively flat over the sample period, reaching a peak of $5.8 million in 2005 and a trough of $5.1 million in 2008, which corresponds with the onset of the financial crisis. Likewise, inflation-adjusted salaries remained essentially constant over the sample. However, bonuses dropped significantly in 2006 in favor of non-cash compensation and remained well below trend in 2007 and 2008. This structural break appears to be associated with a 2006 change in ExecuComp's classification of most bonuses as non-equity incentive compensation.

5 Empirical Approach

Equation (2) characterizes CEO pay for firm i in peer group j at time t ($s_{ijt}$) as a function of own-firm performance ($p_{ijt}$), peer group performance ($\bar{p}_{-ijt}$), and peer group compensation ($\bar{s}_{-ijt}$). Adding peer group and time fixed effects produces the regression equation:

\[
s_{ijt} = \alpha + \beta\,\bar{s}_{-ijt} + \gamma' \bar{p}_{-ijt} + \delta' p_{ijt} + \zeta' g_j + \theta' m_t + \varepsilon_{ijt}. \tag{4}
\]

Importantly, peer group averages in (4) are constructed using FSIs rather than traditional industry classification standards. There are three main identification concerns with the empirical specification in (4). The first concern is related to the self-selection of peers. If firms are selecting their compensation benchmarking peers strategically, this implies that the peer group used to compute $\bar{s}_{-ijt}$ is potentially endogenous. Indeed, Faulkender and Yang (2010, 2011) find that the peer groups chosen for compensation benchmarking purposes appear to be selected strategically in order to justify higher pay levels. To address this concern, I use FSIs in place of the "direct" compensation peers used by other authors (Bizjak, Lemmon, and Naveen (2008), Bizjak, Lemmon, and Nguyen (2011), Faulkender and Yang (2010, 2011)). FSIs should be correlated with firms' actual compensation benchmarking peers as disclosed in their proxy statement filings, and the evidence presented in section 7 suggests that FSI peers do not appear to be chosen strategically by firms in a manner that would link peer selection to compensation.

The second identification concern in equation (4) is related to Manski (1993)'s "reflection" or simultaneity problem: it is difficult to measure the effect of $\bar{s}_{-i}$ on $s_i$ because i's compensation can influence j's compensation, which in turn can influence i's compensation, and so on. Researchers typically use randomized experiments or variance restrictions to overcome the reflection problem (An (2011), Blume, Brock, Durlauf, and Ioannides (2011)).17 However, the unique structure of FSIs allows me to directly overcome the simultaneity problem described above without random assignment or variance restrictions. Specifically, the intransitivity of FSIs allows for identification because it allows two firms to be connected to the same common peer without also being connected to one another. When network links are intransitive, Bramoullé, Djebbari, and Fortin (2009) and De Giorgi, Pellizzari, and Redaelli (2010) formally show that the coefficients in (4) are identified under a relatively unrestricted set of conditions.18 I provide the necessary and sufficient conditions for identification in the present setting in the Appendix. Importantly, traditional industry classifications cannot be used to overcome the reflection problem, since they are transitive in nature. My identification strategy using FSIs can be illustrated by again examining Coke and Pepsi.

17 For example, Shue (2012) uses the random assignment of students into Harvard Business School cohorts to create spatial distance and variance decomposition measures that allow for the estimation of a reduced-form parameter and, in some cases, allow for full identification of the individual peer effects. Likewise, Beshears, Choi, Laibson, Madrian, and Milkman (2012) and Bursztyn, Ederer, Ferman, and Yuchtman (2012) use random assignment to examine peer effects in retirement savings decisions and asset purchases, respectively. In contrast, Leary and Roberts (2011) use an instrumental variables approach to identify endogenous and exogenous capital structure peer effects, where their instrumental variable consists of lagged idiosyncratic stock returns.
18 Cohen-Cole (2006) reports a similar finding using a different network structure.
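As a minimal sketch of the identification condition discussed above, the code below scans a small peer network for an "intransitive triad": two firms that share a common peer but are not linked to each other (the condition in Bramoullé, Djebbari, and Fortin (2009)). The links are the Coke-Pepsi-Snyder's illustration developed in the next paragraph and are assumed here purely for illustration.

```python
# Find intransitive triads (i, j, k): i-j and j-k linked, but i-k not linked.
from itertools import combinations

links = {("Coke", "Pepsi"), ("Pepsi", "Snyders")}        # assumed FSI links
firms = {f for pair in links for f in pair}
linked = lambda a, b: (a, b) in links or (b, a) in links

triads = [(i, j, k)
          for i, k in combinations(sorted(firms), 2) if not linked(i, k)
          for j in firms - {i, k} if linked(i, j) and linked(j, k)]
print(triads)   # [('Coke', 'Pepsi', 'Snyders')] -> at least one triad, so the condition holds
```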


Coke and Pepsi compete with each other in the product market for beverages, while Pepsi and Snyder's compete with each other in the market for salty snacks. However, Coke does not directly compete with Snyder's in any single product market. Hence, while Coke is linked to Pepsi and Pepsi is linked to Snyder's, Coke is not linked to Snyder's. This allows me to overcome the reflection problem because the pay of Snyder's CEO only influences the pay of Coke's CEO through its effect on the pay of Pepsi's CEO.

While FSIs allow me to overcome the reflection problem, a third identification concern is that unobserved peer group effects may influence the peer group averages $\bar{s}_{-ijt}$ and $\bar{p}_{-ijt}$. To alleviate this problem, I eliminate the group fixed effects by first-differencing equation (4) (Blume and Durlauf (2006), Blume, Brock, Durlauf, and Ioannides (2011), Helmers and Patnam (2011)).19 This yields the difference equation:

\[
\Delta s_{ijt} = \beta\,\Delta\bar{s}_{-ijt} + \gamma' \Delta\bar{p}_{-ijt} + \delta' \Delta p_{ijt} + \theta' \Delta m_t + \Delta\varepsilon_{ijt}. \tag{5}
\]

Equation (5) is consistent with the weak-form version of RPE in which only some performance benchmarking takes place. If the weak-form RPE hypothesis is true, the coefficient(s) in γ should be negative, while the coefficient(s) in δ should be positive. Furthermore, the rent extraction and labor market hypotheses predict that β will be positive.

In contrast to the "weak-form" test given by (5), the majority of "strong-form" RPE tests within the literature utilize a two-stage process where the first stage consists of regressing a firm performance measure against a peer group performance measure and controls. Since I use stock returns as my main performance measure, my first stage consists of regressing firm stock returns against FSI stock returns and controls. The predicted and residual values from this first-stage regression are then inserted into the second-stage regression. This yields the following second-stage regression:

\[
\Delta s_{ijt} = \beta\,\Delta\bar{s}_{-ijt} + \gamma_{\text{luck}}\,\Delta\hat{r}_{ijt} + \gamma_{\text{skill}}\,\Delta\tilde{r}_{ijt} + \delta' \Delta\bar{p}_{-ijt} + \eta' \Delta p_{ijt} + \theta' \Delta m_t + \Delta\varepsilon_{ijt}, \tag{6}
\]

19 Other authors use different approaches to eliminate correlated group effects. Bramoullé, Djebbari, and Fortin (2009) suggest the use of a within transformation coupled with instrumental variables. Rather than transforming the data to remove the correlated peer group effect $g_j$, De Giorgi, Pellizzari, and Redaelli (2010) simply instrument for the endogenous peer group effect $\bar{s}_{-ijt}$. In untabulated results, I follow De Giorgi, Pellizzari, and Redaelli (2010) and instrument for $\bar{s}_{-ijt}$ using the performance measures $\bar{p}_{-ijt}$ of firm i's peers' peers. In the language of De Giorgi, Pellizzari, and Redaelli (2010), these peers' peers represent "excluded" peers. None of my results are materially affected.


where r represents returns, $\hat{r}_{ijt}$ and $\tilde{r}_{ijt}$ represent the predicted and residual values from the first-stage regression, and p and $\bar{p}$ represent the vectors of other non-return performance measures (ROA, ROE, etc.) used in equation (6). Under the "strong-form" RPE hypothesis, CEOs should only be paid for "skill," which is the residual component from the first-stage regression. Conversely, CEOs should not be paid for "luck" (the predicted values from the first-stage regression). Hence, under the strong-form RPE hypothesis, the $\gamma_{\text{luck}}$ coefficient in (6) should be zero and the $\gamma_{\text{skill}}$ coefficient should be positive. All of the other predictions remain unchanged.
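The sketch below illustrates the two-stage procedure behind equation (6) using plain OLS. The column names (firm_ret, fsi_ret, d_log_pay, d_peer_pay) are hypothetical, and the sketch abstracts from the first-differencing of the return terms, the additional controls, and the firm- and time-clustered standard errors used in the actual tests.

```python
# Two-stage "luck vs. skill" illustration: stage 1 splits firm returns into a
# peer-driven fitted component ("luck") and a residual ("skill"); stage 2 regresses
# the change in log CEO pay on luck, skill, and the change in peer CEO pay.
import numpy as np
import pandas as pd

def ols(y: np.ndarray, X: np.ndarray):
    """Return OLS coefficients and residuals (X should already include a constant)."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b, y - X @ b

def luck_skill(df: pd.DataFrame) -> pd.Series:
    X1 = np.column_stack([np.ones(len(df)), df["fsi_ret"].to_numpy()])
    b1, resid = ols(df["firm_ret"].to_numpy(), X1)
    luck, skill = X1 @ b1, resid

    X2 = np.column_stack([np.ones(len(df)), luck, skill, df["d_peer_pay"].to_numpy()])
    b2, _ = ols(df["d_log_pay"].to_numpy(), X2)
    return pd.Series(b2, index=["const", "luck", "skill", "d_peer_pay"])
```

Under the strong-form RPE prediction described above, the "luck" coefficient from the second stage should be close to zero and the "skill" coefficient positive.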

6 Results

6.1 Weak-Form RPE Tests

I begin by estimating the "weak-form" model given by equation (5). Results are reported in Table 3. Firm characteristics (including firm returns) and other peer group characteristics are included in all regression specifications in this section but are omitted from the resulting table for brevity. Standard errors in all tests are also clustered by firm and by time (Petersen (2009), Cameron, Gelbach, and Miller (2011)).

Two main findings stand out in Table 3. First, in contrast with the vast majority of the empirical literature on CEO compensation, Table 3 presents strong evidence of (weak-form) relative performance evaluation when peer groups are defined using FSIs. After standardizing coefficients, I find that a one-standard-deviation increase in peer group stock returns reduces the (log) change in total CEO compensation by about 6% in column (4) of the table (which corresponds with the full specification in equation (5)). Hence, the economic magnitude of performance benchmarking on CEO pay appears to be significant.20 When compensation benchmarking peers are defined using FSIs, I also find no evidence that compensation benchmarking induces an upward "ratchet effect" in CEO pay: the coefficient on average peer group compensation ($\bar{s}_{-ijt}$) is statistically and economically small in all specifications in the table.

20 Table 3 also shows that the coefficients on peer group ROA and ROE are positive and statistically significant, which goes against the canonical RPE hypothesis. However, this result is consistent with the rest of the CEO compensation literature extending from Gibbons and Murphy (1990) to Albuquerque (2009) and is consistent with Gong, Li, and Shin (2011), who find that the vast majority of firms that explicitly use performance benchmarking to set CEO compensation focus on returns, not accounting variables.

Hence, in contrast to the recent empirical literature on compensation benchmarking (Faulkender and Yang (2010, 2011), Bizjak, Lemmon, and Naveen (2008), Bizjak, Lemmon, and Nguyen (2011)), I find that CEO i's pay does not appear to depend on peer CEO j's pay after

19

controlling for observable performance measures. Hence, which compensation benchmarking clearly exists in practice, there is little evidence that this type of benchmarking alone affects the average CEO’s pay. One possible explanation for the lack of a compensation benchmarking effect in Table 3 is that firms may define their compensation peers very differently than their FSIs. For example, Gong, Li, and Shin (2011) find significant differences across explicit performance benchmarking peers and the peer groups chosen for compensation benchmarking purposes. Furthermore, Bizjak, Lemmon, and Naveen (2008) find that firms typically choose larger, better-performing firms as their compensation peers, and this tends to inflate pay, since compensation is typically an increasing function of firm size and firm performance. Bizjak, Lemmon, and Naveen (2008) and Faulkender and Yang (2010, 2011) also provide evidence that many firms choose peers whose executives are more highly compensated than the firm’s own executives. My tests in Table 3 may not have picked up any of these effects. I cannot use compensation peer groups directly in my tests because these peer groups may be selected precisely to justify the CEO’s pay, creating endogeneity concerns. As such, it is necessary to develop an empirical proxy for firms’ direct compensation peers. To construct such a proxy, I restrict the peer groups used to measure compensation benchmarking (¯ s−ijt ) to only include FSI peers that were (i) larger, (ii) had better operating performance, (iii) had higher stock returns, or (iv) had higher CEO compensation levels in the previous year. Since some (if not all) of these FSI peers should include firms’ actual compensation peers, this approach should pick up any general compensation benchmarking effects while overcoming the problems associated with the strategic selection of compensation peer groups. Importantly, the firm’s FSI remains unchanged for the purposes of performance benchmarking. In the language of equation (5), I construct the compensation benchmarking variable s¯−ijt using only a subset of each firm’s FSI, while leaving the construction of the p¯−ijt term unchanged. Since these “new” compensation peer groups still represent subsets of existing FSIs, all of the identifying assumptions discussed in section 5 should continue to hold. Table 4 contains the results of my tests. Each column in the table represents the results from a regression where compensation peer groups are formed by restricting each firm’s FSI to only include firms that had a higher (lower, in the case of B/M) value of some variable X (listed at the top of each column) than firm i in the previous year. For example, each compensation peer group in the first column of the table is defined as the set of j firms within firm i’s FSI such that Sizej,t−1 > Sizei,t−1 , where size is measured as the natural log of total assets. Hence, the variable ∆ln(F SI Compensation) in the first column of the table represents the average year-over-year 20

change in log compensation among firms that are in the same FSI as firm i, but that were larger in size as of the previous year-end date. Likewise, each compensation peer group in the column labeled “Compensation” is given by the set of j firms within firm i’s FSI whose CEO’s compensation in year t − 1 exceeded that of firm i’s CEO. Hence, the variable ∆ln(F SI Compensation) in the “Compensation” column represents the average year-over-year change in log compensation among firms that are in the same FSI as firm i, but whose CEOs were paid more than the CEO of firm i in year t − 1. The first six columns of Table 4 shows that most of my empirical proxies for direct compensation peers fail to produce the compensation benchmarking effect that has been identified in the recent literature: the coefficient on ∆ln(F SI Compensation) is statistically and economically small in all but one of these columns. However, two other findings are worth noting. First, the loading on the 12-month FSI return variable remains negative and highly statistically significant in all but one instance, suggesting that weak-form RPE continues to hold regardless of how compensation peers are defined (in fact, the loadings on this variable increase significantly in magnitude relative to the loadings in Table 3). In addition, the column labeled “Compensation” shows that the year-overyear change in the average CEO’s compensation is positively correlated with the year-over-year change in the compensation of their higher-paid FSI peers. [ADD MORE]
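For concreteness, the construction of these truncated compensation peer groups can be sketched as follows. This is a minimal illustration under assumed inputs (an FSI link table and a firm-year panel with made-up column names), not the paper’s actual code.

```python
import pandas as pd

# Sketch: build a truncated compensation peer group from FSI links.
# Hypothetical inputs:
#   links: one row per (gvkey, peer_gvkey, year) pair in the firm's FSI
#   firms: firm-year panel with a lagged screening variable (e.g., size_lag)
#          and the year-over-year change in log CEO pay (dlog_pay)
def truncated_fsi_compensation(links, firms, screen="size_lag"):
    # Attach firm i's own lagged screening variable to each FSI link
    out = links.merge(firms[["gvkey", "year", screen]],
                      on=["gvkey", "year"], how="left")
    # Attach each peer j's lagged screening variable and change in log pay
    peers = firms.rename(columns={"gvkey": "peer_gvkey",
                                  screen: "peer_" + screen,
                                  "dlog_pay": "peer_dlog_pay"})
    out = out.merge(peers[["peer_gvkey", "year", "peer_" + screen, "peer_dlog_pay"]],
                    on=["peer_gvkey", "year"], how="left")
    # Keep only peers with a higher value of the screening variable last year
    out = out[out["peer_" + screen] > out[screen]]
    # Equal-weighted mean change in peer log pay = Δln(FSI Compensation)
    return (out.groupby(["gvkey", "year"])["peer_dlog_pay"]
               .mean().rename("dlog_fsi_comp"))
```

The performance benchmarking variable (the 12-month FSI return) would still be computed from the firm’s full FSI, exactly as in Table 3.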

6.2 Strong-Form RPE Tests

Table 5 presents the results of strong-form RPE tests (i.e., tests of the “pay-for-luck” hypothesis of Bertrand and Mullainathan (2001), Garvey and Milbourn (2006), and Bizjak, Lemmon, and Naveen (2008)). The test is performed in two stages. In the first stage, firm returns are regressed against equal-weighted peer group returns (using FSIs), a set of controls, and year fixed effects. In the second stage, the predicted and residual values from the first-stage regression are inserted into the weak-form RPE regression. According to the RPE hypothesis, CEO pay should be adjusted to eliminate the effects of industry-specific shocks, which are captured by the predicted values from the first-stage regression. Hence, a positive loading on the predicted value means that CEOs are being paid for “luck.” In contrast, the residual values represent the CEO’s “skill.” Simple moral hazard-based models of compensation predict that the loading on the residual values (“skill”) should be positive. Following Aggarwal and Samwick (1999b), Garvey and Milbourn (2006), and Bizjak, Lemmon, and Naveen (2008), I also include interaction terms between luck, skill, and the empirical cdf of the within-firm variances of luck and skill in my empirical specification. Aggarwal and Samwick (1999b) note that a failure to include these terms may systematically underestimate the pay-performance sensitivity. Hence, the coefficient on “skill” for a firm with the median level of skill volatility during my sample period is given by the loading on “skill” plus one-half of the loading on the “skill × cdf of skill variance” interaction term.

The previous literature has found strong evidence that CEOs are paid for luck. When peer groups are defined using FSIs, however, I find no such evidence of pay-for-luck. In all six specifications in Table 5, the coefficient on the luck variable is statistically and economically insignificant. In contrast, the loadings on skill are consistently positive and statistically significant. These findings indicate that CEOs are compensated based on the component of firm performance that is orthogonal to industry performance, consistent with strong-form RPE.

Table 5 also shows that while CEOs are only compensated for skill, the magnitude of pay-for-skill compensation is strongly decreasing in the firm’s skill volatility (the same finding also holds for luck). For example, column (4) of the table shows that the CEO at a firm with the lowest skill volatility has a pay-for-skill coefficient of 0.225 + 0 × (−0.448) = 0.225, while a CEO at a firm with median skill volatility over the sample period has a pay-for-skill coefficient of 0.225 + 0.5 × (−0.448) = 0.001 (not statistically different from zero). In other words, even though CEOs are compensated for skill, only CEOs with relatively persistent skill earn this compensation. While this finding is consistent with multiple interpretations, a natural interpretation is that boards are learning about CEO skill over time. All else equal, the signal-to-noise ratio of skill will be highest for CEOs who demonstrate a persistent level of skill. Furthermore, Table 5 shows that while CEOs at firms with the lowest luck volatilities are not paid for luck, compensation is negatively related to luck for the median CEO. For example, the CEO at a firm with median luck volatility has a pay-for-luck coefficient of −0.008 + 0.5 × (−0.854) = −0.435 (significantly different from zero at the 1% level). Thus, in stark contrast to the previous literature, I find that a substantial fraction of CEOs earn less compensation when “lucky” shocks occur within their industry.

Following Garvey and Milbourn (2006), I also use dummy variables to estimate separate skill and luck coefficients based on whether the skill or luck variables are greater than or less than zero. The last column of the table shows that the absence of pay-for-luck is symmetric; pay-for-luck does not exist regardless of whether luck is up or down. In contrast, consistent with Garvey and Milbourn (2006), I find that CEOs are paid more for skill when skill is down than when skill is up. In other words, CEOs are punished for “bad” skill but are not necessarily rewarded for having “good” skill.
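The two-stage decomposition can be sketched as follows. This is a simplified illustration with hypothetical column names: it omits the additional controls and the cdf-of-variance interaction terms, and it clusters only by firm rather than by firm and year.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of the two-stage "luck"/"skill" decomposition (hypothetical columns).
panel = pd.read_csv("ceo_panel.csv")  # gvkey, year, firm_ret, fsi_ret, dlog_pay

# Stage 1: firm returns on equal-weighted FSI peer returns plus year effects
stage1 = smf.ols("firm_ret ~ fsi_ret + C(year)", data=panel).fit()
panel["luck"] = stage1.fittedvalues   # component explained by the industry
panel["skill"] = stage1.resid         # component orthogonal to the industry

# Stage 2: insert luck and skill into the weak-form RPE regression
stage2 = smf.ols("dlog_pay ~ luck + skill + C(year)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["gvkey"]})
print(stage2.params[["luck", "skill"]])  # RPE predicts pay loads on skill, not luck
```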

6.3 CEO Turnover

Most of the prior literature on CEO turnover finds that boards filter out industry or market performance when deciding whether or not to dismiss a CEO (Warner, Watts, and Wruck (1988), Morck, Shleifer, and Vishny (1989), Barro and Barro (1990), and Gibbons and Murphy (1990)). This finding is puzzling given the lack of performance benchmarking found in previous studies, since it suggests that boards do not use performance benchmarking to set compensation but do use performance benchmarking when deciding whether or not to dismiss a CEO.21 However, a recent paper by Jenter and Kanaan (forthcoming) effectively resolves the puzzle: they find that CEOs are fired after poor industry performance, which suggests that boards are not using performance benchmarking when evaluating personnel decisions. Kaplan and Minton (2006) find similar empirical results using a slightly different test. Theoretically, the findings of Jenter and Kanaan (forthcoming) and Kaplan and Minton (2006) are supported by Eisfeldt and Kuhnen (forthcoming), who show that terminating CEOs for poor industry performance can be optimal if CEOs are matched to firms using a competitive assignment framework. Eisfeldt and Kuhnen (forthcoming)’s empirical tests also support the notion that CEOs are terminated for poor industry performance as well as poor industry-adjusted performance.

21 However, this finding is consistent with Brown, Goetzmann, and Park (2002)’s study of the hedge fund industry.

In this section, I examine whether CEO dismissals can be explained by poor relative performance when peer groups are defined using FSIs. Following Jenter and Kanaan (forthcoming) and Eisfeldt and Kuhnen (forthcoming), I begin by identifying all CEO exits during my sample period. I then classify each CEO exit as either voluntary or forced. To distinguish between forced and voluntary dismissals, I use the following reduced-form approach: all CEOs of any age who resigned according to ExecuComp (rather than retiring) are considered to have been fired. Furthermore, CEOs under the age of 60 who leave for unknown reasons are considered to have been fired. In contrast, the existing literature (see, e.g., Parrino (1997)) uses a combination of rules and a detailed search of press reports to identify forced CEO departures.

I motivate my reduced-form measure in two ways. First, I randomly select 25 CEOs each from my “forced” and “voluntary” dismissal categories and examine media reports and press releases related to their departures. Consistent with fired CEOs “resigning,” I find that 13 of the 25 random CEOs whose ExecuComp status is “resigned” were dismissed involuntarily, while only two of the 25 random CEOs who did not “resign” according to ExecuComp were involuntarily terminated. Second, I argue that if anything, my dismissal classifications work against finding evidence of performance benchmarking in CEO turnover. The evidence described above is consistent with this argument: only two of the 25 CEOs who left under “voluntary” circumstances were misclassified according to standard algorithms, while 12 of the 25 CEOs whose departures were classified as “forced” were actually not forced out of their jobs. In fact, numerous CEOs who were “forced” out of their jobs actually resigned to accept positions at larger, more prestigious organizations. Importantly, these CEOs’ performance was not substandard relative to their peers in the years preceding their departure. Hence, leaving them in my sample works against finding evidence of performance benchmarking in CEO turnover.

Following Jenter and Kanaan (forthcoming), I model forced dismissals using a Cox (1972) proportional hazards model. My sample includes a total of 755 CEO exits, of which 225 are classified as forced. Like Jenter and Kanaan (forthcoming), I examine both weak-form and strong-form tests of the relationship between performance benchmarking and CEO turnover.

Table 6 presents the results of my tests. Consistent with Jenter and Kanaan (forthcoming), the first two columns provide evidence in favor of “weak-form” RPE: the loading on firm returns is negative and significant, while the loading on FSI returns is positive and significant. (Because this is a hazard model, a positive loading on FSI returns implies a higher dismissal hazard when peers perform well, holding the firm’s own performance fixed; this plays the same role as a negative loading on FSI returns in my compensation regressions and is therefore consistent with weak-form RPE.) However, unlike Jenter and Kanaan (forthcoming), the last three columns also provide evidence in favor of “strong-form” RPE: in all three specifications, the coefficient on luck is statistically zero, while the coefficient on skill is negative and statistically significant. This indicates that CEOs who were forced to quit their jobs were less “skilled” than CEOs who left voluntarily. As such, the CEO turnover results in Table 6 complement the executive compensation results in previous tables by providing strong evidence that boards evaluate relative firm performance during all phases of a CEO’s employment.
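This type of hazard estimation can be sketched with the lifelines package as follows; the column names are hypothetical and the actual specification includes additional controls, so this is an illustration rather than the paper’s code.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Sketch of a proportional hazards model of forced CEO turnover.
# Hypothetical input: one row per CEO spell, with tenure in years, an indicator
# for a forced exit, and own-firm and FSI peer returns as covariates.
spells = pd.read_csv("ceo_spells.csv")[
    ["tenure_years", "forced_exit", "firm_ret", "fsi_ret"]
]

cph = CoxPHFitter()
cph.fit(spells, duration_col="tenure_years", event_col="forced_exit")
cph.print_summary()
# Weak-form RPE in turnover: negative loading on firm_ret, positive loading on fsi_ret
```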

7 Robustness Tests

7.1 Standard RPE Tests

Tables 3-5 use a different model than existing tests of relative performance evaluation. In particular, the performance benchmarking literature uses a variant of equations (5) and (6) that does not include the compensation benchmarking term Δs̄_{-ijt} that appears in both equations. To examine the robustness of the results in Tables 3-5 to my empirical specification, I re-run the weak-form RPE tests in Table 3 using the “standard” performance benchmarking / RPE model in the literature.
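For concreteness, a stripped-down version of this standard weak-form specification might be estimated as follows, here using the linearmodels package to obtain firm-and-year clustered errors. The column names are hypothetical and the full control set is omitted.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Sketch of the "standard" weak-form RPE regression: the change in log CEO pay
# on own-firm and equal-weighted FSI peer returns, with year effects and
# standard errors clustered by firm and by year. Column names are hypothetical.
panel = pd.read_csv("ceo_panel.csv").set_index(["gvkey", "year"])

model = PanelOLS.from_formula(
    "dlog_pay ~ 1 + firm_ret + fsi_ret + d_fsi_roa + TimeEffects", data=panel
)
result = model.fit(cov_type="clustered", cluster_entity=True, cluster_time=True)
print(result)  # weak-form RPE predicts a negative coefficient on fsi_ret
```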

Table 7 presents the results of these weak-form RPE tests when peer group performance measures are computed using FSIs. The table shows that the coefficient on FSI returns is negative and statistically significant in every column, which provides strong support for weak-form RPE. Magnitudes are virtually identical to those in Table 3. Hence, the weak-form RPE results in Table 3 appear to be robust to concerns regarding the inclusion of compensation benchmarking in equation (5).

Table 8 contains the results from strong-form RPE tests using the standard specification in the literature. This specification is identical to equation (6) without the compensation peer group variable s̄_{-ijt}. Like Table 5, Table 8 provides support for the strong-form version of RPE: the “luck” coefficient is always statistically and economically insignificant, while the “skill” coefficient is always economically large and statistically significant. The magnitudes in Table 8 are again similar to the magnitudes in Table 5. Hence, I find little evidence of pay-for-luck, in marked contrast to previous studies. However, one should be careful not to interpret these findings as suggesting that no CEOs are paid for luck: given the aggregate nature of my tests, I only claim that the average CEO does not appear to be paid for luck. As such, CEOs in certain firms or certain industries (such as the oil industry, per Bertrand and Mullainathan (2001)) may indeed be paid for luck.

7.2 FSIs versus Traditional Industry Classifications

I next examine the sensitivity of my results to the use of FSIs. Specifically, it is possible that my results are driven by the unique characteristics of my sample period, in which case existing classification standards may also produce strong evidence of RPE. Table 9 replicates the test in column (4) of Table 7 using peer group performance measures that are calculated using traditional industry classification definitions. The industry classifications I consider in Table 9 include 2-, 3-, and 4-digit SIC codes, the Fama and French (1997) 48-industry classifications, and 4-, 6-, and 8-digit GICS classifications. Like this paper, Albuquerque (2009) also finds strong evidence in support of RPE. She constructs industry-and-size matched peer groups for each firm based on 2-digit SIC classifications and beginning-of-year size quartiles. In the last column of Table 9, I replicate her procedure by matching each firm with a peer group that consists of all other firms in the same 2-digit SIC classification and size quartile. As in my previous tests, firm i is excluded from its peer group averages in all tests in Table 9.

Consistent with the previous literature, I find little evidence of performance benchmarking when peer group performance is calculated using traditional industry classifications. While all of the coefficients on peer group returns are negative in Table 9, none of them are statistically

significant and the coefficients are smaller in magnitude (both economically and statistically) than the coefficients in Table 7. In addition, while Albuquerque (2009)’s industry-size matching process yields a larger point estimate than most of the other industry classifications in the table, I find that the coefficient on peer group performance remains statistically insignificant using her peer group definitions.22 Finally, while I do not restrict the sample sizes in Table 9 to the set of firms with FSIs, the results in Table 9 remain qualitatively unchanged when the sample is restricted to the set of firms with valid FSIs. Similar (untabulated) tests show that when peer group performance is measured using traditional industry classifications, CEOs are indeed paid for luck. I also replicate my CEO retention tests in Table 6 using traditional industry definitions. In untabulated results, I find evidence consistent with Jenter and Kanaan (2008) that CEOs are indeed terminated for “luck” when industry definitions such as SIC codes are used instead of FSIs.
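The industry-and-size matched peer groups used in the last column of Table 9 can be approximated along the following lines. This is a sketch under assumed column names (not the paper’s code); singleton industry-year cells simply produce missing peer returns.

```python
import pandas as pd

# Sketch: industry-and-size matched peer groups (2-digit SIC x beginning-of-year
# size quartile) in the spirit of Albuquerque (2009). Column names are hypothetical.
firms = pd.read_csv("firm_years.csv")  # gvkey, year, sic, size_boy, ret_12m

firms["sic2"] = firms["sic"].astype(str).str[:2]

def quartile(s):
    if s.size < 4:                       # too few firms to split into quartiles
        return pd.Series(0, index=s.index)
    return pd.qcut(s.rank(method="first"), 4, labels=False)

firms["size_q"] = firms.groupby(["year", "sic2"])["size_boy"].transform(quartile)

# Leave-one-out, equal-weighted peer return within each (year, SIC2, quartile) cell
grp = firms.groupby(["year", "sic2", "size_q"])["ret_12m"]
n = grp.transform("size")
firms["peer_ret"] = (grp.transform("sum") - firms["ret_12m"]) / (n - 1)  # NaN if alone
```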

7.3 External Validity

While most ExecuComp firms report a list of their primary product market competitors, the tests in Tables 3-8 are restricted to firms that have valid FSI peer groups. However, a sizable minority of firms do not provide a list of their competitors, and this may be problematic if, for example, firms that do not list competitors have lower stock returns or weaker corporate governance than their (unobserved) peers.23 To test for potential selection biases, I begin by examining the characteristics of “reporters” versus “non-reporters” during the twelve months prior to the firm’s fiscal year-end date. Given my definition of FSIs, reporters include both firms that directly listed competitors and firms that were listed as competitors by other firms. Non-reporters consist of all other firms in the sample. To compare reporters and non-reporters, I estimate a probit model where the dependent variable equals one if the firm is a reporter and zero if the firm is a non-reporter. The main explanatory variables are the control variables described previously. I also add year and industry fixed effects to the regression (the industry fixed effects are based on six-digit GICS classifications, per Lewellen (2012a)).

22 Gong, Li, and Shin (2011) find a similar result.
23 Even if stock returns or governance do not affect the decision to list competitors, we might still expect certain types of firms to be more likely to report competitors than others. For example, firms in competitive industries may compete with so many other firms that it would be burdensome to provide a full list of these competitors. Likewise, large conglomerates may compete against so many other firms in their various product lines that a full list of competitors would be difficult to compile. In addition, firms such as public utilities that essentially operate as regulated monopolies may not have any real competitors, and hence would not need to provide a competitor list.


Table 10 contains the results. The table shows that firms reporting competitors tend to be larger and tend to have lower prior-year stock returns than firms that do not report competitors. The first result is likely to be a byproduct of the FSI matching process, while the second result is not consistent with an agency story in which managers avoid reporting competitors in order to avoid drawing attention to the firm’s poor past performance. The remainder of the evidence reported in the table does not tell a consistent story.24 Importantly, the third column of the table shows that governance does not appear to drive the choice to list competitors; the coefficient on the G-index variable is negative but economically and statistically small. In short, while I cannot rule out the possibility that some firms may strategically choose whether or not to report a list of their primary product market competitors, I find no consistent evidence supporting this hypothesis.
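A stripped-down version of the reporter probit might look as follows (hypothetical column names; the actual model includes the full set of controls described previously):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sketch: probit of the decision to report competitors (hypothetical column names).
firms = pd.read_csv("firm_years.csv")  # reporter (0/1), size, ret_12m, q, gindex, year, gics6

probit = smf.probit(
    "reporter ~ size + ret_12m + q + gindex + C(year) + C(gics6)", data=firms
).fit()
print(probit.summary())
```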

7.4 Selection Biases

Conditional on providing a list of competitors, firms may also choose to list or omit certain competitors based on strategic factors. Indeed, there is widespread evidence of peer group manipulation both in compensation benchmarking (Bizjak, Lemmon, and Nguyen (2011), Faulkender and Yang (2010, 2011)) and in firms’ proxy statement disclosures about relative stock price performance (Lewellen, Park, and Ro (1996)). If managers are strategically selecting poorly performing competitors, firms that report competitors should earn higher returns or have better operating performance than their competitors (Lewellen, Park, and Ro (1996)). To examine this possibility, Table 11 lists the average differences in stock returns and operating performance between a firm and its FSI peer group. The sample for these tests is restricted to firms with valid FSIs. Paired t-statistics are used to determine whether firms’ characteristics differ from those of their competitors.

The first column of Table 11 lists the average differences across the entire panel. Importantly, the first column shows that firms on average earn a 12-month return that is statistically and economically indistinguishable from the returns earned by their competitors.25 Furthermore, firms appear to list competitors that (i) are larger in size, (ii) have better operating performance and larger values of Q, and (iii) have stronger corporate governance than the firm itself. Hence, under the assumption that strategic reporters would list weaker competitors, the data appear to show

24 For example, column 1 shows that reporters have a higher value of Q than non-reporters, while column 2 (industry fixed effects) shows the opposite and column 3 (addition of the G-index variable) shows no relationship between Q and the decision to report competitors.
25 I do not adjust for risk, under the presumptions that (i) risk is appropriately reflected in returns, and hence on average, not adjusting should not introduce any bias in the results, and (ii) any executive compensation contracts that are tied to industry peers are likely to measure peer performance using risk-unadjusted returns.


that firms do not strategically select their FSI peers. However, these results could be problematic because the rent extraction hypothesis suggests that firms might list larger, better-performing competitors in order to influence compensation. Indeed, Bizjak, Lemmon, and Nguyen (2011) find that firms list compensation peers that tend to be larger, have strong operating performance, and have high values of Q. To examine this finding in more detail, I restrict the sample to Gompers, Ishii, and Metrick (2003)’s “Democracies” (firms with a G-index ≤ 5) and “Dictatorships” (firms with a G-index ≥ 14). If firms are strategically listing competitors for compensation-related purposes, we would expect Dictatorships to be more likely than Democracies to list competitors with strong operating performance or high values of Q. However, the next three columns of Table 11 show that this is largely not the case. Democracies and Dictatorships differ little with the exception of the size, B/M, and scaled sales variables. While Dictatorships are more likely to be matched with competitors with higher values of Q than Democracies, this is consistent with Gompers, Ishii, and Metrick (2003)’s finding that Dictatorships tend to have low values of Q. Furthermore, this result also holds when a similar test is run using traditional industry classifications (2-digit and 3-digit SIC codes along with 6-digit GICS codes), where there should be no such selection concerns. Interestingly, Dictatorships tend to be matched with smaller competitors than Democracies, which goes against a strategic selection story. In sum, little evidence exists to support the hypothesis that firms are strategically selecting their competitors.

8 Conclusions

Understanding the determinants of CEO compensation is a first-order question within the field of finance. Using a novel set of firm-specific industries, I document the widespread use of performance benchmarking in CEO compensation, consistent with the theoretical results of Holmström (1979, 1982) and Holmström and Milgrom (1987). I construct firm-specific industries (FSIs) by exploiting an SEC rule requiring firms to provide an unbiased list of their primary product market competitors. A one-standard-deviation increase in firms’ industry-adjusted stock returns increases the average CEO’s compensation by approximately 6%, which is an economically meaningful effect. I also decompose firm performance into “luck” and “skill” and find that CEOs are only compensated for skill. These results cannot be explained by firm size, industry size, market returns, or other related variables. Furthermore, the intransitive nature of FSIs allows me to separately identify


the endogenous effect of peer group compensation levels on own-firm CEO compensation. I find evidence that CEO pay adjusts to peer group pay in a manner that is consistent with retention motives on the part of boards. I also find evidence that CEOs are fired for poor relative performance rather than being fired for an “unlucky” industry shock. Together, these results are consistent with optimal contracting theories of executive compensation and highlight the importance of accurate peer group definitions in executive compensation tests.


References Aggarwal, Rajesh K. and Andrew A. Samwick (1999a), “Executive Compensation, Strategic Competition, and Relative Performance Evaluation: Theory and Evidence.” Journal of Finance, 54, 1999–2043. Aggarwal, Rajesh K. and Andrew A. Samwick (1999b), “The Other Side of the Trade-off: The Impact of Risk on Executive Compensation.” Journal of Political Economy, 107, 65–105. Albuquerque, Ana (2009), “Peer Firms in Relative Performance Evaluation.” Journal of Accounting and Economics, 48, 69–89. An, Weihua (2011), “Models and Methods to Identify Peer Effects.” The Sage Handbook of Social Network Analysis, Chapter 34, 514–532. Antle, Rick and Abbie Smith (1986), “An Empirical Analysis of the Relative Performance Evaluation of Corporate Executives.” Journal of Accounting Research, 24, 1–39. Banker, Rajiv D. and Srikant M. Datar (1989), “Sensitivity, Precision, and Linear Aggregation of Signals for Performance Evaluation.” Journal of Accounting Research, 27, 21–39. Barro, Jason R. and Robert J. Barro (1990), “Pay, Performance, and Turnover of Bank CEOs.” Journal of Labor Economics, 8, 448–481. Bebchuk, Lucian A., Alma Cohen, and Charles C. Y. Wang (forthcoming), “Learning and the Disappearing Association Between Governance and Returns.” Journal of Financial Economics. Bebchuk, Lucien and Jesse Fried (2004), Pay without Performance: The Unfulfilled Promise of Executive Compensation. Harvard University Press, Cambridge, MA. Bebchuk, Lucien Arye, Jesse M. Fried, and David I. Walker (2002), “Managerial Power and Rent Extraction in the Design of Executive Compensation.” University of Chicago Law Review, 69, 751–846. Bertrand, Marianne and Sendhil Mullainathan (2001), “Are CEOs Rewarded for Luck? The Ones Without Principals Are.” Quarterly Journal of Economics, 116, 901–932. Beshears, John, James J. Choi, David Laibson, Brigitte C. Madrian, and Katherine L. Milkman (2012), “The Effect of Providing Peer Information on Retirement Savings Decisions.” NBER Working Paper 17345. Bhojraj, Sanjeev, Charles M. C. Lee, and Derek K. Oler (2003), “What’s My Line? A Comparison of Industry Classification Schemes for Capital Market Research.” Journal of Accounting Research, 41, 745–774. Bizjak, John, Michael Lemmon, and Lalitha Naveen (2008), “Does the Use of Peer Groups Contribute to Higher Pay and Less Efficient Compensation?” Journal of Financial Economics, 90, 152–168. Bizjak, John, Michael Lemmon, and Thanh Nguyen (2011), “Are All CEOs Above Average? An Empirical Analysis of Compensation Peer Groups and Pay Design.” Journal of Financial Economics, 100, 538–555.


Blume, Lawrence and Steven Durlauf (2006), Identifying Social Interactions: A Review. Wiley, Hoboken, NJ. Blume, Lawrence E., William A. Brock, Steven N. Durlauf, and Yannis M. Ioannides (2011), Handbook of Social Economics, chapter 18. North-Holland, San Diego, CA. Bolton, Patrick and David A. Scharfstein (1990), “A Theory of Predation Based on Agency Problems in Financial Contracting.” American Economic Review, 80, 93–106. Bramoullé, Yann, Habiba Djebbari, and Bernard Fortin (2009), “Identification of Peer Effects Through Social Networks.” Journal of Econometrics, 150, 41–55. Brander, James A. and Tracy R. Lewis (1986), “Oligopoly and Financial Structure: The Limited Liability Effect.” American Economic Review, 76, 956–970. Brown, Stephen J., William N. Goetzmann, and James Park (2002), “Careers and Survival: Competition and Risk in the Hedge Fund and CTA Industry.” Journal of Finance, 56, 1869–1886. Bursztyn, Leonardo, Florian Ederer, Bruno Ferman, and Noam Yuchtman (2012), “Understanding Peer Effects in Financial Decisions: Evidence from a Field Experiment.” Working Paper. Cameron, A. Colin, Jonah P. Gelbach, and Douglas L. Miller (2011), “Robust Inference With Multi-Way Clustering.” Journal of Business and Economic Statistics, 29, 238–249. Carter, Mary Ellen, Christopher D. Ittner, and Sarah L. C. Zechman (2009), “Explicit Relative Performance Evaluation in Performance-Vested Equity Grants.” Review of Accounting Studies, 14, 269–306. Chan, Louis K. C., Josef Lakonishok, and Bhaskaran Swaminathan (2007), “Industry Classifications and Return Comovement.” Financial Analysts Journal, 63, 56–70. Clarke, Richard N. (1989), “SICs as Delineators of Economic Markets.” Journal of Business, 62, 17–31. Cohen-Cole, Ethan (2006), “Multiple Groups Identification in the Linear-in-Means Model.” Economics Letters, 92, 157–162. Core, John E. and Wayne R. Guay (2003), “When Efficient Contracts Require Risk-Averse Executives to Hold Equity: Implications for Option Valuation, for Relative Performance Evaluation, and for the Corporate Governance Debate.” Working Paper. Core, John E., Wayne R. Guay, and David F. Larcker (2003), “Executive Equity Compensation and Incentives: A Survey.” FRBNY Economic Policy Review, 27–50. Cox, David R. (1972), “Regression Models and Life Tables (with Discussion).” Journal of the Royal Statistical Society, Series B 34, 187–220. Cremers, Martijn and Yaniv Grinstein (2011), “The Market for CEO Talent: Implications for CEO Compensation.” Working Paper. De Giorgi, Giacomo, Michele Pellizzari, and Silvia Redaelli (2010), “Identification of Social Interactions through Partially Overlapping Peer Groups.” American Economic Journal: Applied Economics, 2, 241–275.

DeAngelis, David and Yaniv Grinstein (2015), “Performance Terms in CEO Compensation Contracts.” Review of Finance, 19, 619–651. Dixit, Avinash (1989), “Entry and Exit Decisions under Uncertainty.” Journal of Political Economy, 97, 620–638. Edmans, Alex, Xavier Gabaix, and Augustin Landier (2009), “A Multiplicative Model of Optimal CEO Incentives in Market Equilibrium.” Review of Financial Studies, 22, 4881–4917. Eisfeldt, Andrea L. and Camelia M. Kuhnen (forthcoming), “CEO Turnover in a Competitive Assignment Framework.” Journal of Financial Economics. Fama, Eugene F. and Kenneth R. French (1997), “Industry Costs of Equity.” Journal of Financial Economics, 43, 153–193. Faulkender, Michael and Jun Yang (2010), “Inside the Black Box: The Role and Composition of Compensation Peer Groups.” Journal of Financial Economics, 96, 257–270. Faulkender, Michael and Jun Yang (2011), “Is Disclosure an Effective Cleansing Mechanism? The Dynamics of Compensation Peer Benchmarking.” Working Paper. Gabaix, Xavier and Augustin Landier (2008), “Why Has CEO Pay Increased So Much?” Quarterly Journal of Economics, 123, 49–100. Gabor, Monica, Daniel Houlder, and Monica Carpio, eds. (2001), 2001 Report on the American Workforce, chapter 3. United States Bureau of Labor Statistics, Washington, DC. Garvey, Gerald and Todd Milbourn (2003), “Incentive Compensation When Executives Can Hedge The Market: Evidence of Relative Performance Evaluation in the Cross Section.” Journal of Finance, 58, 1557–1581. Garvey, Gerald T. and Todd T. Milbourn (2006), “Asymmetric Benchmarking in Compensation: Executives are Rewarded for Good Luck but not Penalized for Bad.” Journal of Financial Economics, 82, 197–226. Gibbons, Robert and Kevin J. Murphy (1990), “Relative Performance Evaluation for Chief Executive Officers.” Industrial and Labor Relations Review, 43, 30–51. Gompers, Paul, Joy Ishii, and Andrew Metrick (2003), “Corporate Governance and Equity Prices.” Quarterly Journal of Economics, 118, 107–155. Gong, Guojin, Laura Yue Li, and Jae Yong Shin (2011), “Relative Performance Evaluation and Related Peer Groups in Executive Compensation Contracts.” The Accounting Review, 86, 1007– 1043. Gormley, Todd A. and David A. Matsa (2012), “Common Errors: How to (and Not to) Control for Unobserved Heterogeneity.” Working Paper. Guenther, David A. and Andrew J. Rosman (1994), “Differences between COMPUSTAT and CRSP SIC Codes and Related Effects on Research.” Journal of Accounting and Economics, 18, 115–128. Helmers, Christian and Manasa Patnam (2011), “Does the Rotten Child Spoil His Companion? Spatial Peer Effects Among Children in Rural India.” Working Paper. 32

Hoberg, Gerard and Gordon Phillips (2010a), “Product Market Synergies and Competition in Mergers and Acquisitions: A Text-Based Analysis.” Review of Financial Studies, 23, 3773–3811. Hoberg, Gerard and Gordon Phillips (2010b), “Text-Based Network Industries and Endogenous Product Differentiation.” Working Paper. Holmström, Bengt (1979), “Moral Hazard and Observability.” Bell Journal of Economics, 10, 74–91. Holmström, Bengt (1982), “Moral Hazard in Teams.” Bell Journal of Economics, 13, 324–340. Holmström, Bengt and Paul Milgrom (1987), “Aggregation and Linearity in the Provision of Intertemporal Incentives.” Econometrica, 55, 303–328. Hopenhayn, Hugo (1992), “Entry, Exit, and Firm Dynamics in Long Run Equilibrium.” Econometrica, 60, 1127–1150. Janakiraman, Surya N., Richard A. Lambert, and David F. Larcker (1992), “An Empirical Investigation of the Relative Performance Evaluation Hypothesis.” Journal of Accounting Research, 30, 53–69. Jensen, Michael C. and Kevin J. Murphy (1990), “Performance Pay and Top Management Incentives.” Journal of Political Economy, 98, 225–264. Jenter, Dirk and Fadi Kanaan (2008), “CEO Turnover and Relative Performance Evaluation.” Working Paper. Jenter, Dirk and Fadi Kanaan (forthcoming), “CEO Turnover and Relative Performance Evaluation.” Journal of Finance. Jin, Li (2002), “CEO Compensation, Diversification, and Incentives.” Journal of Financial Economics, 66, 29–63. Joh, Sung Wook (1999), “Strategic Managerial Incentive Compensation in Japan: Relative Performance Evaluation and Product Market Collusion.” Review of Economics and Statistics, 81, 303–313. Kahle, Kathleen M. and Ralph A. Walkling (1996), “The Impact of Industry Classifications on Financial Research.” Journal of Financial and Quantitative Analysis, 31, 309–335. Kaplan, Steven N. and Bernadette A. Minton (2006), “How has CEO Turnover Changed? Increasingly Performance Sensitive Boards and Increasingly Uneasy CEOs.” NBER Working Paper 12465. Kuhnen, Camelia and Jeffrey Zwiebel (2008), “Executive Pay, Hidden Compensation and Managerial Entrenchment.” Unpublished working paper. Lambert, Richard A. and David F. Larcker (1987), “An Analysis of the Use of Accounting and Market Measures of Performance in Executive Compensation Contracts.” Journal of Accounting Research, 25, 85–125. Leary, Mark T. and Michael R. Roberts (2011), “Do Peer Firms Affect Corporate Financial Policy?” Working Paper.

Lee, Charles M.C., Paul Ma, and Charles C.Y. Wang (2015), “Search-Based Peer Firms: Aggregating Investor Perceptions Through Internet Co-Searches.” Journal of Financial Economics, 116, 410–431. Lewellen, Stefan (2012a), “Corporate Governance and Equity Prices: Are Results Robust to Industry Adjustments?” Working Paper. Lewellen, Stefan (2012b), “Firm-Specific Industries.” Working Paper. Lewellen, Wilbur G., Taewoo Park, and Byung T. Ro (1996), “Self-Serving Behavior in Managers’ Discretionary Information Disclosure Decisions.” Journal of Accounting and Economics, 21, 227– 251. Maksimovic, Vojislav (1988), “Capital Structure in Repeated Oligopolies.” Rand Journal of Economics, 19, 389–407. Maksimovic, Vojislav and Josef Zechner (1991), “Debt, Agency Costs, and Industry Equilibrium.” Journal of Finance, 46, 1619–1643. Manski, Charles F. (1993), “Identification of Endogenous Social Effects: The Reflection Problem.” Review of Economic Studies, 60, 531–542. Miao, Jianjun (2005), “Optimal Capital Structure and Industry Dynamics.” Journal of Finance, 60, 2621–2659. Moffitt, Robert A. (2001), “Policy Interventions, Low-level Equilibria and Social Interactions.” In Social Dynamics (Steven Durlauf and Peyton Young, eds.), MIT Press, Cambridge, MA. Morck, Randall, Andrei Shleifer, and Robert W. Vishny (1989), “Alternative Mechanisms for Corporate Control.” American Economic Review, 79, 842–852. Morse, Adair, Vikram Nanda, and Amit Seru (2011), “Are Incentive Contracts Rigged by Powerful CEOs?” Journal of Finance, 66, 1779–1821. Murphy, Kevin J. (1999), Handbook of Labor Economics, chapter 3. North-Holland, San Diego, CA. Oyer, Paul (2004), “Why Do Firms Use Incentives That Have No Incentive Effects?” Journal of Finance, 59, 1619–1650. Parrino, Robert (1997), “CEO Turnover and Outside Succession: A Cross-Sectional Analysis.” Journal of Financial Economics, 46, 165–197. Petersen, Mitchell A. (2009), “Estimating Standard Errors in Finance Panel Data Sets: Comparing Approaches.” Review of Financial Studies, 22, 435–480. Rajgopal, Shivaram, Terry Shevlin, and Valentina Zamora (2006), “CEOs’ Outside Employment Opportunities and the Lack of Relative Performance Evaluation in Compensation Contracts.” Journal of Finance, 61, 1813–1844. Rauh, Joshua and Amir Sufi (2012), “Explaining Corporate Capital Structure: Product Markets, Leases, and Asset Similarity.” Review of Finance, 16, 115–155.


Shue, Kelly (2012), “Executive Networks and Firm Policies: Evidence from the Random Assignment of MBA Peers.” Working Paper. Shumway, Tyler (1997), “The Delisting Bias in CRSP Data.” Journal of Finance, 52, 327–340. Spiegel, Matthew and Heather Tookes (forthcoming), “Dynamic Competition, Valuation, and Merger Activity.” Journal of Finance. Terviö, Marko (2008), “The Difference That CEOs Make: An Assignment Model Approach.” American Economic Review, 98, 642–668. Warner, Jerold B., Ross L. Watts, and Karen H. Wruck (1988), “Stock Prices and Top Management Changes.” Journal of Financial Economics, 20, 238–249. Weiner, Christian (2005), “The Impact of Industry Classification Schemes on Financial Research.” SFB 649 Discussion Paper 2005-062. Williams, Joseph T. (1995), “Financial and Industrial Structure with Agency.” Review of Financial Studies, 8, 431–474.


Appendix A: Further Details on Firm-Specific Industries To construct FSIs, I began by manually downloading 10-K filings for every firm in the merged CRSP/Compustat database spanning the fiscal years 2002-2008. Each 10-K was searched by hand for the phrase “compet,” and each hit was examined to determine whether the firm referenced an actual competitor in the surrounding text. If a competitor was referenced, the competitor’s name was then manually recorded. In firms with multiple segments, the same competitor was often listed and recorded multiple times. However, segment data was discarded due to a lack of overlap with Compustat and each competitor was only included once in a firm’s industry. This is akin to equally-weighting competitors across a firm. As a result, my assignment procedure works against FSIs having increased explanatory power relative to other industry classifications, which often assign firms to industries based on a firm’s largest or most profitable segment. Each 10-K competitor name was then matched to the CRSP/Compustat database and the Compustat Global database by hand. In many cases, a subsidiary or operating division of a publicly-traded company was listed as a competitor rather than its publicly-traded parent. For example, a firm might list Otis Elevator as a competitor, though Otis is a fully-owned subsidiary of United Technologies. In such cases, the parent company’s Compustat code (i.e. the GVKEY for United Technologies) was assigned to firms listing Otis Elevator as a competitor. Occasionally, a 10-K listed competing products without specifying the name of the competing manufacturer. In these cases, product names were discarded since a competing firm was not directly named. Finally, a small fraction of 10-Ks listed competitors in an indirect fashion – for example, many telecommunications providers listed “RBOCs” (Regional Bell Operating Companies) or “ILECs” (Incumbent Local Exchange Carriers) as competitors. These indirect references were also discarded in the name of conservatism. Hence, if anything, the FSIs in this paper are likely to understate the full list of competitors included in firms’ 10-K filings.
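The collection itself was done by hand, but the first step (locating competitor discussions in a filing) can be illustrated with a simple text search. The file path and helper name below are hypothetical; the matching of names to CRSP/Compustat identifiers described above remains a manual step.

```python
import re
from pathlib import Path

# Sketch: flag 10-K passages that mention competition, as a starting point for the
# hand-collection described above. The file path is hypothetical; matching named
# competitors (or their publicly traded parents) to GVKEYs remains a manual step.
def competitor_passages(filing_path, window=300):
    text = Path(filing_path).read_text(errors="ignore")
    passages = []
    for m in re.finditer(r"compet", text, flags=re.IGNORECASE):
        start = max(m.start() - window, 0)
        passages.append(text[start:m.end() + window])
    return passages

hits = competitor_passages("filings/example_10K.txt")
print(len(hits), "passages to review by hand")
```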

Appendix B: Necessary and Sufficient Conditions for the Identification of Peer Group Effects

Suppose the structural model is given by the linear-in-means model (4), with the restriction that E[ε_{ijt} | p_{ijt}, g_j, f_i, m_t] = 0. Note that g_j now represents firm i’s FSI. As is standard in peer effects models, |β| is assumed to be less than one. It is also assumed that p_{ijt} is strictly exogenous conditional on g_j, f_i, and m_t. Furthermore, the network structure (consisting of all FSIs) is allowed to be stochastic but is assumed to be strictly exogenous (I will return to this assumption shortly).


Importantly, no other restrictions are placed on the variance matrix or the error term. Given these assumptions, full identification of the parameter vector (α, β, γ, δ, ζ, η, θ) relies on the following condition provided by Bramoullé, Djebbari, and Fortin (2009).

Definition 1. An “intransitive triad” is a set of three firms i, j, k such that i is connected to j, j is connected to k, but i is not connected to k.

Bramoullé, Djebbari, and Fortin (2009) prove that the endogenous and exogenous peer effects (s̄_{-ijt}, p̄_{-ijt}) are identified if at least one intransitive triad exists.26 However, Bramoullé, Djebbari, and Fortin (2009) examine identification within a cross-sectional setting, so they use a within transformation to eliminate the correlated peer group effect g_j. Identification in their setting relies on a second key assumption (namely, that the “diameter” of a network is at least three). However, in my panel data setting, the correlated peer group effect can be removed through first differencing, which does not require demeaning variables within each peer group (Blume, Brock, Durlauf, and Ioannides (2011)). Hence, I am able to identify peer group effects based on the existence of intransitive triads alone using Proposition 3 of Bramoullé, Djebbari, and Fortin (2009).27

The intuition behind the identifying assumption is as follows. Suppose that A and B are in the same traditional industry classification, and C is in a separate traditional industry classification. Under the assumption that traditional industry definitions are “correct,” A’s pay might be related to B’s pay (and vice versa), but neither A’s nor B’s pay should be related to firm C other than through a market-wide economic shock. Hence, A’s (reduced-form) compensation equation is given by s_A = α + βs_B + γp_B + δp_A, and B’s compensation equation is given by s_B = α + βs_A + γp_A + δp_B. Simple algebra yields the solutions s_i = α/(1 − β) + [(δ + βγ)/(1 − β²)]p_i + [(γ + βδ)/(1 − β²)]p_j for i, j ∈ {A, B}, i ≠ j. Identification is not possible in this setting because there are four structural parameters of interest (α, β, γ, δ) but only three estimable reduced-form parameters.

Now suppose that industries are defined (“correctly”) using FSIs, and that A is linked to B and B is linked to C. In this example, firms A and C are not directly linked to one another. Hence, C is excluded from A’s compensation equation (and vice versa), which overcomes the simultaneity problem. The compensation equations are now given by s_A = α + βs_B + γp_B + δp_A (firm A), s_B = α + β(s_A + s_C)/2 + γ(p_A + p_C)/2 + δp_B (firm B), and s_C = α + βs_B + γp_B + δp_C (firm C). These equations can be solved to yield four estimable parameters, which allows for identification.

Identification also relies on the assumption that the network structure is stochastic but exogenous.

26 Another condition for identification is that δβ + γ ≠ 0, i.e., that a peer effect actually exists.
27 Technically, the existence of intransitive triads guarantees that the square of the adjacency matrix is non-zero, which is the key identification condition. The adjacency matrix is a weighting matrix whose ij-th element equals 1/n_i if j is i’s peer, where n_i is the number of i’s direct peers, and 0 otherwise. As such, row i of the square of the adjacency matrix will contain all of i’s peers’ peers.


I argue that this assumption is met as long as firms do not strategically misreport competitors. In order for the network structure to be endogenously determined, firms or CEOs must choose FSI peers in order to directly impact CEO compensation (as opposed to choosing FSI peers based on exogenous factors, such as the competitor sharing the same product market as the firm). It is for this reason that I do not consider self-reported compensation peer groups, since the previous literature has found evidence that compensation peer groups may be chosen in a manner that inflates CEO pay. However, FSIs do not appear to suffer from this bias: Table 12 provides little evidence that firms are strategically reporting competitors. Furthermore, any unobservable time-invariant industry, firm, or executive-level factors will be swept away by the first-differencing procedure. Hence, the assumption of exogenous network structure is likely to be met in the current setting.

Finally, I use first-differencing in equations (5) and (6) because the firm-specific nature of FSIs makes it effectively impossible to run a levels regression with the appropriate fixed effects. In order to control for unobserved peer group heterogeneity in a levels regression, I would have to transform the data by demeaning all variables within a given FSI. However, a within transformation is not feasible for two related reasons. First, since FSIs routinely change over time, the “fixed effect” associated with an FSI would often be restricted to a single firm-year observation. As such, identification would have to come from firms whose FSIs did not change over a period of years. While this is a potentially reasonable approach (ignoring selection biases), it will not work technically because I define firm i’s FSI based on own-reported peers and peers that listed firm i as a competitor. Bramoullé, Djebbari, and Fortin (2009) show that identification in a levels setting comes from differences between own-reported peers and either “local” peers (e.g., peers who report you as a peer) or “network” peers. Given my two-sided matching process, I cannot use “local” peers for identification, since every firm that lists i as a competitor is also included in i’s peer group (and hence, there are no differences available to exploit). Identifying off of differences between FSIs and firms’ “network” peers is technically possible, but this approach would require me to break the data into “networks” (presumably using traditional industry definitions). However, this is troublesome for a number of reasons. First, if I use “large” networks (like 2-digit SIC classifications), any estimates from a “network” within transformation will be extremely noisy. I could define networks more granularly (say, using 3-digit SIC classifications); however, doing so may introduce technical complications because the identifying assumptions for a within transformation require three degrees of separation within each network (Bramoullé, Djebbari, and Fortin (2009)), and this condition will not be met within smaller industry classifications. Defining networks based on small industries would also effectively eliminate any benefits of FSIs, since peer groups would be restricted to firms in the same small (traditional) industry classification. Hence, I use first-differencing instead, which does not suffer from the same problems as fixed effects regressions in my setting.
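The identifying condition can be checked mechanically for any candidate peer network. The function below is my own illustration (not code from the paper): it flags whether at least one intransitive triad exists by comparing the squared adjacency matrix against the direct links.

```python
import numpy as np

# Sketch: check whether a directed peer network contains an intransitive triad
# (i -> j, j -> k, but no i -> k link), using a 0/1 adjacency matrix where
# links[i, j] = 1 if firm j is in firm i's peer group.
def has_intransitive_triad(links):
    links = np.array(links, dtype=bool)
    np.fill_diagonal(links, False)
    two_step = links.astype(int) @ links.astype(int)   # paths of length two
    off_diag = ~np.eye(len(links), dtype=bool)
    # Intransitive triad: a two-step connection with no direct link between the endpoints
    return bool(np.any((two_step > 0) & ~links & off_diag))

# Toy example: A lists B, B lists A and C, C lists B; A and C are not linked
toy = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
print(has_intransitive_triad(toy))  # True
```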

Appendix C: Applications using Firm-Specific Industries

Asset Pricing Application: Stock Returns

I first examine whether FSIs explain more of the variation in stock returns than traditional industry classifications. However, the small size of FSIs is problematic for asset pricing tests, since small industries tend to produce tests with significant amounts of noise (Lewellen (2012a)). To expand the size of FSIs, I reformulate the FSI for firm i by also including peer firms’ peers. Thus, the FSI for firm i contains i’s peers (including firms that pointed to i) and i’s peers’ peers. I refer to this definition of FSIs as “FSI-II industries.” I exclude all financial firms and utilities from the sample as well as firms with a market capitalization below the 10% NYSE size breakpoint each month. I also exclude all firms with a CRSP share code that is not 10 or 11. Finally, I exclude all FSI-II industries that contain fewer than five firms, as these industry returns are likely to contain a significant amount of idiosyncratic noise.

I begin by comparing the performance of FSIs against traditional industry classifications. To parsimoniously incorporate a variety of commonly-used standards of differing granularities, I choose to examine the following classification standards: three-digit SIC classifications, six-digit GICS classifications, and FF48 classifications. I also include the TNICs of Hoberg and Phillips (2010a,b). By definition, the FSI assigned to firm i does not contain firm i (all such references are deleted), but this is not true within standard industry classifications. Thus, to create a level playing field, I exclude firm i from its industry return in all of my tests.

To construct my stock return tests, I begin by computing firm and industry returns for each of the various industry classification standards. To avoid look-ahead bias, industry definitions from fiscal year t are matched with stock returns from July of year t + 1 to June of year t + 2. Hence, a firm that reports its competitor list in its 10-K filed in December of year t will be assigned to its corresponding FSI from July of year t + 1 to June of year t + 2, and industry returns are computed for these dates as well. Since the sample of 10-Ks ranges from 2002-2008, the sample of stock returns ranges from July 2003 to June 2010. Following Rauh and Sufi (2012), I run panel regressions of the form:

r_{i,t} − r_{f,t} = α + β₁(r_{m,t} − r_{f,t}) + β₂(r_{ind,t} − r_{f,t}) + ε_{i,t},     (7)

where r_{i,t} − r_{f,t} represents the excess return on firm i in month t, r_{m,t} − r_{f,t} represents the excess return on the market portfolio, and r_{ind,t} − r_{f,t} represents the excess return on firm i’s industry (excluding firm i). Standard errors are clustered by time. Following Rauh and Sufi (2012), industry returns are computed on an equal-weighted basis for most of my tests.

Table A.1 presents the results. The table shows that on an equal-weighted basis, FSIs explain a significantly larger fraction of stock returns than their traditional counterparts as measured by adjusted R². FSIs also explain a greater fraction of stock returns than Hoberg and Phillips (2010a,b)’s TNIC classifications. The improvements in explanatory power are material: on an equal-weighted basis, adjusted R²s improve by about 13% relative to three-digit SIC, FF48, and TNIC classifications, and by about 4% relative to six-digit GICS classifications. The panel also includes a “horserace” test between the various industry classifications, with column 6 showing that FSIs “win” the horserace. Hence, FSIs offer a material improvement over other industry classification standards at explaining the variation in stock returns.

Columns 7-9 of the panel examine how FSIs compare with the static competitor definitions from CapitalIQ used by Rauh and Sufi (2012). Rauh and Sufi (2012) also examine the relative performance of their static CapitalIQ competitors versus three-digit SIC classifications in panel regressions using equal-weighted industry returns with standard errors clustered by time. While I do not possess their CapitalIQ data, and hence cannot replicate their exact test on my sample, the R² results show that FSIs appear to be at least as good as the static classifications used by Rauh and Sufi (2012) at explaining variation in stock returns.

I also attempt to find “optimal” weights for each firm within each FSI to maximize the explanatory power of FSIs. I do this using an iterative algorithm that seeks to minimize mean squared error within years 1, . . . , t − 1 and applies these weights to each FSI in year t. The last column of Table A.1 examines the performance of these “optimal” FSIs relative to other classification standards. The adjusted R² value from regression (7) improves to 0.28, which is a 17% improvement over three-digit SIC, FF48, and TNIC classifications, and an 8% improvement over six-digit GICS classifications. Hence, while “optimal” FSIs do not offer a large improvement over equal-weighted FSIs, the difference between optimal FSIs and other industry classifications is quite large.
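A bare-bones version of regression (7) might be estimated as follows; the column names are hypothetical, and the sketch omits the construction of the leave-one-out industry return.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of regression (7): excess firm returns on excess market and industry
# returns, with standard errors clustered by month. Column names are hypothetical,
# and the leave-one-out industry return is assumed to be precomputed.
rets = pd.read_csv("monthly_returns.csv")  # exret, mkt_exret, ind_exret, month

fit = smf.ols("exret ~ mkt_exret + ind_exret", data=rets).fit(
    cov_type="cluster", cov_kwds={"groups": rets["month"]}
)
print(fit.rsquared_adj)  # compare adjusted R-squared across industry definitions
```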

Corporate Finance Application: Leverage

Table A.2 examines whether FSIs do a better job of explaining the variation in corporate capital structure than other types of industry classifications. I begin by restricting the sample to all firms with assets over $10 million that contain at least five industry competitors. Financial firms and


utilities are also excluded from the sample. Following Rauh and Sufi (2012) and Hoberg and Phillips (2010a,b), I then regress the book leverage ratio of each firm against the equal-weighted leverage ratio of its competitors. Thus, all of the industry leverage measures (including FSIs) are equally weighted. Book leverage is defined as total debt divided by total assets (Compustat variables (dltt + dlc)/at). Standard errors are clustered by firm and time, and time fixed effects are included in the regression.

The table shows that FSIs offer a significant improvement over traditional classification standards at explaining the variation in leverage. The regression with FSIs has an adjusted R² of 0.16, which is 45% higher than the adjusted R²s from regressions involving three-digit SIC and FF48 classifications (0.11) and 33% higher than the adjusted R² of the regression involving six-digit GICS classifications (0.12). Thus, like stock returns, FSIs do a much better job of explaining the variation in leverage than traditional industry classifications.
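The leverage comparison can be sketched in the same way (hypothetical column names; the equal-weighted peer leverage measure, here peer_book_lev, is assumed to have been computed beforehand with the firm itself excluded):

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Sketch: book leverage on the equal-weighted book leverage of FSI peers, with
# time effects and firm-and-year clustered errors. Column names are hypothetical;
# peer_book_lev is assumed to exclude the firm itself.
df = pd.read_csv("leverage_panel.csv")  # gvkey, year, dltt, dlc, at, peer_book_lev
df["book_lev"] = (df["dltt"] + df["dlc"]) / df["at"]
panel = df.set_index(["gvkey", "year"])

res = PanelOLS.from_formula(
    "book_lev ~ 1 + peer_book_lev + TimeEffects", data=panel
).fit(cov_type="clustered", cluster_entity=True, cluster_time=True)
print(res)  # compare explanatory power across industry definitions
```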


Figure 1
Traditional Industry Definitions
[Diagram: firms grouped into two disjoint traditional industries]

This figure shows the structure of traditional industry classification definitions, in which industries are transitive and pairwise disjoint. In other words, all firms belong to a single industry, and firms cannot be linked with other firms outside of their industry. Hence, if firm X is in firm Y’s industry, firm Y is also in firm X’s industry.

Figure 2
Firm-Specific Industry Definitions
[Diagram: firms linked by directed, intransitive firm-specific peer relationships]

This figure shows the structure of firm-specific industry definitions. Unlike traditional (symmetric) industry definitions, firm-specific definitions are intransitive. This allows, for example, firms 1 and 4 to have separate competitors, and allows for connections between firms 4 and 8 (spanning two different traditional industry classifications).


Table 1 Summary Statistics Panel A presents summary statistics for the full sample of Firm-Specific Industries (FSIs) that can be matched to Compustat. FSIs are constructed at the firm level based on the competitors that firms report in their annual 10-K filings. Each industry excludes the "reporting" firm. Panel B reports summary statistics for the FSI competitors that can be matched to both CRSP and Compustat. Panel C presents summary statistics when the sample of "reporters" is restricted to firms that are also in the S&P ExecuComp database. Panel D compares the characteristics of firms that report competitors and the competitors listed by each firm. The column labeled "Equal-weighted competitors" only counts each competitor once, even if the competitor is listed in numerous firms' FSIs. The column labeled "All competitors" averages over all FSIs, so a firm listed as a competitor in 10 FSIs is counted 10 times. Reporters are restricted to firms listed in ExecuComp, but competitors need not be in ExecuComp. Panel A: Number of Firms per Industry with Valid GVKEY

Year 2002 2003 2004 2005 2006 2007 2008

Avg. # of firms per industry 8.6 9.0 9.3 9.6 9.8 9.8 9.6

Median # of firms per industry 7 7 7 7 7 7 7

Std. Dev. 8.2 8.6 8.7 8.6 8.6 8.6 9.0

Min 1 1 1 1 1 1 1

25% 3 4 4 4 4 4 4

75% 11 12 12 13 13 13 13

Max 153 158 163 139 127 129 135

Min 1 1 1 1 1 1 1

25% 3 3 3 4 4 3 3

75% 9 10 10 11 11 11 11

Max 129 125 123 119 109 112 120

Std. Dev. 8.4 8.6 8.6 7.9 7.5 7.5 7.8

25% 3 3 4 4 4 4 3

75% 11 11 12 12 12 12 12

Max 129 125 123 119 109 112 120

Panel B: Number of Firms per Industry matched to CRSP/Compustat

Year 2002 2003 2004 2005 2006 2007 2008

Avg. # of firms per industry 7.2 7.5 7.7 8.0 8.1 8.1 8.1

Median # of firms per industry 6 6 6 6 6 6 6

Std. Dev. 6.5 6.9 6.9 6.9 6.7 6.9 7.2

Panel C: Number of Firms per Industry for ExecuComp Firms

Year 2002 2003 2004 2005 2006 2007 2008

# Firms reporting competitors 740 805 833 844 896 958 927

Avg. # of Median # firms per of firms per industry industry 8.3 6 8.5 6 8.7 7 8.8 7 8.9 7 8.9 7 8.8 7


Table 2 Summary Statistics for ExecuComp Sample This table contains summary statistics for all firms listed in the ExecuComp database from 20022008. All values are reported in constant 2008 dollars using CPI data from the Bureau of Labor Statistics. Total compensation is defined as the total value of salary, bonus, other annual payments, and the Black-Scholes values of all options and long term incentive plans (the tdc1 variable in ExecuComp). Total cash compensation is defined as the sum of salary and bonus. Total non-cash compensation is the difference between total compensation and total cash compensation. Data on total assets comes from Compustat. (in thousands of 2008 dollars)

            Total Compensation            Salary                      Bonus
Year      Average   Median   St. Dev   Average  Median  St. Dev   Average  Median  St. Dev
2002       $5,902   $3,112     8,979      $778    $704      441      $841    $424    1,420
2003       $5,385   $2,907     7,254      $785    $719      437    $1,058    $509    2,053
2004       $5,876   $3,420     8,149      $786    $717      435    $1,155    $630    1,796
2005       $5,844   $3,427     7,744      $785    $726      452    $1,227    $645    2,253
2006       $5,843   $3,364     8,352      $781    $714      455      $501      $0    1,929
2007       $5,384   $3,172     7,112      $746    $691      459      $312      $0    1,933
2008       $5,247   $3,134     7,292      $780    $730      456      $253      $0    2,023
Totals     $5,628   $3,226     7,841      $778    $717      449      $743    $153    1,976

            Total Cash Compensation      Non-Cash Compensation       Compensation / Assets
Year      Average   Median   St. Dev   Average  Median  St. Dev   Average  Median  St. Dev
2002       $1,619   $1,155     1,695    $4,235  $1,580    8,292     0.36%   0.16%    0.70%
2003       $1,843   $1,210     2,301    $3,495  $1,463    6,008     0.35%   0.15%    1.24%
2004       $1,941   $1,347     2,069    $3,884  $1,807    7,057     0.34%   0.17%    0.60%
2005       $2,012   $1,376     2,455    $3,786  $1,781    6,447     0.32%   0.15%    0.59%
2006       $1,282     $873     2,116    $4,404  $2,160    7,573     0.28%   0.16%    0.50%
2007       $1,058     $774     2,060    $4,326  $2,320    6,137     0.30%   0.15%    0.53%
2008       $1,034     $800     2,113    $4,213  $2,244    6,340     0.29%   0.16%    0.46%
Totals     $1,521     $988     2,167    $4,107  $1,927    6,874     0.32%   0.16%    0.70%
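As a rough sketch of how the Table 2 measures can be assembled (this is not the paper's code; the lowercase field names and the cpi_to_2008 deflator series are assumptions about a standard WRDS-style ExecuComp extract):

    import pandas as pd

    def build_comp_measures(execucomp: pd.DataFrame, cpi_to_2008: pd.Series) -> pd.DataFrame:
        """Construct total, cash, and non-cash compensation in constant 2008 dollars."""
        df = execucomp.copy()
        deflator = df["year"].map(cpi_to_2008)                    # CPI(2008) / CPI(year)
        df["total_comp"] = df["tdc1"] * deflator                  # salary, bonus, other, options, LTIP
        df["cash_comp"] = (df["salary"] + df["bonus"]) * deflator # total cash compensation
        df["noncash_comp"] = df["total_comp"] - df["cash_comp"]   # non-cash compensation
        return df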


Table 3 CEO Compensation Tests using Firm-Specific Industries The sample consists of all firms that report competitors in their 10-K filings from 2002-2008 that also have non-missing compensation data in ExecuComp, along with their reported competitors. The table shows the coefficients from OLS regressions along with standard errors in parentheses that are clustered by firm and year. All variables are in changes as measured from year t-1 to year t except returns. All peer group measures are calculated as the equal-weighted average of all competitors listed in a firm's 10-K with valid data in CRSP and Compustat. Firm i is excluded from its peer group average. All peer groups are defined using firm-specific industry (FSI) classifications and are equally weighted. See the text for details on the construction of FSIs. Statistical significance at the 1%, 5%, and 10% levels is denoted by ***, **, and *, respectively.

Dependent Variable = Δln(Total Compensation) t; columns (1)-(5)

Δln(FSI Compensation) t        0.023 (0.025); 0.021 (0.025)
Δln(FSI Compensation) t-1      0.020 (0.015)
12-mo. FSI return (EW)        -0.083** (0.032); -0.131** (0.052); -0.102*** (0.036)
Δ FSI ROA                      0.986* (0.584); 1.061* (0.587); 0.501 (0.442)
Δ FSI ROE                      0.069*** (0.009); 0.119** (0.056); 0.153*** (0.042)
Controls                       Y in all columns
Year FE                        Y in all columns
2-way clustered errors         Y in all columns
N                              5,533; 5,751; 5,737; 5,459; 4,766
Adjusted R-squared             0.03 in all columns
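The specification that the caption describes can be written, in generic notation added here for exposition (the coefficient symbols are not the paper's), as

\[
\Delta \ln(w_{it}) = \alpha + \beta_1\, \Delta \ln\big(w^{FSI}_{it}\big) + \beta_2\, r^{FSI}_{it} + \beta_3\, \Delta ROA^{FSI}_{it} + \beta_4\, \Delta ROE^{FSI}_{it} + \gamma' X_{it} + \mu_t + \varepsilon_{it},
\]

where w_{it} is CEO total compensation, the FSI superscript denotes the equal-weighted average over firm i's self-reported peers (excluding firm i), X_{it} collects the own-firm controls, \mu_t are year fixed effects, and standard errors are two-way clustered by firm and year.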


Table 4 Additional Compensation Tests using Firm-Specific Industries This table presents the results of OLS regressions of the change in annual (log) total compensation on peer group compensation, peer group performance, and a set of own-firm controls. Each regression is identical to the regressions in the previous table. In this table, however, the variable Δln(FSI Compensation) is constructed using a truncated peer group in every column but the last three columns. Specifically, compensation peer groups are restricted to firms that had a larger value of the variable in each column (size, ROA, etc.) than a given firm at the end of the previous year. For example, the Δln(FSI Compensation) variable in the first column is constructed using all firms in firm i's FSI that had a larger market capitalization one year ago than firm i. The "Size x M/B" column restricts the compensation peer group to all firms that were larger and had a higher M/B ratio than the firm in question. The column labeled "BLN (2011) measure" does not truncate peer groups, but replaces the change in peer group compensation with the measure ln(FSI Compensation, t-1) - ln(Compensation, t-1), which is used in Bizjak et al. (2011). The last two columns augment full (untruncated) FSIs with either 5 or 10 randomly chosen firms from the same 2-digit SIC code as the firm in question, where the random firms' CEOs earned higher compensation during the previous year than the firm in question. All peer group measures are equally weighted. The sample period is 2002-2008. Standard errors clustered by firm and year are in parentheses. Statistical significance at the 1%, 5%, and 10% levels is denoted by ***, **, and *, respectively.

Dependent Variable = Δln(Total Compensation)

Truncation variable:           Size (1)          ROA (2)           B/M (3)            Size x B/M (4)
Δln(FSI Compensation)         -0.007 (0.012)    -0.027 (0.029)     0.018 (0.022)      0.004 (0.011)
12-mo. FSI return (EW)        -0.197** (0.094)  -0.154** (0.067)  -0.191*** (0.072)  -0.237** (0.113)
Δ FSI ROA                      1.287* (0.776)    1.114* (0.057)    1.224* (0.065)     1.465 (0.991)
Δ FSI ROE                     -0.095 (0.085)     0.135 (0.102)     0.172** (0.083)    0.127 (0.123)
Additional controls            Y                 Y                 Y                  Y
Year FE                        Y                 Y                 Y                  Y
2-way clustered errors         Y                 Y                 Y                  Y
N                              4,198             4,012             4,122              2,979
Adjusted R-squared             0.03              0.03              0.03               0.04


Columns (5)-(9): (5) Returns; (6) Compensation; (7) BLN (2011) measure; (8) Full FSI + 5 SIC compensation peers; (9) Full FSI + 10 SIC compensation peers

                                 (5)               (6)                (7)                (8)                (9)
Δln(FSI Compensation)           0.031 (0.030)     0.135*** (0.041)                      0.043 (0.031)      0.055 (0.048)
ln(FSI Comp) - ln(Comp), t-1                                         0.176*** (0.040)
12-mo. FSI return (EW)         -0.118 (0.075)    -0.204*** (0.053)  -0.090** (0.043)   -0.193*** (0.054)  -0.189*** (0.057)
Δ FSI ROA                       1.288** (0.599)   1.135*** (0.062)   0.695 (0.587)      1.284** (0.533)    1.106** (0.520)
Δ FSI ROE                       0.083 (0.097)     0.129 (0.112)      0.065*** (0.024)   0.045 (0.68)       0.099* (0.059)
Additional controls             Y                 Y                  Y                  Y                  Y
Year FE                         Y                 Y                  Y                  Y                  Y
2-way clustered errors          Y                 Y                  Y                  Y                  Y
N                               3,951             4,147              4,622              5,459              5,459
Adjusted R-squared              0.03              0.05               0.12               0.03               0.03
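The truncated peer groups described in the caption can be sketched as follows (illustrative only; the data-frame layout, field names, and helper function are hypothetical rather than the paper's):

    import numpy as np
    import pandas as pd

    def truncated_peer_log_comp(panel: pd.DataFrame, fsi: dict, gvkey: str, year: int) -> float:
        """Mean log pay of FSI peers that were larger than the firm in year t-1.

        `panel` is a firm-year DataFrame with columns gvkey, year, total_comp, mktcap;
        `fsi` maps (gvkey, year) to the set of peer gvkeys reported in the firm's 10-K.
        """
        lagged = panel[panel["year"] == year - 1].set_index("gvkey")
        current = panel[panel["year"] == year].set_index("gvkey")
        if gvkey not in lagged.index:
            return np.nan
        own_size = lagged.loc[gvkey, "mktcap"]
        peers = [p for p in fsi.get((gvkey, year), set())
                 if p in lagged.index and p in current.index
                 and lagged.loc[p, "mktcap"] > own_size]
        if not peers:
            return np.nan
        return float(np.log(current.loc[peers, "total_comp"]).mean())

Under the same assumptions, the Bizjak, Lemmon, and Nguyen (2011)-style measure in column (7) would replace the change in peer pay with ln(peer compensation, t-1) - ln(own compensation, t-1), as stated in the caption.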

Table 5 Strong-Form Compensation Tests Using Firm-Specific Industries The sample consists of all firms that report competitors in their 10-K filings from 2002-2008 that also have non-missing compensation data in ExecuComp, along with their reported competitors. The table shows the coefficients from OLS regressions along with standard errors in parentheses that are clustered by firm and year. "Luck" represents the predicted values from a panel regression of firm returns on FSI returns (with and without additional controls). "Skill" represents the residuals from these first-stage regressions. Both variables are also interacted with the empirical cumulative distribution function associated with the variance of each firm's luck and skill measures. The last column also interacts luck and skill with dummy variables that equal one when the luck (skill) measure is less than zero. All other variables are measured in changes from year t-1 to year t. All peer group measures are calculated as the equal-weighted average of all competitors listed in a firm's 10-K with valid data in CRSP and Compustat. The row labeled "Δln(FSI Comp), Higher-paid peers" restricts the peer group in year t to firms with higher total compensation in year t-1 than the firm in question. Firm i is always excluded from its peer group average. Statistical significance at the 1%, 5%, and 10% levels is denoted by ***, **, and *, respectively.

Dependent Variable = Δln(Total Compensation)

                                      (1)         (2)         (3)         (4)         (5)         (6)
Δln(FSI Compensation)                0.027       0.027       0.044       0.026                   0.032
                                    (0.023)     (0.023)     (0.028)     (0.027)                 (0.026)
Δln(FSI Comp), Higher-paid peers                                                     0.035
                                                                                    (0.038)
Luck                                 0.016      -0.189      -0.037      -0.008      -0.057       0.061
                                    (0.104)     (0.179)     (0.109)     (0.151)     (0.095)     (0.154)
Skill                                0.177**     0.177**     0.231***    0.225***    0.218***    0.100
                                    (0.091)     (0.090)     (0.063)     (0.064)     (0.077)     (0.078)
Luck x CDF of luck variance                                 -0.727***   -0.854***   -0.297      -0.837***
                                                            (0.192)     (0.190)     (0.659)     (0.212)
Skill x CDF of skill variance                               -0.475***   -0.448***   -0.532***   -0.319***
                                                            (0.071)     (0.066)     (0.088)     (0.076)
Luck x Luck is down                                                                              0.005
                                                                                                (0.161)
Skill x Skill is down                                                                            0.322***
                                                                                                (0.114)
Controls                              Y           Y           Y           Y           Y           Y
First-stage controls                  N           Y           Y           Y           Y           Y
Year FE                               Y           Y           N           Y           Y           Y
2-way clustered errors                Y           Y           Y           Y           Y           Y
N                                   5,458       5,458       5,341       5,341       3,879       5,341
Adjusted R-squared                   0.03        0.03        0.03        0.03        0.03        0.03
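Spelled out, the two-step construction in the caption amounts to the following (generic notation added here; the symbols are not the paper's). The first stage is

\[
r_{it} = a + b\, r^{FSI}_{it} + c' Z_{it} + u_{it}, \qquad \widehat{Luck}_{it} \equiv \hat{a} + \hat{b}\, r^{FSI}_{it} + \hat{c}' Z_{it}, \qquad \widehat{Skill}_{it} \equiv \hat{u}_{it},
\]

and the second stage is

\[
\Delta \ln(w_{it}) = \alpha + \gamma_1 \widehat{Luck}_{it} + \gamma_2 \widehat{Skill}_{it} + \gamma_3\, \widehat{Luck}_{it} \times F\big(\sigma^2_{Luck,i}\big) + \gamma_4\, \widehat{Skill}_{it} \times F\big(\sigma^2_{Skill,i}\big) + \delta' X_{it} + \mu_t + \varepsilon_{it},
\]

where F(.) is the empirical CDF of the cross-sectional distribution of firm-level luck (skill) variances. Under relative performance evaluation, pay should respond to skill but not to luck (\gamma_1 \approx 0 and \gamma_2 > 0).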

Table 6 CEO Turnover Tests The sample consists of all CEO exits from 2002-2008 where the firm also has a valid FSI. The table shows the coefficients from a Cox (1972) proportional hazard model along with standard errors in parentheses that are clustered by firm and year. CEO exits that are "voluntary" are considered to be censored observations, while CEO exits that are "forced" are considered to be "events." Hence, a positive coefficient indicates that the variable is associated with a higher hazard of forced (as opposed to voluntary) exit. "Luck" represents the predicted values from a panel regression of firm returns on FSI returns (with and without additional controls). "Skill" represents the residuals from these first-stage regressions. All peer group measures are calculated as the equal-weighted average of all competitors listed in a firm's 10-K with valid data in CRSP and Compustat. Firm i is excluded from its peer group average. Statistical significance at the 1%, 5%, and 10% levels is denoted by ***, **, and *, respectively.

                              (1)          (2)          (3)          (4)          (5)
12-month Firm return       -0.703**     -0.802***
                            (0.301)      (0.297)
12-month FSI return         0.639***     0.345*
                            (0.197)      (0.197)
Luck                                                  -0.284       -0.692        0.535
                                                      (0.406)      (0.446)      (0.544)
Skill                                                 -0.802***    -0.803**     -0.406*
                                                      (0.297)      (0.331)      (0.234)
Controls                      N            Y            N            N            Y
First-stage controls          -            -            N            Y            Y
Year FE                       N            Y            Y            Y            Y
2-way clustered errors        Y            Y            Y            Y            Y
N                            755          755          755          755          755
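The retention test can be summarized by a proportional hazard specification of roughly the following form (notation added here, not taken from the paper):

\[
h_i(t \mid x_{it}) = h_0(t)\, \exp\big(\beta_1 \widehat{Luck}_{it} + \beta_2 \widehat{Skill}_{it} + \delta' X_{it}\big),
\]

where h_0(t) is the unspecified baseline hazard of a forced exit and voluntary exits enter only as censored spells. A positive coefficient raises the forced-turnover hazard, so relative performance evaluation in retention decisions predicts \beta_2 < 0 (more skilled CEOs are dismissed less often) and \beta_1 close to zero.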

Table 7 Weak-Form Tests of Performance Benchmarking using Firm-Specific Industries The sample consists of all firms that report competitors in their 10-K filings from 2002-2008 that also have non-missing compensation data in ExecuComp, along with their reported competitors. The table shows the coefficients from OLS regressions along with standard errors in parentheses that are clustered by firm and year. All variables are in changes as measured from year t-1 to year t except returns. All peer group measures are calculated as the equal-weighted average of all competitors listed in a firm's 10-K with valid data in CRSP and Compustat. Firm i is excluded from its peer group average. Statistical significance at the 1%, 5%, and 10% levels is denoted by ***, **, and *, respectively.

Dependent Variable = Δln(Total Compensation); columns (1)-(5)

Intercept                    -0.036 (0.023); -0.109*** (0.012); -0.053*** (0.018); -0.117*** (0.015); -0.078*** (0.014)
12-mo. Firm return            0.231*** (0.081); 0.245*** (0.081); 0.242*** (0.082); 0.153* (0.082); 0.152* (0.086)
12-mo. FSI return (EW)       -0.114*** (0.018); -0.071*** (0.024); -0.080*** (0.027); -0.112*** (0.033)
S&P 500 return                0.168 (0.139); 0.190 (0.138)
Δ ROA                         1.005*** (0.207); 0.943*** (0.187)
Δ ROE                        -0.052 (0.097); -0.055 (0.096)
Δ Sales / Assets              0.110** (0.044); 0.106** (0.042)
Δ Leverage                   -0.319 (0.264); -0.313 (0.270)
Δ 36-mo. Stock volatility    -0.180 (0.207); -0.140 (0.193)
Δ B/M                        -0.109** (0.051); -0.108* (0.055)
Δ Ln(Assets)                  0.283*** (0.069); 0.281*** (0.069)
Δ FSI ROA                     0.430** (0.170)
Δ FSI ROE                     0.114*** (0.036)
Year FE                       N; Y; Y; Y; Y
2-way clustered errors        Y in all columns
N                             6,700; 6,700; 6,700; 6,329; 6,296
Adjusted R-squared            0.01; 0.02; 0.02; 0.03; 0.03
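In generic notation (added here for exposition), the weak-form benchmarking regression in Table 7 is

\[
\Delta \ln(w_{it}) = \alpha + \beta_1\, r_{it} + \beta_2\, r^{FSI}_{it} + \delta' X_{it} + \mu_t + \varepsilon_{it}.
\]

Weak-form relative performance evaluation requires \beta_1 > 0 and \beta_2 < 0: pay rises with own performance but is discounted for the component of performance shared with FSI peers, and complete filtering of the common shock would push \beta_2 toward -\beta_1.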

Table 8 Strong-Form Tests of Performance Benchmarking The sample consists of all firms that report competitors in their 10-K filings from 2002-2008 that also have non-missing compensation data in ExecuComp, along with their reported competitors. The table shows the coefficients from OLS regressions along with standard errors in parentheses that are clustered by firm and year. "Luck" represents the predicted values from a panel regression of firm returns on FSI returns (with and without additional controls). "Skill" represents the residuals from these first-stage regressions. Both variables are also interacted with the empirical cumulative distribution function associated with the variance of each firm's luck and skill measures. All other variables are measured in changes from year t-1 to year t. All peer group measures are calculated as the equal-weighted average of all competitors listed in a firm's 10-K with valid data in CRSP and Compustat. Firm i is excluded from its peer group average. Statistical significance at the 1%, 5%, and 10% levels is denoted by ***, **, and *, respectively.

DV = Δln(Total Compensation)

                                  (1)          (2)
Luck                             0.007       -0.100
                                (0.092)      (0.112)
Skill                            0.236***     0.201***
                                (0.077)      (0.055)
Luck x CDF of luck variance     -0.083       -0.763***
                                (0.321)      (0.259)
Skill x CDF of skill variance   -0.602***    -0.432***
                                (0.122)      (0.054)
Controls                          Y            Y
First-stage controls              N            Y
Year FE                           Y            Y
2-way clustered errors            Y            Y
N                                5,642        5,591
Adjusted R-squared                0.04         0.04


Table 9 Weak-Form Tests of Performance Benchmarking using Traditional Industry Classifications The sample consists of all firms that report competitors in their 10-K filings from 2002-2008 that also have non-missing compensation data in ExecuComp, along with their competitors based on traditional industry definitions (SIC codes, Fama-French 48 industries, GICS codes). The table shows the coefficients from OLS regressions along with standard errors in parentheses that are clustered by firm and year. All variables are in changes as measured from year t-1 to year t except returns. All peer group measures are calculated as the equal-weighted average of all firms in the same industry as firm i with valid data in CRSP and Compustat. Firm i is excluded from its peer group average. Statistical significance at the 1%, 5%, and 10% levels is denoted by ***, **, and *, respectively.

Dependent Variable = Δln(Total Compensation)

                              2-digit SIC        3-digit SIC        4-digit SIC        Fama-French 48
Intercept                    -0.058*** (0.006)  -0.059*** (0.010)  -0.058*** (0.018)  -0.067*** (0.022)
12-mo. Firm return            0.176** (0.071)    0.174** (0.076)    0.172** (0.076)    0.172*** (0.067)
12-mo. Industry return       -0.043 (0.102)     -0.017 (0.053)     -0.021 (0.051)     -0.006 (0.109)
Δ ROA                         0.964*** (0.266)   0.905*** (0.288)   0.883*** (0.284)   0.960*** (0.264)
Δ ROE                        -0.058 (0.062)     -0.063 (0.066)     -0.057 (0.064)     -0.057 (0.061)
Δ Sales / Assets              0.110** (0.052)    0.101** (0.049)    0.103** (0.051)    0.109** (0.052)
Δ Leverage                   -0.387* (0.223)    -0.418* (0.234)    -0.430* (0.243)    -0.384* (0.228)
Δ 36-mo. Stock volatility    -0.081 (0.218)      0.000 (0.266)     -0.044 (0.264)     -0.069 (0.221)
Δ B/M                        -0.085* (0.051)    -0.084 (0.052)     -0.077 (0.054)     -0.097* (0.056)
Δ Ln(Assets)                  0.253*** (0.063)   0.253*** (0.060)   0.248*** (0.065)   0.253*** (0.065)
Year FE                       Y                  Y                  Y                  Y
2-way clustered errors        Y                  Y                  Y                  Y
N                             9,133              8,876              8,626              9,137
Adjusted R-squared            0.03               0.03               0.03               0.03

                              4-digit GICS       6-digit GICS       8-digit GICS       2-digit SIC / Size quartiles
Intercept                    -0.054*** (0.014)  -0.063*** (0.010)  -0.063*** (0.009)  -0.069*** (0.016)
12-mo. Firm return            0.173*** (0.064)   0.180*** (0.064)   0.179** (0.069)    0.174** (0.080)
12-mo. Industry return       -0.008 (0.148)     -0.067 (0.124)     -0.040 (0.070)     -0.065 (0.046)
Δ ROA                         0.966*** (0.262)   0.968*** (0.258)   0.958*** (0.257)   0.956*** (0.273)
Δ ROE                        -0.058 (0.061)     -0.059 (0.061)     -0.055 (0.064)     -0.064 (0.067)
Δ Sales / Assets              0.110** (0.052)    0.109** (0.052)    0.110** (0.053)    0.107** (0.052)
Δ Leverage                   -0.383* (0.223)    -0.386* (0.221)    -0.395* (0.224)    -0.418* (0.232)
Δ 36-mo. Stock volatility    -0.074 (0.221)     -0.081 (0.231)     -0.074 (0.222)     -0.042 (0.204)
Δ B/M                        -0.087* (0.052)    -0.091* (0.053)    -0.088* (0.050)    -0.103* (0.055)
Δ Ln(Assets)                  0.251*** (0.062)   0.253*** (0.062)   0.256*** (0.059)   0.269*** (0.063)
Year FE                       Y                  Y                  Y                  Y
2-way clustered errors        Y                  Y                  Y                  Y
N                             9,141              9,140              9,103              8,889
Adjusted R-squared            0.03               0.03               0.03               0.03

Table 10 Probit Analysis of Firms Reporting Competitors This table contains the results from probit regressions in which the dependent variable is a dummy variable taking the value of one if a firm self-reported competitors in a given year and zero otherwise. Explanatory variables include the natural log of total assets, the book-to-market ratio (of assets), the 12-month cumulative stock return prior to each firm's fiscal year-end date, the firm's return on equity (net income / average assets), return on assets (EBIT / average assets), a scaled sales measure (sales / average assets), CEO characteristics, and the Gompers, Ishii and Metrick (2003) G-index value for each firm as of its fiscal year-end date. The sample period is 2002-2008. The sample is restricted to firms with valid compensation data in ExecuComp. The final column only includes firms with a valid G-index value. The ROE, ROA, scaled sales and B/M variables are Winsorized at the 1% and 99% levels. G-index values are sourced from Andrew Metrick's website; accounting data is sourced from Compustat; and stock return data comes from CRSP. Industry controls are based on six-digit GICS industry classifications. Statistical significance at the 1, 5, and 10% levels is denoted by the symbols ***, **, and *, respectively. DV = 1 if firm i reported competitors in its 10-K in year t.

Variable                      (1)                 (2)                 (3)
Log Assets                    0.115*** (0.010)    0.270***            0.395***
B/M                          -0.649*** (0.061)
G-index                                                              -0.003 (0.010)
Constant                     -0.845 (0.571)      -4.269*** (0.701)   -3.437*** (1.145)
Year fixed effects            Yes                 Yes                 Yes
Industry fixed effects        No                  Yes                 Yes
Pseudo R-squared              0.18                0.36                0.33
Firm-year observations        10,108              10,108              5,852

Coefficient estimates and standard errors for the remaining covariates (B/M in columns (2)-(3), 12-month return, 36-mo. return volatility, ROE, ROA, Sales / Assets, CEO age, age squared, CEO tenure, tenure squared, and CEO ownership %), as reported across columns (1)-(3):
(0.078); (0.020), -0.207 (0.143); -0.191***, -0.130** (0.036) (0.056); 3.792***, -0.193**; 0.716** (0.348), 0.060; 2.547*** (0.605), 0.343** (0.082) (0.090); 0.298, 0.146, -0.833* (0.228) (0.257) (0.441); 0.104***, 0.013 (0.147); -0.001, 0.025 (0.027) (0.042); 0.044*, 0.038 (0.020) (0.023); -0.000, -0.000**, -0.000 (0.000) (0.000) (0.000); (0.038); -0.009*, -0.003, -0.016* (0.005) (0.006) (0.008); 0.001*** (0.000); 0.202***, -0.343*** (0.021) (0.013); (0.033) (0.302); 0.413* (0.231); 0.000 (0.000); 0.001** (0.000); 0.138, -0.632 (0.255) (0.453)
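In generic notation, the reporting model in Table 10 is a probit of the form

\[
\Pr\big(Report_{it} = 1 \mid x_{it}\big) = \Phi\big(x_{it}'\beta + \mu_t + \eta_j\big),
\]

where Report_{it} equals one if firm i lists competitors in its year-t 10-K, x_{it} stacks the firm and CEO characteristics described in the caption, \mu_t are year effects, \eta_j are six-digit GICS industry effects where included, and \Phi(\cdot) is the standard normal CDF.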


Table 11 Do Firms Report Competitors Strategically? This table compares the average stock returns and operating variables of firms that report competitors against the equally-weighted average returns and operating variables of such competitors. Coefficients are estimated from an unbalanced panel of data that is restricted to firms that reported competitors in a given year and also have a valid G-index value. The sample period is 2002-2008. The subscripts "firm" and "comp" represent the returns/operating performance of the firm and its listed competitors, respectively. For example, the first variable measures the difference in cumulative 12-month returns prior to the fiscal year-end date between a firm and the competitors it lists in its public filings. The first column of the table shows the average difference between a firm and its competitors across the sample. The next three columns break the sample into "Democracies", governance-neutral firms, and "Dictatorships" based on the methodology outlined in Gompers, Ishii and Metrick (2003). Dictatorship firms have poor governance, while Democracies have better governance. The "Difference" column represents the difference in coefficient estimates between Dictatorships and Democracies. Standard errors and statistical significance are based on two-sided paired t-tests. Statistical significance at the 1, 5, and 10% levels is denoted by the symbols ***, **, and *, respectively.

                                       G-index category:
Variable                   All data    Dem          Dict         Difference
Rfirm,12 - Rcomp,12        -0.009      -0.035        0.008        0.043
                           (0.006)     (0.037)      (0.025)      (0.044)
ROEfirm - ROEcomp          -0.018***    0.008       -0.018       -0.026
                           (0.004)     (0.015)      (0.015)      (0.021)
ROAfirm - ROAcomp          -0.008***    0.000       -0.009*      -0.009
                           (0.001)     (0.007)      (0.005)      (0.009)
Salesfirm - Salescomp       0.027***    0.015       -0.073*      -0.088*
                           (0.007)     (0.030)      (0.038)      (0.048)
BMfirm - BMcomp             0.021***   -0.065***     0.036**      0.101***
                           (0.004)     (0.015)      (0.016)      (0.022)
Sizefirm - Sizecomp        -0.614***   -0.369***     0.091        0.460**
                           (0.026)     (0.142)      (0.118)      (0.184)
Gfirm - Gcomp               0.122***
                           (0.045)
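As a reading aid (the arithmetic below simply restates numbers from the table), the "Difference" column equals the Dictatorship estimate minus the Democracy estimate, and under independent subsamples its standard error is approximately the square root of the sum of the squared subsample standard errors. For the relative size row, for example,

\[
0.091 - (-0.369) = 0.460, \qquad \sqrt{0.142^2 + 0.118^2} \approx 0.184.
\]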

Table A.1 Stock return tests Industries are constructed on a firm-specific basis using (i) companies that the firm listed as a competitor in its 10-K filing; (ii) other companies that point to the firm as a competitor in their own 10-K filings; and (iii) the firm's competitors' competitors. The sample period for 10-Ks is 2002-2008. Industry data from year t is matched to CRSP stock returns from July of year t+1 to June of year t+2. Thus, the sample period for stock returns is July 2003 - June 2010. All other data comes from Compustat. Industries are equally-weighted as in Rauh and Sufi (2012), and firm i is always excluded from its industry return. The dependent variable in all regressions is monthly excess firm stock returns. "Market return" represents the return on the CRSP value-weighted portfolio in month t. "FSI return" represents the return on the firm's industry as calculated using the method described above. The other variables represent equal-weighted industry returns computed using other industry classification standards. To eliminate the effects of small firms, firms smaller than the 10% NYSE size breakpoint are dropped, as are firms matched to industries that contain fewer than five firms. Financial firms and utilities are also excluded from the sample, as are firms with CRSP share codes other than 10 and 11. Statistical significance at the 1, 5, and 10% levels is denoted by the symbols ***, **, and *, respectively. DV: excess return of firm i in month t.

Columns (1)-(10). Explanatory variables: Market return t, FSI return i,t, 3-digit SIC return i,t, 6-digit GICS return i,t, FF48 return i,t, TNIC return i,t, and the Rauh and Sufi (2012) industry return i,t.

Slope estimates on the market and industry return variables, with standard errors in parentheses:
(1): (0.0188); 0.2267***, 0.4594***, 0.6667***, 0.3279***, 0.3844*** (0.0461) (0.0433) (0.0469); 0.8390***; 0.5002*** (0.0132) (0.0309); 0.6093*** (0.192); (10): 0.2512*** (0.102) (0.0156); 0.547***, 0.520***, 0.8235*** (0.025) (0.033) (0.0092) (0.0249); 0.4585***, 0.1001***, 0.119, 0.058 (0.0407) (0.0076) (0.081) (0.043); 6-digit GICS return: 0.7134***, 0.2294*** (0.0403) (0.0310); FF48 return: 0.2167*** (0.0483); 0.635***, 1.166***, 0.592*** (0.092); (6): 0.0053 (0.0308); TNIC return: (0.0537); Rauh and Sufi (2012): 0.6672***, -0.0656 (0.0432) (0.0213)

Constant               0.0001   -0.0013  -0.0001  -0.0011  -0.0010  -0.0011   0.0019   0.0028   0.0017   0.0001
                      (0.0005) (0.0016) (0.0015) (0.0021) (0.0020) (0.0013)  (0.002)  (0.003)  (0.002)  (0.0005)
N                     117,159  117,159  117,159  117,159  117,159  117,159  144,588  144,588  144,588  117,159
Adjusted R-squared      0.27     0.24     0.24     0.26     0.24     0.28     0.12     0.08     0.12     0.28
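The horse races in Table A.1 are monthly panel regressions of roughly the form (notation added here, not the paper's):

\[
r_{it} - r_{ft} = \alpha + \beta_M\, r^{M}_{t} + \beta_{FSI}\, r^{FSI}_{it} + \beta_{IND}\, r^{IND}_{it} + \varepsilon_{it},
\]

where r^{IND}_{it} is the equal-weighted return on an alternative classification's industry portfolio (3-digit SIC, 6-digit GICS, FF48, TNIC, or the Rauh and Sufi (2012) classification), always excluding firm i. A larger loading on, and incremental explanatory power from, the FSI return indicates that self-reported peers capture more of the firm's return comovement.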



Table A.2 Capital Structure Tests Industries are constructed as before. Leverage is defined as total debt divided by total assets (i.e. [dltt+dlc]/at). Financial firms and utilities are excluded from the sample. Standard errors are clustered by year and by firm. Statistical significance at the 1, 5, and 10% levels is denoted by the symbols ***, **, and *, respectively.

DV: Leverage of firm i in year t

                       (1)         (2)         (3)         (4)         (5)
Constant             -0.007       0.032***    0.018***    0.026***   -0.023**
                     (0.004)     (0.008)     (0.006)     (0.009)     (0.009)
FSI                   1.016***                                        0.623***
                     (0.019)                                         (0.023)
3-digit SIC                       0.837***                            0.117***
                                 (0.032)                             (0.045)
6-digit GICS                                  0.890***                0.308***
                                             (0.026)                 (0.038)
FF48                                                      0.846***    0.042
                                                         (0.030)     (0.040)
Year fixed effects     Yes         Yes         Yes         Yes         Yes
Adjusted R-squared     0.16        0.11        0.12        0.11        0.20
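In the same spirit, the capital-structure test in Table A.2 can be written as (generic notation added here)

\[
Lev_{it} = \alpha + \beta\, \overline{Lev}^{FSI}_{it} + \mu_t + \varepsilon_{it}, \qquad Lev_{it} \equiv \frac{dltt_{it} + dlc_{it}}{at_{it}},
\]

where \overline{Lev}^{FSI}_{it} is the equal-weighted average leverage of firm i's FSI peers (excluding firm i); in the last column the 3-digit SIC, 6-digit GICS, and FF48 industry averages enter alongside the FSI average.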
