The Effects of Credit Rating and Watchlist Announcements on the U.S. Corporate Bond Market

Alberto Crosta



Stockholm School of Economics & Swedish House of Finance

March 10, 2014

Abstract

I examine the effects of contemporaneous credit rating and watchlist announcements on the over-the-counter U.S. corporate bond market. I find significant negative daily abnormal returns (-2.91%) over a ten-day window associated with a downgrade announcement with negative watch. The effect is particularly strong over the two-day post-event window (-1.90%), while there is some weak evidence of market timing during the four days preceding a downgrade (-0.58%). Abnormal returns following upgrades with positive watch are weaker both in terms of statistical significance and magnitude. I also observe higher abnormal bond returns following downgrades with negative watch around rating-sensitive boundaries. These results suggest that abnormal bond returns could also be driven by regulatory constraints, besides the information content of the ratings. Finally, a multivariate cross-sectional analysis of abnormal returns over the two-day window following downgrades shows that the negative watchlist state is a key determinant of the bond market's response even when key control variables are included.

Keywords: Corporate Bond Prices, Credit Rating Changes, Watchlists, Abnormal Returns
JEL Classification: G10, G14, G24



Stockholm School of Economics, Sveavägen 65, Box 6501, SE-113 83 Stockholm, Sweden. E-Mail: [email protected]

1 Introduction

Since their introduction in the early ’90s, credit outlooks and reviews (the so-called “watchlists”) have been used by credit rating agencies to signal positive or negative temporary trends in the creditworthiness of rated issuers and issues1. In the past few years, watchlists have grown in importance in the rating process, as they provide crucial short-term information that cannot be conveyed in credit ratings, due to the latter's need for stability over time. For this reason, changes in credit outlooks and reviews have also been included in credit rating announcements in order to “smooth” a downgrade between two different ratings across time, allowing credit rating agencies to take more conservative cuts in credit ratings without losing credibility2. While some empirical research has investigated the effects of watchlist inclusions alone, to my knowledge nothing has been done to analyze the effects of credit outlooks and reviews when they are published together with a rating change. This is the main focus of my analysis. In this paper I introduce the credit watchlists as a further level of information content on creditworthiness, one which focuses on the short-term horizon, and I study the price behavior of corporate bonds around the announcement days using a comprehensive panel dataset of intra-daily transactions on the U.S. over-the-counter market. The analysis aims to address four specific questions: do credit reviews matter when they are announced together with rating changes? If so, is this mainly due to the information content of the reviews, or to rating-specific constraints? Moreover, can a lead-lag expectations model solely based on credit ratings and reviews contribute to a better understanding of the evolution of bond prices around rating announcements?
And finally, how important are the watchlists and the informativeness/predictability of rating changes as determinants of abnormal bond returns in the case of rating downgrades and upgrades, once one controls for different variables that are likely to affect bond returns?

To investigate the effects of rating reviews in rating announcements, I set up an event study on cumulative abnormal bond returns (CAR) using intra-daily data on the over-the-counter US corporate bond market from January 2003 to June 2011. I find that rating reviews are an important factor affecting bond prices whenever joint rating and credit review announcements are made, strengthening or weakening the effects of the rating changes. In particular, I observe statistically significant bond market reactions following a rating event in the form of abnormal bond returns over a two-day window (0,+1). This is particularly strong for downgrades announced together with a negative watch (-1.90% CAR), and much weaker - though significantly different from zero at the 90% confidence level - for upgrades with positive watch (+0.20% CAR) during days (0,+1), compared to the case where announcements are made with no review or a review of a different sign (positive review for downgrades, negative review for upgrades). This suggests that institutional investors take credit watch announcements into account together with the rating changes in their portfolio choices, and this is particularly true in the case of downgrades with negative reviews, which indicate the risk of a further downgrade. Explanations for such differences between upgrades and downgrades could be found in the asymmetric loss function of rating agencies, or in the fact that investors weigh bad news more heavily than good news due to their asymmetric risk aversion3. In order to assess whether the effects might be produced by the information content of the rating reviews or by rating-specific constraints of market participants, I analyze two sensitive rating boundaries that affect portfolio decisions of institutional investors: the NAIC1 vs NAIC2 (A vs BBB rating) level, and the Investment Grade vs Speculative Grade level (BBB vs BB rating). I investigate all downgrades of issues that approach each rating boundary from the upper side4, and check whether rating reviews generate significantly higher-than-average abnormal returns. Indeed, rating-sensitive institutional investors that hold a relevant amount of fixed-income securities in their portfolios, such as pension funds and insurance companies, carry a risk of forced liquidation (or even litigation) in case a bond is downgraded below the lowest bound set by their investment mandates, which is often set to investment grade only.

1 Each rating agency has its own name for the watchlists: CreditWatch for S&P, Watchlist for Moody's and Rating Watch for Fitch. In my analysis I will treat these terms as interchangeable, and not as referring to a specific rating agency.
2 Examples of such announcements are, for instance, the downgrade of the long-term rating of JPMorgan Chase & Co. from AA- to A+ with placement on Rating Watch Negative by Fitch (11/05/2012), or the downgrade with negative watch on American International Group announced by S&P and Moody's in early September 2008.
Moreover, they bear a greater inventory risk and lose the lower level of required capital on a risk-weighted basis as soon as a bond they hold in their portfolio is downgraded below the NAIC1 level, or analogously the A rating. Therefore, it might be optimal for these rating-sensitive institutional investors to condition their portfolio decisions on rating watches in order to anticipate future costly effects of rating changes. If this is true, one should find stronger and more significant effects in terms of average abnormal bond returns around the rating thresholds that define critical levels for rating-sensitive institutions whenever a negative watch is announced together with a downgrade. That would also be true for upgrades with positive review, although they should not show the same strong results, because it would not be possible for rating-sensitive market participants to hold bonds not included in their investment mandates. My findings seem to support this view to some extent, as I find a strong and rather significant impact of negative credit watches on downgrades close to the sensitive boundaries over a 10-day post-event window.

3 Among others, Holthausen and Leftwich (1986), Goh and Ederington (1999) and Dichev and Piotroski (1999) document the asymmetry in stock price reactions to rating downgrades and upgrades, even though they do not investigate this empirical regularity.
4 That is, ratings close to A- for the NAIC1 level and close to BBB- for the Investment Grade level.

In order to explore the effects of news in rating announcements and watchlist inclusions on corporate bonds from the three main rating agencies (Moody's, Standard & Poor, and Fitch), I construct a lead-lag/expectation model solely focusing on the new information content of each rating event. I find that unexpected (informative) rating downgrades are followed by stronger abnormal bond returns than expected ones, and this is particularly true when a negative watch is announced together with a downgrade. This suggests that investors might anticipate the negative effects of a downgrade whenever the underlying bond/issuer has been put under negative review, or has previously been downgraded by another rating agency (a lead-lag effect), hence reducing the impact of rating announcements around the event date. Finally, I run a cross-sectional analysis to identify the main determinants of cumulative abnormal returns in the two-day window following rating events. In the case of downgrades, I find that negative watchlist announcements and, to some extent, the informativeness/unexpectedness of rating changes are key determinants in the cross-section of abnormal bond returns, even when one introduces different sorts of control variables that are likely to affect bond returns. On the other hand, positive watchlists do not seem to contribute significantly to the cumulative abnormal returns, which are mainly driven by the movement from investment grade to speculative grade and by negative pre-event abnormal returns.
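As a concrete illustration of the two rating-sensitive boundaries discussed above, the classification of a letter rating can be sketched as follows. This is a minimal sketch for exposition only: the letter scale, function name and the simplified NAIC mapping are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch: flag a letter rating relative to the two
# rating-sensitive boundaries (NAIC1 vs NAIC2, i.e. A vs BBB, and
# Investment Grade vs Speculative Grade, i.e. BBB vs BB).
RATING_ORDER = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC"]

def boundary_flags(letter_rating):
    idx = RATING_ORDER.index(letter_rating)
    return {
        # BBB or better counts as investment grade
        "investment_grade": idx <= RATING_ORDER.index("BBB"),
        # A or better falls in the (lower-capital) NAIC1 class
        "naic1": idx <= RATING_ORDER.index("A"),
    }
```

A bond rated BBB- is thus investment grade but outside NAIC1, which is why downgrades approaching A- and BBB- from above are the cases of interest.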

2 Literature Review

In the past few years several authors have investigated the effects of different kinds of corporate events on bond prices. Some of these event studies analyze market reactions in terms of abnormal bond returns following credit rating events, such as rating downgrades and upgrades5. In particular, Hand, Holthausen, and Leftwich (1992) examine daily bond returns associated with either rating or watchlist announcements from Standard & Poor, using a small dataset of proprietary daily bond transaction data from the NYSE. They find significant effects on both stock and bond prices following downgrades and watchlist inclusions, but weaker effects following upgrades6. However, they do not focus on events where ratings and watchlistings are announced simultaneously.

5 Among others, Wansley, Glascock, and Clauretie (1992) and Hite and Warga (1997).
6 They report a -139bp excess bond return for issues that were assigned a negative watchlist, and -127bp for those which experienced a rating downgrade. Positive watchlist inclusion generates a significant average excess bond return equal to 225bp, while this is equal to a mere 35bp in case of actual upgrades. However, they find the opposite results once they consider an uncontaminated sample, resulting in mixed evidence.


Steiner and Heinke (2001) study daily excess German eurobond returns driven by either Standard & Poor and Moody's warnings of possible rating changes (i.e. watchlistings) or rating changes alone from 1985 to 1996. They too find that announcements of downgradings as well as negative watchlistings induce a significant abnormal return around the announcement day, but no significant price change can be found in the case of upgrades or positive watchlistings. Also, there is some evidence of price pressure on abnormal returns, which contradicts the information content hypothesis of rating actions: the so-called “fallen angels”7 experience much stronger negative price reactions than average, suggesting that these effects might be induced by regulated institutional investors. Finally, there is some evidence of credit-induced price movements that start before the rating or watchlist announcement, suggesting that the market partially anticipates the rating events. Güttler and Wahrenburg (2007) study the lead-lag relationships for near-to-default issuers with multiple ratings by Moody's and Standard & Poor. They find that downgrades (upgrades) by one rating agency are followed by downgrades (upgrades) of greater magnitude by the second agency, and that rating changes of the same sign by the second agency are significantly more likely after downgrades than after upgrades by the first agency. These findings might help to explain downward rating momentum, and might also suggest that the first rating downgrades could affect bond prices more severely than subsequent ones. Alsakka and ap Gwilym (2012) extend Güttler and Wahrenburg's work by analyzing the behavior of sovereign outlook and watchlist assignments across global rating agencies. They show how Moody's seems to put more weight on rating stability, Standard & Poor tends to focus on short-term accuracy, and Fitch's actions seem to strongly depend on the former two rating agencies.
They also find that downgrade momentum and negative watch momentum are significant, but the same cannot be said about upgrade momentum and positive watch momentum.

7 Investment grade bonds that are downgraded to speculative grade are called “fallen angels”.

Another line of studies, such as Norden and Weber (2004), shows that negative outlooks and reviews have a significant negative effect on stock returns and credit default swap (CDS) spreads, while rating downgrades seem only to be associated with CDS spread changes. Also, Fitch seems not to affect stock and CDS markets in any way, suggesting that markets might react differently depending on the source of the rating/watchlist announcement. Bannier and Hirsch (2010) analyze whether the introduction of rating reviews has increased the information content of rating actions. Using Moody's data and stock returns, they claim that this innovation in the rating system has introduced a finer level of granularity in the rating classification, even though they cannot exclude the monitoring effect of rating reviews. Bessembinder, Kahle, Maxwell, and Xu (2009) illustrate how most previous studies on abnormal bond returns have produced inconclusive results, mainly due to the way returns are estimated. Using a thorough simulation analysis they show that parametric test statistics constructed using monthly data and/or small samples (as in Hand, Holthausen, and Leftwich (1992)) are most often not well specified, leading to low power of the tests. Moreover, previously used benchmarks such as the Lehman indexes induce significantly positive biases due to the inclusion of bankruptcies in the sample. Therefore one should use properly constructed matching portfolios as a benchmark for studies where bankruptcies are not directly incorporated in the analysis. Finally, they explain how the use of daily data from the Trade Reporting and Compliance Engine (TRACE) significantly increases the power and specification of the tests once one takes care of the lack of liquidity that characterizes most of the traded bonds. This improvement in power is remarkable even compared to tests conducted using daily quote data, showing that quoted prices are often weakly linked to actual transaction prices. Using Bessembinder et al.'s methodology, May (2010) studies the effects of rating changes on both bond and stock prices, using TRACE daily data. He finds evidence of statistically significant negative abnormal bond returns over the two-day window following a downgrade, and a significant, though smaller in magnitude, positive effect following an upgrade in an uncontaminated sample. My study contributes to the existing literature by investigating whether credit reviews affect rating upgrades/downgrades once they are announced contemporaneously. To my knowledge, this is the first study that addresses this question and tries to investigate the possible reasons for such effects using a carefully filtered intra-daily corporate bond dataset from TRACE and Mergent FISD.
I also investigate possible sources of the asymmetry in bond price reactions to rating downgrades and upgrades once watchlists are introduced, and provide empirical evidence that credit outlooks and reviews are a key determinant of cumulative abnormal returns following a rating event. The structure of the paper is as follows. In Section 3 I describe the institutional background, the data sources and the filtering procedures used to construct my dataset. In Section 4 I illustrate the matching-portfolio methodology used to compute the cumulative abnormal returns. Section 5 reports the results of the event study and of the multivariate cross-sectional analysis of abnormal bond returns.

3 Institutional Background and Data Description

I study corporate bond prices using a large panel dataset obtained by merging the Fixed Income Securities Database (FISD) on corporate bond characteristics with the Trade Reporting and

Compliance Engine (TRACE) database on disseminated corporate bond transactions in the secondary market. My dataset consists of daily observations of ratings, reviews and bond transactions from January 2003 to June 2011.

3.1 Credit ratings and Watchlists

Credit ratings are ordinal measures of long-horizon expected loss8, and provide information about the probability of default and other forms of corporate financial distress. Typically, credit ratings are assigned by rating agencies to specific issuers and issues based on some quantitative and qualitative rating criteria, which provide the framework to identify risks and their impact on future credit quality. Quantitative measures include systematic assessment of financial data and ratio analysis, while qualitative assessments are introduced to capture the nuances of the real world that might contradict the information provided by a purely quantitative model. This mix of qualitative and quantitative analysis is then discussed by a rating committee, where a rating is assigned after a vote. Once a rating agency identifies a substantial change in the issuer's creditworthiness, it will either announce an immediate rating change and/or publish a note that the specific issue/issuer is under review, most often indicating a direction/outlook. In case the rating agency decides to announce that the issuer/issue is under review, further meetings of rating agency analysts with firm management will determine the necessity of a rating change. Broadly speaking, a downgrade is generally caused by a worsening of the financial conditions of the firm that issued the rated bond, and can have both a systemic and an idiosyncratic nature. For instance, a drop in a distress-related indicator that is used in the rating process, such as interest coverage9, can be caused either by firm-specific shocks - for instance event risk10 - that affect a firm's earnings, or by macroeconomic, systemic shocks like a sudden rise in interest rates, which translates into higher interest expenses. To ensure rating stability, only relevant shocks that are likely to have a major impact on a firm's long-term financial conditions might lead to a rating revision.
Rating outlooks and rating reviews (“Watchlists”) are a complementary service provided by the most important rating agencies since the early ’90s, and they are mostly used to signal the potential direction and timing of future rating changes, as well as the evolution of default risk over the short to intermediate term, typically three months to two years. They reflect financial or other credit trends that have not yet reached the level that would trigger a rating action, but which may do so if the trend continues.

8 Moody's definition.
9 Interest coverage is defined as the ratio EBITDA/interest expense.
10 See for instance Collin-Dufresne, Goldstein, and Helwege (2010) on this topic.

To ensure long-term stability of ratings, credit agencies do not incorporate short- to medium-term transitory changes in the economic and/or fundamental business conditions, following the so-called “through-the-cycle” methodology: they take rating actions only when there is little risk that such changes will revert shortly afterwards. There are several reasons for maintaining rating stability. As Altman and Rijken (2004) point out, it is desirable from a regulatory perspective, since overly timely updates of credit ratings could generate procyclicality effects and exacerbate financial crises. Moreover, portfolio strategies and bank capital requirements are linked to credit ratings; hence, keeping ratings stable minimizes the risk of frequent and costly portfolio rebalancing. Finally, rating stability is by itself desirable for rating agencies from a pure reputation perspective, since rating reversals are generally perceived by market participants as a weakness in their evaluations11. By introducing rating outlooks and reviews, rating agencies can signal that a change in the riskiness of the obligor has been observed while it is still uncertain whether such a shock will lead to a permanent rating action. Therefore, watchlists can help reconcile the goals of long-term stability and short-term accuracy of credit ratings. A rating that is put on review for possible upgrade or downgrade (or, more seldom, with uncertain direction) will typically be evaluated over an average period of 103 days12, at the end of which a new announcement will be made, confirming the previous credit rating or changing it to a new one.
As watchlist status has proven to be a good predictor of future rating migrations13, some market participants that are constrained to invest in a specific rating category by official regulations (such as SEC rules), capital requirements or investment mandates might use outlooks and reviews as a conservative way to anticipate future rating changes below the minimum standards set by their investment constraints. Hence they can reduce the risk of urgent, costly liquidation, or of litigation for holding a bond that does not match their investment mandates. For this reason, it is fair to say that rating reviews could be considered the most important source of timely information provided by rating agencies on specific obligors14, even when they are announced together with rating changes.

11 In their Special Comment, Fons, Cantor, and Mahoney (2002) point out how market participants strongly prefer rating stability to more frequent rating updates, since ratings should be a “stable measure of intrinsic financial strength”.
12 See Keenan, Carty, and Shtogin (1998).
13 Conditioning on outlooks, Hamilton and Cantor (2004) show that it is possible to improve the historical performance of ratings as predictors of default.
14 Metz and Donmez (2008) show how credit outlooks and watchlists also help to identify those issuers that are very likely to default in a short period of time, and those who might have their ratings withdrawn.


3.2 Mergent FISD

I obtain bond characteristics and historical credit ratings and reviews published by Standard & Poor, Moody's and Fitch from the Mergent FISD database. The Fixed Income Securities Database (FISD) is an extensive collection of publicly offered U.S. corporate bond data. It includes issue characteristics on over 100,000 corporate, U.S. Agency, U.S. Treasury, and supranational debt securities, and pricing information from the National Association of Insurance Commissioners (NAIC) on buy and sell transactions by life insurance companies, property and casualty insurance companies, and Health Maintenance Organizations (HMOs). The Mergent FISD database has previously been used by Campbell and Taksler (2003), Bessembinder, Kahle, Maxwell, and Xu (2009), and Chen, Lookman, Schürhoff, and Seppi (2012), among others, as a way to sort corporate bonds by their characteristics. I select corporate bonds using criteria similar to those of Hand, Holthausen, and Leftwich (1992), Bessembinder, Kahle, Maxwell, and Xu (2009), May (2010) and Chen, Lookman, Schürhoff, and Seppi (2012). Following Campbell and Taksler (2003), I also exclude bonds with credit-enhancement features to ensure that those included in my analysis are backed only by the creditworthiness of the issuer, hence making my sample more homogeneous in terms of bond characteristics and more reliable when I compare ratings across issues. Summarizing, I include all fixed-rate, senior, unsecured, USD-denominated bonds without option features issued by US-based firms. From this subsample I also exclude all bonds maturing in less than a year, since their price heavily depends on the time left before maturity. In the Mergent FISD database there are four different kinds of credit watch: positive, not on watch, off watch and negative15. They are either assigned separately, or together with a change in rating.
In my analysis I focus on two specific pairs of rating/review: downgrades with a review for further downgrade (“negative watch”) and upgrades with a review for further upgrade (“positive watch”). If the placement on watchlist carries information complementary to that contained in credit ratings, and if this complementary information is priced by the market as it should be, these two pairs might show a significant difference with respect to downgrades and upgrades alone, and with respect to those where any other type of review listing is observed16. To compare my results to the previous bond literature, I also study the effects of rating changes alone. Summarizing, I consider the following event groups in different steps of my analysis:

1. Upgrades with Positive Watchlist
2. Upgrades with Not/off/negative Watchlist (“Upgrade/else” from now on)
3. Downgrades with Negative Watchlist
4. Downgrades with Not/off/positive Watchlist (“Downgrade/else” from now on)
5. All Upgrades
6. All Downgrades

Table 1 describes the distribution of rating events at the issuer level across the years in the sample.

15 In general, a positive watch status indicates that a company is experiencing favorable financial and market trends, relative to its current rating level. If these trends continue, the company has a good possibility of having its rating upgraded. Similarly, a negative review status suggests that a company is experiencing unfavorable financial and market trends, relative to its current rating level. If these trends continue, the company has a good possibility of having its rating downgraded. Not-on-watch and off-watch status indicate that the issuer is not currently affected by any of the above trends, and therefore no rating change should be expected in the short term.
16 Upgrades with negative watch and downgrades with positive watch are very rare, and so are upgrades/downgrades following negative/positive reviews, respectively. This is also pointed out by Keenan, Carty, and Shtogin (1998).
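The event-group assignment above can be summarized in a small sketch. The function, field names and numeric rating encoding (lower numbers denote better ratings) are illustrative assumptions, not the actual Mergent FISD variables.

```python
# Illustrative sketch: assign a joint rating/watch announcement to the
# event groups listed above. Ratings are encoded numerically with lower
# values meaning better ratings, so new_rating > old_rating is a downgrade.
# `watch` is one of: "positive", "negative", "not on watch", "off watch".
def classify_event(old_rating, new_rating, watch):
    groups = []
    if new_rating > old_rating:          # downgrade
        groups.append("All Downgrades")
        if watch == "negative":
            groups.append("Downgrades with Negative Watchlist")
        else:
            groups.append("Downgrade/else")
    elif new_rating < old_rating:        # upgrade
        groups.append("All Upgrades")
        if watch == "positive":
            groups.append("Upgrades with Positive Watchlist")
        else:
            groups.append("Upgrade/else")
    return groups
```

Each rating change thus contributes both to its "All" group and to exactly one of the watch-conditional groups.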

3.3 TRACE

The TRACE system was created to increase the transparency of US corporate bond transactions between all FINRA (Financial Industry Regulatory Authority) members. Since its introduction, TRACE-reported transactions have increased dramatically, and from October 1, 2004 all bond trades are disseminated, 99% of them in real time17. Thus, TRACE is becoming the standard tick-by-tick data source for empirical research on US corporate bonds, as Dick-Nielsen (2009), Bessembinder, Kahle, Maxwell, and Xu (2009), and May (2010), among others, point out. Most empirical corporate bond market studies written before the introduction of TRACE in July 2002 were based on daily quotes and constructed matrix prices for the bonds. These methods produce biased results, as discussed in Sarig and Warga (1989), Bessembinder, Kahle, Maxwell, and Xu (2009) and Dick-Nielsen, Feldhütter, and Lando (2012). However, a relevant portion (around 8%) of the observations included in TRACE are mistakenly reported trades of different kinds, which must be corrected in different ways. To avoid possible biases in my results, I need to take these errors into account before I can proceed with the analysis. In order to do so, I first clean up the TRACE data removing obvious typos and then use the filtration procedures implemented by Dick-Nielsen (2009): his methodology is well suited for my analysis, since I need to create daily value- and volume-weighted prices from raw TRACE intra-daily data as in Bessembinder, Kahle, Maxwell, and Xu (2009). To my knowledge, many papers which study abnormal bond returns around credit rating announcements using TRACE disseminated data have tackled this problem by simply deleting all canceled, reversed and duplicate trades. In the following paragraph I briefly summarize Dick-Nielsen's (2009) filtration procedure to eliminate those errors that are likely to affect the results over the daily windows that I will consider in my analysis.

17 See Bessembinder, Maxwell, and Venkataraman (2006) and Dick-Nielsen (2009) for a full description of the TRACE database.

TRACE filtering procedure

I start my raw data cleaning by correcting whenever possible, and eliminating otherwise, those few observations that are characterized by obvious typos: nonpositive prices and/or volumes, consecutive prices that are reported with the wrong par value and hence show increments by factors of 10 or more, and observations where prices were replaced by volumes or yield-to-maturity (or vice versa). Following Dick-Nielsen (2009), I implement a filtering procedure to delete the reporting errors through the following steps18:

1. Deleting true duplicates: I use the (unique) message sequence number to identify the duplicates.
2. Deleting reversals: I match reversals with the (unique) original reports starting from the most recent corrections19, and delete both.
3. Deleting same-day corrections: I first pinpoint the original observation that has been corrected using the trade status variable that characterizes each report. Then I either cancel both the original observation and the correction in case of cancellation, or just the original in case of correction.

My original, uncleaned sample consists of 61.1 million observations20. The filtering procedure deletes 4 million observations, reducing my sample to around 57.5 million intra-daily data points. From this sample, I discard all trades for those bonds that do not match the characteristics specified in section 3.2. My final sample consists of 25,186 individual bonds, issued by 3,184 obligors.

18 There are a few errors in the data that cannot be fixed using this filtering procedure, and require further “ad hoc” adjustments. Please refer to Dick-Nielsen (2009) for more detailed information about these issues.
19 This is due to the fact that reversals are recorded later than same-day corrections.
20 From this number I already excluded all observations where dealer commissions were included in the reported price.
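The three filtering steps can be sketched on a simplified trade record as follows. This is a minimal sketch: the field names (msg_seq, orig_msg_seq, status) and status codes are illustrative assumptions, not TRACE's actual variable names, and the further ad hoc fixes in Dick-Nielsen (2009) are omitted.

```python
def dick_nielsen_filter(trades):
    """Sketch of the three filtering steps on a list of trade dicts.
    status codes (illustrative): "T" trade, "R" reversal,
    "C" cancellation, "W" same-day correction."""
    # Step 1: delete true duplicates by (unique) message sequence number.
    seen, unique = set(), []
    for t in trades:
        if t["msg_seq"] not in seen:
            seen.add(t["msg_seq"])
            unique.append(t)
    # Step 2: match each reversal to its original report and drop both.
    reversed_ids = {t["orig_msg_seq"] for t in unique if t["status"] == "R"}
    # Step 3: a cancellation removes the original and itself; a correction
    # replaces the original, so only the original report is dropped.
    canceled_ids = {t["orig_msg_seq"] for t in unique if t["status"] == "C"}
    corrected_ids = {t["orig_msg_seq"] for t in unique if t["status"] == "W"}
    dropped = reversed_ids | canceled_ids | corrected_ids
    return [t for t in unique
            if t["msg_seq"] not in dropped and t["status"] not in ("R", "C")]
```

Only regular trades and surviving corrections remain after the three passes, mirroring the step ordering described above.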


4 Model/Methodology: Abnormal Bond Returns and Matching Portfolio Model

To determine an appropriate control for computing abnormal returns, I create reference portfolios by sorting corporate bonds into different groups according to their time-to-maturity and credit rating. In particular, I use Moody's definition of long- (10+ years), medium- (5-10 years), and short-term (1-5 years) to classify bonds in terms of their time-to-maturity. I segment bonds into letter ratings according to the classic major categories used by Moody's, Standard & Poor and Fitch, excluding ratings from CCC downwards, as they are likely to contain contaminating information, such as bankruptcies or defaults. Hence, I am left with three time-to-maturity rankings and seven rating rankings, for a total of 21 different groups/matching portfolios. For each one of them, I compute the value-weighted daily return for each day in the sample period as in Bessembinder, Kahle, Maxwell, and Xu (2009). One of the major issues encountered when dealing with corporate bond returns is the low frequency of trading days for most of the bonds, which makes it difficult to obtain proper time series. Moreover, Feldhütter (2012) points out how trades tend to cluster around specific days, often at different prices, which makes it difficult to extract a single daily price. For this purpose, I follow the approach of Bessembinder, Kahle, Maxwell, and Xu (2009) and compute bond returns from trade-weighted daily prices, setting missing prices to be equal to the last prior observed trade-weighted price whenever a bond is not traded on a specific day21:

Pt = Σi (Pt,i ∗ wt,i) + AIt    (1)

where wt,i is the relative weight for trade i and AIt is the accrued interest from the last coupon, that is AIt = coupon ∗ ndays/360, where ndays is the number of calendar days elapsed from the last coupon payment. I use trade-weighted prices mainly for two reasons. First, I reduce the risk that a wrongly reported closing price could affect the computation of returns whenever it would be used as reference. Secondly, as Edwards, Harris, and Piwowar (2007) and Bessembinder, Kahle, Maxwell, and Xu (2009) noted, there might be a significant impact on return calculations due to differences in trading costs by the size of the trade. By using trade-weighted prices,

Other approaches to circumvent the low trading frequency are the ”‘trade-weighted price, trade ≥ 1000k”’ by Bessembinder, Kahle, Maxwell, and Xu (2009), and the one proposed by Cai, Helwege, and Warga (2007).
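The 21-group matching-portfolio construction described above can be sketched as follows; the exact seven rating categories are not enumerated in the text, so the list below is an assumption, as are the function names.

```python
# Illustrative sketch of the 21-group matching-portfolio grid: three
# time-to-maturity segments crossed with seven letter-rating categories.
# The seven-category list is an assumption; the text says seven categories
# are retained but does not enumerate them.

MATURITY_SEGMENTS = [("short", 1.0, 5.0), ("medium", 5.0, 10.0), ("long", 10.0, float("inf"))]
LETTER_RATINGS = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC"]  # assumed

def maturity_bucket(years_to_maturity: float) -> str:
    """Moody's-style segmentation: 1-5, 5-10, 10+ years."""
    for name, lo, hi in MATURITY_SEGMENTS:
        if lo <= years_to_maturity < hi:
            return name
    raise ValueError("time-to-maturity below one year is out of scope here")

def matching_group(letter_rating: str, years_to_maturity: float) -> tuple:
    """Return the (rating, maturity) key identifying a bond's reference portfolio."""
    if letter_rating not in LETTER_RATINGS:
        raise ValueError("rating outside the sampled categories")
    return (letter_rating, maturity_bucket(years_to_maturity))
```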


I put more weight on those trades that incur lower execution costs and therefore should more accurately reflect the underlying bond price. A potential drawback of this approach is that the resulting weighted price reflects market conditions throughout the day rather than at the close only, so the price computed for the event day cannot be matched with the intra-day timing of the event itself, whenever available. In figures 11, 12 and 13 I report the approximated22 average trading volumes in millions of USD for all trades in the sample, Investment Grade bonds, and Speculative Grade bonds respectively. It is worth noticing that trading activity usually spikes on the rating announcement day, and tends to be significantly higher than average on day +1 as well. This suggests that each event triggers a market response in terms of traded volumes over the (0,+1) window, and over even longer time windows when speculative grade bonds are traded. I compute each bond's raw log return (RR) from the actual accrued returns of each issue over the holding period, that is

RR_t = \ln(P_t / P_{t-1})    (2)

where P_t is the trade-weighted price of the bond computed as before. To calculate the abnormal bond returns (AR), I subtract the contemporaneous “index” log returns obtained using the value-weighted matching-portfolio technique of Bessembinder, Kahle, Maxwell, and Xu (2009) and Chen, Lookman, Schürhoff, and Seppi (2012) described above, that is:

AR^{i,j}_t = RR^{i,j}_t - PR^{i,j}_t    (3)

where AR^{i,j}_t is the abnormal return at time t on a bond with rating i belonging to time-to-maturity segment/group j. Similarly, PR^{i,j}_t is the return at time t on the matching portfolio, constructed as an average of the equally-weighted raw accrued returns on the N bonds having the same rating i and time-to-maturity j, but whose issuer did not experience a change in rating in any of the six weeks preceding or following the event itself, i.e. during the time window (t − 30, t + 30):

PR^{i,j}_t = (1/N) \sum_{k=1}^{N} RR^{i,j}_{t,k}    (4)

A possible source of bias that might affect the results of the event study at the issue level is the positive correlation that exists between contemporaneous returns and standard errors

22 The approximation is due to the fact that trade size in TRACE is censored above 5 million USD for investment grade bonds and 1 million USD for speculative grade bonds.
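A minimal sketch of equations (1)–(3); the function and parameter names are my own, and the accrual assumes the coupon rate is quoted against a face value of 100, which the text does not specify.

```python
import math

def trade_weighted_price(trades, coupon_rate, days_since_coupon, face=100.0):
    """Eq. (1): trade-size-weighted daily price plus accrued interest.
    `trades` is a list of (price, size) pairs for one bond-day; the sizes
    play the role of the relative weights w_{t,i}. Accrual follows the
    coupon * n_days/360 convention from the text (face value assumed 100)."""
    total_size = sum(size for _, size in trades)
    weighted = sum(price * size / total_size for price, size in trades)
    accrued = face * coupon_rate * days_since_coupon / 360.0
    return weighted + accrued

def raw_log_return(p_t, p_prev):
    """Eq. (2): raw log return from consecutive trade-weighted prices."""
    return math.log(p_t / p_prev)

def abnormal_return(raw_ret, matching_portfolio_ret):
    """Eq. (3): bond return minus its matching-portfolio return."""
    return raw_ret - matching_portfolio_ret
```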


of bonds being issued by the same obligor. For this reason the standard deviation of the sample could be reduced by this correlation effect, and hence the t-stat could potentially be biased upward. Moreover, ratings are assigned to issuers rather than issues, and this is particularly true in my subsample, since I exclude bonds with credit-enhancement features23 from the analysis. In order to minimize this type of bias, I take a “firm level approach” following Bessembinder, Kahle, Maxwell, and Xu (2009), and compute the weighted average abnormal return at time t for each firm k in the sample as follows:

AR_{k,t} = (1/N) \sum_{i=1}^{J} AR_{i,k,t} \cdot w_{i,t}    (5)

where J is the number of bonds outstanding from firm k, and w_{i,t} is the market weight of bond i relative to the total market value of bonds outstanding. In this approach the firm is seen as a portfolio of its own bonds, which helps reduce the problem of cross-correlation among bonds belonging to the same firm. Moreover, this method does not put excessive weight on firms in the sample with multiple bond observations24, nor on the very same event happening at a specific date. Cumulative abnormal bond returns (CAR) are constructed as the sum of issuer-level abnormal returns over a specific time window (t_1, t_2):

CAR^{i,j}_{t_1,t_2} = \sum_{k=t_1}^{t_2} AR^{i,j}_k    (6)
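Equations (5) and (6) can be sketched as below; here the market-value weights are normalized to sum to one, which absorbs the 1/N normalization (an assumption on my part), and the names are illustrative.

```python
def firm_abnormal_return(bond_ars, market_values):
    """Eq. (5)-style firm-level AR: the firm is treated as a portfolio of
    its own bonds, each weighted by market value. Weights are normalized
    to sum to one (assumption absorbing the 1/N factor in the text)."""
    total = float(sum(market_values))
    return sum(ar * mv / total for ar, mv in zip(bond_ars, market_values))

def cumulative_abnormal_return(ar_by_day, t1, t2):
    """Eq. (6): sum of daily abnormal returns over the window (t1, t2).
    `ar_by_day` maps event-day offsets (e.g. -10..+10) to daily ARs."""
    return sum(ar_by_day[t] for t in range(t1, t2 + 1))
```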

Finally, I compute the average cumulative abnormal bond return for each rating category i / time-to-maturity j / time window (t_1, t_2) as

\overline{CAR}^{i,j}_{t_1,t_2} = (1/N) \sum (CAR^{i,j}_{t_1,t_2})    (7)

5 Results

5.1 Event Analysis

In my data panel I have to deal with the illiquidity that characterizes the U.S. corporate bond market. As bonds might not trade on every day surrounding a rating event, I have to impose some trading restrictions on my sample to ensure that the results of my study are driven by prevailing market movements. A large portion of the bonds in my original sample

23 Ratings on this category of bonds are meant to be assigned to the specific issue rather than the issuer.
24 These firms are likely to have a higher-than-average rating too, as Bessembinder, Kahle, Maxwell, and Xu (2009) point out.

are only traded at issuance25. The bottom quartile of bonds in terms of trading frequency observed over a bond's lifetime trades on less than 5% of available trading dates, and is therefore discarded from the sample. This is a conservative approach that I put in place to eliminate the most illiquid bonds, which do not provide any information on current market conditions. Also, I only consider rating events for those bonds that traded on at least 7 of the 21 days in the (-10,+10) event window. As opposed to May (2010), I choose to relax the (-1,+1) trade restriction in order to avoid a sample selection bias that might tilt my results towards more informative rating changes. On the other hand, I require that the bond trades at least once before and once after the rating event. As shown in figures 11, 12 and 13, trades tend to concentrate in terms of traded volumes around the event date anyway, especially on days 0 and +1.26

The average cumulative abnormal returns over a (-10,+10) window around the rating event are reported in table 4 for rating upgrades and table 5 for rating downgrades. I report both the t-stat and the Wilcoxon signed-rank test on the median in order to provide a better picture of the significance of abnormal returns. Figures 1 and 2 describe the evolution of cumulative abnormal returns over the (-10,+10) event window for downgrades and upgrades respectively. Rating changes that are announced with reviews of the same sign, i.e. positive for upgrades, negative for downgrades, show stronger and more significant returns than those characterized by either a “mixed” rating/review event or the whole sample of downgrades/upgrades. Notably, these results are much stronger for downgrades with negative watch than for upgrades with positive watch.

This is in line with the hypothesis that market participants take into account both reviews and rating changes when they make their investment decisions after a rating announcement, but tend to put more weight on unfavorable information. This might be due to price pressure caused by rating-sensitive regulations, or to investors interpreting negative credit watches as a source of (short-term) additional information that is not included in the rating changes alone, while the same does not hold for positive reviews. Interestingly, my general results on average cumulative abnormal returns following upgrades and downgrades are in line with May (2010), who also uses transaction data from the TRACE database to study the effects of rating down-/upgrades alone on corporate bond prices. Hence, it seems crucial to include watchlists in order to reach a deeper understanding of these events. In figures 1 and 2 we can see a remarkable difference in the economic impact of downgrade announcements on bond prices compared with upgrades: while a downgrade with a

25 Goldstein and Hotchkiss (2009) investigate the characteristics and trading frequency of newly issued corporate bonds.
26 I perform several robustness checks to verify whether my results are mainly driven by my sample selection. The results of these robustness checks are briefly summarized in the Appendix.


negative review causes an average CAR of -1.90% in the (0,+1) window, the latter is only +0.20% for the same window following an upgrade with positive review. Similar observations can be made for longer time windows, such as (0,+10). Thus, the asymmetry in the impact of down-/upgrades is further enhanced when credit reviews are included in the analysis. Such a difference might be explained in several ways. One could argue that companies spread only good news to the market, hiding potentially negative news (Goh and Ederington (1999)), which would in turn create a bias such that negative information content in credit rating changes would be considered more trustworthy - and even more so when downgrades are announced together with negative outlooks/reviews. Another motivation for this asymmetry could be that rating agencies are more concerned about short-term negative news, due to the high reputational costs they would incur if they failed to detect a critical financial situation of an obligor. Another line of thought suggests that there might be price pressure due to rating-specific constraints, which would lead market participants to react to downgrades and upgrades in a different fashion. In particular, this would matter around rating-sensitive boundaries such as the Investment Grade vs Speculative Grade (BBB to BB) boundary and the NAIC1 vs NAIC2 (A to BBB) boundary, the latter particularly critical for insurance companies. Indeed, rating-sensitive institutional investors carry a risk of costly litigation/forced liquidation in case a bond is downgraded below the lowest bound set by their investment mandates / capital requirements. For instance, investment-grade bond mutual funds may only hold up to 5% of their portfolio in junk bonds, and must immediately liquidate any asset falling below a B rating27. Also, SEC Rule 15c3-1 forces broker-dealers to take larger haircuts on high-yield corporate bonds when calculating their net capital. I investigate this effect in more detail in the next section. Finally, I find that the corporate bond market partially anticipates rating changes in the days preceding a rating event, as noted by Holthausen and Leftwich (1986), Goh and Ederington (1999) and May (2010). This is true both for future upgrades and downgrades, and the effect is particularly strong during the four days preceding the event, i.e. the (-4,-1) window.
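The cross-sectional t-statistics on average CARs reported in the tables can be computed as in this minimal sketch; the Wilcoxon signed-rank companion test is omitted for brevity, and the function name is my own.

```python
import math
import statistics

def car_t_stat(cars):
    """Cross-sectional t-statistic for the mean CAR across events:
    t = mean / (sd / sqrt(n)), using the sample standard deviation.
    A minimal sketch of the significance test described in the text."""
    n = len(cars)
    mean = statistics.fmean(cars)
    sd = statistics.stdev(cars)
    return mean / (sd / math.sqrt(n))
```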

5.2 Boundary cases: Investment Grade vs Speculative Grade and NAIC1 vs NAIC2

Downgrades seem to have a much larger and statistically more significant impact on daily bond prices than upgrades. By considering rating announcements together with watchlist inclusions close to critical rating boundaries, namely the NAIC1 vs NAIC2 (A vs

27 See Kisgen (2007) and Chen, Lookman, Schürhoff, and Seppi (2012), among others.


BBB) and the Investment Grade vs Speculative Grade (BBB vs BB), I intend to shed some light on a possible reason for such asymmetry. The rationale behind these thresholds is given by the investment mandates that mutual funds need to follow when investing in fixed-income securities, which are often set to investment grade only (BBB rated and above). On the other side, NAIC1-restricted institutional investors holding a relevant portion of their portfolios in fixed-income securities might aim at reducing their inventory risk and the necessity of increasing their level of required capital on a risk-weighted basis28. For this reason one could expect higher-than-average negative abnormal returns following downgrades announced with negative reviews once the rating gets close to the upper bound of the boundary: as negative reviews are a good predictor of future downgrades, institutional investors might want to anticipate the risk of future forced liquidation in the short term by selling their bonds before they are downgraded below the critical rating threshold. The results are summarized in tables 6 and 7. In figures 3 to 6 I plot the cumulative abnormal returns for the boundary cases IG vs SG (BBB/BB) and NAIC1 vs NAIC2 (A/BBB) respectively, for both the upper and lower bound of the threshold. Here it is possible to see how the impact of the rating/review announcement generates rather similar abnormal returns in the two separate cases over the (0,+10) window, with a significant difference from the case where the rating change is announced without a review of the same sign. The fact that the Investment Grade vs Speculative Grade threshold shows slightly stronger effects than the NAIC1 vs NAIC2 threshold may be explained by the double nature of this boundary, which sets the critical level for portfolio investments as well as the second boundary29 for insurance companies' capital requirements.

Overall, the results seem to support the hypothesis that there is price pressure caused by rating-sensitive constraints that affects the trading decisions of a set of investors. The lower parts of tables 6 and 7 show the cumulative abnormal returns for the cases where the critical boundaries are approached from below after an upgrade. In this case, one should expect rather different results, due to the fact that high yield bonds cannot be held by rating-sensitive investors, hence there is a limited opportunity to hold speculative grade bonds with positive reviews. On the other hand, those investors that are only bound to hold investment grade bonds in their portfolios can anticipate future bond upgrades to the NAIC1 level, reaching a category with a much higher demand - and hence supposedly a higher positive price pressure. For this reason they might be able to extract a positive return by purchasing newly upgraded bonds that belong to the NAIC2 category - and hence are

28 A good description of the NAIC risk classifications can be found, among others, in Becker and Ivashina (2013).
29 BBB rating is equivalent to NAIC2, and BB rating to NAIC3.


more costly in terms of capital requirements - but that might be further upgraded to the highest NAIC category. Thus we could expect a larger positive effect following upgrades with positive watchlist when a bond approaches the NAIC1 level from below (BBB/BBB+) compared to the case where a high yield bond is close to being upgraded to investment grade (BB/BB+). My findings seem to support this view to some extent, but unfortunately my sample contains very few events that can be used to test this specific case. Therefore, my results are only indicative and should be analyzed in more detail as the number of observations/events increases over time. One can also argue that the portfolio choices of market participants around sensitive rating thresholds may point to a possible strategic behavior of rating agencies whenever a corporate bond is to be downgraded below the NAIC1 / Investment Grade threshold. Indeed, rating agencies might want to give rating-sensitive investors some extra time to liquidate assets that are close to being excluded from their investment mandates, hence reducing the risk of a fire sale following a drastic downgrade. Since corporate bonds that are under negative outlook / watchlist for downgrade maintain their current rating for an average of three months, mutual funds and insurance companies might use this period to rebalance their portfolios without incurring large transaction costs due to clustered sale orders.30

5.3 Expectations Model

In this section I illustrate a simple expectations model that should shed some light on how rating events can be distinguished according to their informativeness, and how unexpected (informative) events might have a stronger impact on bond prices than expected (uninformative) events. A downgrade that is preceded by a watchlist inclusion, or by another downgrade by another rating agency, should arguably have a weaker effect in terms of abnormal returns than a downgrade that comes unexpected, i.e. without a watchlist inclusion or announcements from other rating agencies. Moreover, a rating change announcement that brings a rating to the same level already set by another rating agency (for example, Moody's moving from AA to A when S&P's rating is already A) might well differ from one that sets a new upper/lower bound in terms of ratings. To my knowledge, no paper has tackled this problem, and all rating events have been treated as equal in the literature. Hand, Holthausen, and Leftwich (1992) used a different expectations model, focusing on the difference between the yields to maturity on

30 [Would it be interesting to check how many bonds that are downgraded with negative outlook / review to the BBB/BBB- rating are actually downgraded further later on? And whether this percentage is in line with the average of downgrades following negative outlooks / reviews?]


the event-affected bonds and the ones of their benchmarks. To complete this task, I define “expected”31 any rating event that: 1. is preceded by a watchlist inclusion in the same direction (negative for a downgrade, positive for an upgrade); 2. is preceded by any rating change in the same direction by any other credit rating agency in the past 30 days; 3. follows a previous rating change of the same kind (for instance from AA to A) by another rating agency, i.e. when a rating agency “adjusts” its rating to the ones set by other rating agencies; 4. happens within a 30-day window since the last event going on the same direction at the issuer level (for instance when a bond is downgraded to BBB after another bond belonging to the same issuer experienced the same rating movement in the previous 30 days); The results are shown in figures 8 to 9. Average CAR are described in tables 8 and 9. Given my definition of “unexpected event”, I observe that unexpected downgrades are characterized by larger and more significant negative CAR (-0.89%) during the time window (0,+1) compared to the expected case (-0.41%). The same consideration holds for rating downgrades with negative review, with stronger negative abnormal returns for the unexpected case (2.99%) than for the expected one (-1.58%). Again, there is no major impact on upgrades over a 10-day window despite some difference around the event day, and results do not differ significantly between expected and unexpected rating events even when credit reviews are considered.

5.4 Cross-sectional analysis

In this section I run a cross-sectional analysis on cumulative abnormal returns over the (0,+1) window for rating downgrades and rating upgrades32, in order to identify whether the credit reviews and the informativeness of rating announcements discussed above are among the main

31 For this part of my analysis, I exclude rating changes that happen in the first 60 days after a bond is included in the database, and am therefore left with fewer observations/events than in the cases discussed in the previous sections. While it is true that I might lose some information from the events I drop, I believe that I cannot correctly determine whether a rating event is truly unexpected when it happens so soon after the inclusion of the bond in my database.
32 I do not expect to find a significant effect of positive watchlist inclusion in case of upgrades. Nevertheless, I run the cross-sectional analysis for upgrades as well in order to identify the main drivers of cumulative abnormal returns for this kind of event.


determinants of abnormal bond returns. My cross-sectional regression in the case of downgrades is:

CAR^{0,1}_i = \beta_0 + \beta_1 NEGATIVE_i + \beta^T X_i + \epsilon_i    (8)

while for upgrades I use:

CAR^{0,1}_i = \beta_0 + \beta_1 POSITIVE_i + \beta^T X_i + \epsilon_i    (9)
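Both specifications are plain OLS cross-sections. A dependency-free sketch (variable names hypothetical; standard errors and clustering omitted):

```python
def ols_betas(y, X):
    """Plain OLS for cross-sectional regressions like (8)-(9):
    CAR_i = b0 + b1 * DUMMY_i + b'X_i + e_i.
    Solves the normal equations (X'X) b = X'y by Gaussian elimination
    with partial pivoting. `X` is a list of rows, each starting with the
    constant 1.0; `y` is the list of CARs. Sketch only, no inference."""
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)] for i in range(k)]
    v = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for col in range(k):                       # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    beta = [0.0] * k                           # back substitution
    for i in range(k - 1, -1, -1):
        beta[i] = (v[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta
```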

I consider the following variables:

• NEGATIVE: dummy variable equal to 1 if a negative review has been announced together with the downgrade. I expect this variable to be negative and significant for the reasons explained in the previous sections.

• POSITIVE: dummy variable equal to 1 if a positive review has been announced together with the upgrade. I expect this variable to be positive but not necessarily significant, considering the results presented in the previous sections.

• CONTROLS (X):

1. FALLEN ANGEL: dummy variable that equals 1 whenever a bond is downgraded below the Investment Grade threshold. Cantor, Ap Gwilym, and Thomas (2007) and May (2010), among others, show that bonds/stocks downgraded to junk (the so-called “Fallen Angels”) exhibit higher-than-average negative abnormal returns following the event33. I therefore include this variable as a control to show that the watchlist and informativeness variables remain significant even after FALLEN ANGEL is included in the regression.

2. RISING STAR: dummy variable that equals 1 whenever a bond is upgraded to the Investment Grade threshold. I include this variable for the same reasons that hold for the FALLEN ANGEL control.

3. EXPECTED: dummy variable equal to 1 if the rating announcement is classified as expected according to the previous section. I expect it to be positive and significant, since an expected rating announcement should generate a weaker reaction by market participants, as they should have had

33 Jorion and Zhang (2010) actually claim that this effect disappears once one takes into account the rating prior to the announcement.


the opportunity to partially discount the expectation of a future rating change into the current bond price prior to the event date.

4. POSITIVE CAR−4,−1 and POSITIVE CAR−10,−1 (downgrade analysis): dummy variables equal to 1 whenever the cumulative abnormal returns of the days preceding the event are positive. I introduce these controls as in May (2010), since one can expect that downgrades announced after some positive abnormal bond returns might be considered unexpected, and hence might produce a stronger negative impact after the announcement is made. Therefore, I expect these variables to take negative values.

5. NEGATIVE CAR−4,−1 and NEGATIVE CAR−10,−1 (upgrade analysis): dummy variables equal to 1 whenever the cumulative abnormal returns of the days preceding the event are negative. I introduce these controls as in May (2010), since one can expect that upgrades announced after some negative abnormal bond returns might be considered unexpected, and hence might produce a stronger positive impact after the announcement is made. Therefore, I expect these variables to take positive values.

6. OLD RATING: variable representing the pre-announcement rating (AAA=1, AA+=2, AA=3, AA-=4, and so on), as discussed in Jorion and Zhang (2010), who show that the stock price effect of rating changes depends on the rating before and after the announcement. In line with their findings, I expect this variable to take negative and significant values in case of downgrades, since lower pre-event ratings should generate more negative abnormal returns.

7. ON WATCH: dummy variable equal to 1 if the specific issuer has been placed under credit watch prior to the event. I expect this variable to take a positive value in case of downgrades and a negative value in case of upgrades, since the market should have accounted for the likelihood of a future rating change once the watchlisting was announced.

8. MR and SPR: dummy variables that take value 1 when the CAR refers to a rating announcement made by Moody's or Standard & Poor's, respectively. These variables might take different values according to the relative impact/importance of a specific rating agency with respect to the others. For instance, one might find a negative and significant coefficient for MR if the market considered downgrade announcements made by Moody's timelier / more reliable than those made by S&P or Fitch (as pointed out by Güttler and Wahrenburg (2007)).

9. RATING JUMP: variable that takes different values according to the size of the jump in number of notches (a change from AA+ to AA- is counted as a 2-notch jump) for each rating event. This variable should take negative values in case of downgrades and positive values in case of upgrades, since a bigger jump should translate into a stronger judgment on the creditworthiness of the obligor by the rating agency.

10. COUPON: the (fixed) interest rate of the specific issue. I include this variable mainly to show that my results are not solely driven by the accrued interest on the bond. It is not obvious whether this variable should take positive or negative values: on the one hand, one might expect high coupons to characterize bonds issued by companies that had higher-than-normal credit risk or that issued debt during economic recessions, and therefore had to offer higher interest rates to attract investors; hence, a negative event on such issuers might be weighted more heavily by bondholders. On the other hand, the time interval of the analysis covers periods with rather different risk-free interest rates, which eventually have an impact on the interest rates that obligors have to offer.

11. TOTAL AMOUNT: the log dollar value of the outstanding debt of a specific issuer. I use this control to capture a possible bias/incentive for credit rating agencies to rate bigger companies better. Indeed, S&P, Moody's and Fitch are issuer-financed agencies, and hence they might have an incentive to be more generous in their ratings with companies that have large outstanding debt, in order to avoid losing them as clients whenever too-negative ratings are assigned.

12. RECESSION: dummy variable that equals 1 whenever the rating event happened during the NBER recession period November 2007 - December 2009. I include this variable to strengthen my results, since one may argue that downgrades are more frequent during recessions, and hence might be more expected by the market, as Jorion et al. (2005) point out.

The results of the multivariate analysis are shown in tables 10 and 11. The main takeaway from the cross-sectional analysis of downgrades is that the NEGATIVE dummy variable appears strong and significant at the 1% level in all the setups, and does not lose its size/explanatory power once controls are added to the regression. In particular, the watchlist inclusion adds -1.48% to -1.62% in terms of cumulative abnormal returns over the (0,+1) window, with a t-stat of around 4. Also, the variable constructed using the same criteria as my expectations model seems to provide some evidence that one should take

into account the whole spectrum of announcements made by all rating agencies, and that downgrades are not all equal in terms of informativeness. The R-squareds, though small, are slightly higher than those found in similar cross-sectional regressions in the literature. All my control variables show the expected sign as discussed above, but only a few appear to matter in the cross-sectional analysis of downgrades. In particular, there is rather strong evidence that the Investment Grade to Speculative Grade threshold, captured by the FALLEN ANGEL variable, has a statistically and economically strong impact on bond returns: a bond that falls below the boundary shows an additional -1.5% abnormal return in all the setups, with a t-stat close to 3. In setups (3) to (5) I consider three different ways to check whether unexpected events show stronger negative returns compared to the average case. In particular, I analyze the cases where a bond shows positive cumulative abnormal returns 4 days (setup (3)) or 10 days (setup (4)) prior to the event date, which should indicate that the event itself is somehow unexpected. In setup (5) I include my dummy variable EXPECTED to assess whether less informative rating downgrades show weaker results in terms of CAR over the two-day post-event window. I find that expected downgrades are characterized by less negative CARs by about 54 basis points, an effect significant at the 95% confidence level. There is also some strong evidence that events announced by Standard & Poor's and Moody's are considered far more informative than those announced by Fitch.

The cross-sectional analysis of rating upgrades shows that there are two main determinants of cumulative abnormal returns: the pre-event negative abnormal return, and the upgrade to the investment grade level. The positive watch dummy variable takes values that are positive but not significantly different from zero. This was somewhat expected given the first analysis performed in section 5.1. Interestingly, there is some evidence that Standard & Poor's upgrades result in higher positive abnormal returns compared to Moody's and Fitch's; this might be because the market considers upgrades announced by S&P the most informative. Finally, the EXPECTED variable constructed according to the rules in section 5.3 does not show any significant size / power in the cross-sectional analysis. This suggests once again that upgrades and positive watchlist announcements by rating agencies are not considered as informative as in the case of downgrades.

6 Conclusions

In this paper, I examined the effects of contemporaneous credit rating and review announcements on the over-the-counter U.S. corporate bond market. I find significant negative daily abnormal returns over the (0,+1) and (0,+10) windows associated with a downgrade an22

nouncement with negative watch, with a much greater mangnitude compared to downgrades as a whole. This suggests that it is crucial to add rating reviews and outlooks in the analysis of the effects of rating downgrades on bond prices. The analysis on credit upgrades shows that the effects of the inclusion of positive watch are still significant, but much smaller in magnitude compared to the downgrade case. Also, there is some weak evidence of market timing during the days preceding a downgrade, but not in case of upgrades. I observe high abnormal bond returns following downgrades with negative watch around rating-sensitive boundaries, such as the Investment vs Speculative Grade and the NAIC1 vs NAIC2, over the (0,+10) window. These results indicate that bond abnormal returns could also be driven by regulation constraints, besides the information content of the ratings. Indeed, rating-sensitive institutional investors carry a risk of costly litigation/forced liquidation in case the bond is downgraded below the lowest bound set by their investment mandates / capital requirements. Hence, credit watches might represent a risk-management tool used to reduce the likelihood of a short-term negative impact on capital requirements / portfolio returns. I construct a simple expectation model to analyze the combined effects of rating agencies’ announcements and their effects on rating announcements. I find that unexpected downgrades show much stronger negative abnormal returns after the announcement, which is consistent with the hypothesis that such events are more informative, and hence solicit a bigger response by the market. Similar information about upgrades does not show the same impact, which suggests that the information provided by rating agencies in case of upgrades is not considered as important/informative about the creditworthiness of the obligor. 
Finally, a multivariate cross-sectional analysis of abnormal returns over the two-day window following a downgrade shows that the negative watchlist state is a key determinant of the bond market's response even when key control variables are included. I also show that it is important to consider the relative importance of each rating agency's announcements, separating the informative ones from the uninformative. The same does not hold for positive watchlists during upgrades, whose (0,+1) post-event abnormal returns are mainly driven by the upgrade to the investment grade level and by the unexpectedness of the rating change. My analysis suggests that credit watchlists are a crucial source of information whenever a rating agency announces a rating downgrade. Given their short-term focus, watchlists can be used as a tool to anticipate future downward rating movements that might be detrimental to specific rating-sensitive investors, such as mutual funds and insurance companies. An open question is whether rating agencies might use credit watchlists to "smooth" rating changes into several steps, allowing rating-sensitive investors to lighten their exposure to "risky" bonds before further actions on their credit ratings are taken. Indeed, such a strategy would give insurance companies and mutual funds some extra time to sell, avoiding the risk of costly litigation or forced liquidation should those bonds be downgraded below the lowest bound set by their investment mandates or capital requirements. This would in turn help achieve a more gradual adjustment of prices in the weeks following a downgrade with a negative watch, reducing the risk of high volatility around the announcement dates. A possible further step to investigate this matter would be to focus on rating actions on bonds that are largely held by insurance companies and mutual funds.
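The rating-sensitive boundaries discussed above can be made concrete with a short sketch. It assumes the numeric notch scale defined later in the paper (AAA = 1, AA+ = 2, and so on); the helper names are illustrative, not the paper's code.

```python
# Illustrative boundary checks on a numeric notch scale (AAA = 1,
# AA+ = 2, ..., BBB- = 10, BB+ = 11). Helper names are assumptions.

NOTCH = {"AAA": 1, "AA+": 2, "AA": 3, "AA-": 4, "A+": 5, "A": 6, "A-": 7,
         "BBB+": 8, "BBB": 9, "BBB-": 10, "BB+": 11, "BB": 12, "BB-": 13}

def crosses_ig_sg(old, new):
    """Fallen angel: downgrade from investment grade (BBB- or above,
    notch <= 10) to speculative grade (BB+ or below, notch >= 11)."""
    return NOTCH[old] <= 10 < NOTCH[new]

def crosses_naic1_naic2(old, new):
    """NAIC1 (A- or above, notch <= 7) to NAIC2 (BBB+ to BBB-)."""
    return NOTCH[old] <= 7 < NOTCH[new] <= 10

print(crosses_ig_sg("BBB-", "BB+"))       # True
print(crosses_naic1_naic2("A-", "BBB+"))  # True
print(crosses_ig_sg("A", "BBB"))          # False
```

A downgrade crossing either threshold can force rating-constrained holders to act, which is why abnormal returns around these boundaries are examined separately.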

7 Appendix

7.1 Robustness Analysis

In order to ensure that my results are not driven by pure sample selection, I perform a set of robustness checks that vary the composition of bonds in my dataset.

• Minimum trading days in the event window: this constraint is set to 7 out of 21 in the paper; I test 3/21 and 13/21 as well, and the results hold. With the 3-day constraint, CAR(0,+1) is -0.44% (-4.28 t-stat) for downgrades and -1.87% (-6.19 t-stat) for downgrades with negative reviews. Similarly for upgrades: +0.08% (3.21 t-stat) without positive watch and +0.23% (2.92 t-stat) with positive watch. The same holds for the other CARs. When I require 13 out of 21 trading days, I find a CAR(0,+1) of -0.56% (-3.55 t-stat) for downgrades and -1.92% (-5.06 t-stat) for downgrades with negative reviews. Upgrades show +0.12% (3.55 t-stat) without positive watch and +0.30% (2.43 t-stat) with positive watch. Therefore, my main results are not driven by sample selection due to the choice of minimum traded days in the event window.

• Minimum trading frequency over a bond's life span: I exclude all bonds that trade fewer than 10 times per year, i.e. the bottom quartile in terms of frequency of trades per year. While discarding such a large proportion of bonds might look detrimental to the sample size, I believe there is little information to be captured from such illiquid bonds. On the other hand, one could argue for an even stricter minimum number of traded days per year, in order to capture the true prevailing market movements at each point in time. While I view this as a different sample choice rather than necessarily a better one, I also test a much stricter sample that includes only the top quartile of bonds in terms of frequency of trades. For this most actively traded quartile, I find a CAR(0,+1) of -0.53% (-3.93 t-stat) for all downgrades, and -1.60% (-4.53 t-stat) for downgrades with negative watch. Analogously, I obtain a CAR(0,+1) of +0.08% (2.43 t-stat) for all upgrades, and +0.15% (1.05 t-stat) for upgrades with positive watch. This shows that my results on downgrades remain valid even in a much more liquid sample, while the results for upgrades weaken slightly in terms of the size and statistical significance of the CARs.

• Sorting bonds by time to maturity: in my matching-portfolio methodology, I split all bonds into three groups, broadly following Moody's definitions of long, medium and short term. The previous literature has done this in different ways, most often using only two groups of equal size. To verify that my results are not driven by the choice of time-to-maturity groups, I rerun the test using a two-group split, as in the literature. My results are very close to the ones obtained with this new setup: over the (0,+1) window, I get a statistically significant CAR of -1.88% for downgrades with a negative watchlist and -0.15% for downgrades with any other watchlist inclusion; similarly, for upgrades I obtain a CAR(0,+1) of +0.22% with a positive watchlist and +0.11% with a non-positive watchlist.

• Minimum outstanding amount (par value): my sample consists of all bonds issued by obligors with a total outstanding amount of debt of at least 10 million USD, measured in par value. I discard the smallest issuers since their abnormal returns might be mainly driven by liquidity. When I consider the whole sample without this constraint, I find a CAR(0,+1) of -0.59% (-4.64 t-stat) for all downgrades, and -1.95% (-5.15 t-stat) for downgrades with negative watch. Similarly, I obtain a CAR(0,+1) of +0.10% (3.05 t-stat) for all upgrades, and +0.17% (1.34 t-stat) for upgrades with positive watch. Therefore my results are not mainly driven by the abnormal returns on the smallest bonds.
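The first check above combines a trading-activity filter with a CAR computation. It can be sketched as follows; the function names and numbers are illustrative, not drawn from the sample.

```python
# Sketch of the minimum-trading-days filter and a CAR computation,
# assuming abnormal returns indexed by event day. Illustrative only.

def passes_activity_filter(traded_days_in_window, min_days=7, window_len=21):
    """Keep a bond-event only if the bond traded on at least `min_days`
    of the `window_len` days around the announcement."""
    return traded_days_in_window >= min_days

def car(abnormal_returns, start, end):
    """Cumulative abnormal return over event days start..end (inclusive);
    abnormal_returns maps event day -> daily abnormal return."""
    return sum(abnormal_returns.get(d, 0.0) for d in range(start, end + 1))

ar = {0: -0.012, 1: -0.007}                    # toy abnormal returns
print(passes_activity_filter(9))               # True under the 7/21 rule
print(passes_activity_filter(9, min_days=13))  # False under the 13/21 rule
print(round(car(ar, 0, 1), 4))                 # -0.019
```

Tightening `min_days` trades off sample size against the risk of stale prices driving the measured CARs, which is exactly what the 3/21 vs. 13/21 comparison probes.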


References

Alsakka, R., and O. ap Gwilym (2012): "Rating agencies' credit signals: An analysis of sovereign watch and outlook," International Review of Financial Analysis, 21(C), 45–55.

Altman, E. I., and H. A. Rijken (2004): "How rating agencies achieve rating stability," Journal of Banking & Finance, 28(11), 2679–2714.

Bannier, C. E., and C. W. Hirsch (2010): "The economic function of credit rating agencies - What does the watchlist tell us?," Journal of Banking & Finance, 34(12), 3037–3049.

Becker, B., and V. Ivashina (2013): "Reaching for Yield in the Bond Market," NBER Working Papers 18909, National Bureau of Economic Research, Inc.

Bessembinder, H., K. M. Kahle, W. F. Maxwell, and D. Xu (2009): "Measuring Abnormal Bond Performance," Review of Financial Studies, 22(10), 4219–4258.

Bessembinder, H., W. Maxwell, and K. Venkataraman (2006): "Market transparency, liquidity externalities, and institutional trading costs in corporate bonds," Journal of Financial Economics, 82(2), 251–288.

Cai, N. K., J. Helwege, and A. Warga (2007): "Underpricing in the Corporate Bond Market," Review of Financial Studies, 20(6), 2021–2046.

Campbell, J. Y., and G. B. Taksler (2003): "Equity Volatility and Corporate Bond Yields," Journal of Finance, 58(6), 2321–2350.

Cantor, R., O. ap Gwilym, and S. Thomas (2007): "The Use of Credit Ratings in Investment Management in the US and Europe," Journal of Fixed Income, 17, 13–26.

Chen, Z., A. Lookman, N. Schürhoff, and D. J. Seppi (2012): "Bond Ratings Matter: Evidence from the Lehman Brothers Index Rating Redefinition," CEPR Discussion Papers 9108, C.E.P.R. Discussion Papers.

Collin-Dufresne, P., R. S. Goldstein, and J. Helwege (2010): "Is Credit Event Risk Priced? Modeling Contagion via the Updating of Beliefs," NBER Working Papers 15733, National Bureau of Economic Research, Inc.

Dichev, I. D., and J. D. Piotroski (1999): "The Performance of Long-run Stock Returns Following Issues of Public and Private Debt," Journal of Business Finance & Accounting, 26(9-10), 1103–1132.

Dick-Nielsen, J. (2009): "Liquidity Biases in TRACE," Journal of Fixed Income, 19(2).

Dick-Nielsen, J., P. Feldhütter, and D. Lando (2012): "Corporate bond liquidity before and after the onset of the subprime crisis," Journal of Financial Economics, 103(3), 471–492.

Edwards, A. K., L. E. Harris, and M. S. Piwowar (2007): "Corporate Bond Market Transaction Costs and Transparency," Journal of Finance, 62(3), 1421–1451.

Feldhütter, P. (2012): "The Same Bond at Different Prices: Identifying Search Frictions and Selling Pressures," Review of Financial Studies, 25(4), 1155–1206.

Fons, J., R. Cantor, and C. Mahoney (2002): "Understanding Moody's Corporate Bond Ratings and Rating Process," Moody's Investors Service.

Goh, J. C., and L. H. Ederington (1999): "Cross-sectional variation in the stock market reaction to bond rating changes," The Quarterly Review of Economics and Finance, 39(1), 101–112.

Goldstein, A., and E. Hotchkiss (2009): "Dealer Behavior and the Trading of Newly Issued Corporate Bonds," AFA 2009 San Francisco Meetings Paper.

Güttler, A., and M. Wahrenburg (2007): "The Adjustment of Credit Ratings in Advance of Defaults," Working Paper Series: Finance and Accounting 155, Department of Finance, Goethe University Frankfurt am Main.

Hamilton, D., and R. Cantor (2004): "Rating Transition and Default Rates Conditioned on Outlooks," Discussion paper.

Hand, J. R. M., R. W. Holthausen, and R. W. Leftwich (1992): "The Effect of Bond Rating Agency Announcements on Bond and Stock Prices," Journal of Finance, 47(2), 733–752.

Hite, G., and A. Warga (1997): "The Effect of Bond-Rating Changes on Bond Price Performance," Financial Analysts Journal, 53, 35–51.

Holthausen, R. W., and R. W. Leftwich (1986): "The effect of bond rating changes on common stock prices," Journal of Financial Economics, 17(1), 57–89.

Jorion, P., and G. Zhang (2010): "Information Transfer Effects of Bond Rating Downgrades," The Financial Review, 45(3), 683–706.

Keenan, S. C., L. Carty, and I. Shtogin (1998): "Historical default rates of corporate bond issuers, 1920-97," Moody's Investors Service.

Kisgen, D. J. (2007): "The Influence of Credit Ratings on Corporate Capital Structure Decisions," Journal of Applied Corporate Finance, 19(3), 65–73.

May, A. D. (2010): "The impact of bond rating changes on corporate bond prices: New evidence from the over-the-counter market," Journal of Banking & Finance, 34(11), 2822–2836.

Metz, A., and N. Donmez (2008): "Testing the Cross-Sectional Power of the Credit Transition Model," Moody's Credit Policy Special Comment.

Norden, L., and M. Weber (2004): "Informational Efficiency of Credit Default Swap and Stock Markets: The Impact of Credit Rating Announcements," CEPR Discussion Papers 4250, C.E.P.R. Discussion Papers.

Sarig, O., and A. Warga (1989): "Bond Price Data and Bond Market Liquidity," Journal of Financial and Quantitative Analysis, 24(3), 367–378.

Steiner, M., and V. G. Heinke (2001): "Event Study Concerning International Bond Price Effects of Credit Rating Actions," International Journal of Finance & Economics, 6(2), 139–157.

Wansley, J., J. Glascock, and T. Clauretie (1992): "Institutional Bond Pricing and Information Arrival: The Case of Bond Rating Changes," Journal of Business Finance & Accounting, 19, 733–749.

8 Tables

                 ---------- Upgrades ----------    --------- Downgrades ---------
Year             Positive     Else     None        Negative     Else     None
2003                    2       19        2              13       43        3
2004                    7       42        9              32       59        9
2005                   40       95       17              82       90       38
2006                   39      164       23              65      119       25
2007                   14      183       27              50      129       26
2008                    6      114       26             136      279       59
2009                    3       66       36              63      359       81
2010                   10      179       43              17      218       49
2011                    0       62       29               4       36        8
Total                 121      924      212             462     1332      298

Table 1: Yearly distribution of events at the issuer level in the sample (January 2003 - June 2011), as included in the Mergent FISD database. The sample consists of rating announcements published by Moody's, S&P and Fitch.

Size 1 2 3 4 5 6 7 8 9 10 11 Total

Upgrades Positive Else None 108 778 178 10 100 28 2 24 4 1 8 9 3 1 2 1

121

924

212

Downgrades Negative Else None 350 1014 207 74 189 64 21 58 14 11 31 8 2 17 1 1 12 2 2 2 6 2 1 462

3 1332

298

Table 2: Distribution of rating jumps by size across type of event in the sample (January 2003 - June 2011) and included in Mergent FISD database. Size is measured in rating notches (1,2, and 3 for Moody’s and +, none, and - for S&P and Fitch), so that a downgrade from A+ to BBB- is characterized by a jump of size 6.


Size 1 2 3 4 5 6 7 8 9 10 11 Total

Investment Grade Speculative Grade AAA AA A BBB BB B 7 241 754 744 396 493 12 42 117 96 88 110 12 26 33 26 26 2 2 14 10 15 16 1 2 4 5 10 7 1 1 2 7 4 1 2 2 1 4 1 2 4 2

23

300

919

898

3 545

1 664

Table 3: Distribution of rating jumps by size across ratings in the sample (January 2003 - June 2011) and included in Mergent FISD database. Size is measured in rating notches (1,2, and 3 for Moody’s and +, none, and - for S&P and Fitch), i.e. a downgrade from A+ to BBB- shows a jump of size 6. The ratings represent the pre-downgrade and pre-upgrade letter rating for the specific issue.

9 Figures

[Figure: Downgrades. Abnormal return (%) vs. days before/after the event (-10 to +10). Series: Downgrade_Negative_evol, Downgrade_All_evol, Downgrade_else_evol. Sample: January 2003 - June 2012.]

Figure 1: Evolution of abnormal returns 10 days before and 10 days after a downgrade announcement. All transaction and rating data from TRACE and Mergent FISD (January 2003 - June 2011).

[Figure: Upgrades. Abnormal return (%) vs. days before/after the event (-10 to +10). Series: Upgrade_Positive_evol, Upgrade_All_evol, Upgrade_else_evol. Sample: January 2003 - June 2012.]

Figure 2: Evolution of abnormal returns 10 days before and 10 days after an upgrade announcement.


[Figure: IG/SG boundary (downwards). Abnormal return (%) vs. days before/after the event (-10 to +10). Series: BBB_Downgrade_Negative_evol, BBB_Downgrade_only_evol. Sample: January 2003 - June 2012.]

Figure 3: Evolution of abnormal returns 10 days before and 10 days after a downgrade announcement around the investment grade vs speculative grade (BBB/BBB- to BB+/BB) boundary. All transaction and rating data from TRACE and Mergent FISD (January 2003 - June 2011).

[Figure: IG/SG boundary (upwards). Abnormal return (%) vs. days before/after the event (-10 to +10). Series: BB_Upgrade_Positive_evol, BB_Upgrade_only_evol. Sample: January 2003 - June 2012.]

Figure 4: Evolution of abnormal returns 10 days before and 10 days after an upgrade announcement around the speculative grade vs investment grade (BB/BB+ to BBB-/BBB) boundary. All transaction and rating data from TRACE and Mergent FISD (January 2003 - June 2011).


[Figure: NAIC1 boundary (downwards). Abnormal return (%) vs. days before/after the event (-10 to +10). Series: A_Downgrade_Negative_evol, A_Downgrade_only_evol. Sample: January 2003 - June 2012.]

Figure 5: Evolution of abnormal returns 10 days before and 10 days after a downgrade announcement around the NAIC1 vs NAIC2 (A/A- to BBB+/BBB) boundary. All transaction and rating data from TRACE and Mergent FISD (January 2003 - June 2011).

[Figure: NAIC1 boundary (upwards). Abnormal return (%) vs. days before/after the event (-10 to +10). Series: BBB_Upgrade_Positive_evol, BBB_Upgrade_only_evol. Sample: January 2003 - June 2012.]

Figure 6: Evolution of abnormal returns 10 days before and 10 days after an upgrade announcement around the NAIC2 vs NAIC1 (BBB/BBB+ to A-/A) boundary. All transaction and rating data from TRACE and Mergent FISD (January 2003 - June 2011).


[Figure: Expected Downgrades. Abnormal return (%) vs. days before/after the event (-10 to +10). Series: Downgrade_Negative_evol, Downgrade_All_evol, Downgrade_else_evol. Sample: January 2003 - June 2012.]

Figure 7: Evolution of abnormal returns 10 days before and 10 days after an expected downgrade announcement. All transaction and rating data from TRACE and Mergent FISD (January 2003 - June 2011).

[Figure: Unexpected Downgrades. Abnormal return (%) vs. days before/after the event (-10 to +10). Series: Downgrade_Negative_evol, Downgrade_All_evol, Downgrade_else_evol. Sample: January 2003 - June 2012.]

Figure 8: Evolution of abnormal returns 10 days before and 10 days after an unexpected downgrade announcement. All transaction and rating data from TRACE and Mergent FISD (January 2003 - June 2011).


CARs by type of event: Upgrades

Post-event window:
                     CAR(0,1)   CAR(2,5)   CAR(6,10)   CAR(0,10)      n
Upgrade/Positive        0.20       0.01       -0.06        0.15     121
  t-test                1.63       0.14       -0.61        1.17
  signed-rank           2.29       0.01       -1.20        0.75
Upgrade/else            0.10       0.02        0.03        0.16     924
  t-test                2.72       0.52        0.77        3.28
  signed-rank           5.15       0.39        0.39        3.60
All Upgrades            0.10      -0.01       -0.00        0.09    1257
  t-test                3.02      -0.23       -0.01        2.02
  signed-rank           5.35      -0.87       -0.54        2.44

Pre-event window:
                  CAR(-4,-1)  CAR(-9,-5)  CAR(-10,-1)  CAR(-10,+10)      n
Upgrade/Positive        0.17       -0.20         0.07          0.22     121
  t-test                1.50       -1.52         0.45          1.09
  signed-rank           1.77        0.17         0.56          1.48
Upgrade/else            0.05        0.02         0.16          0.32     924
  t-test                1.23        0.40         2.93          4.91
  signed-rank          -0.81        0.92         1.86          4.80
All Upgrades            0.08        0.01         0.15          0.24    1257
  t-test                2.06        0.28         3.20          4.09
  signed-rank           0.31        1.27         2.49          4.82

Table 4: Cumulative abnormal returns (CAR) at the issuer level by type of event (rating / review), time period January 2003 - June 2011. All transaction and rating data from TRACE and Mergent FISD. Abnormal returns are computed as the difference between raw (interest-accrued) daily simple returns on the bond and the contemporaneous matching-portfolio simple return, constructed by matching rating and time to maturity, over a specific time window. The t-stat and the Wilcoxon signed-rank statistic are reported under each value.
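The matching-portfolio construction described in the caption can be sketched as follows; the bucket labels, field names, and numbers are illustrative assumptions, not the paper's data or code.

```python
# Sketch of the matching-portfolio abnormal return: the bond's raw daily
# return minus the average return of bonds with the same rating and
# time-to-maturity bucket. Illustrative names and data.
from collections import defaultdict

def matching_portfolio_returns(bonds):
    """bonds: iterable of dicts with 'rating', 'maturity_bucket', 'ret'.
    Returns the average daily return per (rating, maturity) bucket."""
    sums, counts = defaultdict(float), defaultdict(int)
    for b in bonds:
        key = (b["rating"], b["maturity_bucket"])
        sums[key] += b["ret"]
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

def abnormal_return(bond, portfolio_rets):
    key = (bond["rating"], bond["maturity_bucket"])
    return bond["ret"] - portfolio_rets[key]

universe = [
    {"rating": "BBB", "maturity_bucket": "medium", "ret": 0.001},
    {"rating": "BBB", "maturity_bucket": "medium", "ret": 0.003},
]
event_bond = {"rating": "BBB", "maturity_bucket": "medium", "ret": -0.015}
pr = matching_portfolio_returns(universe)
print(round(abnormal_return(event_bond, pr), 4))  # -0.017
```

Daily abnormal returns computed this way are then cumulated over the event windows reported in the tables.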


CARs by type of event: Downgrades

Post-event window:
                       CAR(0,1)   CAR(2,5)   CAR(6,10)   CAR(0,10)      n
Downgrade/Negative        -1.90       0.22       -1.23       -2.91     462
  t-test                  -5.40       0.55       -3.42       -6.14
  signed-rank             -7.15      -1.05       -4.37       -7.84
Downgrade/else            -0.16      -0.18       -0.03       -0.36    1332
  t-test                  -1.08      -1.09       -0.12       -1.35
  signed-rank             -5.40      -2.44        1.40       -3.43
All Downgrades            -0.56      -0.11       -0.24       -0.92    2092
  t-test                  -4.41      -0.80       -1.48       -4.42
  signed-rank             -8.22      -2.76       -0.71       -7.13

Pre-event window:
                    CAR(-4,-1)  CAR(-9,-5)  CAR(-10,-1)  CAR(-10,+10)      n
Downgrade/Negative       -0.58        0.09        -0.27         -3.18     462
  t-test                 -2.28        0.30        -0.67         -5.79
  signed-rank            -2.66       -0.30        -1.70         -6.92
Downgrade/else            0.00       -0.45        -0.50         -0.86    1332
  t-test                  0.03       -2.45        -2.45         -2.53
  signed-rank            -0.85        0.00        -1.23         -3.98
All Downgrades           -0.26       -0.26        -0.50         -1.42    2092
  t-test                 -2.09       -1.93        -3.09         -5.51
  signed-rank            -3.34       -0.24        -3.09         -7.91

Table 5: Cumulative abnormal returns (CAR) at the issuer level by type of event (rating / review), time period January 2003 - June 2011. All transaction and rating data from TRACE and Mergent FISD. Abnormal returns are computed as the difference between raw (interest-accrued) daily simple returns on the bond and the contemporaneous matching-portfolio simple return, constructed by matching rating and time to maturity, over a specific time window. The t-stat and the Wilcoxon signed-rank statistic are reported under each value.


CARs by type of event: NAIC1 vs NAIC2 boundary

Upper boundary: post-event window
                           CAR(0,1)   CAR(2,5)   CAR(6,10)   CAR(0,10)      n
A - Downgrade/Negative        -1.76       1.31       -1.34       -1.79      76
  t-test                      -1.98       1.64       -1.49       -1.99
  signed-rank                 -1.21       0.64       -0.39       -1.25
A - Downgrade only             0.45      -0.63       -0.05       -0.22     361
  t-test                       1.41      -2.29       -0.22       -0.47
  signed-rank                 -1.50      -1.24       -0.08       -1.44

Lower boundary: post-event window
                           CAR(0,1)   CAR(2,5)   CAR(6,10)   CAR(0,10)      n
BBB - Upgrade/Positive         0.20       0.09       -0.05        0.23      17
  t-test                       1.66       0.61       -0.37        1.09
  signed-rank                  2.20       0.73       -0.26        0.97
BBB - Upgrade only             0.04      -0.02        0.03        0.05     225
  t-test                       0.54      -0.14        0.26        0.56
  signed-rank                  1.40      -0.14       -0.24        0.52

Table 6: Cumulative abnormal returns (CAR) at the issuer level by type of event (rating / review), time period January 2003 - June 2011. All transaction and rating data from TRACE and Mergent FISD. Abnormal returns are computed as the difference between raw (interest-accrued) daily simple returns on the bond and the contemporaneous matching-portfolio simple return, constructed by matching rating and time to maturity, over a specific time window. The t-stat and the Wilcoxon signed-rank statistic are reported under each value.


CARs by type of event: Investment Grade vs Speculative Grade boundary

Upper boundary: post-event window
                             CAR(0,1)   CAR(2,5)   CAR(6,10)   CAR(0,10)      n
BBB - Downgrade/Negative        -1.03      -0.02       -1.14       -2.15      69
  t-test                        -2.42      -0.05       -1.26       -1.69
  signed-rank                   -2.38       0.08       -1.68       -3.27
BBB - Downgrade only            -1.00       0.24       -0.10       -0.85     324
  t-test                        -3.50       0.66       -0.14       -1.45
  signed-rank                   -3.04      -0.46        1.87       -1.51

Lower boundary: post-event window
                             CAR(0,1)   CAR(2,5)   CAR(6,10)   CAR(0,10)      n
BB - Upgrade/Positive            0.21       0.23       -0.27        0.17      28
  t-test                         0.49       0.83       -1.01        0.41
  signed-rank                    0.91       0.25       -1.28        0.68
BB - Upgrade only                0.13       0.15        0.10        0.38     143
  t-test                         1.31       1.04        0.78        2.63
  signed-rank                    2.61       0.29        0.54        2.68

Table 7: Cumulative abnormal returns (CAR) at the issuer level by type of event (rating / review), time period January 2003 - June 2011. All transaction and rating data from TRACE and Mergent FISD. Abnormal returns are computed as the difference between raw (interest-accrued) daily simple returns on the bond and the contemporaneous matching-portfolio simple return, constructed by matching rating and time to maturity, over a specific time window. The t-stat and the Wilcoxon signed-rank statistic are reported under each value.


CARs by type of event: Expected vs Unexpected events

Expected events: post-event window
                       CAR(0,1)   CAR(2,5)   CAR(6,10)   CAR(0,10)      n
Downgrade                 -0.39       0.01       -0.46       -0.84    1114
  t-test                  -1.95       0.07       -1.68       -2.49
  signed-rank             -5.39      -1.75        0.19       -4.26
Downgrade/Negative        -1.52       0.52       -1.79       -2.80     301
  t-test                  -3.37       1.10       -3.68       -4.39
  signed-rank             -3.95      -1.03       -3.35       -5.15
Upgrade                    0.13       0.05       -0.11        0.08     375
  t-test                   2.00       0.72       -1.46        0.93
  signed-rank              3.59       0.41       -2.37        1.25
Upgrade/Positive           0.55      -0.18       -0.20        0.17      50
  t-test                   2.78      -1.53       -1.18        0.79
  signed-rank              2.77      -1.01       -1.38        0.39

Unexpected events: post-event window
                       CAR(0,1)   CAR(2,5)   CAR(6,10)   CAR(0,10)      n
Downgrade                 -0.90      -0.15       -0.19       -1.24     588
  t-test                  -3.94      -0.54       -0.90       -3.45
  signed-rank             -5.72      -1.52       -1.25       -5.01
Downgrade/Negative        -3.08      -0.13       -0.86       -4.06     117
  t-test                  -4.03      -0.14       -1.56       -4.29
  signed-rank             -5.74      -0.65       -2.81       -5.76
Upgrade                    0.10      -0.01       -0.03        0.06     376
  t-test                   1.84      -0.10       -0.51        0.76
  signed-rank              2.05      -0.77       -1.12        0.43
Upgrade/Positive          -0.04      -0.13        0.20        0.04      19
  t-test                  -0.33      -1.14        1.34        0.22
  signed-rank             -0.08      -0.85        1.13        0.64

Table 8: Cumulative abnormal returns (CAR) at the issuer level by type of event (rating / review), time period January 2003 - June 2011. All transaction and rating data from TRACE and Mergent FISD. Abnormal returns are computed as the difference between raw (interest-accrued) daily simple returns on the bond and the contemporaneous matching-portfolio simple return, constructed by matching rating and time to maturity, over a specific time window. The t-stat and the Wilcoxon signed-rank statistic are reported under each value.


CARs by type of event: Expected vs Unexpected events

Expected events: pre-event window
                     CAR(-4,-1)  CAR(-9,-5)  CAR(-10,-1)  CAR(-10,+10)      n
Downgrade                 -0.21       -0.27        -0.38         -1.24    1114
  t-test                  -1.04       -1.25        -1.45         -3.05
  signed-rank             -2.37        0.74        -1.53         -3.89
Downgrade/Negative        -0.42        0.23         0.19         -2.77     301
  t-test                  -1.38        0.58         0.34         -3.86
  signed-rank             -1.67        0.16        -1.02         -4.11
Upgrade                    0.10        0.09         0.29          0.43     375
  t-test                   1.38        1.43         3.38          3.86
  signed-rank             -0.04        1.44         3.00          3.56
Upgrade/Positive           0.46       -0.08         0.34          0.50      50
  t-test                   2.72       -0.58         1.53          1.76
  signed-rank              2.91       -0.34         1.14          1.36

Unexpected events: pre-event window
                     CAR(-4,-1)  CAR(-9,-5)  CAR(-10,-1)  CAR(-10,+10)      n
Downgrade                 -0.34       -0.37        -0.64         -1.85     588
  t-test                  -1.73       -1.74        -2.42         -4.11
  signed-rank             -1.60       -2.86        -1.93         -6.08
Downgrade/Negative        -0.99        0.12        -0.97         -4.75     117
  t-test                  -1.60        0.25        -1.49         -4.11
  signed-rank             -1.26       -0.28        -0.41         -5.14
Upgrade                    0.03       -0.07        -0.01          0.05     376
  t-test                   0.53       -1.04        -0.09          0.51
  signed-rank             -0.31       -0.54        -0.19          1.11
Upgrade/Positive           0.05        0.18         0.20          0.23      19
  t-test                   0.40        1.78         1.16          1.22
  signed-rank              0.28        1.77         0.24          1.57

Table 9: Cumulative abnormal returns (CAR) at the issuer level by type of event (rating / review), time period January 2003 - June 2011. All transaction and rating data from TRACE and Mergent FISD. Abnormal returns are computed as the difference between raw (interest-accrued) daily simple returns on the bond and the contemporaneous matching-portfolio simple return, constructed by matching rating and time to maturity, over a specific time window. The t-stat and the Wilcoxon signed-rank statistic are reported under each value.


Multivariate regression for Downgrades (1) (2) (3) NEGATIVE FALLEN ANGEL NAIC 1-2 EXPECTED POSITIVE CAR(-4,-1) POSITIVE CAR(-10,-1) OLD RATING ON WATCH MR SPR JUMP COUPON TOTAL AMOUNT RECESSION Constant

Observations R-squared

(4)

(5)

-1.514*** -1.482*** -1.622*** -1.607*** -1.582*** (-4.153) (-4.084) (-4.310) (-4.248) (-4.279) -1.585*** -1.464** -1.505*** -1.560*** (-2.739) (-2.549) (-2.614) (-2.738) 0.011 -0.117 -0.148 0.068 (0.029) (-0.300) (-0.380) (0.174) 0.543** (2.341) -0.624*** -0.641*** (-2.637) (-2.686) -0.452* (-1.813) -0.108*** -0.111*** (-2.691) (-2.775) 1.128*** 1.122*** (3.756) (3.733) -1.474*** -1.487*** -1.510*** -1.525*** -1.440*** (-4.212) (-4.266) (-4.183) (-4.222) (-4.164) -1.499*** -1.493*** -1.470*** -1.463*** -1.454*** (-4.679) (-4.672) (-4.669) (-4.652) (-4.584) -0.239 -0.207 -0.173 -0.166 -0.242 (-1.386) (-1.212) (-0.948) (-0.909) (-1.439) -0.172* -0.171* 0.004 0.007 -0.150* (-1.843) (-1.821) (0.041) (0.074) (-1.657) 0.030 0.025 -0.002 0.006 0.010 (0.275) (0.224) (-0.015) (0.058) (0.087) -0.395 -0.449* -0.619** -0.615** -0.531** (-1.577) (-1.791) (-2.329) (-2.305) (-2.082) 2.155 2.260 2.566 2.389 2.418 (1.326) (1.389) (1.621) (1.497) (1.515) 2,100 2,100 2,076 0.030 0.033 0.047 *** p<0.01, ** p<0.05, * p<0.1

2,076 0.046

2,100 0.037

Table 10: Multivariate analysis of the main determinants of cumulative abnormal returns at the issuer level in the two days following a downgrade announcement, time period January 2003 - June 2011. The dependent variable "CAR(0,1)" is the firm's cumulative abnormal return (CAR) measured over days 0 and +1 from the event date, and expressed in % values. "NEGATIVE" is a dummy variable equal to one if the downgrade is announced with a negative watch/review. "FALLEN ANGEL" is a dummy variable equal to one if the rating moves from the investment grade (BBB- or above) to the speculative grade (BB+ or below) level. "NAIC 1-2" is a dummy variable equal to one if the rating moves from the NAIC1 (A- or above) to the NAIC2 (BBB+ to BBB-) level. "EXPECTED" is a dummy variable defining uninformative (expected) rating announcements. "POSITIVE CAR(-4,-1)" and "POSITIVE CAR(-10,-1)" are dummy variables equal to one if the pre-downgrade issuer-level cumulative abnormal returns over days -4 to -1 and -10 to -1, respectively, are positive. "OLD RATING" represents the pre-event credit rating, in numeric notches (AAA = 1, AA+ = 2, etc). "ON WATCH" is a dummy variable equal to one if the issuer received a negative watchlist prior to the event. "MR" and "SPR" are two dummy variables equal to one if the announcement is made by Moody's or S&P, respectively. "JUMP" is a variable representing the size of the rating jump in number of notches. "COUPON" represents the yearly coupon of the issue that has been downgraded. "TOTAL AMOUNT" is the log dollar value of the outstanding debt for a specific issuer. "RECESSION" is a dummy variable equal to one if the rating event happened during the NBER recession (December 2007 - January 2010). Robust t-stats are reported in parentheses. All transaction and rating data from TRACE and Mergent FISD.
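The dummy variables defined in the caption can be sketched from an event record as follows. The field names and the example event are illustrative assumptions; the thresholds follow the caption's definitions and its numeric notch scale (AAA = 1, ..., A- = 7, BBB- = 10, BB+ = 11).

```python
# Sketch of the Table 10 regressors, built from one rating event.
# Field names are illustrative; thresholds follow the caption.

def build_dummies(event):
    old, new = event["old_notch"], event["new_notch"]
    return {
        "NEGATIVE": int(event["watch"] == "negative"),
        "FALLEN_ANGEL": int(old <= 10 < new),    # IG -> SG crossing
        "NAIC_1_2": int(old <= 7 < new <= 10),   # NAIC1 -> NAIC2 crossing
        "RECESSION": int("2007-12" <= event["date"][:7] <= "2010-01"),
    }

ev = {"old_notch": 10, "new_notch": 11, "watch": "negative",
      "date": "2008-10-15"}
print(build_dummies(ev))
# {'NEGATIVE': 1, 'FALLEN_ANGEL': 1, 'NAIC_1_2': 0, 'RECESSION': 1}
```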


Multivariate regression for Upgrades (1) (2) (3) POSITIVE

(4)

(5)

0.080 0.078 0.054 0.100 (0.624) (0.619) (0.430) (0.795) RISING STAR 0.701*** 0.695*** 0.707*** 0.673*** (3.298) (3.107) (3.083) (3.132) NAIC 2-1 0.186 0.207 0.165 0.231 (1.304) (1.454) (1.152) (1.638) EXPECTED 0.069 (0.946) NEGATIVE CAR(-4,-1) 0.354*** 0.352*** (5.425) (5.502) NEGATIVE CAR(-10,-1) 0.340*** (5.141) OLD RATING 0.012 0.013 (1.085) (1.150) ON WATCH 0.002 0.022 (0.021) (0.242) MR 0.078 0.059 0.049 0.028 0.051 (0.908) (0.697) (0.579) (0.325) (0.609) SPR 0.129* 0.132* 0.141* 0.133* 0.145** (1.728) (1.789) (1.853) (1.760) (1.968) JUMP 0.071 0.070 0.094 0.090 0.079 (1.143) (1.117) (1.554) (1.455) (1.307) COUPON -0.008 -0.010 -0.026 -0.030 -0.010 (-0.457) (-0.609) (-1.241) (-1.394) (-0.610) TOTAL AMOUNT 0.007 0.014 0.021 0.015 0.016 (0.241) (0.486) (0.699) (0.488) (0.536) RECESSION -0.052 -0.041 -0.032 -0.055 -0.023 (-0.473) (-0.378) (-0.291) (-0.501) (-0.212) Constant -0.114 -0.201 -0.512 -0.375 -0.441 (-0.254) (-0.454) (-1.104) (-0.832) (-0.978) Observations R-squared

0.090 (0.696)

1,261 1,261 1,232 0.004 0.015 0.039 *** p<0.01, ** p<0.05, * p<0.1

1,232 0.037

1,261 0.039

Table 11: Multivariate analysis of the main determinants of cumulative abnormal returns at the issuer level in the two days following an upgrade announcement, time period January 2003 - June 2011. The dependent variable "CAR(0,1)" is the firm's cumulative abnormal return (CAR) measured over days 0 and +1 from the event date, and expressed in % values. "POSITIVE" is a dummy variable equal to one if the upgrade is announced with a positive watch/review. "RISING STAR" is a dummy variable equal to one if the rating moves from the speculative grade (BB+ or below) to the investment grade (BBB- or above) level. "NAIC 2-1" is a dummy variable equal to one if the rating moves from the NAIC2 (BBB+ to BBB-) to the NAIC1 (A- or above) level. "EXPECTED" is a dummy variable defining uninformative (expected) rating announcements. "NEGATIVE CAR(-4,-1)" and "NEGATIVE CAR(-10,-1)" are dummy variables equal to one if the pre-upgrade issuer-level cumulative abnormal returns over days -4 to -1 and -10 to -1, respectively, are negative. "OLD RATING" represents the pre-event credit rating, in numeric notches (AAA = 1, AA+ = 2, etc). "ON WATCH" is a dummy variable equal to one if the issuer received a positive watchlist prior to the event. "MR" and "SPR" are two dummy variables equal to one if the announcement is made by Moody's or S&P, respectively. "JUMP" is a variable representing the size of the rating jump in number of notches. "COUPON" represents the yearly coupon of the issue that has been upgraded. "TOTAL AMOUNT" is the log dollar value of the outstanding debt for a specific issuer. "RECESSION" is a dummy variable equal to one if the rating event happened during the NBER recession (December 2007 - January 2010). Robust t-stats are reported in parentheses. All transaction and rating data from TRACE and Mergent FISD.


[Figure: Expected Upgrades. Abnormal return (%) vs. days before/after the event (-10 to +10). Series: Upgrade_Positive_evol, Upgrade_All_evol, Upgrade_else_evol. Sample: January 2003 - June 2012.]

Figure 9: Evolution of abnormal returns 10 days before and 10 days after an expected upgrade announcement. All transaction and rating data from TRACE and Mergent FISD (January 2003 - June 2011).

[Figure: Unexpected Upgrades. Abnormal return (%) vs. days before/after the event (-10 to +10). Series: Upgrade_Positive_evol, Upgrade_All_evol, Upgrade_else_evol. Sample: January 2003 - June 2012.]

Figure 10: Evolution of abnormal returns 10 days before and 10 days after an unexpected upgrade announcement. All transaction and rating data from TRACE and Mergent FISD (January 2003 - June 2011).


[Figure: Average daily volume in mUSD vs. day in the event window (-10 to +10).]

Figure 11: Approximated average daily traded volumes for each day of the (-10,+10) time window. Size is censored above 5 million USD for investment grade bonds and 1 million USD for speculative grade bonds for each trade. All transaction and rating data from TRACE and Mergent FISD (January 2003 - June 2011).

[Figure: Average daily volume for IG in mUSD vs. day in the event window (-10 to +10).]

Figure 12: Approximated average daily traded volumes for each day of the (-10,+10) time window for investment grade issuers. Size is censored above 5 million USD for each trade. All transaction and rating data from TRACE and Mergent FISD (January 2003 - June 2011).


[Figure: line chart. X-axis: day in the event window (-10 to +10); y-axis: average daily volume for SG in mUSD (3.0 to 7.0).]

Figure 13: Approximated average daily traded volumes for each day of the (-10,+10) time window for speculative grade issuers. Size is censored above 1 million USD for each trade. All transaction and rating data from TRACE and Mergent FISD (January 2003 - June 2011).
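The "approximated" qualifier in Figures 11-13 reflects the censoring of disseminated TRACE trade sizes: each reported trade is capped at the stated threshold before daily volumes are summed. A minimal sketch of that computation for a single bond-day, with illustrative function and variable names:

```python
def censored_daily_volume(trade_sizes_usd, investment_grade: bool) -> float:
    """Sum trade sizes for one bond-day, censoring each trade at the
    dissemination cap (5MM USD for investment grade, 1MM USD for
    speculative grade), and return the total in millions of USD."""
    cap = 5_000_000 if investment_grade else 1_000_000
    return sum(min(size, cap) for size in trade_sizes_usd) / 1e6

# Example: a 7MM USD investment grade trade enters as 5MM
vol = censored_daily_volume([7_000_000, 300_000], investment_grade=True)
# vol == 5.3
```

Averaging these bond-day totals across events, separately for each day of the (-10,+10) window, yields the series plotted above.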
