Managerial Methods to Control the Downside Risk of Derivatives

Patrick L. Leoni

Department of Business and Economics, University of Southern Denmark, Campusvej 55, DK-5230 Odense M, Denmark, [email protected], and EUROMED School of Management

Abstract

Derivatives are at the very heart of the recent financial disasters, and the surveillance of their downside risk is of paramount importance both to practitioners and regulators. We survey and present original managerial methods to efficiently control the downside risk of derivatives portfolios. We first describe the managerial methods currently used in practice and their relative cost, and we then show that the most common methods actually aggravate this downside risk. We then argue that selecting appropriate underlyings satisfying specific, easily identifiable statistical properties is a natural way to significantly reduce the downside risk without involving costly managerial interventions.

1 Introduction

Many financial disasters have been caused by derivatives, even though these products were only introduced in the 1970s. The resounding case of Barings Bank in the early 90s, with a loss of $1bn after dubious trades on interest-rate futures that led to the bankruptcy of this century-old bank, was the first warning of the danger of derivatives trading. A long list of similar disasters soon followed, and we give just a few of them. In 1998, Long Term Capital Management lost $4bn on somewhat similar products, likewise resulting in the bankruptcy of this business despite the involvement of well-established figures. The record to date is held by the French bank Société Générale, which realized in 2008 a staggering loss of $7.1bn after dubious trades on standard derivatives. To put this figure in perspective, this loss corresponds to the entire real GDP of Nicaragua in 2008, vaporized in a matter of months. Given the potential severity of losses associated with those products, and their ever-increasing trading volume, both practitioners and regulators have sought managerial techniques to reduce the downside risk of derivatives portfolios.

In what follows, we first describe the managerial methods currently used in practice and their relative cost, and we then show that the most common of these methods actually aggravates the downside risk. We then argue that selecting appropriate underlyings satisfying specific, easily identifiable statistical properties is a natural way to significantly reduce this risk without involving costly managerial interventions.

2 Current managerial methods

The control of downside risk is thus of paramount importance for the long-run survival of financial businesses evolving in uncertain and volatile environments, but also for legal requirements. Financial regulators, often in charge of providing federal insurance to companies in situations of financial distress, require reports of the potential losses and set preemptive measures to control them (see BIS [3]). The most common requirement for every class of financial assets, beyond derivatives, is the report of the Value at Risk (or VaR) of the business's overall portfolio. The VaR is broadly defined as the worst expected loss of a portfolio, at a given likelihood level, over a given period of time and under normal market conditions. The calculation of the VaR is daily practice for risk managers, and institutional regulators typically require that businesses immobilize funds to cover this VaR. Needless to say, this method is costly for businesses, since the opportunity cost of the capital set aside to cover possible losses is high. Moreover, the length of the time horizon is at the heart of risk management, and this relative cost may dramatically increase for long-term hedgers (see Demirer and Lien [4]).

In the case of derivatives portfolios, the VaR method for risk management is recommended but hardly implementable. The main problem is that those products are very volatile, and the VaR may change significantly over a short period of time (sometimes a few hours), even under normal market conditions. Some regulatory measures are implemented to cope with this issue, such as margin requirements, but they are often too expensive and they lack flexibility. Quick responses to a sudden change in the value of a class of underlyings are often critical, and they are not foreseeable when setting the margin level.

Financial managers dealing with derivatives use more practical methods to control the downside risk. The most common tool is the so-called stop-loss strategy, also called benchmarking, which comes down to liquidating a whole position once a pre-determined loss level is reached (see Pedersen [16]). Jarrow and Zhao [9] give several explanations for the popularity of this method, but the most compelling reason arguably stems from psychological factors. It is widely observed in experiments that most individuals display a strong aversion to losses (see the seminal work of Kahneman and Tversky [10]), and the potentially staggering losses of derivatives portfolios can well trigger those drastic and costly liquidations. The efficiency of stop-loss strategies is questioned in the following section. Other, less popular methods are nonetheless sometimes seen in practice. Large diversification across underlyings displaying low cross-correlation is a simple and efficient way of reducing the downside risk (see also the next section), without nonetheless directly controlling it. Adding commodities among the selected underlyings, instead of the commonly chosen stocks or Exchange Traded Funds for instance, is also convenient for reducing this downside risk, with the same drawbacks as before (see Lakshman [11] for other methods and their relative costs).
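To illustrate how such a VaR figure is produced in daily practice, the following minimal Python sketch estimates it by Monte-Carlo simulation. The normal log-return assumption and all parameter values are hypothetical, chosen only for the example; they are not taken from the references above.

import numpy as np

def monte_carlo_var(value, mu, sigma, horizon_days, level, n_paths=100_000, seed=0):
    """Estimate the VaR at the given likelihood level as a loss quantile."""
    rng = np.random.default_rng(seed)
    dt = horizon_days / 252.0  # horizon as a fraction of a trading year
    # Simulate terminal log-returns under an illustrative normal assumption.
    r = rng.normal((mu - 0.5 * sigma**2) * dt, sigma * np.sqrt(dt), n_paths)
    losses = value * (1.0 - np.exp(r))  # positive numbers are losses
    return np.quantile(losses, level)   # loss exceeded with probability 1 - level

# Example: 99% 10-day VaR of a 1,000,000 portfolio with 20% annual volatility.
print(round(monte_carlo_var(1_000_000, mu=0.05, sigma=0.20, horizon_days=10, level=0.99)))

A regulator would then require funds covering this quantile; the cost discussed above is precisely the opportunity cost of immobilizing that amount.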

3 A numerical experiment

In this section, we describe the model, assumptions and trading strategies that we use to cross-compare current managerial methods, and to suggest new ones. We describe a trading strategy based on some derivatives, and we carry out a Monte-Carlo simulation of the methods to estimate their relative effectiveness. We first describe four classes of basic options, and the way our portfolio is formed with those options. The price dynamics of the underlyings will vary over the following sections, to illustrate some managerial aspects, and they will be described in due time.

3.1 The options

We consider 400 different options, which are partitioned into four classes of 100 options each. Every option has a maturity of T = 3 months, starting from the same common date.

• Class 1. 100 cash-or-nothing options with strike price K = 49 and end-payment Q = 10, each of them written on a different underlying. The payoff at time T of a cash-or-nothing option is Q if S_T > K and 0 otherwise, where S_T is the price of the underlying in 3 months.

• Class 2. 100 lookback options, each of them written on a different underlying. The payoff of a lookback option is S_T − min(S), where min(S) is the minimal price of the underlying between 0 and T.

• Class 3. 100 Asian options, each of them written on a different underlying. The payoff of an Asian option is max{0, S_T − S̄}, where S̄ is the mean of the underlying price between 0 and T.

• Class 4. 100 European calls with strike price K = 49, each of them written on a different underlying. The payoff at the end of the 3 months is max{0, S_T − K}.
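For concreteness, the four payoffs can be restated in a few lines of Python; here `path` is assumed to be the array of underlying prices observed from 0 to T, so that path[-1] = S_T. This is only a restatement of the definitions above, not code from the cited studies.

import numpy as np

K, Q = 49.0, 10.0  # strike price and cash payment, as above

def cash_or_nothing(path):       # Class 1: Q if S_T > K, else 0
    return Q if path[-1] > K else 0.0

def lookback(path):              # Class 2: S_T - min(S)
    return path[-1] - np.min(path)

def asian(path):                 # Class 3: max{0, S_T - mean(S)}
    return max(0.0, path[-1] - np.mean(path))

def european_call(path):         # Class 4: max{0, S_T - K}
    return max(0.0, path[-1] - K)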

3.2 Portfolio formation

We now describe how our portfolio is formed. The arbitrary initial wealth of w0 = 1,000,000 is equally allocated among the four classes of options. In every class, the wealth allocated to this class is equally distributed across all of its options. That is, if wj is the wealth allocated to Class j, then for every option in this class we purchase at current market price (described later so as to match the price dynamics of the underlyings) a number of contracts whose total value amounts to wj/100 monetary units (we implicitly assume that the options are infinitely divisible to simplify the analysis, and without any significant loss of generality). Once the first time horizon (3 months) is reached and the payoffs of all of the options are realized, the proceeds are reinvested in a similar portfolio in the same manner as above. We call a quarter any such time where options expire and proceeds are reinvested. We consider at most 24 of those quarters, since the results that we obtain in our simulations are all within this horizon. The fact that options are kept until expiration (or 3 months) in our scenario, instead of being sold before, is not restrictive. Indeed, since the current reselling price of an option reflects any loss or gain incurred so far, the reinvestment of the realized gains and losses into similar assets would not affect the portfolio value, since the underlyings will always follow Lévy processes later on.
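The allocation and reinvestment rules can be summarized as follows. This is a minimal sketch under our own conventions: `option_prices` and `payoffs` stand for the market prices and realized payoffs of the 4 × 100 options, and fractional holdings reflect the infinite divisibility assumed above.

def form_portfolio(wealth, option_prices):
    """Equal split across the 4 classes, then across the 100 options of each
    class; returns the (fractional) number of contracts held per option."""
    w_class = wealth / 4.0
    return [[(w_class / len(prices)) / p for p in prices]
            for prices in option_prices]

def quarter_proceeds(holdings, payoffs):
    """Wealth realized at expiry, to be rolled into the next quarter's portfolio."""
    return sum(n * x
               for h_cls, p_cls in zip(holdings, payoffs)
               for n, x in zip(h_cls, p_cls))

Iterating form_portfolio and quarter_proceeds for up to 24 quarters reproduces the rolling strategy described above.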

4 Benchmarking aggravates the downside risk

In this section, we question the relevance of benchmarking as a way to control the downside risk of derivatives portfolios. We actually argue that benchmarking aggravates this downside risk. We build on Leoni [12, 13], where a Monte-Carlo simulation of the performance of the previous derivatives portfolio is carried out to assess the relevance of benchmarking (see Glasserman [5] for an exhaustive coverage of Monte-Carlo methods in Finance). The experiments in this section rely critically on the assumption that the underlying risk-neutral price dynamics follow a Geometric Brownian Motion; i.e., the price dynamics can be written as

dS_t = µ S_t dt + σ S_t dW_t,    (1)

where S_t is the price of the underlying at time t, µ > 0 is the drift of the process, σ > 0 is the volatility, assumed to be constant over time, and W_t is a Brownian motion with law N(0, t) for every time t. This model is the foundation of the much-celebrated Black-Scholes framework, and it is the most widely used worldwide. The parameters are chosen to closely fit actual data, so as to mimic actual trades. The shocks can be correlated across underlyings in those references; this issue will be discussed later.
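Paths of (1) can be simulated exactly on a discrete grid, as in the sketch below; the parameter values are placeholders rather than the calibrated ones used in Leoni [12, 13].

import numpy as np

def gbm_paths(s0, mu, sigma, T, n_steps, n_paths, seed=0):
    """Exact discretization of dS = mu*S*dt + sigma*S*dW."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    log_paths = np.hstack([np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)])
    return s0 * np.exp(log_paths)

# Example: 10,000 quarterly paths (T = 0.25 years) on a daily grid of 63 steps.
paths = gbm_paths(s0=50.0, mu=0.05, sigma=0.30, T=0.25, n_steps=63, n_paths=10_000)

Feeding such paths into the payoff functions of Section 3.1 and the portfolio rules of Section 3.2 yields the simulated portfolio returns analyzed below.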

[Two panels: "Failure with benchmarking" and "Recovery rate", each plotting a rate in % against the benchmark level (5 to 30).]

Figure 1: Failure and recovery rates, uncorrelated underlyings.

Fig. 1 gives two important pieces of information for evaluating benchmarking in this scenario. First, it gives the likelihood that this portfolio strategy will be terminated before the end of the 24 quarters, which corresponds to the likelihood of liquidation before maturity. Second, it gives the recovery rate, which corresponds to the likelihood that the losses would have been recovered had termination not taken place. Those results show that, for every pre-determined loss level, it is always preferable not to liquidate despite the high likelihood of reaching those losses.

The intuition for those results can be derived from the well-known Gambler's Ruin problem, as described in Grimmett and Stirzaker [6] Ch. 3, even if the random process characterizing our portfolio return is far more complex and thus requires simulations. Consider a gambler tossing a fair coin, winning (resp. losing) one monetary unit if heads (resp. tails) occurs at each toss. The gambler tosses the coin until either her wealth reaches a pre-determined upper bound or ruin occurs. Standard results claim that the game will end for sure, and the average number of tosses needed to reach one of those two events decreases exponentially as the bound gets closer to the initial wealth. A ruin corresponds to reaching a downside benchmark in our setting, and we observe similar results in the portfolio simulation. However, letting the gambler's game continue even after ruin occurs (through retaining barriers, for instance) leads to a wealth distribution at a given future horizon whose mean is different from zero. In our experiment, letting the portfolio reach the horizon of six years leads to an average return always greater than the considered benchmark levels (up to 30% losses).

Similar qualitative results occur when the shocks are correlated, as pointed out in Leoni [12, 13]. In this case, the higher the correlation across underlyings, the higher the likelihood of reaching a pre-determined level of loss. Moreover, the recovery rates are increasing with the correlation level.
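The gambler's-ruin intuition can be checked numerically. The sketch below uses a fair coin-tossing walk as a crude stand-in for the much more complex portfolio process, and compares the frequency of hitting a loss barrier with the frequency of recovery when the same paths are allowed to run to the horizon; all figures are illustrative only.

import numpy as np

def ruin_experiment(w0=100, barrier=70, horizon=1_000, n_paths=50_000, seed=0):
    rng = np.random.default_rng(seed)
    steps = rng.choice([-1, 1], size=(n_paths, horizon))  # fair coin tosses
    wealth = w0 + np.cumsum(steps, axis=1)
    hit = (wealth <= barrier).any(axis=1)    # would have been liquidated
    recovered = hit & (wealth[:, -1] >= w0)  # ...but ends above initial wealth
    print(f"failure rate: {hit.mean():.1%}")
    print(f"recovery rate among liquidated paths: {recovered[hit].mean():.1%}")

ruin_experiment()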

5 Selecting the appropriate underlyings

Benchmarking is thus a dubious way of controlling the downside risk of derivatives portfolios. The selection of appropriate classes of underlyings, as sometimes seen in practice, is thus a natural candidate for at least reducing the downside risk. Leoni [15] is based on the observation that many assets usable as underlyings for derivatives display mean-reverting statistical properties. The insight is that the apparently random evolution of their prices actually displays a recurrent attraction toward a mean value, and this property can prove valuable for controlling our downside risk.

The study reproduces the same Monte-Carlo simulation of the same trading strategy as before, with the same objectives, but the underlyings are now assumed to have this mean-reversion property. The model is taken from Heston [7], and it has actually outperformed most of the standard models used to price derivatives (Bakshi et al. [1]). The model allows the volatility of the underlying asset to be randomly determined, since it follows an Ornstein-Uhlenbeck process. This model also has the critically important empirical property that stochastic volatility and returns are correlated. The parameters are chosen to closely fit actual data of the S&P 500 in the experiment.

Formally, the simulation involves underlyings exhibiting zero pairwise correlation with any other underlyings, and whose price dynamics in a risk-neutral world are described by the following stochastic differential equations

dS_t = I (r − δ) S_t dt + √ν_t S_t dW_t^1,    (2)
dν_t = (α − β ν_t) dt + σ_ν √ν_t dW_t^2,    (3)

where S_t is the price of the underlying at time t, ν_t > 0 is the instantaneous variance of the underlying, assumed to be stochastic, W_t^i (i = 1, 2) are independent Brownian motions with law N(0, t) for every time t, and δ, β, σ_ν are positive parameters. The variable I captures the intensity of reversion of the variable S_t to its mean value r (the risk-free interest rate in this case), in the sense that the higher I the stronger the reversion effect. The variable I will be called the intensity of mean-reversion throughout. Our analysis comes down to observing how an increase in I affects the downside risk of the portfolio formation described earlier.

Fig. 2 gives the failure rate as a function of the mean-reversion intensity. It is striking that, for every loss level, the higher the intensity the lower the failure rate.

[Single panel plotting the failure rate in % against the loss level (5 to 30), for intensities I = 1, 5 and 10.]

Figure 2: Failure rates as a function of the mean-reversion intensity, for various levels of loss.

It turns out that the difference is statistically significant and large for high loss levels (15% losses and above), although it appears minor for lower loss levels. In contrast, the failure rate is roughly halved at the 30% loss level when moving from I = 5 to I = 10, unambiguously showing the major reduction in downside risk obtained by doubling the intensity. It is also surprising to notice that the reduction in downside risk is noticeable when switching from I = 1 to I = 5, at least for large enough loss levels, but the improvements are felt at every loss level only when switching to the highest intensity level I = 10.

The intuition is that, when exhibiting strong mean-reversion effects, the price paths of the underlyings tend to be more concentrated, in a probabilistic sense, around the mean of the stochastic process (see Grimmett and Stirzaker [6] Ch. 13 for more on this issue). When dealing with risk-neutral dynamics, the mean of the price dynamics for the underlyings is typically the risk-free rate. Therefore, risk-neutral price trajectories of the underlyings are increasingly unlikely to exhibit large and permanent deviations from this rate, as the intensity of mean-reversion increases. Since for most derivatives the extreme payoffs, either positive or negative, are obtained when the underlyings' returns are far off the risk-free return, the reduction in downside risk obtains naturally.
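A discretization of (2)-(3) in the spirit of our experiment is sketched below, using a full-truncation Euler scheme to keep the variance usable when it approaches zero; the parameter values one would pass in are placeholders, not the S&P 500 calibration mentioned above.

import numpy as np

def heston_paths(s0, v0, I, r, delta, alpha, beta, sigma_v,
                 T, n_steps, n_paths, seed=0):
    """Full-truncation Euler scheme for the dynamics (2)-(3)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, s0)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rng.standard_normal(n_paths)   # independent of z1, as in the text
        vp = np.maximum(v, 0.0)             # truncate negative variance
        s *= np.exp((I * (r - delta) - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v += (alpha - beta * vp) * dt + sigma_v * np.sqrt(vp * dt) * z2
    return s

Raising I (say from 1 to 10) concentrates the terminal prices around their risk-neutral mean, which is what drives the failure rates of Fig. 2 down.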

6 Not every underlying will work

The identification of appropriate classes of underlyings is thus a natural way to reduce the downside risk without involving costly managerial intervention. Beyond the issue of mean-reversion discussed in the previous section, it is tempting to select underlyings with low fluctuations in volatility. The hope is that a low fluctuation in the underlying's price volatility will directly translate into a low fluctuation in price volatility for the derivative, making the hedging of those products significantly easier and reducing their downside risk (see Hull [8] Ch. 15 for a description of those hedging techniques). We next argue that this selection does not significantly reduce the downside risk, whereas it severely narrows down the class of underlyings to be used for trades.

This point is made in Leoni [15], by using our now standard Monte-Carlo simulation. The experiment is thus identical, with the difference that the underlyings exhibit zero pairwise correlation with any other underlyings, and their price processes in a risk-neutral world are described by the following stochastic differential equations

dS_t = (r − δ) S_t dt + √ν_t S_t dW_t^1,    (4)
dν_t = V (α − β ν_t) dt + σ_ν √ν_t dW_t^2,    (5)

where S_t is the price of the underlying at time t, ν_t > 0 is the instantaneous stochastic variance of the underlying, W_t^i (i = 1, 2) are independent Brownian motions with law N(0, t) for every time t, and δ, β, σ_ν are positive parameters. The variable V, called volatility reversion, captures the stochastic reversion of the volatility ν_t to its mean value α, in the sense that the higher V the stronger the reversion effect and thus the lower the fluctuations of the volatility around its mean. Our analysis comes down to observing how an increase in V affects the downside risk of the portfolio formation described earlier.

Fig. 3 gives the main result of the numerical simulation. For every loss level, the difference in failure rates (or equivalently liquidation likelihood) between V = 1 and V = 4 is very small. When looking at confidence intervals derived from the standard errors given in Leoni [15], it turns out that the difference is not statistically significant at the 95% confidence level for loss levels lower than 30%. The reduction in failure rate is statistically significant only for high loss levels greater than 30%, although most practitioners would not wait until this loss level is reached to liquidate the portfolio. The difference in failure rates becomes noticeable, for every loss level, only when considering V = 15. It thus takes roughly a four-fold increase in stochastic reversion to obtain a reduction that is statistically detectable.

The intuition is that the sensitivity of the derivative price to a change in the underlying price is typically too high to offset a reduction in the volatility of the underlying.

[Single panel plotting the failure rate in % against the loss level (5 to 30), for volatility reversion levels V = 1, 4 and 15.]

Figure 3: Failure rates as a function of the volatility reversion, for various levels of loss.

It turns out that a significant reduction in the fluctuations of the underlying's volatility will still be largely amplified in the price of the derivatives. This issue is mostly explained by the fact that the famous ∆ (see Hull [8] Ch. 15), which captures this amplification effect, is much higher for most derivatives than is commonly believed. Consequently, the volatility of derivatives remains high despite the stabilization of the underlying's volatility through mean-reversion, and the downside risk of the derivative is not significantly affected.
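The amplification can be made concrete on a European call of Section 3.1 under Black-Scholes assumptions. The sketch below computes the call's elasticity, i.e. the percentage move in the option price caused by a 1% move in the underlying; the parameter values are illustrative only.

import numpy as np
from scipy.stats import norm

def bs_call(s, k, r, sigma, tau):
    """Black-Scholes price and delta of a European call."""
    d1 = (np.log(s / k) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    price = s * norm.cdf(d1) - k * np.exp(-r * tau) * norm.cdf(d2)
    return price, norm.cdf(d1)

price, delta = bs_call(s=50.0, k=49.0, r=0.05, sigma=0.30, tau=0.25)
print(delta * 50.0 / price)  # elasticity of roughly 8: a 1% move in the
                             # underlying moves the option price by about 8%

Even with a stabilized underlying volatility, such an elasticity keeps the derivative's own return volatility several times higher than that of the underlying.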

7 Conclusion

We have surveyed the most common managerial methods to control the downside risk of derivatives portfolios. We have seen that the common benchmarking method is at best dubious, and we have suggested that selecting appropriate classes of underlyings is a natural way to control this downside risk. Selecting assets with high mean-reversion effects is an effective risk reduction technique that does not involve costly managerial interventions. Selecting underlyings with low fluctuations in volatility is not as effective, whereas it severely reduces the class of tradable assets. The optimal management techniques are still to be identified, though.

References

[1] Bakshi, G., Cao, C. and Z. Chen (1997) "Empirical Performance of Alternative Option Pricing Models," Journal of Finance 52, 2003–2049.

[2] Basak, S., Shapiro, A. and L. Tepla (2006) "Risk Management with Benchmarking," Management Science 52, 542–557.

[3] Bank for International Settlements (2004) Financial Disclosure in the Banking, Insurance and Securities Sectors: Issues and Analysis. URL: http://www.bis.org/publ/joint08.pdf

[4] Demirer, R. and D. Lien (2003) "Downside Risk for Short and Long Hedgers," International Review of Economics & Finance 12, 25–44.

[5] Glasserman, P. (2004) Monte-Carlo Methods in Financial Engineering. New York: Springer Science.

[6] Grimmett, G. and D. Stirzaker (2006) Probability and Random Processes. Oxford: Oxford University Press.

[7] Heston, S. (1993) "A Closed-Form Solution for Options with Stochastic Volatility, with Applications to Bond and Currency Options," Review of Financial Studies 6, 327–343.

[8] Hull, J. (2006) Options, Futures and Other Derivatives (6th ed.). Upper Saddle River: Prentice Hall.

[9] Jarrow, R. and F. Zhao (2006) "Downside Loss Aversion and Portfolio Management," Management Science 52, 558–566.

[10] Kahneman, D. and A. Tversky (1979) "Prospect Theory: An Analysis of Decision under Risk," Econometrica 47, 263–291.

[11] Lakshman, A. (2008) "An Option Pricing Approach to the Estimation of Downside Risk: A European Cross-Country Study," Journal of Derivatives & Hedge Funds 14, 31–41.

[12] Leoni, P. (2008) "Monte-Carlo Estimations of the Downside Risk of Derivative Portfolios," IEEE Proceedings of the 4th Conference on Wireless Communications, Networking and Mobile Computing, 1–5 (DOI: 10.1109/WiCom.2008.2273).

[13] Leoni, P. (2008) "Stop-Loss Strategies and Derivatives Portfolios," International Journal of Business Forecasting and Marketing Intelligence 1, 82–93.

[14] Leoni, P. (2009) "Stochastic Volatility in Underlyings and the Downside Risk of Derivative Portfolios," forthcoming in the IEEE Proceedings on Engineering Management and Service Sciences.

[15] Leoni, P. (2009) "Downside Risk Control of Derivative Portfolios with Mean-Reverting Underlyings," SDU Working Papers Series.

[16] Pedersen, C. (2001) "Derivatives and Downside Risk," Derivatives Use, Trading and Regulation 7, 251–268.
