Slide 1 of 55

Non-Gaussian Financial Mathematics 5, AIMS 2011. William Shaw, UCL ([email protected]). NGFM5: Copula simulation of dependency.


AIMS5.nb

Slide 2 of 55

Introduction

This lecture explains simulation by the method of copulas. We look first at generalities, then at some specific cases, in several categories:
1. Recap of Normal and T
2. Other copulas in 2D
3. Archimedean copulas via transforms
For those of you who do not know about Laplace transforms, I shall give a cookbook approach for the 2D case so you can get going on the simulation. There are differing conventions here and there - I tend to follow Schmidt and Aas (see later).


Slide 3 of 55

I should explain that I am not going to get into the theoretical definitions of copulas here. I am going to focus on the simulation of dependency structures characterized by copulas. A copula is essentially a cumulative distribution function defined on an n-dimensional hypercube satisfying various technical conditions. The essential idea is that the marginals are all uniform distributions, and that the interesting aspects all lie in the dependency properties. One of my favourite entry points to the theory is "Coping with Copulas", by Thorsten Schmidt of the University of Leipzig. Google it!


Slide 4 of 55

Dependency Philosophy

In this framework the dependency is managed completely separately from the marginals. First we simulate the copula (through some method we need to be careful about) by constructing samples from the unit hypercube:

{U1, U2, ..., Un}   (1)

where 0 ≤ Ui ≤ 1, with the given copula dependency structure. Then, given marginal quantile functions Qi(u), the sample of interest is constructed by

{Y1, Y2, ..., Yn} = {Q1(U1), Q2(U2), ..., Qn(Un)}   (2)

It is clear that the Qi can be anything we choose, so any dependency model can in principle be glued to any marginals. This is important because we do not always have multivariate distributions with all the right marginals.
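As a concrete illustration of this two-step recipe, here is a Python sketch of my own (not code from the slides); the Gaussian copula sampler and the particular marginals (exponential and normal) are arbitrary choices for demonstration:

```python
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(0)

# Step 1: copula samples {U1, U2} on the unit square, here via a Gaussian copula
rho = 0.7
L = np.linalg.cholesky([[1.0, rho], [rho, 1.0]])
Z = rng.standard_normal((10000, 2)) @ L.T   # correlated normals
U = norm.cdf(Z)                             # uniform marginals, Gaussian dependency

# Step 2: glue on any marginals via their quantile (inverse CDF) functions Q_i
Y1 = expon.ppf(U[:, 0], scale=2.0)          # exponential marginal
Y2 = norm.ppf(U[:, 1], loc=0.0, scale=3.0)  # normal marginal
```

Swapping either `ppf` call for any other quantile function changes the marginals without touching the dependency structure, which is the whole point of the framework.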


Slide 5 of 55

Copulas from a "real" multivariate density

So how do we make the list {U1, U2, ..., Un}?   (3)


Slide 6 of 55

The Gaussian copula

Let Zi be independent N(0, 1) random variables and consider

Xi = Σ_{k=1}^{n} A_ik Z_k   (4)

for some matrix A. With the Einstein summation convention for repeated indices we can write this as:

Xi = A_ik Z_k   (5)

The covariance matrix of the Xi is then

C_ij = Cov(X_i, X_j) = Cov(A_ik Z_k, A_jm Z_m) = A_ik A_jm Cov(Z_k, Z_m) = A_ik A_jm δ_km = A_ik A_jk = (A.Aᵀ)_ij   (6)


Slide 7 of 55

so that the desired covariance structure is recovered provided we can find a matrix A with the property that

C = A.Aᵀ   (7)

There are two commonly used routes to this decomposition.

Cholesky Decomposition

The Cholesky decomposition of a symmetric positive definite matrix M is a factorization into a unique upper triangular G such that

M = Gᵀ.G   (8)

This factorization has a number of uses, one of which is that, because it is a triangular factorization, it can be used to solve systems of equations involving symmetric positive definite matrices.


Slide 8 of 55

The Cholesky factorization can be computed in Mathematica with the function CholeskyDecomposition. If we take L = Gᵀ then we obtain

M = L.Lᵀ   (9)

where L is lower triangular. Clearly this will do for the matrix A. There are good discussions and program implementations in all versions of Numerical Recipes, especially in Edition III for C++. There is also a decent discussion on Wikipedia: http://en.wikipedia.org/wiki/Cholesky_decomposition


Slide 9 of 55

2D Cholesky on the correlation matrix

In two dimensions, if you first scale out the volatilities, it is easy to write matters down directly. Let

In[3]:= L = {{1, 0}, {r, Sqrt[1 - r^2]}}; MatrixForm[L]

Out[3]//MatrixForm= the lower triangular matrix with rows (1, 0) and (r, Sqrt[1 - r^2]). Then we have:

In[2]:= MatrixForm[Simplify[L.Transpose[L]]]

Out[2]//MatrixForm= the correlation matrix with rows (1, r) and (r, 1).


Slide 10 of 55

and the operation of L on standard normal variables preserves unit variance.

Diagonalization

This might seem a little over the top, but I often prefer to explicitly create the diagonalized system and check the eigenvalues. This will tell you rather clearly if the covariance or correlation structure has been meddled with so that the matrices are no longer positive semi-definite. This is a good check.

In[4]:= vols[n_] := RandomReal[{0.2, 0.5}, n]
In[5]:= volatility = vols[5]
Out[5]= {0.398755, 0.203311, 0.426955, 0.427198, 0.22989}


Slide 11 of 55

rawcorr[n_] := Table[If[i == j, 1, RandomReal[{-0.1, 0.1}]], {i, n}, {j, n}]

[the symmetrization step producing AssetCorr from rawcorr is truncated in the source]

The resulting correlation matrix AssetCorr is:

 1.         -0.0162137  -0.0828789  -0.0928268   0.0274449
-0.0162137   1.          0.0094464  -0.0172851  -0.0055935
-0.0828789   0.0094464   1.          0.0274902   0.0141664
-0.0928268  -0.0172851   0.0274902   1.          0.0156158
 0.0274449  -0.0055935   0.0141664   0.0156158   1.


Slide 12 of 55

In[27]:= AssetCov = Outer[Times, volatility, volatility] * AssetCorr;
{LocalEvals, LocalEvecs} = Eigensystem[AssetCov];

A matrix that is zero apart from diagonal entries that are the pseudo-vols of the local independent processes is:

In[28]:= LocalDiag = Table[If[i == j, Sqrt[LocalEvals[[i]]], 0], {i, 5}, {j, 5}]
Out[28]= {{0.445772, 0, 0, 0, 0}, {0, 0.421205, 0, 0, 0}, {0, 0, 0.384469, 0, 0}, {0, 0, 0, 0.229659, 0}, {0, 0, 0, 0, 0.203206}}

In[29]:= vals = RandomReal[NormalDistribution[0, 1], {1000, 5}];

[the subsequent input, which maps these normals through LocalDiag and the eigenvector basis to form the correlated sample CorrSet, is truncated in the source]

Slide 13 of 55

In[32]:= simcov = Covariance[CorrSet];

The maximum absolute difference in the entries between the inputs and the simulation is:

In[33]:= Max[Flatten[Abs[simcov - AssetCov]]]
Out[33]= 0.00742902

Checking for trouble

Let's be a bit more cavalier with some invented correlations:

In[34]:= rawcorr[n_] := Table[If[i == j, 1, RandomReal[{-1, 1}]], {i, n}, {j, n}]

Slide 14 of 55

In[36]:= AssetCorr = corr[5];

Looks plausible enough if you check the entries.

In[41]:= AssetCov = Outer[Times, volatility, volatility] * AssetCorr;
CholeskyDecomposition[AssetCov]

CholeskyDecomposition::posdef : The matrix {{0.159006, 0.000789389, 0.08653, 0.115204, -0.063937}, <<3>>, {-0.063937, -0.0200026, <<20>>, ..., 0.0528495}} is not sufficiently positive definite to complete the Cholesky decomposition to reasonable accuracy.

Out[41]= CholeskyDecomposition[{{0.159006, 0.000789389, 0.08653, 0.115204, -0.063937}, {0.000789389, 0.0413354, 0.00735775, -0.068139, -0.0200026}, {0.08653, 0.00735775, 0.182291, -0.104023, 0.00449454}, {0.115204, -0.068139, -0.104023, 0.182498, -0.0718431}, {-0.063937, -0.0200026, 0.00449454, -0.0718431, 0.0528495}}

Slide 15 of 55

You get more information this way:

In[40]:= Eigenvalues[AssetCov]
Out[40]= {0.333711, 0.263745, 0.0686397, -0.0507179, 0.00260123}

The negative eigenvalue shows that the matrix is not positive semi-definite. The issue of what to do is a subtle one. When this first happened to me, a trader had interfered with the data, and the problem went away when the covariance was computed directly from the historical data. More generally, and especially when there are many assets compared to the history size, there are real problems. See e.g. Peter Jäckel's work on rescuing correlation matrices. You can do a lot by adding a small multiple of the identity and rescaling.
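A minimal sketch of the identity-shift repair just mentioned (my own illustration, not code from the slides): shift the spectrum up just enough to clear the most negative eigenvalue, then rescale back to unit diagonal.

```python
import numpy as np

def repair_correlation(C, eps=1e-6):
    """Make a symmetric matrix positive definite by adding a multiple of the
    identity, then rescale so the diagonal is 1 again."""
    lam_min = np.linalg.eigvalsh(C).min()
    if lam_min < eps:
        C = C + (eps - lam_min) * np.eye(len(C))
    d = np.sqrt(np.diag(C))
    return C / np.outer(d, d)          # rescale back to a correlation matrix

# a 2x2 "correlation" matrix with |rho| > 1 is not positive semi-definite
bad = np.array([[1.0, 1.2],
                [1.2, 1.0]])
fixed = repair_correlation(bad)
print(np.linalg.eigvalsh(fixed).min() > 0)   # prints True
```

This is the crudest of the available repairs; it pulls every off-diagonal entry slightly toward zero, so for production work the more careful methods in Jäckel's treatment are preferable.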


Slide 16 of 55

From the distribution to the copula

Having made samples, we apply the CDF to get to the copula. Let's drop down to dimension two to see what is happening. In this case we can use the easy Cholesky representation. First we view the bivariate sample:

r = 0.5; L = {{1, 0}, {r, Sqrt[1 - r^2]}};
uncorrsamp = RandomReal[NormalDistribution[0, 1], {1000, 2}];
corrsamp = (L.#1 &) /@ uncorrsamp;
ListPlot[corrsamp, AspectRatio -> 1]

[scatter plot of the correlated bivariate normal sample]


Slide 17 of 55

Now we need the CDF (in C++ we do this with either a rational approximation or the Marsaglia formula); here we will be lazy and do it with the built-in gadget:

In[56]:= Ncdf[x_] := (Erf[x/Sqrt[2]] + 1)/2

In[57]:= copsamp = Map[Ncdf[#] &, corrsamp];
ListPlot[copsamp, AspectRatio -> 1]

[scatter plot of the copula sample on the unit square]


Slide 18 of 55

Other correlations

Keeping the same underlying random data, we can fiddle with the correlations and see the impact on the copula samples.

In[63]:= r = 0.9; L = {{1, 0}, {r, Sqrt[1 - r^2]}};
corrsamp = (L.#1 &) /@ uncorrsamp;
copsamp = (Ncdf[#1] &) /@ corrsamp;
ListPlot[copsamp, AspectRatio -> 1]

[copula scatter plot for r = 0.9: points cluster along the diagonal]


Slide 19 of 55

In[67]:= r = -0.95; L = {{1, 0}, {r, Sqrt[1 - r^2]}};
corrsamp = (L.#1 &) /@ uncorrsamp;
copsamp = (Ncdf[#1] &) /@ corrsamp;
ListPlot[copsamp, AspectRatio -> 1]

[copula scatter plot for r = -0.95: points cluster along the anti-diagonal]


Slide 20 of 55

r = 0; L = {{1, 0}, {r, Sqrt[1 - r^2]}};
corrsamp = (L.#1 &) /@ uncorrsamp;
copsamp = (Ncdf[#1] &) /@ corrsamp;
ListPlot[copsamp, AspectRatio -> 1]

[copula scatter plot for r = 0: points spread uniformly over the unit square]


Slide 21 of 55

The T variation(s)

The machinery developed above can all be re-jigged to make samples from a T distribution and its associated copula. In the good old-fashioned multivariate T, often known as "the T copula", the only thing that changes is that, having mixed up the normal samples, all the components of a single sample are divided by the square root of the same normalized chi-square variable. As shown in my paper with Kim Lee in the Journal of Multivariate Analysis, this may be generalized in all sorts of ways: Bivariate Student t distributions with variable marginal degrees of freedom and independence, JMVA 99 (2008) 1276.


Slide 22 of 55

In[81]:= gaussuncorrsamp = RandomReal[NormalDistribution[0, 1], {10000, 2}];

[the remaining inputs on this slide, which divide each pair by the square root of a shared normalized chi-square draw to form Tuncorrsamp, are truncated in the source]
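The mixing step for the T case reads as follows in Python (my own sketch with numpy; nu = 2 mirrors the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
nu = 2                                # degrees of freedom
n = 10000

gauss = rng.standard_normal((n, 2))   # independent N(0,1) pairs
chi2 = rng.chisquare(nu, size=n)      # ONE chi-square draw per pair

# both components of a pair share the same normalized chi-square divisor
t_uncorr = gauss / np.sqrt(chi2 / nu)[:, None]
```

Dividing both components by the same draw is what couples the tails; dividing each component by its own independent draw would give t marginals with no tail dependence instead.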

Slide 23 of 55

For even degrees of freedom n the T CDF is an elementary function, built from a simple recursion:

In[91]:= a[0, n_] := Gamma[(n + 1)/2]/Gamma[n/2]/Sqrt[n Pi];
a[k_, n_] := a[k, n] = (n - 2 k)/n/(2 k + 1) a[k - 1, n];

In[93]:= TCDF[n_, x_] := 1/2 + x*Sum[a[p, n]*x^(2 p), {p, 0, n/2 - 1}]/(1 + x^2/n)^((n - 1)/2)

In[94]:= TraditionalForm[Simplify[TCDF[2, x]]]
Out[94]//TraditionalForm= 1/2 + x/(2 Sqrt[x^2 + 2])

In[95]:= TraditionalForm[Simplify[TCDF[4, x]]]
Out[95]//TraditionalForm= (x^3 + 6 x + (x^2 + 4)^(3/2))/(2 (x^2 + 4)^(3/2))
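The recursion translates directly to Python and can be cross-checked against scipy.stats.t.cdf (a sketch; tcdf_even is my own name for the function):

```python
import math
from scipy.stats import t as student_t

def tcdf_even(n, x):
    """Student t CDF for even degrees of freedom n, via the a[k, n] recursion."""
    a = math.gamma((n + 1) / 2) / math.gamma(n / 2) / math.sqrt(n * math.pi)
    s = 0.0
    for p in range(n // 2):
        s += a * x ** (2 * p)
        # a[k] = (n - 2k)/(n (2k + 1)) * a[k-1], with k = p + 1
        a *= (n - 2 * (p + 1)) / n / (2 * (p + 1) + 1)
    return 0.5 + x * s / (1 + x * x / n) ** ((n - 1) / 2)

# cross-check against scipy at a few points
for n in (2, 4, 8):
    for x in (-1.5, 0.0, 0.7, 3.0):
        assert abs(tcdf_even(n, x) - student_t.cdf(x, df=n)) < 1e-9
```

For n = 2 this reduces to 1/2 + x/(2 Sqrt[x^2 + 2]), matching the Mathematica output above.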


Slide 24 of 55

In[170]:= r = 0; L = {{1, 0}, {r, Sqrt[1 - r^2]}};
corrsamp = (L.#1 &) /@ Tuncorrsamp;
copsamp = (TCDF[2, #1] &) /@ corrsamp;
ListPlot[copsamp, AspectRatio -> 1, PlotStyle -> PointSize[0.005]]

[T copula scatter plot: even with r = 0, points cluster in the corners]

This plot illustrates the phenomenon of asymptotic tail dependence, which may be quantified.


Slide 25 of 55

Other copulas in two dimensions

It is useful to carry out a corresponding analysis for some other copulas in dimension two, as the simulation then remains straightforward. Let's look at some examples. In this case we do not work from some other distribution, but go directly for samples from the unit square, i.e., directly from the copula. I will focus on simulation methods rather than dwell on the motivation. Having thought about what order to do things in, I will first supply you with some "cookbook recipes" for the 2D case, then delve a little deeper into where things come from and how to go to higher dimensions. Then at least you will be able to do some simulation right away. Note that the focus here will be on modern methods based on the Laplace transform methodology.


Slide 26 of 55

Some "Archimedean" cases in 2D

We shall define what this means more carefully later, but for now let's look at some simple examples of 2D simulation.

Frank 2-copula: In this case the simulation of a pair of correlated uniform deviates reduces to the following. Let (v1, v2) be independent samples from a uniform distribution on [0, 1]. Then we set U1 = v1 and, as a function of the copula parameter a, the second rv is defined by the following:

In[174]:= Frankcorrpair[alpha_] := Module[{uone = RandomReal[], utwo, vtwo = RandomReal[], emau},
  emau = Exp[-alpha uone];
  utwo = If[alpha == 0, vtwo,
    -Log[1 + vtwo (1 - Exp[-alpha])/(vtwo (emau - 1) - emau)]/alpha];
  {uone, utwo}]

Slide 27 of 55

In[175]:= SeedRandom[100];
frankten = Table[Frankcorrpair[10], {10000}];
franktenplot = ListPlot[frankten, AspectRatio -> 1]

[Frank copula scatter plot for alpha = 10: positive dependence]


Slide 28 of 55

In[109]:= SeedRandom[100];
frankten = Table[Frankcorrpair[-10], {10000}];
ListPlot[frankten, AspectRatio -> 1]

[Frank copula scatter plot for alpha = -10: negative dependence]


Slide 29 of 55

In[112]:= SeedRandom[100];
frankten = Table[Frankcorrpair[0], {10000}];
ListPlot[frankten, AspectRatio -> 1]

[Frank copula scatter plot for alpha = 0: independence]


Slide 30 of 55

In[115]:= SeedRandom[100];
frankten = Table[Frankcorrpair[3], {10000}];
ListPlot[frankten, AspectRatio -> 1]

[Frank copula scatter plot for alpha = 3: mild positive dependence]


Slide 31 of 55

Density function for Frank

The copula itself is given by a JOINT CDF:

In[178]:= FrankCop[u_, v_, a_] := Module[{expal = Exp[-a]},
  -1/a Log[1 + (expal^u - 1)*(expal^v - 1)/(expal - 1)]]

To get the copula density for comparison with the scatterplots we differentiate the copula with respect to both u and v:

In[179]:= FrankDenRaw[u_, v_, a_] = Simplify[D[FrankCop[u, v, a], u, v]]
Out[179]= (E^a (E^-a)^(u + v) (-1 + E^a) Log[E^-a]^2)/(a (1 + E^a (-(E^-a)^u - (E^-a)^v + (E^-a)^(u + v)))^2)

In[180]:= FrankDen[u_, v_, a_] := FrankDenRaw[u, v, a]


Slide 32 of 55

In[123]:= frankdenten = ContourPlot[FrankDen[u, v, 10], {u, 0.001, 1}, {v, 0.001, 1}]

[contour plot of the Frank copula density for alpha = 10]

Slide 33 of 55

In[181]:= Show[GraphicsRow[{franktenplot, frankdenten}]]

[side-by-side comparison of the Frank scatterplot and the density contours]


Slide 34 of 55

Clayton 2-copula

This is easily done if one has a gamma simulator, as discussed by K. Aas (2004), "Modelling the dependence structure of financial assets: A survey of four copulas." The Clayton case is good when there is one-sided tail dependence. The procedure for this case, where the copula has a parameter d, is as follows:
1. Simulate a Gamma variate X distributed as Gamma(1/d, 1).
2. Simulate 2 (more generally, one per dimension) independent standard uniforms V1, V2.
3. Return

U1 = (1 - log(V1)/X)^(-1/d);  U2 = (1 - log(V2)/X)^(-1/d)   (10)


Slide 35 of 55

In[127]:= Claytoncorrpair[d_] := Module[{vone = RandomReal[], vtwo = RandomReal[],
    X = Random[GammaDistribution[1/d, 1]]},
  {(1 - Log[vone]/X)^(-1/d), (1 - Log[vtwo]/X)^(-1/d)}]

In[186]:= clayten = Table[Claytoncorrpair[1], {2000}];
ListPlot[clayten, AspectRatio -> 1]

[Clayton copula scatter plot for d = 1: lower-tail clustering]
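The same recipe in Python (my own sketch; numpy's gamma sampler plays the role of the GammaDistribution draw, and clayton_sample is my name for it):

```python
import numpy as np

def clayton_sample(delta, n, rng):
    """n pairs from the Clayton copula via the gamma frailty construction."""
    x = rng.gamma(shape=1.0 / delta, scale=1.0, size=n)      # step 1: X ~ Gamma(1/delta, 1)
    v = rng.uniform(size=(n, 2))                             # step 2: independent uniforms
    return (1.0 - np.log(v) / x[:, None]) ** (-1.0 / delta)  # step 3

rng = np.random.default_rng(100)
u = clayton_sample(10.0, 3000, rng)
assert u.shape == (3000, 2) and (u > 0).all() and (u < 1).all()
```

Since log(v) is negative, the base 1 - log(v)/x exceeds 1, and the negative power maps it back into (0, 1), so the samples land on the unit square as required.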


Slide 36 of 55

In[188]:= SeedRandom[100];
clayten = Table[Claytoncorrpair[10], {3000}];
ListPlot[clayten, AspectRatio -> 1]

[Clayton copula scatter plot for d = 10: strong lower-tail dependence]


Slide 37 of 55

The Clayton density function

This time I will just write this down directly:

In[190]:= ClayCopDen[u_, v_, d_] := (1 + d) (u*v)^(-1 - d) (u^(-d) + v^(-d))^(-1/d - 2)

In[143]:= claydenten = ContourPlot[ClayCopDen[u, v, 10], {u, 0.001, 1}, {v, 0.001, 1}]

Slide 38 of 55

In[191]:= Show[GraphicsGrid[{{clayplotten, claydenten}}]]

[side-by-side comparison of the Clayton scatterplot and density contours]

In general, before doing any detailed applications, it is a good idea to cross-check (a) the copula (joint CDF on the square); (b) the density function on the square; (c) the form of your scatterplots vs the density. This can eliminate some errors of convention (flipping, etc.).


Slide 39 of 55

Digging Deeper into Archimedean Copulas

The recipe approach presented so far gets us simulating quickly in two dimensions but leaves us a little short on insight and on generalizations to higher dimensions. We can make more sense of matters by proceeding more formally and introducing a Laplace transform description. The d-dimensional Archimedean copulas may be written as

C(u1, ..., ud) = φ^(-1)(φ(u1) + ... + φ(ud))   (11)

The function φ is a decreasing function called the generator of the copula; φ^(-1) denotes the inverse of the generator. The generators for some common cases are:

Frank Generator

φ_F(u, a) = log[(e^(-a) - 1)/(e^(-a u) - 1)]   (12)


Slide 40 of 55

Clayton Generator

φ_Cl(u, θ) = u^(-θ) - 1   (13)

Gumbel generator

We have not yet discussed this one, but it has the simple generator

φ_G(u, θ) = (-log(u))^θ   (14)

In general these things make sense provided φ is a strictly decreasing convex function mapping [0, 1] to [0, ∞] with φ(1) = 0. See the book by Nelsen (1999) for details.

Laplace transform descriptions

The idea is to regard φ^(-1)(t) as the Laplace transform of an associated density function g(x) with associated CDF G(x). That is, we set


Slide 41 of 55

φ^(-1)(t) = ∫_0^∞ e^(-t x) g(x) dx   (15)

Let's look at these associated densities first, before saying what we do with them.

Density in the Clayton case

In this case the generator and its inverse are just

φ_Cl(u, θ) = u^(-θ) - 1   (16)

φ_Cl^(-1)(t, θ) = (1 + t)^(-1/θ)   (17)

In[192]:= InverseLaplaceTransform[(1 + t)^(-1/q), t, x]
Out[192]= E^(-x) x^(-1 + 1/q)/Gamma[1/q]


Slide 42 of 55

To recognize this we pull out the gamma density function:

In[193]:= PDF[GammaDistribution[a, 1], x]
Out[193]= E^(-x) x^(-1 + a)/Gamma[a] for x > 0, and 0 otherwise

and we see a gamma distribution with a = 1/θ.

Density in the Frank case - discrete!

It is easiest to quote the density and check its transform:

In[149]:= Sum[(1 - Exp[-a])^k/(k a) Exp[-k t], {k, 1, Infinity}]
Out[149]= -Log[E^(-t - a) (1 - E^a + E^(t + a))]/a


Slide 43 of 55

and this is the functional inverse of

t = φ_F(u, a) = log[(e^(-a) - 1)/(e^(-a u) - 1)]   (18)

So the associated density is actually discrete, with P(V = k) given for k = 1, 2, ... by

(1 - e^(-a))^k / (k a)

Density in the Gumbel case

The inverse generator is given by

In[150]:= Exp[-t^(1/q)]

It only comes out simply for a few values of q, e.g. 2:


Slide 44 of 55

In[152]:= InverseLaplaceTransform[Exp[-t^(1/2)], t, x]
Out[152]= E^(-1/(4 x))/(2 Sqrt[Pi] x^(3/2))

For general values of q we obtain one of the so-called stable distributions. See the note by Aas for an algorithm.

The algorithm for d-dimensional Archimedean copulas

This may now be stated simply; for the proof see McNeil, Frey & Embrechts (2005).
1. Generate a random variable V with density g and CDF G;
2. Generate IID uniforms X1, X2, ..., Xd;
3. Let

Ui = φ^(-1)[-log(Xi)/V]   (19)
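The three-step algorithm can be written generically in Python; plugging in the Clayton inverse generator φ^(-1)(t) = (1 + t)^(-1/θ) and the frailty V ~ Gamma(1/θ, 1) recovers the earlier Clayton recipe (my own sketch; the function names are mine):

```python
import numpy as np

def archimedean_sample(phi_inv, draw_v, d, n, rng):
    """Generic Archimedean sampler: U_i = phi_inv(-log(X_i) / V)."""
    v = draw_v(rng, n)                       # step 1: frailty variable V
    x = rng.uniform(size=(n, d))             # step 2: d IID uniforms per sample
    return phi_inv(-np.log(x) / v[:, None])  # step 3

# Clayton instance: phi_inv(t) = (1 + t)^(-1/theta), V ~ Gamma(1/theta, 1)
theta = 2.0
u = archimedean_sample(
    phi_inv=lambda t: (1.0 + t) ** (-1.0 / theta),
    draw_v=lambda rng, n: rng.gamma(1.0 / theta, 1.0, n),
    d=3, n=2000, rng=np.random.default_rng(0))

assert u.shape == (2000, 3) and (u > 0).all() and (u < 1).all()
```

With these Clayton ingredients, phi_inv(-log(X)/V) = (1 - log(X)/V)^(-1/theta), which is exactly the two- and three-dimensional recipe used earlier.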


Slide 45 of 55

In practice, making V is the hard part unless we are in the Clayton case, where we can use a Gamma variate. It is interesting to consider numerical transform methods as well. For non-Archimedean copulas in general dimensions one is faced with a difficult iterative process using conditional densities (see Cherubini et al., Copula Methods in Finance).

Clayton in d = 3

In[155]:= Claytoncorrtrip[d_] := Module[{vone = RandomReal[], vtwo = RandomReal[], vthree = RandomReal[],
    X = Random[GammaDistribution[1/d, 1]]},
  {(1 - Log[vone]/X)^(-1/d), (1 - Log[vtwo]/X)^(-1/d), (1 - Log[vthree]/X)^(-1/d)}]

Slide 46 of 55

In[194]:= simdata = Table[Claytoncorrtrip[5], {i, 1, 1000}];

[3D scatter plot of the trivariate Clayton sample]

Slide 47 of 55

Kendall's tau for Archimedean copulae

There are some nice properties of such copulae that facilitate calibration. In general, to encode dependency, we use concepts like correlation, Spearman's rank correlation, and Kendall's τ. (Look them up on Wikipedia if you do not know the definitions in terms of data; there are nice discussions there.) The Spearman and Kendall quantities are actually functions only of the copula. Kendall's τ may be estimated from data; in terms of a general 2D copula it may be written (this is not obvious!)

τ = 4 E[C(U, V)] - 1   (20)

In the 2D Archimedean case this can be shown to be given by

τ = 4 ∫_0^1 φ(t)/φ'(t) dt + 1   (21)


Slide 48 of 55

For Clayton this gives:

In[161]:= p[u_] := u^(-q) - 1;
Simplify[4*Integrate[p[u]/p'[u], {u, 0, 1}, Assumptions -> q > -1] + 1]
Out[161]= q/(2 + q)

For Gumbel this gives:

In[162]:= p[u_] := (-Log[u])^q;
Simplify[4*Integrate[p[u]/p'[u], {u, 0, 1}, Assumptions -> q > -1] + 1]
Out[163]= (-1 + q)/q
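The Clayton result τ = θ/(θ + 2) can be sanity-checked empirically (my own sketch; scipy's kendalltau does the rank counting, and the sampler is the gamma frailty recipe from earlier):

```python
import numpy as np
from scipy.stats import kendalltau

theta = 2.0                                  # predicted tau = theta/(theta + 2) = 0.5
rng = np.random.default_rng(7)

# Clayton sample via the gamma frailty construction
X = rng.gamma(1.0 / theta, 1.0, 4000)
V = rng.uniform(size=(4000, 2))
U = (1.0 - np.log(V) / X[:, None]) ** (-1.0 / theta)

tau_hat, _ = kendalltau(U[:, 0], U[:, 1])
assert abs(tau_hat - theta / (theta + 2)) < 0.05
```

The standard error of the tau estimate at this sample size is of order 0.01, so the 0.05 tolerance is comfortable.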


Slide 49 of 55

For Frank this has to be expressed in terms of polylogarithms or Debye functions, but inversion is possible. There are many other aspects of calibration and of choosing a copula. Doing some plots is always a good first step, and one may often wish to use measures of tail dependence as well. The use of a calibration formula based on e.g. Kendall's tau enables the use of different copulae on a like-with-like basis. We can go back to our risk models and explore the effect of the choice of dependency structure on risk numbers. Here is a very quick look.


Slide 50 of 55

Dependency - quick look

There are many types of dependency:
1. Company A has a real influence on company B
2. Both A and B are influenced by a common external factor
3. Spurious associations (same sector)
A correlation number does not capture all the possibilities. This is not an excuse for throwing rocks at those who tried to come up with models to couple systems in a tractable manner. A low point in the discussion of the crisis has to be Felix Salmon's hype in Wired Magazine, "The Formula that Killed Wall Street". Despite getting quotes from others that, when read carefully, made it pretty clear that the real problem was a naive (constant, historical, poor) choice of correlation, Salmon tried to pin a lot of blame on the Gaussian copula.


Slide 51 of 55

Exercise: Sample from the Gauss, T (pick your own dof), and Clayton copulas with Kendall's τ varying, and measure your favourite risk function with the same marginals. Deduce that the Gaussian copula is not the problem! Here is the initialization code (run it all):

In[222]:= ClaytonParamFromTau[KendallTau_] := Module[{t = KendallTau}, 2 t/(1 - t)]
In[223]:= GaussCorrFromTau[KendallTau_] := Sin[Pi*KendallTau/2]
In[224]:= Ncdf[x_] := (Erf[x/Sqrt[2]] + 1)/2
In[225]:= uncorrsamp = RandomReal[NormalDistribution[0, 1], {1000, 2}];
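The two calibration maps in Python, with exact spot checks (my own sketch; the function names mirror the slides):

```python
import math

def clayton_param_from_tau(tau):
    """Invert tau = theta/(theta + 2) for the Clayton parameter theta."""
    return 2.0 * tau / (1.0 - tau)

def gauss_corr_from_tau(tau):
    """Invert tau = (2/pi) arcsin(rho) for the Gaussian (and T) correlation."""
    return math.sin(math.pi * tau / 2.0)

# tau = 1/3 corresponds to Clayton theta = 1; tau = 0 to independence
assert abs(clayton_param_from_tau(1.0 / 3.0) - 1.0) < 1e-12
assert gauss_corr_from_tau(0.0) == 0.0
assert abs(gauss_corr_from_tau(0.5) - math.sin(math.pi / 4)) < 1e-15
```

Matching the copulas on Kendall's τ in this way is what makes the like-with-like comparison of the following slides meaningful.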

Slide 52 of 55

In[226]:= MakeGaussGraph[t_] := (r = GaussCorrFromTau[t];
  L = {{1, 0}, {r, Sqrt[1 - r^2]}};
  corrsamp = (L.#1 &) /@ uncorrsamp;
  copsamp = (Ncdf[#1] &) /@ corrsamp;
  ListPlot[copsamp, AspectRatio -> 1, PlotStyle -> PointSize[0.005], PlotLabel -> "Gaussian"])

In[227]:= Clear[a, m, x, k]
In[228]:= a[0, m_] := Gamma[(m + 1)/2]/Gamma[m/2]/Sqrt[m Pi];
a[k_, m_] := a[k, m] = (m - 2 k)/m/(2 k + 1) a[k - 1, m];
In[230]:= TCDF[m_, x_] := 1/2 + x*Sum[a[p, m] x^(2 p), {p, 0, m/2 - 1}]/(1 + x^2/m)^((m - 1)/2)

Slide 53 of 55

In[231]:= TCDF[2, x]
Out[231]= 1/2 + x/(2 Sqrt[2] Sqrt[1 + x^2/2])

In[232]:= tdenoms = Sqrt[1/2 RandomReal[ChiSquareDistribution[2], {1000}]];
In[233]:= tuncorrsamp = uncorrsamp/tdenoms;
In[234]:= MakeTGraph[t_] := (rt = GaussCorrFromTau[t];
  L = {{1, 0}, {rt, Sqrt[1 - rt^2]}};
  tcorrsamp = (L.#1 &) /@ tuncorrsamp;
  tcopsamp = (TCDF[2, #1] &) /@ tcorrsamp;
  ListPlot[tcopsamp, AspectRatio -> 1, PlotStyle -> PointSize[0.005], PlotLabel -> "T_2 copula"])


Slide 54 of 55

In[235]:= Claytoncorrpair[d_] := Module[{vone = RandomReal[], vtwo = RandomReal[],
    X = Random[GammaDistribution[1/d, 1]]},
  {(1 - Log[vone]/X)^(-1/d), (1 - Log[vtwo]/X)^(-1/d)}]

In[236]:= MakeClayGraph[ct_] := (clay = Table[Claytoncorrpair[ClaytonParamFromTau[ct]], {1000}];
  ListPlot[clay, AspectRatio -> 1, PlotStyle -> PointSize[0.005], PlotLabel -> "Clayton"])

Correlation spiking against copula choice

The down tail starts to look the same once you spike the rank correlation measure - here Kendall's τ:

In[237]:= Manipulate[
  GraphicsGrid[{{MakeGaussGraph[t], MakeTGraph[t], MakeClayGraph[t]}}, ImageSize -> 500],
  {{t, 0.5, "Kendall t"}, 0.001, 0.97},
  {{t, 0.5, "Kendall t"}, N[Range[1, 9]/10]},
  ContinuousAction -> True]

[interactive panel showing the Gaussian, T_2, and Clayton copula scatterplots side by side as Kendall's τ varies]


Slide 55 of 55

Further References

K. Aas, 2004, Modelling the dependence structure of financial assets: A survey of four copulas. Norwegian Computing Centre, report SAMBA/22/04, December 2004. http://www.nr.no/files/samba/bff/SAMBA2204c.pdf

U. Cherubini, E. Luciano, W. Vecchiato, 2004, Copula Methods in Finance, Wiley.

P. Embrechts, F. Lindskog, A.J. McNeil, 2003, Modelling dependence with copulas and applications to risk management. In Handbook of Heavy Tailed Distributions in Finance, edited by S.T. Rachev, Elsevier/North-Holland, Amsterdam. A 2001 preprint version is available at: http://www.risklab.ch/ftp/papers/DependenceWithCopulas.pdf

T. Schmidt, 2006, Coping with Copulas. In Copulas: From Theory to Applications in Finance, RISK Books. Also at: http://www.math.uni-leipzig.de/~tschmidt/TSchmidt_Copulas.pdf
