Slide 1 of 43

Non-Gaussian Financial Mathematics 3
AIMS 2011
William Shaw, University College London

This talk: perceptions of risk and the choices influencing risk numbers; Student T simulation and analytical technology.

AIMS3.nb

Slide 2 of 43

Previously in non-Gaussian maths... In Lecture 1 we looked at some of the properties of the statistics associated with a single financial variable, with a focus on equity indices that were readily available in our computing environment. We looked at some simple properties of the distributions, in particular their moments, their tails and the use of Maximum Likelihood Estimation. Lecture 2 looked at some risk measures, portfolio basics, and developed some technology for using the Student T distribution. Now we look at the consequences for our perceptions of risk.

Goal: clarify role of maths ingredients

Slide 3 of 43

Within the realm of risk computations addressable by mathematical and statistical analysis, we have some choices to make in quoting a result:
Choice of consequence to report (temperature increase, area flooded, population homeless, ...)
Choice of measure to quote (VaR/quantile, CVaR/ETL, other)
Choice of marginal distributions
Choice of dependency structure
Choice of event frequency


Slide 4 of 43

Psychology and Management matter. Regulation revisions will create an evolving set of minimal requirements that organizations will have to follow. People will always be people, and different organizations will respond differently to the issues raised, e.g. to the notion of giving a risk manager the power to challenge a previously profitable risk taker. The maths gets heavily diluted here, but I will be mischievous in making the following suggestion.


Slide 5 of 43

Organization Induces Bias? The pressures to avoid spending money on projects with no short-term return, and to have a ready scapegoat to hand, create a certain leaning: there may be a bias in those who have to spend the money on insurance to go thin-tailed, so they can save money and also blame any subsequent failure of risk control on matters being 'too extreme' to have been reasonably foreseen.


Slide 6 of 43

Some classic excuses. Hilary Benn, the UK Environment Secretary, on the Nov 2009 floods: whilst the area's flood defences had been built to withstand a "one in 100 year" flood, "what we dealt with last night was probably more like one in a thousand, so even the very best defences, if you have such quantities of rain in such a short space of time, can be over-topped". David Viniar, CFO, Goldman Sachs: "We were seeing things that were 25-standard deviation moves, several days in a row." An interesting figure in relation to Gaussian modelling, in which a single 25-sd event would not be expected in one lifetime of the universe. You need about 10^15 googol universe lifetimes!


Slide 7 of 43

Who or what do we blame? During the recent crisis there were various whinges, excuses and blame games. The mathematics community had a bit of a job on its hands to avoid being assigned the blame, when the root cause of the problem was a simple failure in lending mortgage money to people who could not pay it back. WIRED magazine ran the story "The Formula That Killed Wall Street", saying it was all the fault of the Gaussian copula - which is hopelessly wrong, but a long story, to be considered in a later lecture.


Slide 8 of 43

Chief Risk Officer Bart Simpson. I mentioned Viniar and Benn. I tend to lump all these together as rewordings of CRO Bart Simpson: "It wasn't my fault. I didn't do it. Nobody saw me do it, you can't prove anything." The general idea for "Risk Manager Simpson" is to claim a running assumption that the distribution of events is narrow, so that extreme events are very unlikely and could not possibly have been foreseen. In the context of the 2008 crisis, talking about an unpredictable tsunami-class event is fashionable. Yet we already saw that MLE on pre-2008 data gave a fat-tailed picture.


Slide 9 of 43

Two levels of understanding. Level 1: look at the statistics of the real data and build the consequences into risk management, asset allocation models etc. Level 2: try to underpin the statistics by coming up with a model that derives such statistics as a consequence of a more basic assumption. Level 2 might be quite hard. Level 2 is desirable, especially given the sparsity of extreme events, but not having it is no excuse for not doing Level 1.


Slide 10 of 43

What do we have to do??!! The evidence for fat-tailed behaviour was manifest for decades. There were models (hyperbolic, T, VG, ...). In 1994 Ralph Bailey published a Student T sampling algorithm which modified the Gaussian Box-Muller algorithm (in the code of hundreds of banks' simulation models) by essentially a one-line change. More on that later - the technology to "go fat" was there, at least for a simple model supported by the statistics.


Slide 11 of 43

June 2009. In the Financial Times of 10th June 2009, Lord Turner, Chair of the UK FSA, is quoted as follows: the problem, he said, was that banks' mathematical models assumed a 'normal' or 'Gaussian' distribution of events, represented by the bell curve, which dangerously underestimated the risk of something going seriously wrong. While there is always the unpredictable tsunami event outside the scope of historical data (the excuse), even the routine modelling from history was wrong. A 25-sigma event is over 10^130 times more likely under T_4 than under a Gaussian.


Slide 12 of 43

Some numbers

In[49]:= F[x_] := CDF[NormalDistribution[0, 1], x]
In[50]:= Plot[F[x], {x, -3, 3}]

[Plot: the standard normal CDF rising from 0 to 1 over -3 <= x <= 3.]

In[51]:= F[-25] // N
Out[51]= 3.0567 × 10^-138
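The notebook numbers can be cross-checked outside Mathematica. This is a Python sketch (my addition, assuming scipy is available) that reproduces Φ(-25) and the Gaussian-vs-T_4 likelihood ratio quoted in the lecture:

```python
from scipy.stats import norm, t
import math

# Probability of a -25 standard deviation move under a Gaussian model
p_gauss = norm.cdf(-25)          # ≈ 3.0567e-138, matching the notebook output

# The same move under a unit-variance Student T with 4 d.o.f.:
# Var(T_4) = 4/(4-2) = 2, so 25 "sigmas" is 25*sqrt(2) in raw t units.
p_t4 = t.cdf(-25 * math.sqrt(2), df=4)

print(p_gauss)
print(p_t4 / p_gauss)  # > 1e130: the T_4 tail is vastly heavier
```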


Slide 13 of 43

Trying to do better. We ran an exercise looking at 3 ingredients for reporting risk:
Choice of risk measure (VaR = quantile, CVaR, etc.)
Choice of distribution
Choice of frequency level to report
The critical thing is the quantile or other risk measure, expressed here as how many std. devs. correspond to a given frequency. We will in fact be ultra-conservative and assume that there are NO shifts in the mean and NO explosion in the variance, so it is purely about the choice of distribution with zero mean and a fixed variance.


Slide 14 of 43

Some Maths: Quantiles, VaR formulae

Normal quantile:

In[52]:= QuantileN[u_] := Sqrt[2] InverseErf[2 u - 1]

T_4 case: Shaw (JCF 2006) closed form:

In[53]:= InverseCDF4[y_] := Module[{ra = Sqrt[1 - 4 (y - 1/2)^2]},
  2 Sign[y - 1/2]*Sqrt[Cos[1/3 ArcCos[ra]]/ra - 1]]

With all these power-law VaRs, we need to normalize to unit variance, assuming n > 2.

In[54]:= Stunorm4[u_] := 1/Sqrt[2] InverseCDF4[u]
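The T_4 closed form ports directly to Python and can be checked against a library quantile (a sketch, my function names; scipy is used only for the check):

```python
import math
from scipy.stats import t  # only used to check the closed form

def inverse_cdf4(y):
    """Closed-form quantile of the Student T with 4 d.o.f. (Shaw, JCF 2006).
    Note 1 - 4 (y - 1/2)^2 = 4 y (1 - y)."""
    ra = math.sqrt(1.0 - 4.0 * (y - 0.5) ** 2)
    return math.copysign(
        2.0 * math.sqrt(math.cos(math.acos(ra) / 3.0) / ra - 1.0), y - 0.5)

def stunorm4(u):
    """Rescale to unit variance: Var(T_4) = 4/(4-2) = 2."""
    return inverse_cdf4(u) / math.sqrt(2.0)

print(stunorm4(0.01))  # ≈ -2.6495, matching the VaR table later in the lecture
```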


Slide 15 of 43

T_n and T_3

In[55]:= FMinusOne[y_, n_] := Module[{arg = If[y < 1/2, 2 y, 2 (1 - y)]},
  Sign[y - 1/2] Sqrt[n*(1/InverseBetaRegularized[arg, n/2, 1/2] - 1)]]
In[56]:= Stunorm[u_, n_] := Sqrt[(n - 2)/n] FMinusOne[u, n]
In[57]:= Stunorm3[u_] := 1/Sqrt[3] FMinusOne[u, 3]

That is all we need for the VaR computations. We use our results from last time to express the CVaR in closed form.
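A Python sketch of the same inverse-beta route (my addition; scipy's `betaincinv` plays the role of InverseBetaRegularized, and scipy's t distribution is used only as a check):

```python
import math
from scipy.special import betaincinv
from scipy.stats import t  # only used to check the result

def f_minus_one(y, n):
    """Student T quantile for real d.o.f. n via the inverse regularized beta."""
    arg = 2.0 * y if y < 0.5 else 2.0 * (1.0 - y)
    z = betaincinv(n / 2.0, 0.5, arg)   # inverse of I_z(n/2, 1/2)
    return math.copysign(math.sqrt(n * (1.0 / z - 1.0)), y - 0.5)

def stunorm(u, n):
    """Quantile rescaled to unit variance (requires n > 2)."""
    return math.sqrt((n - 2.0) / n) * f_minus_one(u, n)

print(stunorm(0.01, 4))  # ≈ -2.6495, same as the T_4 closed form
```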


Slide 16 of 43

Normal CVaR/ETL

In[58]:= f[x_] := Exp[-x^2/2]/Sqrt[2 Pi];
         NCVaR[u_] := -f[QuantileN[u]]/u

T meta-density:

In[59]:= metaf[t_, n_] := -(n^(n/2)*(n + t^2)^(1/2 - n/2)*Gamma[(-1 + n)/2])/(2*Sqrt[Pi]*Gamma[n/2])

T_4 CVaR/ETL:

In[60]:= StuCVaR4[u_] := 1/Sqrt[2] metaf[InverseCDF4[u], 4]/u

T_n CVaR/ETL:

In[61]:= StuCVaR[u_, n_] := Sqrt[(n - 2)/n] metaf[FMinusOne[u, n], n]/u
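The same CVaR/ETL formulae in a Python sketch (my names; the meta-density identity metaf(t, n) = -pdf(t)·(n + t²)/(n - 1) provides an independent cross-check):

```python
import math
from scipy.special import betaincinv
from scipy.stats import norm

def ncvar(u):
    """Gaussian CVaR/ETL at tail probability u: -phi(Phi^(-1)(u))/u."""
    return -norm.pdf(norm.ppf(u)) / u

def f_minus_one(y, n):
    """Student T quantile via the inverse regularized beta function."""
    arg = 2.0 * y if y < 0.5 else 2.0 * (1.0 - y)
    z = betaincinv(n / 2.0, 0.5, arg)
    return math.copysign(math.sqrt(n * (1.0 / z - 1.0)), y - 0.5)

def metaf(t_, n):
    """T 'meta-density'; equals -pdf(t)*(n + t^2)/(n - 1), needs n > 1."""
    return -(n ** (n / 2.0) * (n + t_ * t_) ** (0.5 - n / 2.0)
             * math.gamma((n - 1.0) / 2.0)) / (2.0 * math.sqrt(math.pi)
                                               * math.gamma(n / 2.0))

def stu_cvar(u, n):
    """Unit-variance Student T CVaR/ETL; needs n > 2."""
    return math.sqrt((n - 2.0) / n) * metaf(f_minus_one(u, n), n) / u

print(ncvar(0.025))        # ≈ -2.3378, as in the next slide
print(stu_cvar(0.025, 4))  # noticeably fatter than the Gaussian number
```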


Slide 17 of 43

VaR values

One problem is banks reporting risk based only on high-frequency events. Look at the 2.5% level (1 in 40 days). For contrast we will have some lower-frequency events as well:

In[62]:= uvals = {2.5/100, 1/100, 1/10000, 1/10^6};

In the normal case the first number is the famous (-)1.96 figure:

In[63]:= Map[QuantileN, uvals] // N
Out[63]= {-1.95996, -2.32635, -3.71902, -4.75342}

Making a switch to CVaR does NOT change matters much:

In[64]:= Map[NCVaR, uvals] // N
Out[64]= {-2.3378, -2.66521, -3.95848, -4.94833}

Slide 18 of 43

But switching the distribution is more interesting:

In[66]:= Map[Stunorm4, uvals] // N
Out[66]= {-1.96324, -2.64949, -9.2162, -29.4}

The once-in-40-days number is about -2 no matter what model you use. This is the maths behind the observation that "Gaussian VaR is an airbag that always works until you need it." But look at the other numbers, which go through the roof - the choice of tail model seriously matters.
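These table values are easy to reproduce directly from a statistics library, independently of the closed forms (a Python sketch using scipy, my addition):

```python
import math
from scipy.stats import norm, t

uvals = [0.025, 0.01, 1e-4, 1e-6]

gauss_var = [norm.ppf(u) for u in uvals]                 # Gaussian quantiles
t4_var = [t.ppf(u, 4) / math.sqrt(2.0) for u in uvals]   # unit-variance T_4

print(gauss_var)  # ≈ [-1.960, -2.326, -3.719, -4.753]
print(t4_var)     # ≈ [-1.963, -2.649, -9.216, -29.4]
```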


Slide 19 of 43

The FatRisk tool

In[67]:= bennone = -Log[10, 1/260] // N;
bennten = -Log[10, 1/(10*260)] // N;
bennhundred = -Log[10, 1/(100*260)] // N;
bennthousand = -Log[10, 1/(1000*260)] // N;
Manipulate[onelevel = -Stunorm[1/260, n]; tenlevel = -Stunorm[1/260/10, n];
 hundredlevel = -Stunorm[1/260/100, n]; thousandlevel = -Stunorm[1/260/1000, n];
 Plot[{-QuantileN[10^(-x)], -Stunorm4[10^(-x)], -Stunorm[10^(-x), n]}, {x, 1, 6},
  PlotRange -> {0, 30},
  PlotLabel -> Style["Daily VaR: Gauss, T_4, T_n (m=0, s=1)", "Large"],
  PlotStyle -> Thickness[0.007],
  FrameLabel -> {Style["-Log[10, freq]", Large], Style["Std. Devs.", Large]},
  Frame -> True, ImageSize -> 600,
  Epilog -> {{Line[{{1, 25}, {6, 25}}], ...

Slide 20 of 43

[Plot: Daily VaR for Gauss, T_4 and T_n (m=0, s=1): std. devs. against -Log[10, freq], from 10% (1 in 10 days) down to 0.0001%, with frequency markers at 1Y, 10Y, 100Y and 1000Y, the 25-sd "Viniar level" line, and the low-sd "OK airbag" region.]

Displayed VaR levels for tail power n = 2.368:
1Y VaR 3.39656, 10Y VaR 9.07514, 100Y VaR 24.032, 1000Y VaR 63.5594

Slide 21 of 43

To Consider. Note the proximity of the curves in the low-frequency range: the Gaussian airbag is OK - you would barely notice you were using the wrong distribution based on fortnight-to-annual comparisons. Look at T_4, suggested by the Fergusson-Platen (2006) study: the 10Y VaR is 6.5 sd, not achieved in the Gaussian case even at 1 in 10^6. If we believe the MIT-BU analysis, n = 3, then we are in more extreme territory still, and an 8-sigma event happens once a decade on average. You need to be clear on the distribution and frequency choices to have any clue as to the possibilities.


Slide 22 of 43

Related Issues

There are all kinds of other problems. The choice of risk number to report, and the choice of dependency model, are critical too. The use of VaR has been widely criticised in the finance community. Given a number, say 1%, this is the quantile at 0.01. There are presentational issues linked to this. Which sounds better?
"The loss is better than this 99% of the time."
"Things will be this bad or worse 1% of the time."
Interestingly, the climate change work sometimes uses this same measure.


Slide 23 of 43

A mathematically better idea is to use the expected loss for the worst 1% of occasions - it aggregates better and captures tail details. But how much difference does it REALLY make? It is mathematically better to use CVaR as it is coherent - in particular the risks aggregate in a better way. VaR has been criticized for not being sub-additive: the risk of a portfolio can be larger than the sum of the component risks. This, it is argued, is bad because it fails to stimulate diversification. There is another can of worms here - is it diversification or "worsification"? What do you do if you are on a desert island with 3 streams and you only know that precisely one is poisoned? Would you take 1/3 of your needs from each stream?
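The sub-additivity failure is easy to exhibit concretely. Here is the classic two-loan illustration (a hypothetical example of mine, not from the slides): each loan independently defaults with probability 4%, losing 100, and we quote 95% VaR.

```python
# Hypothetical two-loan book: each loan independently defaults with
# probability p = 0.04, losing 100; quote VaR at the 5% tail level.
p, loss, alpha = 0.04, 100.0, 0.05

def var95(outcomes):
    """outcomes: list of (loss, probability) pairs.
    VaR = smallest level L with P(loss > L) <= alpha."""
    for L in sorted({l for l, _ in outcomes}):
        if sum(pr for l, pr in outcomes if l > L) <= alpha:
            return L

single = [(0.0, 1 - p), (loss, p)]
portfolio = [(0.0, (1 - p) ** 2), (loss, 2 * p * (1 - p)), (2 * loss, p ** 2)]

print(var95(single))     # 0.0  : each loan alone reports zero 95% VaR
print(var95(portfolio))  # 100.0: the "diversified" book reports strictly more
```

So VaR(A + B) = 100 > VaR(A) + VaR(B) = 0: the measure penalizes the pooled book, which is exactly the failure of sub-additivity described above.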


Slide 24 of 43

How does use of CVaR/expected tail loss modulate? We already saw the Gaussian numbers were barely different. Let's investigate further.

In[72]:= Manipulate[Plot[{-NCVaR[10^(-x)], -StuCVaR4[10^(-x)], -QuantileN[10^(-x)], -Stunorm4[10^(-x)], -Stunorm[10^(-x), n], -StuCVaR[10^(-x), n]}, {x, 1, 6},
  PlotRange -> {0, 30},
  PlotLabel -> Style["Daily CVaR vs VaR: \n Gaussian[B] vs T_4[G], T_3[R]", "Large"],
  PlotStyle -> {{Thickness[0.007], Blue}, {Thickness[0.007], Green}, {Thickness[0.007], Blue}, {Thickness[0.007], Green}, {Thickness[0.007], Red}, {Thickness[0.007], Red}},
  FrameLabel -> {Style["-Log[10, freq]", Large], Style["Std. Devs.", Large]},
  Frame -> True, ImageSize -> 700,
  Epilog -> {{Text["10%", {1, 0.7}], ...

Slide 25 of 43

[Plot: Daily CVaR vs VaR - Gaussian (B) vs T_4 (G) and T_3 (R): std. devs. against -Log[10, freq], from 10% down to 0.0001%, with 1Y/10Y/100Y/1000Y frequency markers.]

Slide 26 of 43

VaR vs CVaR - the reality. If you remain Gaussian, the switch from VaR to CVaR is essentially pointless in practice - a small modulation of the numbers, or a small adjustment in the quoted frequency. What really matters is the choice of distribution. It changes both the VaR and the CVaR significantly, and also induces more differentiation between VaR and CVaR. A 20-sigma capital reserve emerges as basic good practice.


Slide 27 of 43

Analytics vs Numerics. This is obviously a somewhat contrived model: univariate, and all done analytically. We get some useful insight nevertheless. What if the risk computation has to be done for real, for a large collection of positions, all with potentially fat-tailed behaviour? What if we want to do asset allocation, e.g. by some form of portfolio optimization? We need the simulation tools...


Slide 28 of 43

More T Technology I - Bailey's T Box-Muller. In 1994 Ralph Bailey published a very useful paper. It extended the commonly used Box-Muller algorithm for samples from the normal distribution, and its polar variation, to sample instead from the Student T. You only need to change one line of code from the Gaussian model, and to note that you get one sample rather than two independent ones. Bailey's method actually samples from a 2D T distribution in "standard" form, and the two marginals are not independent. R.W. Bailey, 1994, Polar generation of random variates with the t-distribution, Mathematics of Computation, 62 (206), April 1994, pp. 779-781. Let's look at how it works and a simple implementation with Mathematica code, with pseudo-code for other systems.


Slide 29 of 43

Bailey's model

Here is the Cartesian argument. Let U, V be independent uniform rvs drawn from (0, 1) and let n > 0 be a real parameter. In general, for a transformation to new rvs (X, Y) we have

du dv = f(x, y) dx dy    (1)

f(x, y) = |∂(u, v)/∂(x, y)|    (2)

Now we define

θ = 2πv,    r = √(n (u^(-2/n) - 1))    (3)

x = r cos(θ),    y = r sin(θ)    (4)

We compute the Jacobian and establish that the 2D density is


Slide 30 of 43

2D density

f(x, y) = (1/(2π)) (1 + (x^2 + y^2)/n)^(-(n/2 + 1))    (5)

Some thought shows zero means and zero correlation, but obviously not independence, as the density is not a product. This is a standard 2D T-density. A careful integration over y eventually gives the ordinary 1D T density. Some further optimization (see Numerical Recipes) gives the polar form that avoids some trig evaluation. In the limit n → ∞ we recover instead r = √(-2 log(u)).


Slide 31 of 43

Implementations

I saw this in some CERN code written in 1999. I will give the simple polar form here rather explicitly - my understanding is that Mathematica's internal algorithm uses something similar, but you can replicate the following in other languages easily:

In[73]:= BaileyStudent[n_] := Module[{W = 2, u, v, U, V},
  While[W > 1, (u = RandomReal[]; v = RandomReal[]; U = 2 u - 1; V = 2 v - 1; W = U^2 + V^2)];
  U*Sqrt[n (W^(-2/n) - 1)/W]]

In[74]:= baileydata = Table[BaileyStudent[6], {100000}];
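The polar Bailey sampler is easy to replicate in other languages, as suggested. A Python sketch (my port; loop-based for clarity rather than vectorized):

```python
import numpy as np

def bailey_student(n, size, rng):
    """Bailey's polar method: accept (U, V) uniform in the unit disc,
    return U * sqrt(n (W^(-2/n) - 1) / W) with W = U^2 + V^2."""
    out = np.empty(size)
    i = 0
    while i < size:
        U = 2.0 * rng.random() - 1.0
        V = 2.0 * rng.random() - 1.0
        W = U * U + V * V
        if 0.0 < W <= 1.0:               # accept points inside the unit disc
            out[i] = U * np.sqrt(n * (W ** (-2.0 / n) - 1.0) / W)
            i += 1
    return out

rng = np.random.default_rng(42)
samples = bailey_student(6.0, 20000, rng)
```

As with the Mathematica version, only one variate per accepted pair is kept, since the two coordinates of the underlying 2D T are not independent.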

Slide 32 of 43

Data histogram

In[76]:= baileyplot = Histogram[baileydata, {-6, 6, 0.5}, "PDF"]

[Histogram of the 100,000 Bailey samples with n = 6.]

In[21]:= h[n_, t_] := Gamma[(n+1)/2]/Sqrt[Pi n]/Gamma[n/2]/((1 + t^2/n)^((1/2)*(n+1)))


Slide 33 of 43

PDF plot for comparison

In[45]:= pdfplot = Plot[h[6, t], {t, -6, 6}]

[Plot of the T_6 density, matching the histogram shape.]

We can do this for other d.o.f., even negative n:

In[47]:= baileydatab = Table[BaileyStudent[-6], {20000}];

Slide 34 of 43

In[48]:= baileyplotb = Histogram[baileydatab, {-3, 3, 0.25}, "PDF"]

[Histogram of the n = -6 samples: a compactly supported, beta-like shape.]

N.B. the so-called "q-Gaussian" is, for n > 0, nothing more than a T re-parametrized with q = (n + 3)/(n + 1). For n < 0 you get a density with compact support, which looks like a symmetric beta distribution. I do not think there is anything new statistically, though there is much interesting thermodynamics.


Slide 35 of 43

T simulation tech II - Gosset methods

Gosset's original description can be used to simulate T from Gaussians, for integer d.o.f. We can use n + 1 Gaussians Z_0, Z_1, ..., Z_n:

T = Z_0 / √((Z_1^2 + Z_2^2 + ... + Z_n^2)/n)    (6)

We thought of this more compactly as

T = Z_0 / √(χ_n^2 / n)    (7)

and this allows an extension to real positive d.o.f., based on a chi-squared distribution.
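The compact form translates directly (a Python sketch of mine; numpy's chi-squared generator accepts real degrees of freedom, which is the point of the extension):

```python
import numpy as np

def gosset_student(n, size, rng):
    """T = Z0 / sqrt(chi2_n / n); valid for any real n > 0."""
    z0 = rng.standard_normal(size)
    chi2 = rng.chisquare(n, size)
    return z0 / np.sqrt(chi2 / n)

rng = np.random.default_rng(0)
samples = gosset_student(4.5, 20000, rng)  # non-integer d.o.f. is fine
```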


Slide 36 of 43

Direct CDF inversion

The use of the quantile directly was pioneered by Hill in the 1970s, and found its way into the NAG library. Just to remind you: if a distribution of an RV X has CDF F(x), then

F(X) ~ U(0, 1)    (8)

and so to simulate X it is sufficient to have the quantile function Q(u) = F^(-1)(u). This is an important second reason for having an understanding of this function. For live Monte Carlo simulation the full inverse beta function method is computationally quite slow, and one wants instead a fast numerical approximation. The exceptions are the cases n = 1, 2, 4, where there is a simple closed form, already discussed. The details of the series expansions are quite complicated, so I will mostly give you the references:
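A minimal inverse-CDF sampling sketch (my illustration; scipy's `t.ppf` stands in here for the fast series approximations discussed in the references):

```python
import numpy as np
from scipy.stats import t

# If U ~ Uniform(0, 1) then Q(U) = F^(-1)(U) has CDF F.
rng = np.random.default_rng(7)
u = rng.random(20000)
samples = t.ppf(u, 3)   # Student T with 3 d.o.f. via quantile transformation
```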


Slide 37 of 43

T-quantile refs

G.W. Hill, 1970, Algorithm 396: Student's t-quantiles, Comm. ACM 13(10), 619.
W.T. Shaw, 2006, Sampling Student's T distribution - use of the inverse CDF, J. Comp. Fin. 9(4), p. 37, Summer 2006 (this has some short series).
G. Steinbrecher, W.T. Shaw, 2008, Quantile mechanics, Eur. J. Applied Math. 19(2), 87 (has solutions of a nonlinear ODE with recurrences for central and tail series - can be coded up to very high precision if needed).


Slide 38 of 43

Tail series

Solve the nonlinear ODE as in the EJAM paper...

In[146]:= Clear[n]; Clear[d]; d[1] = 1;
d[p_] := d[p] = Simplify[1/(p^2 + (n/2 - 2) p + 1 - n/2)*(If[p == 2, 0, 1]*Sum[d[k] d[p + 1 - k]*((1 - n/2)*k*(p - k) - k (k - 1)), {k, 2, p - 1}] + ...

The first few coefficients:

Out[148]= {1, -1/(n + 2), -(n^2 + n - 6)/(2 (n + 2)^2 (n + 4)), -(n^4 + 4 n^3 - 10 n^2 - 22 n + 36)/(3 (n + 2)^3 (n + 4) (n + 6))}


Slide 39 of 43

In[151]:= data = Table[d[k], {k, 1, 30}];

Slide 40 of 43

What precision do we get with this tail series?

In[158]:= n = 6; Plot[TailQuantile[6, u, data]/FMinusOne[u, 6] - 1, {u, 0.8, 0.9999}, PlotRange -> All]

[Plot: the relative error over 0.8 <= u <= 0.9999 stays within about -4 × 10^-9.]

Certainly good enough for the tail risk analysis. We need another series for the middle.


Slide 41 of 43

Central series

In[133]:= Clear[n]; Clear[c]; c[0] = 1;
c[p_] := c[p] = Simplify[1/(2 p)/(2 p + 1) Sum[Sum[c[k]*c[l]*c[p - l - 1 - k]*((1 + 1/n)*((2 l + 1)*(2 p - 2 k - 2 l - 1)) - (2 k) (2 k + 1)/n), {l, 0, p - k - 1}] + ...

The first few coefficients:

Out[137]= {1, (1/6) (1/n + 1), (n + 1) (7 n + 1)/(120 n^2), (n + 1) (127 n^2 + 8 n + 1)/(5040 n^3)}

In[138]:= cdata = Table[c[k], {k, 0, 20}];

Slide 42 of 43

In[154]:= CentralQuantile[n_, u_, data_] := Module[{v = Sqrt[n Pi] Gamma[n/2]/Gamma[(n + 1)/2] (u - 1/2), len = Length[data], w}, w = v^2; v + v*Sum[data[[k + 1]]*w^k, {k, 1, len - 1}]]
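The printed coefficients can be checked numerically with a short Python sketch of the central series (my addition, using only the first three non-trivial coefficients shown on the previous slide; scipy is used only as the reference):

```python
import math
from scipy.stats import t  # only used to check the series

def central_quantile(n, u, coeffs):
    """Central series for the Student T quantile: Q ≈ v (1 + c1 w + c2 w^2 + ...),
    v = sqrt(n pi) Gamma(n/2)/Gamma((n+1)/2) (u - 1/2), w = v^2."""
    v = math.sqrt(n * math.pi) * math.gamma(n / 2.0) / math.gamma((n + 1.0) / 2.0) * (u - 0.5)
    w = v * v
    return v * (1.0 + sum(c * w ** k for k, c in enumerate(coeffs, start=1)))

n = 6.0
coeffs = [(1.0 / 6.0) * (1.0 / n + 1.0),                                     # c_1
          (n + 1.0) * (7.0 * n + 1.0) / (120.0 * n ** 2),                    # c_2
          (n + 1.0) * (127.0 * n ** 2 + 8.0 * n + 1.0) / (5040.0 * n ** 3)]  # c_3

print(central_quantile(n, 0.55, coeffs))  # close to the true quantile at u = 0.55
```

As a sanity check on the coefficients, letting n → ∞ recovers 1/6, 7/120, 127/5040, the classical central series for the normal quantile.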

[Plot: the relative error of the central series over the middle of the distribution, within about ±2 × 10^-9.]


Slide 43 of 43

Patched Series

By taking a pair of series we can get as much precision as we need, or optimize for speed. We need to patch at a suitable point.

Summary

We have enough technology to do closed-form analytics of VaR and CVaR for the Student T, and a variety of practical methods for simulating it. The risk analysis reveals the importance of the distributional choice, and the fact that the VaR -> CVaR switch needs to be coupled to a sensible distributional choice to yield useful results.
