TAIL MEASURES OF STOCHASTIC PROCESSES OR RANDOM FIELDS WITH REGULARLY VARYING TAILS

GENNADY SAMORODNITSKY AND TAKASHI OWADA

Abstract. A new notion, called a tail measure, is proposed to measure the dependence among the extremes of stochastic processes or random fields with regularly varying tails. A tail measure is an infinite-dimensional extension of a family of Radon limiting measures of regular variation. These measures essentially have a structure similar to that of Lévy measures of infinitely divisible processes. A tail measure can be seen, in a sense, to encompass related notions such as extremograms (Davis and Mikosch (2009)) and tail processes (Basrak and Segers (2009)). In addition, focusing on a certain class of stationary infinitely divisible processes, we relate the ergodic properties of the tail measure, described by the positive-null decomposition, to those of the probability laws of the processes.

1991 Mathematics Subject Classification. Primary 60F17, 60G18. Secondary 37A40, 60G52.
Key words and phrases. Regular variation, extreme value theory, infinitely divisible process, positive-null decomposition, ergodicity.

1. Introduction

Heavy-tail analysis typically assumes that a random variable X has an algebraically decaying tail:

(1.1)    P(X > x) ∼ C x^{-α} L(x),   as x → ∞,

where C > 0 and α > 0 are constants, and L is a slowly varying function; that is, L(tx)/L(x) → 1 as x → ∞ for all t > 0. X is then said to have a regularly varying tail with index α. If 0 < α < 2, X has infinite variance, and if 0 < α < 1, even the mean of X is infinite. Heavy-tail analysis applies to systems governed by a series of extremal events that occur at a non-negligible rate. Indeed, the heavy-tail assumption (1.1) has been applied in diverse fields such as data network analysis, finance, insurance, and natural disasters; for details, see Adler et al. (1998), Beirlant et al. (2004), de Haan and Ferreira (2006), Embrechts et al. (1997), and McNeil et al. (2005).

The heavy-tail assumption (1.1) can be extended to a multivariate form as follows. Let X be a k-dimensional random vector. X is said to have a regularly varying tail if there exist a function H : (0, ∞) → (0, ∞) growing to infinity and a nonzero Radon measure µ on R̄^k \ {0} = [−∞, ∞]^k \ {0} with µ(R̄^k \ R^k) = 0, such that

(1.2)    H(u) P(u^{-1} X ∈ ·) → µ(·)


vaguely in R̄^k \ {0}. Various alternative definitions of (1.2) exist. Among the most popular is the requirement that there exist a random vector Θ on the unit sphere S^{k−1} = {x ∈ R^k : |x| = 1} such that

    P(|X| > ux, X/|X| ∈ ·) / P(|X| > u) ⇒ x^{−α} P(Θ ∈ ·)

weakly in S^{k−1}. The probability measure P ∘ Θ^{−1} is called a spectral measure. For other alternative definitions, we refer to Basrak et al. (2002a) and Resnick (2007).

Researchers are becoming increasingly interested in developing statistics that measure the dependence among the extremes of stochastic processes. To this end, Ledford and Tawn (2003) introduced the so-called upper tail dependence coefficient: for a stationary sequence (X_n, n ≥ 0) of random variables,

(1.3)    λ(n) = lim_{x→∞} P(X_n > x | X_0 > x).

Under the multivariate regular variation condition (1.2), Davis and Mikosch (2009) considered extremograms: for a stationary sequence (X_n, n ≥ 0) of R^d-valued random vectors,

(1.4)    γ_{AB}(n) = lim_{u→∞} H(u)^{−1} P(u^{−1} X_0 ∈ A, u^{−1} X_n ∈ B),

(1.5)    ρ_{AB}(n) = lim_{u→∞} P(u^{−1} X_n ∈ B | u^{−1} X_0 ∈ A),

where H : (0, ∞) → (0, ∞) is a regularly varying function, and both A and A × B are Borel sets bounded away from zero. Resnick (2004) analyzed the alternative notion of the extremal dependence measure, and Fasen (2010) provides an elegant review of these types of notions.
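The extremogram (1.5) has a natural empirical counterpart obtained by replacing the limit with a high sample quantile. The following Python sketch (an illustration, not part of the original argument) estimates ρ_{AB}(n) for A = B = (1, ∞) from a simulated heavy-tailed AR(1) series; the AR coefficient 0.7, the Pareto noise and the 99.5% threshold are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate a heavy-tailed AR(1): X_t = 0.7 X_{t-1} + Z_t with symmetric Pareto-type noise.
    alpha, n = 1.5, 200_000
    Z = rng.pareto(alpha, size=n) * rng.choice([-1.0, 1.0], size=n)
    X = np.empty(n)
    X[0] = Z[0]
    for t in range(1, n):
        X[t] = 0.7 * X[t - 1] + Z[t]

    # Sample extremogram rho_AB(h) for A = B = (1, infinity), u a high sample quantile.
    u = np.quantile(np.abs(X), 0.995)
    exceed = X > u                      # indicator that u^{-1} X_t lies in A = (1, infinity)

    def rho_hat(h):
        # empirical version of P(u^{-1} X_{t+h} in B | u^{-1} X_t in A)
        return np.mean(exceed[:-h] & exceed[h:]) / np.mean(exceed)

    for h in (1, 2, 5, 10):
        print(h, rho_hat(h))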


However, all of these measures mainly describe the dependence between the extremes of two vectors only, X_0 and X_n. We are thus naturally motivated to develop statistics describing the high-level dependence of the whole process X. Let T be a (possibly infinite) parameter space and let X = (X_t, t ∈ T) be a stochastic process or a random field. We further assume that X has regularly varying tails; that is, for all k ≥ 1 and t_1, …, t_k ∈ T, the random vector (X_{t_1}, …, X_{t_k}) has a regularly varying tail with limiting measure µ_{t_1…t_k}. In this paper, we shall define a cylindrical measure ν on R^T, termed a tail measure, such that for all k ≥ 1 and t_1, …, t_k ∈ T,

(1.6)    ν{x ∈ R^T : (x_{t_1}, …, x_{t_k}) ∈ B} = µ_{t_1…t_k}(B),   B ⊆ R^k \ {0}.

The measure ν can be seen as an infinite-dimensional extension of the family of Radon measures (µ_{t_1…t_k} : t_1, …, t_k ∈ T, k ≥ 1). Basrak and Segers (2009) proposed a related infinite-dimensional object, called a tail process, which contains information on the high-level dependence of multivariate time series models. The notion of regular variation in R^d has also been extended to probability laws on non-locally compact metric spaces (e.g. C[0, 1], D[0, 1]); see Hult and Lindskog (2005, 2006). Such an extension, however, requires that the usual vague convergence be replaced by the so-called ŵ-convergence (ŵ-convergence is detailed in Daley and Vere-Jones (2003)).

This paper is organized as follows. Section 2 provides a rigorous definition of a tail measure as a cylindrical measure ν satisfying (1.6) and studies several of its properties. We make use of the fact that, as infinite-dimensional measures defined on the large space R^T (T an arbitrary parameter space), tail measures have a structure similar to that of (function level) Lévy measures of infinitely divisible processes. In fact, our argument is heavily inspired by ?, an instructive lecture note on infinitely divisible processes that is partly based on Maruyama (1970). Section 3 covers several examples of tail measures: moving averages, independent processes, stochastic volatility processes and GARCH processes. Section 4 reveals the relation between tail measures and related notions such as extremograms and tail processes. Finally, focusing on stationary infinitely divisible processes of stochastic integral form, we investigate the connection between the ergodic-theoretical properties of tail measures and those of the probability laws of the processes.

2. Tail Measures and Their Properties

Let T be an arbitrary (possibly infinite) index set and let (X_t, t ∈ T) be a stochastic process or a random field; e.g., T = Z for a univariate time series, T = Z × {1, …, k} for a multivariate time series, and T = R^d for a random field. It is assumed throughout this paper that, for any parameter space T, (X_t, t ∈ T) has regularly varying tails. More precisely, we suppose that there exists a function H : (0, ∞) → (0, ∞) growing to infinity such that for all t_1, …, t_k ∈ T, k ≥ 1, there is a Radon measure µ_{t_1…t_k} on R̄^k \ {0} with µ_{t_1…t_k}(R̄^k \ R^k) = 0, such that, as u → ∞,

(2.1)    H(u) P((X_{t_1}, …, X_{t_k}) ∈ u ·) → µ_{t_1…t_k}(·)

vaguely in R̄^k \ {0}. We assume that for at least one t_0 ∈ T, µ_{t_0} is a nonzero measure. With this assumption, the standard argument of regular variation (e.g., Resnick (2007)) shows that H is regularly varying with index α for some α > 0. Furthermore, the Radon measures µ_{t_1…t_k}, t_1, …, t_k ∈ T, satisfy the homogeneity property µ_{t_1…t_k}(sA) = s^{−α} µ_{t_1…t_k}(A) for all Borel sets A ⊆ R̄^k \ {0} and s > 0. The existence of the homogeneity exponent α is often emphasized by saying that (X_t, t ∈ T) has regularly varying tails with index α.

Each Radon measure µ_{t_1…t_k} contains information on the high-level dependence of (X_{t_1}, …, X_{t_k}). However, merely observing each µ_{t_1…t_k} is insufficient if one hopes to comprehensively capture the extremal behavior of a stochastic process or a random field as a whole. This is particularly true for the relation between the probability laws of finite-dimensional random vectors and the probability law of a stochastic process.


We may fail to keep track of the way a stochastic process evolves dynamically if we focus solely on a family of finite-dimensional random vectors. However, the Kolmogorov extension theorem clarifies the connection between the finite-dimensional random vectors and the corresponding stochastic process: given a consistent family of probability laws of these vectors, the theorem guarantees the existence of the corresponding stochastic process. On the other hand, the construction of an infinite-dimensional object that unifies the family of finite-dimensional Radon measures in (2.1) is a nontrivial matter. This is because the Radon measures in (2.1) blow up at the origin, whence µ_{t_1…t_k}(R̄^k \ {0}) = ∞, and this disallows the standard use of the Kolmogorov extension theorem. However, it is still possible to prove the existence of such an infinite-dimensional measure using a method suggested by Maruyama (1970).

Let T be an arbitrary parameter space and let (Y_t, t ∈ T) be an infinitely divisible process; that is, for all k ≥ 1 and t_1, …, t_k ∈ T, (Y_{t_1}, …, Y_{t_k}) is an infinitely divisible random vector. Specifically, the law of (Y_{t_1}, …, Y_{t_k}) is identified by a triplet (Σ_F, ρ_F, b_F), F = {t_1, …, t_k}, where Σ_F is the covariance matrix of the Gaussian part, b_F ∈ R^F, and ρ_F is the Lévy measure. Importantly, the system of Lévy measures {ρ_F : F ⊆ T finite} is consistent in the sense that for all finite index sets F ⊆ G ⊆ T,

(2.2)    ρ_F(B) = ρ_G(p_{GF}^{−1}(B \ {0_F})),   B ∈ B(R^F),

where p_{GF} : R^G → R^F is the coordinate projection (represented by an |F| × |G| matrix), and 0_F is the origin of R^F. Furthermore, (2.2) implies that every Lévy measure has no mass at the origin, i.e., for every finite F ⊆ T,

(2.3)    ρ_F({0_F}) = 0.

Exploiting the structural properties (2.2) (and (2.3)) of the finite-dimensional Lévy measures, Maruyama (1970) proves the existence of a "big" triplet (Σ, ρ, b) that characterizes the distribution of the whole process Y = (Y_t, t ∈ T). As a result, one can reconstruct each triplet (Σ_F, ρ_F, b_F) from (Σ, ρ, b) by

    Σ_F = p_F Σ p_F^T,
    ρ_F(B) = ρ(p_F^{−1}(B \ {0_F})),   B ∈ B(R^F),
    b_F = p_F b,

where p_F : R^T → R^F is the projection given by p_F x = x|_F, x ∈ R^T, and its adjoint p_F^T satisfies

    (p_F^T z)_t = z_t if t ∈ F,   (p_F^T z)_t = 0 if t ∈ T \ F.

The situation for the Radon measures defined by (2.1) is analogous to that for the finite-dimensional Lévy measures of an infinitely divisible process. Indeed, in Theorem 2.1 below, slightly modified


versions of the Radon measures in (2.1) will be shown to satisfy (2.2) and (2.3). Consequently, the resulting cylindrical measure turns out to be well defined on the space R^T.

Theorem 2.1. Let X = (X_t, t ∈ T) be a stochastic process or a random field with regularly varying tails in the sense of (2.1). Let B(R)^T = ∏_{t∈T} B(R_t), where R_t = R, be the cylindrical σ-field on R^T. Then there exists a unique cylindrical measure ν on (R^T, B(R)^T) satisfying (i) and (ii) below. This measure is called the tail measure of X.
(i): For any finite index set F = {t_1, …, t_k} ⊆ T,
    µ_F(A) = ν(p_F^{−1}(A))
for every Borel set A ⊆ R^F \ {0_F}.
(ii): For every countable T_1 ⊆ T, there exists a countable set T_2 with T_1 ⊆ T_2 ⊆ T such that
    ν(p_{T_1}^{−1}({0_{T_1}})) = ν(p_{T_1}^{−1}({0_{T_1}}) \ p_{T_2}^{−1}({0_{T_2}})).

Remark 2.2. Condition (ii) in Theorem 2.1 indirectly tells us that a tail measure ν has no mass at the origin (note that if T is uncountable, then the statement ν({0_T}) = 0 does not make sense, because {0_T} is not measurable in R^T). As evidence, we can show that if T is countable, condition (ii) is equivalent to ν({0_T}) = 0. Indeed, condition (ii), applied with T_1 = T (so that necessarily T_2 = T), implies

    ν({0_T}) = ν(p_T^{−1}({0_T})) = ν(p_T^{−1}({0_T}) \ p_T^{−1}({0_T})) = ν(∅) = 0.

Conversely, if ν({0_T}) = 0, then for every countable set T_1 ⊆ T,

    ν(p_{T_1}^{−1}({0_{T_1}}) \ p_T^{−1}({0_T})) = ν(p_{T_1}^{−1}({0_{T_1}})) − ν({0_T}) = ν(p_{T_1}^{−1}({0_{T_1}})),

so (ii) holds with T_2 = T.

Remark 2.3. If there exists a countable set T_0 ⊆ T such that ν(p_{T_0}^{−1}({0_{T_0}})) = 0, then condition (ii) follows. To show this, take an arbitrary countable set T_1 ⊆ T and define T_2 = T_0 ∪ T_1, which is still countable. Since ν(p_{T_2}^{−1}({0_{T_2}})) ≤ ν(p_{T_0}^{−1}({0_{T_0}})) = 0,

    ν(p_{T_1}^{−1}({0_{T_1}}) \ p_{T_2}^{−1}({0_{T_2}})) = ν(p_{T_1}^{−1}({0_{T_1}})) − ν(p_{T_2}^{−1}({0_{T_2}})) = ν(p_{T_1}^{−1}({0_{T_1}})).

As we will see in Proposition 2.4 below, if one can find such a countable set T_0 ⊆ T, then ν turns out to be σ-finite.

Proof. First, we identify every µ_F, with finite F ⊆ T, defined on R̄^F \ {0_F}, with a measure ν_F on R^F as follows:

    ν_F({0_F}) = 0,   ν_F(A) = µ_F(A) for any Borel set A ⊆ R^F \ {0_F}.

We claim that for any finite sets F ⊆ G ⊆ T,

(2.4)    ν_F(B) = ν_G(p_{GF}^{−1}(B \ {0_F})),   B ∈ B(R^F).


We first show (2.4) for every B ∈ B(R^F) with 0_F ∉ B. Fix G ⊆ T and argue by induction on dim(F) ∈ {1, …, dim(G)}. Suppose dim(F) = 1. Then it suffices to show that

    ν_F((−∞, −a] ∪ [b, ∞)) = ν_G(p_{GF}^{−1}((−∞, −a] ∪ [b, ∞)))

for every a > 0, b > 0. We may assume without loss of generality that (−∞, −a] ∪ [b, ∞) and p_{GF}^{−1}((−∞, −a] ∪ [b, ∞)) are both continuity sets. Since (−∞, −a] ∪ [b, ∞) is relatively compact in R̄ \ {0}, the Portmanteau theorem for vague convergence (see e.g. Proposition 3.12 in Resnick (1987)) gives

    ν_F((−∞, −a] ∪ [b, ∞)) = µ_F((−∞, −a] ∪ [b, ∞))
      = lim_{u→∞} H(u) P(u^{−1} X_F ∈ (−∞, −a] ∪ [b, ∞))
      = lim_{u→∞} H(u) P(u^{−1} X_G ∈ p_{GF}^{−1}((−∞, −a] ∪ [b, ∞))).

Since p_{GF}^{−1}((−∞, −a] ∪ [b, ∞)) is relatively compact in R̄^G \ {0_G}, one more application of the Portmanteau theorem yields

    lim_{u→∞} H(u) P(u^{−1} X_G ∈ p_{GF}^{−1}((−∞, −a] ∪ [b, ∞))) = µ_G(p_{GF}^{−1}((−∞, −a] ∪ [b, ∞))) = ν_G(p_{GF}^{−1}((−∞, −a] ∪ [b, ∞))).

Next, suppose that (2.4) is true whenever 1 ≤ dim(F) ≤ m < dim(G). We take dim(F) = m + 1 and let a = (a_1, …, a_{m+1}) and b = (b_1, …, b_{m+1}). We need to show that

    ν_F((−a, b)^c) = ν_G(p_{GF}^{−1}((−a, b)^c))

for every a, b ∈ [0, ∞)^F \ {0_F} with (−a, b)^c and p_{GF}^{−1}((−a, b)^c) both continuity sets. If a_i = 0, b_i > 0 (or a_i > 0, b_i = 0) for some i ∈ {1, …, m + 1}, then 0_F ∈ (−a, b)^c; therefore, one does not need to consider such cases. If a_i = b_i = 0 for some i ∈ {1, …, m + 1}, the statement is automatically true by the induction hypothesis. Hence, it suffices to check the case a_i > 0, b_i > 0 for all i ∈ {1, …, m + 1}, and then the same argument as in the one-dimensional case finishes the proof of the claim. To extend (2.4) to an arbitrary Borel set, consider B ∈ B(R^F) with 0_F ∈ B. Since ν_F({0_F}) = 0,

    ν_F(B) = ν_F(B \ {0_F}) = ν_G(p_{GF}^{−1}(B \ {0_F})).

We have thus seen that the family of measures (ν_F, F ⊆ T finite), with each ν_F defined on (R^F, B(R^F)), satisfies the same conditions as (2.2) (and hence (2.3)). Now the Kolmogorov-extension-like argument essentially adopted in Proposition 1.1 of Maruyama (1970) proves the existence of a cylindrical measure that fulfills (i) and (ii) in Theorem 2.1.

To prove uniqueness of the tail measure, suppose that there exists another tail measure ρ on (R^T, B(R)^T) such that

(2.5)    ν(p_F^{−1}(B \ {0_F})) = ρ(p_F^{−1}(B \ {0_F})),   B ∈ B(R^F),


for all finite sets F ⊆ T. In the sequel, we will prove that ν = ρ. Since B(R)^T can be expressed as

(2.6)    B(R)^T = {p_S^{−1}(B) : B ∈ B(R^S), S ⊆ T is a countable set},

it is enough to show that

(2.7)    ν ∘ p_S^{−1} = ρ ∘ p_S^{−1}

for any countable set S ⊆ T. By the monotone class theorem, it suffices to check (2.7) for all finite sets F ⊆ T. For every B ∈ B(R^F),

    ν(p_F^{−1}(B)) = ν(p_F^{−1}(B \ {0_F})) + ν(p_F^{−1}(B ∩ {0_F}))
      = ρ(p_F^{−1}(B \ {0_F})) + ν(p_F^{−1}({0_F})) 1_{{0_F ∈ B}}.

Thus, (2.7) will be established if

    ν(p_F^{−1}({0_F})) = ρ(p_F^{−1}({0_F}))

for all finite F ⊆ T. By condition (ii) in Theorem 2.1, there is a countable set S with F ⊆ S ⊆ T such that

    ν(p_F^{−1}({0_F})) = ν(p_F^{−1}({0_F}) \ p_S^{−1}({0_S})),
    ρ(p_F^{−1}({0_F})) = ρ(p_F^{−1}({0_F}) \ p_S^{−1}({0_S})).

Since S is countable, there exists a sequence of finite sets F ⊆ G_n ↑ S so that

    ν(p_F^{−1}({0_F})) = lim_{n→∞} ν(p_F^{−1}({0_F}) \ p_{G_n}^{−1}({0_{G_n}})) = lim_{n→∞} ν(p_{G_n}^{−1}(p_{G_n F}^{−1}({0_F}) \ {0_{G_n}})).

Similarly, we get ρ(p_F^{−1}({0_F})) = lim_{n→∞} ρ(p_{G_n}^{−1}(p_{G_n F}^{−1}({0_F}) \ {0_{G_n}})). Now, (2.5) finishes the proof. □

Although a tail measure is not necessarily σ-finite, the next proposition provides a necessary and sufficient condition for it to be σ-finite.

Proposition 2.4. Under the assumptions of Theorem 2.1, a tail measure ν is σ-finite if and only if there is a countable set T_0 ⊆ T such that ν(p_{T_0}^{−1}({0_{T_0}})) = 0.

Proof. Suppose, first, that ν is σ-finite. Then there is a sequence (A_j) ⊆ B(R)^T with R^T = ∪_{j=1}^∞ A_j and ν(A_j) < ∞. In view of (2.6), each A_j can be written as A_j = p_{S_j}^{−1}(B_j) for some countable S_j ⊆ T and B_j ∈ B(R^{S_j}). Define the countable set T_1 = ∪_{j=1}^∞ S_j.

If 0_{S_j} ∈ B_j for some j ≥ 1, then p_{T_1}^{−1}({0_{T_1}}) ⊆ p_{S_j}^{−1}(B_j) and, hence, ν(p_{T_1}^{−1}({0_{T_1}})) ≤ ν(A_j) < ∞. From condition (ii) in Theorem 2.1, it follows that

    ν(p_{T_1}^{−1}({0_{T_1}}) \ p_{T_2}^{−1}({0_{T_2}})) = ν(p_{T_1}^{−1}({0_{T_1}}))


for some countable T_2 with T_1 ⊆ T_2 ⊆ T. Now we get

    ν(p_{T_2}^{−1}({0_{T_2}})) = ν(p_{T_1}^{−1}({0_{T_1}}) \ (p_{T_1}^{−1}({0_{T_1}}) \ p_{T_2}^{−1}({0_{T_2}})))
      = ν(p_{T_1}^{−1}({0_{T_1}})) − ν(p_{T_1}^{−1}({0_{T_1}}) \ p_{T_2}^{−1}({0_{T_2}})) = 0,

so T_0 = T_2 works. On the contrary, if 0_{S_j} ∉ B_j for all j ≥ 1, then

    ν(p_{T_1}^{−1}({0_{T_1}})) ≤ Σ_{j=1}^∞ ν(p_{T_1}^{−1}({0_{T_1}}) ∩ p_{S_j}^{−1}(B_j)) = Σ_{j=1}^∞ ν(∅) = 0,

so T_0 = T_1 works.

Conversely, assume that ν(p_{T_0}^{−1}({0_{T_0}})) = 0 for some countable T_0 ⊆ T. We can express R^T as

    R^T = ∪_{t∈T_0} ∪_{n=1}^∞ ( p_{T_0}^{−1}({0_{T_0}}) ∪ {x ∈ R^T : |x_t| > n^{−1}} ).

Since

    ν(p_{T_0}^{−1}({0_{T_0}}) ∪ {x ∈ R^T : |x_t| > n^{−1}}) = ν({x ∈ R^T : |x_t| > n^{−1}}) = µ_t({y : |y| > n^{−1}}) < ∞,

ν is σ-finite. □

The next proposition describes the homogeneity property of a tail measure. It can be proved directly from the homogeneity of the finite-dimensional Radon measures in (2.1), so we only present the result.

Proposition 2.5. Under the assumptions of Theorem 2.1, a tail measure ν has the homogeneity property: there exists an α > 0 such that

    ν(cA) = c^{−α} ν(A)   for all A ∈ B(R)^T, c > 0.

3. Examples

This section presents the tail measures of several stochastic processes: moving averages, independent processes, stochastic volatility processes, and GARCH processes.

Example 3.1. Let T = Z or R, and let (X_t, t ∈ T) be a stochastic process with integral representation

(3.1)    X_t = ∫_E f_t(x) dM(x),   t ∈ T,

where M is an independently scattered infinitely divisible random measure on a measurable space (E, E) with σ-finite control measure m and local Lévy measure ρ(s, ·), s ∈ E. The functions f_t are deterministic functions of the form f_t(x) = f ∘ ψ_t(x), x ∈ E, t ∈ T, where f : E → R is a measurable function and ψ_t : E → E, t ∈ T, is a family of measurable maps. Rajput and Rosiński (1989) describe a condition under which X_t is well defined, and this condition is assumed throughout


this example. Then (X_t, t ∈ T) is automatically a well-defined infinitely divisible process. The function level Lévy measure of (X_t, t ∈ T) is given by (ρ × m) ∘ h^{−1}, where h(x, s) = x f_·(s) ∈ R^T, x ∈ R, s ∈ E. Furthermore, we suppose the following conditions:

• There exists a measurable and regularly varying function H : (0, ∞) → (0, ∞) of index −α for some α > 0, and there exist measurable functions w_± : E → [0, ∞) such that for every s ∈ E,

(3.2)    lim_{u→∞} ρ(s, (u, ∞))/H(u) = w_+(s)   and   lim_{u→∞} ρ(s, (−∞, −u))/H(u) = w_−(s).

• The convergence above is uniform: there exists u_0 > 0 with

(3.3)    sup_{u ≥ u_0} ρ(s, (u, ∞))/H(u) ≤ 2 w_+(s)   and   sup_{u ≥ u_0} ρ(s, (−∞, −u))/H(u) ≤ 2 w_−(s)

for all s ∈ E.

• f : E → R is bounded on E and, for some ξ ∈ (0, α),

(3.4)    ∫_E w_±(s) |f_t(s)|^{α−ξ} m(ds) < ∞

for all t ∈ T.

Then one can show that (X_t, t ∈ T) has regularly varying tails with index α, and the tail measure of (X_t, t ∈ T) is given by ν = (ρ* × m) ∘ h^{−1}, where

(3.5)    ρ*(s, dx) = w_+(s) (α / x^{1+α}) 1_{{x>0}} dx + w_−(s) (α / |x|^{1+α}) 1_{{x<0}} dx.
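Before turning to the proof, here is a quick numerical sanity check of the claim that the index α is preserved. The Python sketch below simulates the simplest discrete special case of (3.1), a finite moving average of i.i.d. noise with a regularly varying right tail, and estimates the tail index with a Hill estimator; the kernel coefficients, sample size and number of order statistics are illustrative assumptions, not part of the example.

    import numpy as np

    rng = np.random.default_rng(1)

    # Simplest discrete instance of (3.1): a finite moving average of i.i.d. noise whose
    # right tail is regularly varying with index alpha.
    alpha, n = 1.2, 500_000
    coeffs = np.array([1.0, 0.5, 0.25])             # illustrative kernel values f_t(s)
    eps = rng.pareto(alpha, size=n + len(coeffs))   # one-sided regularly varying noise
    X = sum(c * eps[k:k + n] for k, c in enumerate(coeffs))

    def hill(x, k):
        # Hill estimator of the tail index from the k largest order statistics
        xs = np.sort(x)
        return 1.0 / np.mean(np.log(xs[-k:] / xs[-k - 1]))

    print(hill(X, 2000))    # close to alpha = 1.2 for a sample of this size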

To show this, we only have to prove that, as u → ∞,

    H(u)^{−1} P((X_{t_1}, …, X_{t_k}) ∈ u ·) → (ρ* × m){(x, s) : (x f_{t_1}(s), …, x f_{t_k}(s)) ∈ ·}

vaguely in R̄^k \ {0} for all t_1, …, t_k ∈ T, k ≥ 1. Equivalently, we need to prove that for all a_i > 0 and e_i ∈ {−1, 1}, i = 1, …, k,

(3.6)    H(u)^{−1} P(e_i X_{t_i} > a_i u, i = 1, …, k) → (ρ* × m){(x, s) : x e_i f_{t_i}(s) > a_i, i = 1, …, k}.

The tail behavior of the probability law of (X_t, t ∈ T) is known to coincide with that of its function level Lévy measure; see Theorem 2.1 in Rosiński and Samorodnitsky (1993). Thus, (3.6) is equivalent to

(3.7)    H(u)^{−1} (ρ × m){(x, s) : x e_i f_{t_i}(s) > a_i u, i = 1, …, k} → (ρ* × m){(x, s) : x e_i f_{t_i}(s) > a_i, i = 1, …, k}.


The left hand side of (3.7) is equal to

    ∫_{{e_i f_{t_i}(s) > 0, i=1,…,k}} H(u)^{−1} ρ(s, (u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1}, ∞)) m(ds)
      + ∫_{{e_i f_{t_i}(s) < 0, i=1,…,k}} H(u)^{−1} ρ(s, (−∞, −u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1})) m(ds).

On the other hand, the right hand side of (3.7) is equal to

    ∫_{{e_i f_{t_i}(s) > 0, i=1,…,k}} w_+(s) (min_{1≤i≤k} |f_{t_i}(s)|/a_i)^α m(ds)
      + ∫_{{e_i f_{t_i}(s) < 0, i=1,…,k}} w_−(s) (min_{1≤i≤k} |f_{t_i}(s)|/a_i)^α m(ds).

Due to their symmetric structure, it suffices to check the convergence of the integral defined on {e_i f_{t_i}(s) > 0, i = 1, …, k}. By condition (3.2),

    H(u)^{−1} ρ(s, (u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1}, ∞)) → w_+(s) (min_{1≤i≤k} |f_{t_i}(s)|/a_i)^α

for every s ∈ E. Therefore, we only need to justify taking the limit inside the integral. In order to apply the dominated convergence theorem, we must find a nonnegative function K ∈ L^1(E, m) such that

    H(u)^{−1} ρ(s, (u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1}, ∞)) ≤ K(s)

for every s ∈ E and all sufficiently large u > 0. In view of the uniformity condition (3.3), for all u ≥ u_0 sup_{s∈E} |f(s)| / max_{1≤i≤k} a_i (the right hand side is finite, since f is bounded),

    H(u)^{−1} ρ(s, (u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1}, ∞)) ≤ 2 w_+(s) H(u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1}) / H(u).

The Potter bounds (see e.g. Proposition 0.8 in Resnick (1987)) provide, for some C_i > 0, i = 1, 2,

    H(u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1}) / H(u) ≤ C_1 (max_{1≤i≤k} a_i/|f_{t_i}(s)|)^{−α+ξ} ≤ C_2 Σ_{i=1}^k |f_{t_i}(s)|^{α−ξ}

for all u large enough. Now, because of (3.4), an appropriate L^1(E, m)-upper bound K(·) is easy to find, and the proof is complete.

Example 3.2. We consider, once again, the process (3.1), but here the local Lévy measure ρ is independent of s ∈ E. We assume that ρ has a balanced regularly varying tail: for some α > 0, ρ({x : |x| > ·}) is regularly varying with index −α, and

(3.8)    ρ((y, ∞)) / ρ({x : |x| > y}) → p,   ρ((−∞, −y)) / ρ({x : |x| > y}) → q

as y → ∞, where 0 ≤ p, q ≤ 1 with p + q = 1.


In this example, we remove the boundedness assumption on f; instead, the following integrability condition is assumed: there exists 0 < β ≤ 2 such that, for all t ∈ T,

    ∫_E |f_t(s)|^{α−ξ} ∨ |f_t(s)|^β m(ds) < ∞ for some 0 < ξ < β − α,   if 0 < α < β,
    ∫_E |f_t(s)|^{α−ξ} ∨ |f_t(s)|^{α+ξ} m(ds) < ∞ for some 0 < ξ < α,   if α ≥ β.

If β ≠ 2, the lower tail behavior of ρ has to be specified explicitly, namely

    y^β ρ({x : |x| > y}) → 0   as y ↓ 0.

Under these assumptions, (X_t, t ∈ T) has, once again, regularly varying tails with index α, and its tail measure is given by ν = (ρ* × m) ∘ h^{−1}, where h is the same function as before, and

(3.9)    ρ*(dx) = p (α / x^{1+α}) 1_{{x>0}} dx + q (α / |x|^{1+α}) 1_{{x<0}} dx.

For the proof, let H(u) = ρ({x : |x| > u}). By the same argument as in Example 3.1, it suffices to verify that, for all t_1, …, t_k ∈ T, k ≥ 1, a_i > 0 and e_i ∈ {−1, 1}, i = 1, …, k,

    ∫_{{e_i f_{t_i}(s) > 0, i=1,…,k}} H(u)^{−1} ρ((u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1}, ∞)) m(ds)
      → p ∫_{{e_i f_{t_i}(s) > 0, i=1,…,k}} (min_{1≤i≤k} |f_{t_i}(s)|/a_i)^α m(ds),
    ∫_{{e_i f_{t_i}(s) < 0, i=1,…,k}} H(u)^{−1} ρ((−∞, −u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1})) m(ds)
      → q ∫_{{e_i f_{t_i}(s) < 0, i=1,…,k}} (min_{1≤i≤k} |f_{t_i}(s)|/a_i)^α m(ds).

By regular variation of H(u), we have, as u → ∞,

    H(u)^{−1} ρ((u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1}, ∞)) → p (min_{1≤i≤k} |f_{t_i}(s)|/a_i)^α,
    H(u)^{−1} ρ((−∞, −u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1})) → q (min_{1≤i≤k} |f_{t_i}(s)|/a_i)^α

for any s ∈ E. It remains to find a measurable function K ∈ L^1(E, m) such that

    H(u)^{−1} ρ({x : |x| > u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1}}) ≤ K(s)

for any s ∈ E and all sufficiently large u > 0. We see from the Potter bounds that, for some C_i > 0, i = 1, 2,

    [ρ({x : |x| > u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1}}) / ρ({x : |x| > u})] 1{u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1} > 1}
      ≤ C_1 { (max_{1≤i≤k} a_i/|f_{t_i}(s)|)^{−α+ξ} + (max_{1≤i≤k} a_i/|f_{t_i}(s)|)^{−α−ξ} }
      ≤ C_2 Σ_{i=1}^k { |f_{t_i}(s)|^{α−ξ} + |f_{t_i}(s)|^{α+ξ} }


for all u large enough. Since y^β ρ({x : |x| > y}) → 0 as y ↓ 0, there exists C_3 > 0 with ρ({x : |x| > y}) < C_3 y^{−β} for all 0 < y ≤ 1. Thus, for some C_4 > 0,

    [ρ({x : |x| > u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1}}) / ρ({x : |x| > u})] 1{u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1} ≤ 1}
      ≤ [C_4 / (u^β ρ({x : |x| > u}))] Σ_{i=1}^k |f_{t_i}(s)|^β 1{u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1} ≤ 1}.

If 0 < α < β, then u^β ρ({x : |x| > u}) → ∞ as u → ∞ and, hence,

    [ρ({x : |x| > u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1}}) / ρ({x : |x| > u})] 1{u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1} ≤ 1} ≤ C_4 Σ_{i=1}^k |f_{t_i}(s)|^β

for all u large enough. On the contrary, if α ≥ β, there is C_5 > 0 such that

    1 / (u^β ρ({x : |x| > u})) ≤ C_5 u^{α−β+ξ}

for all u large enough. Therefore, for some C_6 > 0,

    [ρ({x : |x| > u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1}}) / ρ({x : |x| > u})] 1{u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1} ≤ 1}
      ≤ C_4 C_5 u^{α−β+ξ} Σ_{i=1}^k |f_{t_i}(s)|^β 1{u max_{1≤i≤k} a_i |f_{t_i}(s)|^{−1} ≤ 1} ≤ C_6 Σ_{i=1}^k |f_{t_i}(s)|^{α+ξ}.

In either case, we have found an L^1(E, m)-function K(·), as desired.

Example 3.3. Let T be an arbitrary set and let X = (X_t, t ∈ T) be an independent process (i.e., for all t_1, …, t_k ∈ T, the random variables X_{t_1}, …, X_{t_k} are independent). Suppose that there is a regularly varying function H : (0, ∞) → (0, ∞) with index −α for some α > 0, such that for every t ∈ T there is a Radon measure µ_t on R̄ \ {0} with µ_t(R̄ \ R) = 0, such that, as u → ∞,

(3.10)    H(u)^{−1} P(X_t ∈ u ·) → µ_t(·)

vaguely in R̄ \ {0}. Assume that at least one Radon measure µ_{t_0}, t_0 ∈ T, is a nonzero measure. Then the tail measure of X is given by

    ν(A) = Σ_{t∈T} ν_t(A),   A ∈ B(R)^T,

where

    ν_t(A) = µ_t((1, ∞)) ∫_0^∞ α y^{−(α+1)} 1_{{y e(t) ∈ A}} dy + µ_t((−∞, −1)) ∫_{−∞}^0 α |y|^{−(α+1)} 1_{{y e(t) ∈ A}} dy,

and e(t) ∈ R^T is given by e(t)|_s = 1 if s = t and e(t)|_s = 0 otherwise.


To see this, note that the multivariate regular variation of (X_{t_1}, …, X_{t_k}) follows from (3.10) and the independence of X_{t_1}, …, X_{t_k} (see e.g. Lemma 7.2 in Resnick (2007)):

    H(u)^{−1} P((X_{t_1}, …, X_{t_k}) ∈ u ·) → Σ_{j=1}^k (ϵ_0 × ⋯ × µ_{t_j} × ⋯ × ϵ_0)

vaguely in R̄^k \ {0}, where µ_{t_j} occupies the j-th coordinate in the j-th summand. Here

    ϵ_0(A) = 1 if 0 ∈ A,   ϵ_0(A) = 0 otherwise.

It is easy to verify that, with F = {t_1, …, t_k},

    ν_{t_j} ∘ p_F^{−1} = ϵ_0 × ⋯ × µ_{t_j} × ⋯ × ϵ_0,   j = 1, …, k,
    ν_t ∘ p_F^{−1} = 0   if t ∉ F.

Therefore,

    Σ_{t∈T} ν_t ∘ p_F^{−1} = Σ_{j=1}^k (ϵ_0 × ⋯ × µ_{t_j} × ⋯ × ϵ_0).

Since the choice of the finite index set F is arbitrary, the tail measure of X turns out to be ν = Σ_{t∈T} ν_t.

Example 3.4. Let (X_n, n = 1, 2, …) be a simple stochastic volatility process of the form

    X_n = σ_n Z_n,   n = 1, 2, …,

where (σ_n) is a nonnegative stationary sequence representing volatility, and (Z_n) is a sequence of i.i.d. random variables, independent of (σ_n), with a regularly varying tail of index α. Let µ denote the limiting Radon measure of the regular variation of (Z_n). Assume that the volatility sequence (σ_n) has a significantly lighter tail than that of (Z_n); specifically, (σ_n) has a finite (α + ϵ)th moment for some ϵ > 0. Then the product X = (X_n, n = 1, 2, …) is a stationary sequence with regularly varying tails of the same index α, and the tail measure of X is given by

    ν(A) = E(σ^α) Σ_{j=1}^∞ ν_j(A)

for a Borel set A, where

    ν_j(A) = µ((1, ∞)) ∫_0^∞ α y^{−(α+1)} 1_{{y e(j) ∈ A}} dy + µ((−∞, −1)) ∫_{−∞}^0 α |y|^{−(α+1)} 1_{{y e(j) ∈ A}} dy,

with e(j) as in Example 3.3.
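Before the proof, the following Python sketch gives a numerical illustration of the Breiman mechanism underlying this example; it is not part of the argument. The volatility is taken i.i.d. lognormal (a special case of a light-tailed stationary sequence) and Z is Pareto with index α, so one expects P(X > x) ≈ E(σ^α) P(Z > x) for large x.

    import numpy as np

    rng = np.random.default_rng(2)

    # Stochastic volatility X_n = sigma_n Z_n with lognormal volatility (all moments finite)
    # and Pareto(alpha) noise; Breiman: P(X > x) ~ E[sigma^alpha] P(Z > x) as x -> infinity.
    alpha, n = 1.5, 1_000_000
    sigma = np.exp(0.5 * rng.standard_normal(n))
    Z = rng.pareto(alpha, size=n) + 1.0             # P(Z > z) = z^{-alpha}, z >= 1
    X = sigma * Z

    x = 50.0
    print(np.mean(X > x) / np.mean(Z > x))          # empirical tail ratio at level x
    print(np.mean(sigma ** alpha))                  # E[sigma^alpha]; the two should be close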

To see this, note first that since E(σ^{α+ϵ}) < ∞, the multivariate Breiman's theorem (see e.g. Basrak et al. (2002b)) yields

    H(u)^{−1} P((σ_1 Z_1, …, σ_k Z_k) ∈ u ·) → Σ_{j=1}^k E[(ϵ_0 × ⋯ × µ × ⋯ × ϵ_0){x : (σ_1 x_1, …, σ_k x_k) ∈ ·}]


vaguely in R̄^k \ {0} for every k ≥ 1, where µ occupies the j-th coordinate in the j-th summand. Because of the stationarity of (σ_n) and the homogeneity property of µ,

    Σ_{j=1}^k E[(ϵ_0 × ⋯ × µ × ⋯ × ϵ_0){x : (σ_1 x_1, …, σ_k x_k) ∈ ·}] = E(σ^α) Σ_{j=1}^k (ϵ_0 × ⋯ × µ × ⋯ × ϵ_0).

The same argument as in Example 3.3 establishes

    Σ_{j=1}^k (ϵ_0 × ⋯ × µ × ⋯ × ϵ_0) = Σ_{j=1}^∞ ν_j ∘ p_{{1,…,k}}^{−1},

which finishes the proof.

If Z_n = Z_1 for all n = 1, 2, …, then Z_1 acts as a common heavy-tailed component for (X_n), and (X_n) is expected to exhibit longer memory than in the setup above with i.i.d. (Z_n). In this case, the tail measure of X can be written simply as

    E µ{x : x (σ_1, σ_2, …)′ ∈ ·}.

Example 3.5. This example considers the tail measure of GARCH processes. Regular variation of GARCH processes was rigorously discussed by Basrak et al. (2002b) from the viewpoint of stochastic recurrence equations, and a nice review of heavy-tailed GARCH processes, including continuous-time models, is provided by Fasen (2010). In order to calculate the tail measure explicitly, we concentrate on GARCH(1, 1) processes; the argument is, to some extent, parallel to that of Davis and Mikosch (2009). Specifically, we consider the following GARCH(1, 1) process:

(3.11)    X_n = σ_n Z_n,   n ∈ N,

(3.12)    σ_n^2 = α_0 + α_1 X_{n−1}^2 + β_1 σ_{n−1}^2,

where α_0, α_1 and β_1 are positive constants, σ_0 is a nonnegative random variable, and (Z_n) are i.i.d. symmetric random variables with unit variance. Let A_n = α_1 Z_{n−1}^2 + β_1, and suppose that E log A = E log(α_1 Z^2 + β_1) < 0. Under this condition, there exists a stationary solution (X_n, σ_n) of the stochastic recurrence equations (3.11) and (3.12); see Babillot et al. (1997) for more details. Assume, additionally, that the law of log A is nonarithmetic, that P(A > 1) > 0, and that there exists 1 < h_0 ≤ ∞ such that E(A^h) < ∞ for all h < h_0 and E(A^{h_0}) = ∞. Then some constant α > 0 satisfies E(A^{α/2}) = 1 and, furthermore, (σ_n) is regularly varying with index α (see Mikosch and Stărică (2000) for a detailed proof). We denote by µ the limiting Radon measure for the regular variation of σ_n; namely, there is a function H : (0, ∞) → (0, ∞) such that

    H(u)^{−1} P(σ_n ∈ u ·) → µ(·)

vaguely in R̄ \ {0}.
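For a concrete feel for this example, the Python sketch below simulates a GARCH(1,1) path and solves E(A^{α/2}) = 1 numerically for the theoretical tail index α. The parameter values α_0 = 0.1, α_1 = 0.3, β_1 = 0.5 and the Gaussian innovations are illustrative assumptions, not taken from the text; a Hill estimate computed from the simulated |X_n| can be compared with the printed value.

    import numpy as np

    rng = np.random.default_rng(3)

    # GARCH(1,1) as in (3.11)-(3.12); parameter values are illustrative only.
    a0, a1, b1 = 0.1, 0.3, 0.5
    n = 500_000
    Z = rng.standard_normal(n)
    sigma2 = np.empty(n)
    X = np.empty(n)
    sigma2[0] = a0 / (1.0 - a1 - b1)                # start at the stationary mean of sigma^2
    X[0] = np.sqrt(sigma2[0]) * Z[0]
    for t in range(1, n):
        sigma2[t] = a0 + a1 * X[t - 1] ** 2 + b1 * sigma2[t - 1]
        X[t] = np.sqrt(sigma2[t]) * Z[t]            # the path X can be used for a Hill check

    # The tail index alpha solves E[(a1 Z^2 + b1)^{alpha/2}] = 1; Monte Carlo plus bisection.
    A = a1 * rng.standard_normal(2_000_000) ** 2 + b1
    def excess_moment(h):
        return np.mean(A ** (h / 2.0)) - 1.0

    lo, hi = 0.1, 20.0                              # excess_moment(lo) < 0 < excess_moment(hi)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if excess_moment(mid) < 0 else (lo, mid)
    print(0.5 * (lo + hi))                          # theoretical tail index of sigma_n and X_n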


Then the stationary sequence (X_n) can be shown to have regularly varying tails with the same index α, and one can also see that the tail measure of (X_n) is given by

    E µ{x : x (Z_0, Z_1 √A_1, Z_2 √(A_1 A_2), …) ∈ ·}.

For the proof, fix (a_0, …, a_k) ∈ [0, ∞)^{k+1} \ {0} and e_i ∈ {−1, 1}, i = 0, …, k. Lemma 2.1 in Davis and Mikosch (2009) gives a useful approximation of (X_0, …, X_k):

    (X_0, X_1, …, X_k) = σ_0 (Z_0, Z_1 √A_1, …, Z_k √(A_1 A_2 ⋯ A_k)) + R,

where H(u)^{−1} P(∥R∥ > uϵ) → 0, as u → ∞, for every ϵ > 0. Thus, as u → ∞, we have

    H(u)^{−1} P(e_i X_i > u a_i, i = 0, …, k) ∼ H(u)^{−1} P(σ_0 e_0 Z_0 > u a_0, σ_0 e_1 Z_1 √A_1 > u a_1, …, σ_0 e_k Z_k √(A_1 ⋯ A_k) > u a_k).

Since E(|Z_i| √(A_1 ⋯ A_i))^{α+ϵ} < ∞ for ϵ small enough, an application of the multivariate Breiman's theorem yields

    H(u)^{−1} P(σ_0 e_0 Z_0 > u a_0, σ_0 e_1 Z_1 √A_1 > u a_1, …, σ_0 e_k Z_k √(A_1 ⋯ A_k) > u a_k)
      → E µ{x : x e_0 Z_0 > a_0, x e_1 Z_1 √A_1 > a_1, …, x e_k Z_k √(A_1 ⋯ A_k) > a_k},

as required.

4. Connection between Tail Measures and Other Related Notions

This section investigates the relation between tail measures and the related notions discussed above; the tail measure turns out to be a more comprehensive notion than these alternatives. First of all, for a stationary sequence (X_n, n ≥ 0), the relation between the tail measure ν and the upper tail dependence coefficient (1.3) is given by

    λ(n) = ν{x ∈ R^N : min(x_0, x_n) > 1} / ν{x ∈ R^N : x_0 > 1}.

Let (X_t, t ∈ T), T = R or Z, be a stationary process with tail measure ν. Fasen (2010) studied extremal dependence measures defined by

    χ̄_{(t_1…t_d)}(y_1, …, y_d) = lim_{u→∞} H(u)^{−1} P(X_{t_1} > u y_1, …, X_{t_d} > u y_d),

    χ_{(t_1…t_d)}(y_1, …, y_d) = lim_{u→∞} H(u)^{−1} P(X_{t_1} > u y_1 or … or X_{t_d} > u y_d),

for some regularly varying function H : (0, ∞) → (0, ∞). One obtains the obvious relation

    χ̄_{(t_1…t_d)}(y_1, …, y_d) = ν{x ∈ R^T : x_{t_j} > y_j, j = 1, …, d}.


The connection between ν and χ can also be formulated easily via the inclusion-exclusion property.

Let (X_n, n ≥ 0) be a stationary process in R^d. The tail measure ν then relates to the extremograms (1.4) and (1.5) through

    γ_{AB}(n) = ν{x ∈ (R^d)^N : x_0 ∈ A, x_n ∈ B},

    ρ_{AB}(n) = ν{x ∈ (R^d)^N : x_0 ∈ A, x_n ∈ B} / ν{x ∈ (R^d)^N : x_0 ∈ A},

where x_i = (x_{i,1}, …, x_{i,d}), i ≥ 0, and both A and A × B are Borel sets bounded away from zero. In relation to the examples in the preceding section, Fasen (2010) calculated the upper tail dependence coefficient and the extremal dependence measure of the process in Example 3.5, and the extremograms for the processes in Examples 3.4 and 3.5 are provided by Davis and Mikosch (2009). Moreover, it is not difficult to calculate these quantities for the infinitely divisible processes in Examples 3.1 and 3.2.

We now consider a multivariate stationary time series X = (X_n, n ∈ Z) in R^d with regularly varying tails of index α > 0. Basrak and Segers (2009) defined a limiting process Y = (Y_n, n ∈ Z) in R^d, called the tail process, by

    P((X_m, …, X_n) ∈ u · | ∥X_0∥ > u) → P((Y_m, …, Y_n) ∈ ·)

weakly in R^{d(n−m+1)} for all m, n ∈ Z with m ≤ n. Here ∥·∥ is an arbitrary norm on R^d. On the other hand, the tail measure ν of X satisfies

    P((X_m, …, X_n) ∈ u · | ∥X_0∥ > u) → ν{x ∈ (R^d)^Z : (x_m, …, x_n) ∈ ·, ∥x_0∥ > 1} / ν{x ∈ (R^d)^Z : ∥x_0∥ > 1}.

Therefore, we conclude

    P(Y ∈ ·) = ν{x ∈ (R^d)^Z : x ∈ ·, ∥x_0∥ > 1} / ν{x ∈ (R^d)^Z : ∥x_0∥ > 1}.

Basrak and Segers (2009) also defined the spectral process of X by Θ_n = Y_n / ∥Y_0∥, n ∈ Z. Because of their Corollary 3.2, we find that Θ = (Θ_n, n ∈ Z) satisfies

    P(Θ ∈ ·) = ν{x ∈ (R^d)^Z : x/∥x_0∥ ∈ ·, ∥x_0∥ > 1} / ν{x ∈ (R^d)^Z : ∥x_0∥ > 1}.

The two most important properties of the tail process and the spectral process of X are given in Theorem 3.1 of Basrak and Segers (2009). For instance, statement (iii) of that theorem says that for all i, m, n ∈ Z with m ≤ 0 ≤ n and for all bounded and continuous f : (R^d)^{n−m+1} → R,

(4.1)    E[f(Θ_{m−i}, …, Θ_{n−i})] = E[f(Θ_m/∥Θ_i∥, …, Θ_n/∥Θ_i∥) ∥Θ_i∥^α].
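The tail and spectral processes admit straightforward empirical versions obtained by conditioning on a high threshold. The Python sketch below (an illustration, not part of the paper's argument) does this at lag one for a heavy-tailed AR(1) with positive noise; for this model the forward tail process is Y_n = Y_0 φ^n, so the empirical Θ_1 should concentrate near the AR coefficient φ. All parameter choices are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)

    # Empirical lag-one spectral process for X_t = phi X_{t-1} + Z_t with positive
    # Pareto noise: a large X_t is driven by one big innovation, so X_{t+1}/X_t ~ phi
    # on the event {X_t > u} for a high threshold u.
    phi, alpha, n = 0.7, 1.5, 500_000
    Z = rng.pareto(alpha, size=n) + 1.0
    X = np.empty(n)
    X[0] = Z[0]
    for t in range(1, n):
        X[t] = phi * X[t - 1] + Z[t]

    u = np.quantile(X, 0.999)
    idx = np.nonzero(X[:-1] > u)[0]      # times t with X_t above the threshold
    theta1 = X[idx + 1] / X[idx]         # empirical draws of Theta_1
    print(np.median(theta1))             # close to phi = 0.7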


Exploiting some nice properties of tail measures, we can provide a more natural alternative proof of (4.1). First, recall that the tail measure ν possesses the homogeneity property of Proposition 2.5. The second property is that, due to the stationarity of X, ν is shift invariant, that is,

    ν ∘ ϕ_n^{−1} = ν   for every n ∈ Z,

where ϕ_n : (R^d)^Z → (R^d)^Z is the shift operator defined by

    ϕ_n(…, x_{−1}, x_0, x_1, …) = (…, x_{n−1}, x_n, x_{n+1}, …).

For the proof of (4.1), suppose for notational ease that ν{x ∈ (R^d)^Z : ∥x_0∥ > 1} = 1. Using the identity

    (1/∥x_{−i}∥^α) ∫_0^{∥x_{−i}∥} α u^{α−1} du = 1,

we write

    E[f(Θ_{m−i}, …, Θ_{n−i})] = ∫_0^∞ ∫_{(R^d)^Z} f(x_{m−i}/∥x_0∥, …, x_{n−i}/∥x_0∥) (α u^{α−1}/∥x_{−i}∥^α) 1_{{∥x_{−i}∥ > u, ∥x_0∥ > 1}} ν(dx) du.

By virtue of the shift invariance and the homogeneity property of ν,

    ∫_0^∞ ∫_{(R^d)^Z} f(x_{m−i}/∥x_0∥, …, x_{n−i}/∥x_0∥) (α u^{α−1}/∥x_{−i}∥^α) 1_{{∥x_{−i}∥ > u, ∥x_0∥ > 1}} ν(dx) du
      = ∫_0^∞ ∫_{(R^d)^Z} f(x_{m−i}/∥x_0∥, …, x_{n−i}/∥x_0∥) (α u^{−α−1}/∥x_{−i}∥^α) 1_{{∥x_{−i}∥ > 1, ∥x_0∥ > u^{−1}}} ν(dx) du
      = ∫_{(R^d)^Z} f(x_{m−i}/∥x_0∥, …, x_{n−i}/∥x_0∥) (∥x_0∥^α/∥x_{−i}∥^α) 1_{{∥x_{−i}∥ > 1}} ν(dx)
      = ∫_{(R^d)^Z} f(x_m/∥x_i∥, …, x_n/∥x_i∥) (∥x_i∥^α/∥x_0∥^α) 1_{{∥x_0∥ > 1}} ν(dx)
      = E[f(Θ_m/∥Θ_i∥, …, Θ_n/∥Θ_i∥) ∥Θ_i∥^α].

Notice that a similar argument proves statement (ii) of Theorem 3.1 in Basrak and Segers (2009) as well.

5. Application: Ergodic Theoretical Properties of Tail Measures and Those of the Probability Laws of (X_t, t ∈ T)

In this section, we always consider a stationary process X = (X_t, t ∈ T) with T = Z or R, assuming that X has regularly varying tails. Let ν be the tail measure of X. As pointed out in the preceding section, ν is shift invariant:

    ν ∘ ϕ_t^{−1} = ν   for every t ∈ T,


where ϕ_t : R^T → R^T is defined by ϕ_t(x) = x_{t+·}, x ∈ R^T. Thus, we are motivated to study the properties of the tail measure from an ergodic-theoretical viewpoint. In particular, we investigate the connection between the ergodic-theoretical properties of the tail measure and those of the probability law of the process X.

Here we recall the so-called positive-null decomposition, by which the ergodic properties of the tail measure will be rigorously described. For details, we refer to Wang et al. (2011), which is essentially based on Takahashi (1971); see also Aaronson (1997) and Krengel (1985). First, suppose that the tail measure ν is σ-finite (a necessary and sufficient condition for σ-finiteness is given in Proposition 2.4). Then (R^T, B(R)^T, ν) becomes a standard Lebesgue measure space (see Appendix A in Pipiras and Taqqu (2004) for the terminology). We define

    Λ = {Q ≪ ν : Q is a finite measure on R^T, Q ∘ ϕ_t^{−1} = Q for all t ∈ T},
    S_Q = {x ∈ R^T : dQ/dν(x) > 0},   Q ∈ Λ.

According to Lemma 2.2 in Wang et al. (2011), {S_Q : Q ∈ Λ} has a unique maximal element P in the sense that
(i): for all R ∈ Λ, ν(S_R \ P) = 0;
(ii): if there exists another P′ satisfying (i), then P = P′ mod ν.
Such a P is called a positive part and N = R^T \ P a null part. It is shown in Theorem 2.3 of Wang et al. (2011) that both P and N are invariant with respect to (ϕ_t, t ∈ T), i.e., for all t ∈ T,

    ν(ϕ_t^{−1} P △ P) = 0   and   ν(ϕ_t^{−1} N △ N) = 0.

If R^T = P mod ν, then (ϕ_t, t ∈ T) is said to be a positive flow, and if R^T = N mod ν, then it is called a null flow.

Our first result below relates the ergodic properties of the flow (ϕ_t, t ∈ T) defined on (R^T, B(R)^T, ν) to the Cesàro-type convergence of ν{x ∈ R^T : |x_0| > 1, |x_t| > 1}. More precisely, if (ϕ_t, t ∈ T) is a null flow, then ν{x ∈ R^T : |x_0| > 1, |x_t| > 1} converges to zero in the Cesàro sense. On the contrary, if (ϕ_t, t ∈ T) has a positive component, then the same quantity does not converge to zero in the Cesàro sense. Alternatively, we may say that if (ϕ_t, t ∈ T) has a positive component, then the process X exhibits stronger dependence among its extremes.

Proposition 5.1. Let λ denote either the counting measure (if T = Z) or the Lebesgue measure (if T = R).
(i): If (ϕ_t, t ∈ T) is a null flow on (R^T, B(R)^T, ν), then

    (1/T) ∫_{[0,T]} ν{x ∈ R^T : |x_0| > δ, |x_t| > δ} λ(dt) → 0


for every δ > 0.
(ii): If (ϕ_t, t ∈ T) has a positive component on (R^T, B(R)^T, ν), then

    lim inf_{T→∞} (1/T) ∫_{[0,T]} ν{x ∈ R^T : |x_0| > δ, |x_t| > δ} λ(dt) > 0

for every δ > 0 with ν{x ∈ P : |x_0| > δ} > 0. Here, P is a positive part of R^T.

Proof. Because of the homogeneity property of ν, it suffices to check both statements for δ = 1.

(i): Let (ϕ_t, t ∈ T) be a null flow defined on (R^T, B(R)^T, ν). Then

    (1/T) ∫_{[0,T]} ν{x ∈ R^T : |x_0| > 1, |x_t| > 1} λ(dt) = ∫_0^1 ν{x ∈ A : (1/T) ∫_{[0,T]} 1_A ∘ ϕ_t(x) λ(dt) > y} dy,

where A = {x ∈ R^T : |x_0| > 1} is a measurable set of finite ν-measure. It follows from Krengel's stochastic ergodic theorem (see Theorem 4.9 of Krengel (1985)) that

    ν{x ∈ A : (1/T) ∫_{[0,T]} 1_A ∘ ϕ_t(x) λ(dt) > y} → 0

for every 0 < y ≤ 1, and the result follows by dominated convergence.

(ii): By virtue of (i), we may assume without loss of generality that (ϕ_t, t ∈ T) is a positive flow on the whole measure space (R^T, B(R)^T, ν). Then there exists a probability measure Q that is equivalent to ν and is preserved under (ϕ_t, t ∈ T). Let g = dQ/dν be the corresponding Radon-Nikodym derivative and let A = {x ∈ R^T : |x_0| > 1}. Since Q is a probability measure, the Birkhoff ergodic theorem yields

    (1/T) ∫_{[0,T]} 1_A ∘ ϕ_t(x) λ(dt) → E_Q(1_A | I),   Q-a.e.,

where I is the σ-field of all (ϕ_t, t ∈ T)-invariant measurable sets. Consequently,

    (1/T) ∫_{[0,T]} ∫_A 1_A ∘ ϕ_t(x) Q(dx) λ(dt) = (1/T) ∫_{[0,T]} ∫_{A ∩ ϕ_t^{−1} A} g(x) ν(dx) λ(dt) → ∫_A E_Q(1_A | I) dQ > 0.

Choose K > 0 so that

    ∫_A g(x) 1_{{g(x) > K}} ν(dx) ≤ (1/2) ∫_A E_Q(1_A | I) dQ.

Now we have

    (1/T) ∫_{[0,T]} ∫_{A ∩ ϕ_t^{−1} A} g(x) ν(dx) λ(dt) ≤ (1/2) ∫_A E_Q(1_A | I) dQ + K (1/T) ∫_{[0,T]} ν(A ∩ ϕ_t^{−1} A) λ(dt).

Therefore,

    lim inf_{T→∞} (1/T) ∫_{[0,T]} ν(A ∩ ϕ_t^{−1} A) λ(dt) ≥ (1/(2K)) ∫_A E_Q(1_A | I) dQ > 0.  □
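Proposition 5.1 suggests a simple heuristic diagnostic: replace ν{x : |x_0| > δ, |x_t| > δ} by a sample extremogram at a finite threshold and average it over lags. The Python sketch below is a rough illustration under arbitrary modelling choices, not a statement from the paper: for a weakly dependent heavy-tailed AR(1), the Cesàro averages decrease toward the baseline exceedance rate, the finite-threshold analogue of the convergence to zero in part (i).

    import numpy as np

    rng = np.random.default_rng(5)

    # Cesaro averages of the sample extremogram for a weakly dependent heavy-tailed AR(1).
    alpha, n = 1.5, 400_000
    Z = rng.pareto(alpha, size=n) + 1.0
    X = np.empty(n)
    X[0] = Z[0]
    for t in range(1, n):
        X[t] = 0.5 * X[t - 1] + Z[t]

    u = np.quantile(X, 0.99)
    e = X > u
    rho = np.array([np.mean(e[:-h] & e[h:]) / np.mean(e) for h in range(1, 1001)])
    for T in (10, 50, 200, 1000):
        # average over lags 1..T; it decreases toward the baseline rate 1 - 0.99 = 0.01
        print(T, rho[:T].mean())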

In the sequel, we focus on the process studied in Examples 3.1 and 3.2: with T = Z or R,

(5.1)    X_t = ∫_E f_t(x) dM(x),   t ∈ T.

Here M is an independently scattered infinitely divisible random measure on a measurable space (E, E) with local Lévy measure ρ(s, ·), s ∈ E, and σ-finite control measure m. Assume that M has no Gaussian component; in other words, the characteristic function of M(A), for an m-finite set A ∈ E, is given by

(5.2)    E e^{iuM(A)} = exp{ ∫_A [ iub(s) + ∫_R (e^{iux} − 1 − iuτ(x)) ρ(s, dx) ] m(ds) },

where b : E → R and τ(x) = x / max{1, |x|}. The functions f_t are defined by

(5.3)    f_t(x) = f ∘ ψ_t(x),   x ∈ E, t ∈ T,

where ψ_t : E → E, t ∈ T, is a family of measurable maps, and f : E → R is a measurable function. The maps ψ_t and the function f are chosen in such a way that the resulting process X = (X_t, t ∈ T) is a stationary and well-defined infinitely divisible process; see Rajput and Rosiński (1989).

Examples 3.1 and 3.2 have shown that the tail measure of X is (ρ* × m) ∘ h^{−1}, where ρ* is defined by either (3.5) or (3.9). As seen in Proposition 5.1, the Cesàro convergence of

    (ρ* × m) ∘ h^{−1}{x ∈ R^T : |x_0| > δ, |x_t| > δ}

is characterized by the ergodic-theoretical properties of the flow (ϕ_t, t ∈ T) defined on (R^T, B(R)^T). Moreover, ergodicity of the probability law of X is characterized by the Cesàro convergence of the Lévy measure of X: namely, X is ergodic if and only if, for every η > 0,

    (1/T) ∫_{[0,T]} (ρ × m) ∘ h^{−1}{x ∈ R^T : |x_0| > η, |x_t| > η} λ(dt) → 0   as T → ∞;

see e.g. Rosiński and Żak (1997). Due to the similarity of the Lévy measure (ρ × m) ∘ h^{−1} and the tail measure (ρ* × m) ∘ h^{−1}, a strong connection between the ergodic properties of (ϕ_t, t ∈ T) and ergodicity of X is expected.

Theorem 5.2. Let (X_t, t ∈ T) be a stationary infinitely divisible process of the form (5.1), where M is an independently scattered infinitely divisible random measure given in (5.2), and f_t is defined in (5.3). We assume (3.2) and (3.4) and, furthermore, that the regularly varying function H : (0, ∞) → (0, ∞) is bounded away from infinity on every compact interval. We also assume a stronger version of (3.3): for all v > 0 there exists K(v) > 0 such that

(5.4)    sup_{u ≥ v} ρ(s, (u, ∞))/H(u) ≤ K(v) w_+(s)   and   sup_{u ≥ v} ρ(s, (−∞, −u))/H(u) ≤ K(v) w_−(s)


for all s ∈ E. We also put an extra assumption on the lower bound of the quantities in (3.2): there exist u_0 > 0 and L > 0 such that

(5.5)    ρ(s, (u_0, ∞))/H(u_0) ≥ L w_+(s)   and   ρ(s, (−∞, −u_0))/H(u_0) ≥ L w_−(s)

for all s ∈ E. Applying the positive-null decomposition to the tail measure ν = (ρ* × m) ∘ h^{−1}, we write ν = ν|_N + ν|_P. Then (X_t, t ∈ T) is ergodic if and only if ν|_P is identically zero.

Proof. Recall that (X_t, t ∈ T) is ergodic if and only if, for every η > 0,

(5.6)    (1/T) ∫_{[0,T]} (ρ × m){(x, s) : |x f(s)| > η, |x f_t(s)| > η} λ(dt) → 0   as T → ∞.

First, we will prove that (5.6) is equivalent to

(5.7)    (1/T) ∫_{[0,T]} ∫_{A_t^{(ϵ)}} (w_+(s) + w_−(s)) m(ds) λ(dt) → 0   as T → ∞

for every ϵ > 0, where A_t^{(ϵ)} = {s ∈ E : |f(s)| > ϵ, |f_t(s)| > ϵ}.

Assume that (5.6) holds for every η > 0. For any ϵ > 0, let δ = ϵ u_0. Then

    (1/T) ∫_{[0,T]} (ρ × m){(x, s) : |x f(s)| > δ, |x f_t(s)| > δ} λ(dt)
      ≥ (1/T) ∫_{[0,T]} (ρ × m){(x, s) : |x| > u_0, |f(s)| > ϵ, |f_t(s)| > ϵ} λ(dt)
      = (H(u_0)/T) ∫_{[0,T]} ∫_{A_t^{(ϵ)}} [ρ(s, {x : |x| > u_0}) / H(u_0)] m(ds) λ(dt)
      ≥ (L H(u_0)/T) ∫_{[0,T]} ∫_{A_t^{(ϵ)}} (w_+(s) + w_−(s)) m(ds) λ(dt).

Here the last inequality follows from (5.5), and thus (5.6) implies (5.7), which proves one direction of the assertion.

Conversely, assume that (5.7) holds for every ϵ > 0. For every η > 0, we split the integral in (5.6) into three parts:

    (1/T) ∫_{[0,T]} (ρ × m){(x, s) : |x f(s)| > η, |x f_t(s)| > η} λ(dt)
      = (1/T) ∫_{[0,T]} ∫_{{|f(s)| ≤ δ}} ρ(s, {x : |x f(s)| > η, |x f_t(s)| > η}) m(ds) λ(dt)
      + (1/T) ∫_{[0,T]} ∫_{{|f(s)| > δ, |f_t(s)| ≤ ϵ}} ρ(s, {x : |x f(s)| > η, |x f_t(s)| > η}) m(ds) λ(dt)
      + (1/T) ∫_{[0,T]} ∫_{{|f(s)| > δ, |f_t(s)| > ϵ}} ρ(s, {x : |x f(s)| > η, |x f_t(s)| > η}) m(ds) λ(dt)
      = I_1 + I_2 + I_3.


Notice that (ρ × m){(x, s) : |x f(s)| > η} < ∞, since the process X is well defined. For the first term I_1, the stationarity of the process and the Cauchy-Schwarz inequality give the uniform upper bound

    I_1 ≤ ((ρ × m){(x, s) : |x f(s)| > η, |f(s)| ≤ δ})^{1/2} ((ρ × m){(x, s) : |x f(s)| > η})^{1/2}.

The right hand side converges to zero as δ ↓ 0, by the dominated convergence theorem. Next, we get

    I_2 ≤ (ρ × m){(x, s) : |x f(s)| > η, |x| > η/ϵ},

which goes to zero as ϵ ↓ 0, again by the dominated convergence theorem.

Fix δ > 0 and ϵ > 0 so small that both I_1 and I_2 are sufficiently small. Applying the Cauchy-Schwarz inequality,

    I_3 ≤ ( (1/T) ∫_{[0,T]} ∫_{{|f(s)| > δ, |f_t(s)| > ϵ}} ρ(s, {x : |x f(s)| > η}) m(ds) λ(dt) )^{1/2} ((ρ × m){(x, s) : |x f(s)| > η})^{1/2}.

Thus, it suffices to show that, for every ϵ > 0,

    (1/T) ∫_{[0,T]} ∫_{A_t^{(ϵ)}} ρ(s, {x : |x f(s)| > η}) m(ds) λ(dt) → 0   as T → ∞.

From (5.4), we have

    (1/T) ∫_{[0,T]} ∫_{A_t^{(ϵ)}} ρ(s, {x : |x f(s)| > η}) m(ds) λ(dt)
      ≤ (K(η / sup_{s∈E} |f(s)|) / T) ∫_{[0,T]} ∫_{A_t^{(ϵ)}} (w_+(s) + w_−(s)) H(η |f(s)|^{−1}) m(ds) λ(dt).

Since H is bounded away from infinity on every compact interval, the Potter bounds yield, for some C > 0,

    H(η |f(s)|^{−1}) ≤ C (η / sup_{s∈E} |f(s)|)^{−α/2} < ∞

for all s ∈ E. Therefore, (5.7) completes the other direction of the assertion.

We have now checked that (5.6) and (5.7) are equivalent. Observe that even if one replaces ρ with the measure ρ* defined in (3.5), statements (5.6) and (5.7) remain equivalent: indeed, ρ* satisfies (3.2), (5.4) and (5.5) if we set H(u) = u^{−α}. In conclusion, (5.6) is equivalent to

(5.8)    (1/T) ∫_{[0,T]} (ρ* × m){(x, s) : |x f(s)| > η, |x f_t(s)| > η} λ(dt) → 0   as T → ∞

for every η > 0. However, we find from Proposition 5.1 that (5.8) holds if and only if ν|_P is identically zero. □



We next study the process given in Example 3.2.

Theorem 5.3. Let (X_t, t ∈ T) be a stationary infinitely divisible process of the form (5.1), where M is an independently scattered infinitely divisible random measure given in (5.2), and f_t is defined


in (5.3). Here, however, we let ρ be independent of s ∈ E and assume the balanced regular variation condition (3.8). We specify the integrability of f as follows: for every t ∈ T,

    ∫_E |f_t(s)|^{α−ξ} ∨ |f_t(s)|^2 m(ds) < ∞ for some 0 < ξ < 2 − α,   if 0 < α < 2,
    ∫_E |f_t(s)|^{α−ξ} ∨ |f_t(s)|^{α+ξ} m(ds) < ∞ for some 0 < ξ < α,   if α ≥ 2.

Furthermore, if 0 < α < 2, the lower tail of ρ is assumed to satisfy

(5.9)    x^{p_0} ρ({y : |y| > x}) → 0   as x ↓ 0

for some p_0 ∈ (α, 2). Under this setup, (X_t, t ∈ T) is ergodic if and only if ν|_P is identically zero.

Proof. We only prove that

(5.10)    (1/T) ∫_{[0,T]} (ρ × m){(x, s) : |x f(s)| > η, |x f_t(s)| > η} λ(dt) → 0   for every η > 0,

is equivalent to

(5.11)    (1/T) ∫_{[0,T]} m(A_t^{(ϵ)}) λ(dt) → 0   for every ϵ > 0,

where A_t^{(ϵ)} = {s ∈ E : |f(s)| > ϵ, |f_t(s)| > ϵ}. Once this equivalence is established, the rest of the argument is almost the same as in Theorem 5.2.

First, we assume (5.10). For any ϵ > 0,

    (1/T) ∫_{[0,T]} (ρ × m){(x, s) : |x f(s)| > ϵ, |x f_t(s)| > ϵ} λ(dt)
      ≥ (1/T) ∫_{[0,T]} (ρ × m){(x, s) : |x| > 1, |f(s)| > ϵ, |f_t(s)| > ϵ} λ(dt)
      = (ρ({x : |x| > 1}) / T) ∫_{[0,T]} m(A_t^{(ϵ)}) λ(dt).

Thus, T^{−1} ∫_{[0,T]} m(A_t^{(ϵ)}) λ(dt) → 0 as T → ∞.

Assume, conversely, that (5.11) holds. Once again, we need to split the integral in (5.10) into three terms. For every η > 0,

    (1/T) ∫_{[0,T]} (ρ × m){(x, s) : |x f(s)| > η, |x f_t(s)| > η} λ(dt)
      = (1/T) ∫_{[0,T]} ∫_{{|f(s)| ≤ δ}} ρ({x : |x f(s)| > η, |x f_t(s)| > η}) m(ds) λ(dt)
      + (1/T) ∫_{[0,T]} ∫_{{|f(s)| > δ, |f_t(s)| ≤ ϵ}} ρ({x : |x f(s)| > η, |x f_t(s)| > η}) m(ds) λ(dt)
      + (1/T) ∫_{[0,T]} ∫_{{|f(s)| > δ, |f_t(s)| > ϵ}} ρ({x : |x f(s)| > η, |x f_t(s)| > η}) m(ds) λ(dt)
      = I_1 + I_2 + I_3.


By a similar argument as in the proof of Theorem 5.2, I_1 and I_2 can be made arbitrarily small by taking δ > 0 and ϵ > 0 sufficiently small. Having fixed such δ > 0 and ϵ > 0, and assuming ϵ < δ without loss of generality, we have

    I_3 ≤ (1/T) ∫_{[0,T]} ∫_E 1_{A_t^{(ϵ)}}(s) ρ({x : |x f(s)| > η}) m(ds) λ(dt).

If 0 < α < 2, an application of Hölder's inequality provides

    I_3 ≤ ( (1/T) ∫_{[0,T]} m(A_t^{(ϵ)}) λ(dt) )^{1−p_0/2} ( (1/T) ∫_{[0,T]} ∫_{A_t^{(ϵ)}} ρ({x : |x f(s)| > η})^{2/p_0} m(ds) λ(dt) )^{p_0/2}.

By virtue of (5.11), it is enough to verify

(5.12)    lim sup_{T→∞} (1/T) ∫_{[0,T]} ∫_{A_t^{(ϵ)}} ρ({x : |x f(s)| > η})^{2/p_0} m(ds) λ(dt) < ∞.

Indeed,

    lim sup_{T→∞} (1/T) ∫_{[0,T]} ∫_{A_t^{(ϵ)}} ρ({x : |x f(s)| > η})^{2/p_0} m(ds) λ(dt)
      ≤ ∫_E ρ({x : |x| > η |f(s)|^{−1}})^{2/p_0} 1_{{η ≤ |f(s)|}} m(ds).

Because of (5.9),

    ρ({x : |x| > η |f(s)|^{−1}}) 1_{{η ≤ |f(s)|}} ≤ C (η |f(s)|^{−1})^{−p_0}

for some C > 0. Since f ∈ L^2(E, m), (5.12) follows.

In the case α ≥ 2, let ϵ_0 ∈ (0, ξ). From Hölder's inequality,

    I_3 ≤ ( (1/T) ∫_{[0,T]} m(A_t^{(ϵ)}) λ(dt) )^{ϵ_0/(α+ξ)} ( (1/T) ∫_{[0,T]} ∫_{A_t^{(ϵ)}} ρ({x : |x f(s)| > η})^{(α+ξ)/(α+ξ−ϵ_0)} m(ds) λ(dt) )^{1−ϵ_0/(α+ξ)}.

In this case we have, for some C > 0,

    lim sup_{T→∞} (1/T) ∫_{[0,T]} ∫_{A_t^{(ϵ)}} ρ({x : |x f(s)| > η})^{(α+ξ)/(α+ξ−ϵ_0)} m(ds) λ(dt)
      ≤ ∫_E ρ({x : |x| > η |f(s)|^{−1}})^{(α+ξ)/(α+ξ−ϵ_0)} 1_{{η ≤ |f(s)|}} m(ds)
      ≤ C η^{−(α+ξ)} ∫_E |f(s)|^{α+ξ} m(ds) < ∞.

The last inequality follows from y^{α+ξ−ϵ_0} ρ({x : |x| > y}) → 0 as y ↓ 0. Now, in either case, lim sup_{T→∞} I_3 = 0 and, hence, (5.10) has been established. □




References

Aaronson, J. (1997): An Introduction to Infinite Ergodic Theory, volume 50 of Mathematical Surveys and Monographs, Providence: American Mathematical Society.
Adler, R., R. Feldman, and M. Taqqu (1998): A Practical Guide to Heavy Tails: Statistical Techniques and Applications, New York: Springer.
Babillot, M., P. Bougerol, and L. Elie (1997): "The random difference equation Xn = An Xn−1 + Bn in the critical case," The Annals of Probability, 25, 478–493.
Basrak, B., R. Davis, and T. Mikosch (2002a): "A characterization of multivariate regular variation," The Annals of Applied Probability, 12, 908–920.
Basrak, B., R. Davis, and T. Mikosch (2002b): "Regular variation of GARCH processes," Stochastic Processes and their Applications, 99, 95–115.
Basrak, B. and J. Segers (2009): "Regularly varying multivariate time series," Stochastic Processes and their Applications, 119, 1055–1080.
Beirlant, J., Y. Goegebeur, J. Segers, and J. Teugels (2004): Statistics of Extremes: Theory and Applications, Chichester: Wiley.
Daley, D. J. and D. Vere-Jones (2003): An Introduction to the Theory of Point Processes, New York: Springer.
Davis, R. and T. Mikosch (2009): "The extremogram: A correlogram for extreme events," Bernoulli, 15, 977–1009.
de Haan, L. and A. Ferreira (2006): Extreme Value Theory: An Introduction, New York: Springer.
Embrechts, P., C. Klüppelberg, and T. Mikosch (1997): Modelling Extremal Events: for Insurance and Finance, New York: Springer.
Fasen, V. (2010): "High-level dependence in time series models," Extremes, 13, 1–33.
Hult, H. and F. Lindskog (2005): "Extremal behavior for regularly varying stochastic processes," Stochastic Processes and their Applications, 115, 249–274.
Hult, H. and F. Lindskog (2006): "On regular variation for infinitely divisible random vectors and additive processes," Advances in Applied Probability, 38, 134–148.
Krengel, U. (1985): Ergodic Theorems, Berlin: de Gruyter.
Ledford, A. W. and J. A. Tawn (2003): "Diagnostics for dependence within time series extremes," Journal of the Royal Statistical Society Series B, 65, 521–543.
Maruyama, G. (1970): "Infinitely divisible processes," Theory of Probability and its Applications, 15, 1–22.
McNeil, A. J., R. Frey, and P. Embrechts (2005): Quantitative Risk Management: Concepts, Techniques, and Tools, New Jersey: Princeton University Press.
Mikosch, T. and C. Stărică (2000): "Limit theory for the sample autocorrelations and extremes of a GARCH(1,1) process," The Annals of Statistics, 28, 1427–1451.
Pipiras, V. and M. Taqqu (2004): "Stable stationary processes related to cyclic flows," The Annals of Probability, 32, 2222–2260.
Rajput, B. and J. Rosiński (1989): "Spectral representations of infinitely divisible processes," Probability Theory and Related Fields, 82, 451–488.
Resnick, S. (1987): Extreme Values, Regular Variation and Point Processes, New York: Springer-Verlag.
Resnick, S. (2004): "The extremal dependence measure and asymptotic independence," Stochastic Models, 20, 205–227.
Resnick, S. (2007): Heavy-Tail Phenomena: Probabilistic and Statistical Modeling, New York: Springer.
Rosiński, J. and G. Samorodnitsky (1993): "Distributions of subadditive functionals of sample paths of infinitely divisible processes," The Annals of Probability, 21, 996–1014.
Rosiński, J. and T. Żak (1997): "The equivalence of ergodicity and weak mixing for infinitely divisible processes," Journal of Theoretical Probability, 10, 73–86.
Takahashi, W. (1971): "Invariant functions for amenable semigroups of positive contractions on L1," Kodai Mathematical Journal, 23, 131–143.
Wang, Y., P. Roy, and S. Stoev (2011): "Ergodic properties of sum- and max-stable stationary random fields via null and positive group actions," forthcoming in The Annals of Probability.

School of Operations Research and Information Engineering, and Department of Statistical Science, Cornell University, Ithaca, NY 14853
E-mail address: [email protected]

School of Operations Research and Information Engineering, Cornell University, Ithaca, NY 14853
E-mail address: [email protected]

hinges on the fact that one can regard T as an application of the type T: S2 ..... analysis of isotropic random fields is much more recent see, for instance, Ref. ...... 47 Yadrenko, M. I˘., Spectral Theory of Random Fields Optimization Software, In