Gaussian limits for vector-valued multiple stochastic integrals

Giovanni PECCATI
Laboratoire de Statistique Théorique et Appliquée, Université de Paris VI, 175, rue du Chevaleret, 75013 Paris, France. email: [email protected]

Ciprian A. TUDOR
Laboratoire de Probabilités et Modèles Aléatoires, Universités de Paris VI & VII, 175, rue du Chevaleret, 75013 Paris, France. email: [email protected]

May 21, 2004

Abstract – We establish necessary and sufficient conditions for a sequence of $d$-dimensional vectors of multiple stochastic integrals $F_k^d = (F_1^k, \ldots, F_d^k)$, $k \geq 1$, to converge in distribution to a $d$-dimensional Gaussian vector $N_d = (N_1, \ldots, N_d)$. In particular, we show that if the covariance structure of $F_k^d$ converges to that of $N_d$, then componentwise convergence implies joint convergence. These results extend to the multidimensional case the main theorem of [10].

Keywords – Multiple stochastic integrals; Limit theorems; Weak convergence; Brownian motion.
AMS Subject classification – 60F05; 60H05
1  Introduction
For $d \geq 2$, fix $d$ natural numbers $1 \leq n_1 \leq \ldots \leq n_d$ and, for every $k \geq 1$, let $F_k^d = (F_1^k, \ldots, F_d^k)$ be a vector of $d$ random variables such that, for each $j = 1, \ldots, d$, $F_j^k$ belongs to the $n_j$th Wiener chaos associated to a real-valued Gaussian process. The aim of this paper is to establish necessary and sufficient conditions for the sequence $F_k^d$ to converge in distribution to a given $d$-dimensional Gaussian vector, as $k$ tends to infinity. In particular, our main result states that, if for every $1 \leq i, j \leq d$
\[ \lim_{k \to +\infty} E\big[ F_i^k F_j^k \big] = \delta_{ij}, \]
where $\delta_{ij}$ is the Kronecker symbol, then the following two conditions are equivalent: (i) $F_k^d$ converges in distribution to a standard centered Gaussian vector $N_d(0, I_d)$ ($I_d$ is the $d \times d$ identity matrix); (ii) for every $j = 1, \ldots, d$, $F_j^k$ converges in distribution to a standard Gaussian random variable. Now suppose that, for every $k \geq 1$ and every $j = 1, \ldots, d$, the random variable $F_j^k$ is the multiple Wiener-Itô stochastic integral of a square integrable kernel $f_j^{(k)}$, for instance on $[0,1]^{n_j}$. We recall that, according to the main result of [10], condition (ii) above is equivalent to either one of the following: (iii) $\lim_{k \to +\infty} E[(F_j^k)^4] = 3$ for every $j$; (iv) for every $j$ and every $p = 1, \ldots, n_j - 1$, the contraction $f_j^{(k)} \otimes_p f_j^{(k)}$ converges to zero in $L^2([0,1]^{2(n_j - p)})$. Some other necessary and sufficient conditions for (ii) to hold are stated in the subsequent sections, and an extension is provided to deal with the case of a Gaussian vector $N_d$ with a more general covariance structure.
Besides [10], our results should be compared with other central limit theorems (CLTs) for nonlinear functionals of Gaussian processes. The reader is referred to [2], [6], [7], [8], [15] and the references therein for several results in this direction. As in [10], the main tool in the proof of our results is a well-known time-change formula for continuous local martingales, due to Dambis, Dubins and Schwarz (see e.g. [13, Chapter V]). In particular, this technique enables us to obtain our CLTs by estimating and controlling expressions that are related uniquely to the fourth moments of the components of each vector $F_k^d$. The paper is organized as follows. In Section 2 we introduce some notation and discuss preliminary results; in Section 3 our main theorem is stated and proved; finally, in Section 4 we present some applications, to the weak convergence of chaotic martingales (that is, martingales admitting a multiple Wiener integral representation), and to the convergence in law of random variables with a finite chaotic decomposition.
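For a concrete feel of conditions (iii)-(iv), consider the classical quadratic-variation example (our illustration; it is not discussed at this point of the paper): for $H = L^2([0,1])$ and $n = 2$, the kernels $f_k = \sqrt{k/2} \sum_{i \leq k} \mathbf{1}_{A_i \times A_i}$, $A_i = ((i-1)/k, i/k]$, give $I_2(f_k) = \sqrt{k/2} \sum_i ((W_{i/k} - W_{(i-1)/k})^2 - 1/k)$. The sketch below represents these step kernels as matrices and checks that the variance stays constant while the first contraction vanishes:

```python
import numpy as np

# Quadratic-variation kernels on H = L^2([0,1]) (a classical example, not
# taken from this paper): f_k = sqrt(k/2) * sum_i 1_{A_i x A_i}.
# A step kernel is stored as a k x k matrix of cell values; each cell of
# the grid has area 1/k^2.

def qv_kernel(k):
    return np.sqrt(k / 2.0) * np.eye(k)

def norm_sq(M, k):
    # squared L^2([0,1]^2) norm of a step kernel
    return np.sum(M ** 2) / k ** 2

def contract_1(M, k):
    # (f x_1 f)(s, t) = int_0^1 f(s, u) f(u, t) du, again a step kernel
    return (M @ M) / k

for k in (10, 100, 1000):
    M = qv_kernel(k)
    # variance 2! ||f_k||^2 stays equal to 1, while the contraction norm
    # ||f_k x_1 f_k|| = 1/(2 sqrt(k)) tends to zero
    print(k, 2 * norm_sq(M, k), np.sqrt(norm_sq(contract_1(M, k), k)))
```

Since $\|f_k \otimes_1 f_k\| = 1/(2\sqrt{k}) \to 0$, condition (iv) holds, and $I_2(f_k)$ is indeed asymptotically standard normal.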
2  Notation and preliminary results
Let $H$ be a separable Hilbert space. For every $n \geq 1$, we define $H^{\otimes n}$ to be the $n$th tensor product of $H$, and write $H^{\odot n}$ for the $n$th symmetric tensor product of $H$, endowed with the modified norm $\sqrt{n!} \, \|\cdot\|_{H^{\otimes n}}$. We denote by $X = \{X(h) : h \in H\}$ an isonormal process on $H$; that is, $X$ is a centered $H$-indexed Gaussian family, defined on some probability space $(\Omega, \mathcal{F}, P)$ and such that
\[ E[X(h) X(k)] = \langle h, k \rangle_H \]
for every $h, k \in H$.

For $n \geq 1$, let $\mathcal{H}_n$ be the $n$th Wiener chaos associated to $X$ (see for instance [9, Chapter 1]): we denote by $I_n^X$ the isometry between $\mathcal{H}_n$ and $H^{\odot n}$. For simplicity, in this paper we consider uniquely spaces of the form $H = L^2(T, \mathcal{A}, \mu)$, where $(T, \mathcal{A})$ is a measurable space and $\mu$ is a $\sigma$-finite and atomless measure. In this case, $I_n^X$ can be identified with the multiple Wiener-Itô integral with respect to the process $X$, as defined e.g. in [9, Chapter 1]. We also note that, by a standard Hilbert space argument, our results can be immediately extended to a general $H$. The reader is referred to [10, Section 3.3] for a discussion of this fact.

Let $H = L^2(T, \mathcal{A}, \mu)$; for any $n, m \geq 1$, every $f \in H^{\odot n}$, $g \in H^{\odot m}$, and $p = 1, \ldots, n \wedge m$, the $p$th contraction between $f$ and $g$, noted $f \otimes_p g$, is defined to be the element of $H^{\otimes (m+n-2p)}$ given by
\[ f \otimes_p g \, (t_1, \ldots, t_{n+m-2p}) = \int_{T^p} f(t_1, \ldots, t_{n-p}, s_1, \ldots, s_p) \, g(t_{n-p+1}, \ldots, t_{m+n-2p}, s_1, \ldots, s_p) \, d\mu(s_1) \cdots d\mu(s_p); \]
by convention, $f \otimes_0 g = f \otimes g$ denotes the tensor product of $f$ and $g$. Given $\phi \in H^{\otimes n}$, we write $(\phi)_s$ for its canonical symmetrization. In the special case $T = [0,1]$, $\mathcal{A} = \mathcal{B}([0,1])$ and $\mu = \lambda$, where $\lambda$ is Lebesgue measure, some specific notation is needed. For any $0 < t \leq 1$, $\Delta_t^n$ stands for the simplex contained in $[0,t]^n$, i.e. $\Delta_t^n := \{(t_1, \ldots, t_n) : 0 < t_n < \ldots < t_1 < t\}$. Given a function $f$ on $[0,1]^n$ and $t \in [0,1]$, $f_t$ denotes the application on $[0,1]^{n-1}$ given by
\[ (s_1, \ldots, s_{n-1}) \mapsto f(t, s_1, \ldots, s_{n-1}). \]
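On a discretized version of $H = L^2(T, \mathcal{A}, \mu)$, the contraction $f \otimes_p g$ can be computed with `np.tensordot`; the grid size, measure weights and random kernels below are illustrative assumptions, not objects from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6                        # grid points discretizing T
mu = np.full(N, 1.0 / N)     # weights of the discretized measure

n, m, p = 3, 2, 1
f = rng.standard_normal((N,) * n)    # kernel values on the grid, f in H^{(x)n}
g = rng.standard_normal((N,) * m)    # g in H^{(x)m}

def contract(f, g, p):
    # f x_p g: integrate the last p arguments of f against the last p
    # arguments of g with respect to mu^{(x)p}
    fw = f
    for ax in range(f.ndim - p, f.ndim):
        shape = [1] * f.ndim
        shape[ax] = len(mu)
        fw = fw * mu.reshape(shape)
    return np.tensordot(fw, g, axes=(list(range(f.ndim - p, f.ndim)),
                                     list(range(g.ndim - p, g.ndim))))

h = contract(f, g, p)        # an element of H^{(x)(n+m-2p)}
print(h.shape)               # -> (6, 6, 6)
```

For symmetric kernels (the case used throughout the paper) the choice of which $p$ axes are contracted is immaterial, which is why contracting the trailing axes suffices here.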
For any $n, m \geq 1$, for any pair of functions $f \in L^2([0,1]^n, \mathcal{B}([0,1]^n), d\lambda^{\otimes n}) := L^2([0,1]^n)$ and $g \in L^2([0,1]^m)$, and for every $0 < t \leq 1$ and $p = 1, \ldots, n \wedge m$, we write $f \otimes_p^t g$ for the $p$th contraction of $f$ and $g$ on $[0,t]$, defined as
\[ f \otimes_p^t g \, (t_1, \ldots, t_{n+m-2p}) = \int_{[0,t]^p} f(t_1, \ldots, t_{n-p}, s_1, \ldots, s_p) \, g(t_{n-p+1}, \ldots, t_{m+n-2p}, s_1, \ldots, s_p) \, d\lambda(s_1) \cdots d\lambda(s_p); \]
as before, $f \otimes_0^t g = f \otimes g$. Finally, we recall that if $H = L^2([0,1], \mathcal{B}([0,1]), d\lambda)$, then $X$ coincides with the Gaussian space generated by the standard Brownian motion
\[ t \mapsto W_t := X\big( \mathbf{1}_{[0,t]} \big), \quad t \in [0,1], \]
and this implies in particular that, for every $n \geq 2$, the multiple Wiener-Itô integral $I_n^X(f)$, $f \in L^2([0,1]^n)$, can be rewritten in terms of an iterated stochastic integral with respect to $W$, that is: $I_n^X(f) = I_n^1((f)_s) = n! J_n^1((f)_s)$, where
\[ J_n^t((f)_s) = \int_0^t \int_0^{u_1} \cdots \int_0^{u_{n-1}} (f(u_1, \ldots, u_n))_s \, dW_{u_n} \ldots dW_{u_1}, \]
\[ I_n^t((f)_s) = n! \, J_n^t((f)_s), \quad t \in [0,1]. \]
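The simplest instance of this representation is $\int_0^1 \int_0^{u_1} dW_{u_2} dW_{u_1} = \int_0^1 W_u \, dW_u = (W_1^2 - 1)/2$. Its Riemann-sum analogue is an exact telescoping identity, which the following sketch (an illustration of ours, not part of the paper) verifies on a simulated path:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
dW = rng.standard_normal(n) / np.sqrt(n)   # Brownian increments on [0, 1]
W = np.concatenate([[0.0], np.cumsum(dW)])

# discrete iterated integral: sum_i W_{t_i} (W_{t_{i+1}} - W_{t_i})
ito_sum = np.sum(W[:-1] * dW)

# telescoping gives, exactly and pathwise:
#   sum_i W_{t_i} dW_i = (W_1^2 - sum_i dW_i^2) / 2
assert np.isclose(ito_sum, (W[-1] ** 2 - np.sum(dW ** 2)) / 2)
print(ito_sum)
```

As the mesh shrinks, $\sum_i (\Delta W_i)^2 \to 1$, recovering the continuous-time identity above.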
3  d-dimensional CLT
The following facts will be used to prove our main results. Let $H = L^2(T, \mathcal{A}, \mu)$, $f \in H^{\odot n}$ and $g \in H^{\odot m}$. Then:

F1: (see [1, p. 211] or [9, Proposition 1.1.3])
\[ I_n^X(f) \, I_m^X(g) = \sum_{p=0}^{n \wedge m} p! \binom{n}{p} \binom{m}{p} I_{n+m-2p}^X(f \otimes_p g); \quad (1) \]

F2: (see [16, Proposition 1])
\[ (n+m)! \, \|(f \otimes_0 g)_s\|_{H^{\otimes (n+m)}}^2 = n! \, m! \, \|f\|_{H^{\otimes n}}^2 \|g\|_{H^{\otimes m}}^2 + \sum_{q=1}^{n \wedge m} \binom{n}{q} \binom{m}{q} n! \, m! \, \|f \otimes_q g\|_{H^{\otimes (n+m-2q)}}^2; \quad (2) \]

F3: (see [10])
\[ E\big[ I_n^X(f)^4 \big] = 3 (n!)^2 \|f\|_{H^{\otimes n}}^4 + \sum_{p=1}^{n-1} \frac{(n!)^4}{(p! (n-p)!)^2} \left[ \|f \otimes_p f\|_{H^{\otimes 2(n-p)}}^2 + \binom{2n-2p}{n-p} \|(f \otimes_p f)_s\|_{H^{\otimes 2(n-p)}}^2 \right]. \quad (3) \]
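Formula (2) is a purely Hilbert-space identity, so it can be checked numerically on a finite-dimensional $H$ (here $\mathbb{R}^N$ with the counting measure; the random symmetric kernels are illustrative assumptions of this sketch):

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)
N, n, m = 4, 2, 2            # discrete Hilbert space H = R^N

def symmetrize(T):
    # canonical symmetrization (.)_s of a tensor
    k = T.ndim
    return sum(np.transpose(T, perm)
               for perm in itertools.permutations(range(k))) / math.factorial(k)

f = symmetrize(rng.standard_normal((N,) * n))    # f in the symmetric part of H^{(x)n}
g = symmetrize(rng.standard_normal((N,) * m))

# left side of (2)
lhs = math.factorial(n + m) * np.sum(symmetrize(np.tensordot(f, g, axes=0)) ** 2)

# right side of (2); for symmetric kernels, tensordot(f, g, axes=q)
# realizes the contraction f x_q g
rhs = math.factorial(n) * math.factorial(m) * (
    np.sum(f ** 2) * np.sum(g ** 2)
    + sum(math.comb(n, q) * math.comb(m, q)
          * np.sum(np.tensordot(f, g, axes=q) ** 2)
          for q in range(1, min(n, m) + 1)))

assert np.isclose(lhs, rhs)
print(lhs, rhs)
```

The same discrete setup can be reused to check (1) and (3) once multiple integrals are represented through Hermite polynomials, but (2) is the only one of the three that is a statement about kernels alone.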
Let $V_d$ be the set of all quadruples $(i_1, i_2, i_3, i_4) \in \{1, \ldots, d\}^4$ such that one of the following conditions is satisfied: (a) $i_1 \neq i_2 = i_3 = i_4$; (b) $i_1 \neq i_2 = i_3 \neq i_4$ and $i_4 \neq i_1$; (c) the elements of $(i_1, \ldots, i_4)$ are all distinct. Our main result is the following.

Theorem 1  Let $d \geq 2$, and consider a collection $1 \leq n_1 \leq \ldots \leq n_d < +\infty$ of natural numbers, as well as a collection of kernels
\[ \big\{ f_1^{(k)}, \ldots, f_d^{(k)} : k \geq 1 \big\} \]
such that $f_j^{(k)} \in H^{\odot n_j}$ for every $k \geq 1$ and every $j = 1, \ldots, d$, and
\[ \lim_{k \to \infty} n_j! \, \big\| f_j^{(k)} \big\|_{H^{\otimes n_j}}^2 = 1, \quad \forall j = 1, \ldots, d, \]
\[ \lim_{k \to \infty} E\big[ I_{n_i}^X\big( f_i^{(k)} \big) \, I_{n_l}^X\big( f_l^{(k)} \big) \big] = 0, \quad \forall 1 \leq i < l \leq d. \quad (4) \]

Then, the following conditions are equivalent:

(i) for every $j = 1, \ldots, d$,
\[ \lim_{k \to \infty} \big\| f_j^{(k)} \otimes_p f_j^{(k)} \big\|_{H^{\otimes 2(n_j - p)}} = 0 \quad \text{for every } p = 1, \ldots, n_j - 1; \]

(ii) $\lim_{k \to \infty} E\big[ \big( \sum_{i=1,\ldots,d} I_{n_i}^X( f_i^{(k)} ) \big)^4 \big] = 3 d^2$, and
\[ \lim_{k \to \infty} E\Big[ \prod_{l=1}^4 I_{n_{i_l}}^X\big( f_{i_l}^{(k)} \big) \Big] = 0 \]
for every $(i_1, i_2, i_3, i_4) \in V_d$;

(iii) as $k$ goes to infinity, the vector $\big( I_{n_1}^X( f_1^{(k)} ), \ldots, I_{n_d}^X( f_d^{(k)} ) \big)$ converges in distribution to a $d$-dimensional standard Gaussian vector $N_d(0, I_d)$;

(iv) for every $j = 1, \ldots, d$, $I_{n_j}^X( f_j^{(k)} )$ converges in distribution to a standard Gaussian random variable;

(v) for every $j = 1, \ldots, d$,
\[ \lim_{k \to \infty} E\big[ I_{n_j}^X\big( f_j^{(k)} \big)^4 \big] = 3. \]
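In the quadratic-variation example $f_k = \sqrt{k/2} \sum_{i \leq k} \mathbf{1}_{A_i \times A_i}$ on $H = L^2([0,1])$ (an illustration of ours, not taken from the paper), formula (3) with $n = 2$ gives $E[I_2^X(f_k)^4] = 3 + 12/k$, so the fourth-moment condition (v) and the contraction condition (i) hold together, and at the same rate. A numerical check with step kernels:

```python
import numpy as np

def fourth_moment(k):
    # E[I_2(f_k)^4] via formula (3) with n = 2:
    #   3 (2!)^2 ||f||^4 + (2!)^4 [ ||f x_1 f||^2 + C(2,1) ||(f x_1 f)_s||^2 ]
    M = np.sqrt(k / 2.0) * np.eye(k)          # f_k as a k x k step kernel
    norm_sq = np.sum(M ** 2) / k ** 2         # ||f_k||^2 = 1/2
    C = (M @ M) / k                           # f_k x_1 f_k (already symmetric)
    c_sq = np.sum(C ** 2) / k ** 2            # ||f_k x_1 f_k||^2 = 1/(4k)
    return 3 * 4 * norm_sq ** 2 + 16 * (c_sq + 2 * c_sq)

for k in (10, 100, 1000):
    print(k, fourth_moment(k))                # -> 3 + 12/k
```

Both conditions fail for, say, $f_k$ kept fixed in $k$, which is consistent with the equivalences of Theorem 1.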
Proof. We show the implications (iii) ⇒ (ii) ⇒ (i) ⇒ (iii) and (iv) ⇔ (v) ⇔ (i).

[(iii) ⇒ (ii)] First notice that, for every $k \geq 1$, the multiple integrals $I_{n_1}^X( f_1^{(k)} ), \ldots, I_{n_d}^X( f_d^{(k)} )$ are contained in the sum of the first $n_d$ chaoses associated to the Gaussian measure $X$. As a consequence, condition (4) implies (see e.g. [3, Chapter V]) that for every $M \geq 2$ and for every $j = 1, \ldots, d$
\[ \sup_{k \geq 1} E\big[ \big| I_{n_j}^X\big( f_j^{(k)} \big) \big|^M \big] < +\infty, \]
and the conclusion is obtained by standard arguments.

[(ii) ⇒ (i)] The key of the proof is the following simple equality:
\[ E\Big[ \Big( \sum_{i=1}^d I_{n_i}^X( f_i^{(k)} ) \Big)^4 \Big] = \sum_{i=1}^d E\big[ I_{n_i}^X( f_i^{(k)} )^4 \big] + 6 \sum_{1 \leq i < j \leq d} E\big[ I_{n_i}^X( f_i^{(k)} )^2 \, I_{n_j}^X( f_j^{(k)} )^2 \big] + \sum_{(i_1, \ldots, i_4) \in V_d} E\Big[ \prod_{l=1}^4 I_{n_{i_l}}^X( f_{i_l}^{(k)} ) \Big]. \]
By the multiplication formula (1), for every $1 \leq i < j \leq d$
\[ I_{n_i}^X( f_i^{(k)} ) \, I_{n_j}^X( f_j^{(k)} ) = \sum_{q=0}^{n_i} q! \binom{n_i}{q} \binom{n_j}{q} I_{n_i + n_j - 2q}\big( f_i^{(k)} \otimes_q f_j^{(k)} \big) \]
and therefore
\[ E\big[ I_{n_i}^X( f_i^{(k)} )^2 \, I_{n_j}^X( f_j^{(k)} )^2 \big] = \sum_{q=0}^{n_i} \Big( q! \binom{n_i}{q} \binom{n_j}{q} \Big)^2 (n_i + n_j - 2q)! \, \big\| \big( f_i^{(k)} \otimes_q f_j^{(k)} \big)_s \big\|_{H^{\otimes n_i + n_j - 2q}}^2. \]
Now, relations (2) and (3) imply that
\[ E\Big[ \Big( \sum_{i=1}^d I_{n_i}^X( f_i^{(k)} ) \Big)^4 \Big] = T_1(k) + T_2(k) + T_3(k), \]
where
\[ T_1(k) = \sum_{i=1}^d \Big\{ 3 (n_i!)^2 \big\| f_i^{(k)} \big\|_{H^{\otimes n_i}}^4 + \sum_{p=1}^{n_i - 1} \frac{(n_i!)^4}{(p! (n_i - p)!)^2} \Big[ \big\| f_i^{(k)} \otimes_p f_i^{(k)} \big\|_{H^{\otimes 2(n_i - p)}}^2 + \binom{2n_i - 2p}{n_i - p} \big\| \big( f_i^{(k)} \otimes_p f_i^{(k)} \big)_s \big\|_{H^{\otimes 2(n_i - p)}}^2 \Big] \Big\}, \]
\[ T_2(k) = 6 \sum_{1 \leq i < j \leq d} \Big\{ n_i! \, n_j! \, \big\| f_i^{(k)} \big\|_{H^{\otimes n_i}}^2 \big\| f_j^{(k)} \big\|_{H^{\otimes n_j}}^2 + \sum_{q=1}^{n_i} \Big[ \Big( q! \binom{n_i}{q} \binom{n_j}{q} \Big)^2 (n_i + n_j - 2q)! \, \big\| \big( f_i^{(k)} \otimes_q f_j^{(k)} \big)_s \big\|_{H^{\otimes n_i + n_j - 2q}}^2 + \binom{n_i}{q} \binom{n_j}{q} n_i! \, n_j! \, \big\| f_i^{(k)} \otimes_q f_j^{(k)} \big\|_{H^{\otimes n_i + n_j - 2q}}^2 \Big] \Big\}, \]
and
\[ T_3(k) = \sum_{(i_1, \ldots, i_4) \in V_d} E\Big[ \prod_{l=1}^4 I_{n_{i_l}}^X( f_{i_l}^{(k)} ) \Big]. \]
But
\[ 3 \sum_{i=1}^d (n_i!)^2 \big\| f_i^{(k)} \big\|_{H^{\otimes n_i}}^4 + 6 \sum_{1 \leq i < j \leq d} n_i! \, n_j! \, \big\| f_i^{(k)} \big\|_{H^{\otimes n_i}}^2 \big\| f_j^{(k)} \big\|_{H^{\otimes n_j}}^2 = 3 \Big[ \sum_{i=1}^d n_i! \, \big\| f_i^{(k)} \big\|_{H^{\otimes n_i}}^2 \Big]^2, \]
and the desired conclusion is immediately obtained, since condition (4) ensures that the right side of the above expression converges to $3d^2$ when $k$ goes to infinity.

[(i) ⇒ (iii)] We will consider the case
\[ H = L^2([0,1], \mathcal{B}([0,1]), dx), \quad (5) \]
where $dx$ stands for Lebesgue measure, and use the notation introduced at the end of Section 2. We stress again that the extension to a general, separable Hilbert space $H$ can be done by following the line of reasoning presented in [10, Section 3.3], and it is not detailed here. Now suppose (i) and (5) hold. The result is completely proved, once the asymptotic relation
\[ \sum_{i=1}^d \lambda_i I_{n_i}^X( f_i^{(k)} ) = \sum_{i=1}^d \lambda_i n_i! \, J_{n_i}^1( f_i^{(k)} ) \overset{\text{Law}}{\underset{k \uparrow +\infty}{\Longrightarrow}} \| \lambda_d \|_{\mathbb{R}^d} \times N(0,1) \]
is verified for every vector $\lambda_d = (\lambda_1, \ldots, \lambda_d) \in \mathbb{R}^d$. Thanks to the Dambis-Dubins-Schwarz Theorem (see [13, Chapter V]), we know that for every $k$ there exists a standard Brownian motion $W^{(k)}$ (which depends also on $\lambda_d$) such that
\[ \sum_{i=1}^d \lambda_i n_i! \, J_{n_i}^1( f_i^{(k)} ) = W^{(k)} \Big( \int_0^1 \Big( \sum_{i=1}^d \lambda_i n_i! \, J_{n_i - 1}^t( f_{i,t}^{(k)} ) \Big)^2 dt \Big) \]
\[ = W^{(k)} \Big[ \sum_{i=1}^d \lambda_i^2 \int_0^1 \big( n_i! \, J_{n_i - 1}^t( f_{i,t}^{(k)} ) \big)^2 dt + 2 \sum_{1 \leq i < j \leq d} \lambda_i \lambda_j n_i! \, n_j! \int_0^1 J_{n_i - 1}^t( f_{i,t}^{(k)} ) \, J_{n_j - 1}^t( f_{j,t}^{(k)} ) \, dt \Big]. \]
Now, since (4) implies
\[ E\big[ \big( n_i! \, J_{n_i}^1( f_i^{(k)} ) \big)^2 \big] \underset{k \uparrow +\infty}{\to} 1 \]
for every $i$, condition (i) yields – thanks to Proposition 3 in [10] – that
\[ \sum_{i=1}^d \lambda_i^2 \int_0^1 \big( n_i! \, J_{n_i - 1}^t( f_{i,t}^{(k)} ) \big)^2 dt \overset{L^2}{\to} \| \lambda_d \|^2. \]
To conclude, we shall verify that (i) implies also that, for every $i < j$,
\[ \int_0^1 J_{n_i - 1}^t( f_{i,t}^{(k)} ) \, J_{n_j - 1}^t( f_{j,t}^{(k)} ) \, dt = \int_0^1 \frac{ I_{n_i - 1}^t( f_{i,t}^{(k)} ) \, I_{n_j - 1}^t( f_{j,t}^{(k)} ) }{ (n_i - 1)! \, (n_j - 1)! } \, dt \overset{L^2}{\underset{k \uparrow +\infty}{\to}} 0. \]
To see this, use once again the multiplication formula (1) to write, when $n_i < n_j$,
\[ \int_0^1 dt \, I_{n_i - 1}^t( f_{i,t}^{(k)} ) \, I_{n_j - 1}^t( f_{j,t}^{(k)} ) = \sum_{q=0}^{n_i - 1} \binom{n_i - 1}{q} \binom{n_j - 1}{q} (n_i + n_j - 2(q+1))! \, q! \, \times \]
\[ \times \int_{\Delta_1^{n_i + n_j - 2(q+1)}} \Big[ \int_{s_1}^1 dt \, \big( f_{i,t}^{(k)} \otimes_q^t f_{j,t}^{(k)} \big)_s \big( s_1, \ldots, s_{n_i + n_j - 2(q+1)} \big) \Big] \, dW_{s_1} \ldots dW_{s_{n_i + n_j - 2(q+1)}}, \]
or, when $n_i = n_j$,
\[ \int_0^1 dt \, I_{n_i - 1}^t( f_{i,t}^{(k)} ) \, I_{n_j - 1}^t( f_{j,t}^{(k)} ) = \int_0^1 dt \, E\big[ I_{n_i - 1}^t( f_{i,t}^{(k)} ) \, I_{n_i - 1}^t( f_{j,t}^{(k)} ) \big] + \sum_{q=0}^{n_i - 2} \binom{n_i - 1}{q}^2 (2n_i - 2(q+1))! \, q! \, \times \]
\[ \times \int_{\Delta_1^{n_i + n_j - 2(q+1)}} \Big[ \int_{s_1}^1 dt \, \big( f_{i,t}^{(k)} \otimes_q^t f_{j,t}^{(k)} \big)_s \big( s_1, \ldots, s_{n_i + n_j - 2(q+1)} \big) \Big] \, dW_{s_1} \ldots dW_{s_{n_i + n_j - 2(q+1)}}. \]
In what follows, for every $m \geq 2$, we write $t_m$ to indicate a vector $(t_1, \ldots, t_m) \in \mathbb{R}^m$, whereas $dt_m$ stands for Lebesgue measure on $\mathbb{R}^m$; we shall also use the symbol $\hat{t}_m = \max_i(t_i)$. Now fix $q < n_i - 1 \leq n_j - 1$ and observe that, by writing $p = q + 1$,
\[ \int_{\Delta_1^{n_i + n_j - 2p}} \Big[ \int_{s_1}^1 dt \, \big( f_{i,t}^{(k)} \otimes_q^t f_{j,t}^{(k)} \big)_s \big( s_1, \ldots, s_{n_i + n_j - 2p} \big) \Big]^2 ds_1 \ldots ds_{n_i + n_j - 2p} \]
\[ \leq \int_{[0,1]^{n_i - p}} ds_{n_i - p} \int_{[0,1]^{n_j - p}} d\tau_{n_j - p} \Big[ \int_{\hat{s}_{n_i - p} \vee \hat{\tau}_{n_j - p}}^1 dt \int_{[0,t]^{p-1}} du_{p-1} \, f_j^{(k)}\big( t, \tau_{n_j - p}, u_{p-1} \big) \, f_i^{(k)}\big( t, s_{n_i - p}, u_{p-1} \big) \Big]^2 = C(k), \]
and moreover, expanding the square and integrating first in $s_{n_i - p}$ and $\tau_{n_j - p}$,
\[ C(k)^2 = \bigg\{ \int_0^1 dt \int_0^1 dt' \int_{[0,1]^{p-1}} du_{p-1} \int_{[0,1]^{p-1}} dv_{p-1} \, \mathbf{1}_{( \hat{u}_{p-1} \leq t, \, \hat{v}_{p-1} \leq t' )} \]
\[ \times \Big[ \int_{[0, t \wedge t']^{n_i - p}} ds_{n_i - p} \, f_i^{(k)}\big( t, s_{n_i - p}, u_{p-1} \big) \, f_i^{(k)}\big( t', s_{n_i - p}, v_{p-1} \big) \Big] \]
\[ \times \Big[ \int_{[0, t \wedge t']^{n_j - p}} d\tau_{n_j - p} \, f_j^{(k)}\big( t, \tau_{n_j - p}, u_{p-1} \big) \, f_j^{(k)}\big( t', \tau_{n_j - p}, v_{p-1} \big) \Big] \bigg\}^2 \leq C_i(k) \times C_j(k), \]
where, for $\gamma = i, j$,
\[ C_\gamma(k) = \int_0^1 dt \int_0^1 dt' \int_{[0,1]^{p-1}} du_{p-1} \int_{[0,1]^{p-1}} dv_{p-1} \Big[ \int_{[0, t \wedge t']^{n_\gamma - p}} ds_{n_\gamma - p} \, f_\gamma^{(k)}\big( t, s_{n_\gamma - p}, u_{p-1} \big) \, f_\gamma^{(k)}\big( t', s_{n_\gamma - p}, v_{p-1} \big) \Big]^2, \]
and the calculations contained in [10] imply immediately that both $C_j(k)$ and $C_i(k)$ converge to zero whenever (i) is verified. On the other hand, when $q = n_i - 1 < n_j - 1$,
\[ \int_{\Delta_1^{n_j - n_i}} \Big[ \int_{s_1}^1 dt \, \big( f_{i,t}^{(k)} \otimes_{n_i - 1}^t f_{j,t}^{(k)} \big)_s \big( s_1, \ldots, s_{n_j - n_i} \big) \Big]^2 ds_1 \ldots ds_{n_j - n_i} \]
\[ \leq \int_{[0,1]^{n_j - n_i}} d\tau_{n_j - n_i} \Big[ \int_{\hat{\tau}_{n_j - n_i}}^1 dt \int_{[0,t]^{n_i - 1}} du_{n_i - 1} \, f_j^{(k)}\big( t, \tau_{n_j - n_i}, u_{n_i - 1} \big) \, f_i^{(k)}\big( t, u_{n_i - 1} \big) \Big]^2 = D(k), \]
and also
\[ D(k)^2 \leq D_1(k) \times D_2(k), \]
where
\[ D_1(k) = \int_0^1 dt \int_0^1 dt' \int_{[0,1]^{n_i - 1}} du_{n_i - 1} \int_{[0,1]^{n_i - 1}} dv_{n_i - 1} \Big[ \int_{[0, t \wedge t']^{n_j - n_i}} d\tau_{n_j - n_i} \, f_j^{(k)}\big( t, \tau_{n_j - n_i}, u_{n_i - 1} \big) \, f_j^{(k)}\big( t', \tau_{n_j - n_i}, v_{n_i - 1} \big) \Big]^2 \]
and
\[ D_2(k) = \int_0^1 dt \int_0^1 dt' \int_{[0,1]^{n_i - 1}} du_{n_i - 1} \int_{[0,1]^{n_i - 1}} dv_{n_i - 1} \Big[ f_i^{(k)}\big( t, u_{n_i - 1} \big) \, f_i^{(k)}\big( t', v_{n_i - 1} \big) \Big]^2 = \big\| f_i^{(k)} \big\|_{H^{\otimes n_i}}^4, \]
so that the conclusion is immediately achieved, due to (4). Finally, recall that for $n_i = n_j$
\[ \int_0^1 dt \, E\big[ I_{n_i - 1}^t( f_{i,t}^{(k)} ) \, I_{n_i - 1}^t( f_{j,t}^{(k)} ) \big] = (n_i - 1)! \int_0^1 dt \int_{[0,t]^{n_i - 1}} du_{n_i - 1} \, f_j^{(k)}\big( t, u_{n_i - 1} \big) \, f_i^{(k)}\big( t, u_{n_i - 1} \big) \]
\[ = ((n_i - 1)!)^2 \int_{\Delta_1^{n_i}} dt \, du_{n_i - 1} \, f_j^{(k)}\big( t, u_{n_i - 1} \big) \, f_i^{(k)}\big( t, u_{n_i - 1} \big) = \Big( \frac{(n_i - 1)!}{n_i!} \Big)^2 E\big[ I_{n_i}^X( f_i^{(k)} ) \, I_{n_i}^X( f_j^{(k)} ) \big] \underset{k \uparrow +\infty}{\to} 0, \]
again by assumption (4). The proof of the implication is concluded.

[(iv) ⇔ (v) ⇔ (i)] This is a consequence of Theorem 1 in [10].

In what follows, $C_d = \{ C_{ij} : 1 \leq i, j \leq d \}$ indicates a $d \times d$ positive definite symmetric matrix. In the case of multiple Wiener integrals of the same order, a useful extension of Theorem 1 is the following.

Proposition 2  Let $d \geq 2$, and fix $n \geq 2$ as well as a collection of kernels
\[ \big\{ f_1^{(k)}, \ldots, f_d^{(k)} : k \geq 1 \big\} \]
such that $f_j^{(k)} \in H^{\odot n}$ for every $k \geq 1$ and every $j = 1, \ldots, d$, and
\[ \lim_{k \to \infty} n! \, \big\| f_j^{(k)} \big\|_{H^{\otimes n}}^2 = C_{jj}, \quad \forall j = 1, \ldots, d, \]
\[ \lim_{k \to \infty} E\big[ I_n^X( f_i^{(k)} ) \, I_n^X( f_j^{(k)} ) \big] = C_{ij}, \quad \forall 1 \leq i < j \leq d. \quad (6) \]

Then, the following conditions are equivalent:

(i) as $k$ goes to infinity, the vector $\big( I_n^X( f_1^{(k)} ), \ldots, I_n^X( f_d^{(k)} ) \big)$ converges in distribution to a $d$-dimensional Gaussian vector $N_d(0, C_d) = (N_1, \ldots, N_d)$ with covariance matrix $C_d$;

(ii)
\[ \lim_{k \to \infty} E\Big[ \Big( \sum_{i=1,\ldots,d} I_n^X( f_i^{(k)} ) \Big)^4 \Big] = 3 \Big( \sum_{i=1}^d C_{ii} + 2 \sum_{1 \leq i < j \leq d} C_{ij} \Big)^2 = E\Big[ \Big( \sum_{i=1}^d N_i \Big)^4 \Big] \]
and
\[ \lim_{k \to \infty} E\Big[ \prod_{l=1}^4 I_n^X( f_{i_l}^{(k)} ) \Big] = E\Big[ \prod_{l=1}^4 N_{i_l} \Big] \]
for every $(i_1, i_2, i_3, i_4) \in V_d$;

(iii) for every $j = 1, \ldots, d$, $I_n^X( f_j^{(k)} )$ converges in distribution to $N_j$, that is, to a centered Gaussian random variable with variance $C_{jj}$;

(iv) for every $j = 1, \ldots, d$,
\[ \lim_{k \to \infty} E\big[ I_n^X( f_j^{(k)} )^4 \big] = 3 C_{jj}^2; \]

(v) for every $j = 1, \ldots, d$,
\[ \lim_{k \to \infty} \big\| f_j^{(k)} \otimes_p f_j^{(k)} \big\|_{H^{\otimes 2(n - p)}} = 0 \quad \text{for every } p = 1, \ldots, n - 1. \]

Sketch of the proof – The main idea is contained in the proof of Theorem 1. We shall discuss only the implications (ii) ⇒ (v) and (v) ⇒ (i). In particular, one can show that (ii) implies (v) by adapting the arguments in the proof of Theorem 1 to show that
\[ E\Big[ \Big( \sum_{i=1}^d I_n^X( f_i^{(k)} ) \Big)^4 \Big] = V_1(k) + V_2(k) + V_3(k), \]
where
\[ V_1(k) = \sum_{i=1}^d \Big\{ 3 (n!)^2 \big\| f_i^{(k)} \big\|_{H^{\otimes n}}^4 + \sum_{p=1}^{n-1} \frac{(n!)^4}{(p! (n-p)!)^2} \Big[ \big\| f_i^{(k)} \otimes_p f_i^{(k)} \big\|_{H^{\otimes 2(n-p)}}^2 + \binom{2n-2p}{n-p} \big\| \big( f_i^{(k)} \otimes_p f_i^{(k)} \big)_s \big\|_{H^{\otimes 2(n-p)}}^2 \Big] \Big\}, \]
\[ V_2(k) = 6 \sum_{1 \leq i < j \leq d} \Big\{ (n!)^2 \big\| f_i^{(k)} \big\|_{H^{\otimes n}}^2 \big\| f_j^{(k)} \big\|_{H^{\otimes n}}^2 + \sum_{q=1}^{n-1} \Big[ \Big( q! \binom{n}{q}^2 \Big)^2 (2n - 2q)! \, \big\| \big( f_i^{(k)} \otimes_q f_j^{(k)} \big)_s \big\|_{H^{\otimes 2n - 2q}}^2 + \binom{n}{q}^2 (n!)^2 \big\| f_i^{(k)} \otimes_q f_j^{(k)} \big\|_{H^{\otimes 2n - 2q}}^2 \Big] \Big\} + 12 (n!)^2 \sum_{1 \leq i < j \leq d} \big\langle f_i^{(k)}, f_j^{(k)} \big\rangle_{H^{\otimes n}}^2, \]
and
\[ V_3(k) = \sum_{(i_1, \ldots, i_4) \in V_d} E\Big[ \prod_{l=1}^4 I_n^X( f_{i_l}^{(k)} ) \Big]. \]
But (6) yields
\[ 3 \sum_{i=1}^d (n!)^2 \big\| f_i^{(k)} \big\|_{H^{\otimes n}}^4 + 6 \sum_{1 \leq i < j \leq d} \Big[ (n!)^2 \big\| f_i^{(k)} \big\|_{H^{\otimes n}}^2 \big\| f_j^{(k)} \big\|_{H^{\otimes n}}^2 + 2 (n!)^2 \big\langle f_i^{(k)}, f_j^{(k)} \big\rangle_{H^{\otimes n}}^2 \Big] \underset{k \uparrow +\infty}{\to} 3 \sum_{i=1}^d C_{ii}^2 + 6 \sum_{1 \leq i < j \leq d} \big( C_{ii} C_{jj} + 2 C_{ij}^2 \big), \]
and the conclusion is obtained, since
\[ E\Big[ \Big( \sum_{i=1}^d N_i \Big)^4 \Big] = 3 \sum_{i=1}^d C_{ii}^2 + 6 \sum_{1 \leq i < j \leq d} \big( C_{ii} C_{jj} + 2 C_{ij}^2 \big) + \sum_{(i_1, \ldots, i_4) \in V_d} E\Big[ \prod_{l=1}^4 N_{i_l} \Big]. \]
Now keep the notation of the last part of the proof of Theorem 1. The implication (v) ⇒ (i) follows from the calculations therein contained, implying, thanks to (6), that the quantity
\[ \int_0^1 \Big( \sum_{i=1}^d \lambda_i n! \, J_{n-1}^t( f_{i,t}^{(k)} ) \Big)^2 dt \]
converges in $L^2$ to $\sum_{i=1,\ldots,d} \lambda_i^2 C_{ii} + 2 \sum_{1 \leq i < j \leq d} \lambda_i \lambda_j C_{ij}$.
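The Gaussian fourth moments appearing in condition (ii) of Proposition 2 are governed by Isserlis' (Wick's) formula $E[N_i N_j N_k N_l] = C_{ij} C_{kl} + C_{ik} C_{jl} + C_{il} C_{jk}$; summing over all quadruples gives $E[(\sum_i N_i)^4] = 3 (\sum_{i,j} C_{ij})^2 = 3 (\sum_i C_{ii} + 2 \sum_{i<j} C_{ij})^2$. A quick numerical confirmation (our illustration, with an arbitrary covariance matrix):

```python
import numpy as np

rng = np.random.default_rng(7)
d = 3
A = rng.standard_normal((d, d))
C = A @ A.T                      # a positive definite covariance matrix

# E[(sum_i N_i)^4] assembled from Isserlis' pairings over all quadruples
fourth = sum(C[i, j] * C[k, l] + C[i, k] * C[j, l] + C[i, l] * C[j, k]
             for i in range(d) for j in range(d)
             for k in range(d) for l in range(d))

assert np.isclose(fourth, 3 * C.sum() ** 2)
print(fourth)
```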
4  Applications
In this section we present some consequences of our results. We mention that our list of applications is by no means exhaustive; for instance, the weak convergence results for quadratic functionals of (fractional) Brownian motion given in [10], [11] and [12] can be immediately extended to the multidimensional case. An example is given in the following generalization of the results contained in [12].

Proposition 3  Let $W$ be a standard Brownian motion on $[0,1]$ and, for every $d \geq 2$, define the process
\[ t \mapsto W_t^{\otimes d} := \int_0^t \int_0^{s_1} \cdots \int_0^{s_{d-1}} dW_{s_d} \ldots dW_{s_1}, \quad t \in [0,1]. \]
Then:

(a) for every $d \geq 1$ the vector
\[ \frac{1}{\sqrt{\log \frac{1}{\varepsilon}}} \Big( \int_\varepsilon^1 \frac{da}{a^2} W_a^{\otimes 2}, \, \int_\varepsilon^1 \frac{da}{a^3} W_a^{\otimes 4}, \ldots, \int_\varepsilon^1 \frac{da}{a^{d+1}} W_a^{\otimes 2d} \Big) \]
converges in distribution, as $\varepsilon \to 0$, to
\[ \Big( N_1(0,1), \, 2 \sqrt{3!} \, N_2(0,1), \ldots, d \sqrt{(2d-1)!} \, N_d(0,1) \Big), \]
where the $N_j(0,1)$, $j = 1, \ldots, d$, are standard, independent Gaussian random variables;

(b) by defining, for every $d \geq 1$ and for every $j = 0, \ldots, d$, the positive constant
\[ c(d, j) = \frac{(2d)!}{2^{d-j} (d-j)!}, \]
for every $d \geq 1$ the vector
\[ \frac{1}{\sqrt{\log \frac{1}{\varepsilon}}} \Big( \int_\varepsilon^1 \frac{da}{a^2} W_a^2 - c(1,0) \log \frac{1}{\varepsilon}, \, \int_\varepsilon^1 \frac{da}{a^3} W_a^4 - c(2,0) \log \frac{1}{\varepsilon}, \ldots, \int_\varepsilon^1 \frac{da}{a^{d+1}} W_a^{2d} - c(d,0) \log \frac{1}{\varepsilon} \Big) \]
converges in distribution to a Gaussian vector $(G_1, \ldots, G_d)$ with the following covariance structure:
\[ E[G_{k'} G_k] = \sum_{j=1}^{k'} c(k, j) \, c(k', j) \, j^2 \, (2j-1)! \]
for every $1 \leq k' \leq k \leq d$.
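Behind Stroock's formula as used for point (b) lies the Hermite expansion $x^{2k} = \sum_{j=0}^{k} c(k,j) \, He_{2j}(x)/(2j)!$, with $He_n$ the probabilists' Hermite polynomial (this reduction is our reading of the argument, not spelled out in the paper). The constants $c(d,j)$ can then be checked numerically:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e

def c(k, j):
    # the constants of Proposition 3(b): c(k, j) = (2k)! / (2^{k-j} (k-j)!)
    return math.factorial(2 * k) // (2 ** (k - j) * math.factorial(k - j))

x = np.linspace(-2.0, 2.0, 21)
for k in (1, 2, 3, 4):
    # sum_j c(k, j) He_{2j}(x) / (2j)! should reproduce x^{2k}
    rhs = sum(c(k, j) / math.factorial(2 * j)
              * hermite_e.hermeval(x, [0.0] * (2 * j) + [1.0])
              for j in range(k + 1))
    assert np.allclose(rhs, x ** (2 * k))
print("Hermite expansion of x^{2k} verified for k = 1..4")
```

In particular $c(k,0) = (2k-1)!!$ is the $2k$th moment of a standard Gaussian, which explains the centering terms $c(d,0) \log \frac{1}{\varepsilon}$ in point (b).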
Proof. From Proposition 4.1 in [12] we obtain immediately that, for every $j = 1, \ldots, d$,
\[ \frac{1}{\sqrt{\log \frac{1}{\varepsilon}}} \int_\varepsilon^1 \frac{da}{a^{j+1}} W_a^{\otimes 2j} \overset{(d)}{\to} j \sqrt{(2j-1)!} \, N_j(0,1), \]
and the asymptotic independence follows from Theorem 1, since for every $i \neq j$
\[ E\Big[ \int_\varepsilon^1 \frac{da}{a^{i+1}} W_a^{\otimes 2i} \int_\varepsilon^1 \frac{db}{b^{j+1}} W_b^{\otimes 2j} \Big] = \int_\varepsilon^1 \frac{da}{a^{i+1}} \int_\varepsilon^1 \frac{db}{b^{j+1}} E\big[ W_a^{\otimes 2i} W_b^{\otimes 2j} \big] = 0. \]
To prove point (b), use for instance Stroock's formula (see [14]) to obtain that for every $k = 1, \ldots, d$
\[ \int_\varepsilon^1 \frac{da}{a^{k+1}} W_a^{2k} = \sum_{j=1}^k c(k, j) \int_\varepsilon^1 \frac{da}{a^{j+1}} W_a^{\otimes 2j} + c(k, 0) \log \frac{1}{\varepsilon}, \]
so that the result derives immediately from point (a).

In what follows, we prove a new asymptotic version of Knight's theorem – of the kind discussed e.g. in [13, Chapter XIII] – and a necessary and sufficient condition for a class of random variables living in a finite sum of chaoses – and satisfying some asymptotic property – to have a Gaussian weak limit. Further applications will be explored in a subsequent paper. More specifically, we are interested in an asymptotic Knight's theorem for chaotic martingales, that is, in our terminology, martingales having a multiple Wiener integral representation (we stress that there is no relation with normal martingales enjoying the chaotic representation property, as discussed e.g. in [1, Chapter XXI]). To this end, take $d \geq 2$ integers $1 \leq n_1 \leq n_2 \leq \ldots \leq n_d$ and, for $j = 1, \ldots, d$ and $k \geq 1$, take a class
\[ \big\{ \varphi_{j,k}^t : t \in [0,1] \big\} \]
of elements of $H^{\odot n_j}$, such that there exists a filtration $\{\mathcal{F}_t : t \in [0,1]\}$, satisfying the usual conditions and such that, for every $k$ and every $j$, the process
\[ t \mapsto M_{j,k}(t) = I_{n_j}^X\big( \varphi_{j,k}^t \big), \quad t \in [0,1], \]
is an $\mathcal{F}_t$-continuous martingale on $[0,1]$, vanishing at zero. We denote by $\langle M_{j,k}, M_{j,k} \rangle$ and $\langle M_{j,k}, M_{i,k} \rangle$, $1 \leq i, j \leq d$, the corresponding quadratic variation and covariation processes, whereas $\beta_{j,k}$ is the Dambis-Dubins-Schwarz Brownian motion associated to $M_{j,k}$. Then, we have the following.

Proposition 4 (Asymptotic Knight's theorem for chaotic martingales)  Under the above assumptions and notation, suppose that for every $j = 1, \ldots, d$,
\[ \langle M_{j,k}, M_{j,k} \rangle \overset{(d)}{\underset{k \to +\infty}{\to}} T_j, \quad (7) \]
where $t \mapsto T_j(t)$ is a deterministic, continuous and non-decreasing process. If in addition
\[ \lim_{k \to +\infty} E\big[ \langle M_{i,k}, M_{j,k} \rangle_t \big] = 0 \quad (8) \]
for every $i \neq j$ and for every $t$, then $\{M_{j,k} : 1 \leq j \leq d\}$ converges in distribution to
\[ \{B_j \circ T_j : 1 \leq j \leq d\}, \]
where $\{B_j : 1 \leq j \leq d\}$ is a $d$-dimensional standard Brownian motion.
Proof. Since $M_{j,k}(t) = \beta_{j,k}\big( \langle M_{j,k}, M_{j,k} \rangle_t \big)$, $t \in [0,1]$, and $\langle M_{j,k}, M_{j,k} \rangle$ weakly converges to $T_j$, we immediately obtain that $M_{j,k}$ converges in distribution to the Gaussian process $B_j \circ T_j$. Thanks to Theorem 1, it is now sufficient to prove that, for every $i \neq j$ and for every $s, t \in [0,1]$, the quantity $E[M_{j,k}(s) M_{i,k}(t)]$ converges to zero. But
\[ E[M_{j,k}(s) M_{i,k}(t)] = E\big[ \langle M_{i,k}, M_{j,k} \rangle_{t \wedge s} \big], \]
and assumption (8) yields the result.

Remark – An analogue of Proposition 4 for general martingales verifying (7) can be found in [13, Exercise XIII.1.16], but in this case (8) has to be replaced by
\[ \langle M_{j,k}, M_{i,k} \rangle \overset{(d)}{\underset{k \to +\infty}{\to}} 0 \]
for every $i \neq j$. Since chaotic martingales have a very explicit covariance structure (due to the isometric properties of multiple integrals), condition (8) is usually quite easy to verify. We also recall that – according e.g. to [13, Theorem XIII.2.3] – if condition (7) is dropped, to prove the asymptotic independence of the Brownian motions $\{\beta_{j,k} : 1 \leq j \leq d\}$ one has to check the condition
\[ \lim_{k \to +\infty} \langle M_{i,k}, M_{j,k} \rangle_{\tau_j^k(t)} = \lim_{k \to +\infty} \langle M_{i,k}, M_{j,k} \rangle_{\tau_i^k(t)} = 0 \]
in probability, for every $i \neq j$ and for every $t$, where $\tau_j^k$ and $\tau_i^k$ are the stochastic time-changes associated respectively to $\langle M_{j,k}, M_{j,k} \rangle$ and $\langle M_{i,k}, M_{i,k} \rangle$. We conclude the paper by stating a result on the weak convergence of random variables belonging to a finite sum of Wiener chaoses to a standard normal random variable (the proof is a direct consequence of the arguments contained in the proof of Theorem 1).
Proposition 5  Let $1 \leq n_1 < \ldots < n_d$, $d \geq 2$, and let $f_j^{(k)} \in H^{\odot n_j}$ for every $k \geq 1$ and $1 \leq j \leq d$. Assume that
\[ \lim_{k \uparrow +\infty} n_j! \, \big\| f_j^{(k)} \big\|_{H^{\otimes n_j}}^2 = 1, \quad j = 1, \ldots, d, \quad (9) \]
and
\[ \lim_{k \uparrow +\infty} \sum_{(i_1, \ldots, i_4) \in V_d} E\Big[ \prod_{l=1}^4 I_{n_{i_l}}^X\big( f_{i_l}^{(k)} \big) \Big] \geq 0. \quad (10) \]
Define moreover $S_d^{(k)} = \sum_{j=1,\ldots,d} I_{n_j}^X( f_j^{(k)} )$. Then, the following conditions are equivalent:

(i) the sequence $d^{-1/2} S_d^{(k)}$ converges in distribution to a standard Gaussian random variable, as $k$ tends to infinity;

(ii) for every $j = 1, \ldots, d$,
\[ \lim_{k \uparrow +\infty} \big\| f_j^{(k)} \otimes_p f_j^{(k)} \big\|_{H^{\otimes 2(n_j - p)}} = 0, \quad p = 1, \ldots, n_j - 1; \]

(iii) for every $j = 1, \ldots, d$, $I_{n_j}^X( f_j^{(k)} )$ converges in law to a standard Gaussian random variable, as $k$ goes to infinity.

An interesting consequence of the above result is the following.

Corollary 6  Let $1 \leq n_1 < \ldots < n_d$, $d \geq 2$, and $f_j^{(k)} \in H^{\odot n_j}$, $k \geq 1$ and $1 \leq j \leq d$. Assume moreover that (9) is verified and that, for every $k$, the random variables $I_{n_j}^X( f_j^{(k)} )$, $j = 1, \ldots, d$, are pairwise independent. Then, the sequence $d^{-1/2} S_d^{(k)}$, $k \geq 1$, defined as before, converges in law to a standard Gaussian random variable $N(0,1)$ if, and only if, for every $j$, $I_{n_j}^X( f_j^{(k)} )$ converges in law to $N(0,1)$.

Proof. We know from [16] (see also [4]) that, in the case of multiple stochastic integrals, pairwise independence implies mutual independence, so that condition (10) is clearly verified.

Remarks – (i) If we add the assumption that, for every $j$, the sequence $I_{n_j}^X( f_j^{(k)} )$, $k \geq 1$, admits a weak limit, say $\mu_j$, then the conclusion of Corollary 6 can be directly deduced from [5, p. 248]. As a matter of fact, in that reference the following implication is proved: if the $d$ probability measures $\mu_j$, $j = 1, \ldots, d$, are such that (a) $\int x \, d\mu_j(x) = 0$ for every $j$, and (b) $\mu_1 \star \cdots \star \mu_d$, where $\star$ indicates convolution, is Gaussian, then each $\mu_j$ is necessarily Gaussian.

(ii) Condition (10) is also satisfied when $d = 2$ and $n_1 + n_2$ is odd.
References

[1] Dellacherie, C., Maisonneuve, B. and Meyer, P.A. (1992), Probabilités et Potentiel, Chapitres XVII à XXIV, Hermann, Paris.
[2] Giraitis, L. and Surgailis, D. (1985), "CLT and Other Limit Theorems for Functionals of Gaussian Processes", Z. Wahr. verw. Gebiete 70(2), 191-212.
[3] Janson, S. (1997), Gaussian Hilbert Spaces, Cambridge University Press, Cambridge.
[4] Kallenberg, O. (1991), "On an independence criterion for multiple Wiener integrals", Ann. Probab. 19(2), 483-485.
[5] Lukacs, E. (1983), Developments in Characteristic Functions, MacMillan Co., New York.
[6] Major, P. (1981), Multiple Wiener-Itô Integrals, Lecture Notes in Mathematics 849, Springer-Verlag, New York.
[7] Maruyama, G. (1982), "Applications of the multiplication of the Ito-Wiener expansions to limit theorems", Proc. Japan Acad. 58, 388-390.
[8] Maruyama, G. (1985), "Wiener functionals and probability limit theorems, I: the central limit theorem", Osaka Journal of Mathematics 22, 697-732.
[9] Nualart, D. (1995), The Malliavin Calculus and Related Topics, Springer, Berlin Heidelberg New York.
[10] Nualart, D. and Peccati, G. (2004), "Central limit theorems for sequences of multiple stochastic integrals", to appear in The Annals of Probability.
[11] Peccati, G. and Yor, M. (2004a), "Four limit theorems for quadratic functionals of Brownian motion and Brownian bridge", to appear in the volume: Asymptotic Methods in Stochastics, American Mathematical Society, Communication Series.
[12] Peccati, G. and Yor, M. (2004b), "Hardy's inequality in L²([0,1]) and principal values of Brownian local times", to appear in the volume: Asymptotic Methods in Stochastics, American Mathematical Society, Communication Series.
[13] Revuz, D. and Yor, M. (1999), Continuous Martingales and Brownian Motion, Springer, Berlin Heidelberg New York.
[14] Stroock, D.W. (1987), "Homogeneous chaos revisited", in: Séminaire de Probabilités XXI, Springer, Berlin, LNM 1247, 1-8.
[15] Surgailis, D. (2003), "CLTs for Polynomials of Linear Sequences: Diagram Formula with Illustrations", in: Theory and Applications of Long Range Dependence, Birkhäuser, Boston.
[16] Üstünel, A.S. and Zakai, M. (1989), "Independence and conditioning on Wiener space", The Annals of Probability 17(4), 1441-1453.