Entropy and the fourth moment phenomenon

Ivan Nourdin, Giovanni Peccati and Yvik Swan

April 4, 2013

Abstract

We develop a new method for bounding the relative entropy of a random vector in terms of its Stein factors. Our approach is based on a novel representation for the score function of smoothly perturbed random variables, as well as on de Bruijn's identity of information theory. When applied to sequences of functionals of a general Gaussian field, our results can be combined with the Carbery-Wright inequality in order to yield multidimensional entropic rates of convergence that coincide, up to a logarithmic factor, with those achievable in smooth distances (such as the 1-Wasserstein distance). In particular, our findings settle the open problem of proving a quantitative version of the multidimensional fourth moment theorem for random vectors having chaotic components, with explicit rates of convergence in total variation that are independent of the order of the associated Wiener chaoses. The results proved in the present paper are outside the scope of other existing techniques, such as for instance the multidimensional Stein's method for normal approximations.

Keywords: Carbery-Wright Inequality; Central Limit Theorem; De Bruijn's Formula; Fisher Information; Fourth Moment Theorem; Gaussian Fields; Relative Entropy; Stein factors.

MSC 2010: 94A17; 60F05; 60G15; 60H07

Contents

1 Introduction
  1.1 Overview and motivation
  1.2 Illustration of the method in dimension one
  1.3 Plan

2 Entropy bounds via de Bruijn's identity and Stein matrices
  2.1 Entropy
  2.2 Fisher information and de Bruijn's identity
  2.3 Stein matrices and a key lemma
  2.4 A general bound

3 Gaussian spaces and variational calculus
  3.1 Wiener chaos
  3.2 The language of Malliavin calculus: chaoses as eigenspaces
  3.3 The role of Malliavin and Stein matrices

4 Entropic fourth moment bounds on a Gaussian space
  4.1 Main results
  4.2 Estimates based on the Carbery-Wright inequalities
  4.3 Proof of Theorem 4.1
  4.4 Proof of Corollary 4.2

1 Introduction

1.1 Overview and motivation

The aim of this paper is to develop a new method for controlling the relative entropy of a general random vector with values in $\mathbb{R}^d$, and then to apply this technique to settle a number of open questions concerning central limit theorems (CLTs) on a Gaussian space. Our approach is based on a fine analysis of the (multidimensional) de Bruijn's formula, which provides a neat representation of the derivative of the relative entropy (along the Ornstein-Uhlenbeck semigroup) in terms of the Fisher information of some perturbed random vector – see e.g. [1, 4, 5, 19, 21]. The main tool developed in this paper (see Theorem 2.10 as well as relation (1.14)) is a new powerful representation of relative entropies in terms of Stein factors. Roughly speaking, Stein factors are random variables verifying a generalised integration by parts formula (see (2.43) below): these objects naturally appear in the context of the multidimensional Stein's method for normal approximations (see e.g. [15, 33]), and implicitly play a crucial role in many probabilistic limit theorems on Gaussian or other spaces (see e.g. [33, Chapter 6], as well as [31, 34, 37]).

The study of the classical CLT for sums of independent random elements by entropic methods dates back to Linnik's seminal paper [28]. Among the many fundamental contributions to this line of research, we cite [2, 3, 5, 8, 20, 9, 10, 12, 21] (see the monograph [19] for more details on the history of the theory). All these influential works revolve around a deep analysis of the effect of analytic convolution on the creation of entropy: in this respect, a particularly powerful tool is provided by the 'entropy jump inequalities' proved and exploited e.g. in [3, 5, 20, 6]. As discussed e.g. in [6], entropy jump inequalities are directly connected with challenging open questions in convex geometry, like for instance the Hyperplane and KLS conjectures. One of the common traits of all the above references is that they develop tools to control the Fisher information and use the aforementioned de Bruijn's formula to translate the bounds so obtained into bounds on the relative entropy.

One of the main motivations of the present paper is to initiate a systematic information-theoretical analysis of a large class of CLTs that has emerged in recent years in connection with different branches of modern stochastic analysis. These limit theorems typically involve: (a) an underlying infinite-dimensional Gaussian field G (like for instance a Wiener process), and (b) a sequence of rescaled centered random vectors $F_n = F_n(G)$, $n \ge 1$, having the form of some highly non-linear functional of the field G. For example, each $F_n$ may be defined as some collection of polynomial transformations of G, possibly depending on a parameter that is integrated with respect to a deterministic measure (but much more general forms are possible). Objects of this type naturally appear e.g. in the high-frequency analysis of random fields on homogeneous spaces [29], fractional processes [30, 39], Gaussian polymers [49], or random matrices [13, 32]. In view of their intricate structure, it is in general not possible to meaningfully represent the vectors $F_n$ in terms of some linear transformation of independent (or weakly dependent) vectors, so that the usual analytical techniques based on stochastic independence and convolution (or mixing) cannot be applied.

To overcome these difficulties, a recently developed line of research (see [33] for an introduction) has revealed that, by using tools from infinite-dimensional Gaussian analysis (e.g. the so-called Malliavin calculus of variations – see [38]) and under some regularity assumptions on $F_n$, one can control the distance between the distribution of $F_n$ and that of some Gaussian target by means of quantities that are no more complex than the fourth moment of $F_n$. The regularity assumptions on $F_n$ are usually expressed in terms of the projections of each $F_n$ on the eigenspaces of the Ornstein-Uhlenbeck semigroup associated with G [33, 38]. In this area, the most prominent contribution is arguably the following one-dimensional inequality established by the first two authors (see, e.g., [33, Theorem 5.2.6]): let f be the density of a random variable F, assume that $\int_{\mathbb{R}} x^2 f(x)\,dx = 1$, and that F belongs to the qth eigenspace of the Ornstein-Uhlenbeck semigroup of G (customarily called the qth Wiener chaos of G); then
\[
\frac14\int_{\mathbb{R}}|f(x)-\phi_1(x)|\,dx \;\le\; \sqrt{\frac13-\frac1{3q}}\;\sqrt{\int_{\mathbb{R}} x^4\big(f(x)-\phi_1(x)\big)\,dx},
\tag{1.1}
\]
where $\phi_1(x) = (2\pi)^{-1/2}\exp(-x^2/2)$ is the standard Gaussian density (one can prove that $\int_{\mathbb{R}} x^4(f(x)-\phi_1(x))\,dx \ge 0$ for f as above). Note that $\int_{\mathbb{R}} x^4\phi_1(x)\,dx = 3$. A standard use of hypercontractivity therefore allows one to deduce the so-called fourth moment theorem established in [40]: for a rescaled sequence $\{F_n\}$ of random variables living inside the qth Wiener chaos of G, one has that $F_n$ converges in distribution to a standard Gaussian random variable if and only if the fourth moment of $F_n$ converges to 3 (and in this case the convergence is in the sense of total variation). See also [39].

The quantity $\sqrt{\int_{\mathbb{R}} x^4(f(x)-\phi_1(x))\,dx}$ appearing in (1.1) is often called the kurtosis of the density f: it provides a rough measure of the discrepancy between the 'fatness' of the tails of f and $\phi_1$. The systematic emergence of the normal distribution from the reduction of kurtosis, in such a general collection of probabilistic models, is a new phenomenon that we barely begin to understand. A detailed discussion of these results can be found in [33, Chapters 5 and 6]. M. Ledoux [25] has recently proved a striking extension of the fourth moment theorem to random variables living in the eigenspaces associated with a general Markov operator, whereas references [17, 22] contain similar statements in the framework of free probability.

The estimate (1.1) is obtained by combining the Malliavin calculus of variations with Stein's method for normal approximations [15, 33]. Stein's method can be roughly described as a collection of analytical techniques, allowing one to measure the distance between random elements by controlling the regularity of the solutions to some specific ordinary (in dimension 1) or partial (in higher dimensions) differential equations. The needed estimates are often expressed in terms of the same Stein factors that lie at the core of the present paper (see Section 2.3 for definitions). It is important to notice that the strength of these techniques significantly breaks down when dealing with normal approximations in dimension strictly greater than 1. For instance, in view of the structure of the associated PDEs, for the time being there is no way to directly use Stein's method in order to obtain bounds in the multidimensional total variation distance (see [14, 43]). In contrast, the results of this paper allow one to deduce a number of information-theoretical generalisations of (1.1) that are valid in any dimension. It is somewhat remarkable that our techniques make a pervasive use of Stein factors, without ever applying Stein's method.

As an illustration, we present here a multidimensional entropic fourth moment bound that will be proved in full generality in Section 4. For $d\ge1$, we write $\phi_d(x) = \phi_d(x_1,...,x_d)$ to indicate the Gaussian density $(2\pi)^{-d/2}\exp(-(x_1^2+\cdots+x_d^2)/2)$, $(x_1,...,x_d)\in\mathbb{R}^d$. From now on, every random object is assumed to be defined on a common probability space $(\Omega,\mathcal{F},P)$, with E denoting expectation with respect to P.

Theorem 1.1 (Entropic fourth moment bound) Let $F_n = (F_{1,n},...,F_{d,n})$ be a sequence of d-dimensional random vectors such that: (i) $F_{i,n}$ belongs to the $q_i$th Wiener chaos of G, with $1\le q_1\le q_2\le\cdots\le q_d$; (ii) each $F_{i,n}$ has variance 1; (iii) $E[F_{i,n}F_{j,n}] = 0$ for $i\ne j$; and (iv) the law of $F_n$ admits a density $f_n$ on $\mathbb{R}^d$. Write
\[
\Delta_n := \int_{\mathbb{R}^d}\|x\|^4\big(f_n(x)-\phi_d(x)\big)\,dx,
\]
where $\|\cdot\|$ stands for the Euclidean norm, and assume that $\Delta_n\to0$, as $n\to\infty$. Then,
\[
\int_{\mathbb{R}^d} f_n(x)\log\frac{f_n(x)}{\phi_d(x)}\,dx = O(1)\,\Delta_n|\log\Delta_n|,
\tag{1.2}
\]
where O(1) stands for a bounded numerical sequence, depending on $d, q_1,...,q_d$ and on the sequence $\{F_n\}$.

As in the one-dimensional case, one has always that $\Delta_n\ge0$ for $f_n$ as in the previous statement. The quantity on the left-hand side of (1.2) equals of course the relative entropy of $f_n$. In view of the Csiszar-Kullback-Pinsker inequality (see [16, 23, 42]), according to which
\[
\int_{\mathbb{R}^d} f_n(x)\log\frac{f_n(x)}{\phi_d(x)}\,dx \;\ge\; \frac12\left(\int_{\mathbb{R}^d}\big|f_n(x)-\phi_d(x)\big|\,dx\right)^2,
\tag{1.3}
\]
relation (1.2) then translates into a bound on the square of the total variation distance between $f_n$ and $\phi_d$, where the dependence on $\Delta_n$ hinges on the order of the chaoses only via a multiplicative constant. This bound agrees up to a logarithmic factor with the estimates in smoother distances established in [37] (see also [35]), where it is proved that there exists a constant $K_0 = K_0(d,q_1,...,q_d)$ such that
\[
W_1(f_n,\phi_d) \le K_0\,\Delta_n^{1/2},
\]
where $W_1$ stands for the usual Wasserstein distance of order 1. Relation (1.2) also drastically improves the bounds that can be deduced from [31], yielding that, as $n\to\infty$,
\[
\int_{\mathbb{R}^d}\big|f_n(x)-\phi_d(x)\big|\,dx = O(1)\,\Delta_n^{\alpha_d},
\]
where $\alpha_d$ is any strictly positive number verifying $\alpha_d < \frac{1}{1+(d+1)(3+4d(q_d-1))}$, and the symbol O(1) stands again for some bounded numerical sequence. The estimate (1.2) seems to be largely outside the scope of any other available technique. Our results will also show that convergence in relative entropy is a necessary and sufficient condition for CLTs involving random vectors whose components live in a fixed Wiener chaos. As in [31, 36], an important tool for establishing our main results is the Carbery-Wright inequality [11], providing estimates on the small ball probabilities associated with polynomial transformations of Gaussian vectors. Observe also that, via Talagrand's transport inequality [48], our bounds trivially provide estimates on the 2-Wasserstein distance $W_2(f_n,\phi_d)$ between $f_n$ and $\phi_d$, for every $d\ge1$.
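
Note that, since $\mathrm{TV}(f_n,\phi_d) = \frac12\int_{\mathbb{R}^d}|f_n(x)-\phi_d(x)|\,dx$ (see (1.4) below), relations (1.2) and (1.3) combine to give, under the assumptions of Theorem 1.1,
\[
\mathrm{TV}(f_n,\phi_d) \;\le\; \left(\frac12\int_{\mathbb{R}^d} f_n(x)\log\frac{f_n(x)}{\phi_d(x)}\,dx\right)^{1/2} = O(1)\,\sqrt{\Delta_n|\log\Delta_n|}.
\]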

We stress that, although our principal motivation comes from asymptotic problems on a Gaussian space, the methods developed in Section 2 are general. In fact, at the heart of the present work lie the powerful equivalences (2.45)–(2.46) (which can be considered as a new form of so-called Stein identities) that are valid under very weak assumptions on the target density; it is also easy to uncover a wide variety of extensions and generalizations, so that we expect that our tools can be adapted to deal with a much wider class of multidimensional distributions.

The connection between Stein identities and information theory has already been noted in the literature (although only in dimension 1). For instance, explicit applications are known in the context of Poisson and compound Poisson approximations [7, 45], and recently several promising identities have been discovered for some discrete [26, 44] as well as continuous distributions [24, 27, 41]. However, with the exception of [27], the existing literature seems to be silent about any connection between entropic CLTs and Stein's identities for normal approximations. To the best of our knowledge, together with [27] (which however focusses on bounds of a completely different nature) the present paper contains the first relevant study of the relations between the two topics.

Remark on notation. Given random vectors X, Y with values in $\mathbb{R}^d$ ($d\ge1$) and densities $f_X$, $f_Y$, respectively, we shall denote by $\mathrm{TV}(f_X,f_Y)$ and $W_1(f_X,f_Y)$ the total variation and 1-Wasserstein distances between $f_X$ and $f_Y$ (and thus between the laws of X and Y). Recall that we have the representations
\[
\mathrm{TV}(f_X,f_Y) = \sup_{A\in\mathcal{B}(\mathbb{R}^d)}\big|P[X\in A]-P[Y\in A]\big| = \frac12\sup_{\|h\|_\infty\le1}\big|E[h(X)]-E[h(Y)]\big| = \frac12\int_{\mathbb{R}^d}\big|f_X(x)-f_Y(x)\big|\,dx =: \frac12\|f_X-f_Y\|_1,
\tag{1.4}
\]
where (here and throughout the paper) dx is shorthand for the Lebesgue measure on $\mathbb{R}^d$, as well as
\[
W_1(f_X,f_Y) = \sup_{h\in\mathrm{Lip}(1)}\big|E[h(X)]-E[h(Y)]\big|.
\]

In order to simplify the discussion, we shall sometimes use the shorthand notation TV(X, Y ) = TV(fX , fY ) and W1 (X, Y ) = W1 (fX , fY ).

It is a well-known fact that the topologies induced by TV and $W_1$, over the class of probability measures on $\mathbb{R}^d$, are strictly stronger than the topology of convergence in distribution (see e.g. [18, Chapter 11] or [33, Appendix C]). Finally, we agree that every logarithm in the paper has base e. To enhance the readability of the text, the next Subsection 1.2 contains an intuitive description of our method in dimension one.

1.2 Illustration of the method in dimension one

Let F be a random variable with density $f:\mathbb{R}\to[0,\infty)$, and let Z be a standard Gaussian random variable with density $\phi_1$. We shall assume that $E[F]=0$ and $E[F^2]=1$, and that Z and F are stochastically independent. As anticipated, we are interested in bounding the relative entropy of F (with respect to Z), which is given by the quantity
\[
D(F\|Z) = \int_{\mathbb{R}} f(x)\log\big(f(x)/\phi_1(x)\big)\,dx.
\]

Recall also that, in view of the Pinsker-Csiszar-Kullback inequality, one has that
\[
2\,\mathrm{TV}(f,\phi_1) \le \sqrt{2\,D(F\|Z)}.
\tag{1.5}
\]

Our aim is to deduce a bound on $D(F\|Z)$ that is expressed in terms of the so-called Stein factor associated with F. Whenever it exists, such a factor is a mapping $\tau_F:\mathbb{R}\to\mathbb{R}$ that is uniquely determined (up to negligible sets) by requiring that $\tau_F(F)\in L^1$ and
\[
E[Fg(F)] = E[\tau_F(F)g'(F)]
\]
for every smooth test function g. Specifying g(x) = x implies, in particular, that $E[\tau_F(F)] = E[F^2] = 1$. It is easily seen that, under standard regularity assumptions, a version of $\tau_F$ is given by $\tau_F(x) = (f(x))^{-1}\int_x^\infty zf(z)\,dz$, for x in the support of f (in particular, the Stein factor of Z is 1). The relevance of the factor $\tau_F$ in comparing F with Z is actually revealed by the following Stein's bound [15, 33], which is one of the staples of Stein's method:
\[
\mathrm{TV}(f,\phi_1) = \sup\big|E[g'(F)] - E[Fg(F)]\big|,
\tag{1.6}
\]
where the supremum runs over all continuously differentiable functions $g:\mathbb{R}\to\mathbb{R}$ satisfying $\|g\|_\infty\le\sqrt{2/\pi}$ and $\|g'\|_\infty\le2$. In particular, from (1.6) one recovers the bound in total variation
\[
\mathrm{TV}(f,\phi_1) \le 2\,E\big[|1-\tau_F(F)|\big],
\tag{1.7}
\]

providing a formal meaning to the intuitive fact that the distributions of F and Z are close whenever $\tau_F$ is close to $\tau_Z$, that is, whenever $\tau_F$ is close to 1. To motivate the reader, we shall now present a simple illustration of how the estimate (1.7) applies to the usual CLT.

Example 1.2 Let $\{F_i : i\ge1\}$ be a sequence of i.i.d. copies of F, set $S_n = n^{-1/2}\sum_{i=1}^n F_i$ and assume that $E[\tau_F(F)^2] < +\infty$ (a simple sufficient condition for this to hold is e.g. that f has compact support, and f is bounded from below inside its support). Then, using e.g. [47, Lemma 2],
\[
\tau_{S_n}(S_n) = E\left[\frac1n\sum_{i=1}^n\tau_F(F_i)\,\Big|\,S_n\right].
\tag{1.8}
\]
Since (by definition) $E[\tau_{S_n}(S_n)] = E[\tau_F(F_i)] = 1$ for all $i=1,\ldots,n$ we get
\begin{align*}
E\big[(1-\tau_{S_n}(S_n))^2\big] &= E\left[\left(E\left[\frac1n\sum_{i=1}^n(1-\tau_F(F_i))\,\Big|\,S_n\right]\right)^2\right]
\le E\left[\left(\frac1n\sum_{i=1}^n(1-\tau_F(F_i))\right)^2\right]\\
&= \frac1{n^2}\,\mathrm{Var}\left(\sum_{i=1}^n(1-\tau_F(F_i))\right) = \frac{E[(1-\tau_F(F))^2]}{n}.
\tag{1.9}
\end{align*}
In particular, writing $f_n$ for the density of $S_n$, we deduce from (1.7) that
\[
\mathrm{TV}(f_n,\phi_1) \le 2\,\frac{\mathrm{Var}(\tau_F(F))^{1/2}}{\sqrt n}.
\tag{1.10}
\]
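
For a concrete example of a Stein factor, take F uniform on $[-\sqrt3,\sqrt3]$, so that $E[F]=0$ and $E[F^2]=1$. Then $f\equiv(2\sqrt3)^{-1}$ on that interval, and the formula $\tau_F(x) = (f(x))^{-1}\int_x^\infty zf(z)\,dz$ yields
\[
\tau_F(x) = \frac{3-x^2}{2},\qquad x\in[-\sqrt3,\sqrt3],
\]
so that $E[\tau_F(F)] = 1$ (as it must be), while $\mathrm{Var}(\tau_F(F)) = E[(1-\tau_F(F))^2] = \frac14E[(F^2-1)^2] = \frac15$, which is the quantity entering the bound (1.10).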

We shall demonstrate in Section 3 and Section 4 that the quantity $E[|1-\tau_F(F)|]$ (as well as its multidimensional generalisations) can be explicitly controlled whenever F is a smooth functional of a Gaussian field. In view of these observations, the following question is therefore natural: can one bound $D(F\|Z)$ by an expression analogous to the right-hand-side of (1.7)? Our strategy for connecting $\tau_F(F)$ and $D(F\|Z)$ is based on an integral version of the classical de Bruijn's formula of information theory. To introduce this result, for $t\in[0,1]$ denote by $f_t$ the density of $F_t = \sqrt t\,F + \sqrt{1-t}\,Z$, in such a way that $f_1 = f$ and $f_0 = \phi_1$.


Of course $f_t(x) = E\big[\phi_1\big((x-\sqrt t\,F)/\sqrt{1-t}\big)\big]/\sqrt{1-t}$ has support $\mathbb{R}$ and is $C^\infty$ for all $t<1$. We shall denote by $\rho_t = (\log f_t)'$ the score function of $F_t$ (which is, by virtue of the preceding remark, well defined at all $t<1$ irrespective of the properties of F). For every $t<1$, the mapping $\rho_t$ is completely characterised by the fact that
\[
E[g'(F_t)] = -E[g(F_t)\rho_t(F_t)]
\tag{1.11}
\]
for every smooth test function g. We also write, for $t\in[0,1)$,
\[
J(F_t) = E[\rho_t(F_t)^2] = \int_{\mathbb{R}}\frac{f_t'(x)^2}{f_t(x)}\,dx
\]
for the Fisher information of $F_t$, and we observe that $0\le E[(F_t+\rho_t(F_t))^2] = J(F_t)-1 =: J_{st}(F_t)$, where $J_{st}(F_t)$ is the so-called standardised Fisher information of $F_t$ (note that $J_{st}(F_0) = J_{st}(Z) = 0$). With this notation in mind, de Bruijn's formula (in an integral and rescaled version due to Barron [8]) reads
\[
D(F\|Z) = \int_0^1\frac{J(F_t)-1}{2t}\,dt = \int_0^1\frac{J_{st}(F_t)}{2t}\,dt
\tag{1.12}
\]
(see Lemma 2.3 below for a multidimensional statement).

Remark 1.3 Using the standard relation $J_{st}(F_t)\le t\,J_{st}(F)+(1-t)J_{st}(Z) = t\,J_{st}(F)$ (see e.g. [19, Lemma 1.21]), we deduce the upper bound
\[
D(F\|Z) \le \frac12\,J_{st}(F),
\tag{1.13}
\]

a result which is often proved by using entropy power inequalities (see also Shimizu [46]). Formula (1.13) is a quantitative counterpart to the intuitive fact that the distributions of F and Z are close whenever $J_{st}(F)$ is close to zero. Using (1.5) we further deduce that closeness between the Fisher informations of F and Z (i.e. $J_{st}(F)\approx0$) or between the entropies of F and Z (i.e. $D(F\|Z)\approx0$) both imply closeness in terms of the total variation distance, and hence in terms of many more probability metrics. This observation lies at the heart of the approach from [8, 10, 20] where a fine analysis of the behavior of $\rho_F(F)$ over convolutions (through projection inequalities in the spirit of (1.8)) is used to provide explicit bounds on the Fisher information distance which in turn are transformed, by means of de Bruijn's identity (1.12), into bounds on the relative entropy. We will see in Section 4 that the bound (1.13) is too crude to be of use in the applications we are interested in.

Our key result in dimension 1 is the following statement (see Theorem 2.10 for a general multidimensional version), providing a new representation of relative entropy in terms of Stein factors. From now on, we denote by $C_c^1$ the class of all functions $g:\mathbb{R}\to\mathbb{R}$ that are continuously differentiable and with compact support.

Proposition 1.4 Let the previous notation prevail. We have
\[
D(F\|Z) = \frac12\int_0^1\frac{t}{1-t}\,E\Big[E[Z(1-\tau_F(F))\,|\,F_t]^2\Big]\,dt.
\tag{1.14}
\]

Proof. Using $\rho_Z(Z) = -Z$ we see that, for any function $g\in C_c^1$, one has
\begin{align*}
E\big[Z(1-\tau_F(F))\,g(\sqrt t\,F+\sqrt{1-t}\,Z)\big]
&= \sqrt{1-t}\;E\big[(1-\tau_F(F))\,g'(\sqrt t\,F+\sqrt{1-t}\,Z)\big]\\
&= \sqrt{1-t}\,\Big(E[g'(F_t)] - \tfrac1{\sqrt t}\,E[Fg(F_t)]\Big)\\
&= \frac{\sqrt{1-t}}{t}\,E[g'(F_t)] - \frac{\sqrt{1-t}}{t}\,E[F_tg(F_t)]
= -\frac{\sqrt{1-t}}{t}\,E\big[(\rho_t(F_t)+F_t)\,g(F_t)\big],
\end{align*}
yielding the representation
\[
\rho_t(F_t)+F_t = -\frac{t}{\sqrt{1-t}}\,E[Z(1-\tau_F(F))\,|\,F_t].
\tag{1.15}
\]
This implies
\[
J(F_t)-1 = E[(\rho_t(F_t)+F_t)^2] = \frac{t^2}{1-t}\,E\Big[E[Z(1-\tau_F(F))\,|\,F_t]^2\Big],
\]
and the desired conclusion follows from de Bruijn's identity (1.12).
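
Note that, consistently with (1.14), if $\tau_F(F)=1$ almost surely then $E[Z(1-\tau_F(F))\,|\,F_t]=0$ for every t, so that $D(F\|Z)=0$; this is in agreement with the fact that the relation $E[Fg(F)]=E[g'(F)]$ for all smooth g characterises the standard Gaussian distribution.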

To properly control the integral on the right-hand-side of (1.14), we need to deal with the fact that the mapping $t\mapsto\frac{t}{1-t}$ is not integrable in t = 1, so that we cannot directly apply the estimate $E\big[E[Z(1-\tau_F(F))|F_t]^2\big] \le \mathrm{Var}(\tau_F(F))$ to deduce the desired bound. Intuitively, one has to exploit the fact that the mapping $t\mapsto E[Z(1-\tau_F(F))|F_t]$ satisfies $E[Z(1-\tau_F(F))|F_1]=0$, thus in principle compensating for the singularity at $t\approx1$.

As we will see below, one can make this heuristic precise provided there exist three constants $c,\delta,\eta>0$ such that
\[
E[|\tau_F(F)|^{2+\eta}]<\infty \quad\text{and}\quad E\big|E[Z(1-\tau_F(F))|F_t]\big| \le c\,t^{-1}(1-t)^\delta, \qquad 0<t\le1.
\tag{1.16}
\]
Under the assumptions appearing in condition (1.16), the following strategy can indeed be implemented in order to deduce a satisfactory bound. First split the integral in two parts: for every $0<\varepsilon\le1$,
\begin{align*}
2\,D(F\|Z) &= \int_0^{1-\varepsilon}\frac{t}{1-t}\,E\big[E[Z(1-\tau_F(F))|F_t]^2\big]\,dt + \int_{1-\varepsilon}^1\frac{t}{1-t}\,E\big[E[Z(1-\tau_F(F))|F_t]^2\big]\,dt\\
&\le E[(1-\tau_F(F))^2]\,|\log\varepsilon| + \int_{1-\varepsilon}^1\frac{t}{1-t}\,E\big[E[Z(1-\tau_F(F))|F_t]^2\big]\,dt,
\tag{1.17}
\end{align*}
the last inequality being a consequence of $0\le\int_0^{1-\varepsilon}\frac{t\,dt}{1-t} = \int_\varepsilon^1\frac{(1-u)\,du}{u} \le \int_\varepsilon^1\frac{du}{u} = -\log\varepsilon$. To deal with the second term in (1.17), let us observe that, by using in particular the Hölder inequality and the convexity of the function $x\mapsto|x|^{\eta+2}$, one deduces from (1.16) that
\begin{align*}
E\big[E[Z(1-\tau_F(F))|F_t]^2\big]
&= E\Big[\big|E[Z(1-\tau_F(F))|F_t]\big|^{\frac{\eta}{\eta+1}}\,\big|E[Z(1-\tau_F(F))|F_t]\big|^{\frac{\eta+2}{\eta+1}}\Big]\\
&\le \Big(E\big|E[Z(1-\tau_F(F))|F_t]\big|\Big)^{\frac{\eta}{\eta+1}}\times\Big(E\big|E[Z(1-\tau_F(F))|F_t]\big|^{\eta+2}\Big)^{\frac1{\eta+1}}\\
&\le c^{\frac{\eta}{\eta+1}}\,t^{-\frac{\eta}{\eta+1}}(1-t)^{\frac{\delta\eta}{\eta+1}}\times\big(E|Z|^{\eta+2}\big)^{\frac1{\eta+1}}\times\big(E|1-\tau_F(F)|^{\eta+2}\big)^{\frac1{\eta+1}}\\
&\le c^{\frac{\eta}{\eta+1}}\,t^{-1}(1-t)^{\frac{\delta\eta}{\eta+1}}\times2\,\big(E|Z|^{\eta+2}\big)^{\frac1{\eta+1}}\Big(1+E\big[|\tau_F(F)|^{\eta+2}\big]^{\frac1{\eta+1}}\Big)\\
&= C_\eta\,t^{-1}(1-t)^{\frac{\delta\eta}{\eta+1}},
\tag{1.18}
\end{align*}
with
\[
C_\eta := 2\,c^{\frac{\eta}{\eta+1}}\,\big(E|Z|^{\eta+2}\big)^{\frac1{\eta+1}}\Big(1+E\big[|\tau_F(F)|^{\eta+2}\big]^{\frac1{\eta+1}}\Big).
\]
By virtue of (1.17) and (1.18), the term $D(F\|Z)$ is eventually amenable to analysis, and one obtains:
\[
2\,D(F\|Z) \le E[(1-\tau_F(F))^2]\,|\log\varepsilon| + C_\eta\int_{1-\varepsilon}^1(1-t)^{\frac{\delta\eta}{\eta+1}-1}\,dt
= E[(1-\tau_F(F))^2]\,|\log\varepsilon| + \frac{C_\eta(\eta+1)}{\delta\eta}\,\varepsilon^{\frac{\delta\eta}{\eta+1}}.
\]
Assuming finally that $E[(1-\tau_F(F))^2]\le1$ (recall that, in the applications we are interested in, such a quantity is meant to be close to 0) we can optimize over $\varepsilon$ and choose $\varepsilon = E[(1-\tau_F(F))^2]^{\frac{\eta+1}{\delta\eta}}$, which leads to
\[
D(F\|Z) \le \frac{\eta+1}{2\delta\eta}\,E[(1-\tau_F(F))^2]\,\big|\log E[(1-\tau_F(F))^2]\big| + \frac{C_\eta(\eta+1)}{2\delta\eta}\,E[(1-\tau_F(F))^2].
\tag{1.19}
\]
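
For instance, if (1.16) holds with $\delta=\tfrac12$ and one can take $\eta=2$, then $\frac{\eta+1}{2\delta\eta}=\tfrac32$ and (1.19) reads
\[
D(F\|Z) \le \tfrac32\,E[(1-\tau_F(F))^2]\,\big|\log E[(1-\tau_F(F))^2]\big| + \tfrac32\,C_2\,E[(1-\tau_F(F))^2].
\]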

Clearly, combining (1.19) with (1.5), one also obtains an estimate in total variation which agrees with (1.7) up to the square root of a logarithmic factor. The problem is now how to identify sufficient conditions on the law of F for (1.16) to hold; we shall address this issue by means of two auxiliary results. We start with a useful technical lemma, that has been suggested to us by Guillaume Poly.

Lemma 1.5 Let X be an integrable random variable and let Y be an $\mathbb{R}^d$-valued random vector having an absolutely continuous distribution. Then
\[
E\big|E[X\,|\,Y]\big| = \sup E[Xg(Y)],
\tag{1.20}
\]
where the supremum is taken over all $g\in C_c^1$ such that $\|g\|_\infty\le1$.

Proof. Since $|\mathrm{sign}(E[X|Y])|=1$ we have, by using e.g. Lusin's Theorem,
\[
E\big|E[X\,|\,Y]\big| = E\big[X\,\mathrm{sign}(E[X|Y])\big] \le \sup E[Xg(Y)].
\]
To see the reversed inequality, observe that, for any g bounded by 1, $|E[Xg(Y)]| = |E[E[X|Y]g(Y)]| \le E|E[X|Y]|$. The lemma is proved.

Our next statement relates (1.16) to the problem of estimating the total variation distance between F and $\sqrt t\,F+\sqrt{1-t}\,x$ for any $x\in\mathbb{R}$ and $0<t\le1$.

Lemma 1.6 Assume that, for some $\kappa,\alpha>0$,
\[
\mathrm{TV}(\sqrt t\,F+\sqrt{1-t}\,x,\,F) \le \kappa(1+|x|)\,t^{-1}(1-t)^\alpha, \qquad x\in\mathbb{R},\ t\in(0,1].
\tag{1.21}
\]
Then (1.16) holds, with $\delta = \frac12\wedge\alpha$ and $c = 4(\kappa+1)$.

Proof. Take $g\in C_c^1$ such that $\|g\|_\infty\le1$. Then, by independence of Z and F,
\begin{align*}
E[Z(1-\tau_F(F))g(F_t)] &= E[g(F_t)Z] - E[Zg(F_t)\tau_F(F)]\\
&= E[g(F_t)Z] - \sqrt{1-t}\,E[\tau_F(F)g'(F_t)]\\
&= E[Z(g(F_t)-g(F))] - \sqrt{\frac{1-t}{t}}\,E[g(F_t)F],
\end{align*}
so that, since $\|g\|_\infty\le1$ and $E|F|\le\sqrt{E[F^2]}=1$,
\[
\big|E[Z(1-\tau_F(F))g(F_t)]\big| \le \big|E[Z(g(F_t)-g(F))]\big| + \sqrt{\frac{1-t}{t}} \le \big|E[Z(g(F_t)-g(F))]\big| + t^{-1}\sqrt{1-t}.
\]
We have furthermore
\begin{align*}
E[Z(g(F_t)-g(F))] &= \int_{\mathbb{R}} x\,E\big[g(\sqrt t\,F+\sqrt{1-t}\,x)-g(F)\big]\,\phi_1(x)\,dx\\
&\le 2\int_{\mathbb{R}}|x|\,\mathrm{TV}(\sqrt t\,F+\sqrt{1-t}\,x,\,F)\,\phi_1(x)\,dx\\
&\le 2\kappa\,t^{-1}(1-t)^\alpha\int_{\mathbb{R}}|x|(1+|x|)\,\phi_1(x)\,dx \;\le\; 4\kappa\,t^{-1}(1-t)^\alpha.
\end{align*}
Inequality (1.16) now follows by applying Lemma 1.5.

As anticipated, in Section 4 (see Lemma 4.4 for a precise statement) we will describe a wide class of distributions satisfying (1.21). The previous discussion yields finally the following statement, answering the original question of providing a bound on $D(F\|Z)$ that is comparable with the estimate (1.7).

Theorem 1.7 Let F be a random variable with density $f:\mathbb{R}\to[0,\infty)$, satisfying $E[F]=0$ and $E[F^2]=1$. Let $Z\sim N(0,1)$ be a standard Gaussian variable (independent of F). If, for some $\alpha,\kappa,\eta>0$, one has
\[
E[|\tau_F(F)|^{2+\eta}]<\infty
\tag{1.22}
\]
and
\[
\mathrm{TV}(\sqrt t\,F+\sqrt{1-t}\,x,\,F) \le \kappa(1+|x|)\,t^{-1}(1-t)^\alpha, \qquad x\in\mathbb{R},\ t\in(0,1],
\tag{1.23}
\]
then, provided $\Delta := E[(1-\tau_F(F))^2]\le1$,
\[
D(F\|Z) \le \frac{\eta+1}{(1\wedge2\alpha)\eta}\,\Delta\,|\log\Delta| + \frac{C_\eta(\eta+1)}{(1\wedge2\alpha)\eta}\,\Delta,
\tag{1.24}
\]
where
\[
C_\eta = 2(4\kappa+4)^{\frac{\eta}{\eta+1}}\big(E|Z|^{\eta+2}\big)^{\frac1{\eta+1}}\Big(1+E\big[|\tau_F(F)|^{\eta+2}\big]^{\frac1{\eta+1}}\Big).
\]

1.3 Plan

The rest of the paper is organised as follows. In Section 2 we will prove that Theorem 1.7 can be generalised to a fully multidimensional setting. Section 3 contains some general results related to (infinite-dimensional) Gaussian stochastic analysis. Finally, in Section 4 we shall apply our estimates in order to deduce general bounds of the type appearing in Theorem 1.1.


2 Entropy bounds via de Bruijn's identity and Stein matrices

In Section 2.1 and Section 2.2 we discuss some preliminary notions related to the theory of information (definitions, notations and main properties). Section 2.3 contains the proof of a new integral formula, allowing one to represent the relative entropy of a given random vector in terms of a Stein matrix. The reader is referred to the monograph [19], as well as to [1, Chapter 10], for any unexplained definition and result concerning information theory.

2.1 Entropy

Fix an integer $d\ge1$. Throughout this section, we consider a d-dimensional square-integrable and centered random vector $F = (F_1,...,F_d)$ with covariance matrix $B>0$. We shall assume that the law of F admits a density $f = f_F$ (with respect to the Lebesgue measure) with support $S\subseteq\mathbb{R}^d$. No other assumptions on the distribution of F will be needed. We recall that the differential entropy (or, simply, the entropy) of F is given by the quantity $\mathrm{Ent}(F) := -E[\log f(F)] = -\int_{\mathbb{R}^d} f(x)\log f(x)\,dx = -\int_S f(x)\log f(x)\,dx$, where we have adopted (here and for the rest of the paper) the standard convention $0\log0 := 0$. Note that $\mathrm{Ent}(F) = \mathrm{Ent}(F+c)$ for all $c\in\mathbb{R}^d$, i.e. entropy is location invariant.

As discussed above, we are interested in estimating the distance between the law of F and the law of a d-dimensional centered Gaussian vector $Z = (Z_1,...,Z_d)\sim N_d(0,C)$, where $C>0$ is the associated covariance matrix. Our measure of the discrepancy between the distributions of F and Z is the relative entropy (often called Kullback-Leibler divergence or information entropy)
\[
D(F\|Z) := E\big[\log(f(F)/\phi(F))\big] = \int_{\mathbb{R}^d} f(x)\log\frac{f(x)}{\phi(x)}\,dx,
\tag{2.25}
\]
where $\phi = \phi_d(\,\cdot\,;C)$ is the density of Z. It is easy to compute the Gaussian entropy $\mathrm{Ent}(Z) = \frac12\log\big((2\pi e)^d|C|\big)$ (where $|C|$ is the determinant of C), from which we deduce the following alternative expression for the relative entropy
\[
0 \le D(F\|Z) = \mathrm{Ent}(Z) - \mathrm{Ent}(F) + \frac{\mathrm{tr}(C^{-1}B)-d}{2},
\tag{2.26}
\]

where ‘tr’ stands for the usual trace operator. If Z and F have the same covariance matrix then the relative entropy is simply the entropy gap between F and Z so that, in particular, one infers from (2.26) that Z has maximal entropy among all absolutely continuous random vectors with covariance matrix C. We stress that the relative entropy D does not define a bona fide probability distance (for absence of a triangle inequality, as well as for lack of symmetry): however, one can easily translate estimates on the relative entropy in terms of the total variation distance, using the already recalled Pinsker-Csiszar-Kullback inequality (1.3). In the next subsection, we show how one can represent the quantity D(F ||Z) as the integral of the standardized Fisher information of some adequate interpolation between F and Z.
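
As a simple illustration of (2.26), take d = 1 and let F itself be Gaussian with variance b, while Z has variance c. Then $\mathrm{Ent}(F) = \frac12\log(2\pi e\,b)$ and (2.26) gives
\[
D(F\|Z) = \frac12\log\frac cb + \frac12\Big(\frac bc-1\Big) \ge 0,
\]
which vanishes if and only if b = c (compare with (2.36) and (2.37) below).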

2.2 Fisher information and de Bruijn's identity

Without loss of generality, we may assume for the rest of the paper that the vectors F and Z (as defined in the previous Section 2.1) are stochastically independent. For every $t\in[0,1]$, we define the centered random vector $F_t := \sqrt t\,F + \sqrt{1-t}\,Z$, in such a way that $F_0 = Z$ and $F_1 = F$. It is clear that $F_t$ is centered and has covariance $\Gamma_t = tB + (1-t)C > 0$; moreover, whenever $t\in[0,1)$, $F_t$ has a strictly positive and infinitely differentiable density, that we shall denote by $f_t$ (see e.g. [21, Lemma 3.1] for more details). For every $t\in[0,1)$, we define the score of $F_t$ as the $\mathbb{R}^d$-valued function given by
\[
\rho_t : \mathbb{R}^d\to\mathbb{R}^d : x\mapsto\rho_t(x) = (\rho_{t,1}(x),...,\rho_{t,d}(x))^T := \nabla\log f_t(x),
\tag{2.27}
\]
with $\nabla$ the usual gradient in $\mathbb{R}^d$ (note that we will systematically regard the elements of $\mathbb{R}^d$ as column vectors). The quantity $\rho_t(x)$ is of course well-defined for every $x\in\mathbb{R}^d$ and every $t\in[0,1)$; moreover, it is easily seen that the random vector $\rho_t(F_t)$ is completely characterized (up to sets of P-measure zero) by the relation
\[
E[\rho_t(F_t)g(F_t)] = -E[\nabla g(F_t)],
\tag{2.28}
\]
holding for every smooth function $g:\mathbb{R}^d\to\mathbb{R}$. Selecting g = 1 in (2.28), one sees that $\rho_t(F_t)$ is a centered random vector. The covariance matrix of $\rho_t(F_t)$ is denoted by
\[
J(F_t) := E[\rho_t(F_t)\rho_t(F_t)^T]
\tag{2.29}
\]
(with components $J(F_t)_{ij} = E[\rho_{t,i}(F_t)\rho_{t,j}(F_t)]$ for $1\le i,j\le d$), and is customarily called the Fisher information matrix of $F_t$. Focussing on the case t = 0, one sees immediately that the Gaussian vector $F_0 = Z\sim N_d(0,C)$ has linear score function $\rho_0(x) = \rho_Z(x) = -C^{-1}x$ and Fisher information $J(F_0) = J(Z) = C^{-1}$.

Remark 2.1 Fix $t\in[0,1)$. Using formula (2.28) one deduces that a version of $\rho_t(F_t)$ is given by the conditional expectation $-(1-t)^{-1/2}E[C^{-1}Z\,|\,F_t]$, from which we infer that the matrix $J(F_t)$ is well-defined and its entries are all finite.

For $t\in[0,1)$, we define the standardized Fisher information matrix of $F_t$ as
\[
J_{st}(F_t) := \Gamma_t\,E\Big[\big(\rho_t(F_t)+\Gamma_t^{-1}F_t\big)\big(\rho_t(F_t)+\Gamma_t^{-1}F_t\big)^T\Big] = \Gamma_t J(F_t) - I_d,
\tag{2.30}
\]
where $I_d$ is the $d\times d$ identity matrix, and the last equality holds because $E[\rho_t(F_t)F_t^T] = -I_d$. Note that the positive semidefinite matrix $\Gamma_t^{-1}J_{st}(F_t) = J(F_t)-\Gamma_t^{-1}$ is the difference between the Fisher information matrix of $F_t$ and that of a Gaussian vector having distribution $N_d(0,\Gamma_t)$. Observe that
\[
J_{st}(F_t) = E\Big[\big(\rho^\star_t(F_t)+F_t\big)\big(\rho^\star_t(F_t)+F_t\big)^T\Big]\,\Gamma_t^{-1},
\tag{2.31}
\]
where the vector
\[
\rho^\star_t(F_t) = (\rho^\star_{t,1}(F_t),...,\rho^\star_{t,d}(F_t))^T := \Gamma_t\,\rho_t(F_t)
\tag{2.32}
\]
is completely characterized (up to sets of P-measure 0) by the equation
\[
E[\rho^\star_t(F_t)g(F_t)] = -\Gamma_t\,E[\nabla g(F_t)],
\tag{2.33}
\]
holding for every smooth test function g.

Remark 2.2 Of course the above information theoretic quantities are not defined only for Gaussian mixtures of the form $F_t$ but more generally for any random vector satisfying the relevant assumptions (which are necessarily verified by the $F_t$). In particular, if F has covariance matrix B and differentiable density f then, letting $\rho_F(x) := \nabla\log f(x)$ be the score function for F, the standardized Fisher information of F is
\[
J_{st}(F) = B\,E\big[\rho_F(F)\rho_F(F)^T\big] - I_d.
\tag{2.34}
\]
Whenever this quantity is well-defined, it is also scale invariant in the sense that $J_{st}(\alpha F) = J_{st}(F)$ for all $\alpha\ne0$.
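
For instance, if $F\sim N_d(0,B)$ then $\rho_F(x) = -B^{-1}x$, so that $E[\rho_F(F)\rho_F(F)^T] = B^{-1}$ and (2.34) gives $J_{st}(F) = BB^{-1}-I_d = 0$, as expected for a Gaussian vector.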


The following fundamental result is known as the (multidimensional) de Bruijn's identity: it shows that the relative entropy $D(F\|Z)$ can be represented in terms of the integral of the mapping $t\mapsto\mathrm{tr}(C\Gamma_t^{-1}J_{st}(F_t))$ with respect to the measure $dt/2t$ on (0,1]. It is one of the staples of the entire paper. We refer the reader e.g. to [1, 8] for proofs in the case d = 1. Our multidimensional statement is a rescaling of [21, Theorem 2.3] (some more details are given in the proof).

Lemma 2.3 (Multivariate de Bruijn's identity) Let the above notation and assumptions prevail. Then,
\[
D(F\|Z) = \int_0^1\frac1{2t}\,\mathrm{tr}\big(C\Gamma_t^{-1}J_{st}(F_t)\big)\,dt + \frac12\big(\mathrm{tr}(C^{-1}B)-d\big) + \int_0^1\frac1{2t}\,\mathrm{tr}\big(C\Gamma_t^{-1}-I_d\big)\,dt.
\tag{2.35}
\]

Proof. In [21, Theorem 2.3] it is proved that
\[
D(F\|Z) = \frac12\int_0^\infty\mathrm{tr}\big(C(B+\tau C)^{-1}J_{st}(F+\sqrt\tau\,Z)\big)\,d\tau + \frac12\big(\mathrm{tr}(C^{-1}B)-d\big) + \frac12\int_0^\infty\mathrm{tr}\Big(C\Big((B+\tau C)^{-1}-\frac{C^{-1}}{1+\tau}\Big)\Big)\,d\tau
\]
(note that the definition of standardized Fisher information used in [21] is different from ours). The conclusion is obtained by using the change of variables $t = (1+\tau)^{-1}$, as well as the fact that
\[
J_{st}\Big(F+\sqrt{\tfrac{1-t}{t}}\,Z\Big) = J_{st}\big(\sqrt t\,F+\sqrt{1-t}\,Z\big),
\]
which follows from the scale-invariance of standardized Fisher information mentioned in Remark 2.2.

Remark 2.4 Assume that $C_n$, $n\ge1$, is a sequence of $d\times d$ nonsingular covariance matrices such that $C_{n;i,j}\to B_{i,j}$ for every $i,j=1,...,d$, as $n\to\infty$. Then, the second and third summands of (2.35) (with $C_n$ replacing C) converge to 0 as $n\to\infty$.

For future reference, we will now rewrite formula (2.35) for some specific choices of d, F, B and C.

Example 2.5 (i) Assume $F\sim N_d(0,B)$. Then, $J_{st}(F_t) = 0$ (null matrix) for every $t\in[0,1)$, and formula (2.35) becomes
\[
D(F\|Z) = \frac12\big(\mathrm{tr}(C^{-1}B)-d\big) + \int_0^1\frac1{2t}\,\mathrm{tr}\big(C\Gamma_t^{-1}-I_d\big)\,dt.
\tag{2.36}
\]
(ii) Assume that d = 1 and that F and Z have variances $b,c>0$, respectively. Defining $\gamma_t = tb+(1-t)c$, relation (2.35) becomes
\begin{align*}
D(F\|Z) &= \int_0^1\frac{c}{2t\gamma_t}\,J_{st}(F_t)\,dt + \frac12\Big(\frac bc-1\Big) + \int_0^1\frac1{2t}\Big(\frac c{\gamma_t}-1\Big)\,dt
\tag{2.37}\\
&= \int_0^1\frac{c}{2t}\,E\big[(\rho_t(F_t)+\gamma_t^{-1}F_t)^2\big]\,dt + \frac12\Big(\frac bc-1\Big) + \frac{\log c-\log b}{2}.
\end{align*}
Relation (2.37) in the case $b=c\ (=\gamma_t)$ corresponds to the integral formula (1.12) proved by Barron in [8, Lemma 1].

(iii) If B = C, then (2.35) takes the form
\[
D(F\|Z) = \int_0^1\frac1{2t}\,\mathrm{tr}\big(J_{st}(F_t)\big)\,dt.
\tag{2.38}
\]
In the special case where $B = C = I_d$, one has that
\[
D(F\|Z) = \frac12\sum_{j=1}^d\int_0^1\frac1t\,E\big[(\rho_{t,j}(F_t)+F_{t,j})^2\big]\,dt,
\tag{2.39}
\]
of which (1.12) is a particular case (d = 1).

In the univariate setting, the general variance case ($E[F^2]=\sigma^2$) follows trivially from the standardized one ($E[F^2]=1$) through scaling; the same cannot be said in the multivariate setting, since the appealing form (2.39) cannot be directly achieved for $d\ge2$ when the covariance matrices are not the identity, because here the dependence structure of F needs to be taken into account. In Lemma 2.6 we provide an estimate allowing one to deal with this difficulty in the case B = C, for every d. The proof is based on the following elementary fact: if A, B are two $d\times d$ symmetric matrices, and if A is positive semidefinite, then
\[
\lambda_{\min}(B)\times\mathrm{tr}(A) \le \mathrm{tr}(AB) \le \lambda_{\max}(B)\times\mathrm{tr}(A),
\tag{2.40}
\]
where $\lambda_{\min}(B)$ and $\lambda_{\max}(B)$ stand, respectively, for the minimum and maximum eigenvalue of B. Observe that $\lambda_{\max}(B) = \|B\|_{op}$, the operator norm of B.

Lemma 2.6 Fix $d\ge1$, and assume that B = C. Then, $C\Gamma_t^{-1} = I_d$, and one has the following estimates:

\[
\lambda_{\min}(C)\times\sum_{j=1}^d E\big[(\rho_{t,j}(F_t)+(C^{-1}F_t)_j)^2\big] \;\le\; \mathrm{tr}\big(J_{st}(F_t)\big) \;\le\; \lambda_{\max}(C)\times\sum_{j=1}^d E\big[(\rho_{t,j}(F_t)+(C^{-1}F_t)_j)^2\big],
\tag{2.41}
\]
\[
\lambda_{\min}(C^{-1})\times\sum_{j=1}^d E\big[(\rho^\star_{t,j}(F_t)+F_{t,j})^2\big] \;\le\; \mathrm{tr}\big(J_{st}(F_t)\big) \;\le\; \lambda_{\max}(C^{-1})\times\sum_{j=1}^d E\big[(\rho^\star_{t,j}(F_t)+F_{t,j})^2\big].
\tag{2.42}
\]

Proof. Write $\mathrm{tr}(J_{st}(F_t)) = \mathrm{tr}\big(C^{-1}J_{st}(F_t)\,C\big)$ and apply (2.40) first to $A = C^{-1}J_{st}(F_t)$ and $B = C$, and then to $A = J_{st}(F_t)C$ and $B = C^{-1}$.

In the next section, we prove a new representation of the quantity $\rho_t(F_t)+C^{-1}F_t$ in terms of Stein matrices: this connection will provide the ideal framework in order to deal with the normal approximation of general random vectors.

2.3 Stein matrices and a key lemma

The centered d-dimensional vectors F, Z are defined as in the previous section (in particular, they are stochastically independent).


Definition 2.7 (Stein matrices) Let $\mathcal{M}(d,\mathbb{R})$ denote the space of $d\times d$ real matrices. We say that the matrix-valued mapping
\[
\tau_F : \mathbb{R}^d\to\mathcal{M}(d,\mathbb{R}) : x\mapsto\tau_F(x) = \{\tau_F^{i,j}(x) : i,j=1,...,d\}
\]
is a Stein matrix for F if $\tau_F^{i,j}(F)\in L^1$ for every i, j and the following equality is verified for every differentiable function $g:\mathbb{R}^d\to\mathbb{R}$ such that both sides are well-defined:
\[
E[Fg(F)] = E[\tau_F(F)\nabla g(F)],
\tag{2.43}
\]
or, equivalently,
\[
E[F_i g(F)] = \sum_{j=1}^d E\big[\tau_F^{i,j}(F)\,\partial_j g(F)\big], \qquad i=1,...,d.
\tag{2.44}
\]
The entries of the random matrix $\tau_F(F)$ are called the Stein factors of F.

Remark 2.8 (i) Selecting $g(F) = F_j$ in (2.43), one deduces that, if $\tau_F$ is a Stein matrix for F, then $E[\tau_F(F)] = C$. More to this point, if $F\sim N_d(0,C)$, then the covariance matrix C is itself a Stein matrix for F. This last relation is known as Stein's identity for the multivariate Gaussian distribution.
(ii) Assume that d = 1 and that F has density f and variance $b>0$. Then, under some standard regularity assumptions, it is easy to see that $\tau_F(x) = \int_x^\infty yf(y)\,dy/f(x)$ is a Stein factor for F.

(2.46)

is a version of the function ρ⋆t defined in formula (2.32).

Proof. Remember that −C −1 Z is the score of Z, and denote by x 7→ At (x) the mapping defined in (2.45). Removing the conditional expectation and exploiting the independence of F and Z, we infer that, for every smooth test function g, h i   t E Id − C −1 τF (F ) −C −1 Z g(Ft ) E [At (Ft )g(Ft )] = √ 1−t √ √   − tC −1 E [F g(Ft )] − 1 − tE C −1 Zg(Ft )    = −tE Id − C −1 τF (F ) ∇g(Ft ) − tC −1 E [τF (F )∇g(Ft )] − (1 − t)E [∇g(Ft )]

= −E [∇g(Ft )] , thus yielding the desired conclusion.

e = To simplify the forthcoming discussion, we shall use the shorthand notation: Z ed ) := C −1 Z ∼ Nd (0, C −1 ), Fe = (Fed , ..., Fed ) := C −1 F , and τeF = {e (Zed , ..., Z τFi,j : i, j = 1, ..., d} := C −1 τF . The following statement is the main achievement of the section, and is obtained by combining Lemma 2.9 with formulae (2.38) and (2.41)–(2.42), in the case where C = B.

15

Theorem 2.10 Let the above notation and assumptions prevail, assume that B = C, and introduce the notation  " d #2  Z 1 d X X 1 t A1 (F ; Z) := E E (1j=k − τeFj,k (F ))Zek Ft  dt, (2.47) 2 0 1−t j=1 k=1  " d #2  Z 1 d X X t 1 E E (C(j, k) − τFj,k (F ))Zek Ft  dt. (2.48) A2 (F ; Z) := 2 0 1−t j=1 k=1

Then one has the inequalities

(2.49)

λmin (C) × A1 (F ; Z) 6 D(F kZ) 6 λmax (C) × A1 (F ; Z), λmin (C

−1

) × A2 (F ; Z) 6 D(F kZ) 6 λmax (C

−1

(2.50)

) × A2 (F ; Z).

In particular, when C = B = Id ,  #2  " d Z d X X 1 1 t D(F kZ) = E E (1j=k − τFj,k (F ))Zk Ft  dt. 2 0 1−t j=1

(2.51)

k=1

The next subsection focusses on general bounds based on the estimates (2.47)–(2.51).

2.4

A general bound

The following statement provides the announced multidimensional generalisation of Theorem 1.7. In particular, the main estimate (2.55) provides an explicit quantitative counterpart to the heuristic fact that, if there exists a Stein matrix τF such that kτF − CkH.S. is small (with k · kH.S. denoting the usual Hilbert-Schmidt norm), then the distribution of F and Z ∼ Nd (0, C) must be close. By virtue of Theorem 2.10, the proximity of the two distributions is expressed in terms of the relative entropy D(F kZ). Theorem 2.11 Let F be a centered and square integrable random vector with density f : Rd → [0, ∞), let C > 0 be its covariance matrix, and assume that τF is a Stein matrix for F . Let Z ∼ Nd (0, C) be a Gaussian random vector independent of F . If, for some κ, η > 0 and α ∈ (0, 21 , one has   E |τFj,k (F )|η+2 < ∞,

(2.52)

j, k = 1, . . . , d,

as well as

TV

√  √ tF + 1 − t x, F 6 κ(1 + kxk1 )(1 − t)α ,

x ∈ Rd , t ∈ [1/2, 1] ,

(2.53)

2 

(2.54)

then, provided

∆ := E[kC −

τF k2H.S. ]

=

d X

j,k=1

E



C(j, k) −

τFj,k (F )

6 2−

η+1 αη

,

one has D(F kZ) 6

d(η + 1)λmax (C −1 ) max E[(Zel )2 ] × ∆ | log ∆| 16l6d 2αη Cd,η,τ (η + 1)λmax (C −1 ) + ∆, 2αη

16

(2.55)

where Cd,η,τ := 2d2

r 2κ E[kZk1(1 + kZk1 )] + max C(j, j) j

×

d  X

j,k=1

!

1

el ]η+2 ] η+1 max E[|Z

16l6d

1   η+1 |C(j, k)|η+2 + E |τFj,k (F )|η+2 ,

(2.56)

e = C −1 Z. and (as above) Z

Proof. Take g ∈ Cc1 such that kgk∞ 6 1. Then, by independence of Ze and F and using (2.44), one has, for any j = 1, . . . , d, " d # X j,k E (C(j, k) − τF (F ))Zek g(Ft ) k=1

=

d X

k,l=1

C(j, k)C −1 (k, l) E [Zl g(Ft )] −

= E [Zj g(Ft )] −

√ 1−t

d X

k,l,m=1

d X

k,l=1

h i C −1 (k, l)E τFj,k (F )Zl g(Ft )

h i C −1 (k, l)C(l, m)E τFj,k (F )∂m g(Ft )

d h i X √ = E [Zj (g(Ft ) − g(F ))] − 1 − t E τFj,k (F )∂k g(Ft )

= E [Zj (g(Ft ) − g(F ))] −

r

k=1

1−t E [Fj g(Ft )] . t

Using (2.53), we have |E [Zj (g(Ft ) − g(F ))]| Z −1 1 √ √ e− 2 hC x,xi = √ xj E[g( tF + 1 − t x) − g(F )] dx d Rd (2π) 2 det C Z −1 1 √ √ e− 2 hC x,xi dx 62 |xj |TV( tF + 1 − t x, F ) d√ (2π) 2 det C Rd Z −1 1 e− 2 hC x,xi √ 6 2κ(1 − t)α kxk1 (1 + kxk1 ) dx d (2π) 2 det C Rd = 2κ E[kZk1(1 + kZk1 )] (1 − t)α . As a result, due to Lemma 1.5 and since E|Fj | 6

q p E[Fj2 ] 6 maxj C(j, j), one obtains

" " d # # X j,k e max E E (C(j, k) − τF (F ))Zk Ft 16j6d k=1 ! r 6 2κ E[kZk1(1 + kZk1 )] + max C(j, j) (1 − t)α . j

17

Now, using among others the Hölder inequality and the convexity of the function x 7→ |x|η+2 , we have that  "  #2 d d X X E E (C(j, k) − τFj,k (F ))Zek Ft  j=1

k=1

 " η # η+1 d d X X E  E = (C(j, k) − τFj,k (F ))Zek Ft × j=1

k=1

 " d # η+2 η+1 X j,k  × E (C(j, k) − τF (F ))Zek Ft k=1

η " " d # # η+1 d X X E E × 6 (C(j, k) − τFj,k (F ))Zek Ft

j=1

k=1

 "  1 # η+2 η+1 d X × E  E (C(j, k) − τFj,k (F ))Zek Ft  k=1

6 Cd,η,τ (1 − t)

αη η+1

,

(2.57)

with Cd,η,τ given by (2.56). At this stage, we shall use the key identity (2.50). To properly control the right-hand-side of (2.48), we split the integral in two parts: for every 0 < ε 6 21 , 2 D(F kZ) λmax (C −1 )  !2  Z d d X X E (C(j, k) − τFj,k (F ))Zek  6 j=1

+

0

k=1

Z

1

1−ε

t 1−t

d X j=1

el )2 ] 6 d max E[(Z 16l6d

+ Cd,η,τ

Z

1

1−ε

 "

E E d X

E

j,k=1

d X

k=1



t dt 1−t

 #2 (C(j, k) − τFj,k (F ))Zek Ft  dt

C(j, k) −

2 

τFj,k (F )

| log ε|

αη

(1 − t) η+1 −1 dt

el )2 ] 6 d max E[(Z 16l6d

1−ε

d X

j,k=1

E



2  αη η + 1 η+1 C(j, k) − τFj,k (F ) | log ε| + Cd,η,τ ε , αη

R 1−ε t dt R 1 (1−u)du the second inequality being a consequence of (2.57) as well as 0 6 1−t = ε u R 1 du η+1 = − log ε. Since (2.54) holds true, one can optimize over ε and choose ε = ∆ αη , ε u which leads to the desired estimate (2.55). Before applying the content of Theorem 2.11 to entropic CLTs on a Gaussian space (see Section 4), we devote the forthcoming Section 3 to some preliminary results about Gaussian stochastic analysis.

18

3

Gaussian spaces and variational calculus

As announced, we shall now focus on random variables that can be written as functionals of a countable collection of independent and identically distributed Gaussian N (0, 1) random variables, that we shall denote by n o G = Gi : i > 1 . (3.58)

Note that our description of G is equivalent to saying that G is a Gaussian sequence such that E[Gi ] = 0 for every i and E[Gi Gj ] = 1{i=j} . We will write L2 (σ(G)) := L2 (P, σ(G)) to indicate the class of square-integrable (real-valued) random variables that are measurable with respect to the σ-field generated by G. The reader is referred e.g. to [33, 38] for any unexplained definition or result appearing in the subsequent subsections.

3.1

Wiener chaos

We will now briefly introduce the notion of Wiener chaos. Definition 3.1 (Hermite polynomials and Wiener chaos) 1. The sequence of Hermite polynomials {Hm : m > 0} is defined as follows: H0 = 1, and, for m > 1, Hm (x) = (−1)m e

x2 2

dm − x2 e 2, dxm

x ∈ R.

It is a standard result that the sequence {(m!)−1/2 Hm : m > 0} is an orthonormal basis of L2 (R, φ1 (x)dx). 2. A multi-index α = {αi : i > 1} is a sequence of nonnegative integers such that αi 6= 0 only for a finite number of indices i. We use the symbol P Λ in order to indicate the collection of all multi-indices, and use the notation |α| = i>1 αi , for every α ∈ Λ. 3. For every integer q > 0, the qth Wiener chaos associated with G is defined as follows: C0 = R, and, for q > 1, Cq is the L2 (P )-closed vector space generated by random variables of the type Φ(α) =

∞ Y

α ∈ Λ and |α| = q.

Hαi (Gi ),

i=1

(3.59)

It is easily seen that two random variables belonging to Wiener chaoses of different orders are orthogonal in L2 (σ(G)). Moreover, since L linear combinations of polynomials are dense in L2 (σ(G)), one has that L2 (σ(G)) = q>0 Cq , that is, any square-integrable functional of G can be written as an infinite sum, converging in L2 and such that the qth summand is an element of Cq . This orthogonal decomposition of L2 (σ(G)) is customarily called the Wiener-Itô chaotic decomposition of L2 (σ(G)). It is often convenient to encode random variables in the spaces Cq by means of increasing tensor powers of Hilbert spaces. To do this, introduce an (arbitrary) separable real Hilbert space H having an orthonormal basis {ei : i > 1}. For q > 2, denote by H⊗q (resp. H⊙q ) the qth tensor power (resp. symmetric tensor power) of H; write moreover H⊗0 = H⊙0 = R and H⊗1 = H⊙1 = H. With every multi-index α ∈ Λ, we associate the tensor e(α) ∈ H⊗|α| given by ⊗αi1

e(α) = ei1

⊗αik

⊗ · · · ⊗ eik

,

19

where {αi1 , ..., αik } are the non-zero elements of α. We also denote by e˜(α) ∈ H⊙|α| the canonical symmetrization of e(α). It is well-known that, for every q > 2, the set {˜ e(α) : α ∈ Λ, |α| = q} is a complete orthogonal system in H⊙q . For every q > 1 and P every h ∈ H⊙q of the form h = α∈Λ, |α|=q cα e˜(α), we define X

Iq (h) =

(3.60)

cα Φ(α),

α∈Λ, |α|=q

where Φ(α) is given in (3.59). Another classical result (see e.g. [33, 38]) is that, for every q > 1, the mapping Iq : H⊙q → Cq (as defined in (3.60)) is onto, and defines an isomorphism between Cq and the Hilbert space H⊙q , endowed with the modified norm √ q!k · kH⊗q . This means that, for every h, h′ ∈ H⊙q , E[Iq (h)Iq (h′ )] = q!hh, h′ iH⊗q . Finally, we observe that one can reexpress the Wiener-Itô chaotic decomposition of L2 (σ(G)) as follows: every F ∈ L2 (σ(G)) admits a unique decomposition of the type F =

∞ X

(3.61)

Iq (hq ),

q=0

where the series converges in L2 (G), the symmetric kernels hq ∈ H⊙q , q > 1, are uniquely determined by F , and I0 (h0 ) := E[F ]. This also implies that E[F 2 ] = E[F ]2 +

∞ X q=1

3.2

q!khq k2H⊗q .

The language of Malliavin calculus: chaoses as eigenspaces

We let the previous notation and assumptions prevail: in particular, we shall fix for the rest of the section a real separable Hilbert space H, and represent the elements of the qth Wiener chaos of G in the form (3.60). In addition to the previously introduced notation, L2 (H) := L2 (σ(G); H) indicates the space of all H-valued random  elements u, that are measurable with respect to σ(G) and verify the relation E kuk2H < ∞. Note that, as it is customary, H is endowed with the Borel σ-field associated with the distance on H given by (h1 , h2 ) 7→ kh1 − h2 kH . Let S be the set of all smooth cylindrical random variables of the form  F = g I1 (h1 ), . . . , I1 (hn ) , where n > 1, g : Rn → R is a smooth function with compact support and hi ∈ H. The Malliavin derivative of F (with respect to G) is the element of L2 (H) defined as DF =

n X i=1

 ∂i g I1 (h1 ), . . . , I1 (hn ) hi .

By iteration, one can define the mth derivative Dm F (which is an element of L2 (H⊗m )) for every m > 2. For m > 1, Dm,2 denotes the closure of S with respect to the norm k · km,2 , defined by the relation kF k2m,2

m X   E kDi F k2H⊗i . = E[F ] + 2

i=1

m,2 It P is a standard result that a random variable F as in (3.61) if and only Lq is in D m,2 m 2 C ∈ D for every < ∞, from which one deduces that if q q!kf k q H⊗q k=0 k q>1 q, m > 1. Also, DI1 (h) = h for every h ∈ H. The Malliavin derivative D satisfies the

20

following chain rule: if g : Rn → R is continuously differentiable and has bounded partial derivatives, and if (F1 , ..., Fn ) is a vector of elements of D1,2 , then g(F1 , . . . , Fn ) ∈ D1,2 and Dg(F1 , . . . , Fn ) =

n X

∂i g(F1 , . . . , Fn )DFi .

(3.62)

i=1

In what follows, we denote by δ the adjoint of the operator D, also called the divergence operator. A random element u ∈ L2 (H) belongs to the domain of δ, written Dom δ, if and only if it satisfies p |E [hDF, uiH ] | 6 cu E[F 2 ] for any F ∈ S ,

for some constant cu depending only on u. If u ∈ Dom δ, then the random variable δ(u) is defined by the duality relationship (customarily called “integration by parts formula”): (3.63)

E [F δ(u)] = E [hDF, uiH ] ,

which holds for every F ∈ D1,2 . A crucial object for our discussion is the OrnsteinUhlenbeck semigroup associated with G. Definition 3.2 (Ornstein-Uhlenbeck semigroup) Let G′ be an independent copy of G, and denote by E ′ the mathematical expectation with respect to G′ . For every t > 0 the operator Pt : L2 (σ(G)) → L2 (σ(G)) is defined as follows: for every F (G) ∈ L2 (σ(G)), p Pt F (G) = E ′ [F (e−t G + 1 − e−2t G′ )],

in such a way that P0 F (G) = F (G) and P∞ F (G) = E[F (G)]. The collection {Pt : t > 0} verifies the semigroup property Pt Ps = Pt+s and is called the Ornstein-Uhlenbeck semigroup associated with G. The properties of the semigroup {Pt : t > 0} that are relevant for our study are gathered together in the next statement. Proposition 3.3 1. For every t > 0, the eigenspaces of the operator Pt coincide with the Wiener chaoses Cq , q = 0, 1, ..., the eigenvalue of Cq being given by the positive constant e−qt . 2. The infinitesimal generator of {Pt : t > 0}, denoted by L, acts on square-integrable random variables as follows: a random variable F with the form (3.61) is in the P domain of L, written Dom L, if and only if q>1 qIq (hq ) is convergent in L2 (σ(G)), and in this case X LF = − qIq (hq ). q>1

In particular, each Wiener chaos Cq is an eigenspace of L, with eigenvalue equal to −q.

3. The operator L verifies the following properties: (i) Dom L = D2,2 , and (ii) a random variable F is in DomL if and only if F ∈ Dom δD (i.e. F ∈ D1,2 and DF ∈ Domδ), and in this case one has that δ(DF ) = −LF .

In view of the previous statement, it is immediate to describe the pseudo-inverse of P L, denoted by L−1 , as follows: for every mean zero random variable F = q>1 Iq (hq ) of L2 (σ(G)), one has that L−1 F =

X q>1

1 − Iq (hq ). q

21

It is clear that L−1 is an operator with values in D2,2 . For future reference, we record the following estimate involving random variables living in a finite sum of Wiener chaoses: it is a direct consequence of the hypercontractivity of the Ornstein-Uhlenbeck semigroup – see e.g. in [33, Theorem 2.7.2 and Theorem 2.8.12]. Proposition 3.4 (Hypercontractivity) Let q > 1 and 1 6Ls < t < ∞. Then, there exists a finite constant c(s, t, q) < ∞ such that, for every F ∈ qk=0 Ck , 1

1

E[|F |t ] t 6 c(s, t, q) E[|F |s ] s .

(3.64)

In particular, all Lp norms, p > 1, are equivalent on a finite sum of Wiener chaoses. Since we will systematically work on a fixed sum of Wiener chaoses, we will not need to specify the explicit value of the constant c(s, t, q). See again [33], and the references therein, for more details. Example 3.5 Let W = {Wt : t > 0} be a standard Brownian motion, letR{ej : j > 1} ∞ be an orthonormal basis of L2 (R+ , B(R+ ), dt) =: L2 (R+ ), and define Gj = 0 ej (t)dWt . Then, σ(W ) = σ(G), where G = {Gj : j > 1}. In this case, the natural choice of a Hilbert space is H = L2 (R+ ) and one has the following explicit characterisation of the Wiener chaoses associated with W : for every q > 1, one has that F ∈ Cq if and only if there exists a symmetric kernel f ∈ L2 (Rq+ ) such that F = q!

Z



Z

t1

···

0

0

Z

tq−1

f (t1 , ..., tq )dWtq · · · dWt1 := q!Jq (f ).

0

The random variable Jq (f ) is called the multiple Wiener-Itô integral of order q, of f with respect to WP . It is a well-known fact that, if F ∈ D1,2 admits the chaotic expansion F = E[F ] + q>1 Jq (fq ), then DF equals the random function t 7→ =

X q>1

X q>1

qJq−1 (fq (t, ·))

q!

Z

0



Z

0

t1

···

Z

0

tq−2

f (t, t1 ..., tq−1 )dWtq−1 · · · dWt1 ,

t ∈ R+ ,

which is a well-defined element of L2 (H).

3.3

The role of Malliavin and Stein matrices

Given a vector F = (F1 , ..., Fd ) whose elements are in D1,2 , we define the Malliavin matrix Γ(F ) = {Γi,j (F ) : i, j = 1, ..., d} as Γi,j (F ) = hDFi , DFj iH . The following statement is taken from [31], and provides a simple necessary and sufficient condition for a random vector living in a finite sum of Wiener chaoses to have a density. Lq Theorem 3.6 Fix d, q > 1 and let F = (F1 , ..., Fd ) be such that Fi ∈ i=0 Ci , i = 1, ..., d. Then, the distribution of F admits a density f with respect to the Lebesgue measure on Rd if and only if E[det Γ(F )] > 0. Moreover, if this condition is verified one has necessarily that Ent(F ) < ∞.

22

Proof. The equivalence between the existence of a density and the fact that E[det Γ(F )] > 0 is shown in [31, TheoremS3.1]. Moreover, by [31, Theorem 4.2] we have in this case that the density f satisfies f ∈ p>1 Lp (Rd ). Relying on the inequality log u 6 n(u1/n − 1),

u > 0,

n∈N

(which is a direct consequence of log u 6 u − 1 for any u > 0), one has that  Z Z 1 1+ n dx − 1 . f (x) log f (x)dx 6 n f (x) Rd

Rd

1

Hence, by choosing n large enough so that f ∈ L1+ n (Rd ), one obtains that Ent(F ) < ∞. To conclude, we present a result providing an explicit representation for Stein matrices associated with random vectors in the domain of D. Proposition 3.7 (Representation of Stein matrices) Fix d > 1 and let F = (F1 , ..., Fd ) be a centered random vector whose elements are in D1,2 . Then, a Stein matrix for F (see Definition 2.7) is given by τFi,j (x) = E[h−DL−1 Fi , DFj iH F = x], i, j = 1, ..., d. Proof. Let g : Rd → R ∈ Cc1 . For every i = 1, ..., d, one has that Fi = −δDL−1 Fi . As a consequence, using (in order) (3.63) and (3.62), one infers that E[Fi g(F )] = E[h−DL−1 Fi , Dg(F )iH ] =

d X j=1

E[h−DL−1 Fi , DFj iH ∂j g(F )].

Taking conditional expectations yields the desired conclusion. The next section contains the statements and proofs of our main bounds on a Gaussian space.

4 4.1

Entropic fourth moment bounds on a Gaussian space Main results

We let the framework of the previous section prevail: in particular, G is a sequence of i.i.d. N (0, 1) random variables, and the sequence of Wiener chaoses {Cq : q > 1} associated with G is encoded by means of increasing tensor powers of a fixed real separable Hilbert space H. We will use the following notation: given a sequence of centered and squareintegrable d-dimensional random vectors Fn = (F1,n , ..., Fd,n ) ∈ D1,2 with covariance matrix Cn , n > 1, we let ∆n := E[kFn k4 ] − E[kZn k4 ],

(4.65)

where k · k is the Euclidian norm on Rd and Zn ∼ Nd (0, Cn ).

Our main result is the following entropic central limit theorem for sequences of chaotic random variables.

Theorem 4.1 (Entropic CLTs on Wiener chaos) Let d ≥ 1 and q_1, ..., q_d ≥ 1 be fixed integers. Consider vectors
\[
F_n = (F_{1,n}, \ldots, F_{d,n}) = \big(I_{q_1}(h_{1,n}), \ldots, I_{q_d}(h_{d,n})\big), \qquad n \geq 1,
\]
with h_{i,n} ∈ H^{⊙q_i}. Let C_n denote the covariance matrix of F_n and let Z_n ∼ N_d(0, C_n) be a centered Gaussian random vector in ℝ^d with the same covariance matrix as F_n. Assume that C_n → C > 0 and ∆_n → 0, as n → ∞. Then, the random vector F_n admits a density for n large enough, and
\[
D(F_n \,\|\, Z_n) = O(1)\, \Delta_n\, |\log \Delta_n| \qquad \text{as } n \to \infty, \tag{4.66}
\]

where O(1) indicates a bounded numerical sequence depending on d, q_1, ..., q_d, as well as on the sequence {F_n}.

One immediate consequence of the previous statement is the following characterisation of entropic CLTs on a finite sum of Wiener chaoses.

Corollary 4.2 Let the sequence {F_n} be as in the statement of Theorem 4.1, and assume that C_n → C > 0. Then, the following three assertions are equivalent, as n → ∞:

(i) ∆_n → 0;

(ii) F_n converges in distribution to Z ∼ N_d(0, C);

(iii) D(F_n ‖ Z_n) → 0.
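The equivalence of (i) and (ii) is easy to visualise numerically in a one-dimensional situation. The following Python sketch is purely illustrative and not part of the argument: the toy sequence F_n = n^{-1/2} Σ_{i=1}^{n} X_i X_{i+1}, with X_1, X_2, ... i.i.d. N(0,1), is our own choice of a unit-variance element of the second Wiener chaos, and the script only estimates the one-dimensional quantity ∆_n = E[F_n⁴] − 3 by Monte Carlo (it does not attempt to estimate the relative entropy itself).

import numpy as np

rng = np.random.default_rng(0)

def fourth_cumulant_estimate(n, n_samples=200_000, batch=10_000):
    # Monte Carlo estimate of E[F_n^4] - 3 for the toy second-chaos element
    # F_n = n^{-1/2} * sum_{i=1}^{n} X_i X_{i+1}, with X_i i.i.d. N(0,1).
    # Here E[F_n] = 0 and E[F_n^2] = 1, so E[F_n^4] - 3 is the fourth cumulant,
    # i.e. the one-dimensional version of Delta_n.
    total = 0.0
    done = 0
    while done < n_samples:
        m = min(batch, n_samples - done)
        X = rng.standard_normal((m, n + 1))
        F = (X[:, :-1] * X[:, 1:]).sum(axis=1) / np.sqrt(n)
        total += float(np.sum(F ** 4))
        done += m
    return total / n_samples - 3.0

for n in (10, 100, 1000):
    print(f"n = {n:4d}   estimated E[F_n^4] - 3 = {fourth_cumulant_estimate(n):+.3f}")

For this particular sequence the fourth cumulant is of order 1/n, so the printed values decrease towards zero as n grows, up to Monte Carlo error (which, for the last row, is of the same order as the quantity being estimated).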

The proofs of the previous results are based on two technical lemmas that are the object of the next section.

4.2 Estimates based on the Carbery-Wright inequalities

We start with a generalisation of the inequality proved by Carbery and Wright in [11]. Recall that, in a form that is adapted to our framework, the main finding of [11] reads as follows: there is a universal constant c > 0 such that, for any polynomial Q : ℝ^n → ℝ of degree at most d and any α > 0, we have
\[
E\big[Q(X_1, \ldots, X_n)^2\big]^{\frac{1}{2d}}\; P\big(|Q(X_1, \ldots, X_n)| \leq \alpha\big) \;\leq\; c\, d\, \alpha^{\frac{1}{d}}, \tag{4.67}
\]
where X_1, ..., X_n are independent random variables with common distribution N(0,1).
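As an elementary consistency check (not taken from [11], but immediate), consider the case d = 1 and Q(x) = σx with σ > 0: since the standard Gaussian density is bounded by (2π)^{-1/2},
\[
E\big[Q(X_1)^2\big]^{\frac{1}{2}}\; P\big(|Q(X_1)| \leq \alpha\big) = \sigma\, P\big(|X_1| \leq \alpha/\sigma\big) \;\leq\; \sigma \cdot \sqrt{\tfrac{2}{\pi}}\, \frac{\alpha}{\sigma} = \sqrt{\tfrac{2}{\pi}}\, \alpha,
\]
so the left-hand side of (4.67) is bounded by a universal multiple of α, uniformly in σ: the normalisation by E[Q²]^{1/(2d)} is precisely what makes the inequality scale-invariant.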

Lemma 4.3 Fix d, q_1, ..., q_d ≥ 1, and let F = (F_1, ..., F_d) be a random vector such that F_i = I_{q_i}(h_i) with h_i ∈ H^{⊙q_i}. Let Γ = Γ(F) denote the Malliavin matrix of F, and assume that E[det Γ] > 0 (which is equivalent to assuming that F has a density, by Theorem 3.6). Set N = 2d(q − 1) with q = max_{1≤i≤d} q_i. Then, there exists a universal constant c > 0 such that
\[
P(\det \Gamma \leq \lambda) \;\leq\; c\, N\, \lambda^{1/N}\, \big(E[\det \Gamma]\big)^{-1/N}. \tag{4.68}
\]

Proof. Let {e_i : i ≥ 1} be an orthonormal basis of H. Since det Γ is a polynomial of degree d in the entries of Γ, and because each entry of Γ belongs to ⊕_{k=0}^{2q−2} C_k by the product formula for multiple integrals (see, e.g., [33, Chapter 2]), we have, by iterating the product formula, that det Γ ∈ ⊕_{k=0}^{N} C_k. Thus, there exists a sequence {Q_n, n ≥ 1} of real-valued polynomials of degree at most N such that the random variables Q_n(I_1(e_1), ..., I_1(e_n)) converge in L² and almost surely to det Γ as n tends to infinity (see [36, proof of Theorem 3.1] for an explicit construction). Assume now that E[det Γ] > 0. Then, for n sufficiently large, E[|Q_n(I_1(e_1), ..., I_1(e_n))|] > 0. We deduce from the estimate (4.67) the existence of a universal constant c > 0 such that, for any n ≥ 1,
\[
P\big(|Q_n(I_1(e_1), \ldots, I_1(e_n))| \leq \lambda\big) \;\leq\; c\, N\, \lambda^{1/N}\, \big(E\big[Q_n(I_1(e_1), \ldots, I_1(e_n))^2\big]\big)^{-1/(2N)}.
\]
Using the property
\[
E\big[Q_n(I_1(e_1), \ldots, I_1(e_n))^2\big] \;\geq\; \big(E\big[|Q_n(I_1(e_1), \ldots, I_1(e_n))|\big]\big)^2,
\]


we obtain
\[
P\big(|Q_n(I_1(e_1), \ldots, I_1(e_n))| \leq \lambda\big) \;\leq\; c\, N\, \lambda^{1/N}\, \big(E\big[|Q_n(I_1(e_1), \ldots, I_1(e_n))|\big]\big)^{-1/N},
\]
from which (4.68) follows by letting n tend to infinity.

The next statement, whose proof follows the same lines as that of [31, Theorem 4.1], provides an upper bound on the total variation distance between the distribution of F and that of √t F + √(1−t) x, for every x ∈ ℝ^d and every t ∈ [1/2, 1]. Although Lemma 4.4 is, in principle, very close to [31, Theorem 4.1], we detail its proof because, here, we need to keep track of the way the constants behave with respect to x.

Lemma 4.4 Fix d, q_1, ..., q_d ≥ 1, and let F = (F_1, ..., F_d) be a random vector as in Lemma 4.3. Set q = max_{1≤i≤d} q_i. Let C be the covariance matrix of F, let Γ = Γ(F) denote the Malliavin matrix of F, and assume that β := E[det Γ] > 0. Then, there exists a constant c_{q,d,‖C‖_{H.S.}} > 0 (depending only on q, d and ‖C‖_{H.S.}, with a continuous dependence on the last parameter) such that, for any x ∈ ℝ^d and t ∈ [1/2, 1],
\[
{\rm TV}\big(\sqrt{t}\, F + \sqrt{1-t}\, x,\; F\big) \;\leq\; c_{q,d,\|C\|_{H.S.}}\, (\beta \wedge 1)^{-\frac{1}{N+1}}\, (1 + \|x\|_1)\, (1-t)^{\frac{1}{2(2N+4)(d+1)+2}}. \tag{4.69}
\]
Here, N = 2d(q − 1).

Proof. The proof is divided into several steps. In what follows, we fix t ∈ [1/2, 1] and x ∈ ℝ^d, and we use the convention that c_{·} denotes a constant in (0, ∞) that only depends on the arguments inside the bracket and whose value is allowed to change from one line to another.

Step 1. One has that
\[
E\big[\|F\|_\infty^2\big] \;\leq\; c_d\, E\big[\|F\|_2^2\big] \;\leq\; c_d \sqrt{\sum_{i,j=1}^{d} C(i,j)^2} \;\leq\; c_d\, \|C\|_{H.S.},
\]
so that E[‖F‖_∞] ≤ c_d √(‖C‖_{H.S.}) = c_{d,‖C‖_{H.S.}}. Let g : ℝ^d → ℝ be a (smooth) test function bounded by 1. We can write, for any M ≥ 1,
\[
\begin{aligned}
&\big|E\big[g(\sqrt{t}\, F + \sqrt{1-t}\, x)\big] - E[g(F)]\big| \\
&\quad\leq \big|E\big[(g\, {\bf 1}_{[-M/2, M/2]^d})(\sqrt{t}\, F + \sqrt{1-t}\, x)\big] - E\big[(g\, {\bf 1}_{[-M/2, M/2]^d})(F)\big]\big| \\
&\qquad + P\big(\|\sqrt{t}\, F + \sqrt{1-t}\, x\|_\infty > M/2\big) + P\big(\|F\|_\infty > M/2\big) \\
&\quad\leq \sup_{\substack{\|\phi\|_\infty \leq 1 \\ {\rm supp}\,\phi \subset [-M,M]^d}} \big|E\big[\phi(\sqrt{t}\, F + \sqrt{1-t}\, x)\big] - E[\phi(F)]\big| + \frac{2}{M}\Big( E\big[\|\sqrt{t}\, F + \sqrt{1-t}\, x\|_\infty\big] + E\big[\|F\|_\infty\big] \Big) \\
&\quad\leq \sup_{\substack{\|\phi\|_\infty \leq 1 \\ {\rm supp}\,\phi \subset [-M,M]^d}} \big|E\big[\phi(\sqrt{t}\, F + \sqrt{1-t}\, x)\big] - E[\phi(F)]\big| + \frac{c_{d,\|C\|_{H.S.}}}{M}\,(1 + \|x\|_\infty).
\end{aligned} \tag{4.70}
\]

Step 2. Let φ : ℝ^d → ℝ be C^∞ with compact support in [−M, M]^d and satisfying ‖φ‖_∞ ≤ 1. Let 0 < α ≤ 1 and let ρ : ℝ^d → ℝ₊ be in C_c^∞ and satisfy ∫_{ℝ^d} ρ(x) dx = 1. Set ρ_α(x) = α^{−d} ρ(x/α). By [36, formula (3.26)], we have that φ ∗ ρ_α is Lipschitz continuous with constant 1/α. We can thus write
\[
\begin{aligned}
&\big|E\big[\phi(\sqrt{t}\, F + \sqrt{1-t}\, x)\big] - E[\phi(F)]\big| \\
&\quad\leq E\Big[\big|\phi * \rho_\alpha(\sqrt{t}\, F + \sqrt{1-t}\, x) - \phi * \rho_\alpha(F)\big|\Big] + \big|E\big[\phi(F) - \phi * \rho_\alpha(F)\big]\big| \\
&\qquad + E\Big[\big|\phi(\sqrt{t}\, F + \sqrt{1-t}\, x) - \phi * \rho_\alpha(\sqrt{t}\, F + \sqrt{1-t}\, x)\big|\Big] \\
&\quad\leq \frac{\sqrt{1-t}}{\alpha}\Big( E\big[\|F\|_\infty\big] + \|x\|_\infty \Big) + \big|E\big[\phi(F) - \phi * \rho_\alpha(F)\big]\big| + E\Big[\big|\phi(\sqrt{t}\, F + \sqrt{1-t}\, x) - \phi * \rho_\alpha(\sqrt{t}\, F + \sqrt{1-t}\, x)\big|\Big] \\
&\quad\leq c_{d,\|C\|_{H.S.}}\, \frac{\sqrt{1-t}}{\alpha}\, (1 + \|x\|_1) + \big|E\big[\phi(F) - \phi * \rho_\alpha(F)\big]\big| + E\Big[\big|\phi(\sqrt{t}\, F + \sqrt{1-t}\, x) - \phi * \rho_\alpha(\sqrt{t}\, F + \sqrt{1-t}\, x)\big|\Big].
\end{aligned} \tag{4.71}
\]

In order to estimate the two last terms in (4.71), we decompose the expectation into two parts using the identity
\[
1 = \frac{\varepsilon}{t^d \det \Gamma + \varepsilon} + \frac{t^d \det \Gamma}{t^d \det \Gamma + \varepsilon}, \qquad \varepsilon > 0.
\]

Step 3. For all ε, λ > 0, and using (4.68),
\[
E\left[\frac{\varepsilon}{t^d \det \Gamma + \varepsilon}\right] \;\leq\; E\left[\frac{\varepsilon}{t^d \det \Gamma + \varepsilon}\, {\bf 1}_{\{\det \Gamma > \lambda\}}\right] + c\, N \left(\frac{\lambda}{E[\det \Gamma]}\right)^{1/N} \;\leq\; \frac{\varepsilon}{t^d \lambda} + c\, N \left(\frac{\lambda}{\beta}\right)^{1/N}.
\]
Choosing λ = ε^{N/(N+1)} β^{1/(N+1)} yields
\[
E\left[\frac{\varepsilon}{t^d \det \Gamma + \varepsilon}\right] \;\leq\; (t^{-d} + c\, N) \left(\frac{\varepsilon}{\beta}\right)^{\frac{1}{N+1}} \;\leq\; c_{q,d} \left(\frac{\varepsilon}{\beta}\right)^{\frac{1}{N+1}}
\]
(recall that t ≥ 1/2).
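Indeed, with this choice of λ the two terms above are balanced; we spell out the elementary computation behind the previous display:
\[
\frac{\varepsilon}{t^d \lambda} = t^{-d} \left(\frac{\varepsilon}{\beta}\right)^{\frac{1}{N+1}}, \qquad \frac{\lambda}{\beta} = \left(\frac{\varepsilon}{\beta}\right)^{\frac{N}{N+1}} \;\Longrightarrow\; c\, N \left(\frac{\lambda}{\beta}\right)^{\frac{1}{N}} = c\, N \left(\frac{\varepsilon}{\beta}\right)^{\frac{1}{N+1}},
\]
which is how the factor (t^{-d} + cN) arises.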

As a consequence,
\[
\begin{aligned}
&E\Big[\big|\phi(\sqrt{t}\, F + \sqrt{1-t}\, x) - \phi * \rho_\alpha(\sqrt{t}\, F + \sqrt{1-t}\, x)\big|\Big] \\
&\quad= E\left[\big|\phi(\sqrt{t}\, F + \sqrt{1-t}\, x) - \phi * \rho_\alpha(\sqrt{t}\, F + \sqrt{1-t}\, x)\big| \times \left( \frac{\varepsilon}{t^d \det \Gamma + \varepsilon} + \frac{t^d \det \Gamma}{t^d \det \Gamma + \varepsilon} \right)\right] \\
&\quad\leq c_{q,d} \left(\frac{\varepsilon}{\beta}\right)^{\frac{1}{N+1}} + E\left[\big|\phi(\sqrt{t}\, F + \sqrt{1-t}\, x) - \phi * \rho_\alpha(\sqrt{t}\, F + \sqrt{1-t}\, x)\big|\, \frac{t^d \det \Gamma}{t^d \det \Gamma + \varepsilon}\right].
\end{aligned} \tag{4.72}
\]

Step 4. In this step we recall from [31, page 11] the following integration by parts formula (4.73). Let h : ℝ^d → ℝ be a function in C^∞ with compact support, and consider a random variable W ∈ D^∞. Consider the Poisson kernel Q_d in ℝ^d, defined as the solution to the equation ∆Q_d = δ_0. We know that Q_2(x) = c_2 log|x| and Q_d(x) = c_d |x|^{2−d} for d ≠ 2. We have the following identity:
\[
E\Big[W \det \Gamma\; h(\sqrt{t}\, F + \sqrt{1-t}\, x)\Big] = \frac{1}{\sqrt{t}} \sum_{i=1}^{d} E\left[ A_i(W) \int_{\mathbb{R}^d} h(y)\, \partial_i Q_d(\sqrt{t}\, F + \sqrt{1-t}\, x - y)\, dy \right], \tag{4.73}
\]
where
\[
A_i(W) = - \sum_{a=1}^{d} \Big( \big\langle D\big(W\, ({\rm Adj}\,\Gamma)_{a,i}\big), DF_a \big\rangle_H + ({\rm Adj}\,\Gamma)_{a,i}\, W\, LF_a \Big),
\]

with Adj the usual adjugate matrix operator.

Step 5. Let us apply the identity (4.73) to the function h = φ − φ ∗ ρ_α and to the random variable W = W_ε = (t^d det Γ + ε)^{−1} ∈ D^∞; we obtain
\[
E\left[ (\phi - \phi * \rho_\alpha)(\sqrt{t}\, F + \sqrt{1-t}\, x)\, \frac{t^d \det \Gamma}{t^d \det \Gamma + \varepsilon} \right] = t^{d - \frac{1}{2}} \sum_{i=1}^{d} E\left[ A_i(W_\varepsilon) \int_{\mathbb{R}^d} (\phi - \phi * \rho_\alpha)(y)\, \partial_i Q_d(\sqrt{t}\, F + \sqrt{1-t}\, x - y)\, dy \right]. \tag{4.74}
\]
From the hypercontractivity property together with the equality
\[
A_i(W_\varepsilon) = \sum_{a=1}^{d} \left\{ - \frac{1}{t^d \det \Gamma + \varepsilon} \Big( \big\langle D({\rm Adj}\,\Gamma)_{a,i}, DF_a \big\rangle_H - ({\rm Adj}\,\Gamma)_{a,i}\, LF_a \Big) + \frac{1}{(t^d \det \Gamma + \varepsilon)^2}\, ({\rm Adj}\,\Gamma)_{a,i}\, \big\langle D(\det \Gamma), DF_a \big\rangle_H \right\},
\]
one immediately deduces the existence of c_{q,d,‖C‖_{H.S.}} > 0 such that E[A_i(W_ε)²] ≤ c_{q,d,‖C‖_{H.S.}} ε^{−4}. On the other hand, in [31, page 13] it is shown that there exists c_d > 0 such that, for any R > 0 and u ∈ ℝ^d,
\[
\left| \int_{\mathbb{R}^d} (\phi - \phi * \rho_\alpha)(y)\, \partial_i Q_d(u - y)\, dy \right| \;\leq\; c_d \Big( R + \alpha + \alpha R^{-d} (\|u\|_1 + M)^d \Big).
\]

Substituting this estimate into (4.74), and assuming that M ≥ 1, yields
\[
\left| E\left[ (\phi - \phi * \rho_\alpha)(\sqrt{t}\, F + \sqrt{1-t}\, x)\, \frac{t^d \det \Gamma}{t^d \det \Gamma + \varepsilon} \right] \right| \;\leq\; c_{q,d,\|C\|_{H.S.}}\, \varepsilon^{-2} \Big( R + \alpha + \alpha R^{-d} (M + \|x\|_1)^d \Big).
\]
Choosing R = α^{1/(d+1)} (M + ‖x‖₁)^{d/(d+1)} and assuming α ≤ 1, we obtain
\[
\left| E\left[ (\phi - \phi * \rho_\alpha)(\sqrt{t}\, F + \sqrt{1-t}\, x)\, \frac{t^d \det \Gamma}{t^d \det \Gamma + \varepsilon} \right] \right| \;\leq\; c_{q,d,\|C\|_{H.S.}}\, \varepsilon^{-2}\, \alpha^{\frac{1}{d+1}}\, (M + \|x\|_1)^{\frac{d}{d+1}}. \tag{4.75}
\]

It is worthwhile noting that the inequality (4.75) is valid for any t ∈ [1/2, 1], in particular for t = 1.

Step 6. From (4.71), (4.72) and (4.75) we obtain
\[
\big| E\big[\phi(\sqrt{t}\, F + \sqrt{1-t}\, x)\big] - E[\phi(F)] \big| \;\leq\; c_{d,\|C\|_{H.S.}}\, \frac{\sqrt{1-t}}{\alpha}\, (1 + \|x\|_1) + c_{q,d} \left(\frac{\varepsilon}{\beta}\right)^{\frac{1}{N+1}} + c_{q,d,\|C\|_{H.S.}}\, \varepsilon^{-2}\, \alpha^{\frac{1}{d+1}}\, (M + \|x\|_1)^{\frac{d}{d+1}}.
\]

By plugging this inequality into (4.70) we thus obtain that, for every M ≥ 1, ε > 0 and 0 < α ≤ 1,
\[
\begin{aligned}
{\rm TV}\big(\sqrt{t}\, F + \sqrt{1-t}\, x,\; F\big) &\leq c_{d,\|C\|_{H.S.}}\, \frac{\sqrt{1-t}}{\alpha}\, (1 + \|x\|_1) + c_{q,d} \left(\frac{\varepsilon}{\beta}\right)^{\frac{1}{N+1}} + c_{q,d,\|C\|_{H.S.}}\, \varepsilon^{-2}\, \alpha^{\frac{1}{d+1}}\, (M + \|x\|_1)^{\frac{d}{d+1}} + \frac{c_{d,\|C\|_{H.S.}}}{M}\, (1 + \|x\|_1) \\
&\leq c_{q,d,\|C\|_{H.S.}}\, (1 + \|x\|_1)\, (\beta \wedge 1)^{-\frac{1}{N+1}} \left\{ \frac{\sqrt{1-t}}{\alpha} + \varepsilon^{\frac{1}{N+1}} + \frac{\alpha^{\frac{1}{d+1}} M}{\varepsilon^{2}} + \frac{1}{M} \right\}.
\end{aligned} \tag{4.76}
\]
Choosing M = ε^{−1/(N+1)}, ε = α^{(N+1)/((2N+4)(d+1))} and α = (1 − t)^{(2N+4)(d+1)/(2((2N+4)(d+1)+1))}, one obtains the desired conclusion (4.69).
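For completeness, here is the elementary bookkeeping behind these choices (it is not written out above): with M = ε^{−1/(N+1)} one has 1/M = ε^{1/(N+1)} and α^{1/(d+1)} M / ε² = α^{1/(d+1)} ε^{−(2N+3)/(N+1)}; with ε = α^{(N+1)/((2N+4)(d+1))} both of these quantities equal α^{1/((2N+4)(d+1))}; and with α = (1−t)^{(2N+4)(d+1)/(2((2N+4)(d+1)+1))} one finally gets
\[
\frac{\sqrt{1-t}}{\alpha} = \alpha^{\frac{1}{(2N+4)(d+1)}} = (1-t)^{\frac{1}{2(2N+4)(d+1)+2}},
\]
so that all four terms inside the braces in (4.76) are of order (1−t)^{1/(2(2N+4)(d+1)+2)}. Note also that t ≥ 1/2 ensures 1 − t ≤ 1/2, so that 0 < α ≤ 1, 0 < ε ≤ 1 and M ≥ 1, as required.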

4.3 Proof of Theorem 4.1

In the proof of [37, Theorem 4.3], the following two facts have been shown:
\[
E\left[\left( C_n(j,k) - \frac{1}{q_j} \langle DF_{j,n}, DF_{k,n} \rangle_H \right)^2\right] \;\leq\; {\rm Cov}\big(F_{j,n}^2, F_{k,n}^2\big) - 2\, C_n(j,k)^2,
\]
\[
E\big[\|F_n\|^4\big] - E\big[\|Z_n\|^4\big] = \sum_{j,k=1}^{d} \Big( {\rm Cov}\big(F_{j,n}^2, F_{k,n}^2\big) - 2\, C_n(j,k)^2 \Big).
\]

As a consequence, one deduces that
\[
\sum_{j,k=1}^{d} E\left[\left( C_n(j,k) - \frac{1}{q_j} \langle DF_{j,n}, DF_{k,n} \rangle_H \right)^2\right] \;\leq\; \Delta_n.
\]

Using Proposition 3.7, together with the fact that F_{j,n} = I_{q_j}(h_{j,n}) implies −DL^{-1}F_{j,n} = q_j^{-1} DF_{j,n}, one infers immediately that
\[
\tau_{F_n}^{j,k}(x) := \frac{1}{q_j}\, E\big[\langle DF_{j,n}, DF_{k,n} \rangle_H \,\big|\, F_n = x\big], \qquad j, k = 1, \ldots, d, \tag{4.77}
\]
defines a Stein matrix for F_n, which moreover satisfies the relation
\[
\sum_{j,k=1}^{d} E\Big[\big( C_n(j,k) - \tau_{F_n}^{j,k}(F_n) \big)^2\Big] \;\leq\; \Delta_n. \tag{4.78}
\]
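For d = 1 (a specialisation that is not needed below, but which makes the link with the classical one-dimensional theory explicit), write F_n = I_q(h_n) and C_n = E[F_n²]. Then (4.77) reduces to τ_{F_n}(x) = q^{-1} E[‖DF_n‖²_H | F_n = x], and combining Jensen's inequality with the two facts recalled from [37, Theorem 4.3] gives
\[
E\Big[\big( C_n - \tau_{F_n}(F_n) \big)^2\Big] \;\leq\; E\Big[\big( C_n - \tfrac{1}{q} \|DF_n\|_H^2 \big)^2\Big] \;\leq\; E[F_n^4] - 3\, C_n^2 = \Delta_n,
\]
which is the familiar estimate at the heart of the one-dimensional fourth moment theorem.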

Now let Γ_n denote the Malliavin matrix of F_n. Thanks to [39, Lemma 6], we know that, for any i, j = 1, ..., d,
\[
\langle DF_{i,n}, DF_{j,n} \rangle_H \;\xrightarrow{\;L^2(\sigma(G))\;}\; \sqrt{q_i q_j}\; C(i,j) \qquad \text{as } n \to \infty. \tag{4.79}
\]

Since ⟨DF_{i,n}, DF_{j,n}⟩_H lives in a finite sum of chaoses (see e.g. [33, Chapter 5]) and is bounded in L²(σ(G)), we can again apply the hypercontractive estimate (3.64) to deduce that ⟨DF_{i,n}, DF_{j,n}⟩_H is actually bounded in L^p(σ(G)) for every p ≥ 1, so that the convergence in (4.79) takes place in every space L^p(σ(G)). As a consequence, E[det Γ_n] → det C × Π_{i=1}^d q_i =: γ > 0, and there exists n_0 large enough so that
\[
\inf_{n \geq n_0} E[\det \Gamma_n] > 0.
\]

We are now able to deduce from Lemma 4.4 the existence of two constants κ > 0 and α ∈ (0, 1/2] such that, for all x ∈ ℝ^d, t ∈ [1/2, 1] and n ≥ n_0,
\[
{\rm TV}\big(\sqrt{t}\, F_n + \sqrt{1-t}\, x,\; F_n\big) \;\leq\; \kappa\, (1 + \|x\|_1)\, (1-t)^{\alpha}.
\]

This means that relation (2.53) is satisfied uniformly in n. Concerning (2.52), again by hypercontractivity and using the representation (4.77), one has that, for all η > 0,
\[
\sup_{n \geq 1} E\Big[\big|\tau_{F_n}^{j,k}(F_n)\big|^{\eta+2}\Big] < \infty, \qquad j, k = 1, \ldots, d.
\]

Finally, since ∆n → 0 and because (4.78) holds true, the condition (2.54) is satisfied for n large enough. The proof of (4.66) is concluded by applying Theorem 2.11.

4.4 Proof of Corollary 4.2

In view of Theorem 4.1, one has only to prove that (ii) implies (i). This is an immediate consequence of the fact that the covariance C_n converges to C, and that the sequence {F_n} lives in a finite sum of Wiener chaoses. Indeed, by virtue of (3.64) one has that
\[
\sup_{n \geq 1} E\big[\|F_n\|^p\big] < \infty \qquad \text{for every } p \geq 1,
\]

yielding in particular that, if F_n converges in distribution to Z, then E[‖F_n‖⁴] → E[‖Z‖⁴] or, equivalently, ∆_n → 0. The proof is concluded.

Acknowledgement. We heartily thank Guillaume Poly for several crucial inputs that helped us achieve the proof of Theorem 4.1. We are grateful to Larry Goldstein and Oliver Johnson for useful discussions. Ivan Nourdin is partially supported by the ANR grant ANR-10-BLAN-0121.

References

[1] C. Ané, S. Blachère, D. Chafaï, P. Fougères, I. Gentil, F. Malrieu, C. Roberto, and G. Scheffer. Sur les inégalités de Sobolev logarithmiques, volume 10 of Panoramas et Synthèses. Société Mathématique de France, Paris, 2000. With a preface by Dominique Bakry and Michel Ledoux.

[2] S. Artstein, K. Ball, F. Barthe, and A. Naor. Solution of Shannon's problem on the monotonicity of entropy. Journal of the American Mathematical Society, 17(4):975–982, 2004.

[3] S. Artstein, K. M. Ball, F. Barthe, and A. Naor. On the rate of convergence in the entropic central limit theorem. Probab. Theory Related Fields, 129(3):381–390, 2004.

[4] D. Bakry and M. Émery. Diffusions hypercontractives. In Lecture Notes in Math., number 1123 in Séminaire de Probabilités XIX, pages 179–206. Springer, 1985.


[5] K. Ball, F. Barthe, and A. Naor. Entropy jumps in the presence of a spectral gap. Duke Math. J., 119(1):41–63, 2003.

[6] K. Ball and V. Nguyen. Entropy jumps for random vectors with log-concave density and spectral gap. Preprint, arXiv:1206.5098v3, 2012.

[7] A. D. Barbour, O. Johnson, I. Kontoyiannis, and M. Madiman. Compound Poisson approximation via information functionals. Electron. J. Probab., 15:1344–1368, 2010.

[8] A. R. Barron. Entropy and the central limit theorem. Ann. Probab., 14(1):336–342, 1986.

[9] S. G. Bobkov, G. P. Chistyakov, and F. Götze. Fisher information and the central limit theorem. Preprint, arXiv:1204.6650v1, 2012.

[10] L. D. Brown. A proof of the central limit theorem motivated by the Cramér-Rao inequality. In Statistics and probability: essays in honor of C. R. Rao, pages 141–148. North-Holland, Amsterdam, 1982.

[11] A. Carbery and J. Wright. Distributional and Lq norm inequalities for polynomials over convex bodies in Rn. Math. Res. Lett., 8(3):233–248, 2001.

[12] E. Carlen and A. Soffer. Entropy production by block variable summation and central limit theorems. Commun. Math. Phys., 140(2):339–371, 1991.

[13] S. Chatterjee. Fluctuations of eigenvalues and second order Poincaré inequalities. Probab. Theory Related Fields, 143(1-2):1–40, 2009.

[14] S. Chatterjee and E. Meckes. Multivariate normal approximation using exchangeable pairs. ALEA Lat. Am. J. Probab. Math. Stat., 4:257–283, 2008.

[15] L. H. Y. Chen, L. Goldstein, and Q.-M. Shao. Normal approximation by Stein's method. Probability and its Applications (New York). Springer, Heidelberg, 2011.

[16] I. Csiszár. Informationstheoretische Konvergenzbegriffe im Raum der Wahrscheinlichkeitsverteilungen. Magyar Tud. Akad. Mat. Kutató Int. Közl., 7:137–158, 1962.

[17] A. Deya, S. Noreddine, and I. Nourdin. Fourth moment theorem and q-Brownian chaos. Comm. Math. Phys., to appear, 2012.

[18] R. M. Dudley. Real analysis and probability. The Wadsworth & Brooks/Cole Mathematics Series. Wadsworth & Brooks/Cole Advanced Books & Software, Pacific Grove, CA, 1989.

[19] O. Johnson. Information theory and the central limit theorem. Imperial College Press, London, 2004.

[20] O. Johnson and A. Barron. Fisher information inequalities and the central limit theorem. Probab. Theory Related Fields, 129(3):391–409, 2004.

[21] O. Johnson and Y. Suhov. Entropy and random vectors. Journal of Statistical Physics, 104(1):145–165, 2001.

[22] T. Kemp, I. Nourdin, G. Peccati, and R. Speicher. Wigner chaos and the fourth moment. Ann. Probab., 40(4):1577–1635, 2012.

[23] S. Kullback. A lower bound for discrimination information in terms of variation. IEEE Trans. Info. Theory, 4, 1967.

[24] S. Kumar Kattumannil. On Stein's identity and its applications. Statist. Probab. Lett., 79(12):1444–1449, 2009.

[25] M. Ledoux. Chaos of a Markov operator and the fourth moment condition. Ann. Probab., 40(6):2439–2459, 2012.

[26] C. Ley and Y. Swan. Stein's density approach for discrete distributions with applications to information inequalities. Preprint, arXiv:1211.3668, 2012.


[27] C. Ley and Y. Swan. Stein's density approach and information inequalities. Electron. Comm. Probab., 18(7):1–14, 2013.

[28] J. V. Linnik. An information-theoretic proof of the central limit theorem with Lindeberg conditions. Theor. Probability Appl., 4:288–299, 1959.

[29] D. Marinucci and G. Peccati. Random fields on the sphere: Representation, limit theorems and cosmological applications, volume 389 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 2011.

[30] I. Nourdin. Selected Aspects of Fractional Brownian Motion. Springer-Verlag, 2012.

[31] I. Nourdin, D. Nualart, and G. Poly. Absolute continuity and convergence of densities for random vectors on Wiener chaos. Electron. J. Probab., 18(22):1–19, 2012.

[32] I. Nourdin and G. Peccati. Universal Gaussian fluctuations of non-Hermitian matrix ensembles: from weak convergence to almost sure CLTs. ALEA Lat. Am. J. Probab. Math. Stat., 7:341–375, 2010.

[33] I. Nourdin and G. Peccati. Normal approximations with Malliavin calculus: from Stein's method to universality. Cambridge Tracts in Mathematics. Cambridge University Press, 2012.

[34] I. Nourdin, G. Peccati, and G. Reinert. Invariance principles for homogeneous sums: universality of Gaussian Wiener chaos. Ann. Probab., 38(5):1947–1985, 2010.

[35] I. Nourdin, G. Peccati, and A. Réveillac. Multivariate normal approximation using Stein's method and Malliavin calculus. Ann. Inst. Henri Poincaré Probab. Stat., 46(1):45–58, 2010.

[36] I. Nourdin and G. Poly. Convergence in total variation on Wiener chaos. Stochastic Process. Appl., to appear, 2012.

[37] I. Nourdin and J. Rosiński. Asymptotic independence of multiple Wiener-Itô integrals and the resulting limit laws. Ann. Probab., to appear, 2013.

[38] D. Nualart. The Malliavin calculus and related topics. Probability and its Applications (New York). Springer-Verlag, Berlin, second edition, 2006.

[39] D. Nualart and S. Ortiz-Latorre. Central limit theorems for multiple stochastic integrals and Malliavin calculus. Stochastic Process. Appl., 118(4):614–628, 2008.

[40] D. Nualart and G. Peccati. Central limit theorems for sequences of multiple stochastic integrals. Ann. Probab., 33(1):177–193, 2005.

[41] S. Park, E. Serpedin, and K. Qaraqe. On the equivalence between Stein and de Bruijn identities. IEEE Trans. Info. Theory, 58(12):7045–7067, 2012.

[42] M. S. Pinsker. Information and information stability of random variables and processes. Translated and edited by Amiel Feinstein. Holden-Day Inc., San Francisco, Calif., 1964.

[43] G. Reinert and A. Roellin. Multivariate normal approximation with Stein's method of exchangeable pairs under a general linearity condition. Ann. Probab., 37(6):2150–2173, 2009.

[44] I. Sason. Entropy bounds for discrete random variables via coupling. Preprint, arXiv:1209.5259, 2012.

[45] I. Sason. An information-theoretic perspective of the Poisson approximation via the Chen-Stein method. Preprint, arXiv:1206.6811v3, 2012.

[46] R. Shimizu. On Fisher's amount of information for location family. In A Modern Course on Statistical Distributions in Scientific Work, pages 305–312. Springer, 1975.

[47] C. Stein. Approximate computation of expectations. Institute of Mathematical Statistics Lecture Notes—Monograph Series, 7. Institute of Mathematical Statistics, Hayward, CA, 1986.


[48] M. Talagrand. Transportation cost for Gaussian and other product measures. Geom. Funct. Anal., 6(3):587–600, 1996.

[49] F. G. Viens. Stein's lemma, Malliavin calculus, and tail bounds, with application to polymer fluctuation exponent. Stochastic Process. Appl., 119(10):3671–3698, 2009.
