HOMOGENIZATION PROBLEM FOR STOCHASTIC PARTIAL DIFFERENTIAL EQUATIONS OF ZAKAI TYPE

NAOYUKI ICHIHARA

Abstract. We discuss homogenization for stochastic partial differential equations of Zakai type with periodic coefficients, appearing typically in nonlinear filtering problems. We prove the homogenization result by two different approaches: one is rather analytic, the other comparatively probabilistic.

1. Introduction

In this paper, we consider the homogenization problem for the following stochastic partial differential equations (SPDEs) with small parameter $\varepsilon > 0$:
$$
\begin{cases}
du^\varepsilon(t,x) = A^\varepsilon u^\varepsilon(t,x)\,dt + M^\varepsilon(u^\varepsilon(t,\cdot))(x)\,dW_t,\\
u^\varepsilon(0,x) = u_0(x) \in L^2(\mathbb{R}^d),
\end{cases} \tag{1.1}
$$
where $W = (W_t)_{t\in[0,T]}$ is an $n$-dimensional standard Brownian motion, and $A^\varepsilon$, $M^\varepsilon$ are the operators of the forms
$$
(A^\varepsilon u)(x) = \sum_{i,j=1}^d a_{ij}\Big(\frac{x}{\varepsilon}\Big)\frac{\partial^2 u}{\partial x_i\,\partial x_j}(x) + \frac{1}{\varepsilon}\sum_{i=1}^d b_i\Big(\frac{x}{\varepsilon}\Big)\frac{\partial u}{\partial x_i}(x), \tag{1.2}
$$
$$
(M_k^\varepsilon u)(x) = B_k\Big(x, \frac{x}{\varepsilon}, u(x)\Big), \qquad k = 1,\dots,n,
$$
respectively. We assume the periodicity of the coefficients $a_{ij}(\cdot)$, $b_i(\cdot)$, and $B_k(x,\cdot,u)$ ($i,j = 1,\dots,d$, $k = 1,\dots,n$) for all $x \in \mathbb{R}^d$ and $u \in \mathbb{R}$. Our aim is to show that $u^\varepsilon$ behaves like the solution of an SPDE with constant coefficients when $\varepsilon$ is very small, that is, to prove the convergence of $\{u^\varepsilon ;\ \varepsilon > 0\}$ in some sense as $\varepsilon \to 0$ and to identify the limit $u$. We will see that $u$ satisfies the following SPDE:
$$
\begin{cases}
du(t,x) = A^0 u(t,x)\,dt + M^0(u(t,\cdot))(x)\,dW_t,\\
u(0,x) = u_0(x) \in L^2(\mathbb{R}^d),
\end{cases} \tag{1.3}
$$

2000 Mathematics Subject Classification. 60H15, 35R60, 93E11.
Short title. Homogenization for stochastic PDEs of Zakai type.
Key words and phrases. Homogenization, stochastic partial differential equations, nonlinear filtering, backward stochastic differential equations.

where
$$
(A^0 u)(x) = \sum_{i,j=1}^d \bar{q}_{ij}\,\frac{\partial^2 u}{\partial x_i\,\partial x_j}(x), \qquad (M_k^0 u)(x) = \int_{\mathbb{T}^d} B_k(x, y, u(x))\,m(y)\,dy, \tag{1.4}
$$
and the coefficient $\bar{q} = (\bar{q}_{ij})_{i,j}$ is characterized by
$$
\bar{q}_{ij} = \sum_{k,l=1}^d \int_{\mathbb{T}^d} m(y)\Big(\delta_{ik} + \frac{\partial \chi^i}{\partial y_k}(y)\Big)\,a_{kl}(y)\,\Big(\delta_{jl} + \frac{\partial \chi^j}{\partial y_l}(y)\Big)\,dy. \tag{1.5}
$$

The functions $\chi^k$ ($k = 1,\dots,d$) and $m$ are respectively the unique solutions of the following PDEs on the $d$-dimensional unit torus $\mathbb{T}^d$:
$$
\begin{cases}
A^1 \chi^k(y) + b_k(y) = 0, & y \in \mathbb{T}^d,\\
\int_{\mathbb{T}^d} \chi^k(y)\,m(y)\,dy = 0,
\end{cases} \tag{1.6}
$$
$$
\begin{cases}
(A^1)^* m(y) = 0, & y \in \mathbb{T}^d,\\
\int_{\mathbb{T}^d} m(y)\,dy = 1,
\end{cases} \tag{1.7}
$$
where the operator $A^1$ is identified with that on $\mathbb{T}^d$:
$$
(A^1 v)(y) = \sum_{i,j=1}^d a_{ij}(y)\frac{\partial^2 v}{\partial y_i\,\partial y_j}(y) + \sum_{i=1}^d b_i(y)\frac{\partial v}{\partial y_i}(y), \qquad y \in \mathbb{T}^d.
$$
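For intuition about the cell problems (1.6)-(1.7) and the formula (1.5), the following Python sketch treats the one-dimensional case with $b \equiv 0$; the specific coefficient $a(y)$ is our own illustrative choice, not taken from the text. With $b \equiv 0$ we have $A^1 v = a v''$, so $(A^1)^* m = (a m)'' = 0$ on the torus forces $a(y)m(y)$ to be constant; hence $m \propto 1/a$, $\chi = 0$, and (1.5) collapses to $\bar{q} = \int m\,a\,dy$, the harmonic mean of $a$.

```python
import numpy as np

# 1-d illustration of the cell problems (1.6)-(1.7) with b = 0.
# (Hypothetical coefficient a(y), chosen only for illustration.)
# Here (A^1)* m = (a m)'' = 0 on the torus gives m = (1/a) / \int (1/a) dy,
# chi = 0, and formula (1.5) reduces to qbar = \int m a dy.
N = 4096
y = np.arange(N) / N                      # uniform grid on the torus [0, 1)
a = 2.0 + np.sin(2.0 * np.pi * y)         # periodic, uniformly elliptic

m = (1.0 / a) / np.mean(1.0 / a)          # invariant density, \int m dy = 1
qbar = np.mean(m * a)                     # homogenized coefficient (1.5)

harmonic_mean = 1.0 / np.mean(1.0 / a)
arithmetic_mean = np.mean(a)
```

Here $\bar{q} = \big(\int_0^1 a(y)^{-1} dy\big)^{-1} = \sqrt{3} \approx 1.73$, strictly smaller than the arithmetic mean $\int a\,dy = 2$: the homogenized coefficient is not the naive average of $a$.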

Homogenization of partial differential equations (PDEs), or of the corresponding diffusion processes, has been investigated by many authors in various ways. In [4], homogenization for linear second-order PDEs with periodic coefficients is discussed by purely analytic approaches in Chapters 1 and 2, and by probabilistic ones in Chapter 3. The monograph [13] treats the martingale approach to some limit theorems, including the homogenization of diffusion processes with periodic structures (see also [12] for the asymptotic analysis of linear PDEs). Homogenization for nonlinear second-order PDEs with periodic coefficients was considered in [6], [15] for semi-linear cases, and in [5] for quasi-linear cases. These works adopt probabilistic approaches and investigate the asymptotic behavior of solutions by using the theory of backward stochastic differential equations (BSDEs), which gives a representation of the solutions of semi-linear PDEs through the so-called "nonlinear Feynman-Kac formula" (see [17] for more details).

In this paper, we shall prove homogenization for the SPDEs (1.1) by two different methods. One is the martingale approach, which is rather analytic. We show that the family of laws induced by the solutions of (1.1) converges weakly to the law induced by

the solution of (1.3) by employing some analytic tools (variational formula, Fredholm's alternative, etc.). The other is the BSDE approach. We employ a representation formula for the solutions of (1.1) or (1.3), which is similar to the nonlinear Feynman-Kac formula for deterministic PDEs. This method is comparatively probabilistic, since the convergence derives from the ergodic property of the corresponding diffusion processes through this formula. We also consider homogenization for the "adjoint" equations of (1.1):
$$
\begin{cases}
du^\varepsilon(t,x) = (A^\varepsilon)^* u^\varepsilon(t,x)\,dt + M^\varepsilon(u^\varepsilon(t,\cdot))(x)\,dW_t,\\
u^\varepsilon(0,x) = u_0(x) \in L^2(\mathbb{R}^d).
\end{cases} \tag{1.8}
$$

When $M^\varepsilon$ is the linear multiplicative operator of the form
$$
(M_k^\varepsilon u)(x) = h_k\Big(\frac{x}{\varepsilon}\Big)\,u(x) \tag{1.9}
$$
for periodic functions $h_k$ ($k = 1,\dots,n$), the equation (1.8) becomes the Zakai equation of the following nonlinear filtering problem:
$$
\begin{cases}
dx_t^\varepsilon = \varepsilon^{-1}\,b(x_t^\varepsilon/\varepsilon)\,dt + \sigma(x_t^\varepsilon/\varepsilon)\,dw_t^\varepsilon,\\
y_t^\varepsilon = \int_0^t h^\varepsilon(x_s^\varepsilon)\,ds + v_t^\varepsilon,
\end{cases} \tag{1.10}
$$

where $h^\varepsilon(x) = h(x/\varepsilon)$, and $w^\varepsilon = (w_t^\varepsilon)$, $v^\varepsilon = (v_t^\varepsilon)$ are mutually independent Brownian motions under the probability measure $P^\varepsilon$ such that
$$
\frac{dP^\varepsilon}{dP}\Big|_{\mathcal{F}_t^\varepsilon} = \exp\Big(\int_0^t h^\varepsilon(x_s^\varepsilon)\,dy_s - \frac{1}{2}\sum_{k=1}^n \int_0^t |h_k^\varepsilon(x_s^\varepsilon)|^2\,ds\Big),
$$

with $\mathcal{F}_t^\varepsilon = \sigma(w_s^\varepsilon, y_s \mid s \le t)$. Then, the solution of (1.8) appears in the representation of the optimal filter:
$$
E[\,\psi(x_t^\varepsilon) \mid \sigma(y_s ;\ s \le t)\,] = \frac{\int_{\mathbb{R}^d} \psi(x)\,u^\varepsilon(t,x)\,dx}{\int_{\mathbb{R}^d} u^\varepsilon(t,x)\,dx}, \qquad P\text{-a.s.}
$$
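In a time discretization, the exponential density above is exactly the unnormalized weight propagated by particle approximations of the filter. The following sketch is our own illustration (the function name and the toy data are assumptions, not from the text): it evaluates the Euler discretization of $\exp\big(\int_0^t h(x_s)\,dy_s - \tfrac{1}{2}\int_0^t |h(x_s)|^2\,ds\big)$ for scalar observations; with $h \equiv 0$ the weight is identically $1$, as it must be.

```python
import numpy as np

def girsanov_weight(h_vals, dy, dt):
    """Euler discretization of exp( \int_0^t h(x_s) dy_s
    - (1/2) \int_0^t |h(x_s)|^2 ds ) for scalar observations.
    h_vals[i] = h(x_{t_i}); dy[i] = y_{t_{i+1}} - y_{t_i}.
    (Hypothetical helper for illustration only.)"""
    h_vals = np.asarray(h_vals, dtype=float)
    dy = np.asarray(dy, dtype=float)
    stoch = np.sum(h_vals[:-1] * dy)            # \int h dy (left-point rule)
    comp = 0.5 * np.sum(h_vals[:-1] ** 2) * dt  # (1/2) \int |h|^2 ds
    return float(np.exp(stoch - comp))

rng = np.random.default_rng(0)
n, dt = 200, 0.01
dy = rng.normal(0.0, np.sqrt(dt), n - 1)        # toy observation increments

w_zero = girsanov_weight(np.zeros(n), dy, dt)   # h = 0 gives weight 1
w_osc = girsanov_weight(np.sin(np.arange(n)), dy, dt)
```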

Such kind of homogenization has been considered by Bensoussan and Blankenship [3]. They treated the case where the nonlinear filtering problem is written as
$$
\begin{cases}
dx_t^\varepsilon = b(x_t^\varepsilon/\varepsilon)\,x_t^\varepsilon\,dt + \sigma(x_t^\varepsilon/\varepsilon)\,dw_t^\varepsilon,\\
y_t^\varepsilon = \int_0^t h^\varepsilon(x_s^\varepsilon)\,x_s^\varepsilon\,ds + v_t^\varepsilon,
\end{cases}
$$
and proved that for any bounded, uniformly continuous function $\psi$, the family of random variables $p^\varepsilon = \int_{\mathbb{R}^d} \psi(x)\,u^\varepsilon(T,x)\,dx$ converges weakly in $L^2(\Omega, \mathcal{Z}_T, P)$ to the random variable $p = \int_{\mathbb{R}^d} \psi(x)\,u(T,x)\,dx$, where $\mathcal{Z}_T = \sigma(y_t ;\ t \le T)$. We remark that in [3] the problem is reduced to the convergence of ($\omega$-wise) PDEs that had already been considered in [4], while we treat the Zakai equations directly by using the theory

of infinite-dimensional SDEs. A result similar to ours has been obtained in [1], but there a rather strong assumption is imposed on the function $h_k^\varepsilon$ in (1.10):
$$
h_k^\varepsilon(x) \longrightarrow h_k(x) \quad \text{pointwise as } \varepsilon \downarrow 0,
$$
which forbids an oscillatory behavior of the form $h_k^\varepsilon(x) = h_k(x/\varepsilon)$. Both of our methods enable us to eliminate this assumption and to obtain stronger results than [1] and [3]. We remark finally that the introduction of [3] and the references therein are useful sources of information on the physical and engineering aspects of nonlinear filtering problems with homogenization.

This paper is organized as follows. In the next section, we state the assumptions that we impose throughout this paper and formulate our homogenization problem precisely. Section 3 is devoted to the proof of our main theorem by the martingale approach. We also show the homogenization for the adjoint equation (1.8) as a simple application. Finally, we develop the BSDE approach in Section 4.

2. Assumptions and main result

2.1. Function spaces on $\mathbb{R}^d$. Let us set $H = L^2(\mathbb{R}^d)$, the totality of square-integrable functions on $\mathbb{R}^d$ with canonical inner product and norm
$$
(u,v) = \int_{\mathbb{R}^d} u(x)\,v(x)\,dx, \qquad |u|_0^2 = (u,u), \qquad u, v \in L^2(\mathbb{R}^d).
$$
Next, we denote by $H^1 = H^1(\mathbb{R}^d)$ the Sobolev space of order 1, that is, the completion of $C_c^\infty(\mathbb{R}^d)$, the set of smooth functions with compact support, with respect to the norm $|\cdot|_1$ induced by the inner product
$$
(u,v)_1 = (u,v) + \sum_{i=1}^d (\partial_i u, \partial_i v),
$$
where $\partial_i$ stands for the partial differential operator $\partial/\partial x_i$. We denote by $(H^1)' = H^{-1}$ the dual space of $H^1$. Then, under the identification $H = H'$ by the Riesz representation theorem, we have dense and continuous inclusions:
$$
H^1 \hookrightarrow H \hookrightarrow H^{-1}.
$$
Now, we fix a smooth and strictly positive function $\theta$ on $\mathbb{R}^d$ such that $\theta(x) = |x|$ for all $|x| \ge 1$, and let us denote by $H_\lambda^m = H_\lambda^m(\mathbb{R}^d)$ ($m = 0, 1$, $\lambda \in [0,\infty)$) the weighted Sobolev space with norm $|\cdot|_{H_\lambda^m}$:
$$
H_\lambda^m = \{\, v \mid v e^{\lambda\theta} \in H^m \,\}, \qquad |v|_{H_\lambda^m} = |v e^{\lambda\theta}|_m.
$$

Let $(H_\lambda^m)'$ be the dual space of $H_\lambda^m$. Then we have
$$
(H_\lambda^m)' = H_{-\lambda}^{-m} = \{\, v \mid v e^{-\lambda\theta} \in H^{-m} \,\}.
$$
Clearly, $H = H_0^0$, $H^1 = H_0^1$, and $H^{-1} = H_0^{-1}$. We notice that if $m \ge m'$ ($m, m' \in \{-1,0,1\}$) and $\lambda \ge \lambda'$ ($\lambda, \lambda' \in \mathbb{R}$), the injection $H_\lambda^m \hookrightarrow H_{\lambda'}^{m'}$ is continuous. Moreover, if $m > m'$ and $\lambda > \lambda'$, the injection is compact (see Lemma 9.21 of [7]). From now on, we fix a positive number $\lambda > 0$ arbitrarily.

We shall denote by $\langle\,\cdot\,,\,\cdot\,\rangle$ the duality product between $H_\lambda^m$ and $H_{-\lambda}^{-m}$. Then, the equality
$$
\int_{\mathbb{R}^d} u(x)\,v(x)\,dx = (u,v) = \langle u, v\rangle
$$

holds for every $u \in H_\lambda^1$ and $v \in H$.

2.2. Assumptions. Throughout this paper, we always identify functions on $\mathbb{T}^d$ with their periodic extensions to $\mathbb{R}^d$. The symbols $x$, $y$ stand for elements of $\mathbb{R}^d$ and $\mathbb{T}^d$, respectively. The conditions we impose on the coefficients are the following.

Assumption 2.1.
(1) $a_{ij}(\cdot), b_i(\cdot) \in C^3(\mathbb{T}^d)$ ($i,j = 1,\dots,d$) and $B_k(\cdot) \in C_b^3(\mathbb{R}^d \times \mathbb{T}^d \times \mathbb{R})$ ($k = 1,\dots,n$), where $C_b^3$ stands for the set of functions of class $C^3$ whose partial derivatives of order less than or equal to 3 are bounded.
(2) $a_{ij}(y) = a_{ji}(y)$ for any $y \in \mathbb{T}^d$, and there exists $\alpha > 0$ such that
$$
\alpha|\xi|^2 \le \sum_{i,j=1}^d a_{ij}(y)\,\xi_i\,\xi_j \le \alpha^{-1}|\xi|^2 \tag{2.1}
$$
for every $\xi \in \mathbb{R}^d$ and $y \in \mathbb{T}^d$.
(3) $b_i(\cdot)$ ($i = 1,\dots,d$) satisfy the so-called "centering condition":
$$
\int_{\mathbb{T}^d} b_i(y)\,m(y)\,dy = 0,
$$
where $m$ is the solution of (1.7).
(4) There exists $K > 0$ such that
$$
\sum_{k=1}^n \int_{\mathbb{R}^d} \sup_{y \in \mathbb{T}^d} |B_k(x,y,0)|^2\,dx \le K. \tag{2.2}
$$

Remark 2.1. The condition $B_k(\cdot) \in C_b^3(\mathbb{R}^d \times \mathbb{T}^d \times \mathbb{R})$ implies that there exists $K > 0$ such that
$$
|B_k(x,y,u) - B_k(x,y,v)| \le K\,|u - v|, \qquad k = 1,\dots,n, \tag{2.3}
$$
for every $x \in \mathbb{R}^d$, $y \in \mathbb{T}^d$ and $u, v \in \mathbb{R}$.

2.3. Main result. Let $(\Omega, \mathcal{F}, P; \mathcal{F}_t, W_t)$ be a filtered probability space with a standard $(\mathcal{F}_t)$-Brownian motion $W = (W_t)$, and consider the SPDE (1.1). The existence and uniqueness of the solution to (1.1) are well known.

Theorem 2.1 (Pardoux [14]). There exists an $(\mathcal{F}_t)$-progressively measurable process $u^\varepsilon = (u_t^\varepsilon) \in L^2(\Omega \times [0,T]; H^1)$ such that
$$
(u_t^\varepsilon, v) = (u_0, v) + \int_0^t \langle A^\varepsilon u_s^\varepsilon, v\rangle\,ds + \int_0^t (M^\varepsilon(u_s^\varepsilon), v)\,dW_s, \qquad \forall v \in H^1, \tag{2.4}
$$
for almost all $(t,\omega) \in [0,T] \times \Omega$. Such a process is unique in the following sense:
$$
P\big(u_t^\varepsilon = \tilde{u}_t^\varepsilon \ \text{in } H^{-1}, \ \forall t \in [0,T]\big) = 1
$$
for any $u^\varepsilon$, $\tilde{u}^\varepsilon$ satisfying (2.4).

Now we are in a position to state our main result. We set
$$
S = C(0,T; H_{-\lambda}^{-1}) \cap L^2(0,T; H_{-\lambda}).
$$
Let $\mathcal{T}_1$ be the topology of uniform convergence in $C(0,T; H_{-\lambda}^{-1})$, and $\mathcal{T}_2$ the topology induced by the $L^2$-norm in $L^2(0,T; H_{-\lambda})$. Our main result is the following.

Theorem 2.2. Let $u^\varepsilon$ ($\varepsilon > 0$) be the solution of (1.1). We denote by $\pi^{m,\varepsilon}$, $\pi^\varepsilon$ the laws of $m^\varepsilon u^\varepsilon$ and $u^\varepsilon$ respectively, where we have set $m^\varepsilon(x) = m(x/\varepsilon)$, $(m^\varepsilon u^\varepsilon)(x) = m(x/\varepsilon)\,u^\varepsilon(x)$, and $m$ is the solution of (1.7). Then, we have
$$
\pi^{m,\varepsilon} \Longrightarrow \pi \quad \text{in } (S, \mathcal{T}_1), \qquad \pi^\varepsilon \Longrightarrow \pi \quad \text{in } (S, \mathcal{T}_2),
$$
as $\varepsilon \to 0$, where $\pi$ is the probability law on $S$ induced by the solution of (1.3).

Remark 2.2. The two topologies $\mathcal{T}_1$ and $\mathcal{T}_2$ generate the same Borel $\sigma$-algebra on $S$ (see p. 74 of [11]).
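As a toy numerical illustration of the variational formulation (2.4): in one space dimension, an equation of type (1.1) can be discretized by centered finite differences in $x$ and an explicit Euler-Maruyama step in $t$. The coefficients, grid, and Dirichlet truncation of $\mathbb{R}^d$ below are our own assumptions for illustration and are not part of the statement of Theorem 2.2.

```python
import numpy as np

# Toy explicit finite-difference / Euler-Maruyama scheme for an SPDE of
# type (1.1) in d = 1, n = 1 (illustrative coefficients, not the paper's):
#   du = [ a(x/eps) u_xx + (1/eps) b(x/eps) u_x ] dt + B(x, u) dW_t.
rng = np.random.default_rng(1)
eps = 0.1
L, J = 1.0, 200                       # domain [0, L], J grid cells
dx = L / J
x = np.linspace(0.0, L, J + 1)
a = 1.0 + 0.5 * np.cos(2 * np.pi * x / eps)       # periodic in x/eps
b = 0.1 * np.sin(2 * np.pi * x / eps)             # periodic drift coefficient
B = lambda x, u: 0.1 * np.sin(2 * np.pi * x) * u  # multiplicative noise

dt = 0.2 * dx**2 / a.max()            # explicit-scheme stability restriction
u = np.exp(-50 * (x - 0.5) ** 2)      # initial value u_0 in L^2
for _ in range(200):
    uxx = np.zeros_like(u)
    ux = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    ux[1:-1] = (u[2:] - u[:-2]) / (2 * dx)
    dW = rng.normal(0.0, np.sqrt(dt))         # one Brownian increment (n = 1)
    u = u + (a * uxx + b * ux / eps) * dt + B(x, u) * dW
    u[0] = u[-1] = 0.0                # Dirichlet truncation of the real line

l2_norm = np.sqrt(np.sum(u**2) * dx)  # discrete analogue of |u|_0
```

Repeating such a run for decreasing $\varepsilon$ and comparing with the constant-coefficient limit (1.3) is how Theorem 2.2 would be observed numerically; here we only check that the scheme, run under its stability restriction on $dt$, produces a finite $L^2$-norm.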

3. Martingale approach

The proof of Theorem 2.2 by the martingale approach is divided into two steps. The first step is to show tightness, and the second is devoted to identifying the limit measure.

3.1. Tightness. We begin with some uniform estimates concerning $u^\varepsilon$.

Proposition 3.1. Let $u^\varepsilon$ be a solution of (1.1) with initial value $u_0 \in H$. Then, there exists a positive constant $C$ independent of $\varepsilon$ such that
$$
\sup_\varepsilon E\Big[\sup_{0 \le t \le T} |u_t^\varepsilon|_0^4\Big] + \sup_\varepsilon E\Big[\Big(\int_0^T |u_t^\varepsilon|_1^2\,dt\Big)^2\Big] \le C\,(1 + |u_0|_0^4). \tag{3.1}
$$

Proof. We assume $u_0 \in C_c^\infty(\mathbb{R}^d)$ for a while, and choose a version of $u^\varepsilon$ such that $u^\varepsilon(t,x) \in C^{0,2}([0,T] \times \mathbb{R}^d)$ $P$-almost surely, which is possible under Assumption 2.1 (see for example Theorem 2.1 of [16]). Then, by Ito's formula, we have
$$
|u^\varepsilon(t,x)|^2 = |u_0(x)|^2 + 2\int_0^t (A^\varepsilon u^\varepsilon)(s,x)\,u^\varepsilon(s,x)\,ds + \sum_{k=1}^n \int_0^t |M_k^\varepsilon(u^\varepsilon(s,\cdot))(x)|^2\,ds + 2\int_0^t M^\varepsilon(u^\varepsilon(s,\cdot))(x)\,u^\varepsilon(s,x)\,dW_s,
$$
which is equivalent to
$$
m^\varepsilon(x)|u^\varepsilon(t,x)|^2 = m^\varepsilon(x)|u_0(x)|^2 + 2\int_0^t (A_m^\varepsilon u^\varepsilon)(s,x)\,u^\varepsilon(s,x)\,ds + \sum_{k=1}^n \int_0^t m^\varepsilon(x)\,|M_k^\varepsilon(u^\varepsilon(s,\cdot))(x)|^2\,ds + 2\int_0^t m^\varepsilon(x)\,M^\varepsilon(u^\varepsilon(s,\cdot))(x)\,u^\varepsilon(s,x)\,dW_s, \tag{3.2}
$$
where $A_m^\varepsilon$ is the operator defined by
$$
(A_m^\varepsilon u)(x) = \sum_{i,j=1}^d a_{ij}^m\Big(\frac{x}{\varepsilon}\Big)\frac{\partial^2 u}{\partial x_i\,\partial x_j}(x) + \frac{1}{\varepsilon}\sum_{i=1}^d b_i^m\Big(\frac{x}{\varepsilon}\Big)\frac{\partial u}{\partial x_i}(x) = \sum_{i,j=1}^d \frac{\partial}{\partial x_i}\Big(a_{ij}^m\Big(\frac{x}{\varepsilon}\Big)\frac{\partial u}{\partial x_j}\Big)(x) + \frac{1}{\varepsilon}\sum_{i=1}^d \beta_i^m\Big(\frac{x}{\varepsilon}\Big)\frac{\partial u}{\partial x_i}(x),
$$
and we have put $a_{ij}^m(y) = m(y)\,a_{ij}(y)$, $b_i^m(y) = m(y)\,b_i(y)$, and
$$
\beta_i^m(y) = b_i^m(y) - \sum_{j=1}^d \frac{\partial a_{ij}^m}{\partial y_j}(y).
$$
Then, under the notation $a_{ij}^{m,\varepsilon}(x) = a_{ij}^m(x/\varepsilon)$, we have, by integrating both sides of (3.2) with respect to $x$, that
$$
(m^\varepsilon u_t^\varepsilon, u_t^\varepsilon) = (m^\varepsilon u_0, u_0) - 2\sum_{i,j=1}^d \int_0^t (a_{ij}^{m,\varepsilon}\,\partial_i u_s^\varepsilon, \partial_j u_s^\varepsilon)\,ds + \sum_{k=1}^n \int_0^t (m^\varepsilon M_k^\varepsilon(u_s^\varepsilon), M_k^\varepsilon(u_s^\varepsilon))\,ds + 2\int_0^t (m^\varepsilon M^\varepsilon(u_s^\varepsilon), u_s^\varepsilon)\,dW_s, \tag{3.3}
$$
since the equality
$$
2\langle A_m^\varepsilon u_s^\varepsilon, u_s^\varepsilon\rangle = -2\sum_{i,j=1}^d (a_{ij}^{m,\varepsilon}\,\partial_i u_s^\varepsilon, \partial_j u_s^\varepsilon)
$$
holds in view of $\sum_i \partial_{x_i}\,\beta_i^m(x/\varepsilon) = \varepsilon^{-1}\big((A^1)^* m\big)(x/\varepsilon) = 0$. Since $m$ satisfies $\delta \le m \le \delta^{-1}$ for some $\delta > 0$ (see p. 380 of [4]), we have
$$
\delta\,|u_t^\varepsilon|_0^2 + 2\alpha\delta \int_0^t |\nabla u_s^\varepsilon|_0^2\,ds \le \delta^{-1}|u_0|_0^2 + C_1\int_0^t (1 + |u_s^\varepsilon|_0^2)\,ds + 2\int_0^t (m^\varepsilon M^\varepsilon(u_s^\varepsilon), u_s^\varepsilon)\,dW_s \tag{3.4}
$$
for some $C_1 > 0$ which depends only on $n$, $\delta$, and $K$. Here, we have used (2.1) and the inequality
$$
\sup_\varepsilon \sum_{k=1}^n |M_k^\varepsilon(u)|_0^2 \le C_0\,(1 + |u|_0^2), \qquad u \in H,
$$
for some $C_0 > 0$, which can be verified by (2.2) and (2.3). From now on, we shall denote by $C_i$ ($i = 1, 2, \dots$) constants which may depend on $d$, $n$, $\alpha$, $\delta$, $K$, and $T$, but are independent of $\varepsilon > 0$. By Gronwall's lemma and the usual localization argument, we obtain
$$
\sup_\varepsilon\,\sup_{0 \le t \le T} E[\,|u_t^\varepsilon|_0^2\,] \le C_2\,(1 + |u_0|_0^2).
$$

Applying Ito's formula to (3.3), we can see
$$
(m^\varepsilon u_t^\varepsilon, u_t^\varepsilon)^2 + 4\sum_{i,j=1}^d \int_0^t (a_{ij}^{m,\varepsilon}\,\partial_i u_s^\varepsilon, \partial_j u_s^\varepsilon)(m^\varepsilon u_s^\varepsilon, u_s^\varepsilon)\,ds = (m^\varepsilon u_0, u_0)^2 + 2\sum_{k=1}^n \int_0^t (m^\varepsilon M_k^\varepsilon(u_s^\varepsilon), M_k^\varepsilon(u_s^\varepsilon))(m^\varepsilon u_s^\varepsilon, u_s^\varepsilon)\,ds + 4\sum_{k=1}^n \int_0^t (m^\varepsilon M_k^\varepsilon(u_s^\varepsilon), u_s^\varepsilon)^2\,ds + 4\int_0^t (m^\varepsilon M^\varepsilon(u_s^\varepsilon), u_s^\varepsilon)(m^\varepsilon u_s^\varepsilon, u_s^\varepsilon)\,dW_s,
$$
from which we have
$$
C_3\,|u_t^\varepsilon|_0^4 + C_3\int_0^t |\nabla u_s^\varepsilon|_0^2\,|u_s^\varepsilon|_0^2\,ds \le |u_0|_0^4 + \int_0^t |u_s^\varepsilon|_0^2\,ds + \int_0^t |u_s^\varepsilon|_0^4\,ds + 4\int_0^t (m^\varepsilon M^\varepsilon(u_s^\varepsilon), u_s^\varepsilon)(m^\varepsilon u_s^\varepsilon, u_s^\varepsilon)\,dW_s
$$
by the positivity of $m$ and the uniform ellipticity of $a = (a_{ij})$. Now, we take the supremum in $t$ of both sides and then the expectation. Then, we have
$$
C_3\,E\Big[\sup_{0 \le s \le t} |u_s^\varepsilon|_0^4\Big] + C_3\,E\Big[\int_0^t |\nabla u_s^\varepsilon|_0^2\,|u_s^\varepsilon|_0^2\,ds\Big] \le |u_0|_0^4 + \int_0^t E[\,|u_s^\varepsilon|_0^2\,]\,ds + \int_0^t E\Big[\sup_{0 \le r \le s} |u_r^\varepsilon|_0^4\Big]\,ds + E\Big[\sup_{0 \le s \le t}\Big|\int_0^s (m^\varepsilon M^\varepsilon(u_r^\varepsilon), u_r^\varepsilon)(m^\varepsilon u_r^\varepsilon, u_r^\varepsilon)\,dW_r\Big|\Big].
$$
Since Burkholder's inequality implies that the third term on the right-hand side can be estimated as
$$
E\Big[\sup_{0 \le s \le t}\Big|\int_0^s (m^\varepsilon M^\varepsilon(u_r^\varepsilon), u_r^\varepsilon)(m^\varepsilon u_r^\varepsilon, u_r^\varepsilon)\,dW_r\Big|\Big] \le C_4\,E\Big[\Big(\int_0^t (1 + |u_s^\varepsilon|_0^2)\,|u_s^\varepsilon|_0^6\,ds\Big)^{1/2}\Big] \le \frac{C_3}{2}\,E\Big[\sup_{0 \le s \le t} |u_s^\varepsilon|_0^4\Big] + C_5\,E\Big[\int_0^t (1 + |u_s^\varepsilon|_0^2)\,|u_s^\varepsilon|_0^2\,ds\Big],
$$
we can conclude
$$
\sup_\varepsilon E\Big[\sup_{0 \le t \le T} |u_t^\varepsilon|_0^4\Big] \le C_6\,(1 + |u_0|_0^4). \tag{3.5}
$$

Now, we return to (3.4). By taking the square of both sides, we get
$$
E\Big[\Big(\int_0^T |\nabla u_s^\varepsilon|_0^2\,ds\Big)^2\Big] \le C_7\,(1 + |u_0|_0^4) + C_7\,E\Big[\Big(\int_0^T |u_s^\varepsilon|_0^2\,ds\Big)^2\Big] + C_7\,E\Big[\Big(\int_0^T (m^\varepsilon M^\varepsilon(u_s^\varepsilon), u_s^\varepsilon)\,dW_s\Big)^2\Big] \le C_8\,(1 + |u_0|_0^4)
$$
by virtue of (3.5). We can also verify by approximation that (3.1) remains valid for every $u_0 \in H$. This completes the proof. □

Next, we shall show the equicontinuity of $\{(m^\varepsilon u^\varepsilon, \varphi)\}_{\varepsilon>0}$ for each $\varphi \in C_c^\infty(\mathbb{R}^d)$.

Proposition 3.2. Let $u^\varepsilon$ be a solution of (1.1) with initial value $u_0 \in H$. Then, for every $\varphi \in C_c^\infty(\mathbb{R}^d)$, there exists a constant $C > 0$ such that
$$
\sup_\varepsilon E\big[(m^\varepsilon u_t^\varepsilon - m^\varepsilon u_s^\varepsilon, \varphi)^4\big] \le C\,|t-s|^2\,(1 + |u_0|_0^4)
$$
for all $s, t \in [0,T]$.

Proof. By the definition of $u^\varepsilon$, we can see
$$
(m^\varepsilon u_t^\varepsilon - m^\varepsilon u_s^\varepsilon, \varphi)^4 \le 8\Big(\int_s^t \langle A_m^\varepsilon u_r^\varepsilon, \varphi\rangle\,dr\Big)^4 + 8\Big(\int_s^t (m^\varepsilon M^\varepsilon(u_r^\varepsilon), \varphi)\,dW_r\Big)^4.
$$
First, we note that there exist $\gamma_{ik}(y) \in H^1(\mathbb{T}^d)$ ($i,k = 1,\dots,d$) such that
$$
\beta_i^m(y) = \sum_{k=1}^d \frac{\partial \gamma_{ik}}{\partial y_k}(y), \qquad y \in \mathbb{T}^d,
$$
for each $i = 1,\dots,d$. Indeed, if we consider the following PDE on $\mathbb{T}^d$:
$$
\begin{cases}
(A^1)^* \hat{\chi}^i - \beta_i^m = 0,\\
\int_{\mathbb{T}^d} \hat{\chi}^i(y)\,dy = 0,
\end{cases}
$$
then $\gamma_{ik}(y)$ can be given by
$$
\gamma_{ik}(y) = \sum_{j=1}^d a_{kj}^m(y)\,\frac{\partial \hat{\chi}^i}{\partial y_j}(y) - \beta_k^m(y)\,\hat{\chi}^i(y)
$$
for each $i,k = 1,\dots,d$. Taking account of the above expression, we have
$$
\langle A_m^\varepsilon u, \varphi\rangle = -\sum_{i,j=1}^d (a_{ij}^{m,\varepsilon}\,\partial_j u, \partial_i \varphi) - \varepsilon^{-1}\sum_{i=1}^d (\beta_i^{m,\varepsilon} u, \partial_i \varphi) = -\sum_{i,j=1}^d (a_{ij}^{m,\varepsilon}\,\partial_j u, \partial_i \varphi) + \sum_{i,k=1}^d \big\{(\gamma_{ik}^\varepsilon\,\partial_k u, \partial_i \varphi) + (\gamma_{ik}^\varepsilon u, \partial_{ik}^2 \varphi)\big\} \le C_1\,|\varphi|_2\,|u|_1
$$
for some $C_1 > 0$, where $\partial_{ik}^2 = \partial^2/\partial x_i\,\partial x_k$ and $|\cdot|_2$ stands for the norm of the Sobolev space $H^2(\mathbb{R}^d)$. Then, we obtain
$$
E\big[(m^\varepsilon u_t^\varepsilon - m^\varepsilon u_s^\varepsilon, \varphi)^4\big] \le C_2\,|\varphi|_2^4\,E\Big[\Big(\int_s^t |u_r^\varepsilon|_1\,dr\Big)^4\Big] + C_2\,|\varphi|_0^4\,E\Big[\Big(\int_s^t (1 + |u_r^\varepsilon|_0)^2\,dr\Big)^2\Big] \le C_3\,|\varphi|_2^4\,|t-s|^2\,\Big\{E\Big[\Big(\int_s^t |u_r^\varepsilon|_1^2\,dr\Big)^2\Big] + 1 + E\Big[\sup_{0 \le t \le T} |u_t^\varepsilon|_0^4\Big]\Big\} \le C_4\,|\varphi|_2^4\,|t-s|^2\,(1 + |u_0|_0^4)
$$
for every $0 \le s < t \le T$. This completes the proof. □

Now, we shall prove the tightness of $\{\pi^{m,\varepsilon} ;\ \varepsilon > 0\}$ in $(S, \mathcal{T}_1)$.

Theorem 3.1. Let $u^\varepsilon$ be a solution of (1.1), and $\pi^{m,\varepsilon}$ the probability measure induced by $m^\varepsilon u^\varepsilon$. Then, $\{\pi^{m,\varepsilon} ;\ \varepsilon > 0\}$ is tight in $(S, \mathcal{T}_1)$.

Proof. It is obvious by (3.1) that $\{m^\varepsilon u^\varepsilon ;\ \varepsilon > 0\}$ satisfies the uniform estimate
$$
\sup_\varepsilon E\Big[\sup_{0 \le t \le T} |m^\varepsilon u_t^\varepsilon|_0^4\Big] < \infty. \tag{3.6}
$$
Since the injection $H \hookrightarrow H_{-\lambda}^{-1}$ is compact, we have only to prove the tightness of the family of real-valued processes $\{(m^\varepsilon u^\varepsilon, \varphi) ;\ \varepsilon > 0\}$ for each $\varphi \in C_c^\infty(\mathbb{R}^d)$ (see for example Section 4 of [9]). But this fact is easily verified by Kolmogorov's tightness criterion. Therefore, $\{\pi^{m,\varepsilon} ;\ \varepsilon > 0\}$ is tight in $(S, \mathcal{T}_1)$. □

Corollary 3.1. There exist a subsequence $\{\varepsilon_k ;\ k \ge 1\}$ with $\varepsilon_k \searrow 0$ as $k \to \infty$ and a probability measure $\bar\pi$ on $S$ such that
$$
\pi^{m,\varepsilon_k} \Longrightarrow \bar\pi \quad \text{in } (S, \mathcal{T}_1)
$$
as $k \to \infty$.

Proof. The convergence is an easy consequence of the tightness of $\{\pi^{m,\varepsilon} ;\ \varepsilon > 0\}$. We can also show
$$
\bar\pi\big(C(0,T; H_{-\lambda}^{-1}) \cap L^2(0,T; H_{-\lambda})\big) = 1
$$
by virtue of (3.6). □

Theorem 3.2. Let $\pi^{\varepsilon_k}$ be the probability measure induced by $u^{\varepsilon_k}$. Then, we have
$$
\pi^{\varepsilon_k} \Longrightarrow \bar\pi \quad \text{in } (S, \mathcal{T}_2) \tag{3.7}
$$

as $k \to \infty$ along the subsequence $\{\varepsilon_k ;\ k \ge 1\}$ extracted in Corollary 3.1.

Proof. Since the space $(S, \mathcal{T}_1)$ is Polish, Skorokhod's theorem implies that there exist a probability space $(\tilde\Omega, \tilde{\mathcal{F}}, \tilde{P})$ and $S$-valued random variables $\tilde{u}^{m,\varepsilon_k}$, $\tilde{u}$ on $(\tilde\Omega, \tilde{\mathcal{F}}, \tilde{P})$ such that the laws of $\tilde{u}^{m,\varepsilon_k}$ and $\tilde{u}$ coincide with $\pi^{m,\varepsilon_k}$ and $\bar\pi$ respectively, and
$$
\tilde{u}^{m,\varepsilon_k} \longrightarrow \tilde{u} \quad \text{in } (S, \mathcal{T}_1), \quad \tilde{P}\text{-almost surely}. \tag{3.8}
$$
Then, in order to show (3.7), it suffices to prove
$$
\tilde{E}\Big[\int_0^T |(m^{\varepsilon_k})^{-1}\tilde{u}_t^{m,\varepsilon_k} - \tilde{u}_t|_{H_{-\lambda}}^2\,dt\Big] \longrightarrow 0 \tag{3.9}
$$
as $k \to \infty$, since $(m^{\varepsilon_k})^{-1}\tilde{u}^{m,\varepsilon_k}$ and $u^{\varepsilon_k}$ have the same law on $S$ and the law of $\tilde{u}$ coincides with $\bar\pi$, where $\tilde{E}$ stands for the expectation with respect to the probability measure $\tilde{P}$. First, we have
$$
\tilde{E}\Big[\int_0^T |(m^{\varepsilon_k})^{-1}\tilde{u}_t^{m,\varepsilon_k} - \tilde{u}_t|_{H_{-\lambda}^{-1}}^2\,dt\Big] \le 2\tilde{E}\Big[\int_0^T |(m^{\varepsilon_k})^{-1}\tilde{u}_t^{m,\varepsilon_k}(1 - m^{\varepsilon_k})|_{H_{-\lambda}^{-1}}^2\,dt\Big] + 2\tilde{E}\Big[\int_0^T |\tilde{u}_t^{m,\varepsilon_k} - \tilde{u}_t|_{H_{-\lambda}^{-1}}^2\,dt\Big] = 2E\Big[\int_0^T |u_t^{\varepsilon_k}(1 - m^{\varepsilon_k})|_{H_{-\lambda}^{-1}}^2\,dt\Big] + 2\tilde{E}\Big[\int_0^T |\tilde{u}_t^{m,\varepsilon_k} - \tilde{u}_t|_{H_{-\lambda}^{-1}}^2\,dt\Big]
$$
for each $k \ge 1$. Let $\zeta = \zeta(y)$ be a solution of the following PDE on $\mathbb{T}^d$:
$$
\begin{cases}
\Delta\zeta(y) = 1 - m(y), & y \in \mathbb{T}^d,\\
\int_{\mathbb{T}^d} \zeta(y)\,dy = 0.
\end{cases}
$$
Then, by the integration by parts formula, we have
$$
|u_t^{\varepsilon_k}(1 - m^{\varepsilon_k})|_{H_{-\lambda}^{-1}} = |u_t^{\varepsilon_k}\,\Delta\zeta^{\varepsilon_k}|_{H_{-\lambda}^{-1}} = \sup_{v \ne 0} \frac{|\langle u_t^{\varepsilon_k}\,\Delta\zeta^{\varepsilon_k}, v\rangle|}{|v|_{H_\lambda^1}} \le \varepsilon_k\,C\,|u_t^{\varepsilon_k}|_1
$$
for some $C > 0$ independent of $\varepsilon_k > 0$, where $\Delta\zeta^\varepsilon(x) = (\Delta\zeta)(x/\varepsilon)$. Therefore, we obtain
$$
\tilde{E}\Big[\int_0^T |(m^{\varepsilon_k})^{-1}\tilde{u}_t^{m,\varepsilon_k} - \tilde{u}_t|_{H_{-\lambda}^{-1}}^2\,dt\Big] \le 2\varepsilon_k^2\,C^2\,E\Big[\int_0^T |u_t^{\varepsilon_k}|_{H^1}^2\,dt\Big] + 2\tilde{E}\Big[\int_0^T |\tilde{u}_t^{m,\varepsilon_k} - \tilde{u}_t|_{H_{-\lambda}^{-1}}^2\,dt\Big] \longrightarrow 0 \tag{3.10}
$$
as $k \to \infty$, in view of (3.1), (3.8) and the uniform integrability of $|\tilde{u}_t^{m,\varepsilon_k} - \tilde{u}_t|_{H_{-\lambda}^{-1}}^2$. Now, we recall the following lemma (see p. 59 of [10] for its proof).

Lemma 3.1. Let $E_1$, $E_2$, and $E_3$ be reflexive Banach spaces which satisfy the relation $E_1 \hookrightarrow E_2 \hookrightarrow E_3$. We assume that the inclusion $E_1 \hookrightarrow E_2$ is compact and $E_2 \hookrightarrow E_3$ is continuous. Then, for every $\rho > 0$, there exists a constant $C(\rho) > 0$ such that
$$
|v|_{E_2} \le \rho\,|v|_{E_1} + C(\rho)\,|v|_{E_3}, \qquad v \in E_1.
$$
Applying this lemma with $E_1 = H^1$, $E_2 = H_{-\lambda}$, and $E_3 = H_{-\lambda}^{-1}$, we finally obtain (3.9), since (3.1) and (3.10) hold. □

3.2. Identification of the limit measure. The goal of this section is to verify that $\bar\pi$ coincides with the law induced by the solution of (1.3). For this purpose, we introduce the notion of martingale problems for SDEs in infinite-dimensional spaces according to [11]. Let $X = (X_t)$ be the canonical process, that is, $X_t(w) = w_t$ for $w \in S$, and $\mathcal{D}_t$ the canonical filtration on $S$. We set $\mathcal{D} = \bigvee_{0 \le t \le T} \mathcal{D}_t$.

Definition 3.1. A probability measure $\mu$ on $(S, \mathcal{D})$ is called a solution of the martingale problem (a martingale solution, for short) for the SPDE (1.3) with initial value $u_0 \in H$ if $\mu$ satisfies:
(M1) $\mu(X_0(\cdot) = u_0) = 1$;
(M2) for every $\phi \in C_c^\infty(\mathbb{R})$ and $\xi \in C_c^\infty(\mathbb{R}^d)$, the stochastic process $H_{\phi,\xi}(t)$ defined by
$$
H_{\phi,\xi}(t) = H_{\phi,\xi}^{A^0,M^0}(t) = \phi(\langle X_t, \xi\rangle) - \phi(\langle X_0, \xi\rangle) - \int_0^t \phi'(\langle X_s, \xi\rangle)\,\langle X_s, (A^0)^*\xi\rangle\,ds - \frac{1}{2}\sum_{k=1}^n \int_0^t \phi''(\langle X_s, \xi\rangle)\,(M_k^0(X_s), \xi)^2\,ds
$$
belongs to $\mathcal{M}_{\mathrm{loc}}^c(\mu)$, the set of continuous $\mathcal{D}_t$-local martingales under the measure $\mu$.

Remark 3.1. A probability measure $\mu$ on $(S, \mathcal{D})$ is a martingale solution of (1.3) if and only if there exist a filtered probability space with Brownian motion $(\Omega, \mathcal{F}, P; \mathcal{F}_t, W_t)$ and a stochastic process $u = (u_t)$ such that $u$ is a solution of (1.3) on this probability space and the image measure of $u$ coincides with $\mu$ (p. 76 of [11]).

What we should show are the uniqueness of the martingale solution to (1.3) and the martingale property of $H_{\phi,\xi}(t)$ under $\bar\pi$. We begin with the proof of uniqueness.

Proposition 3.3. The SPDE (1.3) has at most one martingale solution on $(S, \mathcal{D})$.

Proof. It is sufficient to check the pathwise uniqueness property of the SPDE (1.3) on $S$ (see p. 89 of [11]). Let $u$ be a solution of (1.3) and set $\tilde{u}_t = u_t e^{-\lambda\theta}$. Then, $\tilde{u}$ satisfies the equality
$$
(\tilde{u}_t, \xi) = (u_0 e^{-\lambda\theta}, \xi) + \int_0^t \langle \tilde{A}^0 \tilde{u}_s, \xi\rangle\,ds + \int_0^t (\tilde{M}^0(\tilde{u}_s)\,dW_s, \xi), \quad P\text{-a.s.}, \quad \forall \xi \in H^1, \tag{3.11}
$$
where $\tilde{A}^0$ and $\tilde{M}^0$ are operators which satisfy the coerciveness condition of the form
$$
-2\langle \tilde{A}^0 v, v\rangle + \nu_1|v|_0^2 + \nu_2 \ge \nu_3|v|_1^2 + |\tilde{M}^0(v)|_0^2, \qquad \forall v \in H^1.
$$
Therefore, similarly to Theorem 2.1, we have
$$
P\big(\tilde{u}_t = \tilde{v}_t \ \text{in } H^{-1}, \ \forall t \in [0,T]\big) = 1
$$
for any two stochastic processes $\tilde{u}$, $\tilde{v}$ satisfying (3.11). Hence we obtain the pathwise uniqueness property of the SPDE (1.3) on $S$. □

Proposition 3.4. The stochastic process $H_{\phi,\xi}(t)$ is a continuous $\mathcal{D}_t$-martingale under the probability measure $\bar\pi$, that is, for any bounded, $\mathcal{D}_s$-measurable functional $\Phi$ on $S$, we have
$$
E^{\bar\pi}\big[\Phi\,\{H_{\phi,\xi}(t) - H_{\phi,\xi}(s)\}\big] = 0, \qquad 0 \le \forall s < \forall t \le T. \tag{3.12}
$$

Proof. For simplicity of notation, we shall write $\varepsilon$ in place of $\varepsilon_k$, the subsequence extracted in Corollary 3.1. We can assume without loss of generality that $\Phi$ is continuous with respect to the supremum topology $\mathcal{T}$ of $\mathcal{T}_1$ and $\mathcal{T}_2$, which generates on $S$ the same Borel $\sigma$-algebra as that generated by $\mathcal{T}_1$ or $\mathcal{T}_2$. Now, we define the functional $H_{\phi,\xi}^\varepsilon(t)$ ($\varepsilon > 0$) on $(S, \mathcal{D})$ by
$$
H_{\phi,\xi}^\varepsilon(t) = \phi(\langle m^\varepsilon X_t, \xi\rangle) - \phi(\langle m^\varepsilon X_0, \xi\rangle) - \int_0^t \phi'(\langle m^\varepsilon X_s, \xi\rangle)\,\langle X_s, (A^0)^*\xi\rangle\,ds - \frac{1}{2}\sum_{k=1}^n \int_0^t \phi''(\langle m^\varepsilon X_s, \xi\rangle)\,(M_k^0(X_s), \xi)^2\,ds.
$$
Since $(m^\varepsilon)^{-1}\tilde{u}^{m,\varepsilon}$ satisfies (3.9), we can extract a subsequence $\{\varepsilon_l ;\ l \ge 1\}$ such that
$$
(m^{\varepsilon_l})^{-1}\tilde{u}^{m,\varepsilon_l} \longrightarrow \tilde{u} \quad \text{in } (S, \mathcal{T}_2), \quad \tilde{P}\text{-almost surely}
$$
as $l \to \infty$. Therefore, we have
$$
E^{\pi^{\varepsilon_l}}\big[\Phi\,\{H_{\phi,\xi}^{\varepsilon_l}(t) - H_{\phi,\xi}^{\varepsilon_l}(s)\}\big] = \tilde{E}\big[\Phi((m^{\varepsilon_l})^{-1}\tilde{u}^{m,\varepsilon_l})\,\{H_{\phi,\xi}^{\varepsilon_l}(t)((m^{\varepsilon_l})^{-1}\tilde{u}^{m,\varepsilon_l}) - H_{\phi,\xi}^{\varepsilon_l}(s)((m^{\varepsilon_l})^{-1}\tilde{u}^{m,\varepsilon_l})\}\big] \longrightarrow \tilde{E}\big[\Phi(\tilde{u})\,\{H_{\phi,\xi}(t)(\tilde{u}) - H_{\phi,\xi}(s)(\tilde{u})\}\big] = E^{\bar\pi}\big[\Phi\,\{H_{\phi,\xi}(t) - H_{\phi,\xi}(s)\}\big] \tag{3.13}
$$
as $l \to \infty$, in view of (3.1), (3.8), (3.9) and the bounded convergence theorem. Then, the equality (3.12) will be justified if we show that the left-hand side of (3.13) goes to zero as $\varepsilon \to 0$ without extracting a subsequence. For this purpose, we define another functional $H_{\phi,\xi_\varepsilon}^{A^\varepsilon,M^\varepsilon}(t)$ on $S$ by
$$
H_{\phi,\xi_\varepsilon}^{A^\varepsilon,M^\varepsilon}(t) = \phi(\langle X_t, \xi_\varepsilon\rangle) - \phi(\langle X_0, \xi_\varepsilon\rangle) - \int_0^t \phi'(\langle X_s, \xi_\varepsilon\rangle)\,\langle X_s, (A^\varepsilon)^*\xi_\varepsilon\rangle\,ds - \frac{1}{2}\sum_{k=1}^n \int_0^t \phi''(\langle X_s, \xi_\varepsilon\rangle)\,(M_k^\varepsilon(X_s), \xi_\varepsilon)^2\,ds,
$$
where $\xi_\varepsilon \in C_c^\infty(\mathbb{R}^d)$. Then, it can be verified that $H_{\phi,\xi_\varepsilon}^{A^\varepsilon,M^\varepsilon}(t)$ is a continuous $\mathcal{D}_t$-martingale under $\pi^\varepsilon$, that is,
$$
E^{\pi^\varepsilon}\big[\Phi\,\{H_{\phi,\xi_\varepsilon}^{A^\varepsilon,M^\varepsilon}(t) - H_{\phi,\xi_\varepsilon}^{A^\varepsilon,M^\varepsilon}(s)\}\big] = 0, \qquad 0 \le \forall s < \forall t \le T.
$$
Subtracting this term from the left-hand side of (3.13), we have
$$
E^{\pi^\varepsilon}\big[\Phi\,\{H_{\phi,\xi}^\varepsilon(t) - H_{\phi,\xi}^\varepsilon(s) - H_{\phi,\xi_\varepsilon}^{A^\varepsilon,M^\varepsilon}(t) + H_{\phi,\xi_\varepsilon}^{A^\varepsilon,M^\varepsilon}(s)\}\big], \tag{3.14}
$$
which is equal to
$$
E\big[\Phi(u^\varepsilon)\,\{I_1(u^\varepsilon) - I_2(u^\varepsilon) - I_3(u^\varepsilon) - I_4(u^\varepsilon) - I_5(u^\varepsilon)\}\big],
$$
where
$$
I_1 = \phi(\langle m^\varepsilon X_t, \xi\rangle) - \phi(\langle X_t, \xi_\varepsilon\rangle) - \phi(\langle m^\varepsilon X_s, \xi\rangle) + \phi(\langle X_s, \xi_\varepsilon\rangle),
$$
$$
I_2 = \int_s^t \{\phi'(\langle m^\varepsilon X_r, \xi\rangle) - \phi'(\langle X_r, \xi_\varepsilon\rangle)\}\,\langle X_r, (A^0)^*\xi\rangle\,dr,
$$
$$
I_3 = \int_s^t \phi'(\langle X_r, \xi_\varepsilon\rangle)\,\langle X_r, (A^0)^*\xi - (A^\varepsilon)^*\xi_\varepsilon\rangle\,dr,
$$
$$
I_4 = \frac{1}{2}\sum_{k=1}^n \int_s^t \{\phi''(\langle m^\varepsilon X_r, \xi\rangle) - \phi''(\langle X_r, \xi_\varepsilon\rangle)\}\,(M_k^0(X_r), \xi)^2\,dr,
$$
$$
I_5 = \frac{1}{2}\sum_{k=1}^n \int_s^t \phi''(\langle X_r, \xi_\varepsilon\rangle)\,\{(M_k^0(X_r), \xi)^2 - (M_k^\varepsilon(X_r), \xi_\varepsilon)^2\}\,dr.
$$

Now, we shall construct a family of test functions $\xi_\varepsilon \in C_c^\infty(\mathbb{R}^d)$ such that (3.14) goes to zero as $\varepsilon \to 0$. Let us define $\xi_\varepsilon \in C_c^\infty(\mathbb{R}^d)$ by
$$
\xi_\varepsilon(x) = m\Big(\frac{x}{\varepsilon}\Big)\Big\{\xi(x) + \varepsilon\sum_{k=1}^d \eta^k\Big(\frac{x}{\varepsilon}\Big)\frac{\partial \xi}{\partial x_k}(x)\Big\}.
$$
The symbols $\eta^k$ ($k = 1,\dots,d$) stand for the solutions of the PDE on $\mathbb{T}^d$:
$$
\begin{cases}
(A_m^1)^* \eta^k - \tilde\beta_k^m = 0,\\
\int_{\mathbb{T}^d} \eta^k(y)\,dy = 0,
\end{cases} \tag{3.15}
$$
where $A_m^1 = m A^1$, and
$$
\tilde\beta_k^m(y) = \beta_k^m(y) - \sum_{j=1}^d \frac{\partial a_{kj}^m}{\partial y_j}(y).
$$
We note that the PDE (3.15) is uniquely solvable by Fredholm's alternative, since $\int_{\mathbb{T}^d} \tilde\beta_k^m(y)\,dy = 0$.

Lemma 3.2. The function $\xi_\varepsilon$ satisfies
$$
(m^\varepsilon)^{-1}\xi_\varepsilon \longrightarrow \xi \quad \text{in } H_\lambda, \tag{3.16}
$$
$$
(A^\varepsilon)^*\xi_\varepsilon \longrightarrow (A^0)^*\xi \quad \text{in } H^{-1}, \tag{3.17}
$$
as $\varepsilon \to 0$.

Proof. It is clear that $\xi_\varepsilon$ satisfies (3.16), so we shall prove (3.17). By easy but careful computations (cf. Section 5 of Chapter 3 in [4]), we can see
$$
\big((A^\varepsilon)^*\xi_\varepsilon\big)(x) = \sum_{i,j=1}^d c_{ij}\Big(\frac{x}{\varepsilon}\Big)\frac{\partial^2\xi}{\partial x_i\,\partial x_j}(x) + \varepsilon\sum_{i,j,k=1}^d a_{ij}^{m,\varepsilon}(x)\,\eta^{k,\varepsilon}(x)\,\frac{\partial^3\xi}{\partial x_i\,\partial x_j\,\partial x_k}(x),
$$
where we have put $a_{ij}^{m,\varepsilon}(x) = a_{ij}^m(x/\varepsilon)$, $\eta^{k,\varepsilon}(x) = \eta^k(x/\varepsilon)$ and
$$
c_{ij}(y) = a_{ij}^m(y) - \frac{1}{2}\,b_i^m(y)\,\eta^j(y) - \frac{1}{2}\,b_j^m(y)\,\eta^i(y) + \frac{1}{2}\sum_{k=1}^d \frac{\partial}{\partial y_k}\big(a_{ik}^m(y)\,\eta^j(y)\big).
$$
Then, for any $\psi \in H_\lambda$, we have
$$
\big((A^\varepsilon)^*\xi_\varepsilon - (A^0)^*\xi, \psi\big)_{H_\lambda} = \sum_{i,j=1}^d \int_{\mathbb{R}^d} \Big(c_{ij}\Big(\frac{x}{\varepsilon}\Big) - \bar{q}_{ij}\Big)\frac{\partial^2\xi}{\partial x_i\,\partial x_j}(x)\,\psi(x)\,e^{2\lambda\theta(x)}\,dx + \varepsilon\sum_{i,j,k=1}^d \int_{\mathbb{R}^d} a_{ij}^{m,\varepsilon}(x)\,\eta^{k,\varepsilon}(x)\,\frac{\partial^3\xi}{\partial x_i\,\partial x_j\,\partial x_k}(x)\,\psi(x)\,e^{2\lambda\theta(x)}\,dx \longrightarrow \sum_{i,j=1}^d (\bar{c}_{ij} - \bar{q}_{ij})\int_{\mathbb{R}^d} \frac{\partial^2\xi}{\partial x_i\,\partial x_j}(x)\,\psi(x)\,e^{2\lambda\theta(x)}\,dx
$$
as $\varepsilon \to 0$, where
$$
\bar{c}_{ij} = \int_{\mathbb{T}^d} c_{ij}(y)\,dy = \int_{\mathbb{T}^d} a_{ij}^m(y)\,dy - \frac{1}{2}\int_{\mathbb{T}^d} b_i^m(y)\,\eta^j(y)\,dy - \frac{1}{2}\int_{\mathbb{T}^d} b_j^m(y)\,\eta^i(y)\,dy
$$
for $i,j = 1,\dots,d$. Now we shall show $\bar{c}_{ij} = \bar{q}_{ij}$. Since $\chi^k$ and $\eta^k$ ($k = 1,\dots,d$) satisfy (1.6) and (3.15) respectively, we have
$$
\int_{\mathbb{T}^d} b_i^m(y)\,\eta^j(y)\,dy = -\int_{\mathbb{T}^d} A_m^1\chi^i(y)\,\eta^j(y)\,dy = -\int_{\mathbb{T}^d} \chi^i(y)\,(A_m^1)^*\eta^j(y)\,dy = -\int_{\mathbb{T}^d} \chi^i(y)\,b_j^m(y)\,dy + 2\sum_{k=1}^d \int_{\mathbb{T}^d} \chi^i(y)\,\frac{\partial a_{jk}^m}{\partial y_k}(y)\,dy
$$
and
$$
\int_{\mathbb{T}^d} \chi^i(y)\,b_j^m(y)\,dy = -\int_{\mathbb{T}^d} \chi^i(y)\,A_m^1\chi^j(y)\,dy = \sum_{k,l=1}^d \int_{\mathbb{T}^d} \frac{\partial\chi^i}{\partial y_k}(y)\,a_{kl}^m(y)\,\frac{\partial\chi^j}{\partial y_l}(y)\,dy - \sum_{k=1}^d \int_{\mathbb{T}^d} \chi^i(y)\,\beta_k^m(y)\,\frac{\partial\chi^j}{\partial y_k}(y)\,dy.
$$
Therefore, we obtain
$$
-\frac{1}{2}\int_{\mathbb{T}^d} b_i^m(y)\,\eta^j(y)\,dy - \frac{1}{2}\int_{\mathbb{T}^d} b_j^m(y)\,\eta^i(y)\,dy = \sum_{k=1}^d \int_{\mathbb{T}^d} a_{jk}^m(y)\,\frac{\partial\chi^i}{\partial y_k}(y)\,dy + \sum_{k=1}^d \int_{\mathbb{T}^d} a_{ik}^m(y)\,\frac{\partial\chi^j}{\partial y_k}(y)\,dy + \sum_{k,l=1}^d \int_{\mathbb{T}^d} \frac{\partial\chi^i}{\partial y_k}(y)\,a_{kl}^m(y)\,\frac{\partial\chi^j}{\partial y_l}(y)\,dy \tag{3.18}
$$
by the integration by parts formula and the fact that $\sum_k \partial_k\beta_k^m = 0$. Thus, $\bar{c}_{ij} = \bar{q}_{ij}$, and we have
$$
(A^\varepsilon)^*\xi_\varepsilon \longrightarrow (A^0)^*\xi \quad \text{weakly in } H_\lambda
$$
as $\varepsilon \to 0$, which implies
$$
(A^\varepsilon)^*\xi_\varepsilon \longrightarrow (A^0)^*\xi \quad \text{strongly in } H^{-1}
$$
by virtue of the compactness of the injection $H_\lambda \hookrightarrow H^{-1}$. □

Now, we shall prove $E[|I_i(u^\varepsilon)|] \to 0$ for each $i = 1,2,\dots,5$ one by one. Taking account of
$$
|\langle m^\varepsilon u_t^\varepsilon, \xi\rangle - \langle u_t^\varepsilon, \xi_\varepsilon\rangle| \le |m^\varepsilon u_t^\varepsilon|_0\,|\xi - (m^\varepsilon)^{-1}\xi_\varepsilon|_0,
$$
we have $E[|I_1(u^\varepsilon)|] \to 0$ in view of (3.6) and (3.16). Similarly, we can show the convergences $E[|I_i(u^\varepsilon)|] \to 0$ for $i = 2$ and $i = 4$. For $i = 3$, we have
$$
E[\,|I_3(u^\varepsilon)|\,] \le C_1\,\big|(A^\varepsilon)^*\xi_\varepsilon - (A^0)^*\xi\big|_{-1}\,E\Big[\int_0^T |u_r^\varepsilon|_1\,dr\Big] \longrightarrow 0
$$
as $\varepsilon \to 0$, by virtue of (3.1) and (3.17). Finally, we shall show $E[|I_5(u^\varepsilon)|] \to 0$ as $\varepsilon \to 0$. Since $u^\varepsilon$ and $(m^\varepsilon)^{-1}\tilde{u}^{m,\varepsilon}$ have the same law on $S$ and the equality
$$
(M_k^\varepsilon((m^\varepsilon)^{-1}\tilde{u}_r^{m,\varepsilon}), \xi_\varepsilon) - (M_k^0((m^\varepsilon)^{-1}\tilde{u}_r^{m,\varepsilon}), \xi) = (M_k^\varepsilon((m^\varepsilon)^{-1}\tilde{u}_r^{m,\varepsilon}) - M_k^\varepsilon(\tilde{u}_r), \xi_\varepsilon) + (M_k^0(\tilde{u}_r) - M_k^0((m^\varepsilon)^{-1}\tilde{u}_r^{m,\varepsilon}), \xi) + (m^\varepsilon M_k^\varepsilon(\tilde{u}_r), (m^\varepsilon)^{-1}\xi_\varepsilon - \xi) + (m^\varepsilon M_k^\varepsilon(\tilde{u}_r) - M_k^0(\tilde{u}_r), \xi)
$$
holds for $k = 1,\dots,n$, we can conclude
$$
\sum_{k=1}^n E\Big[\int_0^T \big|(M_k^\varepsilon(u_r^\varepsilon), \xi_\varepsilon) - (M_k^0(u_r^\varepsilon), \xi)\big|^2\,dr\Big] \longrightarrow 0
$$
as $\varepsilon \to 0$, by using (3.1), (3.9), (3.16), and the convergence
$$
(m^\varepsilon M_k^\varepsilon(u) - M_k^0(u), \xi) \longrightarrow 0 \quad \text{as } \varepsilon \to 0
$$
for each $u \in H_{-\lambda}$. Hence $E[|I_5(u^\varepsilon)|] \to 0$, and we have completed the proof of Theorem 2.2. □

3.3. Homogenization for the Zakai equations. In this section, we discuss a simple application of Theorem 2.2. We consider the SPDE (1.8), which includes the Zakai equation derived from the nonlinear filtering problem (1.10). The main theorem of this section is the following.

Theorem 3.3. The family of laws $\{\pi^\varepsilon ;\ \varepsilon > 0\}$ induced by the solutions of (1.8) converges weakly in $(S, \mathcal{T}_1)$ to the law $\pi$ of the SPDE
$$
\begin{cases}
du(t,x) = A^0 u(t,x)\,dt + \hat{M}^0(u(t,\cdot))(x)\,dW_t,\\
u(0,x) = u_0(x) \in L^2(\mathbb{R}^d),
\end{cases} \tag{3.19}
$$

where $A^0$ is the same operator as in (1.4), and $\hat{M}^0$ is defined by
$$
(\hat{M}_k^0 u)(x) = \int_{\mathbb{T}^d} B_k(x, y, u(x)\,m(y))\,dy
$$
for $k = 1,\dots,n$.

Proof. If we set $v^\varepsilon(t) = (m^\varepsilon)^{-1}u^\varepsilon(t)$, then (1.8) can be rewritten as
$$
\begin{cases}
dv^\varepsilon(t,x) = \hat{A}^\varepsilon v^\varepsilon(t,x)\,dt + \hat{M}^\varepsilon(v^\varepsilon(t,\cdot))(x)\,dW_t,\\
v^\varepsilon(0) = (m^\varepsilon)^{-1}u_0 \in L^2(\mathbb{R}^d),
\end{cases}
$$
where
$$
(\hat{A}^\varepsilon v)(x) = (m^\varepsilon)^{-1}(A_m^\varepsilon)^* v(x) = \sum_{i,j=1}^d a_{ij}\Big(\frac{x}{\varepsilon}\Big)\frac{\partial^2 v}{\partial x_i\,\partial x_j}(x) - \frac{1}{\varepsilon}\sum_{i=1}^d (m^{-1}\tilde\beta_i^m)\Big(\frac{x}{\varepsilon}\Big)\frac{\partial v}{\partial x_i}(x),
$$
$$
(\hat{M}_k^\varepsilon v)(x) = \frac{1}{m(x/\varepsilon)}\,B_k\big(x, x/\varepsilon, v(x)\,m(x/\varepsilon)\big),
$$
and
$$
(m^{-1}\tilde\beta_i^m)(y) = \frac{\tilde\beta_i^m(y)}{m(y)} = b_i(y) - \frac{2}{m(y)}\sum_{k=1}^d \frac{\partial a_{ik}^m}{\partial y_k}(y), \qquad y \in \mathbb{T}^d.
$$
It is easy to see that $\hat{A}^\varepsilon$, $\hat{M}^\varepsilon$ satisfy all the conditions stated in Assumption 2.1. Therefore, Theorem 2.2 implies that the family of laws induced by $u^\varepsilon = m^\varepsilon v^\varepsilon$ converges weakly in $(S, \mathcal{T}_1)$ to the law $\pi$ induced by the solution of the following SPDE:
$$
\begin{cases}
du(t,x) = \hat{A}^0 u(t,x)\,dt + \hat{M}^0(u(t,\cdot))(x)\,dW_t,\\
u(0,x) = u_0(x) \in L^2(\mathbb{R}^d),
\end{cases}
$$
where
$$
(\hat{A}^0 u)(x) = \sum_{i,j=1}^d \hat{q}_{ij}\,\frac{\partial^2 u}{\partial x_i\,\partial x_j}(x), \qquad \hat{q}_{ij} = \sum_{k,l=1}^d \int_{\mathbb{T}^d} \Big(\delta_{ik} + \frac{\partial\hat\eta^i}{\partial y_k}(y)\Big)\,a_{kl}^m(y)\,\Big(\delta_{jl} + \frac{\partial\hat\eta^j}{\partial y_l}(y)\Big)\,dy,
$$
and $\hat\eta^k$ ($k = 1,\dots,d$) are the solutions of
$$
\begin{cases}
\hat{A}^1\hat\eta^k(y) - (m^{-1}\tilde\beta_k^m)(y) = 0, & y \in \mathbb{T}^d,\\
\int_{\mathbb{T}^d} \hat\eta^k(y)\,m(y)\,dy = 0, & k = 1,\dots,d.
\end{cases}
$$
We note that $\hat\eta^k$ coincides with the solution of (3.15) up to a constant. Thus, we have only to check $\hat{q}_{ij} = \bar{q}_{ij}$ for $i,j = 1,\dots,d$, where $\bar{q} = (\bar{q}_{ij})$ is the constant matrix given by (1.5). First, using $A_m^1\hat\eta^j = \tilde\beta_j^m$, we can show
$$
\sum_{k,l=1}^d \int_{\mathbb{T}^d} \frac{\partial\hat\eta^i}{\partial y_k}(y)\,a_{kl}^m(y)\,\frac{\partial\hat\eta^j}{\partial y_l}(y)\,dy = \int_{\mathbb{T}^d} \hat\eta^i(y)\sum_{k=1}^d \beta_k^m(y)\,\frac{\partial\hat\eta^j}{\partial y_k}(y)\,dy - \int_{\mathbb{T}^d} \hat\eta^i(y)\,b_j^m(y)\,dy + 2\sum_{k=1}^d \int_{\mathbb{T}^d} \hat\eta^i(y)\,\frac{\partial a_{jk}^m}{\partial y_k}(y)\,dy,
$$
in view of the equality $\sum_k \partial_k\beta_k^m = 0$. By the symmetry of $a = (a_{ij})$ and the integration by parts formula, we have
$$
\sum_{k,l=1}^d \int_{\mathbb{T}^d} \frac{\partial\hat\eta^i}{\partial y_k}(y)\,a_{kl}^m(y)\,\frac{\partial\hat\eta^j}{\partial y_l}(y)\,dy = \frac{1}{2}\sum_{k,l=1}^d \Big\{\int_{\mathbb{T}^d} \frac{\partial\hat\eta^i}{\partial y_k}(y)\,a_{kl}^m(y)\,\frac{\partial\hat\eta^j}{\partial y_l}(y)\,dy + \int_{\mathbb{T}^d} \frac{\partial\hat\eta^j}{\partial y_k}(y)\,a_{kl}^m(y)\,\frac{\partial\hat\eta^i}{\partial y_l}(y)\,dy\Big\} = -\sum_{k=1}^d \int_{\mathbb{T}^d} a_{kj}^m(y)\,\frac{\partial\hat\eta^i}{\partial y_k}(y)\,dy - \sum_{k=1}^d \int_{\mathbb{T}^d} a_{ki}^m(y)\,\frac{\partial\hat\eta^j}{\partial y_k}(y)\,dy - \frac{1}{2}\int_{\mathbb{T}^d} \hat\eta^i(y)\,b_j^m(y)\,dy - \frac{1}{2}\int_{\mathbb{T}^d} \hat\eta^j(y)\,b_i^m(y)\,dy.
$$
Thus, the relation $\hat{q}_{ij} = \bar{q}_{ij}$ follows immediately from the fact that $\eta^k - \hat\eta^k$ is a constant, the equality $\int_{\mathbb{T}^d} b(y)\,m(y)\,dy = 0$, and (3.18). □

4. Probabilistic approach

4.1. Problem. In this section, we develop the probabilistic approach to the homogenization of SPDEs. Roughly speaking, the probabilistic approach investigates the asymptotic behavior of solutions of partial differential equations by considering the corresponding stochastic processes. The book [4] applies this approach to many kinds of PDEs, but mainly in linear cases. In the 1990s, a new method for studying homogenization of nonlinear PDEs was discovered. This method is based on the theory of backward stochastic differential equations (BSDEs) and the nonlinear Feynman-Kac formula, which gives a probabilistic interpretation of the solutions of semi-linear parabolic PDEs. We refer to [17] for the theory of BSDEs and the nonlinear Feynman-Kac formula, and to [5], [6], [15] for the probabilistic approach to the homogenization of semi-linear and quasi-linear PDEs. The literature [8] presents homogenization for semi-linear PDEs whose nonlinearity may have quadratic growth in the gradient, by applying results on weak convergence of BSDEs.

Now, let us consider the following time-reversed SPDE with parameter $\varepsilon > 0$:
$$
\begin{cases}
-du^\varepsilon(t,x) = A^\varepsilon u^\varepsilon(t,x)\,dt + f(x/\varepsilon, u^\varepsilon(t,x))\,dt + B(x/\varepsilon, u^\varepsilon(t,x))\,\overleftarrow{dW_t},\\
u^\varepsilon(T,x) = g(x) \in C_b^3(\mathbb{R}^d),
\end{cases} \tag{4.1}
$$
where $W = (W_t)_{t\in[0,T]}$ is an $n$-dimensional standard Brownian motion, and the sign $\overleftarrow{dW_t}$ denotes the backward Ito integral. $A^\varepsilon$ is the operator defined by (1.2), and $f = f(y,u)$,

B = B(y, u) are real-valued functions belonging to C_b^3(T^d × R). What we shall prove in this section is the following theorem.

Theorem 4.1. Let (Ω, F, P) be a probability space equipped with a standard Brownian motion W = (W_t), and let u^ε be the solution of (4.1) on (Ω, F, P; W). Further, we consider the solution of the following SPDE on (Ω, F, P; W) :

    −du(t, x) = A^0 u(t, x) dt + f̄(u(t, x)) dt + B̄(u(t, x)) ←dW_t ,
    u(T, x) = g(x) ∈ C_b^3(R^d) ,                                            (4.2)

where A^0 is the operator defined by (1.4), and f̄, B̄ are the functions characterized by

    f̄(u) = ∫_{T^d} f(y, u) m(y) dy ,        u ∈ R ,
    B̄_k(u) = ∫_{T^d} B_k(y, u) m(y) dy ,    u ∈ R ,

respectively. Then, for any (t, x) ∈ [0, T] × R^d, we have E[ |u^ε(t, x) − u(t, x)|^2 ] −→ 0 as ε → 0.

Remark 4.1. The assumptions on f and B ensure that there exists K_0 > 0 such that

    |f(y, u) − f(y, v)| + |B(y, u) − B(y, v)| ≤ K_0 |u − v|                  (4.3)

for every u, v ∈ R and y ∈ T^d.

Remark 4.2. If we take f ≡ 0, û^ε(t, x) = u^ε(T − t, x), and Ŵ_t = W_T − W_{T−t}, then û^ε satisfies (1.1) with M^ε(u)(x) = B(x/ε, u(x)) and the Brownian motion W = Ŵ.

4.2. Nonlinear Feynman-Kac formula. For the proof of Theorem 4.1, we adopt the concept of backward doubly stochastic differential equations (BDSDEs) introduced by Pardoux and Peng [16], and use a probabilistic interpretation of the solutions of (4.1) and (4.2). According to [16], we have the equality

    u^ε(s, X^ε_s(t, x)) = Y^ε_s(t, x) ,        0 ≤ t ≤ s ≤ T ,  x ∈ R^d ,

P-almost surely, under our assumptions on the coefficients. The symbols X^ε_s = X^ε_s(t, x) and Y^ε_s = Y^ε_s(t, x) stand for the solutions of the following system of forward and backward stochastic differential equations (FBSDEs) :

    dX^ε_s = ε^{−1} b(X^ε_s/ε) ds + σ(X^ε_s/ε) dW̃_s ,
    −dY^ε_s = f(X^ε_s/ε, Y^ε_s) ds + B(X^ε_s/ε, Y^ε_s) ←dW_s − Z^ε_s dW̃_s ,          (4.4)
    X^ε_t = x ∈ R^d ,  Y^ε_T = g(X^ε_T) ,        t ≤ s ≤ T ,

where W̃ = (W̃_t) is another n-dimensional standard Brownian motion, independent of W = (W_t), and σ = (σ_ij) is chosen so that 2a = σσ*. We note here that X^ε_s = X^ε_s(t, x) is nothing but the solution of a forward SDE with initial value x at time t, and the pair (Y^ε_s, Z^ε_s) is obtained by solving the backward doubly stochastic SDE with terminal value Y^ε_T = g(X^ε_T). Similarly, the solution of (4.2) is represented as

    u(s, X_s(t, x)) = Y_s(t, x) ,        0 ≤ t ≤ s ≤ T ,  x ∈ R^d ,

by using the following system of FBSDEs :

    dX_s = σ̄ dW̃_s ,
    −dY_s = f̄(Y_s) ds + B̄(Y_s) ←dW_s − Z_s dW̃_s ,                           (4.5)
    X_t = x ∈ R^d ,  Y_T = g(X_T) ,        t ≤ s ≤ T ,

where σ̄ = (σ̄_ij) is the constant matrix such that 2q̄ = σ̄σ̄*.

4.3. Some lemmas.

Lemma 4.1. There exists C > 0 such that

    sup_{0≤t≤s≤T} E[ |Y_s(t, x) − Y_s(t, x′)|^2 ] ≤ C |x − x′|^2

for any x, x′ ∈ R^d. In particular, we have E[ |u(t, x) − u(t, x′)|^2 ] ≤ C |x − x′|^2.

Proof. Let Y_s(t, x) and Y_s(t, x′) be the solutions of (4.5) with initial values x and x′ at time t, respectively. Then, we have

    Y_s(t, x) − Y_s(t, x′) + ∫_s^T (Z_r(t, x) − Z_r(t, x′)) dW̃_r
      = g(X_T(t, x)) − g(X_T(t, x′)) + ∫_s^T { f̄(Y_r(t, x)) − f̄(Y_r(t, x′)) } dr
        + ∫_s^T { B̄(Y_r(t, x)) − B̄(Y_r(t, x′)) } ←dW_r ,

which implies

    E[ |Y_s(t, x) − Y_s(t, x′)|^2 ] + ∫_s^T E[ |Z_r(t, x) − Z_r(t, x′)|^2 ] dr
      ≤ C_1 E[ |g(X_T(t, x)) − g(X_T(t, x′))|^2 ] + C_1 ∫_s^T E[ |f̄(Y_r(t, x)) − f̄(Y_r(t, x′))|^2 ] dr
        + C_1 ∫_s^T E[ |B̄(Y_r(t, x)) − B̄(Y_r(t, x′))|^2 ] dr
      ≤ C_2 |x − x′|^2 + C_2 ∫_s^T E[ |Y_r(t, x) − Y_r(t, x′)|^2 ] dr ,

for some C_2 > 0 which is independent of x, x′ ∈ R^d. Hence we obtain the desired result by Gronwall's lemma. □

Lemma 4.2. Let X^ε_s(t, x) be the solution of (4.4). Then there exist K > 0 and ρ > 0, independent of ε, such that

    sup_{x∈R^d} | E[ f(X^ε_s(t, x)/ε) ] | ≤ K |f|_{L^∞(T^d)} e^{−ρ(s−t)/ε²} ,        0 ≤ t < s ≤ T ,        (4.6)

for every bounded Borel function f = f(y) on T^d satisfying ∫_{T^d} f(y) m(y) dy = 0.
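Before the proof, the mechanism behind (4.6) can be seen in a toy finite-state analogue: for an ergodic Markov process, averages of functions centered with respect to the invariant law decay exponentially, with rate given by the spectral gap of the generator. A standalone sketch (a hypothetical two-state generator, not the diffusion of the lemma):

```python
import math

# Two-state Markov chain with generator Q; its invariant law is uniform,
# and f = (1, -1) has zero mean under it. Since Q f = -2 f, the semigroup
# acts as (e^{tQ} f)(x) = e^{-2t} f(x), so |E_x[f(X_t)]| = e^{-2t}:
# exponential decay with rate rho = 2, the spectral gap of Q.
Q = [[-1.0, 1.0], [1.0, -1.0]]

def expm_times_vector(Q, v, t, terms=60):
    # truncated series e^{tQ} v = sum_k t^k Q^k v / k!
    result = list(v)
    term = list(v)
    for k in range(1, terms):
        term = [sum(Q[i][j] * term[j] for j in range(2)) * t / k for i in range(2)]
        result = [result[i] + term[i] for i in range(2)]
    return result

f = [1.0, -1.0]
t = 0.7
decay = expm_times_vector(Q, f, t)
print(decay[0], math.exp(-2 * t))  # both approximately 0.2466
```

For the diffusion of the lemma the same spectral-gap mechanism, applied to the generator A_1 on the torus, produces the rate ρ in (4.6); the time change s ↦ (s − t)/ε² then yields the factor e^{−ρ(s−t)/ε²}.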

Proof. Let η = η(t; ξ) be the solution of the following SDE :

    dη(t; ξ) = b(η(t; ξ)) dt + σ(η(t; ξ)) dw(t) ,
    η(0; ξ) = ξ ∈ T^d ,

where w = w(t) is a d-dimensional standard Brownian motion. Then f(X^ε_s(t, x)/ε) and f(η((s − t)/ε²; x/ε)) have the same law. Thus, we can see (4.6) by the ergodicity of A_1 (see Section 4 of Chapter 3 in [4]). □

4.4. Proof of Theorem 4.1. Fix (t, x) ∈ [0, T] × R^d and consider the two triplets

    (X^ε_s(t, x), Y^ε_s(t, x), Z^ε_s(t, x)) ,

(Xs (t, x), Ys (t, x), Zs (t, x)) ,

which are the solutions of (4.4) and (4.5), respectively. Then, it suffices to prove the convergence E[ |Y^ε_t(t, x) − Y_t(t, x)|^2 ] −→ 0 as ε → 0.

From now on, we omit the sign (t, x) for simplicity. Since Y^ε_t is F^W_{t,T}-measurable and W = (W_t), W̃ = (W̃_t) are mutually independent, we can see

    Y^ε_t = E[g(X^ε_T)] + E[ ∫_t^T f(X^ε_r/ε, Y^ε_r) dr | F^W_{t,T} ] + E[ ∫_t^T B(X^ε_r/ε, Y^ε_r) ←dW_r | F^W_{t,T} ]
          = E[g(X^ε_T)] + ∫_t^T E[ f(X^ε_r/ε, Y^ε_r) | F^W_{r,T} ] dr + ∫_t^T E[ B(X^ε_r/ε, Y^ε_r) | F^W_{r,T} ] ←dW_r ,

and

    Y_t = E[g(X_T)] + ∫_t^T E[ f̄(Y_r) | F^W_{r,T} ] dr + ∫_t^T E[ B̄(Y_r) | F^W_{r,T} ] ←dW_r .

Then, we have

    Y^ε_t − Y_t = E[g(X^ε_T)] − E[g(X_T)]
      + ∫_t^T E[ f(X^ε_r/ε, Y^ε_r) − f(X^ε_r/ε, Y_r) | F^W_{r,T} ] dr
      + ∫_t^T E[ B(X^ε_r/ε, Y^ε_r) − B(X^ε_r/ε, Y_r) | F^W_{r,T} ] ←dW_r
      + ∫_t^T E[ f(X^ε_r/ε, Y_r) − f̄(Y_r) | F^W_{r,T} ] dr
      + ∫_t^T E[ B(X^ε_r/ε, Y_r) − B̄(Y_r) | F^W_{r,T} ] ←dW_r ,

which implies

    E[ |Y^ε_t − Y_t|^2 ] ≤ C_1 | E[g(X^ε_T)] − E[g(X_T)] |^2 + C_1 ∫_t^T E[ |Y^ε_r − Y_r|^2 ] dr
      + C_1 ∫_t^T E[ | E[ f(X^ε_r/ε, Y_r) − f̄(Y_r) | F^W_{r,T} ] |^2 ] dr
      + C_1 ∫_t^T E[ | E[ B(X^ε_r/ε, Y_r) − B̄(Y_r) | F^W_{r,T} ] |^2 ] dr

for some C_1 > 0, taking account of (4.3). Thus, we have

    E[ |Y^ε_t − Y_t|^2 ] ≤ C_2 e^{C_2(T−t)} (J^ε_1 + J^ε_2 + J^ε_3) ,

where

    J^ε_1 = | E[g(X^ε_T)] − E[g(X_T)] |^2 ,
    J^ε_2 = ∫_t^T E[ | E[ B(X^ε_r/ε, Y_r) − B̄(Y_r) | F^W_{r,T} ] |^2 ] dr ,
    J^ε_3 = ∫_t^T E[ | E[ f(X^ε_r/ε, Y_r) − f̄(Y_r) | F^W_{r,T} ] |^2 ] dr .
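The factor C_2 e^{C_2(T−t)} above comes from Gronwall's lemma applied to the integral inequality. Its discrete mechanism can be checked directly; the following standalone sketch iterates, with hypothetical constants, the recursion that saturates the inequality:

```python
import math

# Discrete Gronwall check: if u_k <= A + C * h * (u_0 + ... + u_{k-1}),
# then u_k <= A * exp(C * k * h). We iterate the recursion with equality,
# which is the worst case, and compare with the exponential bound.
A, C, T, n = 1.0, 2.0, 1.0, 10_000
h = T / n
u = A
total = 0.0
for _ in range(n):
    total += u * h        # running value of the integral term
    u = A + C * total     # saturate the inequality: u_k = A(1 + Ch)^k
bound = A * math.exp(C * T)
print(u <= bound, bound - u < 0.01 * bound)  # True True: the bound is tight
```

Here u_n = A(1 + Ch)^n ≤ A e^{CT}, which is the discrete analogue of the exponential factor in the estimate above.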

Since J^ε_1 −→ 0 as ε → 0 (see for example p. 401 of [4]), we have only to prove that J^ε_2 −→ 0 and J^ε_3 −→ 0 as ε → 0. Take a partition of [t, T], denoted by

    ∆ = { t = t_0 < t_1 < · · · < t_N = T } ,

and fix it for the moment. Then, J^ε_2 can be written as

    J^ε_2 = ∑_{k=0}^{N−1} ∫_{t_k}^{t_{k+1}} E[ | E[ B(X^ε_r/ε, Y_r) − B̄(Y_r) | F^W_{r,T} ] |^2 ] dr =: ∑_{k=0}^{N−1} I^(k) .

By using the representation Y_r = u(r, X_r), we have

    I^(k) ≤ 4 ∫_{t_k}^{t_{k+1}} E[ | E[ B(X^ε_r/ε, u(r, X_r)) − B(X^ε_r/ε, u(r, X_{t_k})) | F^W_{r,T} ] |^2 ] dr
          + 4 ∫_{t_k}^{t_{k+1}} E[ | E[ B̄(u(r, X_r)) − B̄(u(r, X_{t_k})) | F^W_{r,T} ] |^2 ] dr
          + 4 ∫_{t_k}^{t_{k+1}} E[ | E[ B(X^ε_r/ε, u(r, X_{t_k})) − B̄(u(r, X_{t_k})) | F^W_{r,T} ] |^2 ] dr
          =: 4I_1^(k) + 4I_2^(k) + 4I_3^(k) .

Let us consider I_1^(k). Since (W_t) and (W̃_t) are mutually independent, we have



    E[ | E[ B(X^ε_r/ε, u(r, X_r)) − B(X^ε_r/ε, u(r, X_{t_k})) | F^W_{r,T} ] |^2 ]
      = E_{ω′}[ | E_ω[ B(X^ε_r(ω)/ε, u(r, X_r(ω), ω′)) − B(X^ε_r(ω)/ε, u(r, X_{t_k}(ω), ω′)) ] |^2 ]
      ≤ C_1 E_{ω′}[ E_ω[ |u(r, X_r(ω), ω′) − u(r, X_{t_k}(ω), ω′)|^2 ] ]

for some C_1 > 0, where E_ω (resp. E_{ω′}) denotes the P-expectation with respect to the ω-variable (resp. the ω′-variable). Therefore, by Lemma 4.1 and Fubini's theorem, we obtain

    |I_1^(k)| ≤ C_1 ∫_{t_k}^{t_{k+1}} E_ω[ E_{ω′}[ |u(r, X_r(ω), ω′) − u(r, X_{t_k}(ω), ω′)|^2 ] ] dr
              ≤ C_2 ∫_{t_k}^{t_{k+1}} E_ω[ |X_r(ω) − X_{t_k}(ω)|^2 ] dr
              ≤ C_3 ∫_{t_k}^{t_{k+1}} (r − t_k) dr = (C_3/2) (t_{k+1} − t_k)^2 .

Similarly, we can show |I_2^(k)| ≤ C_4 (t_{k+1} − t_k)^2. Therefore, we have

    J^ε_2 = 4 ∑_{k=0}^{N−1} (I_1^(k) + I_2^(k) + I_3^(k)) ≤ C_5 ∑_{k=0}^{N−1} (t_{k+1} − t_k)^2 + 4 ∑_{k=0}^{N−1} I_3^(k)
          ≤ C_5 (T − t) |∆| + 4 ∑_{k=0}^{N−1} I_3^(k) ,

where |∆| = max{ |t_{k+1} − t_k| ; 0 ≤ k ≤ N − 1 }.

Now, take δ > 0 arbitrarily, and fix a partition ∆ so that |∆| < C_5 δ/T. Then, it suffices to prove |I_3^(k)| −→ 0 as ε → 0 for each k. First, by the Markov property of X^ε_·

and X_·, we have

    | E_ω[ B(X^ε_r(ω)/ε, u(r, X_{t_k}(ω), ω′)) − B̄(u(r, X_{t_k}(ω), ω′)) ] |
      = | E_ω[ E_{ω″}[ B(X^ε_{r−t_k}(0, x, ω″)/ε, u(r, x′, ω′)) − B̄(u(r, x′, ω′)) ]_{x = X^ε_{t_k}(ω), x′ = X_{t_k}(ω)} ] |
      ≤ K e^{−ρ(r−t_k)/ε²} sup_{y∈T^d} | B(y, u(r, X_{t_k}(ω), ω′)) |

in view of (4.6). Therefore, we obtain

    |I_3^(k)| ≤ K ∫_{t_k}^{t_{k+1}} e^{−ρ(r−t_k)/ε²} E_{ω′}[ E_ω[ sup_{y∈T^d} |B(y, u(r, X_{t_k}(ω), ω′))| ] ] dr −→ 0

as ε → 0. Thus, we finally get J^ε_2 → 0. Similarly, we can show J^ε_3 → 0 as ε → 0. Hence the proof is complete. □

Acknowledgment. I would like to express my sincere thanks to the referee for his careful reading.

References

[1] Bensoussan, A. (1991) "Homogenization of a class of stochastic partial differential equations", Progr. Nonlinear Differential Equations Appl. 5, 47-65 (Birkhäuser, Boston).
[2] Bensoussan, A. (1992) Stochastic control of partially observable systems (Cambridge University Press, Cambridge).
[3] Bensoussan, A. and Blankenship, G.L. (1986) "Nonlinear filtering with homogenization", Stochastics 17, 67-90.
[4] Bensoussan, A., Lions, J.L. and Papanicolaou, G. (1978) Asymptotic analysis for periodic structures (North-Holland, New York).
[5] Buckdahn, R. and Hu, Y. (1998) "Probabilistic approach to homogenization of quasilinear parabolic PDEs with periodic structure", Nonlinear Analysis, Theory, Methods & Applications 32, 609-619.
[6] Buckdahn, R., Hu, Y. and Peng, S. (1999) "Probabilistic approach to homogenization of viscosity solutions of parabolic PDEs", Nonlinear Differ. Equ. Appl. 6, 395-411.
[7] Funaki, T. (1995) "The scaling limit for a stochastic PDE and the separation of phases", Probab. Theory Relat. Fields 102, 221-288.
[8] Gaudron, G. and Pardoux, E. (2001) "EDSR, convergence en loi et homogénéisation d'EDP paraboliques semi-linéaires", Ann. Inst. H. Poincaré, Probab. Statist. 37, no. 1, 1-42.
[9] Holley, R.A. and Stroock, D.W. (1978) "Generalized Ornstein-Uhlenbeck processes and infinite particle branching Brownian motions", Publ. RIMS, Kyoto Univ. 14, 741-788.
[10] Lions, J.L. (1961) Équations différentielles opérationnelles et problèmes aux limites (Springer-Verlag, Berlin).
[11] Métivier, M. (1988) Stochastic partial differential equations in infinite dimensional spaces (Quaderni, Scuola Normale Superiore).
[12] Papanicolaou, G. (1978) "Asymptotic analysis of stochastic equations", MAA Stud. Math. 18, 111-179.
[13] Papanicolaou, G., Stroock, D. and Varadhan, S.R.S. (1977) Martingale approach to some limit theorems, Duke Turbulence Conference Paper 6 (Duke Univ., Durham).
[14] Pardoux, E. (1979) "Stochastic partial differential equations and filtering of diffusion processes", Stochastics 3, 127-167.
[15] Pardoux, E. (1999) "Homogenization of linear and semilinear second order parabolic PDEs with periodic coefficients", Journal of Functional Analysis 167, 498-520.
[16] Pardoux, E. and Peng, S. (1994) "Backward doubly stochastic differential equations and systems of quasilinear SPDEs", Probab. Theory Relat. Fields 98, 209-227.
[17] Peng, S. (1991) "Probabilistic interpretation for systems of quasilinear parabolic partial differential equations", Stochastics and Stochastics Reports 37, 61-74.

Graduate School of Mathematical Sciences, University of Tokyo, 3-8-1, Komaba, Meguro-ku, Tokyo 135-8914, Japan.
E-mail address: [email protected]

