1 Stochastic Boundedness in Biological Models

Raffaello Seri¹ and Christine Choirat²

¹ CREST-LFA, Timbre J320, 15 bd Gabriel Péri, 92245 MALAKOFF CEDEX, FRANCE, Homepage: http://www.crest.fr/pageperso/lfa/seri/seri.htm Email: [email protected]
² Centre de Recherche Viabilité, Jeux, Contrôle, Université Paris Dauphine, 75775 PARIS CEDEX 16, FRANCE, Homepage: http://viab.dauphine.fr/~choirat Email: [email protected]
Abstract. In this article¹, we give sufficient conditions for stochastic boundedness in population models.
1.1 Some Definitions
Consider an ecosystem described by the sizes of K populations. It is customary, in the analysis of ecosystems, to identify the persistence of the trophic links among the different species with the convergence of the population sizes towards a strictly positive random vector. This requirement is actually too strong and rules out some interesting asymptotic behaviors, such as seasonality driven by deterministic external cycles (e.g. seasons) or even explosive behavior. Stochastic boundedness is a concept introduced by Chesson (see e.g. Chesson and Ellner, 1989) that relaxes this requirement. A vector of population sizes X_t is stochastically bounded if it is bounded from below in distribution by a strictly positive random variable. Species i is said to be stochastically bounded persistent if there is a strictly positive random variable V^i such that X_t^i is stochastically larger than V^i for all t:

$$P\{X_t^i \geq x\} \geq P\{V^i \geq x\}, \qquad \forall x \in \mathbb{R}_+,\ t \in \mathbb{N}^*.$$
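As an illustrative numerical check of this ordering (not part of the original text), one can compare empirical survival functions of two lognormal laws for which the dominance P{X ≥ x} ≥ P{V ≥ x} holds analytically; the distributions and threshold grid below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative check of the stochastic ordering P{X >= x} >= P{V >= x}.
# X ~ lognormal(0, 1) dominates V ~ lognormal(-1, 1), since
# P{X >= x} = Phibar(ln x) >= Phibar(ln x + 1) = P{V >= x} for all x > 0.
n = 100_000
X = rng.lognormal(mean=0.0, sigma=1.0, size=n)
V = rng.lognormal(mean=-1.0, sigma=1.0, size=n)

grid = np.linspace(0.01, 10.0, 200)
surv_X = np.array([(X >= g).mean() for g in grid])
surv_V = np.array([(V >= g).mean() for g in grid])

# Up to Monte Carlo error, dominance holds at every threshold on the grid.
violations = int((surv_X + 0.01 < surv_V).sum())
print(violations)
```

The 0.01 slack absorbs Monte Carlo noise in the empirical survival functions.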
According to Proposition 1, any first-order Markovian system (X_t)_{t∈N}, where X_t takes its values in a Borel space X, admits a representation in terms of iterated random functions:

$$X_t = f(X_{t-1}, U_t). \tag{1.1}$$

In the case of a vector of population sizes X_t = (X_t^1, ..., X_t^K), we have:

$$X_t^1 = f^1\left(X_{t-1}^1, \dots, X_{t-1}^K, U_t\right), \quad \dots, \quad X_t^K = f^K\left(X_{t-1}^1, \dots, X_{t-1}^K, U_t\right).$$

Remark that (U_t)_{t∈N} is a sequence of independent and identically distributed random variables: indeed, this derives from the very construction of the iterated random function representation. It is not necessary to suppose that (U_t)_{t∈N} represents the effect of the environment to obtain serial independence, as done in Chesson and Ellner (1989). The result we are going to develop also holds for deterministic models, since in this case the dependence of the function f on U_t is simply removed.

¹ A more complete version can be retrieved from the homepages of the authors.
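A minimal sketch of such a componentwise representation, using hypothetical two-species stochastic Ricker dynamics (the growth rates and interaction matrix below are illustrative assumptions, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-species stochastic Ricker model written in the
# componentwise form X_t^j = f^j(X_{t-1}^1, ..., X_{t-1}^K, U_t).
# Growth rates r and interaction matrix A are illustrative assumptions.
r = np.array([1.2, 0.8])
A = np.array([[0.02, 0.01],
              [0.01, 0.03]])

def f(x, u):
    """One step of the iterated random function representation."""
    return x * np.exp(r + u - A @ x)

x = np.array([10.0, 10.0])
for _ in range(500):
    u = rng.normal(0.0, 0.1, size=2)   # iid environmental shocks U_t
    x = f(x, u)
print(x.shape, bool((x > 0).all()))
```

Each coordinate update reads all K previous sizes, exactly as in the displayed system above.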
1.2 A General Result on Stochastic Boundedness
In this Section, we derive a general result on stochastic boundedness. The main steps of the method are the following:

1. We show that any ecological model (X_t)_{t∈N}, taking its values in (R_+)^K, can be transformed into a new model (X̄_t)_{t∈N} taking its values in [−∞, +∞)^K.
2. We build a new process (X̄_t^{(k)})_{t∈N} taking its values in [−∞, +∞)^K: this process is described by an iterated random function representation (1.1) possessing a Lipschitz property.
3. We show that when (X̄_t^{(k)})_{t∈N} has an invariant distribution, then the model (X_t)_{t∈N} is stochastically bounded. To prove that (X̄_t^{(k)})_{t∈N} has an invariant distribution, we use a recent result of Diaconis and Freedman (1999).

Now we pass to the Theorem implementing this principle.

Theorem 1. Let (X_t)_{t∈N} be a population process taking its values in R_+^K and described by equation (1.1). Then, stochastic boundedness holds if there exist constants k_1, ..., k_K, with 0 ≤ k_j < 1, and a point z with strictly positive coordinates such that:

$$\mathrm{E} \ln \left( 1 + \sup_{j=1,\dots,K} \left| \inf_{w \in \mathbb{R}_+^K} \left\{ \ln \frac{f^j(w, U)}{z_j} + k_j \cdot \sup_{h=1,\dots,K} \left| \ln \frac{z_h}{w_h} \right| \right\} \right| \right) < +\infty.$$

In particular, this holds if, for any j = 1, ..., K:

$$\mathrm{E} \left| \inf_{w \in \mathbb{R}_+^K} \left\{ \ln \frac{f^j(w, U)}{z_j} + k_j \cdot \sup_{h=1,\dots,K} \left| \ln \frac{z_h}{w_h} \right| \right\} \right| < +\infty.$$
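For the scalar case K = 1, the moment condition of the Theorem can be estimated by Monte Carlo. The sketch below uses the map f(w, u) = u · w^β with β = k = 0.5 and reference point z = 1 (all illustrative choices), approximating the inner infimum over w by a minimum over a log-spaced grid:

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo sketch of the K = 1 moment condition of Theorem 1 for the
# map f(w, u) = u * w**beta, with Lipschitz constant k = beta = 0.5 and
# reference point z = 1 (illustrative assumptions).
beta, k, z = 0.5, 0.5, 1.0
u = rng.uniform(size=5_000)           # noise supported on (0, 1)

# Inner infimum over w, approximated on a log-spaced grid.
w = np.logspace(-6.0, 6.0, 400)
inner = np.log(np.outer(u, w**beta) / z) + k * np.abs(np.log(z / w))
inf_w = inner.min(axis=1)

moment = np.log1p(np.abs(inf_w)).mean()
print(bool(np.isfinite(moment)))
```

For these parameters the infimum reduces to ln u, so the estimate approximates E ln(1 + |ln U|), which is finite for U uniform on (0, 1).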
Remark 1. (i) Any other norm can be used instead of the max norm; however, our choice is motivated by the fact that this norm guarantees a small Lipschitz constant. (ii) The only requirement on the error term U is serial independence; otherwise, it can be scalar or vector valued.

As an example of the application of the previous Theorem, consider a population (X_t)_{t∈N} characterized by the evolution equation X_{t+1} = θ_t · X_t^β, where X_0 is a strictly positive random variable and (θ_t)_{t∈N} is a sequence of independent and identically distributed random variables, independent of X_0, such that P{θ_t = 0} = 0. Clearly, when β = 1 and θ_t has support (0, 1), the model is not stochastically bounded persistent. However, it is stochastically bounded if −1 < β < 1 and E ln(1 + |ln θ_t|) < +∞. Therefore, if −1 < β < 1, θ_t can even be uniformly distributed on (0, 1).
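The contrast between β = 1 and |β| < 1 in this example is visible in simulation; the sketch below compares the two regimes for θ_t uniform on (0, 1):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulation of X_{t+1} = theta_t * X_t**beta with theta_t ~ U(0, 1).
# beta = 1: ln X_t is a random walk with negative drift (no boundedness);
# |beta| < 1: ln X_t is a stable AR(1) recursion (stochastically bounded).
n, T = 10_000, 400

def simulate(beta):
    x = np.full(n, 1.0)
    for _ in range(T):
        x = rng.uniform(size=n) * x ** beta
    return x

x_rw = simulate(1.0)   # collapses towards zero
x_ar = simulate(0.5)   # fluctuates around exp(-2)
print(np.median(x_rw) < np.median(x_ar))
```

With β = 1 the median population size decays geometrically to zero, while with β = 0.5 it stabilizes at a strictly positive level.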
References

1. Chesson, P.L. and Ellner, S. (1989). Invasibility and stochastic boundedness in monotonic competition models, Journal of Mathematical Biology, 27, pp. 45-76.
2. Kallenberg, O. (1997). Foundations of Modern Probability, Springer-Verlag, New York.
3. Diaconis, P. and Freedman, D. (1999). Iterated random functions, SIAM Review, 41, pp. 45-76.
A Mathematical Appendix

A.1 Iterated Random Function Representation
It is customary to describe a biological system as a function of the form:

$$X_t = f_t(X_{t-1}, U_t), \tag{1.2}$$

where f_t is allowed to depend on the time period t and U_t is a stochastic term that stands for environmental (see e.g. Chesson and Ellner, 1989) or demographic variability, or for observational and measurement error. We call such a representation the iterated random function representation (irfr). Deterministic models can clearly also be cast in this framework, but then the function f_t does not depend on U_t. However, it is less clear what the relevance of representation (1.2) is in stochastic biological models. The following Theorem (Proposition 7.6 in Kallenberg, 1997, p. 122) shows that almost any Markov model of biological relevance admits an irfr. It allows for writing a Markov chain as a randomized discrete-time dynamical system.
Proposition 1. Let (X_t)_{t∈N} be a process on N with values in a Borel space X. Then X is Markov if and only if there exist some measurable functions (that are not unique) f_1, f_2, ... : X × [0, 1] → X and iid U[0, 1] random variables U_1, U_2, ... ⊥ X_0 such that X_t = f_t(X_{t-1}, U_t) a.s. for all t ∈ N. We may choose f_1 = f_2 = ... = f if and only if X is time-homogeneous.
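Proposition 1 can be illustrated by encoding a transition kernel as f(x, u) with u ~ U[0, 1], in the spirit of the quantile construction; the chain below is a hypothetical birth-death-type example, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(4)

# A birth-death-type chain on {0, 1, 2, ...} encoded as X_t = f(X_{t-1}, U_t)
# with U_t ~ U[0, 1]. The transition probabilities are illustrative
# assumptions chosen only to demonstrate the representation.
def f(x, u):
    """Move up w.p. 0.3; down w.p. 0.5 when x > 0; stay otherwise."""
    if u < 0.3:
        return x + 1
    if u < 0.8 and x > 0:
        return x - 1
    return x

x = 5
for _ in range(1_000):
    x = f(x, rng.uniform())
print(x >= 0)
```

A single deterministic map plus iid uniforms reproduces the whole kernel, and time-homogeneity corresponds to using the same f at every step.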
A.2 Lipschitz Approximations
Given a function f : E → R and a real k ≥ 0, the k-Lipschitz approximation of f is defined by:

$$f^{(k)}(x) \triangleq \inf_{y \in E} \{ f(y) + k \cdot \rho(x, y) \}, \qquad k \geq 0.$$

When k varies, we get:

$$f^{(0)}(x) = \inf_{y \in E} f(y) \qquad \text{and} \qquad \lim_{k \to \infty} f^{(k)}(x) \leq f(x).$$
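A numerical sketch of the k-Lipschitz approximation, applied to a unit step function on a grid (an illustrative choice); the infimum over y is replaced by a minimum over grid points:

```python
import numpy as np

# Numerical k-Lipschitz approximation f_k(x) = inf_y { f(y) + k*rho(x, y) }
# of a unit step function on a grid (illustrative example).
def lipschitz_approx(f_vals, grid, k):
    # Pairwise costs f(y) + k*|x - y|, minimized over y for each x.
    cost = f_vals[None, :] + k * np.abs(grid[:, None] - grid[None, :])
    return cost.min(axis=1)

grid = np.linspace(-2.0, 2.0, 401)
f_vals = np.where(grid < 0.0, 0.0, 1.0)   # step from 0 to 1 at x = 0

fk = lipschitz_approx(f_vals, grid, k=1.0)

# The approximation lies below f and is 1-Lipschitz on the grid.
below = bool((fk <= f_vals + 1e-12).all())
max_slope = float((np.abs(np.diff(fk)) / np.diff(grid)).max())
print(below, max_slope <= 1.0 + 1e-9)
```

The discontinuous step is replaced by a continuous ramp of slope at most k that never exceeds the original function.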
In particular, lim_{k→∞} f^{(k)}(·) is the lower semicontinuous regularization of f. It is simple to show that the k-Lipschitz approximation is k-Lipschitz and therefore continuous:

$$\begin{aligned}
\left| f^{(k)}(x) - f^{(k)}(z) \right| &= \left| \inf_{y \in E} \{ f(y) + k \cdot \rho(x, y) \} - \inf_{y \in E} \{ f(y) + k \cdot \rho(z, y) \} \right| \\
&= \left| \inf_{y \in E} \{ f(y) + k \cdot \rho(x, y) \} + \sup_{y \in E} \{ -f(y) - k \cdot \rho(z, y) \} \right| \\
&\leq k \cdot \sup_{y \in E} | \rho(x, y) - \rho(z, y) | \leq k \cdot \rho(x, z),
\end{aligned} \tag{1.3}$$

and moreover, taking y = x in the infimum:

$$\inf_{y \in E} f(y) = f^{(0)}(x) \leq f^{(k)}(x) = \inf_{y \in E} \{ f(y) + k \cdot \rho(x, y) \} \leq f(x), \qquad \forall x \in E. \tag{1.4}$$

A.3 Convergence Results for Iterated Random Functions
The following Theorem is basically Theorem 5.1 of Diaconis and Freedman (1999), but is based also on some of their subsequent Remarks. Let (X, ρ) be a Polish space (that is, a complete separable metric space). Then we define:

$$K_f(u) \triangleq \sup_{(x, y) \in \mathsf{X}^2,\, x \neq y} \frac{\rho[f(x, u), f(y, u)]}{\rho(x, y)}.$$

Therefore the distribution of K_f is generated by the distribution of U through the relation

$$P\{K_f(U) \leq k\} = \lambda \left\{ u : \rho[f(x, u), f(y, u)] \leq k \rho(x, y), \ \forall (x, y) \in \mathsf{X}^2 \right\},$$

where λ is Lebesgue measure.
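For an affine contraction f(x, u) = ax + u the ratio defining K_f(u) is constant over all pairs x ≠ y, so K_f(u) = a for every u; the following sketch checks this numerically (the map is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(5)

# For f(x, u) = 0.5*x + u on (R, |.|), the ratio
# rho(f(x, u), f(y, u)) / rho(x, y) equals 0.5 for every pair x != y,
# hence K_f(u) = 0.5 for all u.
def f(x, u):
    return 0.5 * x + u

u = 0.3                           # any fixed noise value
x = rng.normal(size=1_000)
y = rng.normal(size=1_000)
ratios = np.abs(f(x, u) - f(y, u)) / np.abs(x - y)
max_dev = float(np.abs(ratios - 0.5).max())
print(max_dev < 1e-6)
```

For nonlinear maps the supremum must be estimated, e.g. over many sampled pairs, and K_f(u) may genuinely depend on u.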
Theorem 2. Suppose µ is a probability on the Lipschitz functions. Suppose further that:
(i) ln[1 + K_f(U)] ∈ L¹,
(ii) ∃x_0 : ln[1 + ρ(f(x_0, U), x_0)] ∈ L¹,
(iii) ∫_{[0,1]} ln K_f(u) µ(du) ∈ [−∞, 0).
Consider a Markov chain on X that moves according to µ. Then, there is a unique invariant probability π.
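Theorem 2 can be illustrated with the affine map f(x, u) = ax + u, a < 1, Gaussian noise: here K_f ≡ a, conditions (i)-(iii) hold, and the chain forgets its initial condition, its law approaching the stationary AR(1) distribution (the parameters below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)

# f(x, u) = 0.9*x + u with u ~ N(0, 1): K_f = 0.9 < 1, so (i)-(iii) hold
# and the unique invariant law is the AR(1) stationary distribution
# N(0, 1/(1 - 0.81)).
a, T, n = 0.9, 1_500, 5_000
x = np.full(n, 100.0)            # deliberately far-away initial condition
for _ in range(T):
    x = a * x + rng.normal(size=n)

target_var = 1.0 / (1.0 - a**2)  # stationary variance, about 5.26
print(round(float(x.mean()), 2), round(float(x.var()), 2))
```

After the transient dies out, the empirical mean and variance of the n paths match the invariant law up to Monte Carlo error.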
A.4 Proof of Theorem 1
Proof. Consider the function X_t = f(X_{t-1}, U_t). The random vectors X_t take their values in (R_+)^K, where K is the number of species. First of all, we write ln and exp to indicate the componentwise logarithmic and exponential transformations applied to a vector, and we transform the vector X_t and the function f in order to extend their support to [−∞, +∞)^K:

$$\bar{X}_t = \left[ \bar{X}_t^1 \cdots \bar{X}_t^K \right]^T = \ln X_t = \left[ \ln X_t^1 \cdots \ln X_t^K \right]^T, \qquad \bar{f}(\cdot, u) = \ln f(\exp \cdot\,, u),$$

$$\bar{f}(\bar{X}_t, u) = \left[ \bar{f}^1(\bar{X}_t, u) \cdots \bar{f}^K(\bar{X}_t, u) \right]^T = \left[ \ln f^1(X_t, u) \cdots \ln f^K(X_t, u) \right]^T.$$

Therefore, we can write ln X_t = ln f(exp ln X_{t-1}, U_t) and X̄_t = f̄(X̄_{t-1}, U_t). In the following, we will endow the space (R ∪ {−∞})^K, in which X̄_t takes its values, with the pseudodistance ρ(x, y) = ‖x − y‖ = sup_{j=1,...,K} |x_j − y_j|. Now, for the j-th component of the function f̄, we take the k_j-Lipschitz approximation to get the new function:

$$\bar{f}^{(k)}(\cdot, u) = \left[ \bar{f}^{1,(k_1)}(\cdot, u) \cdots \bar{f}^{K,(k_K)}(\cdot, u) \right]^T,$$

where:

$$\bar{f}^{j,(k_j)}(x, u) = \inf_{y \in \mathbb{R}^K} \left\{ \bar{f}^j(y, u) + k_j \cdot \| x - y \| \right\}, \qquad k = (k_1, \dots, k_K)^T.$$

Let (X̄_t^{(k)})_{t∈N} be the process defined by the recurrence equation:

$$\bar{X}_t^{(k),j} = \bar{f}^{j,(k_j)} \left( \bar{X}_{t-1}^{(k)}, U_t \right), \qquad j = 1, \dots, K, \quad t \in \mathbb{N}.$$
If we fix the initial conditions X̄_0^{(k)} = X̄_0, then by equation (1.4) it is simple to show that there exists a negligible set N such that, for any j = 1, ..., K and for any ω ∈ Ω\N:

$$\bar{X}_1^{(k_j),j}(\omega) = \bar{f}^{j,(k_j)} \left( \bar{X}_0(\omega), U_1(\omega) \right) \leq \bar{X}_1^j(\omega) = \bar{f}^j \left( \bar{X}_0(\omega), U_1(\omega) \right).$$

By induction on t, we have, P-a.s., for any j = 1, ..., K, X̄_t^{(k_j),j} ≤ X̄_t^j and exp X̄_t^{(k_j),j} ≤ exp X̄_t^j = X_t^j. If (X̄_t^{(k)})_{t∈N} is R^K-valued and has an invariant distribution whose support is in R^K, then (exp X̄_t^{(k)})_{t∈N} is R_+^K-valued and has an invariant distribution whose support is in R_{++}^K, and (X_t)_{t∈N} is stochastically bounded.

Therefore, we want to apply Theorem 2 to the process (X̄_t^{(k)})_{t∈N}. By equation (1.3), f̄^{(k)}(·, u) is (sup_j k_j)-Lipschitz, and conditions (i) and (iii) in the statement of Theorem 2 are trivially verified if 0 ≤ sup_j k_j < 1. Condition (ii) can be restated in the equivalent form: there exists a point x ∈ R^K such that

$$\int_{[0,1]} \ln \left\{ 1 + \left\| \bar{f}^{(k)}(x, u) - x \right\| \right\} \lambda(du)$$
is finite, where λ is Lebesgue measure. Now, we show that the choice of the point x ∈ R^K is indifferent in this equation. Indeed, take a different point y ∈ R^K and remark that ln(1 + ‖·‖) is a distance too and satisfies the triangular inequality:

$$\begin{aligned}
& \int_{[0,1]} \ln \left\{ 1 + \left\| \bar{f}^{(k)}(y, u) - y \right\| \right\} \lambda(du) \\
&= \int_{[0,1]} \ln \left\{ 1 + \left\| \bar{f}^{(k)}(y, u) - \bar{f}^{(k)}(x, u) + \bar{f}^{(k)}(x, u) - x + x - y \right\| \right\} \lambda(du) \\
&\leq \int_{[0,1]} \ln \left\{ 1 + \left\| \bar{f}^{(k)}(y, u) - \bar{f}^{(k)}(x, u) \right\| \right\} \lambda(du) + \int_{[0,1]} \ln \left\{ 1 + \left\| \bar{f}^{(k)}(x, u) - x \right\| \right\} \lambda(du) \\
&\quad + \int_{[0,1]} \ln \left\{ 1 + \left\| x - y \right\| \right\} \lambda(du) \\
&\leq \int_{[0,1]} \ln \left\{ 1 + \left\| \bar{f}^{(k)}(x, u) - x \right\| \right\} \lambda(du) + \ln \left\{ 1 + k \cdot \| x - y \| \right\} + \ln \left\{ 1 + \| x - y \| \right\}.
\end{aligned}$$
Now, we can write this expression in terms of the original process:

$$\begin{aligned}
& \int_{[0,1]} \ln \left\{ 1 + \left\| \bar{f}^{(k)}(x, u) - x \right\| \right\} \lambda(du) = \int_{[0,1]} \ln \left( 1 + \sup_{j=1,\dots,K} \left| \bar{f}^{j,(k_j)}(x, u) - x_j \right| \right) \lambda(du) \\
&= \int_{[0,1]} \ln \left( 1 + \sup_{j=1,\dots,K} \left| \inf_{y \in \mathbb{R}^K} \left\{ \bar{f}^j(y, u) - x_j + k_j \cdot \sup_{h=1,\dots,K} |x_h - y_h| \right\} \right| \right) \lambda(du) \\
&= \int_{[0,1]} \ln \left( 1 + \sup_{j=1,\dots,K} \left| \inf_{w \in \mathbb{R}_+^K} \left\{ \ln f^j(w, u) - \ln z_j + k_j \cdot \sup_{h=1,\dots,K} |\ln z_h - \ln w_h| \right\} \right| \right) \lambda(du) \\
&= \int_{[0,1]} \ln \left( 1 + \sup_{j=1,\dots,K} \left| \inf_{w \in \mathbb{R}_+^K} \left\{ \ln \frac{f^j(w, u)}{z_j} + k_j \cdot \sup_{h=1,\dots,K} \left| \ln \frac{z_h}{w_h} \right| \right\} \right| \right) \lambda(du),
\end{aligned}$$

where we have taken x = ln(z). □