Northwest Corner and Banded Matrix Approximations to a Markov Chain

Yiqiang Q. Zhao, W. John Braun, Wei Li
Department of Mathematics and Statistics, University of Winnipeg, Winnipeg, Manitoba, Canada R3B 2E9

Received January 1998; revised July 1998; accepted 2 September 1998

Abstract: In this paper, we consider approximations to discrete time Markov chains with countably infinite state spaces. We provide a simple, direct proof for the convergence of certain probabilistic quantities when one uses a northwest corner or a banded matrix approximation to the original probability transition matrix. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 187–197, 1999

1. INTRODUCTION

Consider a discrete time Markov chain with an infinite state space and stochastic transition matrix P. In this paper, we provide simple proofs of convergence when a northwest corner or a banded matrix is used to approximate certain measures associated with P. Our treatment is unified in the sense that these probabilistic quantities are well defined for both ergodic and nonergodic Markov chains, and all results are valid for both approximation methods. Our proofs are simple in the sense that they depend on only one theorem from analysis: Weierstrass' M-test. Our results include the convergence of stationary probability distributions when the Markov chain is ergodic. This work was directly motivated by the need to compute stationary probabilities for infinite-state Markov chains, but applications need not be limited to this. Computationally, when we solve for the stationary distribution of a countable-state Markov chain, the transition probability matrix has to be (i) truncated in some way to a finite matrix, or (ii) banded in some way such that the computer implementation is finite. The second method is highly recommended for preserving properties of structured probability transition matrices. Two questions naturally arise here: (i) in which ways can we truncate or band the transition matrix, and (ii) for a selected truncation or banded restriction, does the solution approximate the original probability distribution? By truncation methods, we refer to all methods where the northwest corner of the infinite transition matrix is considered and appropriate values (which may be all 0) are added to certain entries of the northwest corner. If the resulting matrix is required to be stochastic, then the methods are often referred to as augmentations. If the resulting matrix is the northwest corner itself, the method is called the northwest corner approximation.

Wei Li is also with the Institute of Applied Mathematics, Chinese Academy of Sciences.
Correspondence to: W.J. Braun
Contract grant sponsor: Natural Sciences and Engineering Research Council of Canada

Approximating the stationary probabilities of an infinite Markov chain in terms of augmentations was initially studied by Seneta [21]. Most of the current results by him or from his collaboration with other researchers are included in a paper by Gibson and Seneta [4]. Other researchers include Wolf [27], who used a different approach from that of Seneta et al.; Heyman [9], who provided a probabilistic treatment of the problem; and Grassmann and Heyman [7], who justified convergence for a class of block-structured Markov chains. There are relatively few references comparing the various methods with regard to truncation errors or convergence speed. In many cases, the last column augmentation produces the minimal truncation error [4] or a faster rate of convergence [7], while the first column augmentation does the worst. That is the main reason why the last column augmentation is often preferred. However, the convergence of the probabilities cannot always be guaranteed (for example, [4] and [27]). The censoring method provides the minimal error in all cases [28] but is usually difficult to implement. This paper introduces a new alternative: the northwest corner approximation. We prove that the convergence results hold for all irreducible Markov chains. We do not make any assumptions on the northwest corner, such as the irreducibility of the truncation, which is usually required by augmentations. The northwest corner approximation is one of the easiest methods to implement. Numerical experiments carried out by us show that the truncation errors from using this method are almost identical to those obtained using the last column augmentation. A banded restriction of a transition matrix is the banded matrix which consists of the $n_1$ superdiagonals and $n_2$ subdiagonals of the original transition matrix.
The entries outside the bands are assigned the value 0, and appropriate values (maybe all zeros) are added to the nonzero entries in a certain way. In the literature (see below), the resulting matrix is required to be stochastic. As can be seen in the example at the end of the paper, this is not always a good idea. We propose to leave the matrix unaugmented, and hence possibly substochastic. A banded stochastic matrix is often used by researchers for two reasons: (i) to keep the computer implementation finite (for example, see [7], [10], and [11], among others); and (ii) to allow the resulting matrix to inherit properties from the structured original transition matrix, including recursive formulas [15], or to overcome technical difficulties (for example, see Theorem 5.4 of [14]). Since this method has been widely adopted by many researchers, a unified justification of using this method seems of some interest. The limit theorem for the banded matrix approximation in this paper provides a formal justification for the first time. An intuitive justification for using a banded matrix is as follows. Since the row sums of P are one, we can always choose $n_1$ and $n_2$ such that all entries outside the bands will be indistinguishable from zero up to machine precision. In practice, $n_1$ and $n_2$ could be much smaller and still achieve the required precision. This justification can be found in the Discussion of Assumptions in Grassmann and Heyman [7]. This justification suggests exactly the method used in this paper, but not just any of the banded stochastic matrices will do. In fact, we cannot always justify the use of a banded stochastic matrix, since the resulting Markov chain could be transient, with no stationary probability distribution, while the original one is ergodic (see the example provided in Section 3). By contrast, the method proposed in this paper is always valid. The foregoing discussion concerns the stationary distribution of an ergodic Markov chain.
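The two finite representations just described can be sketched directly. The following Python fragment is an invented illustration (the random walk chain and all helper names are assumptions, not from the paper); it builds the northwest corner and a banded restriction, leaving both substochastic rather than augmenting them back to stochastic:

```python
# Illustrative random walk on {0, 1, 2, ...}: up with probability 0.3,
# down with probability 0.7 (held at 0). An invented example chain.
def p(i, j):
    if i == 0:
        return 0.7 if j == 0 else (0.3 if j == 1 else 0.0)
    if j == i + 1:
        return 0.3
    if j == i - 1:
        return 0.7
    return 0.0

def northwest_corner(n):
    """The n x n northwest corner, left substochastic (no augmentation)."""
    return [[p(i, j) for j in range(n)] for i in range(n)]

def banded(n, m):
    """Banded restriction viewed through an n x n window: entries farther
    than m - 1 from the diagonal are set to 0, again with no augmentation."""
    return [[p(i, j) if abs(i - j) < m else 0.0 for j in range(n)]
            for i in range(n)]

P5 = northwest_corner(5)
B5 = banded(5, 2)
row_sums = [sum(row) for row in P5]  # last row sums to 0.7 < 1: substochastic
```

The point of the sketch is only that truncation (and banding) silently discards probability mass; the paper's proposal is to accept the resulting substochastic matrix as it stands.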
In this paper, we define four important measures for any Markov chain, ergodic or nonergodic, and our proofs of convergence are valid for all four measures for any irreducible Markov chain. Under ergodic conditions, one of these four measures is equivalent to the stationary distribution. Throughout the paper, we will assume P is an irreducible countable-state discrete time Markov chain. Since the measures we study in the paper are independent of the initial states of the


Markov chain, we will not distinguish between the transition matrix of the Markov chain and the Markov chain itself. We allow a substochastic matrix to be the transition probability matrix of a transient Markov chain. However, we always assume that the original matrix P is stochastic. For the Markov chain $P = (p_{i,j})_{i,j=0,1,2,\ldots}$, we define four measures:

1. $a_{0,j}$, the expected number of visits to state $j$ before the process returns to state 0, given that it starts in state 0 ($j = 1, 2, \ldots$);
2. $b_{i,0}$, the probability of eventually visiting state 0, given that the process starts in state $i$ ($i = 1, 2, \ldots$);
3. $r_{i,j}$, the expected number of visits to state $j$ before hitting any state in $\{0, 1, 2, \ldots, j-1\}$, given that the process starts in state $i$ ($j = 1, 2, \ldots$ and $i < j$);
4. $g_{i,j}$, the probability of hitting state $j$ before the process hits any other state in $\{0, 1, 2, \ldots, i-1\}$, given that the process starts in state $i$ ($i = 1, 2, \ldots$ and $j < i$).

These measures are widely used in the study of Markov chains, both theoretically and computationally. For example, the $a$ measure is equivalent to the stationary probability vector if the Markov chain is ergodic, and it is called the generalized stationary probability measure if the Markov chain is null recurrent (for example, Chapter 11 of [12]). The $a$ measure has been used to characterize recurrence and ergodicity of Markov chains (for example, 5.1 of [23] and [28]). Also, some highly stable numerical recursions have been deduced in terms of the $a$ and $r$ measures (for example, [25] and [5]). The $b$ measure is essentially the "dual" of the $a$ measure and plays a similar role. In fact, $b$ is the first passage probability used in most books on Markov chains to classify states. The $r$ measure is also often used to characterize the stationary probabilities in a different way from $a$. In fact, we can write $\pi_{n+1} = \sum_{k=0}^{n} \pi_k r_{k,n+1}$.
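For a finite window, the $a$ and $b$ measures can be computed by elementary first-step analysis: removing the taboo state 0 leaves a substochastic block $Q$ whose matrix of expected visit counts is $(I - Q)^{-1}$ (this is made precise in Section 2). A minimal sketch, assuming an invented 4-state birth-death chain (not from the paper), in Python with NumPy:

```python
import numpy as np

# An invented 4-state birth-death chain on {0, 1, 2, 3}:
P = np.array([[0.50, 0.50, 0.00, 0.00],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.00, 0.00, 0.50, 0.50]])
Q = P[1:, 1:]                       # transitions that avoid the taboo state 0
u, d = P[0, 1:], P[1:, 0]
F = np.linalg.inv(np.eye(3) - Q)    # expected visit counts before hitting 0

a = u @ F   # a_{0,j}: expected visits to j between consecutive visits to 0
b = F @ d   # b_{i,0}: probability of ever reaching 0 from i
```

Here the chain is finite and irreducible, so return to 0 is certain and $b$ comes out identically 1; and $a = (2, 2, 1)$ matches the ratios $\pi_j/\pi_0$ of the stationary vector $\pi \propto (1, 2, 2, 1)$, illustrating the equivalence claimed above.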
The $a$ and $r$ measures are frequently used by Asmussen [1] and play an important role there. If P is partitioned into blocks, which means that all entries are finite matrices, then the $a$, $b$, $r$, and $g$ measures can be generalized to their respective block measures. For example, let $a_{0,j} = (a_{0,j}(r,s))$, where $a_{0,j}(r,s)$ is defined as the expected number of visits to state $(j,s)$ before hitting any state in level 0, given that the process starts in state $(0,r)$. These measures have been shown to play a dominant role in the study of several types of block-structured Markov chains. When P has a block structure, the matrix measures $r$ and $g$ become very important, as can be seen in the work of Neuts [16, 17] on Markov chains of GI/M/1 type and M/G/1 type and a large volume of related work, in the work of Gail, Hantler, and Taylor [2, 3] on M/G/1 and GI/M/1 type Markov chains with multiple boundary levels, and in the work of Grassmann and Heyman [6] and Zhao, Li, and Braun [29] on Markov chains with block repeating transition matrices. The generalization of our results to block-structured Markov chains is straightforward. The convergence theorems proved in this paper provide us with approximations for these measures. Since all proofs are valid for both ergodic and nonergodic Markov chains, the results could also potentially be used to study recurrence and ergodicity numerically, which is very useful when a formal classification cannot be done. The methods introduced in this paper satisfy all of the conditions for approximating sequences for Markov decision chains [24], except that the resulting transition matrices using northwest corner and banded matrix approximations are not necessarily stochastic, which is not essential for the approximating sequences. Therefore, our results give two more methods for approximating a Markov decision chain. In fact, not requiring stochastic transition matrices increases the range of systems which we can model (for example, [20] and Chapter 11 of [18]).
For convenience, we use the following conventions, when necessary. Any finite vector of length n will be treated as an infinite vector by adding zeros after the nth element, and any finite


matrix of size $n \times n$ will be treated as an infinite matrix with a nonzero northwest corner and zero elements elsewhere. Results for continuous time Markov chains can be similarly obtained. The following two sections will treat the northwest corner approximation and the banded approximation, respectively. The final section contains additional remarks concerning measures for the censored Markov chain, together with conclusions.

2. APPROXIMATION USING A NORTHWEST CORNER

The justification of convergence for all measures considered in this paper heavily relies on a discrete version of the Dominated Convergence Theorem (for example, see page 27 of [8]). To be notationally more specific and convenient, we state the theorem as follows.

THEOREM 1: If $\{f_{n,k}\}$ is a doubly indexed sequence for $n = 1, 2, \ldots$ and $k = 1, 2, \ldots$ such that $|f_{n,k}| \le h_k$ for all $n, k$, where $\sum_{k=1}^{\infty} h_k < \infty$, and such that, for all $k = 1, 2, \ldots$, $\lim_{n\to\infty} f_{n,k}$ exists, then
$$\lim_{n\to\infty} \sum_{k=1}^{\infty} f_{n,k} = \sum_{k=1}^{\infty} \lim_{n\to\infty} f_{n,k}.$$

Note that this is a standard theorem in analysis, based on Weierstrass' M-test. We write
$$a = (a_{0,1}, a_{0,2}, \ldots), \qquad b = (b_{1,0}, b_{2,0}, \ldots)^T,$$
where $T$ stands for the transpose. Let P be partitioned as
$$P = \begin{bmatrix} p_{0,0} & u \\ d & Q \end{bmatrix}.$$

Since P is assumed to be irreducible, the fundamental matrix $\hat{Q} = \sum_{k=0}^{\infty} Q^k$ of the matrix Q is finite-valued and $\lim_{k\to\infty} Q^k = 0$ according to Propositions 5-2 and 5-3 of [13]. In fact, $\hat{Q}$ is the unique minimum non-negative inverse of $I - Q$ according to Proposition 5-11 of [13]. Using Propositions 5-2 and 5-3 of [13], and conditional probability arguments, we can obtain the following.
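Numerically, for a strictly substochastic finite block the fundamental matrix can be obtained either from the Neumann series $\sum_{k \ge 0} Q^k$ or, equivalently, as $(I - Q)^{-1}$. A toy sketch (Python with NumPy; the $2 \times 2$ matrix is an invented example, not from the paper):

```python
import numpy as np

# A strictly substochastic 2x2 block Q (invented toy example):
Q = np.array([[0.50, 0.25],
              [0.25, 0.50]])
I = np.eye(2)

# Fundamental matrix: Q_hat = sum_{k>=0} Q^k = (I - Q)^{-1}
Q_hat = np.linalg.inv(I - Q)

# Partial sums of the Neumann series converge to Q_hat because the
# spectral radius of Q is 0.75 < 1:
S, power = np.zeros_like(Q), I.copy()
for _ in range(200):
    S += power
    power = power @ Q
```

After 200 terms the partial sum agrees with $(I - Q)^{-1}$ to machine precision; for an infinite transient $Q$, the series characterization is the one that identifies $\hat{Q}$ as the minimal non-negative inverse.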

LEMMA 2:
$$a = u\hat{Q}, \qquad b = \hat{Q}d,$$
$$a_{0,n} = r_{0,n} + \sum_{k=1}^{n-1} a_{0,k} r_{k,n},$$
and
$$b_{n,0} = g_{n,0} + \sum_{k=1}^{n-1} g_{n,k} b_{k,0}.$$

REMARK 1: (i) As a generalization, we can define $a_{i,j}$ and $b_{j,i}$ for all $i$ and $j$. Then, for any fixed $i$, $a_{i,j}$ and $b_{j,i}$ will play the same role as $a_{0,j}$ and $b_{i,0}$. (ii) $a = u\hat{Q} < \infty$. When the Markov chain is recurrent, this result can also be found in Section 3 of Chapter 11 of [12] or Theorem 3.3 of [1]. If the Markov chain is transient, then $\sum_{k=0}^{\infty} P^k < \infty$ (for example, see the last equation on page 107 in Ross [19]). This means that the expected number of visits to state $n$ starting from 0 is finite. Therefore, the expected number of visits to state $n$ before hitting state 0 should also be finite.

Let ${}_{(n)}Q$ be the northwest corner, having size $n \times n$, of Q; let ${}_{(n)}u$ be the row vector of size $n$ comprised of the first $n$ elements of u; and let ${}_{(n)}d$ be the column vector of size $n$ comprised of the first $n$ elements of d. Then
$${}_{(n)}P = \begin{bmatrix} p_{0,0} & {}_{(n)}u \\ {}_{(n)}d & {}_{(n)}Q \end{bmatrix}$$
is substochastic, since P is irreducible. For the Markov chain ${}_{(n)}P$, we use the left-subscripted notation ${}_{(n)}x$ for the corresponding measure $x$. Therefore,
$${}_{(n)}a = ({}_{(n)}a_{0,1}, {}_{(n)}a_{0,2}, \ldots, {}_{(n)}a_{0,n}) = {}_{(n)}u\,{}_{(n)}\hat{Q},$$
$${}_{(n)}b = ({}_{(n)}b_{1,0}, {}_{(n)}b_{2,0}, \ldots, {}_{(n)}b_{n,0})^T = {}_{(n)}\hat{Q}\,{}_{(n)}d.$$
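The corner formulas ${}_{(n)}a = {}_{(n)}u\,{}_{(n)}\hat{Q}$ and ${}_{(n)}b = {}_{(n)}\hat{Q}\,{}_{(n)}d$ are directly computable. A sketch in Python with NumPy (the random walk with upward probability 0.3 and downward probability 0.7 is an invented example) shows the corner measures settling at their limits, here $a_{0,1} = \pi_1/\pi_0 = 3/7$ and $b_{1,0} = 1$:

```python
import numpy as np

# Invented illustrative random walk: up 0.3, down 0.7, held at 0.
def p(i, j):
    if i == 0:
        return 0.7 if j == 0 else (0.3 if j == 1 else 0.0)
    return 0.3 if j == i + 1 else (0.7 if j == i - 1 else 0.0)

def corner_measures(n):
    """(n)a = (n)u (n)Q_hat and (n)b = (n)Q_hat (n)d from the corner on
    states {0, 1, ..., n}."""
    P = np.array([[p(i, j) for j in range(n + 1)] for i in range(n + 1)])
    Q, u, d = P[1:, 1:], P[0, 1:], P[1:, 0]
    Q_hat = np.linalg.inv(np.eye(n) - Q)
    return u @ Q_hat, Q_hat @ d

a20, b20 = corner_measures(20)
# For this ergodic chain a_{0,j} = pi_j / pi_0 = (3/7)^j, and the chain is
# recurrent, so (n)b_{1,0} -> 1; both limits are nearly reached at n = 20.
```

No augmentation and no irreducibility check on the corner is needed; the corner is simply left substochastic, and the measures converge from below.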

Our main purpose in this section is to establish the convergence of ${}_{(n)}a_{0,j}$, ${}_{(n)}b_{i,0}$, ${}_{(n)}r_{i,j}$, and ${}_{(n)}g_{i,j}$ to the corresponding measures, respectively. To this end, we first prove a lemma. To avoid notational collisions, we use $X(i,j)$ to denote the $(i,j)$th element of a matrix $X$.

LEMMA 3: For fixed $i$ and $j$, as $n$ goes to infinity,
$$({}_{(n)}Q)^k(i,j) \to Q^k(i,j) \quad \text{for any fixed } k \ge 0,$$
and
$${}_{(n)}\hat{Q}(i,j) \to \hat{Q}(i,j).$$

PROOF: The second result follows directly from the first one together with Theorem 1. The first result is proven as follows. Since the result is obvious for $k < 2$, we take $k \ge 2$. Then
$$0 \le Q^k(i,j) - ({}_{(n)}Q)^k(i,j) = \sum_{j_1=1}^{\infty} \cdots \sum_{j_{k-1}=1}^{\infty} p_{i,j_1} p_{j_1,j_2} \cdots p_{j_{k-1},j} - \sum_{j_1=1}^{n} \cdots \sum_{j_{k-1}=1}^{n} p_{i,j_1} p_{j_1,j_2} \cdots p_{j_{k-1},j}$$
$$= \sum_{j_1=1}^{\infty} \cdots \sum_{j_{k-2}=1}^{\infty} \sum_{j_{k-1}=n+1}^{\infty} p_{i,j_1} p_{j_1,j_2} \cdots p_{j_{k-1},j} + \sum_{j_1=1}^{\infty} \cdots \sum_{j_{k-3}=1}^{\infty} \sum_{j_{k-2}=n+1}^{\infty} \sum_{j_{k-1}=1}^{n} p_{i,j_1} p_{j_1,j_2} \cdots p_{j_{k-1},j} + \cdots + \sum_{j_1=n+1}^{\infty} \sum_{j_2=1}^{n} \cdots \sum_{j_{k-1}=1}^{n} p_{i,j_1} p_{j_1,j_2} \cdots p_{j_{k-1},j}$$
$$\le \sum_{j_1=1}^{\infty} \cdots \sum_{j_{k-2}=1}^{\infty} \sum_{j_{k-1}=n+1}^{\infty} p_{i,j_1} p_{j_1,j_2} \cdots p_{j_{k-2},j_{k-1}} + \cdots + \sum_{j_1=n+1}^{\infty} p_{i,j_1}$$
$$\le \sum_{s=1}^{k-1} \sum_{j_1=1}^{\infty} \cdots \sum_{j_{s-1}=1}^{\infty} \sum_{j_s=n+1}^{\infty} p_{i,j_1} p_{j_1,j_2} \cdots p_{j_{s-1},j_s}.$$

The second last inequality follows since $p_{j_{k-1},j} \le 1$, and the last inequality follows from successive application of the inequality
$$\sum_{j_h=1}^{n} p_{j_{h-1},j_h} \le 1.$$

We now show that, for every value of $s$, the inner summation goes to zero as $n$ goes to infinity. This is clear when $s = 1$. When $2 \le s \le k-1$, let
$$p_{i,j_1} \sum_{j_2=1}^{\infty} \cdots \sum_{j_{s-1}=1}^{\infty} \sum_{j_s=n+1}^{\infty} p_{j_1,j_2} \cdots p_{j_{s-1},j_s}$$
play the role of $f_{n,j_1}$ in Theorem 1, and use the fact that Q is substochastic to find that
$$p_{i,j_1} \sum_{j_2=1}^{\infty} \cdots \sum_{j_{s-1}=1}^{\infty} \sum_{j_s=n+1}^{\infty} p_{j_1,j_2} \cdots p_{j_{s-1},j_s} \le p_{i,j_1}$$
and $\sum_{j_1=1}^{\infty} p_{i,j_1} \le 1 < \infty$, so Theorem 1 gives
$$\lim_{n\to\infty} \sum_{j_1=1}^{\infty} p_{i,j_1} \sum_{j_2=1}^{\infty} \cdots \sum_{j_s=n+1}^{\infty} p_{j_1,j_2} \cdots p_{j_{s-1},j_s} = \sum_{j_1=1}^{\infty} \lim_{n\to\infty} p_{i,j_1} \sum_{j_2=1}^{\infty} \cdots \sum_{j_s=n+1}^{\infty} p_{j_1,j_2} \cdots p_{j_{s-1},j_s}.$$
Repeating this argument $s - 1$ more times gives
$$\lim_{n\to\infty} \sum_{j_1=1}^{\infty} p_{i,j_1} \sum_{j_2=1}^{\infty} p_{j_1,j_2} \cdots \sum_{j_s=n+1}^{\infty} p_{j_{s-1},j_s} = \sum_{j_1=1}^{\infty} p_{i,j_1} \sum_{j_2=1}^{\infty} p_{j_1,j_2} \cdots \lim_{n\to\infty} \sum_{j_s=n+1}^{\infty} p_{j_{s-1},j_s} = 0.$$

THEOREM 4: (i) In the elementwise sense, as $n$ goes to infinity,
$${}_{(n)}a \to a \quad \text{and} \quad {}_{(n)}b \to b.$$
(ii) As $n$ goes to infinity, for $i < j$,
$${}_{(n)}r_{i,j} \to r_{i,j},$$
and for $i > j$,
$${}_{(n)}g_{i,j} \to g_{i,j}.$$

PROOF: The result in (i) follows from Lemma 2, Lemma 3, and Theorem 1. As for the result in (ii), the proof is similar to the previous one, noting that if P is partitioned according to $\{0, 1, \ldots, j-1\}$ and $\{j, j+1, \ldots\}$,
$$P = \begin{bmatrix} T & U \\ D & Q \end{bmatrix},$$
then $r_{j-i,j} = U(j-i)\hat{Q}(\cdot,1)$ and $g_{j,j-i} = \hat{Q}(1,\cdot)D(j-i)$, where $U(j-i)$ is the $i$th last row of $U$, $D(j-i)$ is the $i$th last column of $D$, $\hat{Q}(\cdot,1)$ is the first column of $\hat{Q}$, and $\hat{Q}(1,\cdot)$ is the first row of $\hat{Q}$.

COROLLARY 5: For any Markov chain, ergodic or nonergodic, the convergence of ${}_{(n)}b$ to $b$ in Theorem 4 is also in the sense of $l_1$. For any ergodic Markov chain, the convergence of ${}_{(n)}a$ to $a$ in Theorem 4 is also in the sense of $l_1$.

When the Markov chain P is ergodic, then $\pi_0 a_{0,j} = \pi_j$ according to Theorem 3.3 of [12]. Using a similar argument (also see Remark 1), we can show that $\pi_i a_{i,j} = \pi_j$.

REMARK 2: (i) When the Markov chain P is ergodic, then $a$ is proportional to the stationary probability vector $(\pi_1, \pi_2, \ldots)$, and $(a_{0,0}, a_{0,1}, a_{0,2}, \ldots)$ with $a_{0,0} = 1$ is one of the regular measures of P. Using Theorem 4 and Lemma 2, one can show that
$${}_{(n)}a_{0,j} \to a_{0,j} = \frac{\pi_j}{\pi_0},$$
where $\pi_j$ are the stationary probabilities of P. (ii) By Remark 1 concerning $a_{i,j}$, we have actually shown that
$${}_{(n)}a_{i,j} \to a_{i,j} = \frac{\pi_j}{\pi_i},$$
and since the expression for $a_{i,j}$ is similar to that for $a_{0,j}$ in Lemma 2, we can show that
$${}_{(n)}a_{i,j} = \frac{{}_{(n)}c_{i,j}}{{}_{(n)}c_{i,i}},$$
where ${}_{(n)}c_{i,j}$ is the $(i,j)$th entry of $(I - {}_{(n)}P)^{-1}$. This means that Theorem 7.1 in [23] can be regarded as a consequence here.
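Remark 2(ii) yields a particularly convenient recipe: invert $I - {}_{(n)}P$ once and take ratios of entries within a row. A sketch (Python with NumPy; the random walk with upward probability 0.3 and downward probability 0.7 is an invented example) checks the ratio against the exact stationary ratio $\pi_3/\pi_0 = (3/7)^3$:

```python
import numpy as np

# Invented illustrative random walk: up 0.3, down 0.7, held at 0.
def p(i, j):
    if i == 0:
        return 0.7 if j == 0 else (0.3 if j == 1 else 0.0)
    return 0.3 if j == i + 1 else (0.7 if j == i - 1 else 0.0)

n = 30
P = np.array([[p(i, j) for j in range(n)] for i in range(n)])

# (n)P is substochastic, so I - (n)P is invertible; its inverse C holds the
# expected visit counts (n)c_{i,j} of the truncated (killed) chain.
C = np.linalg.inv(np.eye(n) - P)

ratio = C[0, 3] / C[0, 0]   # (n)a_{0,3} = (n)c_{0,3} / (n)c_{0,0} -> pi_3/pi_0
```

The attraction of this form is that no stochasticity repair of the corner is needed before inverting; the ratio of raw visit counts already approximates the stationary ratio.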

3. APPROXIMATION USING A BANDED MATRIX

In this section, we consider a different approximation method, which uses the inner $2m$ bands of the original matrix (the entries of Q within the bands are retained, and all entries outside the bands are set to 0). That is,
$${}_{(m)}Q = \begin{bmatrix}
p_{1,1} & p_{1,2} & \cdots & p_{1,m} & & & \\
p_{2,1} & p_{2,2} & \cdots & p_{2,m} & p_{2,m+1} & & \\
\vdots & \vdots & & \vdots & & \ddots & \\
p_{m,1} & p_{m,2} & \cdots & \cdots & \cdots & p_{m,2m-1} & \\
& p_{m+1,2} & \cdots & \cdots & \cdots & p_{m+1,2m-1} & p_{m+1,2m} \\
& & \ddots & & & & \ddots
\end{bmatrix}.$$

Let
$${}_{(m)}u = (p_{0,1}, p_{0,2}, \ldots, p_{0,m-1}, 0, 0, \ldots), \qquad {}_{(m)}d = (p_{1,0}, p_{2,0}, \ldots, p_{m-1,0}, 0, 0, \ldots)^T;$$
then
$${}_{(m)}P = \begin{bmatrix} p_{0,0} & {}_{(m)}u \\ {}_{(m)}d & {}_{(m)}Q \end{bmatrix}$$

is a banded matrix of band width $m$. We will prove the convergence of the measures $a$, $b$, $r$, and $g$ as the band width $m$ goes to infinity.

LEMMA 6: For a fixed entry $(i,j)$, as $m$ goes to infinity,
$$({}_{(m)}Q)^k(i,j) \to Q^k(i,j) \quad \text{for any fixed } k \ge 0,$$
and
$${}_{(m)}\hat{Q}(i,j) \to \hat{Q}(i,j).$$

PROOF: The first result can be proved using Lemma 3 and noticing that, for any fixed $n > 0$, $({}_{(n)}Q)^k \le ({}_{(m)}Q)^k \le Q^k$ elementwise for all large enough $m$. The second one follows as in the proof of Lemma 3.

THEOREM 7: (i) In the elementwise sense, as $m$ goes to infinity,
$${}_{(m)}a \to a \quad \text{and} \quad {}_{(m)}b \to b.$$
(ii) As $m$ goes to infinity, for $i < j$,
$${}_{(m)}r_{i,j} \to r_{i,j},$$
and for $i > j$,
$${}_{(m)}g_{i,j} \to g_{i,j}.$$

PROOF: The proof is similar to that of Theorem 4.

REMARK 3: As we mentioned in the Introduction, the traditional method is to augment the banded matrix into a stochastic one. However, this method does not always give a valid approximation. For example, if we want to approximate the limiting probabilities using the banded augmentation, the stationary probability distribution may not exist, since the resulting Markov chain could be transient even though the original one is ergodic.

EXAMPLE: Consider the Markov chain

$$P = \begin{bmatrix}
d_1 + \mu_1 & \lambda_1 & & & & \\
d_2 + \mu_2 & 0 & \lambda_2 & & & \\
d_3 & \mu_3 & 0 & \lambda_3 & & \\
d_4 & 0 & \mu_4 & 0 & \lambda_4 & \\
d_5 & 0 & 0 & \mu_5 & 0 & \lambda_5 \\
\vdots & & & \ddots & \ddots & \ddots
\end{bmatrix}.$$

This is ergodic, for example, when
$$\lambda_n = \frac{n^2 - 1}{(n+1)^2} \quad \text{and} \quad \mu_n = \frac{1}{(n+1)^2}.$$
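As a quick arithmetic check of these rates (Python; pure arithmetic on the formulas above, using the closed form $d_n = 1 - \lambda_n - \mu_n = (2n+1)/(n+1)^2$ for the reset mass, which follows from the row sums):

```python
# Rates from the example: lambda_n = (n^2 - 1)/(n + 1)^2, mu_n = 1/(n + 1)^2.
def lam(n):
    return (n**2 - 1) / (n + 1)**2

def mu(n):
    return 1 / (n + 1)**2

def d(n):
    # Remaining row mass; algebraically equal to (2n + 1)/(n + 1)^2.
    return 1 - lam(n) - mu(n)

# Each row of P is a probability distribution, and d_n is exactly the mass
# that a banded stochastic augmentation has to relocate into the band.
rows_ok = all(abs(d(n) - (2*n + 1)/(n + 1)**2) < 1e-12 and 0 < d(n) < 1
              for n in range(2, 50))

# The up/down ratio lambda_n / mu_n = n^2 - 1 grows without bound, which is
# why freezing the rates at a large N (the comparison chain P' below) yields
# a transient random walk: its traffic intensity exceeds 1.
ratio_at_10 = lam(10) / mu(10)   # equals 99 exactly in exact arithmetic
```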

However, for any band width $m$, we now show that the corresponding banded stochastic augmentation is transient. For $m > 1$ and $j > 3$, the transition probabilities of the banded stochastic augmentation are the same as those of P, except that the $(m+j, j)$ entries are now $d_{m+j}$, and the $(j, 1)$ entries are now 0. Choose $N$ large enough that $N^2 - 1 > (2N+1)m + 1$, and consider the Markov chain whose transition probabilities $P'(i,j)$ are the same as for the banded augmentation when $i < N$, $j < N$, but where $P'(m+j, j) = d_N$, $P'(j+1, j) = \mu_N$, and $P'(j, j+1) = \lambda_N$ for $j \ge N$, and all other entries are 0. Using the generalized traffic intensity criterion for random walks (for example, [1]), it is clear that $P'$ is transient, since the traffic intensity for $P'$ exceeds 1. The probability of eventual return to state $N$ is higher for $P'$ than for the banded augmentation. Therefore, the banded augmentation must also be transient.
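By contrast, the unaugmented (substochastic) banded restriction behaves as Theorem 7 predicts. The following sketch (Python with NumPy; the chain with jumps of sizes one and two, and the window size N, are invented for illustration, not the example above) computes $b_{1,0}$ from a large northwest window and shows the banded values increasing to the unbanded one as the band width grows:

```python
import numpy as np

# Invented chain with jumps of sizes 1 and 2: from state i, move down one
# with probability 0.6 (held at 0), up one with 0.2, up two with 0.2.
def p(i, j):
    if j == i - 1 or (i == 0 and j == 0):
        return 0.6
    if j == i + 1 or j == i + 2:
        return 0.2
    return 0.0

N = 200  # a large northwest window standing in for the infinite matrix

def b_10(m=None):
    """b_{1,0}: probability of ever hitting 0 from 1, computed from the
    window; if m is given, the matrix is first replaced by its banded
    restriction (entries with |i - j| >= m set to 0, no augmentation)."""
    P = np.array([[p(i, j) for j in range(N)] for i in range(N)])
    if m is not None:
        mask = np.abs(np.subtract.outer(np.arange(N), np.arange(N))) < m
        P = P * mask
    Q, d = P[1:, 1:], P[1:, 0]
    return np.linalg.solve(np.eye(N - 1) - Q, d)[0]

vals = [b_10(m) for m in (1, 2, 3)]
# The banded values increase with the band width, and m = 3 already
# contains every nonzero entry of this particular chain.
```

Because the restriction stays substochastic, each banded value is a valid lower approximation of $b_{1,0}$, whereas a banded stochastic augmentation could (as the example shows) change the character of the chain entirely.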

4. CONCLUDING REMARKS

In this paper, we have modified the idea of using an augmentation to the northwest corner of the transition matrix and the idea of using a banded augmentation to allow for a substochastic matrix. Unlike traditional augmentation methods, the northwest corner and the banded matrix are always valid for approximating certain limiting measures. These include the approximation to the limiting probabilities as a special case. Our results also apply to nonergodic Markov


chains. In fact, we have shown that any irreducible Markov chain will be amenable to these methods, in contrast to the traditional augmentation methods. The justification of the convergence of the methods is unified by the use of the Dominated Convergence Theorem, and it is also significantly simpler than for augmentation methods. This justification method could also be used for other approximation approaches.

ACKNOWLEDGMENTS

The authors thank the referee for constructive comments, which have resulted in a clearer presentation.

REFERENCES

[1] S. Asmussen, Applied probability and queues, Wiley, Chichester, 1987.
[2] H.R. Gail, S.L. Hantler, and B.A. Taylor, Spectral analysis of M/G/1 and GI/M/1 type Markov chains, Adv Appl Prob 28 (1996), 114–165.
[3] H.R. Gail, S.L. Hantler, and B.A. Taylor, M/G/1 and G/M/1 type Markov chains with multiple boundary levels, Adv Appl Prob 29 (1997), 733–758.
[4] D. Gibson and E. Seneta, Augmented truncations of infinite stochastic matrices, J Appl Prob 24 (1987), 600–608.
[5] W.K. Grassmann, M.I. Taksar, and D.P. Heyman, Regenerative analysis and steady state distributions for Markov chains, Operations Res 33 (1985), 1107–1116.
[6] W.K. Grassmann and D.P. Heyman, Equilibrium distribution of block-structured Markov chains with repeating rows, J Appl Prob 27 (1990), 557–576.
[7] W.K. Grassmann and D.P. Heyman, Computation of steady-state probabilities for infinite-state Markov chains with repeating rows, ORSA J Comput 5 (1993), 292–303.
[8] J.J. Hunter, Mathematical techniques of applied probability, Vol. I, Academic Press, New York, 1983.
[9] D.P. Heyman, Approximating the stationary distribution of an infinite stochastic matrix, J Appl Prob 28 (1991), 96–103.
[10] E.P.C. Kao, Using state reduction for computing steady state probabilities of queues of GI/PH/1 type, ORSA J Comput 3 (1991), 231–240.
[11] E.P.C. Kao, Using state reduction for computing steady state vectors in Markov chains of M/G/1 type, Queueing Syst 10 (1992), 89–104.
[12] S. Karlin and H.M. Taylor, A second course in stochastic processes, Academic Press, New York, 1981.
[13] J.G. Kemeny, J.L. Snell, and A.W. Knapp, Denumerable Markov chains, 2nd ed., Springer-Verlag, New York, 1976.
[14] G. Latouche, "Algorithms for infinite Markov chains with repeating columns," Linear algebra, Markov chains and queueing models, C.D. Meyer and R.J. Plemmons (Editors), Springer-Verlag, New York, 1993, pp. 231–265.
[15] G.R. Murthy, M. Kim, and E.J. Coyle, Equilibrium analysis of skip free Markov chains: nonlinear matrix equations, Stoch Models 7 (1991), 547–571.
[16] M.F. Neuts, Matrix-geometric solutions in stochastic models: An algorithmic approach, The Johns Hopkins University Press, Baltimore, 1981.
[17] M.F. Neuts, Structured stochastic matrices of M/G/1 type and their applications, Marcel Dekker, New York, 1989.
[18] M.L. Puterman, Markov decision processes: Discrete stochastic dynamic programming, Wiley, New York, 1994.
[19] S.M. Ross, Stochastic processes, Wiley, New York, 1983.
[20] U.G. Rothblum and P. Whittle, Growth optimality for branching Markov decision chains, Math Operations Res 7 (1982), 582–601.
[21] E. Seneta, Finite approximation to infinite non-negative matrices, Proc Camb Phil Soc 63 (1967), 983–992.


[22] E. Seneta, Finite approximation to infinite non-negative matrices, II, Proc Camb Phil Soc 64 (1968), 465–470.
[23] E. Seneta, Non-negative matrices and Markov chains, 2nd ed., Springer-Verlag, New York, 1981.
[24] L.I. Sennott, The computation of average optimal policies in denumerable state Markov decision chains, Adv Appl Prob 29 (1997), 114–137.
[25] T.J. Sheskin, A Markov chain partitioning algorithm for computing steady state probabilities, Operations Res 33 (1985), 229–235.
[26] R.L. Tweedie, Criteria for classifying general Markov chains, Adv Appl Prob 8 (1976), 737–771.
[27] D. Wolf, Approximation of the invariant probability measure of an infinite stochastic matrix, Adv Appl Prob 12 (1980), 710–726.
[28] Y.Q. Zhao and D. Liu, The censored Markov chain and the best augmentation, J Appl Prob 33 (1996), 623–629.
[29] Y.Q. Zhao, W. Li, and W.J. Braun, Infinite block-structured transition matrices and their properties, Adv Appl Prob 30 (1998), 365–384.
