A Martingale Decomposition of Discrete Markov Chains

Peter Reinhard Hansen∗
European University Institute & CREATES†

April 1, 2015

Abstract  We consider a multivariate time series whose increments are given by a homogeneous Markov chain. We show that the martingale component of this process can be extracted by a filtering method, and we establish the corresponding martingale decomposition in closed form. This representation is useful for the analysis of time series that are confined to a grid, such as financial high-frequency data.

Keywords: Markov Chain; Martingale; Beveridge-Nelson Decomposition.
JEL Classification: C10; C22; C58



∗ Address: Department of Economics, European University Institute, Villa San Paolo, Via Della Piazzuola 43, 50133 FI Fiesole, Italy. Email: [email protected]
† The author wishes to thank Juan Dolado, James D. Hamilton, an anonymous referee, and seminar participants at Duke University for valuable comments. The author acknowledges support from CREATES – Center for Research in Econometric Analysis of Time Series (DNRF78), funded by the Danish National Research Foundation.

1 Introduction

We consider a d-dimensional time series, {Xt}, whose increments, ∆Xt = Xt − Xt−1, follow a homogeneous ergodic Markov chain with a countable state space. Thus, Xt = X0 + ∑_{j=1}^{t} ∆Xj, which makes Xt a (possibly non-stationary) Markov chain on a countable state space. We consider E(Xt+h|Ft), where Ft = σ(Xt, Xt−1, . . .) is the natural filtration. The limit, as h → ∞, is particularly interesting, because it leads to a martingale decomposition,

Xt = Yt + µt + Ut,

where µt is a linear deterministic trend, {Yt, Ft} is a martingale with Yt = lim_{h→∞} E(Xt+h − µt+h|Ft), and Ut is a bounded stationary process. We derive closed-form expressions for all terms in the representation of Xt. The martingale decomposition of finite Markov chains is akin to the Beveridge-Nelson decomposition for ARIMA processes, see Beveridge and Nelson (1981),¹ and the Granger representation for vector autoregressive processes, see Johansen (1991). The decomposition has many applications, as the long-run properties of Xt are governed by the persistent component, Yt, while Ut characterizes the transitory component of Xt. In macro-econometrics, Yt and Ut are often called “trend” and “cycle”, respectively, with Yt being interpreted as the long-run growth while Ut defines the fluctuations around the growth path, see, e.g., Low and Anderson (2008). A martingale decomposition of a stochastic discount process can be used to disentangle economic components with long-term and short-run impact on asset valuation, see Hansen (2012). For the broader concept of signal extraction of the “trend”, see Harvey and Koopman (2002). In the context of high-frequency financial data (which often are confined to a grid), Yt and Ut may be labelled the efficient price and market microstructure noise, respectively. One could use the decomposition to estimate the quadratic variation of the latent efficient price, Yt, as in Large (2011) and Hansen and Horel (2009), and the framework could be adapted to study market information shares, see, e.g., Hasbrouck (1995). Markov processes are often used to approximate autoregressive processes in dynamic optimization problems, see Tauchen (1986) and Adda and Cooper (2000), and the decomposition could be used to compare the long-run properties of the approximating Markov process with those of the autoregressive process.

¹ The result, known as the Beveridge-Nelson decomposition, appeared earlier in the statistics literature, e.g. Fuller (1976, theorem 8.5.1). See Phillips and Solo (1992) for further discussion. The martingale decomposition is also key to the central limit theorem for stationary processes by Gordin (1969).


The paper is organized as follows: In Section 2 we establish an expression for the filtered process within the Markov chain framework, which leads to the martingale decomposition. Concluding remarks and a discussion of various extensions are given in Section 3, and all proofs are given in the Appendix.

2 Theoretical Framework

In this section we show how the observed process, X0, X1, . . . , Xn, can be filtered in a Markov chain framework, using the natural filtration Ft = σ(Xt, Xt−1, . . .). This leads to a martingale decomposition of Xt that is useful in a number of applications. Initially we seek the filtered price, E(Xt+h|Ft), and we use the limit, as h → ∞, to define the process,

Yt = lim_{h→∞} E(Xt+h − µt+h|Ft),

where µt = tµ with µ = E(∆Xt). We will show that {Yt, Ft} is a martingale; in fact, Yt is the martingale component of Xt, which, in turn, reveals a martingale representation theorem for finite Markov processes. Note that the one-step increments of E(Xt+h − µt+h|Ft) are, in general, autocorrelated at all orders (including those lower than h); however, all autocorrelations vanish as h → ∞ and the martingale property of Y emerges. This filtering argument can be applied to any I(1) process for which E(∆Xt+h|Ft) → E(∆Xt) almost surely as h → ∞, and this is the basic principle that Beveridge and Nelson (1981) used to extract the (stochastic) trend component of ARIMA processes.

2.1 Notation and Assumptions

In this section we review the Markov terminology and present our notation, which largely follows that in Brémaud (1999, chapter 6). The following assumption is the only assumption we need to make.

Assumption 1. The increments {∆Xt}_{t=1}^{n} are ergodic and distributed as a homogeneous Markov chain of order k < ∞, with S < ∞ states.

The assumption that S is finite can be dispensed with, as we detail in Section 3. For now we will assume S to be finite because it greatly simplifies the exposition. The transition matrix for price increments is denoted by P. For a Markov chain of order k with S basic states, P will be an S^k × S^k matrix. We use π ∈ R^{S^k} to denote the stationary distribution associated with P, which is uniquely


defined by π′P = π′. The fundamental matrix is defined by² Z = (I − P + Π)^{-1}, where Π = ιπ′ is a square matrix and ι = (1, . . . , 1)′ (so all rows of Π are simply π′). We use er to denote the r-th unit vector, so that e′r A is the r-th row of a matrix A of proper dimensions. Let {x1, . . . , xS} be the support for ∆Xt, with xs ∈ R^d. We will index the possible realizations of the k-tuple, ∆Xt = (∆Xt−k+1, . . . , ∆Xt), by xs, s = 1, . . . , S^k, which comprises all the combinations, (xi1, . . . , xik), i1, . . . , ik = 1, . . . , S. The transition matrix, P, is given by Pr,s = Pr(∆Xt+1 = xs|∆Xt = xr). This matrix will be sparse when k > 1, because at most S transitions from any state have non-zero probability, regardless of the order of the Markov chain. For notational reasons it is convenient to introduce the sequence {st} defined by ∆Xt = xst, so that st denotes the observed state at time t. We also define the matrix f ∈ R^{S^k × d} whose s-th row, denoted fs = e′s f, is the realization of ∆X′ in state s. It follows that ∆Xt = f′est and that the expected value of the increments is given by µ = E(∆Xt) = f′π ∈ R^d. The auxiliary vector process, est, is such that E(est+1|Ft) = P′est, so that est can be expressed as a vector autoregressive process of order one with martingale difference innovations, see, e.g., Hamilton (1994, p. 679).
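The objects introduced here are straightforward to compute numerically. The following sketch constructs P, π, Π, and Z for a hypothetical first-order chain (k = 1, S = 3, d = 1); the specific numbers are an illustrative assumption, not taken from the paper.

```python
import numpy as np

# Hypothetical first-order chain (k = 1) with S = 3 states; d = 1.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
f = np.array([[-1.0], [0.0], [1.0]])      # state values of ∆X_t (S x d)

# Stationary distribution π: normalized left eigenvector of P at eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

Pi = np.outer(np.ones(3), pi)             # Π = ιπ′ (all rows equal π′)
Z = np.linalg.inv(np.eye(3) - P + Pi)     # fundamental matrix Z = (I − P + Π)^{-1}
mu = (f.T @ pi).item()                    # µ = E(∆X_t) = f′π

# Basic identities: π′P = π′, Zι = ι, and π′Z = π′.
assert np.allclose(pi @ P, pi)
assert np.allclose(Z @ np.ones(3), np.ones(3))
assert np.allclose(pi @ Z, pi)
```

The same construction applies to a chain of order k after enlarging the state space to the S^k tuples.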

2.2 Markov Chain Filtering

The filtered process, E(Xt+h|Ft), is simple to compute in the Markov setting, because E(Xt+h|Ft) = E(Xt+h|∆Xt) and Xt+h = Xt + ∑_{j=1}^{h} ∆Xt+j with E(∆X′t+1|∆Xt = xr) = ∑_{s=1}^{S^k} Pr,s f′s = e′r P f. More generally we have E(∆X′t+h|∆Xt) = e′st P^h f, which shows that

E(X′t+h|∆Xt) = X′t + e′st ∑_{j=1}^{h} P^j f.
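The display above can be evaluated directly by accumulating matrix powers. A minimal sketch, reusing the hypothetical 3-state chain from before (the function name and all numbers are illustrative assumptions):

```python
import numpy as np

P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
f = np.array([[-1.0], [0.0], [1.0]])

w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))]); pi = pi / pi.sum()
Pi = np.outer(np.ones(3), pi)
Z = np.linalg.inv(np.eye(3) - P + Pi)

def filtered_mean(X_t, s_t, h):
    """E(X_{t+h} | ∆X_t = x_{s_t}) = X_t + e′_{s_t} (Σ_{j=1}^h P^j) f."""
    acc, Pj = np.zeros_like(P), np.eye(3)
    for _ in range(h):
        Pj = Pj @ P          # P^j
        acc = acc + Pj       # running sum Σ_{j=1}^h P^j
    return X_t + (acc @ f)[s_t, 0]

# One-step case reduces to e′_r P f.
assert np.isclose(filtered_mean(0.0, 0, 1), (P @ f)[0, 0])

# Detrended limit: Σ_{j=1}^h (P^j − Π) f → (Z − I) f as h → ∞.
acc, Pj = np.zeros_like(P), np.eye(3)
for _ in range(200):
    Pj = Pj @ P
    acc = acc + (Pj - Pi)
assert np.allclose(acc @ f, (Z - np.eye(3)) @ f)
```

The second check anticipates the limit taken next, where the partial sums converge at a geometric rate.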

After subtracting the deterministic trend, µt+h, we let h → ∞ and define

Yt = lim_{h→∞} E(Xt+h − µt+h|Ft),

² The matrix, I − P + Π, is invertible since the largest eigenvalue of P − Π is less than one under Assumption 1.


which we label the filtered process of Xt. The process Yt is well defined and adapted to the filtration Ft. We are now ready to formulate our main result.

Theorem 1. The process {Yt, Ft} is a martingale with initial value Y0 = X0 + f′(Z′ − I)es0, and its increments are given by ∆Y′t = e′st Zf − e′st−1 P Zf. Moreover, we have

Xt = Yt + µt + Ut,    (1)
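For intuition, the closed-form terms in Theorem 1 can be computed along a simulated path. The sketch below assumes the same hypothetical 3-state, first-order chain; it constructs Ut and Yt from their closed forms and checks the closed-form martingale increments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical chain (k = 1, S = 3, d = 1); purely illustrative numbers.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
f = np.array([-1.0, 0.0, 1.0])            # scalar state values of ∆X_t

w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))]); pi = pi / pi.sum()
Pi = np.outer(np.ones(3), pi)
Z = np.linalg.inv(np.eye(3) - P + Pi)
mu = f @ pi

# Simulate states s_1, ..., s_n and the level X_t = X_0 + Σ ∆X_j.
n = 1000
s = np.zeros(n + 1, dtype=int)
for t in range(1, n + 1):
    s[t] = rng.choice(3, p=P[s[t - 1]])
X = np.concatenate(([0.0], np.cumsum(f[s[1:]])))

# Closed forms from Theorem 1: U_t = e′_{s_t}(I − Z)f and Y_t = X_t − µt − U_t.
U = ((np.eye(3) - Z) @ f)[s]
t_idx = np.arange(n + 1)
Y = X - mu * t_idx - U

# ∆Y_t matches the closed-form increments e′_{s_t} Zf − e′_{s_{t-1}} P Zf.
dY_closed = (Z @ f)[s[1:]] - (P @ Z @ f)[s[:-1]]
assert np.allclose(np.diff(Y), dY_closed)
# E(U_t) = 0 in closed form: (π′ − π′Z)f = 0.
assert np.isclose(pi @ ((np.eye(3) - Z) @ f), 0.0)
```

The identities hold exactly (to machine precision) for every simulated path, since they are algebraic consequences of the decomposition.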

where U′t = e′st(I − Z)f is a bounded, stationary, and ergodic process with mean zero.

All terms of the expression are given in closed form, analogous to the Granger representation theorem by Hansen (2005). It can be shown that ∆Yt is a Markov process with S^{k+1} possible state values. Analogous to P and f, let Q and g denote the transition matrix for ∆Yt and its matrix of state values, respectively. The martingale property dictates that Qg = 0 ∈ R^{S^{k+1} × d}. Note that ∆Yt is typically conditionally heterogeneous, as Q is not a matrix of rank one, which would be the structure corresponding to the case where ∆Yt is independent and identically distributed. The autocovariance structure of the terms in the martingale decomposition is stated next.

Theorem 2. We have var(∆Yt) = f′Z′(Λπ − P′Λπ P)Zf, where Λπ = diag(π1, . . . , π_{S^k}), and

cov(Ut, Ut+j) = f′(I − Z)′Λπ P^{|j|}(I − Z)f = f′Z′P′Λπ P(P^{|j|} − Π)Zf,

and the cross correlations are

cov(∆Yt, Ut+j) = f′Z′(−Λπ + P′Λπ P)P^{j+1}Zf,  for j ≥ 0,

and cov(∆Yt, Ut+j) = 0 for j < 0.

The Theorem shows that the stationary component, Ut, is autocorrelated and, in general, correlated with current and past (but not future) increments, ∆Yt, of the martingale. In the context of financial high-frequency data, where Ut is labelled market microstructure noise, these features are referred to as serially dependent and endogenous noise, which are common empirical characteristics of high-frequency data, see Hansen and Lunde (2006). Let λ2 denote the second-largest eigenvalue (in absolute value) of P. Since ‖P^j − Π‖ = O(|λ2|^j) and |λ2| < 1 under Assumption 1, it follows that the autocovariances of Ut decay to zero at an exponential rate.

A corollary to Theorem 2 is the following.

Corollary 1. The variance of the observed increments, var(∆Xt) = f′(Λπ − ππ′)f, equals

var(∆Yt) + 2var(Ut) − cov(Ut−1, Ut) − cov(Ut, Ut−1) + cov(∆Yt, Ut) + cov(Ut, ∆Yt) = f′Z′(I − P)′Λπ(I − P)Zf.
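The closed-form second moments above are easy to confirm numerically. The sketch below again assumes the hypothetical 3-state chain; it checks that the two expressions for cov(Ut, Ut+j) in Theorem 2 coincide, and that the variance identity of Corollary 1 holds.

```python
import numpy as np

P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
f = np.array([[-1.0], [0.0], [1.0]])

w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))]); pi = pi / pi.sum()
Pi = np.outer(np.ones(3), pi)
Z = np.linalg.inv(np.eye(3) - P + Pi)
L = np.diag(pi)                           # Λπ
I3 = np.eye(3)

# Theorem 2: the two closed forms of cov(U_t, U_{t+j}) agree for j >= 0.
for j in range(6):
    Pj = np.linalg.matrix_power(P, j)
    c1 = f.T @ (I3 - Z).T @ L @ Pj @ (I3 - Z) @ f
    c2 = f.T @ Z.T @ P.T @ L @ P @ (Pj - Pi) @ Z @ f
    assert np.allclose(c1, c2)

# Corollary 1: var(∆X_t) = f′(Λπ − ππ′)f = f′Z′(I − P)′Λπ(I − P)Zf.
lhs = f.T @ (L - np.outer(pi, pi)) @ f
rhs = f.T @ Z.T @ (I3 - P).T @ L @ (I3 - P) @ Z @ f
assert np.allclose(lhs, rhs)
```

Because the formulas are exact matrix identities, the assertions hold to machine precision rather than up to simulation error.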

3 Concluding Remarks and Extensions

The martingale decomposition of Xt has several applications, as is the case for the Beveridge-Nelson decomposition for ARIMA processes. In the context of macro time series, Yt and Ut might be labelled the (stochastic) trend and cycle, respectively. In the context of financial high-frequency prices, Yt and Ut could be labelled the efficient price and market microstructure noise, respectively. In that context, both Yt and Ut are of separate interest. Moreover, extracting the martingale component, Yt, offers a motivation for the Markov chain-based estimator of the quadratic variation, as in Hansen and Horel (2009). Their estimator is deduced from the long-run variance of Xt, which facilitates a central limit theory and readily available standard errors.

To conclude, we discuss extensions of the martingale decomposition that accommodate the cases with a countably infinite number of states, jumps, and inhomogeneous processes.

Suppose that the number of state values for ∆Xt is countably infinite. Then the number of Markov states for ∆Xt is countably infinite, and the Markov process can be characterized by Pr,s, r, s = 1, 2, . . .. The concept of ergodicity is well defined, and entails a unique stationary distribution, π, that satisfies πs = ∑_{r=1}^{∞} Pr,s πr. With [P²]r,s = ∑_{j=1}^{∞} Pr,j Pj,s, and higher powers defined similarly, we can define

Z_{r,s} = I_{r,s} + lim_{h→∞} ∑_{j=1}^{h} ([P^j]_{r,s} − πs),

which are well defined provided that the Markov chain is ergodic. It can now be verified that the expressions in Theorems 1 and 2 continue to apply in this case.

In financial time series the increments, ∆Xt, are often concentrated about zero, with occasional large changes that are labelled as jumps, see, e.g., Huang and Tauchen (2005) and Li (2013). Because jumps are prevalent in high-frequency financial data, the modeling of these data often entails a jump component. One can adapt the martingale decomposition (1) to include a jump component, Jt. This requires a procedure for classifying large increments as jumps; one can then proceed by removing


these jumps, e.g. using methods similar to those proposed in Mancini (2009) or Andersen et al. (2012), and then modeling the remaining returns by the Markov chain methods, to arrive at

Xt = Yt + Jt + µt + Ut,

where Jt = Jt−1 + ∆Xt δt, µt = µt−1 + µ(1 − δt), U′t = (1 − δt)e′st(I − Z)f, with δt being the indicator for the jumps.

The case with an inhomogeneous Markov chain is theoretically straightforward provided that the transition matrix, Pr,s(t) = Pr(∆Xt = xs|∆Xt−1 = xr), satisfies the ergodicity conditions for all t. From the time-varying transition matrix, P(t), one can deduce the increments ∆Yt and ∆µt, as well as Ut, all of which depend on P(t). A decomposition arises by piecing the terms together, i.e. Yt = Y0 + ∑_{j=1}^{t} ∆Yj, and again Yt can be verified to be a martingale, and similarly for the other terms. A challenge to implementing this in practice will be to estimate P(t) with a suitable degree of accuracy. This may be achieved by assuming that P is locally homogeneous (piecewise constant), or by imposing a parsimonious structure on the dynamics of P(t), similar to that in the models by Hausman et al. (1992) and Russell and Engle (2005), which can induce an inhomogeneous Markov chain for high-frequency returns.
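A minimal sketch of the jump-adjusted bookkeeping, under the assumption of a simple threshold rule for δt (the rule, the threshold, and all names are illustrative; the text does not prescribe a specific classifier):

```python
import numpy as np

def jump_adjusted_terms(dX, mu, threshold):
    """Classify large increments as jumps and accumulate J_t and µ_t via
    the recursions J_t = J_{t-1} + ∆X_t δ_t and µ_t = µ_{t-1} + µ(1 − δ_t)."""
    delta = (np.abs(dX) > threshold).astype(float)   # δ_t: jump indicator
    J = np.cumsum(dX * delta)                        # jump component J_t
    mu_t = np.cumsum(mu * (1.0 - delta))             # trend skips jump periods
    return delta, J, mu_t

# Toy increments: one obvious outlier relative to threshold 1.0.
dX = np.array([0.1, -0.2, 5.0, 0.1, -0.1])
delta, J, mu_t = jump_adjusted_terms(dX, mu=0.05, threshold=1.0)
assert np.array_equal(delta, [0.0, 0.0, 1.0, 0.0, 0.0])
assert np.allclose(J, [0.0, 0.0, 5.0, 5.0, 5.0])
```

The remaining (non-jump) increments would then be fed to the Markov chain machinery to obtain Yt and Ut as before.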

References

Adda, J. and Cooper, R. (2000), ‘Balladurette and juppette: A discrete analysis of scrapping subsidies’, Journal of Political Economy 108, 778–806.

Andersen, T., Dobrev, D. and Schaumburg, E. (2012), ‘Jump-robust volatility estimation using nearest neighbor truncation’, Journal of Econometrics 169, 75–93.

Beveridge, S. and Nelson, C. R. (1981), ‘A new approach to decomposition of economic time series into permanent and transitory components with particular attention to measurement of the ‘business cycle’’, Journal of Monetary Economics 7, 151–174.

Brémaud, P. (1999), Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues, Springer.

Fuller, W. A. (1976), Introduction to Statistical Time Series, Wiley, New York.

Gordin, M. I. (1969), ‘The central limit theorem for stationary processes’, Soviet Mathematics Doklady 10, 1174–1176.

Hamilton, J. D. (1994), Time Series Analysis, Princeton University Press, Princeton, N.J.

Hansen, L. P. (2012), ‘Dynamic valuation decomposition within stochastic economies’, Econometrica 80, 911–967.

Hansen, P. R. (2005), ‘Granger’s representation theorem: A closed-form expression for I(1) processes’, Econometrics Journal 8, 23–38.


Hansen, P. R. and Horel, G. (2009), ‘Quadratic variation by Markov chains’, working paper.

Hansen, P. R. and Lunde, A. (2006), ‘Realized variance and market microstructure noise’, Journal of Business and Economic Statistics 24, 127–218. The 2005 Invited Address with Comments and Rejoinder.

Harvey, A. C. and Koopman, S. J. (2002), ‘Signal extraction and the formulation of unobserved components models’, Econometrics Journal 3, 84–107.

Hasbrouck, J. (1995), ‘One security, many markets: Determining the contributions to price discovery’, Journal of Finance 50, 1175–1198.

Hausman, J. A., Lo, A. W. and MacKinlay, A. C. (1992), ‘An ordered probit analysis of transaction stock prices’, Journal of Financial Economics 31, 319–379.

Huang, X. and Tauchen, G. (2005), ‘The relative contribution of jumps to total price variation’, Journal of Financial Econometrics 3, 456–499.

Johansen, S. (1991), ‘Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models’, Econometrica 59, 1551–1580.

Large, J. (2011), ‘Estimating quadratic variation when quoted prices change by a constant increment’, Journal of Econometrics 160, 2–11.

Li, J. (2013), ‘Robust estimation and inference for jumps in noisy high frequency data: A local-to-continuity theory for the pre-averaging method’, Econometrica 81, 1673–1693.

Low, C. N. and Anderson, H. M. (2008), Economic applications: The Beveridge-Nelson decomposition, in R. J. Hyndman, A. B. Koehler, J. K. Ord and R. D. Snyder, eds, ‘Forecasting with Exponential Smoothing’, Springer, chapter 20, pp. 325–337.

Mancini, C. (2009), ‘Non-parametric threshold estimation for models with stochastic diffusion coefficient and jumps’, Scandinavian Journal of Statistics 36(2), 270–296.

Phillips, P. C. B. and Solo, V. (1992), ‘Asymptotics for linear processes’, The Annals of Statistics 20, 971–1001.

Russell, J. R. and Engle, R. F. (2005), ‘A discrete-state continuous-time model of financial transactions prices and times: The autoregressive conditional multinomial–autoregressive conditional duration model’, Journal of Business & Economic Statistics 23, 166–180.

Tauchen, G. (1986), ‘Finite state Markov-chain approximations to univariate and vector autoregressions’, Economics Letters 20, 177–181.


Appendix of Proofs

Lemma A.1. Suppose that Assumption 1 holds. Then
(i) (P − Π)^j = P^j − Π,
(ii) lim_{h→∞} ∑_{j=1}^{h} (P − Π)^j = Z − I, where Z = (I − P + Π)^{-1},
(iii) Zι = ι, π′Z = π′, and PZ = ZP = Z − I + Π,
(iv) Z − I = (P − Π)Z.

Parts of Lemma A.1 are well known; for instance, parts (i) and (ii) are in Brémaud (1999, chapter 6). For the sake of completeness, we include the (short) proofs of all four parts of the Lemma.

Proof. We prove (i) by induction. The identity is obvious for j = 1. Now suppose that the identity holds for j. Then (P − Π)^{j+1} = (P − Π)(P^j − Π) = P^{j+1} − ΠP^j + Π² − PΠ = P^{j+1} − Π, where the last identity follows from ΠP^j = Π² = PΠ = Π.

(ii) Since the chain is ergodic we have ‖P − Π‖ < 1, so that P^h converges to Π with P^h − Π = O(|λ2|^h), where λ2 is the second largest eigenvalue of P. It follows that ∑_{j=1}^{∞} (P − Π)^j is absolutely convergent with ∑_{j=1}^{∞} (P − Π)^j = ∑_{j=0}^{∞} (P − Π)^j − I = (I − (P − Π))^{-1} − I = Z − I.

(iii) P^j ι = ι and π′P^j = π′ for any j ∈ N; and Πι = ι and π′Π = π′, so that (P^j − Π)ι = π′(P^j − Π) = 0. The first two results follow from Z = I + ∑_{j=1}^{∞} (P^j − Π). Next, PZ = ZP = P + ∑_{j=1}^{∞} (P^{j+1} − Π), and

P + ∑_{j=1}^{∞} (P^{j+1} − Π) = P + ∑_{j=0}^{∞} (P^{j+1} − Π) − P + Π = ∑_{j=1}^{∞} (P^j − Π) + Π = Z − I + Π.

Finally, the last result follows from (Z − I) = (I − Z^{-1})Z = (I − I + P − Π)Z = (P − Π)Z. □

Proof of Theorem 1. We have E(∆X′t+h|∆Xt = xst) = e′st P^h f. So with ∆Xt = xst we have

E(Xt+h − µt+h|Ft) = Xt − µt + ∑_{j=1}^{h} E(∆Xt+j − µ|Ft) = Xt − µt + f′ ∑_{j=1}^{h} (P^j − Π)′ est,

where the last term is such that e′s ∑_{j=1}^{h} (P^j − Π) f → e′s (Z − I) f as h → ∞ by Lemma A.1.ii. Hence,

Yt = lim_{h→∞} E(Xt+h − µt+h|Ft) = Xt − µt + f′(Z − I)′est,


so that Y0 = X0 + f′(Z − I)′es0 and the increments are given by

∆Y′t = ∆X′t − µ′ + e′st(Z − I)f − e′st−1(Z − I)f = e′st Zf − e′st−1(Z + Π − I)f = e′st Zf − e′st−1 P Zf,

where we used Lemma A.1.iii. This establishes the decomposition, Xt = Yt + µt + Ut, where U′t = e′st(I − Z)f. Since Ut is a simple function of ∆Xt it follows that Ut is a stationary, ergodic, and bounded process. That E(Ut) = 0 follows from E(U′t) = ∑_s πs e′s(I − Z)f = (π′ − π′Z)f = 0, where we used Lemma A.1.iii. Moreover, {Yt, Ft} is a martingale, because Yt ∈ Ft and

E(e′s Zf − e′r P Zf | ∆Xt−1 = xr) = ∑_s Pr,s e′s Zf − e′r P Zf = e′r P Zf − e′r P Zf = 0,

for any r = 1, . . . , S^k, where r and s are short for st−1 and st, respectively (defined by ∆Xt−1 = xr and ∆Xt = xs). □

In the proof of Theorem 2 we use the following identities

∑_{r,s} πr Pr,s er e′r = ∑_{r,s} πr Pr,s es e′s = Λπ  and  ∑_{r,s} πr [P^j]r,s er e′s = Λπ P^j,    (A.1)

that are easily verified.

Proof of Theorem 2. For the variance of the martingale increments we have

E(∆Yt ∆Y′t) = E[(f′Z′est − f′Z′P′est−1)(e′st Zf − e′st−1 P Zf)]
= ∑_{r,s} πr Pr,s f′Z′(es − P′er)(e′s − e′r P)Zf
= ∑_{r,s} πr Pr,s f′Z′(es e′s − es e′r P − P′er e′s + P′er e′r P)Zf
= f′Z′(Λπ − P′Λπ P − P′Λπ P + P′Λπ P)Zf = f′Z′(Λπ − P′Λπ P)Zf,

where we used (A.1) in the second last equality.

Concerning the stationary component of the decomposition, we have for j ≥ 0 that

E(Ut U′t+j) = E[f′(I − Z)′ est e′st+j (I − Z)f] = ∑_{r,s} πr [P^j]r,s f′(I − Z)′ er e′s (I − Z)f
= f′(I − Z)′ Λπ P^j (I − Z)f = f′Z′(Π − P)′ Λπ P^j (Π − P)Zf = f′Z′P′Λπ P (P^j − Π)Zf,

where we used Lemma A.1.iv in the second last equality.

Finally, for the cross covariance we first note that

∑_{r,s,v} πr Pr,s [P^j]s,v es e′v = ∑_{s,v} πs [P^j]s,v es e′v = Λπ P^j,
∑_{r,s,v} πr Pr,s [P^j]s,v er e′v = ∑_{r,v} πr [P^{j+1}]r,v er e′v = Λπ P^{j+1},

where the first identities in the two equations follow from ∑_r πr Pr,s = πs and ∑_s Pr,s [P^j]s,v = [P^{j+1}]r,v, respectively, and the last equalities both follow from the last variant of (A.1). So for j ≥ 0 we have

E(∆Yt U′t+j) = E[(e′st Zf − e′st−1 P Zf)′ e′st+j (I − Z)f] = ∑_{r,s,v} πr Pr,s [P^j]s,v f′Z′(es − P′er) e′v (Π − P)Zf
= f′Z′[Λπ P^j (Π − P) − P′Λπ P^{j+1} (Π − P)]Zf = f′Z′[ππ′ − Λπ P^{j+1} − ππ′ + P′Λπ P^{j+2}]Zf
= f′Z′(−Λπ + P′Λπ P)P^{j+1} Zf.

That E(∆Yt U′t+j) = 0 for j < 0 can be verified similarly. However, this is not required, because the zero

covariances are a simple consequence of the martingale property of Yt that was established in the proof of Theorem 1. □

Proof of Corollary 1. By substituting the expressions from Theorem 2 and using cov(∆Yt, Ut−1) = 0, one finds that the expression in Corollary 1 equals f′Z′AZf, where

A = (Λπ − P′Λπ P) + 2P′Λπ P(I − Π) − P′Λπ P(P − Π) − (P − Π)′P′Λπ P + (−Λπ + P′Λπ P)P + P′(−Λπ + P′Λπ P)
= Λπ + P′Λπ P − 2ππ′ + ππ′ + ππ′ − Λπ P − P′Λπ = (I − P)′Λπ(I − P),

which proves the equality in the Corollary. That f′Z′AZf = f′(Λπ − ππ′)f follows from

(I − P)′Λπ(I − P) = (I − P + Π)′Λπ(I − P + Π) − ππ′ = (I − P + Π)′(Λπ − ππ′)(I − P + Π),

which equals (Z^{-1})′(Λπ − ππ′)Z^{-1}. This completes the proof. □
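The algebraic identities used throughout the Appendix are also easy to confirm numerically. A quick sketch for a hypothetical 3-state ergodic chain (the matrix is an illustrative assumption), checking the four parts of Lemma A.1:

```python
import numpy as np

P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))]); pi = pi / pi.sum()
Pi = np.outer(np.ones(3), pi)
Z = np.linalg.inv(np.eye(3) - P + Pi)
I3 = np.eye(3)

# (i) (P − Π)^j = P^j − Π for j = 1, 2, ...
for j in range(1, 6):
    assert np.allclose(np.linalg.matrix_power(P - Pi, j),
                       np.linalg.matrix_power(P, j) - Pi)
# (ii) Σ_{j=1}^h (P − Π)^j → Z − I (geometric convergence).
acc = sum(np.linalg.matrix_power(P - Pi, j) for j in range(1, 200))
assert np.allclose(acc, Z - I3)
# (iii) Zι = ι, π′Z = π′, and PZ = ZP = Z − I + Π.
assert np.allclose(Z @ np.ones(3), np.ones(3))
assert np.allclose(pi @ Z, pi)
assert np.allclose(P @ Z, Z @ P) and np.allclose(P @ Z, Z - I3 + Pi)
# (iv) Z − I = (P − Π)Z.
assert np.allclose(Z - I3, (P - Pi) @ Z)
```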
