IMPROVED BOUNDS FOR THE INVERSES OF DIAGONALLY DOMINANT TRIDIAGONAL MATRICES

EZEQUIEL DRATMAN1,2 AND GUILLERMO MATERA2,3

Abstract. We obtain new bounds on the entries of the inverse of a diagonally-dominant tridiagonal matrix. Our bounds improve the best previous bounds, due to H.-B. Li et al. We apply our bounds to the tridiagonal matrices arising in the second-order finite-difference discretization of certain boundary-value problems of parabolic type, establishing asymptotically optimal bounds.

1. Introduction

Tridiagonal matrices arise in connection with several scientific and technical problems. For example, the discretization by finite differences of second-order two-point boundary-value problems for ordinary differential equations requires the solution of linear systems of large size defined by tridiagonal matrices (see, e.g., [AMR95] or [LeV07]). In particular, conditioning and computation of inverses of nonsingular tridiagonal matrices have been the subject of many studies (see, e.g., [Hig02, §15.6]). Explicit formulas for inverses of tridiagonal matrices are due to Gantmacher, Krein, Ikebe, Cao, Stewart, among others (see [Hig02, §15.7] for a brief historical account of these results). From such formulas one deduces an algorithm for computing all the entries of the inverse of a tridiagonal n × n matrix with O(n) flops. Nevertheless, such a computation may break down, due to overflow and underflow (see [Hig02, §15.6]). This suggests that estimates for the entries to be computed may be relevant.

In this paper we shall be concerned with inverses of diagonally-dominant tridiagonal matrices. Such matrices have been intensively studied, and several estimates on the entries of their inverses are available in the literature (see, e.g., [SJ96], [Nab98], [PP01], [LHLL10]).

The best estimates, to the authors' knowledge, are due to [LHLL10]. Our main result establishes computable two-sided bounds on the entries of the inverse of a diagonally-dominant matrix with real coefficients which improve those of [LHLL10]. In fact, a comparison on two classes of tridiagonal matrices which arise in the discretization of certain one-dimensional two-point boundary-value problems shows that there is an exponential gap between our bounds and those of [LHLL10]. We also determine the sign distribution and provide an efficient algorithm for computing the entries of the inverse of the matrix under consideration.

Our approach relies on the analysis of the dependence of the diagonal entries of the inverse A^{-1} of the matrix under consideration on certain quantities α_i and β_i, which are quotients of consecutive upper-left and lower-right principal minors of A. The off-diagonal entries are then expressed in terms of these quantities and the diagonal entries of A^{-1}. We establish simple recursive formulas for these quantities, which may be evaluated in such a way as to furnish an efficient algorithm for computing the entries of A^{-1}. Further, the sign distribution of the entries of A^{-1} is also obtained.

Date: January 12, 2017.
1991 Mathematics Subject Classification. 65F50, 39A10, 15A45, 15B48.
Key words and phrases. Real tridiagonal matrix, diagonally-dominant matrix, inverses, sign distribution, bounds, second-order finite-difference discretization.
The authors were partially supported by the grants PIP CONICET 11220130100598, PIO CONICET-UNGS 14420140100027 and UNGS 30/3084.


Our paper is organized as follows. In Section 2 we fix notations and define the quantities α_i and β_i. In Section 3 we obtain the formulas for the entries of the inverse of the matrix under consideration in terms of the α_i and β_i. From these formulas we deduce the algorithm for computing them, the sign distribution and the bounds. Finally, in Section 4 we apply our bounds to matrices arising from the discretization of certain boundary-value problems, comparing our results with those of [LHLL10].

2. Assumptions and notations

Let A ∈ R^{n×n} be a tridiagonal matrix, namely

  A :=  [ b_1  c_1                             ]
        [ a_2  b_2  c_2                        ]
        [      ...  ...  ...                   ]
        [           a_{n-1}  b_{n-1}  c_{n-1}  ]
        [                    a_n      b_n      ]

For convenience of notation, we further define a_1 := 0 and c_n := 0. In the sequel, we shall suppose that A satisfies the following assumptions:
(H1) c_i a_{i+1} ≠ 0 for 1 ≤ i ≤ n − 1,
(H2) b_j > 0 for 1 ≤ j ≤ n,
(H3) A is (row) diagonally dominant, that is, |b_i| ≥ |a_i| + |c_i| for 1 ≤ i ≤ n.
Let m_i := b_i − |a_i| − |c_i| for 1 ≤ i ≤ n. By (H3) it follows that m_i ≥ 0 for 1 ≤ i ≤ n and A can be expressed in the following way:

(1)   A :=  [ |a_1|+|c_1|+m_1   c_1                                  ]
            [ a_2               |a_2|+|c_2|+m_2   c_2                ]
            [                   ...               ...      ...      ]
            [                             a_n     |a_n|+|c_n|+m_n   ]

Our aim is to characterize the nonsingularity of A in terms of the coefficients a_i, c_i and m_i, and to estimate the entries of its inverse A^{-1}.

Remark 1. Assumptions (H1), (H2) and (H3) do not restrict the generality of our arguments. More precisely:
• if c_i a_{i+1} = 0 for some 1 ≤ i ≤ n − 1, then the matrix A is not irreducible and the analysis of its invertibility may be reduced to that of two tridiagonal matrices of smaller size satisfying our hypotheses;
• if b_i = 0 for some 1 ≤ i ≤ n, then (H3) implies that the ith row of A is zero, and therefore A is singular;
• if b_i < 0 for some 1 ≤ i ≤ n, then we may multiply A by a suitable diagonal matrix so that all the diagonal elements of the product are positive;
• if the matrix is column diagonally dominant, then we may argue with the transpose matrix A^t.

Throughout this paper, we shall use the following notations:
• δ_i := sgn(c_i a_{i+1}) and ε_i := 1 − sgn(c_i a_{i+1}) for 1 ≤ i ≤ n − 1;
• J(A) := {i ∈ {1, . . . , n} : m_i > 0};
• ∆(A) := {i ∈ {1, . . . , n − 1} : δ_i < 0}.
For 1 ≤ r ≤ s ≤ n, we denote by A^{(r,s)} ∈ R^{(s−r+1)×(s−r+1)} the following submatrix of A:

  A^{(r,s)} :=  [ b_r      c_r                   ]
                [ a_{r+1}  b_{r+1}  c_{r+1}      ]
                [          ...      ...    ...   ]
                [                   a_s     b_s  ]


We further denote A^{(r,s)} := 1 for r > s.

Remark 2. Under hypotheses (H1), (H2) and (H3), if r > 1, then the submatrix A^{(r,s)} is nonsingular. Indeed, as b_r ≥ |a_r| + |c_r| and |a_r| > 0, we conclude that A^{(r,s)} is diagonally dominant, satisfies (H1), (H2) and (H3), and is strictly diagonally dominant in its first row. As we shall assert in Proposition 4 below, this implies the claim. A similar argument shows that A^{(r,s)} is nonsingular for any s < n.

For 1 ≤ i ≤ n, we define

(2)   α_i := det(A^{(i+1,n)}) / det(A^{(i,n)}),      β_i := det(A^{(1,i−1)}) / det(A^{(1,i)}).

Remark 2 shows that α_i and β_{i−1} are well-defined for 2 ≤ i ≤ n. On the other hand, α_1 and β_n are well-defined only if A is nonsingular. We have the following recursive identities for the quantities α_i and β_i. For the identities concerning α_1 and β_n we assume that A is nonsingular.

Lemma 3. We have

  α_i = 1 / (|a_i| + m_i + |c_i|(1 − δ_i |a_{i+1}| α_{i+1}))     (1 ≤ i ≤ n − 1),      α_n = 1 / (|a_n| + m_n),
  β_i = 1 / (|c_i| + m_i + |a_i|(1 − δ_{i−1} |c_{i−1}| β_{i−1}))   (2 ≤ i ≤ n),         β_1 = 1 / (|c_1| + m_1).

Proof. The identities for α_n and β_1 are immediate from their definitions. Concerning α_i with 1 ≤ i ≤ n − 1, considering the expansion of the determinant det(A^{(i,n)}) by its first row we obtain

  α_i = det(A^{(i+1,n)}) / ( (|a_i| + |c_i| + m_i) det(A^{(i+1,n)}) − c_i a_{i+1} det(A^{(i+2,n)}) ).

This readily implies the identity of the statement of the lemma. The proof of the recursive identity for β_i with 2 ≤ i ≤ n follows by a similar argument. □
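To make the recursions of Lemma 3 concrete, here is a small 3 × 3 example (ours, for illustration only; it is not used elsewhere in the paper), written in LaTeX notation:

\[
A=\begin{pmatrix} 3 & -1 & 0\\ -2 & 4 & -1\\ 0 & -1 & 2\end{pmatrix},\qquad
(m_1,m_2,m_3)=(2,1,1),\qquad \delta_1=\delta_2=1 .
\]
Running the backward recursion for the $\alpha_i$ and the forward recursion for the $\beta_i$ gives
\[
\alpha_3=\tfrac12,\quad
\alpha_2=\frac{1}{2+1+1\cdot(1-1\cdot\tfrac12)}=\tfrac27,\quad
\alpha_1=\frac{1}{0+2+1\cdot(1-2\cdot\tfrac27)}=\tfrac{7}{17},
\]
\[
\beta_1=\tfrac13,\quad
\beta_2=\frac{1}{1+1+2\cdot(1-1\cdot\tfrac13)}=\tfrac{3}{10},\quad
\beta_3=\frac{1}{0+1+1\cdot(1-1\cdot\tfrac{3}{10})}=\tfrac{10}{17},
\]
in agreement with the quotients of (2), since $\det A = 17$, $\det(A^{(2,3)}) = 7$ and $\det(A^{(1,2)}) = 10$.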

3. On the entries of the inverse matrix of A

We start with a characterization of nonsingularity for matrices A as in (1). We have the following result.

Proposition 4. Let A ∈ R^{n×n} be a matrix as in (1) satisfying hypotheses (H1), (H2) and (H3). Then A is nonsingular if and only if J(A) ∪ ∆(A) ≠ ∅.

Proof. It is well-known that A is nonsingular if J(A) ≠ ∅ (see, e.g., [HJ85, Corollary 6.2.27]). Now assume that J(A) = ∅, namely b_i = |a_i| + |c_i| for 1 ≤ i ≤ n. We claim that A is nonsingular if and only if ∆(A) ≠ ∅. Arguing inductively, for n = 2 we have det(A) = b_1 b_2 − a_2 c_1 = |a_2 c_1| − a_2 c_1 ≠ 0 if and only if c_1 a_2 < 0.

Next suppose that n > 2 and that the claim holds for any matrix in R^{(n−1)×(n−1)} satisfying (H1), (H2) and (H3). As b_1 = |c_1|, subtracting sgn(c_1) times the first column of A from its second column we obtain

  B :=  [ b_1   0                                       ]
        [ a_2   b_2 − sgn(c_1)a_2   c_2                 ]
        [       a_3                 ...    ...          ]
        [                           ...    ...  c_{n−1} ]
        [                                  a_n  b_n     ]

If c_1 a_2 < 0, then J(B^{(2,n)}) ≠ ∅ and B is nonsingular by the inductive hypothesis. On the other hand, for c_1 a_2 > 0 we have that B^{(2,n)} satisfies conditions (H1), (H2) and (H3), and the inductive hypothesis implies that B is nonsingular if and only if ∆(B^{(2,n)}) ≠ ∅. As in this case ∆(B^{(2,n)}) ≠ ∅ if and only if ∆(A) ≠ ∅, and det A = det B, we conclude that A is nonsingular if and only if ∆(A) ≠ ∅. □

Proposition 4 yields an efficient test to decide whether a tridiagonal matrix as in (1) is invertible. In spite of its simplicity, we have not been able to locate this criterion in the literature.
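The test is immediate to implement. The following sketch (ours; the function name, the array conventions and the use of NumPy are assumptions for illustration, not part of the paper) checks the condition J(A) ∪ ∆(A) ≠ ∅ for a matrix given by the vectors (a_2, . . . , a_n), (c_1, . . . , c_{n−1}) and (m_1, . . . , m_n) of (1).

import numpy as np

def is_nonsingular(a, c, m):
    """Nonsingularity test of Proposition 4 for the matrix A of (1).

    a: sub-diagonal entries (a_2, ..., a_n)
    c: super-diagonal entries (c_1, ..., c_{n-1})
    m: the quantities (m_1, ..., m_n)
    """
    a, c, m = (np.asarray(v, dtype=float) for v in (a, c, m))
    J_nonempty = bool(np.any(m > 0))           # J(A) != {}: some m_i > 0
    Delta_nonempty = bool(np.any(c * a < 0))   # Delta(A) != {}: some c_i * a_{i+1} < 0
    return J_nonempty or Delta_nonempty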

3.1. The entries of A^{-1} in terms of the α_i and β_i. In the sequel we shall assume that the tridiagonal matrix A of (1) is nonsingular and denote A^{-1} := (c_{ij})_{1≤i,j≤n}. Now we express the diagonal elements c_{ii} in terms of the quantities α_i and β_i of (2).

Lemma 5. For a tridiagonal matrix A ∈ R^{n×n} as in (1), we have

  c_{11} = α_1,
  c_{ii} = 1 / ( m_i + |a_i|(1 − δ_{i−1} |c_{i−1}| β_{i−1}) + |c_i|(1 − δ_i |a_{i+1}| α_{i+1}) )   (2 ≤ i ≤ n − 1),
  c_{nn} = β_n.

Proof. By the expression of A^{-1} in terms of the adjoint matrix of A and the definition of α_1 and β_n, we obtain

  c_{11} = det(A^{(2,n)}) / det A = α_1,      c_{nn} = det(A^{(1,n−1)}) / det A = β_n.

Next, for 2 ≤ i ≤ n − 1 we consider the expression of c_{ii}. The expansion of det(A) by its ith row yields

  det(A) = b_i det(A^{(1,i−1)}) det(A^{(i+1,n)}) − a_i c_{i−1} det(A^{(1,i−2)}) det(A^{(i+1,n)}) − a_{i+1} c_i det(A^{(1,i−1)}) det(A^{(i+2,n)}).

As a consequence,

(3)   c_{ii} = det(A^{(1,i−1)}) det(A^{(i+1,n)}) / det A = 1 / ( b_i − a_i c_{i−1} β_{i−1} − a_{i+1} c_i α_{i+1} ).

By the definition of m_i, δ_{i−1} and δ_i we easily deduce the lemma. □

Next we consider the remaining entries c_{ij} of A^{-1}.

Lemma 6. Let A ∈ R^{n×n} be a nonsingular matrix as in (1). If 1 ≤ j < i ≤ n, then

  c_{ij} = (−1)^{i+j} a_{j+1} β_j · · · a_i β_{i−1} c_{ii} = (−1)^{i+j} a_{j+1} α_{j+1} · · · a_i α_i c_{jj}.

On the other hand, for 1 ≤ i < j ≤ n we have

  c_{ij} = (−1)^{i+j} c_i α_{i+1} · · · c_{j−1} α_j c_{ii} = (−1)^{i+j} c_i β_i · · · c_{j−1} β_{j−1} c_{jj}.

Proof. First suppose that 1 ≤ j < i ≤ n. By the expression of A^{-1} in terms of the adjoint matrix of A we obtain

  c_{ij} = (−1)^{i+j} det(A^{(1,j−1)}) det(A^{(i+1,n)}) a_{j+1} · · · a_i / det A
         = (−1)^{i+j} β_j · · · β_{i−1} a_{j+1} · · · a_i det(A^{(1,i−1)}) det(A^{(i+1,n)}) / det A.

Then the first identity of (3) shows the first assertion. On the other hand, expressing the product det(A^{(1,j−1)}) det(A^{(i+1,n)}) in terms of α_{j+1} · · · α_i in a similar manner proves the second assertion. The statement for 1 ≤ i < j ≤ n follows by a similar argument. □


We shall further need the following quantities:

  κ_n := 0,     κ_i := |c_i| (1 − δ_i |a_{i+1}| α_{i+1})         (1 ≤ i ≤ n − 1),
  κ*_1 := 0,    κ*_i := |a_i| (1 − δ_{i−1} |c_{i−1}| β_{i−1})    (2 ≤ i ≤ n).

From Lemma 3 we easily deduce the following identities for 1 ≤ i ≤ n:

(4)   α_i = 1 / (|a_i| + m_i + κ_i),      β_i = 1 / (|c_i| + m_i + κ*_i).

Further, combining Lemma 3 and (4) we obtain the following identities:

(5)   κ_n = 0,    κ_i = |c_i| (ε_i |a_{i+1}| + m_{i+1} + κ_{i+1}) / (|a_{i+1}| + m_{i+1} + κ_{i+1})        (1 ≤ i ≤ n − 1),

(6)   κ*_1 = 0,   κ*_i = |a_i| (ε_{i−1} |c_{i−1}| + m_{i−1} + κ*_{i−1}) / (|c_{i−1}| + m_{i−1} + κ*_{i−1})   (2 ≤ i ≤ n).

Finally, by Lemma 5 we easily obtain the following result.

Corollary 7. With notations as above, for 1 ≤ i ≤ n we have

  c_{ii} = 1 / (m_i + κ_i + κ*_i).
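Continuing the illustrative 3 × 3 example given after Lemma 3 (again ours, for illustration only), the quantities κ_i and κ*_i and the resulting diagonal entries of the inverse are

\[
\kappa_1=\tfrac37,\quad \kappa_2=\tfrac12,\quad \kappa_3=0,\qquad
\kappa^*_1=0,\quad \kappa^*_2=\tfrac43,\quad \kappa^*_3=\tfrac{7}{10},
\]
\[
c_{11}=\frac{1}{2+\tfrac37}=\tfrac{7}{17},\qquad
c_{22}=\frac{1}{1+\tfrac12+\tfrac43}=\tfrac{6}{17},\qquad
c_{33}=\frac{1}{1+\tfrac{7}{10}}=\tfrac{10}{17},
\]
which are indeed the diagonal entries of A^{-1} for that matrix.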

As ε_i ∈ {0, 2} for 1 ≤ i ≤ n − 1, from (5) and (6) it follows that κ_i ≥ 0 and κ*_i ≥ 0 for 1 ≤ i ≤ n. Thus (4) proves that α_i > 0 and β_i > 0 for 1 ≤ i ≤ n, and Corollary 7 implies c_{ii} > 0 for 1 ≤ i ≤ n. As a consequence, by Lemma 6 we can explicitly determine the signs of all the remaining elements of the inverse matrix A^{-1}, as the following result asserts (compare with [LHLL10, Theorem 3.1]).

Corollary 8. With notations as above, we have

  sgn(c_{ij}) = sgn((−1)^{i+j} a_{j+1} · · · a_i)    for 1 ≤ j < i ≤ n,
  sgn(c_{ij}) = sgn((−1)^{i+j} c_i · · · c_{j−1})    for 1 ≤ i < j ≤ n.

3.1.1. An algorithm for computing A^{-1}.

obtain the following algorithm for computing all the entries of the inverse matrix

A−1 .

Algorithm 1.

(a2 , . . . , an ), (c1 , . . . , cn−1 ) and (m1 , . . . , mn ) as in (1). := (cij )1≤i,j≤n , or failure. ∗ Set κn := 0 and κ1 := 0 For i = 1, . . . , n − 1 do Compute ρn−i+1 := |an−i+1 | + mn−i+1 + κn−i+1 ∗ ∗ Compute ρi := |ci | + mi + κi Compute %n−i+1 := |cn−i |/ρn−i+1 **** [%n−i+1 = |cn−i |αn−i+1 ] ∗ ∗ ∗ Compute %i := |ai+1 |/ρi **** [%i = |ai+1 |βi ] Compute κn−i := %n−i+1 (ρn−i+1 − δn−i |an−i+1 |) ∗ ∗ ∗ Compute κi+1 := %i (ρi − δi |ci |) Input: Vectors

−1 Output: A

**** ****

End do For

i = 1, . . . , n do ∗ −1 Compute cii := (mi + κi + κi )

End do For

i = 1, . . . , n − 1 do For j = i, . . . , n − 1 ci,j+1 := −sgn(cj ) %j+1 · ci,j cj+1,i := −sgn(aj+2 ) %∗j+1 · cj,i End For

:= −cj · αj+1 · ci,j ] := −aj+2 · βj+1 · cj,i ]

**** [ci,j+1 **** [cj+1,i

**** ****

We prove the correctness of this algorithm and determine its complexity.

Lemma 9. Algorithm 1 correctly computes the entries of A^{-1} with n² + 4n − 4 multiplications/divisions and 7n − 6 additions/subtractions.

Proof. We first discuss the correctness. The fact that α_{n−i+1} = 1/ρ_{n−i+1} and β_i = 1/ρ*_i for 1 ≤ i ≤ n − 1 follows from (4). From (5) and (6) we deduce the correctness of the computation of κ_{n−i} and κ*_{i+1} for 1 ≤ i ≤ n − 1. Then Lemma 6 and Corollary 7 justify the correctness of the computation of c_{ij} for 1 ≤ i, j ≤ n.

Concerning the number of arithmetic operations, it is easy to see that the computations in the first for-loop consist of 4n − 4 multiplications/divisions and 6n − 6 additions/subtractions. We compute ρ_{n−i+1} as ρ_{n−i+1} = |a_{n−i+1}| + (m_{n−i+1} + κ_{n−i+1}) and keep the addition m_{n−i+1} + κ_{n−i+1} for 1 ≤ i ≤ n − 1. This saves n − 1 additions in the computation of c_{ii} for 1 ≤ i ≤ n. Therefore, such a computation can be performed with n divisions and n additions. Finally, the last for-loop requires Σ_{i=1}^{n−1} Σ_{j=i}^{n−1} 2 = n² − n multiplications. Taking into account the number of arithmetic operations of each for-loop we easily deduce the statement of the lemma. □
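For readers who prefer working code, the following Python sketch (ours; the function name, the 0-based array conventions and the use of NumPy are assumptions for illustration) computes A^{-1} directly from the recursions of Lemma 3 together with Lemma 5 and Lemma 6; it is not a line-by-line transcription of Algorithm 1.

import numpy as np

def tridiag_inverse(a, c, m):
    """Inverse of the matrix A of (1), assuming A is nonsingular (Proposition 4).

    a: sub-diagonal entries (a_2, ..., a_n)       (length n-1)
    c: super-diagonal entries (c_1, ..., c_{n-1}) (length n-1)
    m: the quantities (m_1, ..., m_n)             (length n)
    """
    n = len(m)
    a = np.concatenate(([0.0], np.asarray(a, dtype=float)))  # a[i-1] = a_i, with a_1 = 0
    c = np.concatenate((np.asarray(c, dtype=float), [0.0]))  # c[i-1] = c_i, with c_n = 0
    m = np.asarray(m, dtype=float)

    # alpha[i-1] = alpha_i, computed backwards by the recursion of Lemma 3
    alpha = np.zeros(n)
    alpha[n - 1] = 1.0 / (abs(a[n - 1]) + m[n - 1])
    for i in range(n - 2, -1, -1):
        d = np.sign(c[i] * a[i + 1])  # delta_i
        alpha[i] = 1.0 / (abs(a[i]) + m[i]
                          + abs(c[i]) * (1.0 - d * abs(a[i + 1]) * alpha[i + 1]))

    # beta[i-1] = beta_i, computed forwards by the recursion of Lemma 3
    beta = np.zeros(n)
    beta[0] = 1.0 / (abs(c[0]) + m[0])
    for i in range(1, n):
        d = np.sign(c[i - 1] * a[i])  # delta_{i-1}
        beta[i] = 1.0 / (abs(c[i]) + m[i]
                         + abs(a[i]) * (1.0 - d * abs(c[i - 1]) * beta[i - 1]))

    # Diagonal entries via Lemma 5, off-diagonal entries via Lemma 6
    C = np.zeros((n, n))
    C[0, 0], C[n - 1, n - 1] = alpha[0], beta[n - 1]
    for i in range(1, n - 1):
        dl = np.sign(c[i - 1] * a[i])      # delta_{i-1}
        dr = np.sign(c[i] * a[i + 1])      # delta_i
        C[i, i] = 1.0 / (m[i]
                         + abs(a[i]) * (1.0 - dl * abs(c[i - 1]) * beta[i - 1])
                         + abs(c[i]) * (1.0 - dr * abs(a[i + 1]) * alpha[i + 1]))
    for i in range(n):
        for j in range(i + 1, n):      # upper triangle: move right along row i
            C[i, j] = -c[j - 1] * alpha[j] * C[i, j - 1]
        for k in range(i + 1, n):      # lower triangle: move down column i
            C[k, i] = -a[k] * alpha[k] * C[k - 1, i]
    return C

For instance, for the 3 × 3 example used after Lemma 3, tridiag_inverse([-2, -1], [-1, -1], [2, 1, 1]) returns (1/17)·[[7, 2, 1], [4, 6, 3], [2, 3, 10]], which is the inverse of that matrix.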

Comparison with the existing literature indicates that Algorithm 1 improves all previous ones, except for the one in [LHLL10]. In fact, we have the following table comparing costs, most of which is borrowed from [LHLL10].

Table 1. Costs of different algorithms for computing A^{-1}.

  Algorithm                        Multiplications/divisions   Additions/subtractions
  Huang & McColl [HM97]            3n² + 2n − 4                4n − 2
  El-Mikkawy & Karawia [EMK06]     3n² + 2n − 4                4n − 2
  Ikebe [Ike79]                    n² + 12n − 12               4n − 6
  Li et al. [LHLL10]               n² + 6n − 6                 4n − 4
  Algorithm 1                      n² + 4n − 4                 7n − 6

Nevertheless, Algorithm 1 may behave significantly better for certain particular families of (nonsingular) matrices A. In fact, if ∆(A) = ∅, then Proposition 4 implies J(A) ≠ ∅, and in the next section we prove that κ_i = 0 for j_max ≤ i ≤ n and κ*_i = 0 for 1 ≤ i ≤ j_min, where j_min := min J(A) and j_max := max J(A). On the other hand, for ∆(A) ≠ ∅, we shall see in the next section that κ_i = 0 for i_max < i ≤ n and κ*_i = 0 for 1 ≤ i ≤ i_min, where i_max := max({j_max − 1} ∪ ∆(A)) and i_min := min({j_min} ∪ ∆(A)). As a consequence, we have the following result.

Corollary 10. If ∆(A) = ∅, then Algorithm 1 requires n² + 3n − 3 + j_max − j_min multiplications/divisions and 5n − 4 + 2(j_max − j_min) additions/subtractions, while for ∆(A) ≠ ∅ Algorithm 1 can be implemented to run with n² + 3n − 3 + i_max − i_min multiplications/divisions and 5n − 4 + 2(i_max − i_min) additions/subtractions.

3.2. Bounds for the entries of A^{-1}. Next we obtain two-sided bounds for the entries c_{ij} of A^{-1}. Our bounds will be expressed in terms of the following quantities:

  m_max := max_{1≤j≤n} m_j,    a_max := max_{1≤j≤n} |a_j|,    c_max := max_{1≤j≤n} |c_j|,
  m_min := min_{1≤j≤n} m_j,    a_min := min_{1≤j≤n} |a_j|,    c_min := min_{1≤j≤n} |c_j|.

A critical step for our estimates will be to bound the quantities κ_i and κ*_i. The bounds depend on whether ∆(A) = ∅ or not, cases that will be treated separately.

3.2.1. Bounds for κ_i and κ*_i. Assume that ∆(A) = ∅. Proposition 4 implies J(A) ≠ ∅, which shows that the following quantities are well-defined:

  j_min := min J(A),      j_max := max J(A).

Since m_i = 0 for j_max < i ≤ n, and ε_i = 0 for 1 ≤ i ≤ n − 1 because ∆(A) = ∅, by (5) we conclude that κ_i = 0 for j_max ≤ i ≤ n. A similar argument using (6) proves that κ*_i = 0 for 1 ≤ i ≤ j_min.

(7)

(8)

jmax := max J(A).

As a consequence, we may rewrite (5) and (6) in the following way:

κi = 0 (jmax ≤ i ≤ n),

κi =

κ∗i = 0 (1 ≤ i ≤ jmin ),

κ∗i =

We start nding upper bounds for the Lemma 11.

κi

and

|ci | 1+

|ai+1 | mi+1 +κi+1

|ai | 1+

|ci−1 | mi−1 +κ∗i−1

(1 ≤ i < jmax ), (jmin < i ≤ n).

κ∗i .

For 1 ≤ i ≤ n, we have the following upper bounds:

p (mmax + amin − cmax )2 + 4cmax mmax , κi ≤ κ := 2 p amax − mmax − cmin + (mmax + cmin − amax )2 + 4amax mmax κ∗i ≤ κ∗ := . 2 Proof. Consider the sequences (ˆκi )i∈N and (ˆκ∗i )i∈N dened as follows: cmax amax κ ˆ 1 := 0, κ ˆ i+1 := , κ ˆ ∗1 := 0, κ ˆ ∗i+1 := . amin cmin 1 + mmax +ˆκi 1 + mmax +ˆ κ∗ cmax − mmax − amin +

i

κi ≤ κ ˆ n+1−i and κ∗i ≤ κ ˆ ∗i para 1 ≤ i ≤ n. Further, both are bounded increasing sequences, because κ ˆ i ≤ cmax and κ ˆ ∗i ≤ amax for every i ≥ 1. ∗ As a consequence, they converge and their limits, denoted by κ and κ respectively, satisfy     amin cmin 1+ κ = cmax , 1+ κ∗ = amax . mmax + κ mmax + κ∗ From (7) and (8) we easily see that



The statement of the lemma easily follows. From (7) and (8) and Lemma 11 we readily obtain the following two-side bounds. Corollary 12.

where λi :=

We have λi ≤ κi ≤ Λi for 1 ≤ i < jmax and λ∗i ≤ κ∗i ≤ Λ∗i for jmin < i ≤ n,

|ci |mi+1 |ci |(mi+1 + κ) |ai |mi−1 |ai |(mi−1 + κ∗ ) , λ∗i := . , Λi := , Λ∗i := |ai+1 | + mi+1 |ai+1 | + mi+1 + κ |ci−1 | + mi−1 |ci−1 | + mi−1 + κ∗

Now we assume that

∆(A) 6= ∅.

As a consequence of Proposition 4, the matrix

invertible. Proceeding as previously, we start bounding the numbers

κi

and

κ∗i

A

is

in this case.

Let

∆min := min ∆(A), ∆max := max ∆(A), imax := max{jmax − 1, ∆max }, imin := min{jmin , ∆min }. Observe that

i = mi+1 = 0

for

i > imax ,

while

i−1 = mi−1 = 0

for

i ≤ imin .

As a

consequence, we may rewrite (5) and (6) in the following way: (9)

κi = 0 (imax < i ≤ n),

(10)

κ∗i = 0 (1 ≤ i ≤ imin ),

i |ai+1 | + mi+1 + κi+1 |ai+1 | + mi+1 + κi+1 i−1 |ci−1 | + mi−1 + κ∗i−1 κ∗i = |ai | |ci−1 | + mi−1 + κ∗i−1 κi = |ci |

Analogous to Lemma 11, we have the following result.

(1 ≤ i ≤ imax ), (imin < i ≤ n).

8

E. DRATMAN AND G. MATERA

Let par(i) ∈ {0, 1} denote the parity of i ∈ N. For 1 ≤ i ≤ n, we have

Lemma 13.

κ∗i ≤ χ∗par(i) ,

κi ≤ χpar(i) ,

where 



χ0 := cmax 1 +

amax amax +mmin



,

χ1 :=

 χ∗0 := amax 1 +

cmax cmax +mmin



,

χ∗1 :=

Proof.

Consider the sequences

κ ˆ 1 := 0, κ ˆ i+1 κ ˆ ∗1 := 0, κ ˆ ∗i+1

(amax +cmax +mmin )2 +4amax cmax , 2



(ˆ κi )i∈N

amax −cmax −mmin +

(ˆ κ∗i )i∈N

and

(amax +cmax +mmin )2 +4amax cmax . 2

dened as follows:

  amax := cmax 1 + amax + mmin + κ ˆi   cmax := amax 1 + cmax + mmin + κ ˆ ∗i

i ≥ 1,

for

for

i ≥ 1.

κ∗i ≤ κ ˆ ∗i for 1 ≤ i ≤ n. Further, it is easy to see that bounded sequences. Indeed, for j ≥ 2 it follows that     ∗ cmax cmax < κ ˆ j ≤ cmax 1 + amaxamax , a < κ ˆ ≤ a 1 + max max j +mmin cmax +mmin . κi ≤ κ ˆ n+1−i

Observe that are

cmax −amax −mmin +

and

Finally, we claim that the following inequalities hold for

The fact that

j ≥ 1:

κ ˆ 2j−1 < κ ˆ 2j+1 < κ ˆ 2j ,

κ ˆ ∗2j−1 < κ ˆ ∗2j+1 < κ ˆ ∗2j ,

κ ˆ 2j+1 < κ ˆ 2j+2 < κ ˆ 2j ,

κ ˆ ∗2j+1 < κ ˆ ∗2j+2 < κ ˆ ∗2j .

κ ˆ1 < κ ˆ2

follows by denition, and this in turn implies

rst three inequalities in the rst line hold for

j = 1.

κ ˆ3 < κ ˆ2,

namely the

Arguing inductively, assuming that

the rst three inequalities of the rst line hold, we deduce that

κ ˆ 2j+1 < κ ˆ 2j+3 < κ ˆ 2j+2 ,

these

κ ˆ 2j > κ ˆ 2j+2 > κ ˆ 2j+1 .

Hence,

which completes the proof of the rst three inequalities in the rst

κ ˆ 2j+1 < κ ˆ 2j+2 is a direct consequence of the inequality κ ˆ 2j+1 < κ ˆ 2j+2 < κ ˆ 2j holds due to the inequality κ ˆ 2j−1 < κ ˆ 2j+1 . The corresponding ∗ inequalities for the κi are established following the same argument mutatis mutandis. We conclude that the subsequences of (ˆ κi )i∈N corresponding to even and odd indices are monotone and bounded, and thus convergent. In particular, the subsequence (ˆ κ2i−1 )i∈N is increasing and therefore upper bounded by its limit κ ˆ , which satises the following identity:  

line. Next the inequality

κ ˆ 2j ,

(11)

while

 κ ˆ = cmax  1 +

Similarly, the sequence

amax  amax + mmin + cmax 1 +

(ˆ κ∗2i−1 )i∈N

amax amax + mmin + κ ˆ

  .

is increasing and thus upper bounded by its limit

κ ˆ∗,

which satises the identity

 (12)

 κ ˆ ∗ = amax  1 +

 c max cmax + mmin + amax 1 +

From (11) and (12) one easily sees that bounds of the statement for even

i

κ ˆ

and

κ ˆ∗

cmax cmax + mmin + κ ˆ∗

equal

χ1

and

follow by the denition of

χ∗1

κ ˆi

respectively. The lower

and

As a consequence of Lemma 13, we obtain the following result.

  .

κ ˆ ∗i .



BOUNDS FOR INVERSES OF DIAGONALLY DOMINANT TRIDIAGONAL MATRICES

We have λi ≤ κi ≤ Λi (1 ≤ i ≤ imax ) and λ∗i ≤ κ∗i ≤ Λ∗i (imin < i ≤ n),

Corollary 14.

where

λi := |ci |

λ∗i := |ai |

Proof.

i χpar(i+1) 2 i χpar(i+1) 2

i |ai+1 | + mi+1 + |ai+1 | + mi+1 +

|ci−1 | + mi−1 +

(2−i )χpar(i+1) 2 , Λi := |ci | (2−i )χpar(i+1) |ai+1 | + mi+1 + 2 (2−i−1 )χ∗par(i−1) i−1 |ci−1 | + mi−1 + 2 Λ∗i := |ai | (2−i−1 )χ∗par(i−1) |ci−1 | + mi−1 + 2

i |ai+1 | + mi+1 +

,

i−1 χ∗par(i−1) 2 i−1 χ∗par(i−1) 2

i−1 |ci−1 | + mi−1 +

,

.

According to (9) and (10),

κi = |ci | κ∗i = |ai | It follows that functions of

κi

1

(1 ≤ i ≤ imax ),

1+

(1−i )|ai+1 | i |ai+1 |+mi+1 +κi+1

1+

(1−i )|ci−1 | i−1 |ci−1 |+mi−1 +κ∗i−1

1

(imin < i ≤ n).

κ∗i are decreasing functions of κi+1 and κ∗i−1 for i = 2, and increasing ∗ and κi−1 for i = 0. Combining this remark with Lemma 13, the lemma 

and

κi+1

readily follows. 3.2.2.

9

Bounds for the entries of A−1 .

On the basis of Corollaries 12 and 14, we can now

unify the arguments of the previous section.

∆(A) = ∅, and as for 1 ≤ i ≤ n. As

in Corollary 14 for

Dene

∆(A) 6= ∅.

Observe that, if

a consequence, the formulas for

∆(A) = ∅. 12 if ∆(A) = ∅,

λi , Λi , λ∗i , Λ∗i

λi , Λi , λ∗i , Λ∗i

as in Corollary 12 for

∆(A) = ∅,

then

i = 0

of Corollary 14 specialize

to those of Corollary 12 in the case Combining (7) with Corollary

or (9) with Corollary 14 if

∆(A) 6= ∅,

we

obtain (13) for

|ci |

i |ai+1 | + mi+1 + κi+1 i |ai+1 | + mi+1 + κi+1 ≤ κi ≤ |ci | |ai+1 | + mi+1 + Λi+1 |ai+1 | + mi+1 + λi+1

1 ≤ i ≤ imax . Similarly, combining (8) ∆(A) 6= ∅, it follows that

with Corollary 12 for

∆(A) = ∅,

or (10) with

Corollary 14 for (14) for

|ai |

imin < i ≤ n.

i−1 |ci−1 | + mi−1 + κ∗i−1 i−1 |ci−1 | + mi−1 + κ∗i−1 ∗ ≤ κ ≤ |a | i i |ci−1 | + mi−1 + Λ∗i−1 |ci−1 | + mi−1 + λ∗i−1 From these bounds we obtain our nal bounds on the

Proposition 15.

κi

and

With notations and assumptions as above, we have

σi ≤ κi ≤ Σi (1 ≤ i ≤ imax ),

σi∗ ≤ κ∗i ≤ Σ∗i (imin < i ≤ n),

where σi :=

Σi := σi∗ :=

imax X+1

k Y

k=i+1

j=i+1

(k−1 |ak | + mk )

imax X+1

k Y

k=i+1

j=i+1

(k−1 |ak | + mk )

i−1 X

(k |ck | + mk )

k=imin

Σ∗i :=

i−1 X k=imin

i−1 Y j=k

(k |ck | + mk )

i−1 Y j=k

|cj−1 | , |aj | + mj + Λj |cj−1 | , |aj | + mj + λj

|aj+1 | , mj + |cj | + Λ∗j |aj+1 | . mj + |cj | + λ∗j

κ∗i .

10

E. DRATMAN AND G. MATERA

Proof.

We rst prove the assertions concerning

bound for

κimax

κi

for

1 ≤ i ≤ imax .

The upper and lower

are direct consequences of (13) and (14) and the fact that

i + 1 ≤ imax

Next, assume that the assertions for

σi := |ci |

i |ai+1 | + mi+1 + σi+1 , |ai+1 | + mi+1 + Λi+1

Σi := |ci |

i |ai+1 | + mi+1 + Σi+1 , |ai+1 | + mi+1 + Λi+1

together with (13) and (14), imply the upper and lower bound for The bounds for

κ∗i (imin < i ≤ n)

κimax +1 = 0.

hold. Then the identities

κi . 

follow by a similar argument.

Now we proceed to bound the entries of the inverse of the tridiagonal matrix this purpose, we rst bound the quantities

αi

and

βi

A of

(1). For

of (2). As an immediate consequence

of (4) and Proposition 15 we have the following result. Corollary 16.

Let A ∈ Rn×n be a matrix as in

1 1 ≤ αi ≤ , |ai | + mi + Σi |ai | + mi + σi

. For 1 ≤ i ≤ n, we have

(1)

1 1 ≤ βi ≤ , ∗ |ci | + mi + Σi |ci | + mi + σi∗

where σi := 0 and Σi := 0 for imax < i ≤ n, and σi∗ := 0 and Σ∗i := 0 for 1 ≤ i ≤ imin . A second corollary of Proposition 15 yields bounds for the diagonal entries of Corollary 17.

A−1 .

With notations as in Corollary 16, for 1 ≤ i ≤ n we have 1 1 ≤ cii ≤ , mi + Σi + Σ∗i mi + σi + σi∗

Proof.



The bounds follows from Corollary 7 and Proposition 15.

Finally, by Lemma 6 we obtain bounds for the remaining entries of Corollary 18.

A−1 .

With notations as in Corollary 16, if 1 ≤ j < i ≤ n, then

        ∗ −1  (m + Σ + Σ∗ )−1  (m + Σ + Σ ) j j j i i i−j i , i amin max i−1 ≤ Q  Q   ∗   (|ak | + mk + Σk )   (|ck | + mk + Σk ) k=j

k=j+1

        ∗ −1  (m + σ + σ ∗ )−1  (m + σ + σ ) j j j i i i−j i |cij | ≤ amax min i−1 , i .   Q Q   ∗   (|ak | + mk + σk )   (|ck | + mk + σk ) k=j

k=j+1

On the other hand, for 1 ≤ i < j ≤ n we have     

    

(mj + Σj + Σ∗j )−1 (mi + Σi + Σ∗i )−1 , ≤ j j−1   Q Q   ∗  (|ak | + mk + Σk ) (|ck | + mk + Σk )    k=i+1 k=i        ∗ )−1   (m + σ + σ ∗ )−1  (m + σ + σ j j j i i j−i i |cij | ≤ cmax min , . j j−1   Q Q   ∗  (|ak | + mk + σk ) (|ck | + mk + σk )   

cj−i min max

k=i+1

Proof.

k=i

The proof readily follows from Lemma 6 and Corollaries 16 and 17.



BOUNDS FOR INVERSES OF DIAGONALLY DOMINANT TRIDIAGONAL MATRICES

11

4. Examples In this section we specialize the bounds of Corollaries 17 and 18 to two classical families of examples of tridiagonal matrices arising from the application of nite dierence methods to one-dimensional boundary-value problems.

In both cases, we express the bounds in

terms of the parameters dening the family under consideration and show that our bounds compare favorably with existing results of the literature. The boundary-value problems are semilinear of parabolic type. Discretizations of semilinear second-order boundary-value problems by nite dierences methods frequently lead to consider large linear systems dened by tridiagonal matrices satisfying assumptions (H1), (H2), (H3). In this section we analyze the tridiagonal matrices arising in such discretizations with the two most common boundary conditions, namely Dirichlet and Neumann ones. Our aim is to obtain asymptotically sharp bounds on the coecients of the inverses of the corresponding matrices. For this purpose, we shall give to these cases a dierent treatment, which is nevertheless achieved by either applying directly the bounds of Corollaries 17 and 18 or the underlying approach. Consider the following general second-order two-point boundary-value problem with nonlinear boundary conditions:

u00 (x) = f (x, u(x))

(15)

in

(a, b),

 g u(a), u0 (a), u(b), u0 (b) = 0,

f : [a, b] × R → R≥0 and g : R4 → R2 are functions numbers with a < b. The usual numerical approach to the

C2

and

a, b

where

of class

real

solutions of (15) consists

of considering a secondorder nitedierence discretization, with a uniform mesh. convenience of presentation, we shall assume that the interval under consideration is

are For

[a, b] :=

[0, 1], and let x1 , . . . , xn dene a uniform partition of [0, 1]. Denote h := (n − 1)−1 and let uk := u(xk ) be the value of a solution u of (15) at x1 , . . . , xn . The discrete version of (15) is the following nonlinear system:

   0 = 1 u k+1 − 2uk + uk−1 − f (xk , uk ), 2 h   0 = g u1 , ∆h u(x1 ), un , ∆h u(xn ) ,

(16)

where

∆h u(x1 ), ∆h u(xn )

are the discrete approximations of

(2 ≤ k ≤ n − 1)

u0 (a)

u0 (b) under function g depends

eration. We remark that in the case of Dirichlet conditions the

∆h u(x1 )

on

nor on

∆h u(xn ).

and

considneither

In the case of Neumann conditions the discretization typi-

cally adopted is that of centered dierences

∆h u(x) := (u(x + h) − u(x − h))/2h (see,

e.g.,

[AMR95] or [LeV07]). As it is customary, we shall assume that

f

is positive and

f

and its partial derivatives

are globally bounded (see, e.g., [Kac02]). Considering a linearization of (16) we are led to a linear system of the following form:

( (17)

0 = uk+1 − 2uk + uk−1 − h2 fk uk , (1 ≤ k ≤ n)  0 = g u1 , ∆h u(x1 ), un , ∆h u(xn ) .

Our hypotheses imply that there exist for

1 ≤ k ≤ n.

When the function

g

fmin > 0

and

fmax > 0

such that

fmin ≤ fk ≤ fmax

represents Dirichlet or Neumann conditions we obtain

the families of tridiagonal matrices which we now discuss.

4.1. Neumann boundary conditions. We start with a Neumann boundary condition, that is, we consider the following family of problems of type (15):

u00 (x) = f (x, u(x))

in

(0, 1),

u0 (0) = α,

u0 (1) = β.

12

E. DRATMAN AND G. MATERA

In this case, the nite-dierence discretization (17) is the following:

 0 = uk+1 − 2uk + uk−1 − h2 fk uk ,    0 = u1 − u0 − 2hα    0 = un − un+1 − 2hβ, where

u0 , un+1

represent the values of a solution

respectively. The extra unknowns last two equations.

u0 , un+1

u

(2 ≤ k ≤ n − 1)

at

x0 := −h

and

xn+1 := 1 + h

can be eliminated by taking into account the

As a consequence, we have to solve a linear system whose dening

matrix has the following form:

1 + f1 h2 −1  −1 2 + f2 h2 −1   .. .. .. A :=  . . .   −1 2 + fn−1 h2 −1 −1 1 + fn h2 

(18)

where

fi > 0

for

1 ≤ i ≤ n.

    ,  

Denote

fmin := min{fi : 1 ≤ i ≤ n},

fmax := max{fi : 1 ≤ i ≤ n}.

In the notations of Sections 2 and 3.2, we have

mi = fi h2 (1 ≤ i ≤ n), ai = −1 (2 ≤ i ≤ n), ci = −1 (1 ≤ i ≤ n − 1), mmin = fmin h2 , mmax = fmax h2 , amin = amax = cmin = cmax = 1. δi := sgn(ci ai+1 ) = 1 for 1 ≤ i ≤ n − 1, and thus ∆(A) := {i ∈ {1, . . . , n − 1} :

Observe that

δi < 0}

is the empty set. As a consequence, Lemma 11 yields the following estimates:

√ √ √ mmax + 4 − mmax mmax + 4 − mmax √ ∗ ∗ , κi ≤ κ := mmax . 2 2 λi , λ∗i , Λi , Λ∗i as in Corollary 12, it follows that

√ κi ≤ κ := mmax Further, dening



λi ≥ λmin :=

mmin mmax + κ , Λi ≤ Λmax := , 1 + mmin 1 + mmax + κ

mmin mmax + κ∗ , Λ∗i ≤ Λ∗max := . 1 + mmin 1 + mmax + κ∗ √ κ and κ∗ we readily conclude that κ = κ∗ ≤ mmax = O(h).

λ∗i ≥ λ∗min := By the denition of

mmax ,

follows that

λmin = o(h2 ),

λ∗min = o(h2 ),

Λmax = Λ∗max ≤



mmax = O(h).

Now we apply Proposition 15 to obtain

n X

σi ≥

mmin (1+mmax +Λmax )k−i

i−n

max +Λmax ) ≥ mmin 1−(1+m mmax +Λmax

mmin ≥ (n − i) (1+mmax , +Λmax )n−i

k=i+1

Σi ≤

n X

mmax (1+mmin +λmin )k−i

≤(n − i)mmax .

k=i+1 Similarly, we see that

mmin σi∗ ≥ (i − 1) (1+mmax +Λ∗

i−1 max )

,

Σ∗i ≤ (i − 1)mmax .

Combining these bounds with Corollaries 17 and 18 we obtain the following result.

It

BOUNDS FOR INVERSES OF DIAGONALLY DOMINANT TRIDIAGONAL MATRICES

13

Let A ∈ Rn×n be the matrix of (18) and let A−1 := (cij )1≤i,j≤n be its inverse matrix. Then the following bound holds for 1 ≤ i, j ≤ n: Theorem 19.

√ (1 + mmax + mmax )n−1 1 ≤ |cij | ≤ . mmax n(1 + n mmax )|i−j| mmin n(1 + mmin )|i−j|

Proof. Σi

The lower bound for

and

Σ∗i

cii

is a direct consequence of Corollary 17 and the bounds for

above. On the other hand, by the previous bounds for



n−i + mi + σi + σi∗ ≥ mmin 1 + (1+√mmax +mmax )n−i mmin n ≥ . √ (1 + mmax + mmax )n−1

(19)

cii

The upper bound for For

1 ≤ j < i ≤ n,

σi

and

σi∗

we obtain

i−1 √ (1+ mmax +mmax )i−1



now follows from Corollary 17.

according to Corollary 18 and (19), we have

√ (1 + mmax + mmax )n−1 (mmax n)−1 , ≤ |c | ≤ ij (1 + min{i − 1, n − j} mmax )i−j mmin n(1 + mmin )i−j which readily imply the corresponding assertions in the statement of the theorem. Finally, if

1 ≤ i < j ≤ n,

then

√ (1 + mmax + mmax )n−1 (mmax n)−1 ≤ |c | ≤ . ij (1 + min{j − 1, n − i} mmax )j−i mmin n(1 + mmin )j−i 

This nishes the proof of the theorem. We conclude that

cij = Ω(n)

for

1 ≤ i, j ≤ n.

bounds are asymptotically optimal.

This shows that our upper and lower

Further, we signicantly improve the best bounds

available in the literature, where lower and upper bounds do not have the same asymptotic behavior. More precisely, we compare our bounds with those of [LHLL10], which are the sharpest ones in the literature up to the authors knowledge. According to [LHLL10, Theorem 3.4], we have

Qi

Qi 1+

˜k k=2 µ 2 h f1 + µ ¯2

≤|ci,1 | ≤

Qi

˜k k=j+1 µ

≤|ci,j | ≤ 2 + h2 fj + τ¯j−1 + µ ¯j+1 Qj−1 ˜k k=i τ ≤|ci,j | ≤ 2 2 + h fj + τ¯j−1 + µ ¯j+1 Qn−1 ˜k k=i τ ≤|ci,n | ≤ 2 1 + h fn + τ¯n−1

¯k k=2 µ , 2 1 + h f1 − µ ¯2 Qi ¯k k=j+1 µ 2 + h2 fj − τ¯j−1 − Qj−1 ¯k k=i τ 2 2 + h fj − τ¯j−1 − Qn−1 ¯k k=i τ , 2 1 + h fn − τ¯n−1

µ ¯j+1 µ ¯j+1

for

2 ≤ j < i,

for max{2, i}

≤ j < n,

where

  2f 2f i+1 i+1 2+2h2 (f2+h  6+2h2 (f 2+h for 2 ≤ i ≤ n−2 , for 2 ≤ i ≤ n−2, 4 4 i +fi+1 )+h fi fi+1 i +fi+1 )+h fi fi+1 µ ¯i := µ ˜ := i 1+h2 fi+1 1+h2 fi+1   for i ≥ n − 1, for i ≥ n − 1, 1+h2 (fi +2fi+1 )+h4fi fi+1 2(n−i)+1+h2 (fi +2fi+1)+h4fi fi+1

τ¯i :=

 1+h2 fi−1  1+h2 (f +2f )+h4 f f i



i−1

2+h2 f

i i−1

i−1

2+2h2 (fi +fi−1 )+h4 fi fi−1

for i = 1, 2, for 3 ≤ i < n,

We have the following result.

τ˜i :=

 2f i−1  2i−1+h2 (f1+h +2f )+h4 f f i



i−1

2+h2 f

i i−1

i−1

6+2h2 (fi +fi−1 )+h4 fi fi−1

for i = 1, 2, for 3 ≤ i < n.

14

E. DRATMAN AND G. MATERA

Lemma 20.

For any n ≥ max{3, fmax }, the following bounds hold:

1 − 6fmax h2 ≤ µ ¯i , τ¯i−1 ≤ 1 (2 ≤ i ≤ n), 2 + h2 fj − τ¯j−1 − µ ¯j+1

1 fmax 2 1 fmax 2 − h ≤µ ˜i , τ˜i ≤ + h (2 ≤ i < n), 3 9 3 3 ≤ 17fmax h2 (2 ≤ i < n),

1 + h2 f1 − µ ¯2 ≤ 5fmax h2 , 1 + h2 fn − µ ¯n−1 ≤ 5fmax h2 .

Proof.

To prove the bounds for

1−µ ¯i =

µ ¯i ,

suppose rst that

2 ≤ i ≤ n − 2.

We have

h2 (2fi + fi+1 + h2 fi fi+1 ) ≤ 4fmax h2 . 2 + 2h2 (fi + fi+1 ) + h4 fi fi+1

0 ≤ 1−µ ¯n−1 ≤ 6fmax h2 and 0 ≤ 1 − µ ¯n ≤ fmax h2 . shows our claim. The bounds for τ ¯i follows by the same argument mutatis mutandis. Next, concerning the bounds for µ ˜i and τ˜i , for 2 ≤ i ≤ n − 2 we see that

A similar argument shows that

− Similarly,

This

fmax 2 1 fmax 2 h2 (2fi − fi+1 + h2 fi fi+1 ) ≤ h ≤ −µ ˜i = h . 2 4 18 3 18 + 6h (fi + fi+1 ) + 3h fi fi+1 6

2 − fmax 9 h ≤

1 3

−µ ˜n−1 ≤

2fmax 2 9 h , which implies our claim. The bounds for

τ˜i

follows with the same arguments. Finally, we consider the remaining bounds. For

4 ≤ j ≤ n − 3,

by a direct calculation

we easily see that

2 + h2 fj − τ¯j−1 − µ ¯j+1 ≤

(16 + 44h + 34h2 + 10h3 + h4 )fmax h2 ≤ 12fmax h2 . 4 

The remaining bounds follow by similar calculations. Now we are able to establish the asymptotic behavior of the previous bounds for the Corollary 21.

cij .

For n ≥ 6fmax and 2 ≤ j < i, we have

Qi

¯k k=j+1 µ

2 + h2 fj − τ¯j−1 − µ ¯j+1

1 − 6fmax h , ≥ 17fmax h2

Qi

˜k k=j+1 µ

2 + h2 fj + τ¯j−1 + µ ¯j+1



1 . 3i−j

On the other hand, if n ≥ 6fmax and max{2, i} ≤ j < n, then Qj−1

¯k k=i τ

2+

Proof.

h2 fj

− τ¯j−1 − µ ¯j+1

1 − 6fmax h ≥ , 17fmax h2

Qj−1

˜k k=i τ

2+

h2 fj

+ τ¯j−1 + µ ¯j+1



1 3j−i

.

According to Lemma 20, we have

i Y

µ ¯k ≥ (1 − 6fmax h2 )i−j ≥ 1 − 6(i − j)fmax h2 ≥ 1 − 6fmax h.

k=j+1 A similar argument proves that

i Y

Qj−1

¯k k=i τ

≥ 1 − 6fmax h.

On the other hand,

µ ˜k ≤ 3i−j (1 + fmax h2 )j−i ≤ 3i−j efmax h ≤ 2 · 3i−j .

k=j+1

Qj−1

˜k k=i τ

≤ 2 · 3j−i .

Then the corollary follows by Lemma 20.



It follows that the upper bounds deduced from [LHLL10, Theorem 3.4] are of order

Ω(n2 ),

Similarly, we see that

while the lower bounds are of order

Ω(3−|i−j| ).

In other words, there is an exponential gap

in the asymptotic behavior of lower and upper bounds for the matrix

A

of (18).

BOUNDS FOR INVERSES OF DIAGONALLY DOMINANT TRIDIAGONAL MATRICES

15

4.2. Dirichlet boundary conditions. Next we consider the following family of problems of type (15) with a Dirichlet boundary condition:

u00 (x) = f (x, u(x))

(20)

We consider a uniform mesh

in

(0, 1),

x0 , . . . , xn+1

of

u(0) = α,

[0, 1]

u(1) = β.

and denote

h := 1/(n − 1).

Therefore,

the nite-dierence discretization (17) has the following form:

 0 = uk+1 − 2uk + uk−1 − h2 fk uk ,    0 = u0 − α    0 = un+1 − β, where

u0 , un+1

represent the values of a solution

u0 ,

respectively. Disregarding the extra unknowns

(1 ≤ k ≤ n)

u at x0 := −h and xn+1 := 1 + h un+1 , which can be eliminated by the

last two equations, we have to solve a linear system whose dening matrix has the form

   A :=  

(21)

where

fi > 0

for

1 ≤ i ≤ n.

2 + f1 h2 −1 −1



..

.

..

.

..

.

..

.

−1 −1 2 + fn h2

  , 

As before, we denote

fmin := min{fi : 1 ≤ i ≤ n},

fmax := max{fi : 1 ≤ i ≤ n}.

Further, with the notations of Sections 2 and 3.2, we have

m1 = 1 + f1 h2 ,

mi = fi h2 (2 ≤ i ≤ n − 1),

ai = −1 (2 ≤ i ≤ n), Since

mn = 1 + fn h2 ,

ci = −1 (1 ≤ i ≤ n − 1).

δi := sgn(ci ai+1 ) = 1 for 1 ≤ i ≤ n − 1, the set ∆(A) := {i ∈ {1, . . . , n − 1} : δi < 0} i = 0 for 1 ≤ i ≤ n, by the denition of κi , κ∗i we obtain the

is empty in this case. As following estimates:

κn = κn = κn = 0, 1 + fmax h2 1 + fmin h2 ≤ κ ≤ κ := =: κ , n−1 n−1 n−1 2 + fmin h2 2 + fmax h2 fmin h2 + κi+1 fmax h2 + κi+1 =: κ ≤ κ ≤ κ := i i i 1 + fmin h2 + κi+1 1 + fmax h2 + κi+1

(1 ≤ i ≤ n − 2),

κ∗1 = κ∗1 = κ∗1 = 0, 1 + fmax h2 1 + fmin h2 ∗ ∗ ∗ ≤ κ ≤ κ := =: κ , 2 2 2 2 + fmin h2 2 + fmax h2 fmax h2 + κ∗i−1 fmin h2 + κ∗i−1 ∗ ∗ ∗ =: κ ≤ κ ≤ κ := i i i 1 + fmin h2 + κ∗i−1 1 + fmax h2 + κ∗i−1 Observe that

κi = κ∗n+1−i

and

κi = κ∗n+1−i

for

1 ≤ i ≤ n.

(3 ≤ i ≤ n).

Applying Corollary 12 in this

case does not give asymptotically optimal bounds. For this reason, in the next result we obtain an improved version of Corollary 12. Lemma 22.

Dene λ∗1 := 0, Λ∗1 := 0, and λ∗i :=

1 (i − 1)2 fmin h2 + , i (3 + fmin ) i

Λ∗i :=

1 (i − 1)2 fmax h2 + i 2i

16

E. DRATMAN AND G. MATERA

for 2 ≤ i ≤ n. Further, let λi := λ∗n+1−i and Λi := Λ∗n+1−i for 1 ≤ i ≤ n. Then the following bounds hold for 1 ≤ i ≤ n: λ∗i ≤ κ∗i ≤ Λ∗i .

λi ≤ κi ≤ Λi ,

Proof.

κ∗i

It suces to prove the assertions concerning

for

2 ≤ i ≤ n.

For this purpose, we

claim that 2

κ∗i for

2≤i≤n i = 2, it

For



2

1 + (i − 1)i fmin2 h

2

i + (i − 1)i(i + 1) fmin6 h

,

κ∗i

h 1 + (i − 1)i fmax 2



2

h i + (i − 1)i fmax 2

κ∗2 and κ∗2 equal i − 1 ≥ 2. We have

is easy to check that both

Now suppose that the claim holds for

κ∗i =

the corresponding expressions.

fmin h2 + κ∗i−1 1 + fmin h2 + κ∗i−1 2



1 + (i − 1)i fmin2 h + (i − 2)(i − 1)i

2 h4 fmin 6

2

2

i + (i − 1)i fmin2 h + (i − 2)(i − 1)i fmin6 h + (i − 2)(i − 1)i

2 h4 fmin 6

2



1 + (i − 1)i fmin2 h

2

i + (i − 1)i(i + 1) fmin6 h

.

On the other hand, 2

κ∗i

2

max h /2 h fmax h2 + 1+(i−2)(i−1)f fmax h2 + κ∗i−1 1 + (i − 1)i fmax i−1 2 ≤ = . = 2 fmax h2 max h /2 1 + fmax h2 + κ∗i−1 i + (i − 1)i 1 + fmax h2 + 1+(i−2)(i−1)f 2 i−1

This proves the claim. Now we prove the bounds in the statement of the lemma. For

λ∗2 ≤ Next, for

i = 2,

3 + fmin h2 1 + fmax h2 2 + fmax h2 1 + fmin h2 ∗ ∗ = κ , κ = ≤ = Λ∗2 . ≤ 2 2 6 + fmin 2 + fmin h2 2 + fmax h2 4

i ≥ 3,

according to the claim we have 2

κ∗i



1 + (i − 1)i fmin2 h

2

i + (i − 1)i(i + 1) fmin6 h

≥ λ∗i ,

where the last inequality can be veried by a direct calculation. On the other hand, by the claim we see that

κ∗i ≤

2 + (i − 1)ifmax h2 ≤ Λ∗i . 2i + (i − 1)ifmax h2 

This nishes the proof of the lemma. Next we bound the quantities

1 + fmin h2 + for

1 ≤ i ≤ n.

αi

and

βi

1)2 fmin h2

1 (i − + i (3 + fmin )i

of (2). Combining (4) and Lemma 22, we obtain

≤ βi−1 ≤ 1 + fmax h2 +

1 (i − 1)2 fmax h2 + i 2i

We readily conclude that

   1   i+1 fmin i+1 1 −1 2 1+i h ≤ βi ≤ 1 + ifmax + h2 i 9 + 3fmin i 2 3 + fmax for

1 ≤ i ≤ n.

αi (1 ≤ i ≤ n) are obtained κi (1 ≤ i ≤ n). Summarizing, we have

On the other hand, corresponding bounds for

by considering (4) and the bounds of Lemma 22 for the following result.

BOUNDS FOR INVERSES OF DIAGONALLY DOMINANT TRIDIAGONAL MATRICES

Corollary 23.

17

The following bounds hold for 1 ≤ i ≤ n: −1 n−i+1 n−i+1 1 + (n − i + 1)fmax h2 ≤ αi ≤ , n−i+2 n−i+2 −1 i i 1 + ifmax h2 ≤ βi ≤ . i+1 i+1

Now we are ready to bound the entries

cij

of the inverse matrix of

A.

We start with the

diagonal entries. Corollary 24.

For 1 ≤ i ≤ n, we have i(n − i + 1) i(n − i + 1) (1 + fmax h)−1 ≤ |cii | ≤ . n+1 n+1

Proof.

Fix

i

with

1 ≤ i ≤ n.

By Corollary 7 it follows that

|cii |−1 = mi + κi + κ∗i .

Taking

into account the bounds in Lemma 22, we obtain

  (i − 1)2 (n − i)2 n+1 + fmax h2 + +1 i(n − i + 1) 2i 2(n + 1 − i) n+1 n+1 ≤ + fmax h ≤ (1 + fmax h). i(n − i + 1) i(n − i + 1)

|cii |−1 ≤

On the other hand,

|cii |

−1

  n+1 (i − 1)2 (n − i)2 2 ≥ + fmin h + +1 i(n − i + 1) (3 + fmin )i (3 + fmin )(n + 1 − i)   n+1 fmin n+1 fmin ≥ + h≥ 1+ h . i(n − i + 1) 9 + 3fmin i(n − i + 1) 18 + 6fmin 

This implies the corollary. Finally, we obtain asymptotically optimal bounds for the remaining Theorem 25.

cij .

If 1 ≤ j < i ≤ n, then n+1 n+1 ≤ |cij |−1 ≤ e3fmax . j(n + 1 − i) j(n + 1 − i)

On the other hand, for 1 ≤ i < j ≤ n we have n+1 n+1 ≤ |cij |−1 ≤ e3fmax . i(n + 1 − j) i(n + 1 − j)

Proof.

According to Lemma 6, if

1 ≤ j < i ≤ n,

then

|cij | = βj · · · βi−1 |cii |.

As a conse-

quence, from Corollaries 23 and 24 it follows that

i−1

−1

|cij |

Y k+1  n+1 ≤ (1 + fmax h) 1 + kfmax h2 i(n + 1 − i) k k=j

i−1

X n+1 ≤ (1 + fmax h) Πk (j, . . . , i − 1)(fmax h2 )k j(n + 1 − i) k=0 k i−1 X Π1 (j, . . . , i − 1)fmax h2 n+1 n+1 ≤ (1 + fmax h) ≤ e3fmax . j(n + 1 − i) k! j(n + 1 − i) k=0

Here

Πk ∈ R[X1 , . . . , Xi−j ]

i − j.

On the other hand,

denotes the

|cij |−1 ≥

k th

elementary symmetric polynomial for

i−1 Y n+1 k+1 n+1 = . i(n + 1 − i) k j(n + 1 − i) k=j

0≤k≤

18

E. DRATMAN AND G. MATERA

This proves the rst assertion. The assertion for

mutatis mutandis.

1≤i
follows arguing as above



The lower and upper bounds for the entries of the inverse of the matrix

A

of (21) of

Theorem 25 are asymptotically optimal. Now we compare our result with what is obtained in [LHLL10]. Applying [LHLL10, Theorem 3.4] to

Qi

˜k k=j+1 µ

(22)

h2 fj

≤ |ci,j | ≤

A we Qi

have

¯k k=j+1 µ

h2 fj

+ τ¯j−1 + µ ¯j+1 2+ − τ¯j−1 − µ ¯j+1 Qj−1 Qj−1 ˜k ¯k k=i τ k=i τ ≤ |ci,j | ≤ 2 + h2 fj + τ¯j−1 + µ ¯j+1 2 + h2 fj − τ¯j−1 − µ ¯j+1 2+

(23)

τ¯0 := 0, µ ¯n+1 := 0,  2+h2 fi+1  2 4

for

j < i,

for

i ≤ j,

where

for 2 ≤ i ≤ n − 2,

2+2h (fi +fi+1 )+h fi fi+1 µ ¯i := 2+h2 fi+1  for i = n−1, n, 4−n+i+2h2 (fi +fi+1 )+h4 fi fi+1

  

2+h2 fi+1 for 2 ≤ i ≤ n − 2, 4 i +fi+1 )+h fi fi+1 µ ˜i := 2+h2 fi+1   4+n−i+2h2 (fi +fi+1 )+h4 fi fi+1 for i = n − 1, n, with

6+2h2 (f

f0 := 0

and

fn+1 := 0.

 

2+h2 fi−1 for i = 1, 2, 5−i+2h2 (fi−1 +fi )+h4 fi−1 fi τ¯i := 2 2+h fi−1  for 3 ≤ i < n, 2+2h2 (fi +fi−1 )+h4 fi fi−1

 

2+h2 fi−1 for i = 1, 2, 3+i+2h2 (fi−1 +fi )+h4 fi−1 fi τ˜i := 2 2+h fi−1  for 3 ≤ i < n, 6+2h2 (fi +fi−1 )+h4 fi fi−1

The asymptotic behavior of all these quantities is established

in the following result. Lemma 26.

For any n ≥ max{3, fmax }, the following bounds hold:

1 fmax 2 1 fmax 2 − h ≤µ ˜i , τ˜i ≤ + h (2 ≤ i < n), 3 9 3 3 2 fmax 2 2 fmax 2 h ≤µ ˜n−1 , τ˜2 ≤ + h , 2 + h2 fi − τ¯i−1 − µ ¯i+1 ≤ 17fmax h2 (4 ≤ i ≤ n − 3), − 5 5 5 25 1 fmax 1 2 2 − h2 ≤µ ¯n , µ ˜n , τ¯1 , τ˜1 ≤ , − fmax h2 ≤ µ ¯n−1 , τ¯2 ≤ . 2 4 2 3 3 Proof. All the bounds concerning the µ¯i and µ˜i with 2 ≤ i ≤ n − 2, and the τ¯i and τ˜i with 3 ≤ i ≤ n − 1 follow by Lemma 20, since these quantities match the corresponding ones in 2 Lemma 20. In particular, the bounds for 2+h fi − τ ¯i−1 − µ ¯i+1 ≤ 17fmax h2 for 4 ≤ i ≤ n−3 1 − 6fmax h2 ≤ µ ¯i , τ¯i+1 ≤ 1 (2 ≤ i ≤ n − 2),

follow by Lemma 20. Concerning

µ ¯n ,

we have

This proves the claim for

τ˜1 .

h2 fn fmax 1 −µ ¯n = ≤ h2 . 2 2 4 + 2h fn 4 µ ¯n and µ ˜n = µ ¯n . A similar argument

shows the claim for

τ¯1

and

On the other hand,

which proves

2 h2 4fn−1 + fn + 2h2 fn−1 fn −µ ¯n−1 = ≤ fmax h2 , 3 3 3 + 2h2 (fn−1 + fn ) + h4 fn−1 fn our claims for µ ¯n−1 and τ¯2 . Finally,

fmax 2 2 h2 4fn−1 − fn + 2h2 fn−1 fn fmax 2 h ≤ −µ ˜n−1 = ≤ h . 2 4 25 5 5 5 + 2h (fn−1 + fn ) + h fn−1 fn 5 proves the claim for µ ˜n−1 and τ˜2 and nishes the proof of the lemma. −

This



Now we are able to establish the asymptotic behavior of the lower and upper bounds for

|ci,j |

of (22) and (23). As the casuistics are more involved than in Corollary 21, for ease

of presentation we do not establish on the asymptotic behavior of all the bounds obtained from [LHLL10, Theorem 3.4]. The reader can easily deduce those not reported here from the bounds in Lemma 26.

BOUNDS FOR INVERSES OF DIAGONALLY DOMINANT TRIDIAGONAL MATRICES

Corollary 27.

19

For n ≥ 6fmax and 4 ≤ j < i ≤ n − 2, we have

Qi

Qi ˜k 1 − 6fmax h 1 k=j+1 µ ≥ ≤ i−j . , 2 2 2 2 + h fj − τ¯j−1 − µ ¯j+1 17fmax h 2 + h fj + τ¯j−1 + µ ¯j+1 3 On the other hand, if n ≥ 6fmax and 4 ≤ i ≤ j ≤ n − 3, then Qj−1 Qj−1 ¯k ˜k 1 − 6fmax h 1 k=i τ k=i τ ≥ ≤ j−i . , 2 2 2 2 + h fj − τ¯j−1 − µ ¯j+1 17fmax h 2 + h fj + τ¯j−1 + µ ¯j+1 3 ¯k k=j+1 µ

Proof.

All the bounds are established as in the proof of Corollary 21.

We conclude that the upper bounds for

i, j



|cij | deduced from [LHLL10, Theorem 3.4], with Ω(n2 ), while the corresponding lower

as in the statement of Corollary 27, are of order

bounds are of order

Ω(3−|i−j| ), namely there is an exponential gap between lower and upper

bounds. We remark that asymptotically optimal bounds for similar matrices arising in two-point boundary-value problems with Dirichlet conditions were obtained in [MS86]. References

[AMR95] U. Ascher, R. Mattheij, and R. Russell, Numerical solution of boundary value problems for ordinary dierential equations, Classics in Applied Mathematics, vol. 13, SIAM, Philadelphia, 1995. [EMK06] M. El-Mikkawy and A. Karawia, Inversion of general tridiagonal matrices, Appl. Math. Lett. 19 (2006), no. 8, 712720. [Hig02] N. Higham, Accuracy and stability of numerical algorithms, 2nd ed. ed., SIAM, Philadelphia, PA, 2002. [HJ85] R. Horn and C. Johnson, Matrix analysis, Cambridge Univ. Press, Cambridge, 1985. [HM97] Y. Huang and W. McColl, Analytical inversion of general tridiagonal matrices, J. Phys. A 30 (1997), no. 22, 79197933. [Ike79] Y. Ikebe, On inverses of Hessenberg matrices, Linear Algebra Appl. 24 (1979), 9397. [Kac02] B. Kacewicz, Complexity of nonlinear two-point boundary-value problems, J. Complexity 18 (2002), 702738. [LeV07] R. LeVeque, Finite dierence methods for ordinary and partial dierential equations. Steady-state and time-dependent problems, SIAM, Philadelphia, PA, 2007. [LHLL10] H.-B. Li, T.-Z. Huang, X.-P. Liu, and H. Li, On the inverses of general tridiagonal matrices, Linear Algebra Appl. 433 (2010), no. 5, 965983. [MS86] R. Mattheij and M. Smooke, Estimates for the inverse of tridiagonal matrices arising in boundaryvalue problems, Linear Algebra Appl. 73 (1986), 3357. [Nab98] R. Nabben, Twoside bounds on the inverses of diagonally dominant tridiagonal matrices, Linear Algebra Appl. 287 (1998), no. 13, 289305. [PP01] R. Peluso and T. Politi, Some improvements for two-sided bounds on the inverse of diagonally dominant tridiagonal matrices, Linear Algebra Appl. 330 (2001), no. 13, 114. [SJ96] P. Shivakumar and C. Ji, Upper and lower bounds for inverse elements of nite and innite tridiagonal matrices, Linear Algebra Appl. 247 (1996), 297316. 1

Instituto de Ciencias, Universidad Nacional de General Sarmiento, J.M. Gutiérrez 1150 (B1613GSX) Los Polvorines, Buenos Aires, Argentina

E-mail address : [email protected] 2

National Council of Science and Technology (CONICET), Argentina

3

Instituto del Desarrollo Humano, Universidad Nacional de General Sarmiento, J.M. Gutiérrez 1150 (B1613GSX) Los Polvorines, Buenos Aires, Argentina

E-mail address : [email protected]

