Online Updating the Generalized Inverse of Centered Matrix

Qing Wang and Liang Zhang
School of Computer Science, Fudan University, Shanghai, 200433, China
Email address: {wangqing, lzhang}@fudan.edu.cn

Abstract

In this paper, we present exact updating formulae for the generalized inverse of a centered matrix when a row or column vector is inserted or deleted. The computational cost is O(mn) for an m × n matrix. Experimental results validate the accuracy and efficiency of the method.

Keywords: Generalized Inverse, Centered Matrix, Online Learning, Data Stream

1. Introduction

The generalized inverse of an arbitrary matrix, also called the Moore-Penrose inverse or pseudoinverse, is a generalization of the inverse of a full-rank square matrix (Israel and Greville (2003)). It has many applications in machine learning (Gutman and Xiao (2004)), computer vision (Ng et al. (2007)), data mining (Korn et al. (2000)), etc. For example, it allows one to solve least-squares systems even under rank deficiency, and the resulting solution has minimum norm, which is the desired property under regularization.

An online learning algorithm often needs to update the trained model when new observations arrive and/or old observations become obsolete. Online updating of the generalized inverse of the original data matrix when a row or column vector is inserted or deleted is given by the well-known Greville and Cline algorithms, respectively. The computational cost of one update is O(mn) for an m × n matrix. However, many machine learning algorithms need the generalized inverse of the centered data matrix rather than that of the original data matrix: for example, computing the generalized inverse of the Laplacian matrix (Gutman and Xiao (2004)) for graph-based learning, or computing the least-squares formulation of a class of generalized eigenvalue problems (Liu et al. (2009); Sun et al. (2009)), which includes LDA, CCA, OPLS, etc.

In this paper, we present exact updating formulae for the generalized inverse of a centered matrix when a row or column vector is inserted or deleted. The computational cost is also O(mn) for an m × n matrix. Experimental results show that the method achieves high accuracy at low time cost.

Notations: Let C^{m×n} denote the set of all m × n matrices over the field of complex numbers. The symbols A† and A∗ stand for the generalized inverse and the conjugate transpose of a matrix A ∈ C^{m×n}, respectively. I is the identity matrix and 1 is a vector of all ones. In block notation, [A, B] places blocks side by side and [A; B] stacks them vertically.

2. Updating for the Original Data Matrix

In this section, we briefly introduce the Greville and Cline algorithms for updating the generalized inverse of the original data matrix when a column vector is appended or the first column is deleted. Extension to insertion or deletion of an arbitrary column follows from the identity (AQ)† = Q∗A†, where Q is a unitary matrix; the row case reduces to the column case via A† = [(A∗)†]∗. The incremental computation of the generalized inverse is given by the well-known Greville algorithm (Israel and Greville (2003)), shown in Lemma 1 below.

Lemma 1 (Greville Algorithm). Let Â = [A, a] ∈ C^{m×n}, where A ∈ C^{m×(n−1)} and a ∈ C^{m×1}. Define c = (I − AA†)a. Then the generalized inverse of Â is given by

    Â† = [A† − A†ab∗; b∗],    (1)

where b∗ is defined as

    b∗ = c†                                      if c ≠ 0,
    b∗ = a∗A†∗A† / (1 + a∗A†∗A†a)                if c = 0.    (2)
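As an illustration, here is a minimal NumPy sketch of the Lemma 1 update (our code, not the paper's; the paper's experiments use MATLAB, and the 1e-12 tolerance used to decide c = 0 numerically is our assumption):

```python
import numpy as np

def greville_append(A_pinv, A, a):
    """Pseudoinverse of [A, a] from A† (Lemma 1); every product is O(mn)."""
    a = a.reshape(-1, 1)
    d = A_pinv @ a                                  # d = A† a
    c = a - A @ d                                   # c = (I - A A†) a
    if np.linalg.norm(c) > 1e-12:                   # first case of Eq. (2)
        b = c.conj().T / (c.conj().T @ c).item()    # b* = c† for a column vector
    else:                                           # second case of Eq. (2)
        b = d.conj().T @ A_pinv / (1.0 + (d.conj().T @ d).item())
    return np.vstack([A_pinv - d @ b, b])           # Eq. (1)
```

In the second case we use a∗A†∗ = (A†a)∗ = d∗, so the formula reduces to d∗A†/(1 + d∗d).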

The decremental computation of the generalized inverse is given by the Cline algorithm (Cline (1964)), shown in Lemma 2 below.

Lemma 2 (Cline Algorithm). Let Â = [a, A] ∈ C^{m×n}, where a ∈ C^{m×1} and A ∈ C^{m×(n−1)}, and let Â† = [d∗; G] ∈ C^{n×m}, where d ∈ C^{m×1} and G ∈ C^{(n−1)×m}. Define λ = 1 − d∗a. Then the generalized inverse of A is given by

    A† = G + (1/λ) Gad∗             if λ ≠ 0,
    A† = G − (1/(d∗d)) Gdd∗         if λ = 0.    (3)
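A corresponding NumPy sketch of the Lemma 2 downdate, under the same assumptions (our function name and tolerance):

```python
import numpy as np

def cline_delete_first(Ah_pinv, a):
    """Pseudoinverse of A from Â†, where Â = [a, A] (Lemma 2)."""
    a = a.reshape(-1, 1)
    d = Ah_pinv[0:1, :].conj().T                    # d* is the first row of Â†
    G = Ah_pinv[1:, :]                              # remaining rows of Â†
    lam = 1.0 - (d.conj().T @ a).item()             # λ = 1 - d*a
    if abs(lam) > 1e-12:                            # first case of Eq. (3)
        return G + (G @ a) @ d.conj().T / lam
    return G - (G @ d) @ d.conj().T / (d.conj().T @ d).item()
```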

3. Updating for the Centered Data Matrix

In this section, we present the updating formulae for the generalized inverse of the centered matrix when a column vector is appended or the first column is deleted. Extension to insertion or deletion of an arbitrary column, and to the row case, is analogous to the original data matrix.

3.1. Appending a new column

Let A ∈ C^{m×(n−1)} be the original data matrix, m ∈ C^{m×1} the column mean of A, X ∈ C^{m×(n−1)} the column-centered matrix of A, and X† the generalized inverse of X. During the updating process, the mean m, the centered data matrix X, and its generalized inverse X† are maintained and updated.

When a column vector a is appended, the mean m is first updated according to m̃ = m + (1/n)(a − m). Then the centered data matrix is updated: after the append of a, X is re-centered according to

    X̃ = [X − (1/n)(a − m)1∗, ((n−1)/n)(a − m)],    (4)

where 1 denotes the column vector of all ones of size n − 1.

Then X̃† is calculated according to Theorem 3.

Theorem 3. Let X, X̃, X†, a and m be defined as above. Define e = (I − XX†)(a − m). Then

    X̃† = [X† − X†(a − m)h∗ − (1/(n−1)) 1h∗; h∗],    (5)

where h is defined as

    h∗ = e†                                                              if e ≠ 0,
    h∗ = (n−1)(a − m)∗X†∗X† / (n + (n−1)(a − m)∗X†∗X†(a − m))            if e = 0.    (6)

The detailed proof of these updating formulae can be found in Appendix 6.3.
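To make the bookkeeping concrete, here is a minimal NumPy sketch of the Theorem 3 update, maintaining the triple (mean, X, X†) as in Section 3.1 (our code; the paper reports MATLAB experiments, and the 1e-12 tolerance for deciding e = 0 is our assumption):

```python
import numpy as np

def centered_append(X_pinv, X, mu, a, n):
    """Update (mean, centered matrix, its generalized inverse) when a column a
    of the original data arrives; n is the column count after the append."""
    a, mu = a.reshape(-1, 1), mu.reshape(-1, 1)
    d = a - mu                                       # a - m
    mu_new = mu + d / n                              # updated mean
    ones = np.ones((n - 1, 1))
    X_new = np.hstack([X - d @ ones.T / n, (n - 1) / n * d])       # Eq. (4)
    e = d - X @ (X_pinv @ d)                         # e = (I - XX†)(a - m)
    if np.linalg.norm(e) > 1e-12:                    # first case of Eq. (6)
        h = e.conj().T / (e.conj().T @ e).item()     # h* = e†
    else:                                            # second case of Eq. (6)
        g = X_pinv.conj().T @ (X_pinv @ d)           # X†* X† (a - m)
        h = (n - 1) * g.conj().T / (n + (n - 1) * (d.conj().T @ g).item())
    top = X_pinv - (X_pinv @ d) @ h - ones @ h / (n - 1)
    return mu_new, X_new, np.vstack([top, h])        # Eq. (5)
```

Each step involves only matrix-vector products with X and X†, hence the O(mn) cost per update.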

Table 1: Computational time (seconds) and errors (Frobenius norm) of the SVD method and our method for the column-centered data matrix, when the n-th column is inserted (upper half) and then randomly chosen columns are deleted back down to n columns (lower half).

| Method | n | time | ‖XX†X − X‖F | ‖X†XX† − X†‖F | ‖(XX†)∗ − XX†‖F | ‖(X†X)∗ − X†X‖F |
|---|---|---|---|---|---|---|
| SVD method | 50 | 0.0398 | 3.39E-13 | 9.62E-16 | 2.09E-14 | 2.00E-14 |
| our method | 50 | 0.0031 | 4.94E-14 | 1.56E-16 | 3.47E-15 | 1.33E-15 |
| SVD method | 100 | 0.1152 | 6.06E-13 | 1.76E-15 | 3.77E-14 | 3.52E-14 |
| our method | 100 | 0.0075 | 8.38E-14 | 2.73E-16 | 6.83E-15 | 3.20E-15 |
| SVD method | 200 | 0.3728 | 1.07E-12 | 3.28E-15 | 6.68E-14 | 6.36E-14 |
| our method | 200 | 0.0162 | 1.75E-13 | 6.04E-16 | 1.25E-14 | 9.06E-15 |
| SVD method | 400 | 1.5537 | 1.95E-12 | 6.81E-15 | 1.27E-13 | 1.22E-13 |
| our method | 400 | 0.0374 | 4.33E-13 | 1.89E-15 | 2.59E-14 | 3.54E-14 |
| SVD method | 600 | 4.3321 | 2.80E-12 | 1.14E-14 | 1.95E-13 | 1.85E-13 |
| our method | 600 | 0.0655 | 8.41E-13 | 5.17E-15 | 4.29E-14 | 1.08E-13 |
| SVD method | 800 | 9.9805 | 3.44E-12 | 1.89E-14 | 2.82E-13 | 2.58E-13 |
| our method | 800 | 0.0971 | 1.61E-12 | 1.77E-14 | 6.88E-14 | 3.71E-13 |
| SVD method | 800 | 9.9764 | 3.43E-12 | 1.89E-14 | 2.82E-13 | 2.58E-13 |
| our method | 800 | 0.0962 | 1.60E-12 | 1.76E-14 | 6.93E-14 | 3.70E-13 |
| SVD method | 600 | 4.2745 | 2.91E-12 | 1.23E-14 | 2.03E-13 | 1.94E-13 |
| our method | 600 | 0.0649 | 9.98E-13 | 5.43E-15 | 1.10E-13 | 1.26E-13 |
| SVD method | 400 | 1.5303 | 1.97E-12 | 6.74E-15 | 1.26E-13 | 1.21E-13 |
| our method | 400 | 0.0358 | 6.74E-13 | 2.51E-15 | 1.04E-13 | 5.46E-14 |
| SVD method | 200 | 0.3589 | 1.07E-12 | 3.26E-15 | 6.60E-14 | 6.28E-14 |
| our method | 200 | 0.0160 | 3.70E-13 | 1.19E-15 | 8.18E-14 | 2.69E-14 |
| SVD method | 100 | 0.1076 | 5.89E-13 | 1.70E-15 | 3.59E-14 | 3.32E-14 |
| our method | 100 | 0.0073 | 2.04E-13 | 6.37E-16 | 6.07E-14 | 1.43E-14 |
| SVD method | 50 | 0.0412 | 3.34E-13 | 9.64E-16 | 2.07E-14 | 1.97E-14 |
| our method | 50 | 0.0032 | 1.09E-13 | 3.34E-16 | 4.36E-14 | 7.30E-15 |

3.2. Deleting the first column

Let X̃ = [x, X̂] ∈ C^{m×n}, X̃† = [l∗; U] ∈ C^{n×m}, and let m̃ be the column mean of the original matrix corresponding to X̃. When the first column x is deleted, the mean vector is first updated according to m = m̃ − (1/(n−1))x. Then the centered data matrix is updated: after the deletion of x, X̂ should be re-centered according to

    X = X̂ + (1/(n−1)) x1∗.    (7)

Then X† is calculated according to Theorem 4.

Theorem 4. Let X, U, x and l be defined as above. Define θ = 1 − (n/(n−1)) l∗x. Then

    X† = U + (1/θ)((n/(n−1)) Uxl∗ + (1/(n−1)) 1l∗)       if θ ≠ 0,
    X† = U − (1/(l∗l)) Ull∗                              if θ = 0.    (8)

The detailed proof of these updating formulae can be found in Appendix 6.4.
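A matching NumPy sketch of the Theorem 4 downdate, under the same assumptions as before (our function name and zero-test tolerance):

```python
import numpy as np

def centered_delete_first(Xt_pinv, Xt, mu, n):
    """Downdate (mean, centered matrix, its generalized inverse) when the first
    column is deleted; n is the column count before the deletion."""
    x, Xhat = Xt[:, 0:1], Xt[:, 1:]       # X~ = [x, X^]
    l = Xt_pinv[0:1, :].conj().T          # l* is the first row of X~†
    U = Xt_pinv[1:, :]
    mu_new = mu.reshape(-1, 1) - x / (n - 1)          # updated mean
    ones = np.ones((n - 1, 1))
    X_new = Xhat + x @ ones.T / (n - 1)               # Eq. (7)
    theta = 1.0 - n / (n - 1) * (l.conj().T @ x).item()
    if abs(theta) > 1e-12:                            # first case of Eq. (8)
        N = U + (n / (n - 1) * (U @ x) @ l.conj().T
                 + ones @ l.conj().T / (n - 1)) / theta
    else:                                             # second case of Eq. (8)
        N = U - (U @ l) @ l.conj().T / (l.conj().T @ l).item()
    return mu_new, X_new, N
```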

4. Experiments

In this experiment, we compare the accuracy and efficiency of our method against the SVD method (Golub and Loan (1996)) for computing the generalized inverse of a centered matrix. The results are obtained by running MATLAB (version R2008a) code on a PC with an Intel Core 2 Duo P8600 2.4 GHz CPU and 2 GB RAM.

We generate a synthetic matrix of size m = 1000 and n = 800 whose entries are random numbers in [−1, 1]. Rank deficiency is produced by randomly choosing 10% of the columns and replacing them with other (duplicate) columns of the matrix. We start with a matrix X composed of the first column of the generated matrix A, then sequentially insert each column of A into X. After all the columns of A have been inserted, we inversely delete one randomly chosen column of X at each step until X is empty.
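For illustration, the insertion phase of this protocol might look as follows in NumPy (our sketch; `centered_append` is the hypothetical routine sketched after Theorem 3, and the paper's actual experiments were run in MATLAB):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 1000, 800
A = rng.uniform(-1.0, 1.0, size=(m, n))
# Rank deficiency: replace 10% of the columns with copies of other columns.
idx = rng.choice(n, size=n // 10, replace=False)
A[:, idx] = A[:, rng.integers(0, n, size=n // 10)]

# Start from the first column alone: its centered version is the zero vector,
# whose generalized inverse is the zero row.
mu, X, X_pinv = A[:, 0:1].copy(), np.zeros((m, 1)), np.zeros((1, m))
for k in range(1, n):                                # insertion phase
    mu, X, X_pinv = centered_append(X_pinv, X, mu, A[:, k], k + 1)
```

The deletion phase is analogous, using the Theorem 4 downdate together with a column permutation (Eq. (15), Section 6.5) when the deleted column is not the first one.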

At each step, the accuracy of the algorithms is measured via the error matrices corresponding to the four properties characterizing the generalized inverse: XX†X − X, X†XX† − X†, (XX†)∗ − XX† and (X†X)∗ − X†X. The process is repeated ten times and the averaged values are reported.

Table 1 reports the running time (in seconds) and the four errors at selected steps. From Table 1, we can see that the computational error of our method is below 10^{−12} in all cases and is often very close to that of the SVD method. Moreover, the computational time of our method is significantly lower than that of the SVD method, especially when the matrix is large. We conclude that our method is a robust and efficient tool for online computation of the generalized inverse of a centered matrix.

5. Conclusions

Recently, it has been shown (Sun et al. (2009)) that, under a mild condition, a class of generalized eigenvalue problems in machine learning can be formulated as least-squares problems by using a specific class indicator matrix. This class of problems includes Linear Discriminant Analysis (LDA), Canonical Correlation Analysis (CCA), Partial Least Squares (PLS), Hypergraph Spectral Learning (HSL), etc. In particular, these equivalent least-squares algorithms all require the data matrix to be centered. Therefore, a main contribution of our method is that it directly makes this class of problems suitable for online learning.

References

Cline, R.E., 1964. Representations for the generalized inverse of a partitioned matrix. Journal of SIAM 12, 588–600.
Golub, G.H., Loan, C.F.V., 1996. Matrix Computations. The Johns Hopkins University Press, Baltimore, MD. 3rd edition.
Gutman, I., Xiao, W., 2004. Generalized inverse of the Laplacian matrix and some applications. Bulletin de l'Académie Serbe des Sciences et des Arts (Cl. Math. Natur.) 129, 15–23.
Israel, A.B., Greville, T.N.E., 2003. Generalized Inverses: Theory and Applications. Springer, New York, NY. 2nd edition.
Korn, F., Labrinidis, A., Kotidis, Y., Faloutsos, C., 2000. Quantifiable data mining using ratio rules. The VLDB Journal 8, 254–266.
Liu, L.P., Jiang, Y., Zhou, Z.H., 2009. Least square incremental linear discriminant analysis, in: Proceedings of the 9th International Conference on Data Mining (ICDM 2009), pp. 298–306.
Ng, J., Bharath, A., Kin, P., 2007. Extrapolative spatial models for detecting perceptual boundaries in colour images. International Journal of Computer Vision 73, 179–194.
Sun, L., Ji, S.W., Ye, J., 2009. A least squares formulation for a class of generalized eigenvalue problems in machine learning, in: Proceedings of the 26th International Conference on Machine Learning (ICML 2009), Morgan Kaufmann. pp. 1207–1216.

6. Appendix

In this appendix, we give the detailed proofs of Eq. (5) and Eq. (8). We begin with the definition and some useful properties of the generalized inverse.

6.1. Definition of the generalized inverse

For a matrix A ∈ C^{m×n}, the generalized inverse of A, denoted by A†, is the unique matrix M ∈ C^{n×m} satisfying the following four equations:

    AMA = A,  MAM = M,  (AM)∗ = AM,  (MA)∗ = MA.    (9)
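These four residuals are exactly the error matrices used in the experiments of Section 4, so they can double as a numerical check. A minimal NumPy sketch (ours, not from the paper):

```python
import numpy as np

def penrose_residuals(A, M):
    """Frobenius norms of the four residuals in Eq. (9); all vanish iff M = A†."""
    return (np.linalg.norm(A @ M @ A - A),
            np.linalg.norm(M @ A @ M - M),
            np.linalg.norm((A @ M).conj().T - A @ M),
            np.linalg.norm((M @ A).conj().T - M @ A))

A = np.random.default_rng(0).standard_normal((6, 4))
print(penrose_residuals(A, np.linalg.pinv(A)))   # four values near 1e-15
```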

6.2. Some useful properties of the generalized inverse

Various useful properties of the generalized inverse are summarized in Israel and Greville (2003). Here we list in Lemma 5 the properties that will be used subsequently.

Lemma 5. Some useful properties of the generalized inverse:

    A†† = A,  A†∗ = A∗†,    (10)

    (AA∗)† = A∗†A†,  (A∗AA∗)† = A∗†A†A∗†,    (11)

    A† = A∗(AA∗)† = (A∗A)†A∗,    (12)

    R(A) = R(AA†) = N(I − AA†),    (13)

    AA† and I − AA† are both Hermitian and idempotent,    (14)

    (PAQ)† = Q∗A†P∗, if P and Q are unitary matrices.    (15)

In Lemma 5, the symbols A∗, R(A) and N(A) stand for the conjugate transpose, the range and the null space of A ∈ C^{m×n}, respectively.
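Property (15) is the workhorse behind the arbitrary-column extensions of Section 6.5. A quick numerical sanity check (our sketch, using QR to produce random unitary matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
P, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # random orthogonal (unitary) P
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal (unitary) Q
lhs = np.linalg.pinv(P @ A @ Q)
rhs = Q.conj().T @ np.linalg.pinv(A) @ P.conj().T  # right-hand side of Eq. (15)
print(np.linalg.norm(lhs - rhs))                   # ~1e-15
```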

6.3. The proof of Theorem 3

Proof. Let N = [X† − X†(a − m)h∗ − (1/(n−1)) 1h∗; h∗], where h is defined in Eq. (6). Since X is column centered, it follows that

    X1 = 0,  X†∗1 = [X∗(XX∗)†]∗1 = (XX∗)†∗X1 = 0.    (16)

If e = (I − XX†)(a − m) ≠ 0, then

    h∗ = e† = e∗/(e∗e) = (a − m)∗(I − XX†) / ((a − m)∗(I − XX†)(a − m)),

h∗X = 0, X†h = 0, and it follows that

    h∗(a − m) = 1 = (a − m)∗h.    (17)

In view of Eq. (16) and the definition of h, it follows that

    X̃N = XX† − XX†(a − m)h∗ + (a − m)h∗ = XX† + (e∗e)hh∗.    (18)

Thus, since XX† and hh∗ are Hermitian, (X̃N)∗ = X̃N. In view of Eq. (16), Eq. (17) and the definition of h, it follows that

    NX̃ = [X†X + (1/(n(n−1))) 11∗, −(1/n) 1; −(1/n) 1∗, (n−1)/n].    (19)

Thus, since X†X and 11∗ are Hermitian, NX̃ = (NX̃)∗. In view of Eq. (16), Eq. (17) and the definition of h, it follows that

    X̃NX̃ = (X̃N)X̃ = [X − (1/n)(a − m)1∗, ((n−1)/n)(a − m)] = X̃,    (20)

and

    NX̃N = N(X̃N) = [X† − X†(a − m)h∗ − (1/(n−1)) 1h∗; h∗] = N.    (21)

Therefore, based on Eq. (18)–(21), we have X̃† = N when e = (I − XX†)(a − m) ≠ 0.

If e = (I − XX†)(a − m) = 0, first notice that the formula for h in the second case of Eq. (6) is well defined, since the denominator of h is greater than 0 in this case. It follows from the definition of h that

    XX†h = h,  h∗XX† = h∗,  h∗X = ((n−1)/k)(a − m)∗X†∗,    (22)

where k = n + (n−1)(a − m)∗X†∗X†(a − m) denotes the denominator of h. In view of Eq. (16) and the fact that (a − m) = XX†(a − m), it follows that

    X̃N = XX† − XX†(a − m)h∗ + (a − m)h∗ = XX†.    (23)

In view of Eq. (22), it follows that

    NX̃ = [(NX̃)_{1,1}, X∗h − (1/n)h∗(a − m)1; h∗X − (1/n)h∗(a − m)1∗, ((n−1)/n)h∗(a − m)],    (24)

where (NX̃)_{1,1} = X†X − ((n−1)/k) X†(a − m)(a − m)∗X†∗ − (1/k) X†(a − m)1∗ − (1/k) 1(a − m)∗X†∗ + (1/(n(n−1))) h∗(a − m)11∗. Thus, since (NX̃)_{1,1} is Hermitian, (NX̃)∗ = NX̃. In view of the fact that (a − m) = XX†(a − m), it follows that

    X̃NX̃ = (X̃N)X̃ = [X − (1/n)(a − m)1∗, ((n−1)/n)(a − m)] = X̃,    (25)

and in view of h∗XX† = h∗, it follows that

    NX̃N = N(X̃N) = [X† − X†(a − m)h∗ − (1/(n−1)) 1h∗; h∗] = N.    (26)

Therefore, based on Eq. (23)–(26), we have X̃† = N when e = (I − XX†)(a − m) = 0.

Remark: Updating the generalized inverse of a centered matrix when a new column is appended has been studied in Liu et al. (2009), with the aim of providing a way for incremental learning of LDA. However, Liu et al. (2009) only give the updating formula for the case rank(X̃) − rank(X) = 1, which falls into the first case of Theorem 3, as shown by the following: if rank(X̃) − rank(X) = 1, then (a − m) ∉ R(X) = R(XX†) = N(I − XX†) by Eq. (13) in Lemma 5, thus e = (I − XX†)(a − m) ≠ 0.

6.4. The proof of Theorem 4

Proof. Since X̃ is column centered, in view of Eq. (16), it follows that

    X̂1 = −x,  U∗1 = −l.    (27)

According to the definition of the generalized inverse (Eq. (9)), we have:

    X̃X̃† = X̂U + xl∗ is a Hermitian matrix,    (28)

    X̃†X̃ = [l∗x, l∗X̂; Ux, UX̂] is a Hermitian matrix,    (29)

    X̃X̃†X̃ = [X̂Ux + xl∗x, X̂UX̂ + xl∗X̂] = [x, X̂],    (30)

    X̃†X̃X̃† = [l∗X̂U + l∗xl∗; UX̂U + Uxl∗] = [l∗; U].    (31)

If θ = 1 − (n/(n−1)) l∗x ≠ 0, let N = U + (1/θ)((n/(n−1)) Uxl∗ + (1/(n−1)) 1l∗). In view of Eq. (30), Eq. (31) and the definition of θ, it follows that

    X̂Ux = ((1 + (n−1)θ)/n) x,  l∗X̂U = ((1 + (n−1)θ)/n) l∗.    (32)

In view of Eq. (27) and Eq. (32), it follows that

    XN = X̂U + xl∗.    (33)

Thus, based on Eq. (28), XN is Hermitian. In view of Eq. (29), it follows that

    NX = UX̂ + (1/(θ(n−1))) Ux1∗ + (n/(θ(n−1))) Uxl∗X̂ + (1/(θ(n−1))) 1l∗X̂ + ((1−θ)/(θn(n−1))) 11∗.    (34)

Thus, since l∗X̂ = (Ux)∗ and UX̂ is Hermitian (implied by Eq. (29)), NX is Hermitian. In view of Eq. (30), it follows that

    XNX = (XN)X = X̂ + (1/(n−1)) x1∗ = X,    (35)

and in view of Eq. (31) and Eq. (32), it follows that

    NXN = N(XN) = U + (1/θ)((n/(n−1)) Uxl∗ + (1/(n−1)) 1l∗) = N.    (36)

Therefore, based on Eq. (33)–(36), we have X̃† = N when θ ≠ 0.

If θ = 1 − (n/(n−1)) l∗x = 0, let N = U − (1/(l∗l)) Ull∗. In view of Eq. (32) and θ = 0, it follows that

    X̂Ux = (1/n) x,  l∗X̂U = (1/n) l∗.    (37)

In view of Eq. (27), it follows that

    XN = X̂U − (1/(l∗l)) X̂Ull∗.    (38)

Thus, based on Eq. (28) and Eq. (37), XN is Hermitian. In view of l∗x = (n−1)/n and Eq. (37), it follows that

    NX = UX̂ − (1/(n(n−1))) 11∗.    (39)

Thus, since UX̂ is Hermitian (implied by Eq. (29)), NX is Hermitian. In view of Eq. (30), it follows that

    XNX = X̂ + (1/(n−1)) x1∗ = X,    (40)

and in view of Eq. (31) and Eq. (37), it follows that

    NXN = U − (1/(l∗l)) Ull∗ = N.    (41)

Therefore, based on Eq. (38)–(41), we have X̃† = N when θ = 0.

6.5. Extension to insertion and deletion of any column

Extension to insertion or deletion of any column can be obtained via the identity (AQ)† = Q∗A†, where Q is a unitary matrix. For example, based on Lemma 2, we provide in Lemma 6 below the updating formula for the generalized inverse when an arbitrary column of the original data matrix is deleted. The extension to insertion, and to the centered case, is similar.

Lemma 6. Let Â = [L, a, K] ∈ C^{m×n}, where a ∈ C^{m×1}, L ∈ C^{m×(k−1)}, K ∈ C^{m×(n−k)} and 1 ≤ k ≤ n, and let Â† = [E; h∗; F] ∈ C^{n×m}, where h ∈ C^{m×1}, E ∈ C^{(k−1)×m} and F ∈ C^{(n−k)×m}. Denote A = [L, K] and G = [E; F]. Then the generalized inverse of A is given by

    A† = G + (1/(1 − h∗a)) Gah∗       if 1 − h∗a ≠ 0,
    A† = G − (1/(h∗h)) Ghh∗           if 1 − h∗a = 0.    (42)

Proof. Define the matrix Q = [A, a] = [L, K, a]. Since Q and Â differ only in the order of their columns, there is a unitary permutation matrix P such that Q = ÂP. Then we have Q† = P∗Â† by Eq. (15) in Lemma 5. Now P as a right multiplier permutes the columns of Â, and P∗ as a left multiplier permutes the rows of Â† in the same order, so Q† = [G; h∗]. Then it follows from Eq. (3) that A† can be written in the form of Eq. (42).
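Combining Lemma 6 with Eq. (15) gives a direct recipe for deleting an arbitrary column; a minimal NumPy sketch (our function name and zero-test tolerance):

```python
import numpy as np

def delete_any_column(Ah_pinv, a, k):
    """Pseudoinverse of A after deleting column k (0-based) of Â (Lemma 6)."""
    a = a.reshape(-1, 1)
    h = Ah_pinv[k:k + 1, :].conj().T                # h* is row k of Â†
    G = np.delete(Ah_pinv, k, axis=0)               # G = [E; F]
    lam = 1.0 - (h.conj().T @ a).item()             # 1 - h*a
    if abs(lam) > 1e-12:                            # first case of Eq. (42)
        return G + (G @ a) @ h.conj().T / lam
    return G - (G @ h) @ h.conj().T / (h.conj().T @ h).item()
```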
