LECTURE 4: LINEAR ALGEBRA • We wish to solve a linear system of equations. For now assume an equal number of equations and variables, N. E.g. N=4:

Ax=b: the formal solution is x=A⁻¹b, where A is an N×N matrix. Explicit matrix inversion is very slow.
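A minimal sketch in numpy (the matrix and right-hand side below are made-up examples): numpy.linalg.solve solves the system directly and should be preferred over forming A⁻¹ explicitly.

```python
import numpy as np

# a made-up 4x4 system Ax = b (N = 4 as in the example above)
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)

x = np.linalg.solve(A, b)      # solves the system directly (preferred)
x_inv = np.linalg.inv(A) @ b   # formal solution x = A^-1 b: slower and less accurate

print(np.allclose(x, x_inv))   # the two agree here, but solve is the better habit
```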

Gaussian elimination • Multiply any row of A by any constant, and do the same to b • Take a linear combination of two rows, adding or subtracting them, and do the same to b • We keep performing these operations until all elements of the first column are set to 0 except the 1st one • Then we repeat the same to set to 0 all elements of the 2nd column except the first 2… • We end up with an upper triangular matrix with 1 on the diagonal

Backsubstitution

"Back" just means we start at the bottom row and move up. A combined sketch of elimination and backsubstitution is given after the pivoting slide below.

Pivoting • What if the leading (pivot) element is 0? Swap the rows (partial pivoting) or both rows and columns (full pivoting)! In practice we simply pick the row with the largest element in magnitude, which also improves numerical stability.
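A minimal sketch combining the three steps above (Gaussian elimination with partial pivoting, then backsubstitution); this is for illustration only, in practice use a library routine such as numpy.linalg.solve.

```python
import numpy as np

def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting, then backsubstitution.
    Illustrative only; in practice use numpy.linalg.solve."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    N = len(b)
    for k in range(N):
        # partial pivoting: bring the row with the largest |A[i, k]| to row k
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        # normalize the pivot row so the diagonal element becomes 1
        b[k] /= A[k, k]
        A[k] /= A[k, k]
        # subtract multiples of the pivot row to zero out column k below the diagonal
        for i in range(k + 1, N):
            b[i] -= A[i, k] * b[k]
            A[i] -= A[i, k] * A[k]
    # backsubstitution: start at the bottom row and move up
    x = np.zeros(N)
    for i in range(N - 1, -1, -1):
        x[i] = b[i] - A[i, i + 1:] @ x[i + 1:]
    return x

# quick check against the library solver on a made-up system
A0 = np.array([[2., 1., 1.], [4., -6., 0.], [-2., 7., 2.]])
b0 = np.array([5., -2., 9.])
print(np.allclose(gauss_solve(A0, b0), np.linalg.solve(A0, b0)))
```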

LU decomposition • We want to solve Ax=b for varying b, so we'd like a decomposition of A that is done once and can then be applied to several b • Suppose we can write A=LU, where L is lower triangular and U is upper triangular: L(Ux)=b • Then we first solve Ly=b for y using forward substitution, followed by Ux=y using backward substitution • Operation count for the decomposition is O(N³); each subsequent solve costs only O(N²)

We have N(N+1)/2 components for L and N(N−1)/2 for U because Uᵢᵢ=1. So in total N², the same as A
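A sketch using scipy (assumed available alongside numpy): lu_factor does the O(N³) decomposition once, and lu_solve reuses it for each new right-hand side.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
N = 500
A = rng.standard_normal((N, N))

lu, piv = lu_factor(A)             # O(N^3) decomposition, done once

for _ in range(10):                # each additional right-hand side is only O(N^2)
    b = rng.standard_normal(N)
    x = lu_solve((lu, piv), b)
    assert np.allclose(A @ x, b)
```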

What if the matrix A is symmetric and positive definite? • Aᵢⱼ=Aⱼᵢ, hence we can set U=Lᵀ, so the LU decomposition becomes A=LLᵀ • A symmetric matrix has N(N+1)/2 independent elements, the same as L • This is the fastest way to solve such a system and is called the Cholesky decomposition (still O(N³)) • Pivoting is needed for LU, while Cholesky is stable even without pivoting • In Python use numpy.linalg.solve (or numpy.linalg.cholesky) • L can be viewed as a square root of A, but this square root is not unique
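A sketch of Cholesky in practice (the matrix below is constructed to be symmetric positive definite for the sake of the example):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(2)
N = 300
M = rng.standard_normal((N, N))
A = M @ M.T + N * np.eye(N)        # symmetric positive definite by construction
b = rng.standard_normal(N)

L = np.linalg.cholesky(A)          # A = L L^T with L lower triangular
print(np.allclose(L @ L.T, A))

c, low = cho_factor(A)             # factor once ...
x = cho_solve((c, low), b)         # ... then solve for any right-hand side
print(np.allclose(A @ x, b))
```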

Inverse and determinant • To get A⁻¹, solve AX=I column by column with LU (or use inv in numpy.linalg) • det A = L₀₀L₁₁L₂₂… (note that Uᵢᵢ=1), times (−1) raised to the number of row swaps from pivoting • Better to compute ln det A = ln L₀₀ + ln L₁₁ + … to avoid overflow or underflow
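A sketch using numpy: slogdet returns the sign and the logarithm of the determinant, which stays finite even when det A itself would overflow.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 50))

sign, logdet = np.linalg.slogdet(A)            # sign of det A and ln|det A|
print(sign, logdet)                            # works in log space, so no overflow
print(np.isclose(sign * np.exp(logdet), np.linalg.det(A)))

Ainv = np.linalg.inv(A)                        # inverse via an internal solve of AX = I
print(np.allclose(A @ Ainv, np.eye(50)))
```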

Tridiagonal and banded matrices

Solved with Gaussian elimination and backsubstitution restricted to the nonzero band: O(N) operations instead of O(N³), and O(N) storage instead of O(N²)

Same approach can be used for banded matrices
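A sketch using scipy.linalg.solve_banded, which stores only the nonzero diagonals and solves a tridiagonal system in O(N):

```python
import numpy as np
from scipy.linalg import solve_banded

N = 1000
main = 2.0 * np.ones(N)        # main diagonal
off = -1.0 * np.ones(N - 1)    # sub- and super-diagonals

# banded storage: row 0 = super-diagonal, row 1 = main diagonal, row 2 = sub-diagonal
ab = np.zeros((3, N))
ab[0, 1:] = off
ab[1, :] = main
ab[2, :-1] = off

b = np.ones(N)
x = solve_banded((1, 1), ab, b)    # (1, 1) = one lower and one upper diagonal

# check against the dense O(N^3), O(N^2)-storage solve
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
print(np.allclose(A @ x, b))
```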

General sparse matrices: allow solutions that scale much better than O(N³)

General advice: be aware that such matrices admit much faster solutions; look for specialized software (see Numerical Recipes, Press et al., chapter 2.7)
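One example of such specialized software is scipy.sparse; a sketch of a sparse direct solve that exploits the sparsity pattern instead of treating A as dense:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

N = 100_000                        # far beyond what a dense O(N^3) solve could handle
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(N, N), format="csc")
b = np.ones(N)

x = spsolve(A, b)                  # sparse direct solve
print(np.allclose(A @ x, b))
```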

Sherman-Morrison formula

• If we have a matrix A that we can solve fast (e.g. tridiagonal) and we add a rank-1 component uvᵀ, then we can get (A + uvᵀ)⁻¹ with an extra 3N² operations: (A + uvᵀ)⁻¹ = A⁻¹ − (A⁻¹u)(vᵀA⁻¹)/(1 + vᵀA⁻¹u)
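A sketch of this update as a solver, assuming we already have some fast way to solve systems in A (here numpy.linalg.solve just stands in for that fast solver):

```python
import numpy as np

def sherman_morrison_solve(solve_A, u, v, b):
    """Solve (A + u v^T) x = b using only solves with A.
    solve_A : callable returning A^-1 y for a given vector y (the 'fast' solver)."""
    y = solve_A(b)                           # A^-1 b
    z = solve_A(u)                           # A^-1 u
    return y - z * (v @ y) / (1.0 + v @ z)   # Sherman-Morrison correction

# made-up example: generic A plus a rank-1 update u v^T
rng = np.random.default_rng(4)
A = rng.standard_normal((50, 50))
u, v, b = rng.standard_normal((3, 50))

x = sherman_morrison_solve(lambda y: np.linalg.solve(A, y), u, v, b)
print(np.allclose((A + np.outer(u, v)) @ x, b))
```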

Example: cyclic tridiagonal systems

This happens for finite-difference discretizations of differential equations with periodic boundary conditions

We write the cyclic matrix as A + uvᵀ, where A is tridiagonal and the rank-1 term uvᵀ carries the two corner elements coming from the periodic boundary
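A sketch of this case, using the usual bookkeeping choice γ = −b₀ for the rank-1 vectors (α and β denote the two corner elements); the banded solver from above provides the fast O(N) solves:

```python
import numpy as np
from scipy.linalg import solve_banded

def cyclic_tridiag_solve(lower, diag, upper, alpha, beta, rhs):
    """Solve a cyclic tridiagonal system with corner elements alpha = A[0, -1]
    and beta = A[-1, 0], via Sherman-Morrison plus an O(N) banded solve."""
    N = len(diag)
    gamma = -diag[0]                     # conventional choice for the rank-1 split

    # ordinary tridiagonal matrix A': corners removed, first/last diagonal adjusted
    d = diag.astype(float).copy()
    d[0] -= gamma
    d[-1] -= alpha * beta / gamma
    ab = np.zeros((3, N))
    ab[0, 1:] = upper
    ab[1, :] = d
    ab[2, :-1] = lower

    # rank-1 correction: cyclic matrix = A' + u v^T
    u = np.zeros(N); u[0] = gamma; u[-1] = beta
    v = np.zeros(N); v[0] = 1.0;   v[-1] = alpha / gamma

    y = solve_banded((1, 1), ab, rhs)    # A'^-1 rhs
    z = solve_banded((1, 1), ab, u)      # A'^-1 u
    return y - z * (v @ y) / (1.0 + v @ z)

# made-up example: diagonally dominant cyclic tridiagonal matrix
N = 8
x = cyclic_tridiag_solve(-np.ones(N - 1), 3.0 * np.ones(N), -np.ones(N - 1),
                         alpha=-1.0, beta=-1.0, rhs=np.ones(N))
A = np.diag(3.0 * np.ones(N)) + np.diag(-np.ones(N - 1), 1) + np.diag(-np.ones(N - 1), -1)
A[0, -1] = A[-1, 0] = -1.0
print(np.allclose(A @ x, np.ones(N)))
```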

Generalization: Woodbury formula • Successive application of Sherman-Morrison to a rank-P correction, with P ≪ N • U and V are now N×P matrices • Proof is the same as for Sherman-Morrison
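A sketch of the Woodbury identity, (A + UVᵀ)⁻¹b = A⁻¹b − A⁻¹U (I + VᵀA⁻¹U)⁻¹ VᵀA⁻¹b, which needs only solves with A plus one small P×P system:

```python
import numpy as np

def woodbury_solve(solve_A, U, V, b):
    """Solve (A + U V^T) x = b with U, V of shape (N, P), P << N."""
    P = U.shape[1]
    y = solve_A(b)                        # A^-1 b
    Z = solve_A(U)                        # A^-1 U, shape (N, P)
    small = np.eye(P) + V.T @ Z           # the small P x P "capacitance" matrix
    return y - Z @ np.linalg.solve(small, V.T @ y)

rng = np.random.default_rng(5)
N, P = 200, 3
A = rng.standard_normal((N, N))
U, V = rng.standard_normal((2, N, P))
b = rng.standard_normal(N)

x = woodbury_solve(lambda y: np.linalg.solve(A, y), U, V, b)
print(np.allclose((A + U @ V.T) @ x, b))
```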

Inversion by partitioning • Sometimes we can decompose the matrix into block sub-matrices P (p×p), S (s×s), Q (p×s) and R (s×p); the blocks of the inverse can then be expressed through inverses of the smaller blocks

Vandermonde and Toeplitz matrices • Can be solved in O(N²) operations

• Vandermonde: Aᵢⱼ = xᵢʲ (powers of a set of points xᵢ), as arises in polynomial interpolation and fitting

• Toeplitz: constant along each diagonal (elements depend only on i−j), as arises in convolutions and stationary correlations
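A sketch using scipy.linalg.solve_toeplitz, which uses a Levinson-type recursion to exploit the Toeplitz structure and solve in O(N²):

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

N = 500
c = 0.5 ** np.arange(N)        # first column: an exponentially decaying "correlation"
b = np.ones(N)

x = solve_toeplitz(c, b)       # symmetric Toeplitz: only the first column is needed
print(np.allclose(toeplitz(c) @ x, b))
```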

Summary: linear algebra allows us to solve linear systems of equations, compute the inverse and determinant of a matrix… The O(N³) scaling is very steep: we cannot go much above N = 10⁴–10⁵. For larger dimensions iterative methods are needed: these will be discussed when we discuss optimization

Literature • Numerical Recipes, Press et al., Ch. 2, 11 (http://apps.nrbook.com/c/index.html) • Computational Physics, Newman, Ch. 6
