LINEAR ALGEBRA
Week 2
1.7 Linear Independence
1.8 Introduction to Linear Transformations
1.9 The Matrix of a Linear Transformation

Neil V. Budko

December 15, 2016

1.7 Linear Independence

- Linearly dependent and linearly independent vectors
- Linear dependence and solutions of homogeneous systems
- When is a set of vectors linearly dependent/independent?

Definition of linear independence

DEFINITION: A set of vectors {v1, v2, . . . , vp} in R^n is linearly independent if the vector equation

x1 v1 + x2 v2 + · · · + xp vp = 0

has only the trivial solution (all weights zero: x1 = x2 = · · · = xp = 0). The set {v1, v2, . . . , vp} in R^n is linearly dependent if there exist weights c1, c2, . . . , cp, not all equal to zero, such that

c1 v1 + c2 v2 + · · · + cp vp = 0

Connection to homogeneous equations

A set of vectors {v1, v2, . . . , vp} in R^n is linearly independent if the homogeneous matrix equation

Ax = 0, where A = [v1 v2 . . . vp],

has only the trivial solution x = 0. The set {v1, v2, . . . , vp} in R^n is linearly dependent if the above homogeneous matrix equation has a nontrivial solution x ≠ 0.
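This criterion is easy to check numerically: stack the vectors as the columns of A and test whether A has full column rank. A minimal sketch in Python, using the vectors of Example 1.7.1 below (NumPy's floating-point rank test stands in for exact row reduction):

```python
import numpy as np

# The vectors of Example 1.7.1, stacked as columns of A = [v1 v2 v3]
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([4.0, 5.0, 6.0])
v3 = np.array([2.0, 1.0, 0.0])
A = np.column_stack([v1, v2, v3])

# The set is linearly independent iff Ax = 0 has only the trivial
# solution, i.e. iff rank(A) equals the number of columns.
rank = np.linalg.matrix_rank(A)
print("independent" if rank == A.shape[1] else "dependent")  # -> dependent
```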

Example 1.7.1

Let
\[
v_1 = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \quad
v_2 = \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix}, \quad
v_3 = \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix}
\]
(a) Are these vectors linearly independent?
(b) If they are not, show one linear dependence relation between them.

SOLUTION: (a) Write down the augmented matrix [v1 v2 v3 0] of the equivalent homogeneous matrix equation and row-reduce it:
\[
\left[\begin{array}{ccc|c} 1 & 4 & 2 & 0 \\ 2 & 5 & 1 & 0 \\ 3 & 6 & 0 & 0 \end{array}\right] \sim
\left[\begin{array}{ccc|c} 1 & 4 & 2 & 0 \\ 0 & -3 & -3 & 0 \\ 0 & -6 & -6 & 0 \end{array}\right] \sim
\left[\begin{array}{ccc|c} 1 & 4 & 2 & 0 \\ 0 & -3 & -3 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]
\]

Example 1.7.1 (continued)

Clearly x3 is a free variable. Hence, the homogeneous system Ax = 0 has a nontrivial solution x ≠ 0 for each nonzero value of x3.

(b) To show a linear dependence relation, perform the complete row-reduction of the augmented matrix:
\[
\left[\begin{array}{ccc|c} 1 & 4 & 2 & 0 \\ 0 & -3 & -3 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right] \sim
\left[\begin{array}{ccc|c} 1 & 4 & 2 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right] \sim
\left[\begin{array}{ccc|c} 1 & 0 & -2 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]
\]
Now we see that x1 = 2x3, x2 = −x3, and x3 is free. Choose any value for x3, say x3 = 5. Then x1 = 10 and x2 = −5. These are possible values of the weights in a linear dependence relation between v1, v2, and v3. Hence, we can write this relation down as:

10v1 − 5v2 + 5v3 = 0
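A quick numerical check of this relation, as an illustrative sketch (SymPy's exact row reduction reproduces the reduced form computed above):

```python
import numpy as np
from sympy import Matrix

v1, v2, v3 = np.array([1, 2, 3]), np.array([4, 5, 6]), np.array([2, 1, 0])

# The dependence relation found above: 10*v1 - 5*v2 + 5*v3 should vanish.
print(10 * v1 - 5 * v2 + 5 * v3)  # -> [0 0 0]

# Exact reduced row echelon form of the augmented matrix [v1 v2 v3 | 0]
aug = Matrix([[1, 4, 2, 0], [2, 5, 1, 0], [3, 6, 0, 0]])
print(aug.rref())  # pivots in columns 0 and 1; x3 is free
```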

Linear independence of matrix columns

The columns of a matrix A are linearly independent if and only if the homogeneous equation Ax = 0 has only the trivial solution x = 0. (If there exists a nontrivial solution x ≠ 0, then the columns of A are linearly dependent.)

Example 1.7.2

Determine if the columns of the matrix A are linearly independent.
\[
A = \begin{bmatrix} 0 & 1 & 4 \\ 1 & 2 & -1 \\ 5 & 8 & 0 \end{bmatrix}
\]

SOLUTION:
\[
\left[\begin{array}{ccc|c} 0 & 1 & 4 & 0 \\ 1 & 2 & -1 & 0 \\ 5 & 8 & 0 & 0 \end{array}\right] \sim
\left[\begin{array}{ccc|c} 1 & 2 & -1 & 0 \\ 0 & 1 & 4 & 0 \\ 5 & 8 & 0 & 0 \end{array}\right] \sim
\left[\begin{array}{ccc|c} 1 & 2 & -1 & 0 \\ 0 & 1 & 4 & 0 \\ 0 & 0 & 13 & 0 \end{array}\right]
\]
There are no free variables, and the only possible solution is x = 0. Hence, the columns are linearly independent.
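The same conclusion as a sketch in Python: the rank equals the number of columns, so only the trivial solution exists.

```python
import numpy as np

A = np.array([[0.0, 1.0, 4.0],
              [1.0, 2.0, -1.0],
              [5.0, 8.0, 0.0]])

# Full column rank <=> columns linearly independent <=> Ax = 0 only for x = 0
print(np.linalg.matrix_rank(A) == A.shape[1])  # -> True
```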

Seeing linear dependence by inspection

A set of one vector {v} is linearly dependent if and only if v = 0, since in that case cv = 0 for any c ≠ 0. A set of two vectors {v1, v2} is linearly dependent if and only if one of the vectors is a scalar multiple of the other. For example, let v2 = av1; then the linear dependence relation is:

c1 v1 + c2 v2 = c1 v1 + c2 a v1 = (c1 + c2 a) v1 = 0

Hence, we can always find c1 and c2, not both equal to zero, such that c1 + c2 a = 0: pick any c2 ≠ 0 and set c1 = −c2 a.

Example 1.7.3

Consider the following sets for linear dependence:
\[
\text{(a) } v_1 = \begin{bmatrix} 3 \\ 1 \end{bmatrix}, \; v_2 = \begin{bmatrix} 6 \\ 2 \end{bmatrix}; \qquad
\text{(b) } v_1 = \begin{bmatrix} 3 \\ 2 \end{bmatrix}, \; v_2 = \begin{bmatrix} 6 \\ 2 \end{bmatrix}.
\]

SOLUTION: These are sets of two vectors, so we may use the inspection technique. (a) Clearly v2 = 2v1, and the set is linearly dependent. (b) There is no constant a such that v2 = av1. Hence, the set is linearly independent.

Sets of two or more vectors

Theorem. A set S = {v1, . . . , vp} of two or more vectors is linearly dependent if and only if at least one of these vectors is a linear combination of the other vectors. In fact, if S is linearly dependent and v1 ≠ 0, then there is a vector vj in this set which is a linear combination of the preceding vectors v1, v2, . . . , vj−1. Remark. There may be vectors in S that are not linear combinations of the preceding vectors. There may also be vectors in S that are not linear combinations of other vectors (from S) at all.

Linear dependence and the size of the set

Theorem. If the set S = {v1, . . . , vp} in R^n contains more vectors than there are entries in each vector, i.e., if p > n, then the set S is linearly dependent.
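A sketch illustrating the theorem: a matrix with more columns than rows has rank at most n < p, so some column must be a combination of the others. The random vectors here are purely an illustration, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))  # p = 4 vectors in R^3, one per column

# rank(A) <= min(3, 4) = 3 < 4, so the columns are necessarily dependent
print(np.linalg.matrix_rank(A) < A.shape[1])  # -> True
```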

Linear dependence and the zero vector

Theorem. Every set containing the zero vector 0 is linearly dependent.

Example 1.7.6

Consider the following sets for linear dependence:
\[
\text{(a) } \begin{bmatrix} 1 \\ 7 \\ 6 \end{bmatrix}, \begin{bmatrix} 2 \\ 0 \\ 9 \end{bmatrix}, \begin{bmatrix} 3 \\ 1 \\ 5 \end{bmatrix}, \begin{bmatrix} 4 \\ 1 \\ 8 \end{bmatrix} \qquad
\text{(b) } \begin{bmatrix} 2 \\ 3 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ 8 \end{bmatrix} \qquad
\text{(c) } \begin{bmatrix} -2 \\ 4 \\ 6 \\ 10 \end{bmatrix}, \begin{bmatrix} 3 \\ -6 \\ -9 \\ 15 \end{bmatrix}
\]

SOLUTION: (a) This is a set of 4 vectors, each of length 3. It is linearly dependent since 4 > 3. (b) This set is linearly dependent because it contains the zero vector. (c) This is a linearly independent set of two vectors, since they are not scalar multiples of each other.

1.8 Introduction to Linear Transformations

- Transformations, their domain, codomain, and range
- Analysis of matrix transformations
- Linear transformations

Domain, codomain, and range

1. A transformation T : R^n → R^m from R^n to R^m is a rule T that assigns to each vector x ∈ R^n a vector T(x) ∈ R^m.
2. The set R^n is called the domain of the transformation T.
3. The set R^m is called the codomain of the transformation T.
4. For a given vector x ∈ R^n, the vector T(x) ∈ R^m is called the image of x under the action of T. The set of all images T(x) is called the range of T.

Fact: The range of T can be smaller than the codomain of T .

Matrix multiplication as a transformation

Ax = b and Au = 0:
\[
\begin{bmatrix} 4 & -3 & 1 & 3 \\ 2 & 0 & 5 & 1 \end{bmatrix}
\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}
= \begin{bmatrix} 5 \\ 8 \end{bmatrix}
\quad\text{and}\quad
\begin{bmatrix} 4 & -3 & 1 & 3 \\ 2 & 0 & 5 & 1 \end{bmatrix}
\begin{bmatrix} 1 \\ 4 \\ -1 \\ 3 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix}
\]

Matrix transformations

Let x ∈ R^n and let T(x) be computed as Ax, where A ∈ R^{m×n}. Then:
- The domain is R^n
- The codomain is R^m
- The range is the subset of R^m consisting of all linear combinations of the columns of A, i.e., the span of the columns of A
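A minimal sketch of a matrix transformation in Python, using the matrix A and the vector u from Example 1.8.1 below:

```python
import numpy as np

A = np.array([[1.0, -3.0],
              [3.0, 5.0],
              [-1.0, 7.0]])  # A in R^{3x2}: T maps R^2 (domain) into R^3 (codomain)

def T(x):
    # The matrix transformation T(x) = Ax
    return A @ x

u = np.array([2.0, -1.0])
print(T(u))  # image of u: a vector in R^3, here [ 5.  1. -9.]
```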

Example 1.8.1 (analysis of matrix transformations)

Let
\[
A = \begin{bmatrix} 1 & -3 \\ 3 & 5 \\ -1 & 7 \end{bmatrix}, \quad
u = \begin{bmatrix} 2 \\ -1 \end{bmatrix}, \quad
b = \begin{bmatrix} 3 \\ 2 \\ -5 \end{bmatrix}, \quad
c = \begin{bmatrix} 3 \\ 2 \\ 5 \end{bmatrix},
\]
and define the transformation T : R^2 → R^3 as T(x) = Ax.
(a) Find T(u), the image of u under the transformation T.
(b) Find a vector x whose image under T is b.
(c) Is there more than one vector whose image under T is b?
(d) Determine if c is in the range of T.

Example 1.8.1 (continued)

(a) To find the image of u, simply compute Au:
\[
Au = \begin{bmatrix} 1 & -3 \\ 3 & 5 \\ -1 & 7 \end{bmatrix} \begin{bmatrix} 2 \\ -1 \end{bmatrix} = \begin{bmatrix} 5 \\ 1 \\ -9 \end{bmatrix}
\]
(b) To find a pre-image of b, solve Ax = b for x:
\[
\begin{bmatrix} 1 & -3 \\ 3 & 5 \\ -1 & 7 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 3 \\ 2 \\ -5 \end{bmatrix}
\]
Row-reduction of the augmented matrix:
\[
\left[\begin{array}{cc|c} 1 & -3 & 3 \\ 3 & 5 & 2 \\ -1 & 7 & -5 \end{array}\right] \sim
\left[\begin{array}{cc|c} 1 & -3 & 3 \\ 0 & 14 & -7 \\ 0 & 4 & -2 \end{array}\right] \sim
\left[\begin{array}{cc|c} 1 & -3 & 3 \\ 0 & 14 & -7 \\ 0 & 0 & 0 \end{array}\right]
\]

Example 1.8.1 (continued)

Row-reduction continued:
\[
\left[\begin{array}{cc|c} 1 & -3 & 3 \\ 0 & 1 & -0.5 \\ 0 & 0 & 0 \end{array}\right] \sim
\left[\begin{array}{cc|c} 1 & 0 & 1.5 \\ 0 & 1 & -0.5 \\ 0 & 0 & 0 \end{array}\right]
\]
The solution is the vector
\[
x = \begin{bmatrix} 1.5 \\ -0.5 \end{bmatrix}
\]
(c) This pre-image vector is unique (since there are no free variables). Hence, there is no other x whose image is b.

Example 1.8.1 (continued)

(d) To determine whether c is in the range of T, row-reduce the matrix A augmented by the vector c:
\[
\left[\begin{array}{cc|c} 1 & -3 & 3 \\ 3 & 5 & 2 \\ -1 & 7 & 5 \end{array}\right] \sim
\left[\begin{array}{cc|c} 1 & -3 & 3 \\ 0 & 14 & -7 \\ 0 & 4 & 8 \end{array}\right] \sim
\left[\begin{array}{cc|c} 1 & -3 & 3 \\ 0 & 14 & -7 \\ 0 & 0 & 10 \end{array}\right]
\]
There is a pivot in the right-hand-side column (last row). Hence, the system is inconsistent, i.e., there is no x such that Ax = c. Therefore, c is not in the span of the columns of A, i.e., not in the range of T.
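A sketch of parts (b) and (d) in Python, using SymPy's exact row reduction (the same computation as above):

```python
from sympy import Matrix

A = Matrix([[1, -3], [3, 5], [-1, 7]])
b = Matrix([3, 2, -5])
c = Matrix([3, 2, 5])

# (b) consistent: rref of [A | b] has no pivot in the augmented column
print(A.row_join(b).rref())   # solution x = (3/2, -1/2)

# (d) inconsistent: a pivot appears in the augmented column,
#     so c is not in the range of T
print(A.row_join(c).rref())
```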

Example 1.8.2 (projection)

The following matrix-vector product defines the projection of points in R^3 onto the (x1, x2)-plane:
\[
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= \begin{bmatrix} x_1 \\ x_2 \\ 0 \end{bmatrix}
\]
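A sketch of the projection in code (the test point is an arbitrary illustration, not from the text):

```python
import numpy as np

P = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 0]])  # projects onto the (x1, x2)-plane

x = np.array([2, -3, 7])   # arbitrary point in R^3
print(P @ x)               # -> [ 2 -3  0]: the x3-component is dropped
```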

Example 1.8.3 (shear)

This matrix-vector product is a shear transformation in the (x1, x2)-plane:
\[
\begin{bmatrix} 1 & 3 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= \begin{bmatrix} x_1 + 3x_2 \\ x_2 \end{bmatrix}
\]

Check the transformation of each corner.
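A sketch of that check, assuming the sheared figure is the unit square with corners (0,0), (1,0), (1,1), (0,1) (the slide's actual figure is not reproduced here):

```python
import numpy as np

S = np.array([[1, 3],
              [0, 1]])  # the shear matrix

# Corners of the unit square, one per column (an assumed stand-in for the figure)
corners = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 1]])
print(S @ corners)
# -> [[0 1 4 3]
#     [0 0 1 1]]: the top edge slides 3 units to the right
```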

General linear transformations

Definition. A transformation T is linear if:
1. T(u + v) = T(u) + T(v) for all u and v in the domain of T.
2. T(cu) = cT(u) for all scalars c and all u in the domain of T.

Fact. Every matrix transformation is a linear transformation:

A(u + v) = Au + Av,   A(cu) = cAu

Beware of transformations that look linear

Consider the affine transformation T(x) = Ax + b. Is it linear? It sure looks linear. Wait, checking...

T(u + v) = A(u + v) + b = Au + Av + b

while

T(u) + T(v) = Au + b + Av + b = Au + Av + 2b

No, it is not (unless b = 0)!
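A numerical sketch of this failure, with arbitrary illustrative values for A, b, u, and v:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([1.0, 1.0])
T = lambda x: A @ x + b  # affine, not linear (for b != 0)

u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(T(u + v))        # Au + Av + b
print(T(u) + T(v))     # Au + Av + 2b: the two differ by b
```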

Superposition principle

If T is a linear transformation, then:
1. T(0) = 0.
2. T(cu + dv) = cT(u) + dT(v) for all scalars c and d, and all u and v in the domain of T.

The general superposition principle:

T(c1 v1 + c2 v2 + · · · + cp vp) = c1 T(v1) + c2 T(v2) + · · · + cp T(vp)

Given a system that performs a linear transformation (a linear system), a linear combination of inputs results in the same linear combination of the corresponding outputs (same weights).

Contraction and dilation

Multiplication by a scalar r is a linear transformation T(x) = rx (prove it). It is called a contraction if 0 ≤ r < 1, and a dilation if r > 1.

Example 1.8.5 (rotation)

This is an example of a counterclockwise rotation by π/2 radians in the (x1, x2)-plane:
\[
\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= \begin{bmatrix} -x_2 \\ x_1 \end{bmatrix}
\]

1.9 The Matrix of a Linear Transformation

- The standard matrix of a linear transformation
- Geometric linear transformations on R^2
- Existence and uniqueness questions

The identity matrix

The identity matrix In is an n-by-n matrix with ones on the main diagonal. For example, the 3-by-3 identity matrix I3 is
\[
I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\]
The main property of In is

In x = x, for all x ∈ R^n

The columns of the identity matrix are denoted by ei; e.g., in R^3,
\[
e_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad
e_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad
e_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
\]
Hence, every x ∈ R^n can be represented as:

x = In x = x1 e1 + x2 e2 + · · · + xn en.

The standard matrix of a linear T(x)

Theorem. Let T : R^n → R^m be a linear transformation. Then there exists a unique matrix A such that T(x) = Ax for all x ∈ R^n. In fact, A is the m-by-n matrix whose j-th column is the vector T(ej) ∈ R^m, obtained by acting with T on the vector ej (the j-th column of the n-by-n identity matrix):

A = [T(e1) T(e2) . . . T(en)]

The standard matrix - Proof

The claim is that every linear transformation T(x) between R^n and R^m is a matrix transformation Ax with a special kind of m-by-n matrix A. Indeed:
\[
\begin{aligned}
T(x) &= T(I_n x) = T(x_1 e_1 + x_2 e_2 + \cdots + x_n e_n) \\
&= T(x_1 e_1) + T(x_2 e_2) + \cdots + T(x_n e_n) \\
&= x_1 T(e_1) + x_2 T(e_2) + \cdots + x_n T(e_n) \\
&= [T(e_1)\; T(e_2)\; \ldots\; T(e_n)] \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = Ax
\end{aligned}
\]
Fact: Every (discrete) linear transformation is a matrix transformation and vice versa. The columns of the standard matrix can be obtained by acting with T on the columns of the unit matrix.
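This construction translates directly into code. A sketch, assuming T is given only as a black-box linear function; the example T below (the shear from Example 1.8.3) is chosen for illustration:

```python
import numpy as np

def standard_matrix(T, n):
    # Column j of A is T(e_j), where e_j is the j-th column of I_n
    I = np.eye(n)
    return np.column_stack([T(I[:, j]) for j in range(n)])

# An illustrative linear T : R^2 -> R^2, given as a plain function
T = lambda x: np.array([x[0] + 3 * x[1], x[1]])
print(standard_matrix(T, 2))
# -> [[1. 3.]
#     [0. 1.]]
```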

Example 1.9.2 (finding the standard matrix)

Find the standard matrix of the dilation transformation T(x) = 3x for x ∈ R^2.

SOLUTION: We compute the columns of A by acting with T on the columns of the unit matrix I2:
\[
T(e_1) = 3e_1 = 3\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 3 \\ 0 \end{bmatrix}, \quad
T(e_2) = 3e_2 = 3\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 3 \end{bmatrix}
\]
\[
A = \begin{bmatrix} 3 & 0 \\ 0 & 3 \end{bmatrix}
\]

Example 1.9.3 (standard matrix of the rotation)

Find the standard matrix of the transformation T(x) that rotates x ∈ R^2 by ϕ radians in the positive direction.

SOLUTION: It can be proven geometrically that this is a linear transformation. Hence, we only need to consider the action of T on the columns of the unit matrix in R^2. These columns are the basis vectors e1 = i and e2 = j of the 2D Cartesian system. Hence, from elementary geometry:
\[
T(e_1) = \begin{bmatrix} \text{x-component of the rotated } e_1 \\ \text{y-component of the rotated } e_1 \end{bmatrix} = \begin{bmatrix} \cos\varphi \\ \sin\varphi \end{bmatrix}
\]
\[
T(e_2) = \begin{bmatrix} \text{x-component of the rotated } e_2 \\ \text{y-component of the rotated } e_2 \end{bmatrix} = \begin{bmatrix} -\sin\varphi \\ \cos\varphi \end{bmatrix}
\]

Example 1.9.3 (continued)

The standard matrix of this rotation transformation is:
\[
A = \begin{bmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{bmatrix}
\]
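A sketch checking the rotation matrix numerically for ϕ = π/2 (compare with Example 1.8.5):

```python
import numpy as np

phi = np.pi / 2
A = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

e1 = np.array([1.0, 0.0])
print(np.round(A @ e1, 12))  # -> [0. 1.]: e1 rotates onto e2, as expected
```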

Geometrical transformations on R^2

[A sequence of figures illustrating geometrical transformations on R^2 appeared on these slides; the images are not reproduced in this text version.]

Onto transformations

Definition. A mapping T : R^n → R^m is said to be onto R^m if each b in R^m is the image of at least one x in R^n. How to check: the transformation is onto if its standard matrix has a pivot in every row.

One-to-one transformations

Definition. A mapping T : R^n → R^m is said to be one-to-one if each b in R^m is the image of at most one x in R^n.

“At most one” is tricky! If some b is not in the range of T, then it is not an image of any x, but it is still “an image of at most one x”. Think about it. How to check: the transformation is one-to-one if its standard matrix has a pivot in every column.

Example 1.9.4 (onto and one-to-one stuff)

Given the standard matrix
\[
A = \begin{bmatrix} 1 & -4 & 8 & 1 \\ 0 & 2 & -1 & 3 \\ 0 & 0 & 0 & 5 \end{bmatrix}
\]
is the transformation defined by this matrix onto? Is it one-to-one?

SOLUTION: The transformation maps R^4 to R^3. A is already in echelon form. Pivots are present in all rows. Hence, Ax = b is consistent, i.e., there exists a solution x ∈ R^4 for each b ∈ R^3 (each b is an image of some x). Thus, T is onto. However, not all columns have pivots (there is one free variable). Hence, the solution of Ax = b is not unique (more than one x is mapped to the same b). Thus, T is not one-to-one.
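The pivot counting can be delegated to SymPy's rref, as a sketch:

```python
from sympy import Matrix

A = Matrix([[1, -4, 8, 1],
            [0, 2, -1, 3],
            [0, 0, 0, 5]])

_, pivots = A.rref()  # rref returns (reduced matrix, tuple of pivot columns)
m, n = A.shape
print("onto:", len(pivots) == m)        # pivot in every row -> True
print("one-to-one:", len(pivots) == n)  # pivot in every column -> False
```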

One-to-one and the homogeneous equation

Theorem. A linear transformation T : R^n → R^m is one-to-one if and only if the equation T(x) = 0 has only the trivial solution.

Onto and one-to-one for the standard matrix

Theorem. Let A be the standard matrix of the linear transformation T : R^n → R^m. Then:
- T is onto R^m if and only if the columns of A span R^m
- T is one-to-one if and only if the columns of A are linearly independent

To analyze a linear transformation:
1. Check that T : R^n → R^m is indeed a linear transformation
2. Get the standard matrix A of T by acting with T on the columns of In
3. To check the onto property: get the echelon form of A and check if each row has a pivot
4. To check the one-to-one property: inspect the columns of A, or get the echelon form of A and check if each column has a pivot

Example 1.9.5 (weird notation)

Let T be defined as:

T(x1, x2) = (3x1 + x2, 5x1 + 7x2, x1 + 3x2)

where the input vector is in R^2 with components x1 and x2, and the output vector (see the commas inside the parentheses) is in R^3. Is T onto? Is it one-to-one?

SOLUTION: Deducing the standard matrix by inspection:
\[
T(x) = \begin{bmatrix} 3x_1 + x_2 \\ 5x_1 + 7x_2 \\ x_1 + 3x_2 \end{bmatrix}
= \begin{bmatrix} ? & ? \\ ? & ? \\ ? & ? \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= \begin{bmatrix} 3 & 1 \\ 5 & 7 \\ 1 & 3 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
\]

Example 1.9.5 (continued)

So the standard matrix is
\[
A = \begin{bmatrix} 3 & 1 \\ 5 & 7 \\ 1 & 3 \end{bmatrix}
\]
Applying the Theorem:
(a) T : R^2 → R^3 is onto R^3 if and only if the columns of A span R^3. This is not possible, since we only have two columns, so there are at most two pivots, while we need three. So T is not onto R^3.
(b) T : R^2 → R^3 is one-to-one if and only if the columns of A are linearly independent. Since we have two columns, we only need to check whether they are scalar multiples of each other. They are not. Hence, they are linearly independent, and T is one-to-one.
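The same conclusions as a sketch in code:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [5.0, 7.0],
              [1.0, 3.0]])

rank = np.linalg.matrix_rank(A)
m, n = A.shape
print("onto:", rank == m)        # -> False: two columns cannot span R^3
print("one-to-one:", rank == n)  # -> True: the columns are independent
```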
